Comment on The Absolute Minimum Every Software Developer Must Know About Unicode in 2023 (Still No Excuses!)
abhibeckert@lemmy.world 1 year ago
Check out this comparison of four programming languages:
Python 3:
len("🤦🏼‍♂️")
5
JavaScript / Java / C#:
"🤦🏼‍♂️".length
7
Rust:
println!("{}", "🤦🏼‍♂️".len());
17
Swift:
print("🤦🏼‍♂️".count)
1
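For reference, a minimal Rust sketch that reproduces three of those numbers by counting the same string in different units (the emoji is spelled out as escapes, since the ZWJ sequence is easy to mangle in transit):

fn main() {
    // 🤦🏼‍♂️ = FACE PALM + skin tone + ZWJ + MALE SIGN + variation selector
    let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}";

    // Unicode scalar values -- what Python 3's len() reports: 5
    println!("code points:  {}", s.chars().count());
    // UTF-16 code units -- what .length reports in JavaScript/Java/C#: 7
    println!("UTF-16 units: {}", s.encode_utf16().count());
    // UTF-8 bytes -- what Rust's str::len() reports: 17
    println!("UTF-8 bytes:  {}", s.len());
}

Swift's count of 1 is the grapheme cluster count, which needs the segmentation rules discussed further down the thread.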
Walnut356@programming.dev 1 year ago
That depends on your definition of correct lmao. Rust's len() explicitly counts the raw UTF-8 bytes contained in the string, not scalar values or graphemes. There are many times where that value is more useful than the grapheme count.
Black616Angel@feddit.de 1 year ago
And Rust also has "🤦".chars().count(), which returns 1.
I would rather argue that Rust should not have a simple len function for strings at all, but since a str is just a byte slice, it works that way.
Also, the documentation for len clearly states:
"This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string."
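A small sketch of that distinction, using nothing beyond std:

fn main() {
    let s = "🤦";
    // str::len() is exactly the length of the underlying byte slice
    assert_eq!(s.len(), s.as_bytes().len()); // 4 UTF-8 bytes for U+1F926
    // chars() iterates Unicode scalar values instead
    assert_eq!(s.chars().count(), 1);
}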
lemmyvore@feddit.nl 1 year ago
None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they’re there for legacy reasons.
Knusper@feddit.de 1 year ago
That Rust function returns the number of code points, not the number of graphemes, and the code point count is rarely the one you actually want. You need to use a facepalm emoji with skin tone modifiers to see the difference.
The way to get a proper grapheme count in Rust is via an external library, e.g. crates.io/crates/unicode-segmentation
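A rough sketch of what that looks like, assuming the unicode-segmentation crate is added as a dependency (graphemes(true) requests extended grapheme clusters):

use unicode_segmentation::UnicodeSegmentation;

fn main() {
    // Same facepalm sequence as above, spelled out as escapes
    let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}";
    // Extended grapheme clusters -- what Swift's .count reports: 1
    println!("graphemes: {}", s.graphemes(true).count());
}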
Djehngo@lemmy.world 1 year ago
Makes sense. The code point split is stable, which means it's fine to put in the standard library; the grapheme split changes every year, so that volatility is probably better off in a crate.
Knusper@feddit.de 1 year ago
Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it’s also kind of mad to put something like this into a stdlib.
Its behaviour would change with each new Unicode standard, and you'd have to upgrade the whole stdlib to keep up to date with the newest Unicode standards.
ono@lemmy.ca 1 year ago
It might make more sense to expose a standard library API for Unicode data provided by (and updated with) the operating system. Something like the time zone database.