Eyron
@Eyron@lemmy.world
- Comment on Microsoft finally officially confirms it's killing Windows Control Panel sometime soon 2 months ago:
That many steps? WindowsKey+Break > Change computer name.
If you’re okay with three steps, on Windows 10 and newer, you can right-click the start menu and generally open System. Just about any version supports right-clicking “My Computer” or “This PC” and selecting Properties, as well.
- Comment on Google pulls the plug on uBlock Origin, leaving over 30 million Chrome users susceptible to intrusive ads 2 months ago:
Do you remember the Internet Explorer days? This, unfortunately, is still much better.
Pretty good reason to switch to Firefox now. Nearly everything will work, unlike in the Internet Explorer days.
- Comment on Why a kilobyte is 1000 and not 1024 bytes 10 months ago:
To me, your attempt at defending it or calling it a retcon is an awkward characterization. Even in your last reply, you’re now calling it an approximation. Dividing by 1024 is an approximation? Did computers have trouble dividing by 1000? Did it lead to a benefit in the 640KB/384KB split of the conventional memory model? Does it lead to a benefit today?
Somehow, every other computer measurement avoids this binary prefix problem. Some, like you, seem to defend it as the more practical choice compared to the “standard” choice every other unit uses (e.g., the 1.536 Mbps T1 payload or “54” Mbps 802.11g).
The confusion this continues to cause does waste quite a bit of time and money today. Vendors continue to show both units on the same spec sheets (open up a page to buy a computer/server). News outlets still report the differences as bloat. Customers still complain to customer support, which goes up to management, and back down to project management and development. It’d be one thing if this didn’t waste time or cause confusion, but we’re still doing it today. It’s long past time to move on.
The standard for “kilo” was 1000, centuries before computer science existed. Things that need binary units have an option to use (kibi/mebi/gibi), but it’s probably not needed: even in computer science. Calling kilo/kibi a retcon just seems like a way to defend the 1024 usage today, despite the fact that nearly nothing else (even in computers) uses the binary prefixes.
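To put numbers on that confusion, here’s a quick sketch (plain Python, not from any standard) of how far the decimal and binary readings of the same byte count drift apart, and why a drive sold as “1TB” shows up smaller in tools that silently divide by 1024:

```python
# Decimal (SI) vs. binary (IEC) readings of the same byte count.
def si_units(n_bytes: int, power: int) -> float:
    return n_bytes / 1000**power   # k, M, G, T...

def iec_units(n_bytes: int, power: int) -> float:
    return n_bytes / 1024**power   # Ki, Mi, Gi, Ti...

one_tb = 10**12  # a drive sold as "1 TB"
print(f"{si_units(one_tb, 4):.3f} TB")    # 1.000 TB
print(f"{iec_units(one_tb, 4):.3f} TiB")  # 0.909 TiB

# The mismatch grows with each prefix:
for power, prefix in enumerate(["kilo", "mega", "giga", "tera"], start=1):
    gap = (1024**power / 1000**power - 1) * 100
    print(f"{prefix}: {gap:.1f}% difference")  # 2.4%, 4.9%, 7.4%, 10.0%
```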
- Comment on Why a kilobyte is 1000 and not 1024 bytes 10 months ago:
209GB? That probably doesn’t include all of the RAM: like in the SSD, GPU, NIC, and similar. Ironically, I’d probably approximate it to 200GB if that were the standard, but it isn’t. It wouldn’t be much of a downgrade to go from 192GiB to 200GB. Are 192 and 209 that different? It’s not much different from remembering the numbers for a 1.44MB floppy, 1.544Mbps T1 lines, or the ~3.14159 approximation of pi. Numbers generally end up getting weird: trying to keep them in binary prefixes doesn’t really change that.
The definition of kilo being “1000” was standard before computer science existed. If software used it in a non-standard way, that may have been common or a decent approximation at the time, but it wasn’t standard. Does that justify the situation today, where many vendors show both definitions on the same page, like when buying a computer or a server? Does that justify the development time and confusion from people still not understanding the difference? Was it worth the PR reaction from Samsung to, yet again, point out the difference?
It’d be one thing if this confusion had stopped years ago and everyone understood the difference today, but it hasn’t, and we’re probably not going to get there. We have binary prefixes; it’s long past time to use them when appropriate-- but even appropriate uses are far fewer than they appear: it’s not like you have a practical 640KiB/2GiB limit per program anymore. Even in the cases where you do: is it worth confusing millions/billions on consumer spec sheets?
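For the specific figures here, the conversion is one line (assuming the 192GiB refers to main RAM alone):

```python
# 192 GiB expressed in decimal gigabytes:
bytes_total = 192 * 2**30
print(bytes_total / 10**9)  # ~206.2 GB -- close to both 200 and 209
```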
- Comment on Why a kilobyte is 1000 and not 1024 bytes 10 months ago:
This is all explained in the post we’re commenting on. The standard “kilo” prefix, from the metric system, predates modern computing and even the definition of a byte: 1700s vs 1900s. It seems very odd to make the argument that the older definition is the one trying to retcon.
The binary usage in software was and is common, but it’s definitely more recent, and it causes a lot of confusion because it doesn’t match the older and bigger standard. Computers are very good at numbers; they never should have tried to hijack an existing prefix, especially one already defined by international standards. One might argue that the US hadn’t really adopted the metric system at that point of development, but the usage of 1,000 to define the kilo is clearly older than the usage of 1,024 to define the kilobyte. The main new (last 100 years) thing here is that 1,024 bytes is a kibibyte.
Kibi is the retcon. Not kilo.
- Comment on Why a kilobyte is 1000 and not 1024 bytes 10 months ago:
How do you define a retcon? Were kilograms 1024 grams, too? When did that change? It seems kilo has meant 1000 since metric was created in the 1700s, long before any binary prefix.
From the looks of it, software vendors were trying to retcon the definition of “kilo” to be 1024.
- Comment on Why a kilobyte is 1000 and not 1024 bytes 10 months ago:
Only recent in some computers, which used a non-standard definition. The kilo prefix has meant 1000 since at least 1795-- which predates just about any kilobyte.
- Comment on 8GB RAM on M3 MacBook Pro 'Analogous to 16GB' on PCs, Claims Apple 11 months ago:
tl;dr
The memory bandwidth isn’t magic or special; it’s generally meaningless. MT/s matter more, but Apple’s non-magic is generally higher than the industry standard in compact form factors.
Long version:
How are such wrong numbers so widely upvoted? The 6400Mbps figure is per pin.
Generally, DDR5 has a 64-bit data bus. The standard names also indicate the speeds per module: PC5-32000 transfers 32GB/s with 64-bits at 4000MT/s, and PC5-64000 transfers 64GB/s with 64-bits at 8000MT/s. With those speeds, it isn’t hard for a DDR5 desktop or server to reach similar bandwidth.
Apple doubles the data bus from 64-bits to 128-bits (which is still nothing compared to something like an RTX 4090, with a 384-bit data bus). With that, Apple can get 102.4GB/s with just one module instead of the standard 51.2GB/s. The cited 800GB/s is with 8: most comparable hardware does not allow 8 memory modules.
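As a sanity check, every figure above is just bus width times transfer rate; a quick sketch using the numbers already cited:

```python
# Peak bandwidth (GB/s) = bus width in bytes x transfer rate in MT/s / 1000
def bandwidth_gb_s(bus_bits: int, mt_per_s: int) -> float:
    return (bus_bits / 8) * mt_per_s / 1000

print(bandwidth_gb_s(64, 4000))       # 32.0  -> PC5-32000
print(bandwidth_gb_s(64, 8000))       # 64.0  -> PC5-64000
print(bandwidth_gb_s(64, 6400))       # 51.2  -> a standard DDR5-6400 module
print(bandwidth_gb_s(128, 6400))      # 102.4 -> Apple's 128-bit bus
print(bandwidth_gb_s(128, 6400) * 8)  # 819.2 -> the cited ~800GB/s, with 8
```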
Ironically, the memory bandwidth is pretty much irrelevant compared to the MT/s. To quote Dell defending their CAMM modules:
In a 12th-gen Intel laptop using two SO-DIMMs, for example, you can reach DDR5/4800 transfer speeds. But push it to a four-DIMM design, such as in a laptop with 128GB of RAM, and you have to ratchet it back to DDR5/4000 transfer speeds.
That trade-off makes it hard to balance speed, capacity, and upgradability. Even the upcoming Core Ultra 9 185H seems rated for 5600 MT/s-- after 2 years, we’re almost getting PC laptops that match the memory speed of MacBooks. This wasn’t Apple being magical, but just taking advantage of OEMs dropping the ball on how important memory can be to performance. The memory bandwidth is just the cherry on top.
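Assuming a typical dual-channel (2x 64-bit) laptop memory controller, the trade-off in Dell’s example works out roughly like this:

```python
# Two SO-DIMMs (one per channel) at 4800 MT/s vs. four DIMMs
# (two per channel) forced down to 4000 MT/s: capacity costs speed.
def total_bw_gb_s(channels: int, mt_per_s: int, bus_bits: int = 64) -> float:
    return channels * (bus_bits / 8) * mt_per_s / 1000

print(total_bw_gb_s(2, 4800))  # 76.8 GB/s -- two SO-DIMMs, 4800 MT/s
print(total_bw_gb_s(2, 4000))  # 64.0 GB/s -- four DIMMs, 4000 MT/s
```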
The standard supports these speeds and faster. To be clear, these speeds and capacities don’t do ANYTHING to support “8GB is analogous to…” statements. It won’t take magic to beat, but the PC industry doesn’t yet have much competition in the performance and form factors Apple is targeting. In the meantime, Apple is milking its customers: the M3s have the same MT/s and memory technology as two years ago. It’s almost as if they looked at the next 6-12 months and went: “They still haven’t caught up, so we don’t need anything much faster yet-- but we can make a lot of money while we wait.”
- Comment on Powerful Malware Disguised as Crypto Miner Infects 1M+ Windows, Linux PCs 1 year ago:
They describe an SSH infector, as well as a credentials scanner. To me, that sounds like it started from exploited/infected Windows computers with SSH access, and then continued from there.
With how many unencrypted SSH keys there are, how most hosts keep a list of the servers they SSH into, and how attackers can probably bypass some firewall protections once they’re inside the network: not a bad idea.
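As a self-audit, here’s a minimal sketch of exactly what such a scanner would harvest locally. The paths and the “ENCRYPTED” marker are standard OpenSSH conventions, but treat the key check as a heuristic: new-format keys hide their encryption state inside the base64 body.

```python
# List known SSH targets and flag possibly-unencrypted private keys:
# the same local data an SSH-spreading worm would look for.
from pathlib import Path

ssh_dir = Path.home() / ".ssh"

# Hosts this machine has connected to (unless HashKnownHosts is on).
known_hosts = ssh_dir / "known_hosts"
if known_hosts.exists():
    for line in known_hosts.read_text().splitlines():
        if line.strip() and not line.startswith("|"):  # "|" = hashed entry
            print("known host:", line.split()[0])

# Old-format (PEM) keys include an "ENCRYPTED" header when
# passphrase-protected; its absence is a red flag.
for key in ssh_dir.glob("id_*"):
    if key.suffix != ".pub":
        text = key.read_text(errors="ignore")
        if "ENCRYPTED" not in text:
            print("possibly unencrypted key:", key)
```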