Treczoks@lemmy.world 1 day ago
Same here. I’m waiting to see real-life calculations done by such circuits. They won’t be able to do, e.g., a simple float addition without losing/mangling a bunch of digits.
But maybe the analog precision is sufficient for AI, which is an imprecise matter from the start.
floquant@lemmy.dbzer0.com 12 hours ago
You don’t need to simulate float addition. You can sum two voltages by just connecting two wires - and that’s real-number addition.
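(The slightly more practical version of that wire trick is the textbook op-amp summing amplifier. A minimal sketch of its ideal transfer function - component values here are made up for illustration, not from any real design:

```python
# Ideal inverting summing amplifier: V_out = -Rf * (V1/R1 + V2/R2).
# With equal resistors this is just the negated sum of the inputs.

def summing_amp(v1, v2, r1=10e3, r2=10e3, rf=10e3):
    """Output of an ideal op-amp summing node, in volts."""
    return -rf * (v1 / r1 + v2 / r2)

print(summing_amp(0.3, 0.4))  # -0.7 V: the sum comes from the physics, no rounding step
```
)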
Treczoks@lemmy.world 11 hours ago
I know. My point was that this is horribly imprecise, even if their circuits are exceptionally good.
There is a reason why all other chips run digital…
floquant@lemmy.dbzer0.com 11 hours ago
How is it imprecise? It’s the same thing as taking two containers of water and pouring them into a third one: it will contain the sum of the previous two exactly. Or using gears to simulate orbits. Rounding errors are a digital thing.
Analog has its own set of issues (e.g. noise, losses, repeatability), but precision is not one of them. Arguably, the main reason digital took over is that it’s programmable and good for general computing: Turing completeness means you can do anything if you throw enough memory and time at it, while analog circuits are purpose-made.
TomasEkeli@programming.dev 1 day ago
Wouldn’t analog be a lot more precise?
Accurate, though, that’s a different story…
Limonene@lemmy.world 1 day ago
The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 1.6 × 10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most. So the maximum theoretical precision is about one part in 10^16, or roughly 53 bits. That is the same as a double-precision (64-bit) float. I believe 80-bit extended-precision floats (x87) are standard on desktop CPUs.
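A quick sanity check of that back-of-envelope arithmetic (taking “a few milliamps for a second” as 1 mA · 1 s):

```python
import math

e = 1.6e-19          # electron charge, coulombs
charge = 1e-3 * 1.0  # 1 mA for 1 s = 1 millicoulomb

electrons = charge / e  # ~6.25e15 distinguishable charge quanta
print(f"{electrons:.2e} electrons ~ {math.log2(electrons):.1f} bits")
# -> 6.25e+15 electrons ~ 52.5 bits
```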
In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren’t solving anything that can’t be done faster by digitally simulating an analog computer.
ivanafterall@lemmy.world 1 day ago
What does this mean, in practice? In what application does that precision show its benefit? Crazy math?
Treczoks@lemmy.world 1 day ago
No, it wouldn’t, because you cannot make it reproducible at that scale.
Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end, and expensive (studio equipment), you get maybe 24 bits. That is eons away from the 52-bit mantissa precision of a double float.
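To make that gap concrete in the usual audio units (about 6.02 dB of dynamic range per bit), a rough conversion:

```python
import math

def dynamic_range_db(bits):
    """Approximate dynamic range of an n-bit representation, in dB."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 52):
    print(f"{bits:2d} bits ~ {dynamic_range_db(bits):5.1f} dB")
# 16 bits ~  96.3 dB, 24 bits ~ 144.5 dB, 52 bits ~ 313.1 dB
```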
floquant@lemmy.dbzer0.com 12 hours ago
Analog audio hardware has no resolution or bit depth. An analog signal (voltage on a wire/trace) is something physical, so its exact value is only limited by the precision of the instrument you’re using to measure it. It’s when you sample it into a digital system that it gains those properties. You have this the wrong way around. Digital audio (sampling of any analog/“real” signal) will always be an approximation of the real thing, by nature, no matter how many bits you throw at it.
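(A toy illustration of that “gains those properties” point: the moment you quantize a value to n bits, it picks up an error floor that the underlying analog quantity never had. A minimal sketch:

```python
import math

def quantize(x, bits):
    """Snap x (assumed in [-1, 1]) to the nearest n-bit level."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

x = math.sin(1.0)  # stand-in for some "analog" value
for bits in (8, 16, 24):
    print(f"{bits:2d} bits: error ~ {abs(quantize(x, bits) - x):.1e}")
```
)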
fcalva@cyberplace.social 1 day ago
@Treczoks @flemtone Thing is, the final LLM inference is usually done at reduced precision: 8-16 bits typically, but even 4 bits or lower, with different layers at varying precision.
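For context, the simplest form of the reduced-precision trick described here is symmetric per-tensor int8 quantization; real LLM schemes (per-channel scales, 4-bit groups, mixed precision) are more elaborate, but the idea is the same:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by q * scale."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

w = np.random.randn(4, 4).astype(np.float32)  # toy "weight" tensor
q, scale = quantize_int8(w)
print(np.abs(q * scale - w).max())  # worst-case error, roughly scale / 2
```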