Th4tGuyII@fedia.io 4 months ago
The TL;DR for the article is that the headline isn't exactly true. At the moment their PPU can potentially double a CPU's performance - the 100x claim comes with the caveat of "further software optimisation".
Tbh, I'm sceptical of the caveat. It feels like me telling someone I can only draw a stickman right now, but I could paint the Mona Lisa with some training.
Of course that could happen, but it's not very likely to - so I'll believe it when I see it.
Having said that, they're not wrong about CPU bottlenecks and the slowed rate of CPU performance improvements - so a doubling of performance would be huge in the current market.
Clusterfck@lemmy.sdf.org 4 months ago
I get that we have to impress shareholders, but why can’t they just be honest and say it doubles CPU performance, with the chance of even further improvement through software optimization? Doubling the performance of the same hardware is still HUGE.
Zorque@lemmy.world 4 months ago
They… they did?
pop@lemmy.ml 4 months ago
I’m just glad there are companies trying to optimize current tech rather than just piling on new hardware every damn year with forced planned obsolescence.
Though the 100x claim is absurd, I think doubling the performance is NEAT.
dustyData@lemmy.world 4 months ago
This is new hardware piling too. What they claim to do requires reworking manufacturing, isn’t retrofittable to current designs, and demands more hardware components. It’s basically a hardware thread scheduler. Cool idea, but it won’t save us from planned obsolescence - if anything, it’s more incentive for more waste.
MadMadBunny@lemmy.ca 4 months ago
Ah, good ol’ magic wishful thinking…
barsquid@lemmy.world 4 months ago
Putting the claim instead of the reality in the headline is journalistic malpractice. 2x for free is still pretty great tho.
barsquid@lemmy.world 4 months ago
Just finished the article - it’s not for free at all. Chips need to be designed to use it, so I’m skeptical again. There’s no point IMO: nobody wants to put the R&D into massively parallel CPUs when they can put that effort into GPUs.
frezik@midwest.social 4 months ago
Not every problem is amenable to GPUs. If it has a lot of branching, or needs to fetch back and forth from memory a lot, GPUs don’t help.
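To illustrate the branchy, memory-bound case (my own toy example, nothing from the article): every iteration below depends on the value just loaded, so there's no bulk parallel work to hand a GPU, and the branch pattern is data-dependent and unpredictable.

```c
#include <stddef.h>

/* Toy sketch of a GPU-hostile workload: pointer chasing with
 * data-dependent branches. Each load depends on the previous one,
 * so the accesses can't be batched, and which branch we take is
 * decided by data we only just fetched. */
struct node {
    int value;
    struct node *left, *right;
};

int walk(const struct node *n) {
    int sum = 0;
    while (n != NULL) {
        sum += n->value;
        /* The next pointer to follow depends on the value just
         * loaded - serial by nature, so it stays CPU-bound. */
        n = (n->value & 1) ? n->left : n->right;
    }
    return sum;
}
```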
Now, does this thing have exactly the same limitations? I’m guessing yes, but it’s all too vague to know for sure. It sounds like they’re doing what superscalar CPUs have done for a while - on x86 that starts with the original Pentium from 1993, and Crays going back to the '60s. What are they doing to supercharge this idea?
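For anyone unfamiliar, here's a rough sketch (mine, not theirs) of the instruction-level parallelism superscalar cores already mine - the open question is what the PPU adds beyond this:

```c
#include <stddef.h>

/* The two accumulators below form independent dependency chains,
 * so a multi-issue (superscalar) core can execute both adds in the
 * same cycle - no new hardware required. */
long sum_pairs(const long *a, size_t n) {
    long even = 0, odd = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        even += a[i];     /* chain 1 */
        odd  += a[i + 1]; /* chain 2, independent of chain 1 */
    }
    return even + odd;
}
```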
Does this avoid some of the security problems that have popped up with superscalar archs? For example, kernel code running at ring 0 shares the pipeline with userspace code, and speculative execution can end up leaking data across that privilege boundary.
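For context, the best-known problems in that family are the speculative execution attacks. The classic Spectre v1 bounds-check-bypass gadget looks roughly like this (textbook sketch - the article says nothing about this PPU's speculation model):

```c
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];
size_t array1_size = 16;

/* If the branch is mispredicted, the out-of-bounds load at
 * array1[x] still runs speculatively, and its value leaks through
 * the cache footprint of the second, secret-dependent load. */
uint8_t victim(size_t x) {
    if (x < array1_size) {               /* predicted taken */
        return array2[array1[x] * 4096]; /* secret-dependent access */
    }
    return 0;
}
```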