Submitted 4 weeks ago by Alphane_Moon@lemmy.world to technology@lemmy.world
This highlights really well the importance of competition. Lack of competition results in complacency and stagnation.
It’s also why I’m incredibly worried about AMD giving up on enthusiast graphics. I have very few hopes in Intel ARC.
I expect them to merge enthusiast into the pro segment: it doesn't make sense for them to have large RDNA cards because there are too few customers, just as it doesn't make sense for them to make small CDNA cards. But in the future there's only going to be UDNA, and the high end of gaming and the low end of professional will overlap.
I very much doubt they're going to do compute-only cards, as then you're losing sales to people wanting a (maybe overly beefy) CAD or Blender or whatever workstation, just to save on some DP connectors. Segmenting the market only makes sense when you're a (quasi-)monopolist and want to abuse that situation, that is, if you're Nvidia.
True. In simple terms, AMD is moving towards versatile solutions that satisfy corporate clients and ordinary clients with the same product. Their APUs and the XDNA architecture are examples: APUs are used in the PlayStation and Xbox, while XDNA and Epyc are used in datacenters. AMD is unifying its B2B and B2C products to simplify manufacturing.
They honestly seem to be done with high-end “enthusiast” GPUs. There is probably more money/potential for iGPUs and low/middle level products optimized for laptops.
Their last few generations of flagship GPUs have been pretty underwhelming but at least they existed. I’d been hoping for a while that they’d actually come up with something to give Nvidia’s xx80 Ti/xx90 a run for their money. I wasn’t really interested in switching teams just to be capped at the equivalent performance of a xx70 for $100-200 more.
Wouldn't be the first time they've done this, though; I wouldn't be surprised if they jump back into the high end once they're ready.
I don't see this happening with both consoles using AMD. Honestly, I could see Nvidia going less hard on graphics and pushing more towards AI and related stuff, and with the leaked prices for the 5000 series they are going to price themselves out of the market.
Lack of competition results in complacency and stagnation.
This is absolutely true, but it wasn't the case with 64-bit x86. That was a very bad miscalculation, where Intel wanted a bigger, more profitable share of the server market.
Intel was extremely busy with profit maximization, so they wanted to sell Itanium for servers and keep x86 for personal computers.
The result, of course, was that 32-bit x86 couldn't compete once AMD made it 64-bit, and Itanium failed even though HP-Compaq killed the world's fastest CPU at the time, the DEC Alpha, because they wanted to jump on Itanium instead. And Itanium was frankly an awful CPU, based on an idea they couldn't get to work properly.
This was not complacency, and it was not stagnation in the sense that Intel did actually make real new products and tried to be innovative; the problem was that the product sucked and was too expensive for what it offered.
Why the Alpha was never brought back, I don't understand. As mentioned, it was AFAIK the world's fastest CPU when it was discontinued.
so they wanted to sell Itanium for servers, and keep the x86 for personal computers.
That’s still complacency. They assumed consumers would never want to run workloads capable of using more than 4 GiB of address space.
Sure, they’d already implemented physical address extension, but that just allowed the OS itself to address more memory by enlarging the page table. It didn’t increase the virtual address space available to applications.
The application didn't necessarily need to use 4 GiB of RAM to hit those limitations, either. Dylibs, memory-mapped files, thread stacks, and various paging tricks all eat up the available address space without needing to be resident in RAM.
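A quick way to see the distinction (a minimal sketch, assuming Linux/glibc; the 256 MiB chunk size and the MAP_NORESERVE flag are just illustrative choices): a process keeps requesting anonymous mappings until its virtual address space runs out. Built as a 32-bit binary (gcc -m32) it stops somewhere around 2-3 GiB no matter how much physical RAM PAE exposes to the kernel, while a 64-bit build keeps going far past 4 GiB.

```c
/* Minimal sketch (Linux/glibc assumed): grab anonymous mappings until the
 * process's virtual address space runs out. Built with -m32 this stops
 * around 2-3 GiB regardless of how much physical RAM PAE exposes to the
 * kernel; a 64-bit build keeps going far beyond 4 GiB. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    const size_t chunk = 256UL * 1024 * 1024;   /* 256 MiB per mapping */
    size_t total_mib = 0;

    for (;;) {
        /* MAP_NORESERVE: we only want address space, not physical pages. */
        void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED)
            break;
        total_mib += 256;
    }

    printf("Mapped %zu MiB of virtual address space before running out\n",
           total_mib);
    return 0;
}
```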
Mirror of Phil Park’s tweet:
Buffalox@lemmy.world 4 weeks ago
Everybody in the know knows that 64-bit x86 was held back to push Itanium. Intel was all about market segmentation, which is also why the Celeron was crippled compared to the Pentium, for instance in how much RAM it could use.
Market segmentation has a profit-maximization motive: you are not allowed to use cheap parts for things that you are supposed to buy expensive parts for. Itanium was supposed to be the only viable CPU for servers, and keeping x86 at 32 bits was part of that strategy.
That AMD succeeded with 64-bit while Itanium failed was karma well deserved by Intel.
Today it's obvious how moronic Intel's policy back then was, because even phones got 64-bit CPUs around 2013.
32 bits is simply too much of a limitation for many even fairly trivial tasks. And modern x86 chips are in fact NOT 64-bit anymore, but hybrids that routinely handle tasks with 256 bits, and some even with 512 bits.
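For concreteness, those 256-bit and 512-bit figures refer to the SIMD width of extensions like AVX and AVX-512 rather than to addressing. A minimal sketch of one 256-bit-wide operation, assuming an AVX-capable CPU and a build flag such as gcc -mavx:

```c
/* Minimal sketch of one 256-bit-wide operation via AVX intrinsics
 * (assumes an AVX-capable CPU; build with e.g. gcc -mavx). Eight 32-bit
 * floats are added at once; this SIMD data-path width is what the
 * 256/512-bit figures refer to, not the pointer/address width. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
    float out[8];

    __m256 va = _mm256_loadu_ps(a);      /* load 8 floats = 256 bits */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);   /* one 256-bit-wide addition */
    _mm256_storeu_ps(out, vc);

    for (int i = 0; i < 8; i++)
        printf("%.0f ", out[i]);
    printf("\n");
    return 0;
}
```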
When AMD came out with Ryzen Threadripper and Epyc, prices scaled very proportionally to performance and none of the parts were artificially hampered; it was such a nice breath of fresh air.
barsoap@lemm.ee 4 weeks ago
On a note of technical correctness: That’s not what the bitwidth of a CPU is about.
By your account a 386DX would be an 80-bit CPU because it could handle 80-bit floats natively, and the MOS6502 (of C64 fame) a 16-bit processor because it could add two 16-bit integers. Or maybe 32 bits because it could multiply two 16-bit numbers into a 32-bit result?
In reality the MOS6502 is considered an 8-bit CPU, and the 386 a 32-bit one. The “why” gets more complicated, though: the 6502 had a 16-bit address bus and an 8-bit data bus, the 386DX a 32-bit address and data bus, and the 386SX a 32-bit address bus with a 16-bit external data bus.
Or, differently put: Somewhere around the time of the fall of the 8 bit home computer the common understanding of “x-bit CPU” switched from data bus width to address bus width.
…as, not to make this too easy, understood by the instruction set, not the CPU itself: modern 64-bit processors use pointers that are 64 bits wide, but their address buses are usually narrower. x86_64 only requires 48 bits to be actually usable; the left-over bits must be either all ones or all zeroes (enforced by hardware to keep people from bit-hacking and causing forward-compatibility issues; IIRC the 1/0 distinguishes between user and kernel memory mappings, but it's been a while since I read the architecture manual). Addressable physical memory might be even lower, again IIRC. 2^48^ B is 256 TiB; no desktop system can fit that much, and I doubt the processors in there could address it.
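A minimal sketch of that canonical-address rule, assuming the usual 48 implemented virtual-address bits: bits 63..47 of a pointer must be all zeroes or all ones (equivalently, the value must be the sign extension of its low 48 bits), otherwise using the address faults.

```c
/* Sketch of the x86_64 "canonical address" rule, assuming 48 implemented
 * virtual-address bits: bits 63..47 must be all zeroes or all ones, i.e.
 * the pointer equals the sign extension of its low 48 bits. */
#include <stdint.h>
#include <stdio.h>

static int is_canonical_48(uint64_t addr) {
    uint64_t top = addr >> 47;           /* bits 63..47, 17 bits in total */
    return top == 0 || top == 0x1FFFF;   /* all zeroes (user half) or all ones (kernel half) */
}

int main(void) {
    uint64_t user_ptr   = 0x00007fffdeadbeefULL;  /* typical user-space address */
    uint64_t kernel_ptr = 0xffff800000000000ULL;  /* typical kernel-half address */
    uint64_t bogus_ptr  = 0x0123456789abcdefULL;  /* non-canonical, faults if dereferenced */

    printf("user: %d  kernel: %d  bogus: %d\n",
           is_canonical_48(user_ptr),
           is_canonical_48(kernel_ptr),
           is_canonical_48(bogus_ptr));
    return 0;
}
```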
Buffalox@lemmy.world 4 weeks ago
No, that's not true; it's way, way more complex than that. Some consider the data bus the best measure; another could be the decoder. I could also have rated a normal CPU's bit width by how many instructions it handles at once: each core handling up to 4 instructions per cycle could be called 256-bit, and with an average 8-core CPU that would be 2048-bit.
There are several ways to evaluate it, but most of them hover around 256 bits, and none land below 128 bits.
mox@lemmy.sdf.org 4 weeks ago
See also: ECC memory.
Wispy2891@lemmy.world 4 weeks ago
Sometimes, for some reason, there's no limit. The cheap i3-8100 can use ECC memory, for example.
frezik@midwest.social 4 weeks ago
It was also a big surprise when Intel just gave up. The industry was getting settled in for a David v Goliath battle, and then Goliath said this David kid was right.
Buffalox@lemmy.world 4 weeks ago
Yes, I absolutely thought Intel would make their own, and AMD would lose the fight.
But maybe Intel couldn't do that, because AMD had already patented it, and whatever Intel did would have been called a copy of it.
Anyway, it's great to see that AMD is finally doing well and is profitable. I just never expected Intel to fail as badly as they are. So unless they fight their way back to profitability, we may end up in the same boat we were in when Intel was alone on x86.
But then again, maybe x86 is becoming obsolete, as Arm is getting ever more competitive.
Valmond@lemmy.world 4 weeks ago
I so hated that you had to choose between virtualization and overclocking, among a lot of other forced-limitation crap from Intel.
A bit like how cheap mobile phones had too little storage, and buying one with at least a "normal" amount bumped everything else up too (camera, CPU, etc.), including the price, of course.