Quilter's AI just designed an 843‑part Linux computer that booted on the first try. Hardware will never be the same.
Submitted 3 days ago by deadymouse@lemmy.world to technology@lemmy.world
https://venturebeat.com/ai/quilters-ai-just-designed-an-843-part-linux-computer-that-booted-on-the
Comments
unmagical@lemmy.ml 3 days ago
I can’t wait for hardware companies to let go of their designers prematurely in the pursuit of AI everything, only for there to be a bug in a major board and no one available to troubleshoot it, stranding customers with a broken board, no revision on the horizon, and no recourse.
magnetosphere@fedia.io 3 days ago
I like that it’s basing its behaviors on the laws of physics. No messy human language, no opinions, slants, or agendas. AI just isn’t ready (or, possibly, will never be ready) to handle that shit. Plus, it’s not stealing work created by humans under the guise of “training”.
I assume it still relies on datacenters, which are themselves ethically questionable. Still, this seems to be the “flavor” of AI that I hate the least.
MountingSuspicion@reddthat.com 3 days ago
I may be hallucinating now, but I swear I remember, nearly a decade ago, a paper or articles about how CG PCBs were using non-standard electrical tricks to minimize space or something. The design purposefully had arcs or short circuits or something. Maybe it was a temperature thing? I did a more than cursory search and couldn’t find much, but I vividly remember having conversations about it. Anyone remember anything like that?
palordrolap@fedia.io 2 days ago
I seem to remember a story about how something - a neural net, or an early reinforcement learning experiment - ended up accidentally exploiting a physics bug in a chip to achieve a result that should have gone through the chip's expected circuitry instead.
It was specific to that one particular chip, and swapping it out for another supposedly identical chip caused the calculation, or simulation, or whatever that was running on the larger system, to fail.
That is, it wasn't supposed to be exploiting physics glitches but that's what happened.
... I think I found it. It happened all the way back in the 1990s if this story is to be believed: https://www.damninteresting.com/on-the-origin-of-circuits/
MountingSuspicion@reddthat.com 2 days ago
Yes! Thank you for the link! I can’t guarantee it, but this seems like the exact thing we had been chatting about. The age is right for it to have made the rounds while still being relevant around the time of our discussion.
GreyEyedGhost@piefed.ca 3 days ago
There was a story about a researcher using evolutionary algorithms to build more efficient systems on FPGAs. One of the weird shortcuts: some subsystem normally needed a clock circuit, but none was available, so the algorithm made a dead-end circuit that would give an electric pulse when used, yielding a makeshift clock circuit. The big problem was that the better efficiency often relied on quirks of the specific board, so his next step was to start testing the results on multiple FPGAs and using the overall fitness to get past that quirk/shortcut.
Pretty sure this was before 2010. Found a possible link from 2001.
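The multi-board fitness idea can be sketched in a few lines of Python. This is a toy, not the researcher's actual setup: the "boards" are hypothetical fitness functions where one invented quirk bit earns a bonus on a single board only, standing in for a chip-specific physics glitch. Scoring each candidate by its worst-case result across boards removes the incentive to exploit any one board's quirk.

```python
import random

random.seed(0)

GENOME_LEN = 16
TARGET = [1] * GENOME_LEN  # the "correct" configuration every board agrees on

# Each simulated "board" has one quirk bit that earns a bonus only on that
# board -- a stand-in for a physics glitch unique to one physical chip.
def make_board(quirk_bit):
    def fitness(genome):
        score = sum(g == t for g, t in zip(genome, TARGET))
        if genome[quirk_bit] == 0:  # "exploiting" the quirk beats correctness here
            score += 5
        return score
    return fitness

boards = [make_board(i) for i in range(4)]

def robust_fitness(genome):
    # Worst-case score across all boards: a trick that only works on
    # one chip can no longer win.
    return min(b(genome) for b in boards)

def evolve(fitness_fn, generations=300, pop_size=40, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness_fn, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = [[1 - g if random.random() < mut_rate else g for g in p]
                    for p in survivors]           # bit-flip mutation
        pop = survivors + children
    return max(pop, key=fitness_fn)

best = evolve(robust_fitness)
```

Evolving against a single board's fitness function instead would reward flipping that board's quirk bit, which is exactly the non-portable shortcut the story describes.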
MountingSuspicion@reddthat.com 2 days ago
Yes, thank you! My timing was wrong (I’m getting old lol), but this was the exact thing being discussed. Glad other people were able to find the info.
sem@lemmy.blahaj.zone 3 days ago
Cg = computer generated?
MountingSuspicion@reddthat.com 3 days ago
Yea. I didn’t call it AI because I’m not sure of the exact method of generation. It may have been AI or maybe some other generation method.
_cryptagion@anarchist.nexus 3 days ago
That middle step — the layout — creates a persistent bottleneck. For a board of moderate complexity, the process typically consumes four to eight weeks. For sophisticated systems like computers or automotive electronics, timelines stretch to three months or longer.
Imagine being the poor soul who connects circuits together in some CAD program for eight weeks straight. I figure I would have pulled all my hair out by the end of the first week.
A_A@lemmy.world 3 days ago
Playing games against the laws of physics means playing games against reality, which is similar to how humans develop. So, i.m.o., this approach will go way beyond fabricating computer boards.
wjrii@lemmy.world 3 days ago
This is kind of interesting and cool, and it’s not a hallucinating LLM. I’ve designed a couple of simple circuit boards, and running traces can be sort of zen, but it is tedious and would be maddening as a job. Definitely some hype levels coming from the company that give me pause, but it seems like an actual useful task for a machine learning algorithm.
chrash0@lemmy.world 3 days ago
as someone who used to work on “expert models” i’m excited that not everyone has abandoned them for “what if we just had a model that knows everything (that doesn’t exist) and costs a billion dollars to run”
givesomefucks@lemmy.world 3 days ago
Yeah…
But you know how people are already comparing vibe coding to 40k where “priests” pray to computers and hope if they do the exact same thing they’ll get the same result they want?
If we start walking down this road of even the creators not understanding why what it did was better…
Serious unintended consequences are going to be inevitable.
Like, I swear nobody knows the paperclip story anymore.
en.wikipedia.org/wiki/Instrumental_convergence
I mean, we can make a very, very solid argument that much of our current problems are caused by high-level stock trading being done by algorithms whose only instruction is “make numbers go up”.
This shit ain’t even hypothetical anymore; it’s just that instead of “make as many paperclips as possible” we told it “make more money than you did yesterday”.
Which is why we’re burning down the planet to make billionaires even more money.
jacksilver@lemmy.world 3 days ago
I was going to ask how this is different from a reinforcement learning algorithm, but then they called out DeepMind’s AlphaGo.