Uh no, the AI didn’t crack any problem.
The AI produced the same hypothesis that a scientist produced, one that the scientist considered his own original awesome idea.
But the truth is that science is less about producing awesome ideas and more about proving them. And the AI did nothing in this regard, except remind scientists that their original awesome ideas are often not so original.
There’s even a term scientists use when another scientist has the same idea but actually manages to do the work of proving it: “scooped”. It’s a very common occurrence.
dojan@lemmy.world 1 day ago
lmao right, because the support person he reached, if he even spoke to a person at all, would know and divulge the sources they train on. Dude may think all his research is private, but his institution is making use of these tech giants’ services. These tech giants have blatantly shown that they’re OK with piracy and copyright infringement to further their goals, so why would spying on research institutions be any different?
DarkCloud@lemmy.world 1 day ago
Large language model companies weren’t even aware their data (which is so large they themselves have no idea what’s in it) contained other languages.
So the models suddenly knew how to speak other languages. The above story feels like those “Large Language Models are super intelligent! They’ve taught themselves French!” stories - no, mass surveillance and corporations being above the law taught them everything they know.
A_A@lemmy.world 1 day ago
You mean like researchers have done … in here?
bturtel.substack.com/p/human-all-too-human
For AI to learn something fundamentally new - something it cannot be taught by humans - it requires exploration and ground-truth feedback.
.
www.lightningrod.ai
We’re enabling self-play that learns directly from real world feedback.
TrenchcoatFullofBats@belfry.rip 1 day ago
Later that day