Deepseek did show it could be done cheaply, but imagine if you could take their optimisations and throw more power behind it (i.e. buy a fuck ton of GPUs that the Chinese don't officially have access to)
Could work. The EU should pursue AI independence, or else it will continue its slide into irrelevance. Glad France is stepping up to the plate on this
Eyekaytee@aussie.zone 1 week ago
There's nothing to suggest that building a cheaper model that isn't as good as the top AI models is the best way to build AI models. Cheaper, yes, but Deepseek isn't the best AI model out there.
A better way would be to combine the Deepseek training optimisations with the raw power of America's/Nvidia's hardware
We're still at an early stage with AI; there's nothing to suggest we're anywhere near the end of Jevons paradox (efficiency gains driving more total usage, not less)
JustJack23@slrpnk.net 1 week ago
Are you blind? There are so many things suggesting AI is already a thing of the past.
Most importantly, there is no good use for it.
Just like Bitcoin, all companies are trying to shoehorn it into shit products and making them shittier.
The EU should not build AI but try to regulate it and protect the environment from it.
Eyekaytee@aussie.zone 1 week ago
Heya!
?? Really, like what? I must be blind; Deepseek just made GLOBAL headlines, even my own local Logan radio station mentioned it on its news the other day!
The Paris AI summit is happening right now as we speak, and you can even watch it live:
www.youtube.com/watch?v=RhOrVNAQSMs
It's being held at the Grand Palais, which is very fancy :)
If you're seeing something I'm not, feel free to let me know
Enoril@jlai.lu 1 week ago
Making headlines is not proof of quality; it's just the latest buzzword. You should be less influenced by trends and more by real results.
Btw, I've participated in this kind of summit, even as a speaker. These events are more of a marketing and lobbying tool for consulting firms than a real breakthrough event for the technology.
They did the same for the sovereign cloud years ago. Lots of money (our taxes) handed out, fancy events, fancy speeches. Concrete results: still waiting.
And yes, this kind of ML training is already showing its limitations (hence the "thing of the past" remark). Until recently, you could improve the quality of the answers by providing more training data. But now they've reached the limit, as there is no more data to give.
It's just a matter of time before the bubble explodes.
obbeel@lemmy.eco.br 1 week ago
Sometimes Claude Haiku (which has a few billion parameters) knows things that ChatGPT doesn't.
foenkyfjutschah@programming.dev 1 week ago
Both have no clue.