I am playing with it, sandboxed in an isolated environment, only interacting with a local LLM and only connected to one public service with a burner account. I haven’t even given it any personal info, not even my name.
It’s super fascinating and fun, but holy shit the danger is outrageous. On multiple occasions it’s misunderstood what I’ve asked and fucked around with its own config files and such. I’ve asked it to do something and the result was essentially suicide as it ate its own settings. I’ve only been running it for about a week but have had to wipe and rebuild twice already (I probably could have fixed it, but that’s what a sandbox is for). I can’t imagine setting it loose on anything important right now.
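Snapshotting the agent's config before each run would make those wipe-and-rebuilds cheaper. A minimal sketch, assuming the agent keeps its settings in a local directory (both paths here are hypothetical, not any real agent's layout):

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical paths -- adjust for wherever your agent actually keeps its settings.
CONFIG_DIR = Path.home() / ".my-agent"           # the live config the agent can mutate
SNAPSHOT_ROOT = Path.home() / "agent-snapshots"  # kept outside the agent's reach

def snapshot_config() -> Path:
    """Copy the live config to a timestamped snapshot before each run."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = SNAPSHOT_ROOT / stamp
    shutil.copytree(CONFIG_DIR, dest)  # creates dest (and parents) for us
    return dest

def restore_config(snapshot: Path) -> None:
    """Roll the agent back to a known-good snapshot after it eats its own settings."""
    shutil.rmtree(CONFIG_DIR, ignore_errors=True)
    shutil.copytree(snapshot, CONFIG_DIR)

if __name__ == "__main__":
    saved = snapshot_config()
    print(f"Config snapshotted to {saved}")
    # ... run the agent ...
    # restore_config(saved)  # uncomment when it nukes itself again
```

Same idea as the sandbox: keep the known-good state somewhere the agent can't write to.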
But it is undeniably cool, and watching the system communicate with the LLM has been a huge learning opportunity.
XLE@piefed.social 1 day ago
It’s nice to see articles that push back against the myth of AI superintelligence. A lot of people who brand themselves as “AI safety experts” preach this ideology as if it were guaranteed fact. I’ve never seen any of them talk about real, present-day issues with AI, though.
(The superintelligence myth is a promotion strategy; OpenAI and Anthropic both lean into it because they know it inflates their valuations.)
Imgonnatrythis@sh.itjust.works 1 day ago
Thankfully, Steinberger is the first to deny that this is AGI.