Comment on "AI is learning to lie, scheme, and threaten its creators"
PhilipTheBucket@ponder.cat 4 weeks ago
- Yes, AGI is a massive threat that people in a position to invent it are investing effectively 0 effort into mitigating.
- The current generation of "AI"s are trumped-up autocorrect; they don’t plan or scheme. I know the one about threatening to reveal the affair was set up to do precisely that, I guess to make a point or create a story.
alaphic@lemmy.world 4 weeks ago
This was honestly what I was more inclined to believe, though I also know that I don’t have enough information about the subject to have a truly informed opinion… It was my understanding, however, that despite all their grandiose claims, aren’t LLMs (at least our current models, anyway) essentially ‘ranked-choice’ dialogue trees, where the next word is determined by the statistical likelihood of a given word coming next, based on the input and the material the model was trained on? Or am I wrong?
ImplyingImplications@lemmy.ca 4 weeks ago
LLMs are essentially that: they predict the next word based on the previous words. It was noticed that the quality of a prompt has a big effect on the quality of an LLM’s output - better prompts, better answers. So why not use an LLM to generate good prompts for itself? Welcome to “reasoning” models.
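As a rough sketch of that next-word loop (assuming the Hugging Face transformers library and the small gpt2 checkpoint - both just stand-ins for any causal language model):

```python
# Toy next-token loop: a sketch, not how any particular product works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits[:, -1, :]         # scores for every candidate next token
        probs = torch.softmax(logits, dim=-1)              # scores -> probability distribution
        next_id = torch.multinomial(probs, num_samples=1)  # pick one token by its likelihood
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```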
Instead of the LLM taking the user’s prompt and generating the output directly, a reasoning model generates intermediate prompts for itself based on the user’s initial prompt and its own intermediate answers. This is called “chain of thought” (CoT), and it results in a better final output than an LLM that doesn’t use the technique.
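A minimal sketch of that scaffolding, where generate() is a hypothetical stand-in for whatever model call you have:

```python
# Two-pass chain-of-thought scaffold. `generate` is a hypothetical
# placeholder for an LLM call; the point is the shape of the loop.
def generate(prompt: str) -> str:
    raise NotImplementedError("call your model/API of choice here")

def answer_with_cot(user_prompt: str) -> str:
    # Pass 1: the model writes intermediate reasoning for itself.
    thought = generate(
        f"Question: {user_prompt}\n"
        "Think through this step by step before answering:"
    )
    # Pass 2: the model answers, conditioned on its own reasoning.
    return generate(
        f"Question: {user_prompt}\n"
        f"Reasoning so far:\n{thought}\n"
        "Using that reasoning, give the final answer:"
    )
```

The text the model writes in the first pass is exactly the chain of thought that gets inspected in the examples below.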
If you ask a reasoning LLM to convince a user to take medication that has harmful side effects, and review the chain of thought, you might see that it prompts itself to ensure the final answer doesn’t mention any negative side effects, as that would be less convincing. People are writing about how this is “lying” since the LLM is prompting itself to “hide” information even when the user hasn’t explicitly asked it to.
However, this only happens in really contrived examples where the initial prompt is essentially asking the LLM to lie without explicitly saying so.
Opinionhaver@feddit.uk 4 weeks ago
> The current generation of "AI"s are trumped-up autocorrect
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
altkey@lemmy.dbzer0.com 4 weeks ago
While these articles do create noise around nothingburgers like this one, I’m troubled that this unreliable autocorrection suite may be - and already is being - given control over other systems with little to no oversight.
Feyd@programming.dev 4 weeks ago
Exactly: to create a story. It’s marketing.