You just ruined the magic of ChatGPT for me lol. Fuck. I knew the illusion would break eventually but damn bro it’s fuckin 6 in the morning.
AbouBenAdhem@lemmy.world 2 days ago
Fun fact: LLMs that strictly generate the most predictable output are seen as boring and vacuous by human readers, so designers add a bit of randomization they call “temperature”.
It’s that unpredictable element that makes LLMs seem humanlike—not the predictable element that’s just functioning as a carrier signal.
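Rough toy of what temperature does (my own sketch with made-up logits and tokens, nothing like any vendor’s actual code):

```python
import numpy as np

# Invented scores a model might assign to four candidate next tokens.
logits = np.array([3.2, 2.9, 1.0, 0.3])
tokens = ["blue", "red", "green", "plaid"]

def sample(logits, temperature):
    # Divide by temperature before softmax: as T -> 0 this approaches
    # argmax (always the single most probable token); higher T flattens
    # the distribution so unlikely tokens sneak through.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(tokens, p=probs)

print(sample(logits, 0.1))  # nearly always "blue"
print(sample(logits, 1.5))  # occasionally "plaid"
```

Low temperature is the boring carrier signal; cranking it up is where the “humanlike” surprise comes from.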
vrighter@discuss.tchncs.de 13 hours ago
i.e. their fundamental limitation is, ironically, why they are so easy to hype
spankmonkey@lemmy.world 2 days ago
The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information that they are being advertised to be.
Yeah, humans are wrong a lot of the time, but AI that’s forced into everything should be more reliable than the average human.
rhombus@sh.itjust.works 2 days ago
That’s not it. Even without any added variability they would still be wrong all the time. The issue is inherent to LLMs; they don’t actually understand your questions or even their own responses. It’s just the most probable jumble of words that would follow the question.
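You can see it with a toy example (invented numbers, not a real model): with zero randomness, greedy decoding still just takes whichever continuation is most probable, true or not.

```python
# Hypothetical next-token distribution after some factual question.
# The numbers are made up purely to illustrate the point.
next_token_probs = {
    "1912": 0.34,     # most common in training text; suppose it's wrong here
    "1915": 0.31,
    "1913": 0.20,
    "unknown": 0.15,
}

# Temperature 0 / greedy decoding: fully deterministic, still potentially wrong.
answer = max(next_token_probs, key=next_token_probs.get)
print(answer)  # "1912", regardless of what the facts actually are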
gandalf_der_12te@discuss.tchncs.de 10 hours ago
First of all, it doesn’t matter whether you think that AI can replace human workers. It only matters whether companies think that AI can replace human workers.
Secondly, you’re assuming that humans typically understand the question at hand. You’ve clearly never met, or been, an underpaid, overworked employee who doesn’t give a flying fuck about the daily bullshit.
sxan@midwest.social 2 days ago
Is it? Is random variance the source of all hallucinations? I think not; it’s more the fact that they don’t understand what they’re generating, they’re just looking for the most statistically probable next token.
spankmonkey@lemmy.world 2 days ago
If there are 800 sentences (or whatever chunk of information it uses) about what color a ball is, going with the statistical average can result in the answer saying red when it should be blue for the current question, or it could add information about a different type of ball because it doesn’t understand what kind of ball it is talking about. It might be randomness, it might be averaging, or a combination of both.
Like if asked “what color is a basketball” and the training set includes a lot of custom color combinations from each team, it might return a combination of colors that doesn’t match any team, like brown (default leather) and yellow. This could also be the answer if you asked for an example of a basketball that matched team colors, because it might keep the default color from a ball that just has a team logo.
To someone who doesn’t know the training set, it would probably look like it made something up. To someone who does know it, it is impossible to tell if the result is random, due to a lack of understanding of what it is talking about, or if it had some other less obvious connection that combined the two and led to the brown-and-yellow result.
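A toy version of that failure mode (invented corpus, nothing to do with how LLMs are actually structured internally): if each color slot is learned as its own frequency table, you can generate a pairing that no real ball ever had.

```python
from collections import Counter
import random

# Invented corpus: (primary, secondary) color pairs actually seen together.
corpus = [("brown", "black"), ("brown", "black"), ("purple", "gold"),
          ("blue", "yellow"), ("brown", "black"), ("red", "white")]

primaries = Counter(p for p, _ in corpus)
secondaries = Counter(s for _, s in corpus)

# Sampling each slot independently from its own marginal frequencies
# can emit a combination present nowhere in the data, e.g. brown + yellow.
pair = (random.choices(list(primaries), weights=list(primaries.values()))[0],
        random.choices(list(secondaries), weights=list(secondaries.values()))[0])
print(pair)
```

Each half of the answer is statistically well supported; the combination is fiction.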
jacksilver@lemmy.world 1 day ago
Yeah, they aren’t trained to give “correct” responses, just reasonable-looking ones; they aren’t truth systems. However, I’m not sure what a truth system would even look like. At a certain point truth/fact becomes subjective, meaning we probably have a fundamental problem with how we think about and evaluate these systems.
I mean, it’s the whole reason programming languages were created: natural language is ambiguous.
sxan@midwest.social 1 day ago
Yeah, the existence of solipsism drives the point about truth home. Thing is, LLMs outright lie without knowing they’re lying, because there’s no understanding there. It’s statistics at the token level.
AI is not my field, so I don’t know, either.
masterspace@lemmy.ca 2 days ago
Things don’t have to be more reliable if they’re fast enough.
Quantum computers are inherently unreliable, but you can perform the same calculation multiple times and average the result / discard the outliers and it will still be faster than a classical computer.
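Rough sketch of that repeat-and-vote idea (a made-up noisy function standing in; real quantum error mitigation is far more involved than this):

```python
import random
from collections import Counter

def noisy_compute(x, y, error_rate=0.2):
    # Stand-in for an unreliable computation: wrong answer
    # error_rate of the time. Purely illustrative.
    result = x * y
    if random.random() < error_rate:
        result += random.choice([-1, 1])
    return result

def repeated(x, y, runs=25):
    # Run many times and keep the most common answer (majority vote).
    votes = Counter(noisy_compute(x, y) for _ in range(runs))
    return votes.most_common(1)[0][0]

print(repeated(12345, 54321))  # almost always 670592745
```

If each run is cheap enough, 25 unreliable runs can still beat one slow reliable one.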
spankmonkey@lemmy.world 2 days ago
That works for pattern matching, but you don’t want to do that for accurate calculations. There is no reason to average an AI-run calculation of 12345 x 54321, because that can be done with a tiny calculator with a solar cell the size of a pencil eraser. Doing calculations like that multiple times adds up fast and will always be less reliable than just doing it right in the first place. Same with reporting historical facts.
There is a validation step that AI doesn’t do. If you feed it 1000 posts from unreliable sources like reddit, and don’t add even more context about whether the “fact” is a joke, baseless rumor, or from a reliable source, you get the current AI.
Yes, doing multiple calculations efficiently and taking averages has a lot of uses, mainly in complex systems where it provides opportunities to test chaotic systems with wildly different starting states. There are a ton of great uses for AI!
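A toy of that ensemble idea (the logistic map standing in for any chaotic system; parameters are my own pick):

```python
import random

def logistic(x, steps=50, r=3.9):
    # Chaotic for r ~ 3.9: tiny differences in the start diverge fast.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Run many slightly perturbed copies and look at the spread of outcomes.
ensemble = [logistic(0.5 + random.uniform(-1e-6, 1e-6)) for _ in range(1000)]
mean = sum(ensemble) / len(ensemble)
spread = max(ensemble) - min(ensemble)
print(f"mean={mean:.3f} spread={spread:.3f}")  # spread covers nearly the full range
```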
But the AI that is being forced down our throats is worse than wikipedia because it averages content from ALL of reddit, facebook, and other massive sites where crackpots are given the same weight as informed individuals and there are no guardrails.
masterspace@lemmy.ca 2 days ago
I agree.
I disagree. Those are not remotely the same problem. Both in how they’re technically executed, and in what the user expects out of them.
No, it’s just different. Is it wrong sometimes? Yes. But it can also get you the right answer to a normal human question orders of magnitude faster than a series of traditional searches and documentation readings.
Does that information still need to be vetted afterwards? Yeah, but it’s a lot easier to say “copilot, I’m looking at a crossover circuit and I’ve got one giant wire coil, three white ceramic rectangles and a capacitor, what is each of them doing and what kind of meter will I need to test them” than it is to individually search for each component and then search for what type of meter you need to test it.
Basically any time one human query needs to synthesize information from multiple different sources, an AI search is going to be significantly faster.
Appoxo@lemmy.dbzer0.com 2 days ago
In later classes our teachers just told us not to blindly believe what we read on Wikipedia, but to cross-reference it with other sources like newspapers or (as you said) books.