When we see LLMs struggling to identify which letters are in the tokens they emit, or failing to understand a word when there are spaces between each letter, we should compare it to a human struggling to understand a word written in IPA (/sʌtʃ ɪz ðɪs/), even though we can understand the same word spoken aloud perfectly fine.
AGI achieved 🤖
Submitted 10 months ago by cyrano@lemmy.dbzer0.com to [deleted]
https://lemmy.dbzer0.com/pictrs/image/7efced45-504a-4177-a992-a5a2ce0e8b6f.webp
Comments
jsomae@lemmy.ml 9 months ago
GandalftheBlack@feddit.org 9 months ago
But if you've learned IPA you can read it just fine
jsomae@lemmy.ml 9 months ago
I know IPA, but I can't read English text written in pure IPA as fast as I can read English text written normally. I think this is the case for almost anyone who has learned the IPA and knows English.
sheetzoos@lemmy.world 10 months ago
Honey, AI just did something new. It's time to move the goalposts again.
Echo5@lemmy.world 10 months ago
Maybe OP was low on the priority list for computing power? Idk how this stuff works
bitjunkie@lemmy.world 10 months ago
Deep reasoning is not needed to count to 3.
sheetzoos@lemmy.world 10 months ago
It is if you're creating ragebait.
RizzoTheSmall@lemm.ee 10 months ago
o3-pro? Damn, that's an expensive goof
UrPartnerInCrime@sh.itjust.works 10 months ago
lordbritishbusiness@lemmy.world 9 months ago
One of the interesting things I notice about the "reasoning" models is that their responses to questions occasionally include what my monkey brain perceives as "sass".
I wonder sometimes if they recognise the triviality of some of the prompts they answer, and subtly throw shade.
One's going to respond to this with "clever monkey! 🐒 Have a banana 🍌."
cashsky@sh.itjust.works 10 months ago
What is that font bro…
UrPartnerInCrime@sh.itjust.works 9 months ago
It's called Sweetpea, and my sweetpea picked it out for me. How dare I stick with something my girl picked out for me.
But the fact that you actually care what font someone else uses is sad
ynthrepic@lemmy.world 10 months ago
Nice Rs.
nyamlae@lemmy.world 10 months ago
Is this ChatGPT o3-pro?
UrPartnerInCrime@sh.itjust.works 10 months ago
ChatGPT 4o
Korhaka@sopuli.xyz 10 months ago
I asked it how many Ts are in the names of presidents since 2000. It said 4 and stated that "Obama" contains 1 T.
TheOakTree@lemm.ee 10 months ago
Toebama
jsomae@lemmy.ml 10 months ago
People who think that LLMs having trouble with these questions is evidence one way or the other about how good or bad LLMs are just don't understand tokenization. This is not a big-picture problem that indicates LLMs are deeply incapable. You may hate AI, but that doesn't excuse being ignorant about how it works.
moseschrute@lemmy.world 10 months ago
Also, just checked: every OpenAI model bigger than 4.1-mini can answer this. I think the joke should emphasize how we developed a super power-inefficient way to solve some problems that can be accurately and efficiently answered with a single algorithm. Another example is using ChatGPT to do simple calculator math. LLMs are good at specific tasks and really bad at others, but people kinda throw everything at them.
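The "single algorithm" being alluded to really is a one-liner; here is a minimal sketch in Python (the helper name is illustrative):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))   # 3
print(count_letter("Mississippi", "s"))  # 4
```

This runs in linear time over the word, with no training data, GPUs, or network calls involved.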
__dev@lemmy.world 10 months ago
And yet they can seemingly spell and count (small numbers) just fine.
buddascrayon@lemmy.world 10 months ago
The problem is that it's not actually counting anything. It's simply looking for some text somewhere in its database that relates to that word and the number of R's in that word. There's no mechanism within the LLM to actually count things. It is not designed with that function. This is not general AI, this is a Generative Adversarial Network that's using its vast, vast store of text to put words together that sound like they answer the question that was asked.
jsomae@lemmy.ml 10 months ago
What do you mean by spell fine? They're just emitting the tokens for the words. Like, it's not writing "strawberry", it's writing tokens <302, 1618, 19772>, which correspond to st, raw, and berry respectively. If you ask it to put a space between each letter, that will disrupt the tokenization mechanism, and it's going to be quite liable to make mistakes.
I don't think it's really fair to say that the lookup 19772 -> berry counts as the LLM being able to spell, since the LLM isn't operating at that layer. It doesn't really emit letters directly. I would argue its inability to reliably spell words, when you force it to go letter by letter or answer queries about how words are spelled, is indicative of its poor ability to spell.
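The token-level view described above can be turned into a toy sketch of why letter questions are hard (the vocabulary below is hypothetical, not OpenAI's actual one; real tokenizers have on the order of 100k entries):

```python
# Hypothetical subword vocabulary mapping integer token IDs to strings.
toy_vocab = {302: "st", 1618: "raw", 19772: "berry"}

# What the model actually operates on: opaque integer IDs.
token_ids = [302, 1618, 19772]

# Counting letters requires detokenizing first, a character-level view
# the model itself never gets while predicting the next token.
word = "".join(toy_vocab[t] for t in token_ids)
print(word, "has", word.count("r"), "r's")  # strawberry has 3 r's
```

Notice that none of the three IDs individually "contains" the answer: the r's are split across the raw and berry pieces, which is exactly the mismatch between token space and letter space.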
untorquer@lemmy.world 10 months ago
These sorts of artifacts wouldn't be a huge issue, except that AI is being pushed on the general public as an alternative means of learning basic information. The meme example is obvious to someone with a strong understanding of English, but learners and children might get an artifact and stamp it into their memory, working for years off bad information. A few false things every now and then are not a problem; that's unavoidable in learning. Accumulate thousands over long-term use, however, and your understanding of the world will be coarser, like Swiss cheese with voids so large it can't hold itself up.
jsomae@lemmy.ml 10 months ago
You're talking about hallucinations. That's different from tokenization reflection errors. I'm specifically talking about its inability to know how many of a certain type of letter are in a word that it can spell correctly. This is not a hallucination per se; at least, the mechanism that causes it is completely different from whatever causes other factual errors. This specific problem is due to tokenization, and that's why I say it has little bearing on other shortcomings of LLMs.
MrLLM@ani.social 10 months ago
We gotta raise the bar, so they keep struggling to make it "better"
My attempt:
0000000000000000
0000011111000000
0000111111111000
0000111111100000
0001111111111000
0001111111111100
0001111111111000
0000011111110000
0000111111000000
0001111111100000
0001111111100000
0001111111100000
0001111111100000
0000111111000000
0000011110000000
0000011110000000
Btw, I refuse to give my money to AI bros, so I don't have the "latest and greatest"
ipitco@lemmy.super.ynh.fr 10 months ago
Tested on ChatGPT o4-mini-high
0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0
0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0
0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0
0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0
0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0
0 0 0 0 1 1 1 0 0 1 1 1 0 0 0 0
0 0 0 1 1 1 0 0 0 0 1 1 1 0 0 0
0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0
It sent me this
xavier666@lemm.ee 9 months ago
I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good
Tech bros: "Worth it!"
qx128@lemmy.world 10 months ago
I really like checking these myself to make sure it's true. I WAS NOT DISAPPOINTED!
(Total Rs is 8. But the LOGIC ChatGPT pulls out is… remarkable!)
AnUnusualRelic@lemmy.world 10 months ago
What is this devilry?
scholar@lemmy.world 10 months ago
jsomae@lemmy.ml 9 months ago
This is a DeepSeek model, right? OP was posting about GPT o3
ipitco@lemmy.super.ynh.fr 10 months ago
Try with o4-mini-high. It's made to think more like a human, checking its answer and working step by step, rather than just kinda guessing one like here
Zacryon@feddit.org 10 months ago
"Let me know if you'd like help counting letters in any other fun words!"
Oh well, these newish calls for engagement sure reach ridiculous extremes sometimes.
filcuk@lemmy.zip 10 months ago
I want an option to select Marvin the Paranoid Android mood: "there's your answer, now if you could leave me to wallow in self-pity"
LMurch@thelemmy.club 10 months ago
AI is amazing, we're so fucked.
/s
Korhaka@sopuli.xyz 10 months ago
Unironically, we are fucked when management thinks AI can replace us, not when AI can actually replace us.
slaacaa@lemmy.world 10 months ago
Singularity is here
ICastFist@programming.dev 10 months ago
Now ask how many asses there are in assassinations
Rin@lemm.ee 10 months ago
notdoingshittoday@lemmy.zip 10 months ago
rumba@lemmy.zip 10 months ago
LodeMike@lemmy.today 10 months ago
Man, AI is ass at this
*laugh track*
RedstoneValley@sh.itjust.works 10 months ago
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (the "intelligence" part of AI, for starters)
outhouseperilous@lemmy.dbzer0.com 10 months ago
I would say more "blackpilling"; I genuinely don't believe most humans are people anymore.
merc@sh.itjust.works 10 months ago
> then continue to shill it for use cases it wasn't made for either
The only thing it was made for is "spicy autocomplete".
jsomae@lemmy.ml 10 months ago
Turns out spicy autocomplete can contribute to the bottom line. Capitalism :(
SoftestSapphic@lemmy.world 10 months ago
Maybe they should call it what it is:
Machine learning algorithms from 1990, repackaged and sold to us by marketing teams.
outhouseperilous@lemmy.dbzer0.com 10 months ago
Hey now, that's unfair and queerphobic.
These models are from 1950, with juiced-up data sets. Alan Turing personally did a lot of work on them, before he cracked the math and figured out they were shit and would always be shit.
jsomae@lemmy.ml 10 months ago
A machine learning algorithm from 2017, scaled up a few orders of magnitude so that it finally more or less works, then repackaged and sold by marketing teams.
UnderpantsWeevil@lemmy.world 10 months ago
> LLM wasn't made for this
There's a thought experiment that challenges the concept of cognition, called the Chinese Room. What it essentially postulates is a conversation in Chinese between two parties, where one of them is a person in a room assembling replies by following a huge rulebook without understanding a word. The speaker outside wonders, "Does my conversation partner really understand what I'm saying, or am I just getting elaborate stock answers from a big library of pre-defined replies?"
The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying; it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word "strawberry". So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine designed to handle this basic input question could have come up with the answer faster and more accurately.
When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They built a glorified card catalog: a device that can only take inputs, feed them through a massive library of responses, and sift out the highest-probability answer without actually knowing what the inputs or outputs signify cognitively.
Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit, because the developers did a dogshit job of sanitizing and rationalizing their library of data.
Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of sci-fi screenplays, a stack of regional newspapers, and a stack of Iron Man comic books, all given equal weight. Imagine hearing the plot of The Terminator and Escape from L.A. intercut with local elections and the Loma Prieta earthquake.
That's modern LLMs in a nutshell.
Leet@lemmy.zip 10 months ago
Can we say for certain that human brains aren't sophisticated Chinese Rooms…
outhouseperilous@lemmy.dbzer0.com 10 months ago
Yes, but have you considered that it agreed with me, so now I need to defend it to the death against you horrible apes, no matter the allegation or terrain?
Knock_Knock_Lemmy_In@lemmy.world 10 months ago
> a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately
The human approach would be to write a (Python) program to count the number of characters precisely.
When people refer to agents, is this what they are supposed to be doing? Is it done in a generic fashion, or will it fall over with complexity?
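That is roughly what tool-calling agents are supposed to do: instead of answering from token statistics, the model emits a small program and a runtime executes it. A minimal sketch of the loop, with the model call stubbed out (the function names here are illustrative, not any specific agent framework's API):

```python
import contextlib
import io

def fake_llm_write_tool(question: str) -> str:
    # Stand-in for the model: a real agent would generate this code
    # string from the question via an LLM call.
    return 'print("strawberry".count("r"))'

def run_tool(code: str) -> str:
    # The agent runtime executes the generated code and captures stdout.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code)
    return buf.getvalue().strip()

answer = run_tool(fake_llm_write_tool('How many r\'s are in "strawberry"?'))
print(answer)  # 3
```

Whether this generalizes is exactly the open question in the comment above: the generated program is only as reliable as the model's ability to write correct code for an arbitrary request.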
RedstoneValley@sh.itjust.works 10 months ago
That's a very long answer to my snarky little comment :) I appreciate it though. Personally, I find LLMs interesting and I've spent quite a while playing with them. But after all, they are like you described: an interconnected catalogue of random stuff, with some hallucinations to fill the gaps. They are NOT a reliable source of information or general knowledge, or even safe to use as an "assistant". The marketing of LLMs as being fit for such purposes is the problem. Humans tend to turn off their brains and blindly trust technology, and the tech companies encourage them to do so by making false promises.
jsomae@lemmy.ml 10 months ago
You've missed something about the Chinese Room. The resolution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person in there is irrelevant; they could be replaced with a speaker or a computer terminal.
Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms; rather, one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.
If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that, because LLMs are just scaled-up autocomplete, they surely can't be very good at anything are not comforting to me at all.
merc@sh.itjust.works 10 months ago
> Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you … That's modern LLMs in a nutshell.
I agree, but I think you're still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.
IMO, one of the key ideas of the Chinese Room is the assumption that the computer/book in the experiment has infinite capacity in some way, so that no matter what symbols are passed to it, it can come up with an appropriate response. But obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be "fooled" when they're given input that is semantically similar to a meme, joke, or logic puzzle. The vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can't reason, so they can't distinguish between "this is just a rephrasing of that meme" and "this is similar to that meme but distinct in an important way".
frostysauce@lemmy.world 10 months ago
> (damn, wish we had a tool that did exactly this back in August of 1996, amirite?)
Wait, what was going on in August of '96?
shalafi@lemmy.world 10 months ago
You might just love Blindsight. Here, they're trying to decide if an alien life form is sentient or a Chinese Room:
"Tell me more about your cousins," Rorschach sent.
"Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."
"We'd like to know about this tree."
Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."
"Well, it asked for clarification," Bates pointed out.
"It asked a follow-up question. Different thing entirely."
Bates was still out of the loop. Szpindel was starting to get it, though…
REDACTED@infosec.pub 10 months ago
There are different types of artificial intelligence. Counter-Strike 1.6 bots, by definition, were AI. They even used deep learning to figure out new maps.
ouRKaoS@lemmy.today 10 months ago
If you want an even older example, the ghosts in Pac-Man could be considered AI as well.
BarrelAgedBoredom@lemm.ee 10 months ago
It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.
Knock_Knock_Lemmy_In@lemmy.world 10 months ago
AGI is only a benchmark because, when it occurs, it gets OpenAI out of a contract with Microsoft.
merc@sh.itjust.works 10 months ago
You can even drop the "a" and the "g". There isn't even "intelligence" here. It's not thinking; it's just spicy autocomplete.
Gladaed@feddit.org 10 months ago
Fair point, but a big part of "intelligence" tasks is memorization.
BussyCat@lemmy.world 10 months ago
Computers, for all intents and purposes, have perfect recall, so a model trained on a large data set should have much better "intelligence" if memorization were enough. But in reality, what we consider intelligence is extrapolating from existing knowledge, which is what "AI" has shown to be pretty shit at.
abfarid@startrek.website 10 months ago
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs on specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words contain which letters from texts describing those words, but normally it shouldn't be expected.
Zacryon@feddit.org 10 months ago
I know that words are tokenized in the vanilla transformer. But do GPT and similar LLMs still do that as well? I assumed they also tokenize at the character/symbol level, possibly mixed with additional abstraction further down the chain.
abfarid@startrek.website 10 months ago
I don't know what part of what I said prompted all those downvotes, but of course all the reasonable people understood that the "AGI in 2 years" talk was a stock-price pump.
kayzeekayzee@lemmy.blahaj.zone 10 months ago
I've actually messed with this a bit. The problem is more that it can't count to begin with. If you ask it to spell out each letter individually (i.e. each letter becomes its own token), it still gets the count wrong.
abfarid@startrek.website 10 months ago
In my experience, when using reasoning models, it can count, but not very consistently. I've tried random assortments of letters, and it can count them correctly sometimes. It seems to have a much harder time when the same letter repeats many times, perhaps because those runs are tokenized irregularly.
cyrano@lemmy.dbzer0.com 10 months ago
True, and I agree with you, yet we are being told all jobs are going to disappear, AGI is coming tomorrow, etc. As usual, the truth is more balanced.
hornyalt@lemmynsfw.com 10 months ago
"A guy instead"
VirgilMastercard@reddthat.com 10 months ago
idiomaddict@lemmy.world 10 months ago
I know there's no logic, but it's funny to imagine it's because it's pronounced Mrs. Sippy
merc@sh.itjust.works 10 months ago
How do you pronounce "Mrs" so that there's an "r" sound in it?
sp3ctr4l@lemmy.dbzer0.com 10 months ago
I was gonna say something similar; I have heard a LOT of people pronounce Mississippi as if it does have an R in it.
jaybone@lemmy.zip 10 months ago
And if it messed up on the other word, we could say it's because it's pronounced Louisianer.
cyrano@lemmy.dbzer0.com 10 months ago
DmMacniel@feddit.org 10 months ago
We are fecking doomed!
loomy@lemy.lol 10 months ago
I don't get it
besselj@lemmy.ca 10 months ago
ZILtoid1991@lemmy.world 9 months ago
Reality:
The AI was trained to answer this question correctly with 3.
Wait until the AI gets burned on a different question. Skeptics will rightfully use it to criticize LLMs for just being stochastic parrots, until LLM developers teach their models to answer it correctly, and then the AI bros will use it as proof of them becoming "more and more human-like".
outhouseperilous@lemmy.dbzer0.com 9 months ago
No, but see, they're not skeptics, they're just haters, and there is no valid criticism of this tech. Sorry.
And also you've just been banned from, like, twenty places for being A FANATIC "anti-AI shill". Genuinely check the mod log; these fuckers are cultists.