Microsoft has published an ad promoting Copilot on Windows 11, but it contains a hilarious error that actually proves how useless the AI is.
To anyone thinking this is “not such a big deal”:
Consider that this is one of the most valuable, most influential companies on the planet. They’re pushing Copilot like their bottom line depends on it.
Not only did Copilot give an objectively incorrect response - calling into question the fundamental usefulness of the feature - but I think what’s being glossed over is: they published this.
For a company that’s so desperate to push a clearly half-baked, unwanted feature - publishing an ad showing it malfunctioning is hilariously inept. It’s not just a bad look for the product, it’s an embarrassment for the company.
Enkers@sh.itjust.works 3 weeks ago
Isn’t that what almost anyone would do?
I’ve seen some pretty silly AI blunders before, but this one seems rather harmless. You’re still going to end up at the setting you need to change to solve the problem, which to me falls squarely in the “close enough” bin.
mindlesscrollyparrot@discuss.tchncs.de 3 weeks ago
Copilot didn’t suggest 150% in the context of a set of instructions. He specifically asked Copilot which percentage to set. That was the only task Copilot was being asked to solve at that point in time.
You might also say that this is a simple task, so nobody would actually ask Copilot what percentage to use - but isn’t that the point? This task, which Copilot failed on, is a simple task.
Lumidaub@feddit.org 3 weeks ago
It’s also a symptom of disregard for customers. Instead of just redoing the shot, this time making sure that it’s set to 100%, they just went “eh, nobody will notice”. It’s insulting.
rozodru@pie.andmc.ca 3 weeks ago
The problem is twofold, and this applies to all LLMs.
According to the article, the person asked HOW to increase the font size. Copilot then tells Aura to go increase the font size, so it’s not exactly what the person asked.
Like all LLMs, it MUST provide a solution. It MUST do something. Copilot asks Aura to increase the font size to 150% - but it’s already at 150%. Instead of going back to the user and saying “the font size is already set to 150%, would you like to increase it more?”, it just goes ahead and bumps it up to 200% because it has to. It has no choice; it must provide some kind of solution. LLMs like GPT-5, Claude, etc. will do similar crap, which ultimately shows that recent models are all collectively garbage. They now must ALL provide some sort of solution, even when the majority of those solutions are actually hallucinations. LLMs aren’t allowed to say “I don’t know” or “it’s already done” or whatever.
So this video/article just shows how pointless and unreliable LLMs/AI are at even the most basic things. They’ve all been told that they must provide some kind of solution, always. It’s especially bad with recent updates: Claude will hallucinate 8 times out of 10 in its solutions, and GPT-5 now just info-dumps on you, hoping something in there will resonate. It can’t provide an accurate, precise solution anymore; it just vomits on your plate and calls it a meal.