Or my favorite quote from the article
“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.
Submitted 8 months ago by kinther@lemmy.world to technology@lemmy.world
I once asked Gemini for steps to do something pretty basic in Linux (as a novice, I could have figured it out). The steps it gave me were not only nonsensical, but they seemed to be random steps for more than one problem all rolled into one. It was beyond useless and a waste of time.
This is the conclusion that anyone with any bit of expertise in a field has come to after 5 mins talking to an LLM about said field.
The more this broken shit gets embedded into our lives, the more everything is going to break down.
after 5 mins talking to an LLM about said field.
The insidious thing is that LLMs tend to be pretty good at 5-minute first impressions. I've repeatedly seen people trying to evaluate an LLM, and they generally fall back to "OK, if this were a human, I'd ask a few job interview questions: well known enough that they have a shot at answering, but tricky enough to show they actually know the field".
As an example, a colleague became a true believer after being directed by management to evaluate it. He decided to ask it to "generate a utility to take in a series of numbers from a file, sort them, and report the min, max, mean, median, mode, and standard deviation". It did so instantly, with "only one mistake". Then he tried the exact same question later in the day, it happened not to make that mistake, and he concluded it must have 'learned' how to do it in the intervening hours. Of course that's not how it works: there's just a bit of probabilistic sampling, and any perturbation of the prompt can produce unexpected variation. But he doesn't know that…
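For reference, the task he posed is a few lines of standard-library Python. Something like this sketch (assuming one number per line in the input file) is exactly the kind of tutorial fodder an LLM has seen thousands of times in training:

```python
import statistics

def summarize(path):
    """Read one number per line, sort them, and report basic statistics."""
    with open(path) as f:
        nums = sorted(float(line) for line in f if line.strip())
    return {
        "min": nums[0],
        "max": nums[-1],
        "mean": statistics.mean(nums),
        "median": statistics.median(nums),
        "mode": statistics.mode(nums),   # most common value
        "stdev": statistics.stdev(nums), # sample standard deviation
    }
```

Which is why a five-minute question like this tells you very little: it tests recall of textbook material, not real engineering.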
Note that management rarely makes it beyond tutorial/interview-question fodder when it comes to the technical side of their teams, and you get to see how they might tank their companies because the LLMs "interview well".
We’re fucked. It’s becoming truly self-aware
it was probably programmed to do it, like grok and racism
Life. Don’t talk to me about life.
Next on the agenda: Doors that orgasm when you open them.
How do you know they don’t?
AAAAAAAAaaaaaahhhhhh
How much did google pay ars for this slop?
Google replicated the mental state if not necessarily the productivity of a software developer
Imposter Syndrome is an emergent property
Wait, you know productive devs?
Yeah, it usually goes hand in hand with that mental state. You probably only know healthy devs.
Gemini has imposter syndrome real bad
Is it imposter syndrome, or simply an imposter?
This is the way
As it should.
JoMiran@lemmy.ml 8 months ago
I was an early tester of Google's AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. Once more I said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Of all the issues I brought up years ago, only one was ever addressed: I told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google's search. Now I use Kagi.
Guidy@lemmy.world 8 months ago
Weird, because I've used it many times for things not related to coding and it has been great.
I told it the specific model of my UPS and it let me know in no uncertain terms that no, a plug adapter wasn’t good enough, that I needed an electrician to put in a special circuit or else it would be a fire hazard.
I asked it about some medical stuff, and it gave thoughtful answers along with disclaimers and a firm directive to speak with a qualified medical professional, which was always my intention. But I appreciated those thoughtful answers.
I use co-pilot for coding. It’s pretty good. Not perfect though. It can’t even generate a valid zip file (unless they’ve fixed it in the last two weeks) but it sure does try.
JoMiran@lemmy.ml 8 months ago
Beware of the confidently incorrect answers. Triple check your results with core sources (which defeats the purpose of the chatbot).
ArtificialLink@lemy.lol 8 months ago
5 bucks a month for a search engine is ridiculous. 25 bucks a month for a search engine is mental institution worthy.
ebolapie@lemmy.world 8 months ago
How much do you figure it’d cost you to run your own, all-in?
somerandomperson@lemmy.dbzer0.com 8 months ago
This is the reason why.
jj4211@lemmy.world 8 months ago
That's the thing about AI in general: it's really hard to "fix" issues. You can try to train a problem out and hope for the best, but then you might end up playing whack-a-mole, because the fine-tuning that fixes one issue can make others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can apply alternative techniques to catch egregious issues, like a non-AI technique that helps stuff the prompt and nudges the model in a certain general direction (if it's an LLM; other AI technologies don't have this option, but they aren't the ones getting crazy money right now anyway).
A traditional QA approach is frustratingly less applicable because you have to more often shrug and say “the attempt to fix it would be very expensive, not guaranteed to actually fix the precise issue, and risks creating even worse issues”.
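The "non-AI technique to help stuff the prompt" mentioned above can be as crude as keyword matching that prepends fixed guardrail text before the model ever sees the user's input. A minimal sketch, with rule contents made up purely for illustration:

```python
# Minimal sketch of non-AI prompt stuffing: plain keyword rules inject
# fixed guidance into the prompt before it reaches the LLM.
GUARDRAIL_RULES = [
    # (trigger keywords, instruction to prepend) -- hypothetical examples
    ({"delete", "drop table"}, "Warn the user before any destructive action."),
    ({"diagnose", "symptom"}, "Add a disclaimer to consult a qualified professional."),
]

def stuff_prompt(user_prompt: str) -> str:
    """Prepend every matching guardrail instruction to the raw prompt."""
    lowered = user_prompt.lower()
    extra = [rule for keys, rule in GUARDRAIL_RULES
             if any(k in lowered for k in keys)]
    if not extra:
        return user_prompt  # nothing matched; pass through unchanged
    return "\n".join(["SYSTEM GUIDANCE:", *extra, "", user_prompt])
```

It's deterministic and testable, which is exactly why it gets bolted on: the LLM itself can't be QA'd that way.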
Lucidlethargy@sh.itjust.works 8 months ago
Gemini is dogshit, but it's objectively better than ChatGPT right now.
They're ALL just fucking awful. Every AI.
NotSteve_@piefed.ca 8 months ago
I know Lemmy seems to be very anti-AI (as am I), but we need to stop making the anti-AI talking point "AI is stupid". It has immense limitations right now because, yes, it is being crammed into things it shouldn't be, but we shouldn't just say "it's dumb", because that gets immediately written off by a sizable chunk of the general population. For a lot of things it is actually useful, and it WILL be taking people's jobs, like it or not (even if it's worse at them). Truth be told, this should be a utopian situation for obvious reasons.
I feel like I'm going crazy here because the same people on here who'd criticise the DARE anti-drug program as being completely un-nuanced to the point of causing the harm they're trying to prevent are doing the same thing for AI and LLMs
My point is that if you're trying to convince anyone, just saying it's stupid isn't going to turn anyone against AI, because the minute it offers any genuine help (which it will!), they'll write you off like any DARE pupil who tried drugs for the first time.
*Countries need to start implementing UBI NOW*
JoMiran@lemmy.ml 8 months ago
It is funny that you mention this, because it was after we started working with AI that I started telling anyone who would listen that we needed to implement UBI immediately. This was around 2014, IIRC.
I am not blanket calling AI stupid. That said, the AI term itself is stupid because it covers many computing aspects that aren't even in the same space. I was and still am very excited about image analysis, as it can be an amazing tool for health imaging diagnosis. My comment was specifically about Google's Bard/Gemini. It is and has always been trash, but in an effort to stay relevant, it was released into the wild and crammed into everything. The tool can do some things very well, but not everything, and there's the rub. It is an alpha product at best that is being force-fed down people's throats.
PriorityMotif@lemmy.world 8 months ago
I remember there was an article years ago, before the AI hype train, that Google had made an AI chatbot but had to shut it down due to racism.
a_wild_mimic_appears@lemmy.dbzer0.com 8 months ago
That was Microsoft’s Tay - the twitter crowd had their fun with it: www.theverge.com/…/tay-microsoft-chatbot-racist
tzrlk@lemmy.world 8 months ago
Are you thinking of when Microsoft’s AI turned into a Nazi within 24hrs upon contact with the internet? Or did Google have their own version of that too?