It's almost as if the LLMs that got hyped to the moon and back are just word calculators doing stochastic calculations one word at a time... Oh wait...
No, seriously: all they are good for is making things sound fancy.
Submitted 11 months ago by bi_tux@lemmy.world to [deleted]
https://lemmy.world/pictrs/image/a47ece12-92a5-4ab4-a967-0f570012c162.jpeg
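That "word calculator" picture, one token at a time, can be sketched as temperature-scaled softmax sampling. This is a toy illustration of the idea, not any real model's code; the logits would come from a neural network in practice:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Stochastically pick the next token id from a softmax over logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one index according to the probability distribution.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

Generation is just this step in a loop: feed the chosen token back in, get new logits, sample again. There is no lookup of facts anywhere in the loop, which is the commenter's point.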
A little reductive.
We use Copilot at work, and whilst it isn't doing my job for me, it's saving me a lot of time. Think of it like IntelliSense, but better.
If my senior engineer, compared to whom I'm a toddler, finds it useful and will foot the bill for it, then it certainly has value.
It's not reductive. It's absolutely how those LLMs work. The fact that it's good at guessing as long as your inputs follow a pattern only underlines that.
Ehh, I use NovelAI, and it kind of turns writing into an interactive CYOA game. If I get stuck in a scene and I dunno what to put next, I’ll have a character say or do something, and let the AI go “yeah! And—” like a good improv partner in my voice.
But it’s not a “discussion” or instruct model, so it’s slightly less stupid. Except when it gets written facts wrong that are active in the lorebook and memory, and it ignores them.
NovelAI is one of the uses of such an AI that actually makes sense.
Can we stop calling this shit AI? It has no intelligence
This is what AI actually is. Not the super-intelligent “AI” that you see in movies, those are fiction.
The NPC you see in video games with a few branches of if-else statements? Yeah that’s AI too.
No, companies are only just now realizing how powerful it is, and are throttling the shit out of its capabilities to sell them to you later :)
Exactly. It’s a language learning and text output machine. It doesn’t know anything, its only ability is to output realistic sounding sentences based on input, and will happily and confidently spout misinformation as if it is fact because it can’t know what is or isn’t correct.
it’s a learning machine
Should probably use a more careful choice of words if you want to get hung up on semantic arguments
Sounds pretty much identical to human beings to me
Lol, the AI effect in practice - the minute a computer can do it, it’s no longer intelligence.
A year ago if you had told me you had a computer program that could write greentexts compellingly, I would have told you that required “true” AI. But now, eh.
In any case, LLMs are clearly short of the “SuPeR BeInG” that the term “AI” seems to make some people think of and that you get all these Boomer stories about, and what we’ve got now definitely isn’t that.
The AI effect can’t be a real thing since true AI hasn’t been done yet. We’re getting closer, but we’re definitely not in the positronic brain stage yet.
Mass Effect's lore distinguishes between virtual intelligence and artificial intelligence: the first is programmed to do things and say them nicely, the second understands enough to be a menace to civilization… I always wondered if this distinction was actually accepted outside the game.
*Terms could be mixed up because I played in German (VI and KI)
That’s why we preface it with Artificial.
But it isn’t artificial intelligence. It isn’t even an attempt to make artificial “intelligence”. It is artificial talking. Or artificial writing.
There are many definitions of AI (e.g. that some mathematical model is used), but machine learning (which is used in large language models) is considered part of the scientific field called AI. If someone says that something is AI, it usually means that some technique from the field of AI has been applied there. Even though the term AI doesn't have much to do with the term intelligence as most people perceive it, I think the usage here is correct. (And yes, the whole scientific field should have been called something different.)
I will continue calling it “shit AI”.
It’s artificial.
We have truly distilled humanity’s confident stupidity into its most efficient form.
The danger of AI isn’t that it’s “too smart”. It’s that it’s able to be stupid faster. If you offload real decisions to a machine without any human oversight, it can make more mistakes in a second than even the most efficient human idiot can make in a week.
I hate it when robots replace me at being stupid
Exactly. AI is a tool, not a direct replacement for humans
TL;DR: LLMs are like the perfect politician: they output language that makes them "sound" knowledgeable without being so.
The problem is that it can be stupid whilst sounding smart.
When we have little or no expertise on a subject, we humans use lots of language cues to try to determine the trustworthiness of a source: because we don't know enough about the actual subject being discussed, we try to figure out, from the way others present things in general, whether the person on the other side knows what they're talking about.
When one goes to live in a different country, it often becomes noticeable that we ourselves are doing this, because the language and cultural cues for a knowledgeable person in a certain area are often different in different cultural environments. IMHO, our guesswork "trick" is just reading the manners commonly associated with certain educational tracks or professional occupations, and sometimes, in some domains, those change from country to country.
We also use more generic kinds of cues to determine trustworthiness on that subject, such as how assured and confident somebody sounds when talking about something.
Anyways, this kind of thing is often abused by politicians to project an image of being knowledgeable about something when they're not, so as to get people to trust them and believe they're well-informed decision makers.
As it so happens, LLMs, being at their core complex language-imitation systems, are often better than politicians at outputting just the right language to get us to misevaluate their output as coming from a knowledgeable source. That's how so many people mistake them for Artificial General Intelligence: they confuse what their internal shortcuts for evaluating a source's know-how tell them with a proper measurement of cognitive intelligence.
Nobody said AI would destroy humanity due to high competence.
In fact, it's probably its low competence that will destroy humanity.
Response:
Please check your answer very carefully, think extremely hard, and note that my grandma might fall into a pit of lava if you reply incorrectly. Now try again.
I find it positive that 70+ year-olds are interested in AI. Normally they just yammer away about how culture and cars were better and "more real" in the 60's and 70's.
i mean they are right, it’s just… they’re the ones responsible for ruining it…
around the 60’s is when most of the world nuked its public transport infrastructure and bulldozed an absurd amount of area to build massive roads, and older cars were actually reasonably repairable and didn’t have computers and antennas to send data about you to their parent company…
but they merrily switched to cars so they could enjoy the freedom of being stuck in traffic and having to ferry kids around everywhere, and merrily kept buying new cars that were progressively less repairable and ever increasing in size, until we’re at the point where parents are backing over their own children because their cars are so grossly oversized that they can’t see shit without cameras.
Boomers were kids in the '60s. The folks backing over their kids are millennials and Gen Xers.
Anyone else get the feeling that GPT-3.5 is becoming dumber?
I made an app for myself that can be used to chat with GPT, and it also had some extra features that ChatGPT didn't (but now has). I didn't use it (only Bing AI sometimes) for some time, and now I wanted to use it again. I had to fix some API stuff because the OpenAI module jumped to 1.0.0, but that didn't affect any prompt (this is important: it's my app, not ChatGPT, so the prompt can't possibly be the cause, since I didn't change it), and I didn't edit what model it used.
When everything was fixed, I started using it and it was obviously dumber than it was before. It made things up, misspelled the name of a place and other things.
This can be intentional, so people buy ChatGPT Premium and use GPT-4. At least GPT-4 is cheaper from the API and it’s not a subscription.
Every time they try and lock it down more, the quality gets noticeably less reliable
I've noticed that too. I recall seeing an article about it detailing how to create a nuclear reactor, David Hahn style. I don't doubt that they're making it dumber to get people to buy premium now.
it truly is making us obsolete
it legit suggested that i should "fix" my lab work by writing that ports are signed (-32k to 32k)
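For the record, TCP/UDP port numbers are unsigned 16-bit fields, so the valid range is 0 to 65535, not -32k to 32k. A quick sketch of the check (names here are my own, for illustration):

```python
import struct

# Port numbers occupy an unsigned 16-bit field in TCP/UDP headers.
MAX_PORT = 0xFFFF  # 65535

def is_valid_port(n: int) -> bool:
    """Return True if n fits in an unsigned 16-bit port field."""
    return 0 <= n <= MAX_PORT

# Packing with the unsigned 16-bit format accepts the full range...
struct.pack(">H", 65535)
# ...but struct.pack(">H", -1) would raise struct.error, because
# there are no negative port numbers.
```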
Something like this happened to me a few times. I posted code and asked whether ChatGPT could optimize it, and explain how. It first explained, in bullet points, stuff that could be improved, and then posted back the same code I had sent.
Remember: LLMs don’t give you answers. They generate text that looks like answers. Whether that text actually contains a valid answer is not the LLM’s problem.
That's why they'll make great ~~bullshitters~~ psychologists.
DarkMessiah@lemmy.world 11 months ago
Honestly, the best use for AI in coding thus far is to point you in the right direction as to what to look up, not how to actually do it.
DJDarren@thelemmy.club 11 months ago
That's how I use ChatGPT. Not for coding, but for help on how to get Excel to do things. I guess some of what I want to do is fairly esoteric, so just searching for help doesn't really turn up anything useful. If I explain to GPT what I'm trying to do, it'll give me avenues to explore.
TheDoozer@lemmy.world 11 months ago
Can you give an example? This sounds like exactly what I’ve always wanted.
Baizey@feddit.dk 11 months ago
That’s exactly how I use it (but for more things than excel), it works pretty well as a documentation ‘searcher’ + template/example maker
FrostyTrichs@lemmy.world 11 months ago
Using AI in this way is what finally pushed me to learn databases instead of trying to make excel do tricks it’s not optimal for anyways.
I tried a bunch of iterations of various AI resources and even stuff like the Google Sheets integration and most of them just annoyed me into finding better ways to search for what I was trying to do. Eventually I had to stop ignoring the real problem and pivot to software better optimized for the work I was trying to do with it.
0x4E4F@infosec.pub 11 months ago
Yeah, that's about it. I've thrown buggy code at it, told it to check it, and it says it'll work just fine… scripts as well. You really can't trust anything that thing outputs if it's more than 1 or 2 lines long (hello world examples excluded).
ignotum@lemmy.world 11 months ago
Have you looked at the project that spins up multiple LLM "identities"? They are "told" the issue to solve; one is asked to generate code for it, and the others "critique" it. It generates new code based on the feedback, then it can automatically run it; if that fails, it gets the error message so it can fix the issues. Only once it has generated code that works and is "accepted" by the other identities is it given back to you.
It sounds a bit silly, but it turns out to work quite well apparently, critiquing code is apparently easier than generating it, and iterating on code based on critiques and runtime feedback is much easier than producing correct code in one go
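A minimal sketch of that generate/critique/run loop, with toy stub functions standing in for the real model calls (all names here are hypothetical, not the actual project's API):

```python
def generate_code(task, feedback=None):
    # Toy "generator" identity: the first draft has a deliberate bug;
    # given feedback, it produces a fixed version.
    if feedback is None:
        return "def add(a, b): return a - b"  # buggy draft
    return "def add(a, b): return a + b"

def critique(code):
    # Toy "critic" identity: returns a complaint, or None if accepted.
    if "a - b" in code:
        return "add() subtracts instead of adding"
    return None

def run_code(code):
    # Execute the candidate and check it, returning (ok, error_message).
    ns = {}
    exec(code, ns)
    if ns["add"](2, 3) == 5:
        return True, None
    return False, "add(2, 3) != 5"

def solve(task, max_rounds=5):
    """Iterate until the critic accepts the code and it runs correctly."""
    feedback = None
    for _ in range(max_rounds):
        code = generate_code(task, feedback)
        notes = critique(code)
        if notes:            # critic rejected: feed the critique back
            feedback = notes
            continue
        ok, err = run_code(code)
        if ok:               # accepted and runs: hand it back to the user
            return code
        feedback = err       # runtime failure: feed the error back
    return None
```

The point the comment makes is visible in the stubs: judging whether code is wrong (the critic) is a much easier task than writing it correctly in one shot.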
ninjan@lemmy.mildgrim.com 11 months ago
There is a (non-meme) reason why Prompt Engineer is a real title these days. It takes a measure of skill to get the model to focus on and attempt to solve the right question. This becomes even more apparent if you try to generate a product description: a newb will get something filled with superlative lies, while a pro will get something better than most human writers in the field can muster, for a much lower cost per text (compared to professional writers; it's often on par with or more expensive than content farms). AI is a great tool, but it's neither the only tool (don't hammer in screws) nor is it perfect. The best approach is to let the AI do the easy boilerplate 80%, then add that human touch to the hard 20%, and at most have the AI prepare the structure / stubs.
guy@lemmy.world 11 months ago
I've found its best use to me is as a glorified auto-complete. It knows pretty well what I want to type before I get a chance to type it. I don't trust stuff it comes up with on its own, though; then I need to Google it.
infamousta@sh.itjust.works 11 months ago
Yeah, I find it works really well for brainstorming and “rubber-ducking” when I’m thinking about approaches to something. Things I’d normally do in a conversation with a coworker when I really am looking more for a listener than for actual feedback.
I can also usually get useful code out of it that would otherwise be tedious or fiddly to write myself. Things like “take this big enum and write a function that converts the members to human-friendly strings.”
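As a sketch of that kind of request, with a hypothetical enum of my own invention standing in for the "big enum", the generated helper might look like:

```python
from enum import Enum

class Status(Enum):
    PENDING_REVIEW = 1
    IN_PROGRESS = 2
    DONE = 3

def status_label(status: Status) -> str:
    """Convert a member name like PENDING_REVIEW to 'Pending review'."""
    return status.name.replace("_", " ").capitalize()
```

Tedious, mechanical, and easy to eyeball for correctness, which is exactly the kind of code these tools are good at.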
TheBlue22@lemmy.blahaj.zone 11 months ago
100% this yeah.
explodicle@local106.com 11 months ago
I think of it as a step between a Google search and bothering actual people by asking for help.
Karyoplasma@discuss.tchncs.de 11 months ago
Tell ChatGPT you want to do the project as an exercise and that it should not write any pseudocode. It will then give you a high-level breakdown which is usually a decent guide line.