People always misuse search engines by writing whole questions as a search…
With AI they can still do that and get, I think in their opinion, a better result
Submitted 3 days ago by Feddinat0r@feddit.org to showerthoughts@lemmy.world
Well, search engines have become nearly useless even with proper phrasing, so yeah, I do often just let AI dig through it for me.
That’s basically what AI is tho. It’s a misleading term for the evolution of autocomplete.
An LLM can be used as a search engine for things you know absolutely zero terminology about. That’s convenient. You can’t ask Google for “tiny striped barrels with wires” and expect to get an explanation of resistor markings.
10-15 years ago, Google returned the correct answers even when I used the wrong words. For example, it would most likely have returned resistors for that query because of the stripes, and if you left off the stripes it would have been capacitors.
AI isn’t nearly as good as Google was 10+ years ago.
There’s a theory that it’s by design. They have made search so bad that we now turn to AI to give us what search once could, and that way they can effectively charge you for searching… an idea we would generally have called baloney.
It worked yesterday when I tried to find a video just by describing what I remembered from it. That was great. I want that for my own photos and videos without having to upload them somewhere.
It sounds like you might be referring to miniature striped barrels used in crafts or model-making, often decorated or with wire elements for embellishment or functionality. These barrels can be used in various DIY projects, including model railroads, dioramas, or even as decorative items.
Reverse image search would find that answer more accurately than some llm
How? And don’t those image searches have LLMs under the hood?
People who use LLMs as search engines run a very high risk of “learning” misinformation. LLMs excel at being “confidently incorrect”. Not always, but not seldom either, LLMs slip bits of false information into a result. That confident packaging, along with the fact that the misinformation is likely surrounded by actual facts, often convinces people that everything the LLM returned is correct.
Don’t use an LLM as your sole source of information or as a complete replacement for search.
Just had a discussion with an LLM about the plot of a particular movie, specifically the parts where the plot falls short. I asked it to list all the parts that feel contrived.
It gave me 7 points that were OK, but the 8th one was 100% hallucinated. That event is not in this movie at all. It totally missed the 5 completely obvious contrived screw-ups in the ending of the movie too, so I was not very convinced by this plot analysis.
But you’re asking it about an abstract idea that most people probably wouldn’t even agree on.
I would never expect a good analysis of a movie from an LLM. It can’t actually produce original thought, and can’t even watch the movie itself. It maybe has some version of the script in its training data, and definitely has things that people have said about the movie, and similar movies, and similar books, and whatever else they scraped. It just returns words that are often grouped together and that have a high likelihood of relevance to your query.
That’s my main issue with LLMs. If I need to fact-check the information anyway, I’d save time by looking for it elsewhere directly. It makes no sense to me.
Being old enough to remember what search was like pre-Google, I know that the AI shit is worse at finding the right results than search was before Google ever existed. Which is a shame, because Google actually made searching good initially.
I’ve unfortunately noticed that as LLMs have gained traction, search engines in my experience have gotten worse. Sometimes I have to do 2 or 3 searches to get the articles that actually relate to what I’m looking for. Conversely, LLMs are great for asking a question directly, figuring out exactly what you’re looking for, and then going to a search engine and doing some research on your own. It would be nice if there were a way to combine the two without the ridiculously egregious environmental and intellectual issues of LLMs.
Is that not what Google does now? They give you a little AI summary with information taken from the first few results and break it down into a more easily digestible version.
They do. But their LLM in my experience really isn’t very good. If ChatGPT is like a B+ student, Google’s is the special-ed kid in a helmet.
I guess? I only use Google at work, though, so I’m not too familiar. But it still hits my issues with LLMs, and it’s forced on in Google, I believe.
They get an answer but unlike a search engine, the AI doesn’t show its work. I want a citation with the answer, I’m not taking your word for it!
Eh? You can ask it to provide sources and it will. Or Google AI does it by default
There’s lots of things wrong with AI, but that’s actually not one of them much of the time.
There is no guarantee those sources say what the answer says, or indeed that they actually exist. Generators can and do assemble words into phrases that merely look like citations. It’s actually a problem for librarians, who keep getting accused of hiding nonexistent books “cited” by ChatGPT.
Oh interesting. It should do this by default then.
Defaults matter. They normalize patterns of behaviour. People who are normalized not to care about citations are being trained to blindly accept whatever they’re told. That’s a recipe for an unthinking, obedient, submissive society.
Yes and no. It sometimes kind of tries to extrapolate from lots of sources and just gives you a few of them that don’t really give an answer.
It used to be funny when someone wrote a two-sentence-long “search query” on Google. Nowadays, you can literally do that on any LLM and you’ll get a summary based on a few results. There are a whole bunch of problems with that, but I’ll just let the people from !fuck_ai@lemmy.world elaborate.
You already told it you were interested in soup. It didn’t provide cook times, prep work needed or portions. It didn’t mention any other alternatives or possibilities.
You will need to open a recipe blog anyway, after taking the time to read that and determine that it’s not everything you need to know, and it drank a Honda Civic’s volume in water and used enough electricity to run resistive space heaters heating your house for 17 hours in below-zero F weather.
It created that answer by comparing its statistical word tree to other, similar word combinations and then autocompleting the next most likely word you might want to hear. It did not consider your topic in any way; it doesn’t know what a carrot is, only its token number and that it kind of belongs in paragraphs that roughly resemble the one it gave you. It is a reverse-Gaussian-blur of a Gaussian-blurred overlay of a million photos of paragraphs about carrots, soups, and carrot soups.
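(If you want to see the shape of that “next most likely word” loop, here’s a toy Python sketch. Every word and probability in it is made up for illustration; a real LLM replaces the lookup table with a neural network scoring tens of thousands of tokens, but the generation loop really is just this: pick a likely next word, append it, repeat.)

```python
import random

# Toy next-token table mapping a context word to (next_word, probability) pairs.
# These entries are invented for illustration only; a real model conditions on
# the whole preceding context, not just the last word.
NEXT_TOKEN = {
    "carrot": [("soup", 0.5), ("cake", 0.3), ("juice", 0.2)],
    "soup":   [("recipe", 0.6), ("is", 0.4)],
    "recipe": [("calls", 0.7), ("uses", 0.3)],
}

def sample_next(word):
    """Pick a next word in proportion to its probability."""
    candidates = NEXT_TOKEN.get(word)
    if not candidates:
        return None  # no continuation known: stop generating
    words, probs = zip(*candidates)
    return random.choices(words, weights=probs)[0]

def autocomplete(start, max_words=5):
    """Repeatedly append the sampled next word -- the entire 'generation' loop."""
    out = [start]
    while len(out) < max_words:
        nxt = sample_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(autocomplete("carrot"))  # e.g. "carrot soup recipe calls"
```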
It carved away forests and poisoned nearby pensioners’ air just to give you this gray blur of an answer, devoid of all thought or creativity. It is objectively worse than the ad-strewn sites written by an actual person, in every way, and you’d have to be a fucking madman to heap any praise upon it.
Wow! That’s pretty intense.
9/10, would recommend.
An LLM is little more than a search engine.
Some people like AI because they treat it as if it's the voice of God speaking directly to them.
Yeah, that’s what I use it for mostly. On DDG I’ll ask it stuff like someone’s age, or when someone passed, etc., to get a quick description of something. And if I need more info I’ll look it up on my own.
NONE_dc@lemmy.world 3 days ago
I tend to think that people use AI (and yeah, search engines too) the way children use their parents:
“Mom, why is the sky blue?” “Mom, where is China?” “Mom, can you help me with this school project?” (The mother ends up doing everything).
The thing is, unlike a parent, AI is unable to tell users that it doesn’t know everything and that they should do things on their own. Because that would reduce the number of users.
BenderRodriguez@lemmy.world 3 days ago
Mom, why is China?
snooggums@piefed.world 3 days ago
The world would be a better place if most parents did that instead of confidently spewing bigotry, misogyny, and other terrible opinions. As a kid I only knew of a few who were able to say ‘I don’t know’, and the ratio is about the same with adults.
NONE_dc@lemmy.world 3 days ago
Blame the Dunning-Kruger effect. The people I have seen most likely to acknowledge their lack of knowledge in a certain area have been those who are very wise and well-versed in at least one field, such as science, history (like my mom), art, etc.
Mediocre people are mostly convinced that they know everything.
MagicShel@lemmy.zip 3 days ago
AI has a lot more surface knowledge about a lot more things than my parents ever did. I think one of the more insidious things about AI, though, is that with a human you can generally tell when they are out of their depth. They grasp for words. Their speech cadence is more hesitant. Their hesitation is palpable. (I think palpable might be considered slop these days, but fuck haters it’s how I write — emdashes and all.)
AI never gives you that hint. It’s like an autistic encyclopedia. “You want to know about the sun? I read just the book. Turns out there’s a god who pulls it across the sky every day.” And then it proceeds to gaslight you when you ask probing questions.
(It has gotten better about this due to the advanced meta prompting behind the scenes and other improvements, but the guardrails are leaky.)