Archive: archive.is/lP0lT
Will cut the AI results out of your google searches by switching the browser’s default to the web api…
I cannot tell you how much I love it.
Submitted 3 weeks ago by tonytins@pawb.social to technology@lemmy.world
https://www.404media.co/wikipedia-says-ai-is-causing-a-dangerous-decline-in-human-visitors/
Or better yet, ditch Google altogether.
For Firefox on Android (which TenBlueLinks doesn’t have listed) add a new search engine and use these settings:
Lemmy also does code markup with `text`
https://www.google.com/search?q=%s&udm=14
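For anyone curious what that template actually does: the browser replaces `%s` with your URL-encoded query, and `udm=14` asks Google for the plain web-results view. A minimal Python sketch of that substitution (the helper function name is mine, not part of any browser API):

```python
from urllib.parse import quote_plus

# The template you save in the browser's search-engine settings.
# %s is the placeholder the browser fills in with your query.
TEMPLATE = "https://www.google.com/search?q=%s&udm=14"

def build_search_url(query: str) -> str:
    """Mimic the browser's %s substitution: URL-encode the query, splice it in."""
    return TEMPLATE.replace("%s", quote_plus(query))

print(build_search_url("wikipedia traffic decline"))
# https://www.google.com/search?q=wikipedia+traffic+decline&udm=14
```

Any query you type in the address bar ends up encoded the same way, spaces and all.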
Thank you.
Oh thank you I’ve been looking for this
I would like to say Google is still better at finding search results with more than one word. For example, if somebody searches “santa claus porn” then DuckDuckGo or Ecosia will probably return images of porn or images of santa claus instead of images of santa claus porn.
However, that is no longer true either, because Google search keeps getting worse all the time. So it feels like there aren’t any good search engines anymore.
Yeah, switching search links will help, but it’s a band-aid. AI has stolen literally everyone’s work without any attempt at consent or remuneration, and the payoff is that your search is now 100 times faster: it comes back with exactly something you can copy and paste, and you never have to dig through links or bat away confirmation boxes only to find out a page doesn’t have what you need.
It’s straight up smash-n-grab. And it’s going to work. Just like everybody and their grandma gave up all their personal information to facebook so will your searches be done through AI.
The answer is to regulate the bejesus out of AI and ensure they haven’t stolen anything. That answer was rendered moot by electing trump.
I don’t know about you, but my results have been wrong or outdated at least a quarter of the time. That’s the odds of flipping two coins and getting both heads; at that failure rate, the information is outright useless. What’s the point in looking something up to maybe find the right answer?
I’ve been asking a bunch of next-to-obvious questions about things that don’t really matter, and it’s been pretty good. It still confidently lies when it gives instructions, but a fair amount of the time it does what I asked it for.
I’d prefer not to have it, because it’s ethically putrid. But it converts currency and weights and translates things as well as expected, and in half the time I’d spend doing it manually. Plus I kind of hope using it puts them out of business. It’s not like I’d pay for it.
Curious what and how you’re prompting. I get solid results, but I’m only asking for hard facts, nothing that could have opinion or agenda inserted. Also, I never go past the first prompt. There be dragons that way.
If this AI stuff weren’t a bubble and the companies dumping billions into it were capable of any long term planning they’d call up wikipedia and say “how much do you need? we’ll write you a cheque”
They’re trying to figure out nefarious ways of getting data from people and wikipedia literally has people doing work to try to create high quality data for a relatively small amount of money that’s very valuable to these AI companies.
But nah, they’ll just shove AI into everything and blow the equivalent of Wikipedia’s annual budget in a week on electricity alone to shove unwanted AI slop into people’s faces.
Because they already ate through every piece of content on wikipedia years and years ago. They’re at the stage where they’ve trawled nearly the entire internet and are running out of content to find.
So now the AI trawls other AI slop, so it’s essentially getting inbred. So they literally need you to subscribe to their AI slop so they can get new data directly from you because we’re still nowhere near AGI.
But nah, they’ll just shove AI into everything and blow the equivalent of Wikipedia’s annual budget in a week on electricity alone to shove unwanted AI slop into people’s faces.
You’re off by several orders of magnitude, unfortunately. Tech giants are spending the equivalent of the entire fucking Apollo program on various AI investments every year at this point.
I eat out a lot, and lately I keep overhearing people talking about how they find shit with ChatGPT. It’s not a good sign.
I was chatting with some folks the other day and somebody was going on about how they had gotten asymptomatic long-COVID from the vaccine. When asked about her sources her response was that AI had pointed her to studies and you could viscerally feel everybody else’s cringe.
asymptomatic long-COVID
The hell even is that? Asymptomatic means no symptoms. Long-COVID isn’t a contagious thing, it’s literally a description of the symptoms you have from having COVID and the long term effects.
God that makes my freaking blood boil.
Damn @BigBenis@lemmy.world, that was a hell of a conversation you were having.
“Cool, send me the actual studies.”
*crickets*
Assuming this AI shit doesn’t kill us all and we make it to the conclusion that robots writing lies on websites perhaps isn’t the best thing for the internet, there’s gonna be a giant hole of like 10 years where you just shouldn’t trust anything written online. Someone’s gonna make a bespoke search engine that automatically excludes searching for anything from 2023 to 2035.
I can’t really fault them for it, tbh. Google has gotten so fucking bad over the last 10 years. Half of the results are just ads that don’t necessarily have anything to do with your search.
Sure, use something else like Duckduckgo, but when you’re already switching, why not switch to something that tends to be right 95% of the time, and where you don’t need to be good at keywords, and can just write a paragraph of text and it’ll figure out what you’re looking for. If you’re actually researching something you’re bound to look at the sources anyway, instead of just what the LLM writes.
The ease of access of LLMs, and the complete and utter enshittification of Google, is why so many people choose an LLM.
I believe DuckDuckGo is just as bad. I think they changed their search to match Google. I’m not sure if you are allowed to exclude search terms, use quotes, etc.
I had a song intermittently stuck in my head for over a decade, couldn’t remember the artist, song name, or any of the lyrics. I only had the genre, language it was in, and a vague, memory-degraded description of a music video. Over the years I’d tried to find it on search engines a bunch of times to no avail, using every prompt I could think of. ChatGPT got it in one. So yeah, it’s very useful for stuff like that. Was a great feeling to scratch that itch after so long. But I wouldn’t trust an LLM with anything important.
People stopped doing research the way it used to be done about 30 years ago.
Was it really “like that” for any length of time? To me it seems like most people just believed whatever bullshit they saw on Facebook/Twitter/Insta/Reddit, otherwise it wouldn’t make sense to have so many bots pushing political content there. Before the internet it would be reading some random book/magazine you found, and before then it was hearsay from a relative.
I think that the people who did the research will continue doing the research. It doesn’t matter if it’s thru a library, or a search engine, or Wikipedia sources, or AI sources, as long as you read the actual source you’ll be fine; if you didn’t want to do that it was always easy to stumble upon misinfo or disinfo anyways.
One actual problem that AI might cause is if the actual scientists doing the research start using it without due diligence. People are definitely using LLMs to help them write/structure the papers ¹ but if they actually use it to “help” with methodology or other content… Then we would indeed be in trouble, given how confidently incorrect LLM output can be.
I think that the people who did the research will continue doing the research.
Yes, but that number is getting smaller. Where I live, most households rarely have a full bookshelf, and instead nearly every member of the family has a “smart” phone; they’ll grab the chance to use anything that would be easier than spending hours going through a lot of books. I do sincerely hope methods of doing good research are still continually being taught, including the ability to distinguish good information from bad.
(pasting a Mastodon post I wrote a few days ago about StackOverflow, but IMHO it applies to Wikipedia too)
“AI, as in the current LLM hype, is not just pointless but rather harmful epistemologically speaking.
It’s a big word so let me unpack the idea with 1 example :
So SO is cratering in popularity. Maybe it’s related to the LLM craze, maybe not, but in practice, fewer and fewer people are using SO.
SO is basically a software developer social network that goes like this: somebody posts a question about a problem they’re hitting,
then people discuss via comments, answers, votes, etc., until, hopefully, the most appropriate (which does not mean “correct”) answer rises to the top.
The next person with the same, or similar enough, problem gets to try right away what might work.
SO is very efficient in that sense but sometimes the tone itself can be negative, even toxic.
Sometimes the person asking did not bother to search much, sometimes they clearly have no grasp of the problem, so replies can be terse, if not worse.
Yet the content itself is often correct in the sense that it does solve the problem.
So SO in a way is the pinnacle of “technically right” yet being an ass about it.
Meanwhile, what if you could get roughly the same mapping between a problem and its solution, but delivered in a nice, even sycophantic, manner?
Of course the switch will happen.
That’s nice, right?.. right?!
It is. For a bit.
It’s actually REALLY nice.
Until the “thing” you “discuss” with has, as its main KPI, keeping you engaged (as its owner gets paid per interaction), regardless of how usable (let’s not even say true or correct) its answer is.
That’s a deep problem because that thing does not learn.
It has no learning capability. It’s not just “a bit slow” or “dumb” but rather it does not learn, at all.
It gets updated with a new dataset, fine-tuned, etc., but there is no cycle where an action leads to the invalidation of a hypothesis, which generates a novel one, which then gets tested in a safe environment (that’s basically what learning is).
So… you sit there until the LLM gets updated. But with what? Now that fewer and fewer people bother updating your source (namely SO), how is your “thing” going to learn, sorry, to get updated, without new contributions?
Now if we step back not at the individual level but at the collective level we can see how short-termist the whole endeavor is.
Yes, it might help some, even a lot of, people to “vile code”, sorry I mean “vibe code”, their way out of a problem. But if nobody contributes solutions back to the source, well, I guess we are going faster right now, for some, but overall we will inexorably slow down.
So yes, epistemologically, we are slowing down, if not worse.
Anyway, I’m back on SO, trying to actually understand a problem. Trying to actually learn from my “bad” situation and rather than randomly try the statistically most likely solution, genuinely understand WHY I got there in the first place.
I’ll share my answer back on SO, hoping to help others.
Don’t just “use” a tool; think, genuinely. It’s not just fun, it’s also liberating.
Literally.
Don’t give away your autonomy for a quick fix, you’ll get stuck.”
originally on mastodon.pirateparty.be/…/115315866570543792
I honestly think that LLMs will result in no further progress in computer science.
Most past inventions and improvements were made out of necessity, because of how sucky computers are and how unpleasant it is to work with them (we call the results “abstraction layers”). And it was mostly done on a company’s dime.
Now companies will prefer to produce slop, because they hope to automate slop production.
As an expert in my engineering field, I would agree. LLMs have been a great tool in my job for improving technical writing and for getting over the hump of coding something every now and then. That’s where I see the future for ChatGPT and AI LLMs: providing a tool that can help people broaden their skills.
There is no future in them for field expertise, or for the depth of understanding that would be required to make progress in any field, unless they are specifically trained and guided. I do not trust them with anything highly advanced or technical, as I feel I start to teach the model.
Most importantly, the pipeline from finding a question on SO that you also have, to answering that question after doing some more research is now completely derailed because if you ask an AI a question and it doesn’t have a good answer you have no way to contribute your eventual solution to the problem.
Maybe SO should run everyone’s answers through a LLM and revoke any points a person gets for a condescending answer even if accepted.
It can be very toxic there.
AI will inevitably kill all the sources of actual information. Then all we’re going to be left with is the fuzzy learned version of information, plus a heap of hallucinations.
What a time to be alive.
AI just cuts and pastes from websites like Wikipedia. The problem is when it picks up information that’s old or from a sketchy source. Hopefully people will still know how to check sources; it should probably be taught in schools. Who’s the author, how old is the article, is it a reputable website, is there a bias? I know I’m missing some pieces.
You replied to OP while somehow missing the entire point of what he said lol
Much of the time, AI paraphrases, because it is generating plausible sentences not quoting factual material. Rarely do I see direct quotes that don’t involve some form of editorialising or restating of information, but perhaps I’m just not asking those sorts of questions much.
Man, we hardly did that shit 20 years ago. Ain’t no way the kids doing that now.
I guess I’m a bit old school, I still love Wikipedia.
I use Wikipedia when I want to know stuff. I use chatGPT when I need quick information about something that’s not necessarily super critical.
It’s also much better at looking up stuff than Google. Which is amazing, because it’s pretty bad. Google has become absolute garbage.
Yep, that and occasionally Wiktionary, Wikidata, and even RationalWiki.
You’re right bro but I feel comfortable searching the old fashioned way!
Same but with Encyclopedia Britannica
Unfortunately, it’s gonna get bad before it gets worse.
Well that’s kind of reassu… oh
I’ve been meaning to donate to those guys.
I use their site frequently. I love it, and it can’t be cheap to keep that stuff online.
“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
I understand the donors aspect, but I don’t think anyone who is satisfied with AI slop would bother to improve wiki articles anyway.
The idea that there’s a certain type of person that’s immune to a social tide is not very sound, in my opinion. If more people use genAI, they may teach people who could have been editors in later years to use genAI instead.
That’s a good point, scary to think that there are people growing up now for whom LLMs are the default way of accessing knowledge.
Not me. I value Wikipedia content over AI slop.
Alternative for DuckDuckGo:
Using backticks can help
https://noai.duckduckgo.com/?q
I asked a chatbot for scenarios in which AI wipes out humanity, and the most believable one is where it makes humans so dependent and infantilized that we just eventually die out.
So we get the Wall-e future…
Tbh, I’d say that’s not a bad scenario all in all, and much more preferable than scenarios with world war, epidemics, starvation, etc.
That is too bad. Wikipedia is important.
I’m surprised no-one has asked an LLM to produce a plausible version and just released that, claiming it’s a leak.
Because people are just reading AI-summarized explanations of their searches, many of which are derived from blogs and can’t be verified against an official source.
Or the ai search just rips off Wikipedia.
This will be unpopular, but hear me out. Maybe the decline in visitors is only a decline in the folks who are simply looking for a specific word or name that they forgot. Like, that one guy who believed in survival of the fittest. Um. Let me try to remember. I think he had an epic beard. Ah! Darwin! I just needed a reminder; I didn’t want to read the entire article on him, because I did that years ago.
Look at your own behaviors on lemmy. How often do you click/tap through to the complete article? What if it’s just a headline? What if it’s the whole article pasted into the body of the post? Click bait headlines are almost universally hated, but it’s a desperate attempt to drive traffic to the site. Sometimes all you need is the article synopsis. Soccer team A beats team B in overtime. Great, that’s all I need to know…unless I have a fantasy team.
all websites should block ai and bot traffic on principle.
I wonder if it’s just AI. I know some people have moved to backing up older versions of Wikipedia via Kiwix, out of fear that the site gets censored.
There’s a certain irony in a website that caused a decline in visitors to primary sources complaining about something new causing a decline in visitors to its tertiary sources
Yet I still have to go to the page for the episode lists of my favorite TV shows because every time I ask AI which ones to watch it starts making up episodes that either don’t exist or it gives me the wrong number.
I am kinda a big hater of AI and of the danger it represents to the future of humanity.
But, as a hobby programmer, I was surprised at how well these LLMs can answer very technical questions and provide conceptual insight and suggestions about how to glue different pieces of software together and what the limitations of each one are. I know that if an AI knows about this stuff, it must have been produced by a human. But considering the shitty state of the internet, where copycat websites compete to outrank each other with garbage blocks of text that never answer what you are looking for, while the honest blog post is buried on page 99 of a Google search, I can’t see how old-school search will win out.
Add to that, I have found forums and platforms like Stack Overflow to be not always very helpful; I have many unanswered questions on Stack Overflow piled up over the years, things that LLMs can answer in detail in seconds, without ever being annoyed at me or making passive-aggressive remarks.
Every time someone visits Wikipedia they make exactly $0. In fact, it costs them money. Are people still contributing and/or donating? These seem like more important questions to me.
In my case, I simply ended up buying a subscription to Britannica, which I started using instead. I just don't trust Wikipedia in this era.
I sympathize with Wikipedia here because I really like the platform. That being said, modernize and get yourself a new front end. People don’t like AI because of its intrusiveness. They want convenience. Create a “Knowledge-bot” or something similar that is focused on answering questions in a more meaningful way.
Seems like clickbait. Wikipedia does not need actual visitors that badly.
Surely it can’t be because of the decline in quality caused by despotic admins defending their own personal fiefdoms.
It used to be that the first result to a lot of queries, was a link to the relevant Wikipedia article. But that first result has now been replaced by an ai summary of the relevant Wikipedia article. If people don’t need more info than that summary, they don’t click through. That Ai summary is a layer of abstraction that wouldn’t be able to exist without the source material that it’s now making less viable to exist. Kinda like a parasite.
Maybe the humans are going outside, and to the library?
badbytes@lemmy.world 3 weeks ago
Wikipedia is becoming one of the few places where I trust the information.
SatansMaggotyCumFart@piefed.world 3 weeks ago
It’s funny that MAGA and ml tankies both think that Wikipedia is the devil.
OsrsNeedsF2P@lemmy.ml 3 weeks ago
There’s a lot of problems with Wikipedia, but in my years editing there (I’m extended-protected rank), I’ve come to terms with the fact that it’s about as good as it can be.
In all but one edit war, the better-sourced team came out on top. Source quality discussion is also quite good. There’s a problem with positive/negative tone in articles, and sometimes articles get away with bad sourcing before someone can correct it, but this is about as good as any information hub can get.
NauticalNoodle@lemmy.ml 3 weeks ago
It’s worth checking out the contribs and talk pages for articles that can be divisive. People acting with ulterior motives and inserting their own bias are fairly common. They also make regular corrections for this reason. I still place more faith and trust in Wikipedia as an info source than in most news articles.
devolution@lemmy.world 3 weeks ago
MAGA and tankies are pretty much the same except MAGA votes while tankies whine.
mistermodal@lemmy.ml 3 weeks ago
The site engages in Holocaust denial and apologia for the Wehrmacht, and directly collaborates with Western governments. Jimmy Wales is a far-right libertarian. It might be a reliable source of information for reinforcing your own worldview, but it’s not a project to create the world’s encyclopedia. Something like that would at least be less stingy about what counts as a “notable sandwich”.
Ulvain@sh.itjust.works 3 weeks ago
So very much on-script though
Socialism_Everyday@reddthat.com 3 weeks ago
Tankies don’t think Wikipedia is the devil. You could call me a tankie from my political views, and I very much appreciate Wikipedia and use it on a daily basis. That is not to say it should be used uncritically and unaware of its biases.
Because of the way Wikipedia works, it requires sourcing claims with references, which is a good thing. The problem comes when you have an overwhelming majority of available references in one topic being heavily biased in one particular direction for whatever reason.
For example, when doing research on geopolitically charged topics, you may expect an intrinsic bias in the source availability. Say you go to China and create an open encyclopedia, Wikipedia style, and make an article about the Tiananmen Square events. You may expect that, if the encyclopedia is primarily edited by Chinese users using Chinese language sources, given the bias in the availability of said sources, the article will end up portraying the bias that the sources suffer from.
This is the criticism tankies level at Wikipedia: in geopolitically charged topics, western sources are quick to unite. We saw it with the genocide in Palestine, where most media, regardless of supposed ideological allegiance, was reporting in a “both sides are bad” style at best, and pushing outright Israeli propaganda at worst.
So, the point is not to hate on Wikipedia; Wikipedia is as good as an open encyclopedia edited by random people can get. The problem is that if you don’t specifically incorporate filters to compensate for the ideological bias present in the demographic cohort of editors (white, young males from English-speaking countries) and their sources, you will end up with a similar bias in your open encyclopedia. This is why us tankies say that Wikipedia isn’t really that reliable when it comes to, e.g., the Eastern Bloc or socialist history.
username123@sh.itjust.works 3 weeks ago
That instance is fucking bananas
scala@lemmy.ml 3 weeks ago
They are scared of facts.
krypt@lemmy.world 3 weeks ago
growing up I got taught by teachers not to trust Wiki bc of misinformation. times have changed
isVeryLoud@lemmy.ca 3 weeks ago
Nope, we all misunderstood what they meant. Wikipedia is not an authoritative source, it is a derivative work. However, you can use the sources provided by the Wikipedia article and use the article itself to understand the topic.
Wikipedia isn’t and was never a primary source of information, and that is by design. You don’t declare information in encyclopedias, you inventory information.
wesker@lemmy.sdf.org 3 weeks ago
Now in some states, you can’t trust teachers not to be giving you misinformation.
buttnugget@lemmy.world 3 weeks ago
Not to trust wiki as a format? Or did you mean Wikipedia specifically?
FosterMolasses@leminal.space 3 weeks ago
How ironic that school teachers spent decades lecturing us about not trusting Wikipedia… and now, the vast majority of them seem to rely on Youtube and ChatGPT for their lesson plans. Lmao
ill_presence55@lemmy.zip 3 weeks ago
Who would’ve thought??
slaacaa@lemmy.world 3 weeks ago
One thing I don’t get: why the fuck don’t LLMs use Wikipedia as a source of info? It would help them come up with less bullshit. I experimented with some, even Perplexity, which searches the web and gives you links, but it always has shit sources like Reddit or SEO-optimized nameless news sites.
finitebanjo@lemmy.world 2 weeks ago
It’s not that AI don’t or cannot use Wikipedia; they actually do. But AI can’t reliably produce an accurate statement in general. It hallucinates so goddamn much, and that can never, ever be solved, because at the end of the day it is just arranging tokens based on a statistical approximation of things people might say. It has been argued that, even with infinite power and resources, modern LLMs can never come close to human accuracy.
That said, if an AI is blocked from using Wikipedia then that would be because the company realized Wikipedia is way more useful than their dumb chatbot.
finitebanjo@lemmy.world 2 weeks ago
Unfortunately the current head of Wikipedia is pro-AI which has contributed to this lack of trust.