Chulk
@Chulk@lemmy.ml
- Comment on ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit 2 days ago:
my experience was that Wikipedia was specifically called out as being especially unreliable and that’s just nonsense.
Let me clarify, then. It’s unreliable as a cited source in academia. I’m drawing a parallel and criticizing the way people use ChatGPT, i.e. taking it at face value with zero caution and treating it as if it were a primary source of information.
Eesh. The value of a tertiary source is that it cites the secondary sources (which cite the primary). If you strip that out, how’s it different from “some guy told me…”? I think your professors did a bad job of teaching you about how to read sources. Maybe because they didn’t know themselves. :-(
Did you read beyond the sentence that you quoted?
Here:
I can get summarized information about new languages and frameworks really quickly, and then I can dive into the official documentation when I have a high-level understanding of the topic at hand.
Example: you’re a junior developer trying to figure out what this JavaScript syntax is:
const {x} = response?.data
It’s difficult to figure out what destructuring and optional chaining are without knowing what they’re called. With ChatGPT, you can copy and paste that code and ask, “tell me what every piece of syntax is in this line of JavaScript.” Then you can check the official docs to learn more.
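A minimal sketch of what that line is doing (the response object below is made up for illustration):
// Optional chaining: response?.data evaluates to undefined instead of throwing
// if response is null or undefined.
// Destructuring: const { x } = obj copies the property named x out of obj.
const response = { data: { x: 42, y: 7 } }; // hypothetical API response
const { x } = response?.data;               // x === 42

// Roughly equivalent to writing it out by hand:
const data = (response === null || response === undefined) ? undefined : response.data;
const xByHand = data.x;                     // also 42
Once you know the names (“destructuring” and “optional chaining”), the official MDN docs fill in the rest.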
- Comment on ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit 2 days ago:
I think the academic advice about Wikipedia was sadly mistaken.
Yeah, a lot of people had your perspective about Wikipedia while I was in college, but they are wrong, according to Wikipedia.
From the link:
We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn’t be citing Wikipedia. This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.
I personally use ChatGPT like I would Wikipedia. It’s a great introduction to a subject, especially in my line of work, which is software development. I can get summarized information about new languages and frameworks really quickly, and then I can dive into the official documentation when I have a high-level understanding of the topic at hand. Unfortunately, most people do not use LLMs this way.
- Comment on ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit 2 days ago:
You shouldn’t cite Wikipedia because it is not a source of information, it is a summary of other sources which are referenced.
Right, and if an LLM is citing Wikipedia 47.9% of the time, that means it’s summarizing Wikipedia’s summary.
You shouldn’t cite Wikipedia for the same reason you shouldn’t cite a library’s book report, you should read and cite the book itself.
Exactly my point.
- Comment on ChatGPT Mostly Source Wikipedia; Google AI Overviews Mostly Source Reddit 3 days ago:
Throughout most of my years of higher education, as well as K-12, I was told that citing Wikipedia was forbidden. In fact, many professors and teachers would automatically fail an assignment if they felt you were using Wikipedia. The claim was that the information was often inaccurate or changed too frequently to be reliable. This reasoning, while irritating at times, always made sense to me.
Fast forward to my professional life today. I’ve been told on a number of occasions that I should trust LLMs to give me an accurate answer. I’m told that I will “be left behind” if I don’t use ChatGPT to accomplish things faster. I’m told that my concerns about the accuracy and ethics of generative AI are simply “negativity.”
These tools are (abstractly) referencing random users on the internet as well as Wikipedia and treating them both as legitimate sources of information. That seems crazy to me. How can we trust a technology that just references flawed sources from our past? I know there are ways to improve accuracy with things like RAG, but most people are hitting the LLM directly.
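For what it’s worth, the RAG idea is roughly “retrieve relevant sources first, then let the model answer from them.” A minimal sketch, assuming hypothetical embed(), vectorSearch(), and callLlm() helpers (none of these are a real library’s API):
// 1. Embed the question, 2. retrieve the most relevant passages,
// 3. ask the LLM to answer using only those passages.
async function answerWithRag(question, documentStore) {
  const queryVector = await embed(question);                          // hypothetical embedding call
  const passages = await vectorSearch(documentStore, queryVector, 5); // hypothetical top-5 retrieval
  const prompt = "Answer using only these sources:\n" + passages.join("\n") + "\n\nQuestion: " + question;
  return callLlm(prompt);                                             // hypothetical LLM call
}
Hitting the LLM directly skips the retrieval step, so the answer comes only from whatever the model absorbed during training.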
The culture around generative AI should be scientific and cautious, but instead it feels like a cult with a good marketing team.
- Comment on We Should Immediately Nationalize SpaceX and Starlink 6 days ago:
Don’t threaten me with a good time!
- Comment on A Judge Accepted AI Video Testimony From a Dead Man 5 weeks ago:
If anyone ever did this with my likeness after death, even with good intentions, I would haunt the fuck out of them.
- Comment on Why I don't use AI in 2025 5 weeks ago:
marketing hype is pushing anything with AI in the name, but it will all settle out eventually
Agreed. “Use it or be left behind” itself sounds like a phrase straight out of a marketing pitch from every single AI-centric company pushing their “revolutionary” product. It’s a phrase I hear daily from C-suite executives who know very little about what they’re talking about. AI (specifically generative AI) has its use cases, but it’s nowhere near where the marketing says it is. And when it finally does get there, I think people are going to be surprised when they don’t find themselves in the utopia they’ve been promised.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 5 weeks ago:
If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.
These two groups are not mutually exclusive.
- Comment on ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw 1 month ago:
And to what end? Just to have a misinformed populace over literally every subject!
This is a feature, not a bug. We’re entering a new dark age, and generative AI is the tool that will usher it in. The only “problem” generative AI is efficiently solving is a populace with too much access to direct and accurate information. We’re watching as perfectly functional tools and services are rapidly replaced by something with inherent issues around reliability, ethics, and accountability.
- Comment on [deleted] 2 months ago:
Anyone who opposes mass surveillance should.
- Comment on Microsoft is killing OneNote for Windows 10 2 months ago:
I personally use Obsidian, but I know that others have suggested Logseq. Might be useful to have in your table. Also, Obsidian does have an Android app.
- Comment on Lenovo joins growing China exodus as manufacturers flee US tariffs — OEM moving production lines to India 2 months ago:
You seem to be operating under the assumption that Trump isn’t an overconfident idiot with terrible ideas. Unfortunately for your narrative, he’s straight-up said that he wants to annex these places. So, Trump seriously believes that the USA can colonize Canada.
- Comment on [deleted] 3 months ago:
My vote is:
- Button layouts that have worked for 20-30 years
- Heads-up displays for readouts of current values: mph/km/h is displayed by default, and the display temporarily changes when something like volume, heat, radio station, track, etc. is adjusted
Best of both worlds
- Comment on Digital Fingerprinting: Google launched a new era of tracking worse than cookie banners | Tuta 3 months ago:
I’m still trying to wrap my head around fingerprinting, so excuse my ignorance. Doesn’t an installed plugin such as Canvas Blocker make you more uniquely identifiable? My reasoning is that, relatively speaking, very few people have this plugin.
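From what I’ve gathered so far, a canvas fingerprinting script does something roughly like this (a minimal sketch; the specific drawing calls are arbitrary examples):
// Render some text and shapes; tiny differences in fonts, antialiasing, and
// GPU rendering make the resulting pixels differ slightly between machines.
function canvasFingerprint() {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  ctx.textBaseline = "top";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(125, 1, 62, 20);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test", 2, 15);
  return canvas.toDataURL(); // the rendered pixels become (part of) the fingerprint
}
A blocker like Canvas Blocker typically fakes or adds noise to that output, so the value changes between reads instead of staying stable.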
- Comment on AMD rakes in cash with best quarterly revenue ever amid datacenter business rise, but gaming business craters 7 months ago:
I believe it’s even more bleak than that. My theory/prediction:
Once these companies manage to make game streaming a reality, my guess is that they will scale back their consumer GPU divisions without hesitation. The goal is for us to ultimately own nothing. Software is already leased to us (you don’t technically own the games in your Steam or Epic library). The endgame is for hardware to be that way as well. Until then, we’re going to see most people priced out of consumer hardware.
If game streaming services become a reality (I’m talking about a situation where latency and data transfer are less of an issue), they will be positioned as a revolution in entertainment that delivers high-end gaming performance to the masses. As the technology matures, we will see multiple services take hold. It will be like Netflix/Hulu/Prime/Peacock/etc., but with Blizzard/Steam/Epic/Ubisoft/etc. Essentially, we will have to pay the equivalent of a new PC/console price tag every year to rent hardware.
Ironically, what holds this back in today’s world is the greed and shitty infrastructure that’s offered by US ISPs.