Pro Tip: Never quote ChatGPT. Use it to find the real source of info and then quote that.
[deleted]
Submitted 1 month ago by flightyhobler@lemmy.world to selfhosted@lemmy.world
Comments
oshu@lemmy.world 1 month ago
ObsidianZed@lemmy.world 1 month ago
Phind.com is a great alternative that provides sources.
N0x0n@lemmy.ml 1 month ago
I was quite impressed by how it looks, and by the free option! However, seeing Google Tag Manager and TikTok analytics domains, I’m already out!
30p87@feddit.org 1 month ago
Never use ChatGPT anyway. There are better (for privacy) alternatives.
hellothere@sh.itjust.works 1 month ago
Yes, definitely.
Clearwater@lemmy.world 1 month ago
Seagate’s error rate values (IDs 1, 7, and 195) are busted. Not in the sense that they’re wrong, but in that they’re misleading to people who don’t know exactly how to read them.
ALL of those are actually reporting zero errors. This calculator can confirm it for you: s.i.wtf
It is likely that GSmartControl is simply reading either the normalized or raw values, seeing a non-100/non-0 value respectively, and reporting that as an error.
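For anyone who wants to check the numbers themselves, here is a minimal sketch of what a calculator like s.i.wtf does, assuming the commonly cited Seagate encoding (the upper 16 bits of the 48-bit raw value are the actual error count, the lower 32 bits are the operation count) — treat the encoding as an assumption, not gospel:

```python
# Decode a Seagate-style 48-bit SMART raw value (attribute IDs 1, 7, 195).
# Assumption: upper 16 bits = error count, lower 32 bits = operation count.
def decode_seagate_raw(raw: int) -> tuple[int, int]:
    errors = raw >> 32            # top 16 bits of the 48-bit value
    operations = raw & 0xFFFFFFFF # bottom 32 bits
    return errors, operations

# Example: a scary-looking raw value that actually reports zero errors.
raw_value = 79802374  # placeholder; real values come from smartctl/GSmartControl
errors, ops = decode_seagate_raw(raw_value)
print(f"errors={errors}, operations={ops}")
```

Any raw value below 2^32 decodes to zero errors, which is why the huge numbers in those three attributes usually mean nothing by themselves.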
flightyhobler@lemmy.world 1 month ago
Already packed. I am also having trouble copying files due to issues with the file name (6 characters, nothing fancy). And it makes a ticking sound once in a while. I’m done with it.
rumba@lemmy.zip 1 month ago
Inside the nominal return period for a device, absolutely.
If it’s a warranty repair, I’ll wait for an actual trend, maybe run a burn-in on it and force its hand.
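If it helps, a minimal sketch of scripting that kind of burn-in check with smartctl (assuming smartmontools is installed and /dev/sdX is the suspect drive — both are placeholders here):

```python
import subprocess

DEVICE = "/dev/sdX"  # hypothetical device path; point it at the drive under test

# Kick off an extended (long) SMART self-test; smartctl returns immediately
# and the drive runs the test in the background.
subprocess.run(["smartctl", "-t", "long", DEVICE])

# Later, check the self-test log to see whether it completed or logged errors.
result = subprocess.run(["smartctl", "-l", "selftest", DEVICE],
                        capture_output=True, text=True)
print(result.stdout)
```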
Appoxo@lemmy.dbzer0.com 1 month ago
Within return/RMA window: Yes? Why not?
SnotFlickerman@lemmy.blahaj.zone 1 month ago
catloaf@lemm.ee 1 month ago
SMART data can be hard to read. But it doesn’t look like any of the normalized values are approaching the failure thresholds. It doesn’t show any bad sectors. But it does show read errors.
I would check the cable first, make sure it’s securely connected. You said it clicks sometimes, but that could be normal. Check the kernel log/dmesg for errors. Keep an eye on the SMART values to see if they’re trending towards the failure thresholds.
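A minimal sketch of the “keep an eye on the SMART values” part, assuming smartctl 7+ (which can emit JSON with -j); the margin of 10 below is an arbitrary choice, not a standard:

```python
import json
import subprocess

DEVICE = "/dev/sdX"  # hypothetical device path; adjust to the actual drive

# -j asks for JSON output, -A prints the attribute table.
# smartctl uses its exit code as a status bitmask, so a non-zero exit
# isn't necessarily a hard failure here.
out = subprocess.run(["smartctl", "-j", "-A", DEVICE],
                     capture_output=True, text=True)
data = json.loads(out.stdout)

# Flag any attribute whose normalized value is at or near its failure threshold.
for attr in data.get("ata_smart_attributes", {}).get("table", []):
    value, thresh = attr["value"], attr["thresh"]
    if value <= thresh + 10:  # arbitrary margin, tune to taste
        print(f"{attr['id']:>3} {attr['name']}: value={value} thresh={thresh}")
```

Run that on a schedule (or just eyeball smartctl -A now and then) and you’ll see whether the normalized values are actually trending toward their thresholds.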
sxan@midwest.social 1 month ago
Seriously, do not use LLMs as a source of authority. They are stochastic machines predicting the next character they type; if what they say is true, it’s pure chance.
Use them to draft outlines. Use them to summarize meeting notes (and review the summaries). But do not trust them to give you reliable information. You may as well go to a party, find the person who’s taken the most acid, and ask them for an answer.
flightyhobler@lemmy.world 1 month ago
yeah, that’s why I’m here, dude.
BaroqueInMind@lemmy.one 1 month ago
So then, if you knew this, why did you bother to ask it first?
bizzle@lemmy.world 1 month ago
Acid freaks are probably more reliable than ChatGPT.
sxan@midwest.social 1 month ago
You’ll certainly gain some valuable insight, even if it has nothing to do with your question. Which is more than I can say for LLMs.
SnotFlickerman@lemmy.blahaj.zone 1 month ago
I’d say those SMART attributes look pretty bad… Don’t need an LLM for that…
slazer2au@lemmy.world 1 month ago
I call them regurgitation machines prone to hallucinations.
sxan@midwest.social 1 month ago
That is a perfect description.
roofuskit@lemmy.world 1 month ago
Just a reminder that LLMs can only truncate text; they are incapable of summarization.
dream_weasel@sh.itjust.works 1 month ago
First sentence of each paragraph: correct.
Basically all the rest is bunk, besides the fact that you can’t count on always getting reliable information. Right answers (especially for something that is technical but non-verifiable), wrong reasons.
There are “stochastic language models,” I suppose (e.g., tap the middle suggestion on your phone after typing the first word to create a message), but something like ChatGPT, Perplexity, or DeepSeek is not that, beyond using tokenization / word2vec-like setups to make human-readable text. These are a lot more like “don’t trust everything you read on Wikipedia” than a randomized acid-drop response.