ChatGPT can also search the internet
Comment on AP Shares Guidelines Prohibiting Staff From Using AI to Write Publishable Content
JackGreenEarth@lemm.ee 1 year ago
ChatGPT is unreliable, but AIs that can search the internet can be just as reliable and trustworthy as human authors. Of course, Bing Chat is not FOSS, so I don’t fully support it, but it is very good at writing accurate articles.
excel@lemmy.megumin.org 1 year ago
ChatGPT can also search the internet
sheogorath@lemmy.world 1 year ago
Didn’t they turn off that feature? Or has it been turned back on now?
excel@lemmy.megumin.org 1 year ago
I don’t think it was ever turned off; it just requires a subscription.
Khalic@kbin.social 1 year ago
"just as trustworthy as human authors" - Ok so you have no idea how these chatbots work do you?
wmassingham@lemmy.world 1 year ago
You have a lot of faith in human authors.
Khalic@kbin.social 1 year ago
Oh I do not, but the choice is: a human who might understand what’s happening vs. a probabilistic model that is unable to understand ANYTHING
bioemerl@kbin.social 1 year ago
You're the one who doesn't understand how these things work.
monkic@kbin.social 1 year ago
LLM AI bases its responses on aggregated texts written by ... human authors, just without any sense of context or logic or understanding of the actual words being put together.
JackGreenEarth@lemm.ee 1 year ago
I understand they are just fancy text prediction algorithms, which is probably just as much as you do (if you are a machine learning expert, I do apologise). Still, the good ones that get their data from the internet rarely make mistakes.
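For what "fancy text prediction" means in the simplest possible case, here's a toy sketch: a bigram model that just picks the word it has most often seen following the current one. Real LLMs use neural networks over subword tokens and far more context, but the underlying task, predicting the next token, is the same. All names and the toy corpus here are illustrative, not from any real system.

```python
# Toy "text prediction": count which word follows which, then
# predict the most frequent follower. A crude stand-in for what
# LLMs do at vastly greater scale.
from collections import Counter, defaultdict


def train_bigram(corpus: str) -> dict:
    """Count, for every word, which words follow it in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows


def predict_next(model: dict, word: str) -> str:
    """Return the most common word seen after `word` ('' if unseen)."""
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else ""


model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # → "cat" (follows "the" twice, vs "mat" once)
```

The point of the toy: the model has no idea what a cat or a mat *is*; it only knows which strings tended to follow which. Scaling that idea up is what makes the "no semantics" criticism in this thread, and the counter-argument that prediction alone gets surprisingly far, both worth taking seriously.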
Khalic@kbin.social 1 year ago
I'm not an ML expert, but we've been using them for a while in neuroscience (I'm a software dev in bioinformatics). They are impressive, but have no semantics, no logic. It's just a fancy mirror. That's why, for example, World of Warcraft players have been able to trick these bots into writing an article about a feature that doesn't exist.
Do you really want to waste your time reading a blob of data with no coherence?
whataboutshutup@discuss.online 1 year ago
We are both on the internet, lol. And I mean it. LLMs are only slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Before you've learned how/where to search for direct and correct answers, it would be just the same or maybe worse. I find that skill a bit fascinating: we learn to read patterns and red flags without even opening a page. I doubt it's possible to make a reliable model with that bullshit detector.