DuckDuckGo's LLM cites its sources. I've seen screenshots of Google's LLM doing the same.
psud@aussie.zone 1 year ago
LLMs can’t cite. They don’t know what a citation is other than a collection of text in a specific style
You’d be lucky if the number of references equalled the number of referenced items, even if you were lucky enough to get real sources out of an LLM
If the student is clever enough to remove the trap reference, the fact that the others won’t be in the university library should be enough to sink the paper
TachyonTele@lemm.ee 1 year ago
Swedneck@discuss.tchncs.de 1 year ago
i’ve tried to get their AI thing to cite, and it does the standard AI thing of generating sensible-looking output that isn’t actually valid: it links to pages that don’t exist, though they do lead to real websites.
uis@lemm.ee 1 year ago
LLMs can’t cite. They don’t know what a citation is other than a collection of text of a specific style
LLMs can cite. It’s called Retrieval-Augmented Generation. Basically an LLM that can do Information Retrieval, which is just the academic term for search engines.
You’d be lucky if the number of references equalled the number of referenced items even if you were lucky enough to get real sources out of an LLM
Well, you can just print the retrieval logs as references. Though that’s kinda stretching the definition of “just”.
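To make the point concrete, here’s a toy sketch of the “retrieval log as references” idea: retrieve documents first, then emit the retrieval log as the reference list, so every citation corresponds to a document that actually exists in the corpus. The corpus, scoring, and function names are made-up stand-ins, not any real RAG system, and the LLM step is stubbed out entirely.

```python
# Toy sketch only: naive word-overlap retrieval standing in for a real
# search engine; a real RAG system would pass the hits to an LLM as context.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query (toy scoring)."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d["text"].lower().split())))
    return scored[:k]

def answer_with_references(query, corpus):
    hits = retrieve(query, corpus)
    # The references are just the retrieval log, so each one is a real document.
    body = " ".join(f"[{i + 1}]" for i in range(len(hits)))  # LLM answer stubbed out
    refs = [f"[{i + 1}] {d['title']}" for i, d in enumerate(hits)]
    return body, refs

# Hypothetical corpus for illustration
corpus = [
    {"title": "WHO COVID-19 fact sheet", "text": "covid19 mortality rate age"},
    {"title": "Gardening tips", "text": "tomato soil watering"},
    {"title": "CDC vaccination report", "text": "covid19 vaccination mortality"},
]
body, refs = answer_with_references("how deadly is covid19", corpus)
```

The difference from a plain LLM is that the reference list can’t name anything outside the corpus; whether the answer text accurately reflects those documents is a separate problem, as the comments below note.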
notthebees@reddthat.com 1 year ago
My question is whether the thing they are citing actually exists and, if it does exist, whether it contains the information it claims.
uis@lemm.ee 1 year ago
the thing they are citing actually exists
In the case of RAG, it exists in the search engine’s database.
and if it does exist, contains the information it claims.
Not guaranteed.
FutileRecipe@lemmy.world 1 year ago
Depends. In my experience, it usually does exist. Now there are hallucinations where GPT makes stuff up or just misinterprets what it read. But it’s super easy to read the GPT output, look at the cited works, skim them for relevance, then tweak the wording and citations to match.
If you just copy/paste and take GPT’s word for it without the minimal amount of checking, you’re digging your own grave.
auzy@lemmy.world 1 year ago
They can. There was that court case where the cases cited were made up by ChatGPT. Upon investigation it was discovered they were all fabricated, and the lawyer got into deep crap.
interdimensionalmeme@lemmy.ml 1 year ago
Let’s give that a try, this is classic model, no retrieval
Chatgpt how deadly is covid19, two paragraph answer, include citations
COVID-19 varies in severity, with mortality rates heavily influenced by factors such as age, underlying health conditions, and access to healthcare. The World Health Organization (WHO) estimates a global case fatality rate (CFR) of around 1-3%, though this can be much higher among older adults and immunocompromised individuals (WHO, 2023). In comparison to seasonal influenza, COVID-19 has a higher mortality rate, particularly during the initial outbreaks before widespread vaccination. Long-term complications, often referred to as “long COVID,” further increase the disease’s burden beyond acute infection (NIH, 2022).
Vaccination efforts, antiviral treatments, and improved healthcare protocols have reduced mortality rates over time, especially in developed countries (CDC, 2023). However, variants of concern, such as Delta and Omicron, have shown differing degrees of transmissibility and immune escape, complicating pandemic management. Even with lowered death rates in certain populations, COVID-19 remains a significant public health threat, particularly in areas with low vaccination coverage or overstretched healthcare systems (Johns Hopkins University, 2023).
References: