Comment on How The New York Times is using generative AI as a reporting tool
ohwhatfollyisman@lemmy.world 2 weeks ago
In general, the report found that the AI summaries showed “a limited ability to analyze and summarize complex content requiring a deep understanding of context, subtle nuances, or implicit meaning.” Even worse, the Llama summaries often “generated text that was grammatically correct, but on occasion factually inaccurate.”
how is this being accepted? one would have to go through any output with a fine-toothed comb anyway to weed out ai hallucinations, as well as to preserve nuance and context.
it’s like the ai tells you that mona lisa has three eyes and a nose and her mouth is closed but her denim jacket is open. you’re going to report that in your story without ever looking at the painting?
Grimy@lemmy.world 2 weeks ago
It’s literally the paragraph right after.
They verify it.
umami_wasbi@lemmy.ml 2 weeks ago
Won’t the checking cost more time than just writing it themselves?
asap@lemmy.world 2 weeks ago
It’s harder to create new content than to correct existing content.
Grimy@lemmy.world 2 weeks ago
It’s 400 hours of audio; the transcripts ended up being 5 million words, and only snippets of them are useful.