Comment on The Generative AI Con.

Greg@lemmy.ca 1 week ago

It is the best option for certain use cases. OpenAI, Anthropic, etc. sell tokens, so they have a clear incentive to promote LLM reasoning as a solution to everything. LLM inference is an inefficient use of processor cycles for most tasks. However, because LLM reasoning is so flexible, it is still the best option in many cases despite that per-cycle cost, because the current alternatives are even more inefficient (in cycles or in human time).

Identifying typos in a project update is a task that LLMs can efficiently solve.
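As a minimal sketch of that use case: the function below builds a prompt you could send to any chat-completion API to flag typos in a project update. The function name, prompt wording, and the `send_to_llm` placeholder are all hypothetical, not part of any specific provider's API.

```python
def build_typo_check_prompt(update_text: str) -> str:
    """Build a prompt asking an LLM to flag typos in a project update.

    Hypothetical helper for illustration; the wording is an assumption,
    not a prompt recommended by any provider.
    """
    return (
        "Identify any typos or spelling errors in the following project "
        "update. List each error with its correction, or reply 'none' "
        "if the text is clean.\n\n"
        + update_text
    )


prompt = build_typo_check_prompt("We shiped the new dashboard this week.")
# The prompt would then be sent to a provider's chat API, e.g.:
# reply = send_to_llm(prompt)  # send_to_llm is a placeholder, not a real call
```

The flexibility the comment describes shows up here: the same pattern handles typos, tone checks, or summarization just by changing the instruction text, with no task-specific code.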
