finitebanjo@lemmy.world 2 days ago

LLMs will never reach human accuracy; OpenAI and DeepMind showed as much in 2022 research papers that have never been refuted. Their output has a lot of obvious tells, and they are also largely incompetent due to their lack of reasoning skills and memory, requiring updated training sets in order to "learn" from past mistakes. They also become less capable when overconstrained, as would be necessary to make them useful for a specific task.

The reasons the Pentagon and defence contractors like Microsoft want it to seem like we have this capability are 1) we want our enemies to think we have this capability, and 2) they need to justify the exorbitant expense of trying to make these capabilities real.

But 1 in 100 bots getting past automated detection is not a reason not to use it.
