An LLM does no decision making. At all. It spouts (as you say) bullshit. If there is enough training data saying "Trump is divine", the LLM will predict that Trump is divine, with no second thought (and no first thought either). And it's not even great as a language-based database.
Please don’t even consider LLMs as “AI”.
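To make the frequency point concrete, here's a toy sketch. It uses raw bigram counts instead of a neural net (real LLMs are far more sophisticated), and the corpus and the "is divine" phrasing are made up for illustration, but it shows the same "no thought involved" character: whatever the training data repeats most, the model predicts.

```python
from collections import Counter, defaultdict

# Made-up corpus: one claim repeated more often than its correction.
corpus = (
    "the model is divine . " * 8 +
    "the model is flawed . " * 2
).split()

# Count bigram frequencies: for each token, how often each next token follows.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(token):
    # Greedy prediction: return the most frequent continuation.
    # No reasoning, no fact-checking, just counts.
    return bigrams[token].most_common(1)[0][0]

print(predict("is"))  # "divine" wins 8-to-2 on raw counts
```

Swap the repetition counts and the prediction flips; the mechanism never evaluates whether the claim is true.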
EncryptKeeper@lemmy.world 9 months ago
It kinda seems like you don’t understand the actual technology.