Comment on Is possible to run a LLM on a mini-pc like the GMKtec K8 and K9?

al4s@feddit.de ⁨4⁩ ⁨months⁩ ago

LLMs work by always predicting the most likely next token, and LLM detection works by checking how often the most likely next token was chosen. You can tell the LLM to pick less likely tokens more often (turn up the temperature parameter), but then you will only get gibberish out. So no, there is not.
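As a rough sketch of what the temperature parameter does (the function name and logit values here are just for illustration): sampling divides the model's logits by the temperature before the softmax, so a high temperature flattens the distribution and unlikely tokens get picked more often, while a low temperature makes it nearly greedy.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from logits after temperature scaling."""
    # Divide logits by temperature: T > 1 flattens the distribution
    # (less likely tokens chosen more often), T < 1 sharpens it.
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At a very low temperature this almost always returns the top token (the easy-to-detect pattern), and at a high temperature the choices spread out toward gibberish.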
