Comment on Does anyone else have experience with koboldcpp? How do I make it give me longer outputs?

fhein@lemmy.world ⁨4⁩ ⁨months⁩ ago

llama.cpp only uses the GPU if you compile it with GPU support and then tell it to use the GPU…
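For example, a rough sketch of what that looks like with llama.cpp's CUDA backend (the model path is a placeholder; layer count and token limit are just illustrative values):

```shell
# Build llama.cpp with the CUDA backend enabled
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Run with -ngl to offload layers to the GPU, and -n to raise
# the maximum number of tokens generated (longer outputs)
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -n 512 -p "Hello"
```

If `-ngl` is left at its default of 0, everything runs on the CPU even in a GPU-enabled build, which is a common reason people think GPU support "isn't working".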

Never used koboldcpp, so I don’t know why it would give you shorter responses if both the model and the prompt are the same (also assuming you’ve generated multiple times, and it’s always the same). If you don’t want to use Discord to visit the official koboldcpp server, you might get more answers from a more LLM-focused community such as !localllama@sh.itjust.works
