Sam Altman says ChatGPT should be ‘much less lazy now’::ChatGPT users previously complained that the chatbot was slacking off and refusing to complete some tasks.
PSA: give open-source LLMs a try, folks. If you’re on Linux or macOS, ollama makes it incredibly easy to run most of the popular open-source LLMs, like Mistral 7B, Mixtral 8x7B, CodeLlama, etc. It’s obviously faster if you have a CUDA- or ROCm-capable GPU, but it also works in CPU mode (albeit slowly if the model is large) provided you have enough RAM.
You can combine that with a UI like ollama-webui or a text-based UI like oterm.
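For anyone curious, a minimal session looks something like this (a sketch, not a full guide; model names are examples, and the download sizes are rough figures for the default quantized builds):

```shell
# Install ollama on Linux (macOS users can grab the app from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with Mistral 7B (roughly a 4 GB download on first run)
ollama run mistral

# Bigger models need much more RAM, e.g. Mixtral 8x7B wants tens of GB
ollama pull mixtral

# See which models you have locally
ollama list

# ollama also serves a local HTTP API on port 11434, which UIs build on
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

That local API on port 11434 is what frontends like ollama-webui and oterm talk to under the hood.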
otp@sh.itjust.works 10 months ago
Man this thing really IS just like a human!
/joke
fidodo@lemmy.world 10 months ago
It’s trained on text produced by humans, so yes, it reproduces the patterns of text humans wrote, and therefore it acts like a human.
General_Effort@lemmy.world 10 months ago
It’s still weird. That reasoning implies there is a correlation between promising money and long answers in the training data. Seems plausible at first blush, but where can this actually be seen? It’s hardly ever seen on social media, where similar Q&A formats exist. It’s certainly not in textbooks, where the really good answers are. OTOH, tips are promised in a lot of completely different contexts.
I’m not saying it’s wrong, but there is definitely a lot of cargo cult in prompting strategies.