You can try setting up Ollama on your RPi, then use a highly-optimized variant of the Mistral model (or quantize it yourself with GGUF/llama.cpp). You can do some very heavy quantization (2-bit), which will increase the error rate. But if you are only planning to use the generated text as a starting point, it might be useful nevertheless. Also see: github.com/ollama/ollama/blob/main/…/import.md#im…
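Once Ollama is running, its local HTTP API can drive the model from a script. A minimal sketch (this assumes Ollama's default port 11434 and that you have already pulled some quantized model; the tag "mistral" here is just a placeholder for whatever variant you installed):

```python
import json

def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request(
    "mistral", "Outline a short story about a lighthouse keeper.")
body = json.dumps(payload)

# To actually send it (requires a running Ollama daemon):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

On a Pi with a 2-bit quant, expect slow generation and a noticeable error rate, so treat the output as a rough draft.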
[deleted]
Submitted 9 months ago by ChasingEnigma@lemmy.world to nostupidquestions@lemmy.world

Comments
kby@feddit.de 9 months ago
Ziggurat@sh.itjust.works 9 months ago
Have you tried GPT4All gpt4all.io/index.html ? It runs on the CPU, so it's a bit slow, but it's a plug-and-play, easy-to-use way to run various LLMs locally. That said, LLMs are huge and perform better on a GPU, provided your GPU is big enough. Therein lies the trap: how much do you want to spend on a GPU?
kindenough@kbin.social 9 months ago
On GPU it is okay: a GTX 1080 with an R5 3700X.
It has just written a 24-page tourist info booklet about the town I live in, and a lot of what it says about places to go is inaccurate or outdated. Fun and impressive anyway, and it took only a few minutes.
PeterPoopshit@lemmy.world 9 months ago
If you get just the right gguf model (read the description when you download them to pick the right K-quant variant) and actually use multithreading (llama.cpp supports multithreading, so in theory GPT4All should too), then it's reasonably fast. I've achieved roughly half the speed of ChatGPT just on an overclocked 8-core AMD FX.
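A small sketch of how you might pick a thread count for CPU inference (the heuristic of leaving one core free is my assumption, not anything llama.cpp prescribes; the `llama_cpp` usage and model filename in the comment are illustrative only):

```python
import os

def pick_threads(reserve=1):
    """Choose a thread count for llama.cpp-style CPU inference,
    leaving `reserve` cores free for the rest of the system."""
    cores = os.cpu_count() or 1
    return max(1, cores - reserve)

# With the llama-cpp-python bindings (assumed installed;
# the model path below is hypothetical):
# from llama_cpp import Llama
# llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf",
#             n_threads=pick_threads())
```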
Thavron@lemmy.ca 9 months ago
Are you looking to make an easy buck by generating novels and self publishing them on Amazon?
dojan@lemmy.world 9 months ago
Open-source LLMs are really capable; I think the method used to feed it the plot might be the more important part of making this work.
PeterPoopshit@lemmy.world 9 months ago
This probably isn’t very helpful, but the best way I’ve found to make an AI write an entire book is still a lot of work. You have to make it write the book in sections, pay attention to the prompts, spend a lot of time copy-pasting the good sentences into a better-quality section, and then use those blocks of text to build chapters. You’re basically plagiarizing a document from AI-written documents rather than making the AI shit it out in one continuous stream.
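That workflow can be sketched as a loop: generate each section, keep only the sentences a human approves, and stitch the survivors into a chapter. Everything here is a toy illustration; `generate` and `keep` stand in for whatever LLM call and human-approval step you actually use:

```python
def assemble_chapter(section_prompts, generate, keep):
    """Generate each section, keep only approved sentences,
    and join the survivors into one chapter."""
    chapter = []
    for prompt in section_prompts:
        draft = generate(prompt)
        good = [s for s in draft.split(". ") if keep(s)]
        chapter.append(". ".join(good))
    return "\n\n".join(chapter)

# Tiny demo with a fake generator and a keyword "approval" filter:
fake = lambda prompt: "A good line. Filler text. Another good line."
result = assemble_chapter(["sec1", "sec2"], fake,
                          keep=lambda s: "good" in s)
```

In practice the `keep` step is you reading the draft, which is where most of the work remains.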
Hjalamanger@feddit.nu 9 months ago
I found this blog post where the author tries to use ChatGPT to generate a theatre script/narrative. It’s based on the paper “Co-Writing Screenplays and Theatre Scripts with Language Models: An Evaluation by Industry Professionals”. In the blog post they outline their narrative generation procedure in this chart:
Fig. 1. Dramatron’s Hierarchical Coherent Story Generation. Dramatron starts from a log line to generate a title and characters. The generated characters are used as prompts to generate a sequence of scene summaries in the plot. Descriptions are subsequently generated for each unique location. Finally, these elements are all combined to generate dialogue for each scene. The arrows in the figure indicate how generated text is used to construct prompts for further LLM text generation.
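The pipeline in Fig. 1 amounts to chaining prompts, where each stage's output becomes part of the next stage's prompt. A toy sketch (the prompt wording is made up and `generate` stands in for any LLM call; this is not Dramatron's actual code):

```python
def dramatron_style(logline, generate):
    """Hierarchical generation: logline -> title -> characters
    -> scene summaries -> per-scene dialogue."""
    title = generate(f"Write a title for this logline: {logline}")
    characters = generate(f"List characters for '{title}' ({logline})")
    scenes = generate(
        f"Write scene summaries for '{title}' featuring {characters}")
    # One dialogue pass per scene summary (one summary per line).
    dialogue = [generate(f"Write dialogue for scene: {s}")
                for s in scenes.split("\n")]
    return {"title": title, "characters": characters,
            "scenes": scenes, "dialogue": dialogue}
```

The point of the hierarchy is coherence: each call only has to stay consistent with a short summary, not with the whole script so far.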
I also found this GitHub repo with links to more resources on this topic.