The AI that Nextcloud is offering uses OpenAI: you sign up, get an API key, and add it, so your AI requests go to the cloud. (And I couldn't get it to work — constant "too many requests" errors or a straight "failed".)
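For what it's worth, "too many requests" is the HTTP 429 rate-limit response, and the usual client-side fix is to retry with exponential backoff. A minimal sketch of that pattern (the names, retry counts, and fake endpoint here are my own illustration, not Nextcloud's actual client code):

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 'too many requests' response."""

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, doubling the wait after each rate-limit error."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            # Wait 1s, 2s, 4s, ... before retrying.
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"still rate-limited after {max_retries} retries")

# Fake endpoint that rejects the first two calls, like a busy API:
attempts = {"n": 0}
def fake_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

print(call_with_backoff(fake_request, base_delay=0.01))  # prints "ok" on the 3rd try
```

If the errors persist even with backoff, it's usually an account-level quota problem rather than transient load.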
The other option is the "local LLM" addon: you download a cut-down LLM like Llama 2 or Falcon and it runs locally. I did get those all installed, but it didn't work for general prompts.
Nextcloud will probably fix things over time, and so will the developer who made the local LLM plugin, but right now this isn't very useful to self-hosters.
Dave@lemmy.nz 1 year ago
The blog post states:
So it sounds like you pick what works for you. I'd guess that on a Raspberry Pi, on-board processing would be both slow and poor quality, but I'll probably give it a go anyway.
pixxelkick@lemmy.world 1 year ago
Yeah, sorry, I was specifically referring to the on-prem LLM, if that wasn't clear, and how much juice running that thing takes.
Dave@lemmy.nz 1 year ago
Some of the other Nextcloud stuff (like that chat stuff) isn’t suitable on Raspberry Pi, I expect this will be the same. It’s released though, right? Might have to have a play.
EatYouWell@lemmy.world 1 year ago
You’d be surprised at how little computing power it can take, depending on the LLM.
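The back-of-envelope math backs this up: quantized weights shrink the memory footprint fast. A rough sketch (function name is mine; this counts only the weights and ignores runtime overhead like the KV cache):

```python
def model_memory_gb(n_params_billion, bits_per_weight):
    """Approximate RAM (GB) needed just to hold a model's weights."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B-parameter model at 16-bit weights needs ~14 GB,
# but the same model quantized to 4 bits fits in ~3.5 GB:
print(model_memory_gb(7, 16))  # prints 14.0
print(model_memory_gb(7, 4))   # prints 3.5
```

That's why 4-bit quantized 7B models can run on an ordinary desktop, and smaller models can squeeze onto single-board computers, just slowly.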