The AI that Nextcloud is offering uses OpenAI: you sign up, get an API key, and add it. Your AI requests go to the cloud. (And I couldn't get it to work: constant "too many requests" errors or a straight "failed".)
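For what it's worth, "too many requests" is the API returning HTTP 429 (rate limiting), and clients are expected to back off and retry. A minimal sketch of that retry pattern (the `RateLimitError` class here is just a stand-in for whatever exception your client library raises, not Nextcloud's or OpenAI's actual type):

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 'too many requests' error."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff when rate-limited."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            # Wait 1s, 2s, 4s, ... before trying again.
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("still rate-limited after retries")
```

If the integration doesn't do something like this internally, hammering the endpoint will just keep producing 429s.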
The other option is the addon "local llm": you download a cut-down LLM like Llama 2 or Falcon and it runs locally. I did get those all installed, but it didn't work for general prompts.
Nextcloud will probably fix things over time, and the developer who made the local LLM plugin will too, but right now this isn't very useful to self-hosters.
Dave@lemmy.nz, 11 months ago
The blog post states:
So it sounds like you pick what works for you. I'd guess on a Raspberry Pi, on-board processing would be both slow and poor quality, but I'll probably give it a go anyway.
pixxelkick@lemmy.world, 11 months ago
Yeah, sorry, I was specifically referring to the on-prem LLM if that wasn't clear, and how much juice running that thing takes.
Dave@lemmy.nz, 11 months ago
Some of the other Nextcloud stuff (like the chat features) isn't suitable for a Raspberry Pi, and I expect this will be the same. It's released though, right? Might have to have a play.
EatYouWell@lemmy.world, 11 months ago
You'd be surprised at how little computing power it can take, depending on the LLM.
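A back-of-envelope way to see why: the RAM needed just to hold the weights is roughly parameter count times bits per weight. This sketch ignores runtime overhead like the KV cache, so real usage runs somewhat higher:

```python
def model_memory_gb(n_params_billion, bits_per_weight):
    """Rough RAM for model weights alone: params x bits / 8, in GB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model quantized to 4 bits needs about 3.5 GB for weights,
# which is why small quantized models can fit on modest hardware.
print(model_memory_gb(7, 4))   # 3.5
print(model_memory_gb(13, 8))  # 13.0
```

That's the difference between needing a GPU server and squeezing onto an 8 GB single-board computer, albeit slowly.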