Captain_Stupid
@Captain_Stupid@lemmy.world
- Comment on hmmm 6 days ago:
But I thought the jar always breaks…
- Comment on Can I self host a VPN that sneakies through the China firewall? 6 days ago:
Social Credit --;
- Comment on The therapy I can afford 1 week ago:
It’s alright, my second point was more something to keep in mind than an actual argument against using AI for therapy.
- Comment on The therapy I can afford 1 week ago:
That is not what I mean. I was talking about Sam Altman using your trauma as training data.
- Comment on The therapy I can afford 1 week ago:
The smallest models that I run on my PC take about 6-8 GB of VRAM and would be very slow if I ran them purely on my CPU. So it is unlikely that your phone has enough RAM and enough cores to run a decent LLM smoothly.
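As a rough sanity check on that 6-8 GB figure, here is a back-of-the-envelope VRAM estimate. The formula and the ~20% overhead factor are my own rule-of-thumb assumptions (covering KV cache and activations), not anything official:

```python
# Rough rule of thumb: weights take params * bits/8 bytes,
# plus ~20% overhead for KV cache and activations (assumed).
def model_vram_gb(params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    return params_billion * bits_per_weight / 8 * overhead

print(round(model_vram_gb(8, 4), 1))  # 8B model, 4-bit quant -> 4.8
print(round(model_vram_gb(8, 8), 1))  # 8B model, 8-bit quant -> 9.6
```

So a 4-bit quantized 8B model fits comfortably in 8 GB of VRAM, while an 8-bit one already gets tight.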
If you still want to use self-hosted AI on your phone:
- Install Ollama and OpenWebUI in a Docker container (guides can be found on the internet)
- Make sure they use your GPU (some AMD cards require an HSA override flag to work)
- Make sure the Docker container is secure (blocking the port for communication outside of your network should work fine as long as you only use the AI model at home)
- Get yourself an open-weight model (I recommend Llama 3.1 for 8 GB of VRAM and Phi-4 if you have more, or enough RAM)
- Type the IP address and port into the browser on your phone.
You can now use self-hosted AI from your phone.
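The steps above can be sketched as one docker-compose file. Treat this as a sketch, not a drop-in config: the LAN IP in the port binding, the commented-out AMD settings, and the exact HSA override value are assumptions you have to adapt to your own setup:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    # AMD GPU example (uncomment and adapt; the override value depends on your card):
    # devices:
    #   - /dev/kfd
    #   - /dev/dri
    # environment:
    #   - HSA_OVERRIDE_GFX_VERSION=10.3.0

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      # Bind to your PC's LAN address (example IP) so your phone can reach it
      # but it is not exposed beyond your network; firewall rules help too.
      - "192.168.1.10:3000:8080"
    depends_on:
      - ollama

volumes:
  ollama:
```

With that running, `http://192.168.1.10:3000` (or whatever IP/port you picked) in your phone's browser gets you to the OpenWebUI login.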
- Comment on The therapy I can afford 1 week ago:
If you use AI for therapy, at least self-host, and keep in mind that its goal is not to help you but to have a conversation that satisfies you. You are basically talking to a yes-man.
Ollama with OpenWebUI is relatively easy to install; you can even use something like edge-tts to give it a voice.
- Comment on NSA is the only government instution that actually listens to you 5 weeks ago:
Dang Glowies