Comment on Self-Hosted AI is pretty darn cool
HumanPerson@sh.itjust.works 2 months ago
Yeah, I like it too. My only issue is Ollama’s lack of Intel support. I have been watching issue 1590 on their GitHub. For now I have a 1050 Ti in a cardboard-box PC; the rest of the hardware is 10+ years old, with a mixed set of RAM totalling 12 GB. It also has a 100 Mbit NIC, so I can’t take advantage of my full internet speed when downloading models. The worst part is they could support Intel, but haven’t merged the solution because of an issue with the Windows Intel drivers. Linux is fine, but I can’t have it. I wasn’t planning to rant, but I already typed it so… enjoy?
klopstock@lemmy.specksick.com 2 months ago
There is ipex-llm from Intel, which you can use with your Intel iGPU/GPU/CPU for LLMs.
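If I’m reading their quickstart right, ipex-llm can even stand in for the stock ollama binary on Linux. Roughly like this (untested on my end, so treat it as a sketch and check the ipex-llm docs for the current steps):

pip install --pre --upgrade "ipex-llm[cpp]"  # Intel-optimized llama.cpp/ollama build
mkdir llama-cpp && cd llama-cpp
init-ollama  # links an Intel-enabled ollama binary into this directory
source /opt/intel/oneapi/setvars.sh  # load the oneAPI runtime environment
export OLLAMA_NUM_GPU=999  # offload as many layers as possible to the GPU
./ollama serve

After that, the usual ollama run commands should work against the Intel backend, which sidesteps the unmerged upstream support entirely.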
chagall@lemmy.world 2 months ago
Yeah, I have an NVIDIA GPU and it is magic. The best part is that while you are using it, you can open a second terminal window and enter the command
watch -n 0.5 nvidia-smi
and you can see your GPU usage go up and down in real time as you ask the GPT questions. Pretty cool. Hopefully they get the Arc folks up and running soon.
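If and when Intel lands, I believe the closest equivalent on that side is intel_gpu_top from the intel-gpu-tools package; it refreshes on its own, so no watch wrapper needed (assuming your distro packages it):

sudo intel_gpu_top  # live per-engine utilization for Intel iGPUs and Arc

It needs root (or the right perf permissions) to read the GPU counters.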