Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone
BlackSnack@lemmy.zip 20 hours ago
Yes exactly! I would love to keep it on my network for now. I've read that "exposing a port" is something I may have to do in my Windows firewall options.
Yes, I have Ollama on my Windows rig. But I'm down to try out a different one if you suggest so. TBH, I'm not sure if LibreChat has a web UI. I think accessing the LLM on my phone via web browser would be easiest, but there are apps out there like Reins and Enchanted that I could take advantage of.
For right now I just want to do whatever is easiest so I can get a better understanding of what I’m doing wrong.
tal@lemmy.today 20 hours ago
Okay, gotcha. I don't know whether Ollama has a native Web UI itself; if it does, I haven't used it. I know that it can act as a backend for various frontend chat applications. I do know that kobold.cpp can operate both as an LLM backend and serve a limited Web UI, so at least some backends do have Web UIs built in. You said that you've already used Ollama successfully. Was this via some Web-based UI that you would like to use on your phone, or just some other program (LibreChat?) running natively on the Windows machine?
BlackSnack@lemmy.zip 20 hours ago
Backend/frontend. I see those terms a lot, but I never got an explanation for them. In my case, the backend would be Ollama on my rig, and the frontend would be me using it on my phone, whether that's with an app or a web UI. Is that correct?
I will add kobold to my list of AIs to check out in the future. Thanks!
Ollama has an app (or maybe "interface" is a better term for it) on Windows that I download models to. Then I can use said app to talk to the models. I believe Reins: Chat for Ollama is the app for iPhone that would let me use my phone to chat with the models on my Windows rig.
tal@lemmy.today 19 hours ago
For Web-based LLM setups, it's common to have two different software packages. One loads the LLM into video memory and executes queries on the hardware. That's the backend. It doesn't need to have a user interface at all. Ollama and llama.cpp (though llama.cpp does also ship a minimal frontend) are examples of this.
Then there’s a frontend component. It runs a small Web server that displays a webpage that a Web browser can access, provides some helpful features, and can talk to various backends (e.g. ollama or llama.cpp or some of the cloud-based LLM services).
Normally the terms are used in the context of Web-based stuff; it’s common for Web services, even outside of LLM stuff, to have a “front end” and a “back end” and to have different people working on those different aspects. If Reins is a native iOS app, I guess it could be called a frontend.
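To make the split concrete: a frontend is ultimately just something that sends HTTP requests to the backend's API and shows you the reply. Here's a minimal Python sketch of what a frontend does under the hood when talking to an Ollama backend. I'm assuming Ollama's default port (11434) and its /api/generate endpoint; the address and model name are placeholders, not something from your setup:

```python
import json
import urllib.request

# Placeholder: the Windows rig's private LAN address goes here.
OLLAMA_URL = "http://192.168.1.50:11434"

def build_generate_request(model: str, prompt: str) -> dict:
    """Minimal request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send one prompt to the backend and return the model's reply text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. print(ask("llama3", "Say hello in one sentence."))
```

Reins, a Web UI, or anything else you put on the phone is doing some dressed-up version of exactly this, which is why the backend has to be reachable over the network first.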
But, okay, it sounds like probably the most-reasonable thing to do, if you like the idea of using Reins, is to run Ollama on the Windows machine, expose ollama’s port to the network, and then install Reins on iOS.
So, yeah, we'll probably need to open a port in Windows Firewall (or Windows Defender... not sure what the correct terminology is these days). It sounds like having said firewall active has been the default on Windows for some years. I'm pretty out of date on Windows, but I should be able to stumble through this.
While it’s very likely that you aren’t directly exposing your computer to the Internet — that is, nobody from the outside world can connect to an open port on your desktop — it is possible to configure consumer routers to do that. Might be called “putting a machine in the DMZ”, forwarding a port, or forwarding a range of ports. I don’t want to have you open a port on your home computer and have it inadvertently exposed to the Internet as a whole. I’d like to make sure that there’s no port forwarding to your Windows machine from the Internet.
Okay, first step. You probably have a public IP address. I don't need or want to know it, since it'd give some indication of your location. If you go somewhere like whatismyipaddress.com in a web browser on your computer, it will show you that address; don't post it here.
That IP address is most-likely handed by your ISP to your consumer broadband router.
There will then be a set of "private" IP addresses that your consumer broadband router hands out to all the devices on your WiFi network, like your Windows machine and your phone. These will very probably be 192.168.something.something, though they could also be 172.something.something.something or 10.something.something.something. It's okay to mention those in comments here — they won't expose any meaningful information about where you are or your setup. This may be old hat to you, or new, but I'm going to mention it in case you're not familiar with it; I don't know what your level of familiarity is.

What you're going to want is the "private" IP address of the Windows machine. On your Windows machine, if you hit Windows Key-R and then enter "cmd" into the resulting dialog, you should get a command-line prompt. If you type "ipconfig" there, it should have a line listing your private IPv4 address, probably something like that "192.168.something.something". You're going to want to grab that address. It may also be possible to use the name of your Windows machine to reach it from your phone, if you've named it — there's a network protocol, mDNS, that may let you do that — but I don't know whether it's active out-of-box on Windows or not, and would rather confirm that the thing is working via IP before adding more twists to this.
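If you want to double-check that whatever address ipconfig reports really falls in one of those three private ranges, here's a quick sketch using Python's standard ipaddress module (the example addresses are made up):

```python
import ipaddress

# The three RFC 1918 private ranges mentioned above.
PRIVATE_RANGES = [
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("10.0.0.0/8"),
]

def is_private_lan_address(addr: str) -> bool:
    """True if addr falls inside one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

print(is_private_lan_address("192.168.1.23"))  # private: fine to post
print(is_private_lan_address("203.0.113.7"))   # public-looking: keep to yourself
```

If the function says False for the address you grabbed, it's probably your public address rather than the LAN one, and you'd want to look again at the ipconfig output.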
Go ahead and fire up ollama, if you need to start it — I don’t know if, on Windows, it’s installed as a Windows service (once installed, always runs) or as a regular application that you need to launch, but it sounds like you’re already familiar with that bit, so I’ll let you handle that.
Back in the console window that you opened, go ahead and run "netstat -a -b -n". It will look kinda like this: https://i.sstatic.net/mJali.jpg
That should list all of the programs listening on any ports on the computer. If ollama is up and running on that Windows machine and doing so on the port that I believe it is, then you should have a line that looks like:
TCP 0.0.0.0:11434 0.0.0.0:0 LISTENING
If it’s 0.0.0.0, then it means that it’s listening on all addresses, which means that any program that can reach it over the network can talk to it (as long as it can get past Windows Firewall). We’re good, then.
It might also be "127.0.0.1". In that case, it'll only be listening for connections originating from the local computer, and it'll have to be reconfigured to listen on 0.0.0.0 instead.
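One way to test which case you're in, once you have the rig's private IP, is to try a plain TCP connection to port 11434 — either from another machine on the network, or even from the rig itself using the LAN address rather than 127.0.0.1. A minimal sketch; the 192.168.1.50 address is just a placeholder for whatever ipconfig gave you:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts a TCP connection at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

print(port_open("127.0.0.1", 11434))     # True whenever ollama is running at all
# print(port_open("192.168.1.50", 11434))  # placeholder LAN address: True only
#                                          # if ollama is bound to 0.0.0.0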
I’m gonna stop here until you’ve confirmed that much. If that all works, and you have ollama already listening on the “0.0.0.0” address, then next step is gonna be to check that the firewall is active on the Windows machine, punch a hole in it, and then confirm that ollama is not accessible from the Internet, as you don’t want people using your hardware to do LLM computation; I’ll try and step-by-step that.
BlackSnack@lemmy.zip 18 hours ago
Dope! This is exactly what I needed! I would say that this is a very “hand holding” explanation which is perfect because I’m starting with 0% knowledge in this field! And I learned so much already from this post and your comment!
So here's where I'm at:

- A backend is where all the weird C++ language stuff happens to generate a response from an AI.
- A frontend is a pretty app or webpage that takes that response and makes it more digestible to the user.
- Agreed. I've seen in other posts that exposing a port in Windows Defender Firewall is the easiest (and safest?) way to go for specifically what I'm looking for. I don't think I need to forward a port, as that would be for more remote access.
- I went to the whatismyipaddress website. The IPv6 was identical to one of the ones I have. The IPv4 was not identical. (But I don't think that matters moving forward.)
- I did ipconfig in the command prompt terminal to find the info, and my IPv4 is 10.blahblahblah.
I’m ready for the next steps!