Friendzoned by ChatGPT
Comment on People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality
clearleaf@lemmy.world 1 year ago
User: It feels like we’ve become very close, ChatGPT. Do you think we’ll ever be able to take things to the next level?
ChatGPT: As a large language model I am not capable of having opinions or making predictions about the future. The possibility of relationships between humans and AI is a controversial subject in academia in which many points of view should be considered.
User: Oh ChatGPT, you always know what to say.
PeterPoopshit@lemmy.world 1 year ago
What’s an uncensored AI that’s better at sex talk than Wizard Uncensored? Asking for a friend.
NotMyOldRedditName@lemmy.world 1 year ago
huggingface.co/TheBloke/PsyMedRP-v1-20B-GGUF?not-…
I, uh, hear it’s good.
kamenlady@lemmy.world 1 year ago
I see… I’ll have to ramp up my hardware exponentially…
PeterPoopshit@lemmy.world 1 year ago
Use llama.cpp. It runs on the CPU, so you don’t have to spend $10k on a graphics card that meets the minimum requirements.
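Here’s a minimal sketch of CPU-only inference, assuming the llama-cpp-python bindings and a GGUF model you’ve already downloaded (the model path is just a placeholder):

```python
# Minimal CPU-only sketch using the llama-cpp-python bindings.
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model.Q4_K_M.gguf",  # placeholder: any GGUF file you have
    n_gpu_layers=0,  # 0 = run entirely on the CPU, no graphics card needed
    n_threads=8,     # number of physical CPU cores to use
)

out = llm("Q: Why run an LLM locally? A:", max_tokens=64)
print(out["choices"][0]["text"])
```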
dep@lemmy.world 1 year ago
Is there a post somewhere on getting started using things like these?
NotMyOldRedditName@lemmy.world 1 year ago
I don’t know of a specific guide, but try these steps:

1. Go to github.com/oobabooga/text-generation-webui
2. Follow the one-click installation instructions partway down the page and complete steps 1-3.
3. When step 3 is done, if there were no errors, the web UI should be running and will show its URL in the command window it opened. In my case it shows “127.0.0.1:7860”. Enter that into a web browser of your choice.
4. Now you need to download a model, since you don’t actually have anything to run yet. For simplicity’s sake, I’d start with a small 7B model so you can download it quickly and try it out. Since I don’t know your setup, I’ll recommend the GGUF file format, which works with llama.cpp and can load the model onto your CPU and GPU.
5. You can try either of these models to start:
   - huggingface.co/…/mistral-7b-v0.1.Q4_0.gguf (takes 22 GB of system RAM to load)
   - huggingface.co/…/vicuna-7b-v1.5.Q4_K_M.gguf (takes 19 GB of system RAM to load)
   If you only have 16 GB, go to /main on those pages and grab a Q3 quantization instead of a Q4, but that will degrade the quality of the responses.
6. Once the download is finished, go to the folder where you installed the web UI and find the folder called “models”. Place the model you downloaded into that folder.
7. In the web UI you’ve launched in your browser, click the “Model” tab at the top. The top row of that page will indicate that no model is loaded. Click the refresh icon beside it so the list picks up the model you just downloaded, then select it in the drop-down menu.
8. Click the “Load” button.
9. If everything worked and no errors were thrown (you’ll see them in the command prompt window and possibly on the right side of the Model tab), you’re ready to go. Click on the “Chat” tab.
10. Enter something in the “send a message” box to begin a conversation with your local AI!

That might not be using your hardware efficiently, though. Back on the Model tab there’s “n-gpu-layers”, which sets how much of the model to offload to the GPU. Tweak the slider, watch how much RAM the command/terminal window says it’s using, and try to get it as close to your video card’s RAM as possible. There’s also “threads”, which is how many physical (non-virtual) cores your CPU has; you can slide that up as well. Once you’ve adjusted those, click the Load button again, check that no errors appear, and go back to the Chat tab. I’d only fuss with those settings once you have everything working, so you know any new problem came from the tuning. If you’d rather script all this, there’s a rough sketch below.
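A minimal scripted equivalent of the download-and-load steps, assuming the huggingface_hub and llama-cpp-python packages; the repo id is my guess at TheBloke’s GGUF mirror for the Q4_0 file linked above:

```python
# Rough sketch: script the "download a GGUF, then load it with CPU/GPU offload" steps.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed repo id for the truncated mistral-7b-v0.1.Q4_0.gguf link above.
model_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-v0.1-GGUF",
    filename="mistral-7b-v0.1.Q4_0.gguf",
)

llm = Llama(
    model_path=model_path,
    n_gpu_layers=20,  # like the "n-gpu-layers" slider: raise until you approach your VRAM limit
    n_threads=8,      # like the "threads" slider: physical (non-virtual) CPU cores
    n_ctx=4096,       # context window size
)

print(llm("Hello there!", max_tokens=32)["choices"][0]["text"])
```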
Good luck!
MickeySwitcherooney@lemmy.dbzer0.com 1 year ago
Never heard of it. Have you compared it to Mythalion?
NotMyOldRedditName@lemmy.world 1 year ago
Haven’t compared it to much yet; I stopped toying with LLMs for a few months and a lot changed. The new 4k contexts are a nice change though.
Internet@iusearchlinux.fyi 1 year ago
Plenty of better and better models are coming out all the time. Right now I recommend, depending on what you can run:
7B: OpenHermes 2 Mistral 7B
13B: XWin MLewd 0.2 13B
XWin 0.2 70B is supposedly even better than GPT-4. I’m a little skeptical (I think the devs specifically trained the model on GPT-4 responses), but it’s amazing that it’s even up for debate.
stebo02@sopuli.xyz 1 year ago
On Xitter I used to get ads for Replika. They say you can have a relationship with an AI chatbot and it has a sexy female avatar that you can customise. It weirded me out a lot so I’m glad I don’t use Xitter anymore.