
The therapy I can afford

409 likes

Submitted 3 weeks ago by cm0002@lemmy.world to [deleted]

https://lemmy.ml/pictrs/image/4ab57db7-34e1-40bb-86c7-a8f266b12e13.jpeg


Comments

  • Enkers@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    Just a reminder that corporations aren’t your friends, and especially not Open AI. The data you give them can and will be used against you.

    If you find confiding in an LLM helps, run one locally. Get LM Studio, and try various models from hugging face.
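For anyone wondering what "run one locally" looks like in practice, here is a minimal sketch against Ollama's local HTTP API (an assumption on my part: it presumes Ollama is installed and a model like llama3.1 has already been pulled; LM Studio exposes a similar localhost endpoint):

```python
import json
import urllib.request

# Ollama's local HTTP API listens on port 11434 by default.
# Nothing in this script leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the locally running model and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would be `ask("llama3.1", "I had a rough week...")`; the point is the conversation stays on localhost.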

    • aeronmelon@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      ICE hopes gay, trans, minorities, political opponents, etc. vent to ChatGPT.

    • cerement@slrpnk.net ⁨3⁩ ⁨weeks⁩ ago

      or save yourself the effort and just run ELIZA
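The whole ELIZA trick fits in a few lines: pattern-match the input and reflect it back as a question. A toy sketch in that spirit (the rules here are illustrative, not Weizenbaum's original 1966 script):

```python
import re

# ELIZA-style rules: match a pattern, reflect the capture back as a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    """Return the first matching reflection, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."
```

`respond("I am tired of this")` just turns the statement back into a question, which is essentially all the original program did.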

    • A_Union_of_Kobolds@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Ollama was dirt easy to set up myself and it’s super free.

      If you’re gonna talk to a bot, make sure it’s not telling tales.

    • otacon239@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Yep. I use mine exclusively for code I’m going to open-source anyway and work stuff. And never for anything critical. I treat it like an intern. You still have to review their work…

    • dingus@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Goddamn you guys are the most paranoid people I’ve ever witnessed. What in the world do you think mega corps are going to do to me for babbling incoherent nonsense to ChatGPT?

No, it’s not a substitute for a real therapist. But therapy is goddamn expensive, and sometimes you just need to vent about something and you don’t necessarily have someone to vent to. It doesn’t yield anything useful, but it can help a bit mentally to talk it out.

      • spooky2092@lemmy.blahaj.zone ⁨3⁩ ⁨weeks⁩ ago

        Goddamn you guys are the most paranoid people I’ve ever witnessed. What in the world do you think mega corps are going to do to me for sharing incoherent nonsense to Facebook?

        You, 10-20 years ago. I heard these arguments from people in the early days, well before Facebook blew up or Cambridge Analytica was a name any normies knew.

        This isn’t the early 00s anymore where we can pretend that every big corp isn’t vacuuming up every shred of data they can. Add on the fascistic government taking shape in the US and the general trend towards right leaning parties gaining power in governments across the world, and you’d have to be completely naive to not see the issues with using a ‘therapist’ that will save every datapoint to its training and could be mined to use against you or willingly handed over to an oppressive government to use however they so choose.

      • LeninsOvaries@lemmy.cafe ⁨3⁩ ⁨weeks⁩ ago

        Mine the data for microanalysis of social trends and use it to influence elections through subliminal messaging.

      • Lucidlethargy@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

        If it’s incoherent, you’re fine… Just don’t ever tell it anything you wouldn’t want a stalker to know, or your family, or your friends, or your neighbors, etc.

    • IndiBrony@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Can you run one locally on your phone?

      • Captain_Stupid@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

The smallest models I run on my PC take about 6-8 GB of VRAM and would be very slow if I ran them purely on my CPU. So it is unlikely that your phone has enough RAM and enough cores to run a decent LLM smoothly.

If you still want to use self-hosted AI on your phone:

• Install Ollama and OpenWebUI in a Docker container (guides can be found online)
• Make sure they use your GPU (some AMD cards require an HSA override flag to work)
• Make sure the Docker container is secure (blocking the port for communication outside of your network should work fine as long as you only use the AI model at home)
• Get yourself an open-weight model (I recommend Llama 3.1 for 8 GB of VRAM, and Phi-4 if you have more or enough RAM)
• Type the IP address and port into the browser on your phone.

You can now use self-hosted AI from your phone.
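The port-blocking step in the list above can be sanity-checked from another device. A hedged sketch in Python (the address and port are placeholders for wherever your OpenWebUI container actually listens):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: the web UI should answer on your LAN address from inside
# your network, and refuse connections from outside it.
# is_port_open("192.168.1.50", 8080)
```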

  • JohnDClay@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    Would not recommend, it’ll regurgitate what you want to hear.

    slrpnk.net/post/20991559

    • stebo02@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

      imagine thinking a language model trained on Reddit comments would do any good for therapy

      • Kecessa@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

        It just reached the one “I’ll disagree with everyone else” comment from a r/relationshipadvice post

    • Lucidlethargy@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

      Yes, this is a massive problem with them these days. They have some information if you’re willing to understand they WILL lie to you, but it’s often very frustrating to seek meaningful answers. Like, it’s not even an art form… It’s gambling.

  • Captain_Stupid@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

If you use AI for therapy, at least self-host, and keep in mind that its goal is not to help you but to have a conversation that satisfies you. You are basically talking to a yes-man.

Ollama with OpenWebUI is relatively easy to install; you can even use something like edge-tts to give it a voice.

    • Robust_Mirror@aussie.zone ⁨3⁩ ⁨weeks⁩ ago

      Therapy is more about talking to yourself anyway. A therapists job generally isn’t to give you the answers, but help lead you down the right path.

      If you have serious issues get an actual professional, but if you’re mostly just trying to process things and understand yourself or a situation better, it’s not bad.

      • pupbiru@aussie.zone ⁨3⁩ ⁨weeks⁩ ago

to lead you down the right path, yes… llms will lead you down an arbitrary path, and when that path is biased by your own negative feelings it can be incredibly damaging

      • Captain_Stupid@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        That is not what I mean. I was talking about Sam Altman using your trauma as training data.

  • coherent_domain@infosec.pub ⁨3⁩ ⁨weeks⁩ ago

    I wouldn’t give my most vulnerable moment to a company that is more than happy to exploit it.

  • untakenusername@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

Actually, please don’t use ChatGPT for therapy; they record everything people put in there to further train their AI models. If you want to use AI for that, use one of those self-hosted models on your computer or something, like those from ollama.com.

    • pupbiru@aussie.zone ⁨3⁩ ⁨weeks⁩ ago

      don’t do that either… llms say things that sound reasonable but can be incredibly damaging when used for therapy. they are not therapists

    • laserm@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

ELIZA from the 1960s was made for this.

  • Aurenkin@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    Wait, why did my insurance premiums just go up?

  • sunglocto@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

    That’s not how I use it… Image

    WRITE 200 PAGES OF WHY YOUR EXISTENCE IS FUTILE! NOW!

  • Lucidlethargy@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    This is a severely unhealthy thing to do. Stop doing it immediately…

    ChatGPT is incredibly broken, and it’s getting worse by the day. Seriously.

  • AI_toothbrush@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

And then I just have the stupidest shit ever, mostly trying to gaslight ChatGPT into agreeing with me about random stuff that’s actually incorrect. Btw, PSA: please never use AI for school or work; it produces slop and acts like a crutch that you’re going to start relying on. I’ve seen it so many times in the people around me. AI is like a drug.

    • Kecessa@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

      Btw PSA: please never use AI

      That’s it.

      • absentbird@lemm.ee ⁨3⁩ ⁨weeks⁩ ago

        Since studying machine learning I’ve become a lot less opposed to AI as a concept and specifically opposed to corporate/cloud LLMs.

        Like a simple on-device model that helps turn speech to text isn’t something to be opposed, it’s great for privacy and accessibility. Same for the models used by hospitals for assistive analysis of medical imaging, or to remove background noise from voice calls.

        People don’t seem to think of that as ‘AI’ anymore though, it’s like these big corporations have colonized the term for their buggy wasteful products. Maybe we need new terminology.

    • TronBronson@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      To be fair, it is actually quite useful from a business standpoint. I think it’s a tool that you should understand. It can be a crutch but it can also be a pretty good assistant. It’s like any other technology you can adopt.

They said the same thing about Wikipedia/the internet in the early 2000s and really believed you should have to go to a library to get bona fide sources. I’m sure that attitude is long gone now, judging by literacy rates. You can check the AI’s sources just like a wiki article. Kids are going to need to understand the uses and drawbacks of this technology.

      • AI_toothbrush@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

The problem is, you can’t check the AI’s sources in most cases. I’d also say blindly trusting Wikipedia and the internet is a huge problem nowadays. Wikipedia only has a few dozen known instances of mass manipulation of facts, but Twitter, TikTok, etc. are a huge breeding ground for misinformation. So no, you shouldn’t blindly rely on Wikipedia/the internet, the same way you shouldn’t rely on AI. Also, if every time you search the internet you kill one turtle, then every question asked to an AI is like killing a thousand…

  • nick@midwest.social ⁨3⁩ ⁨weeks⁩ ago

    Image

    Just beware

    • Nurse_Robot@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      What am I looking at here

      • King3d@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        I think it’s the ability to recall past information that you provided to AI. The scary part is that you are providing potentially personal or private information that is saved and could be leaked or used in other ways that you never intended.

    • Snowcano@startrek.website ⁨3⁩ ⁨weeks⁩ ago

      How do you access this output?

      • crt0o@lemm.ee ⁨3⁩ ⁨weeks⁩ ago

        It’s under your profile > personalization > memory, but I think it’s off by default

    • MajesticElevator@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

      Ephemeral chat is there for a reason

      • nick@midwest.social ⁨3⁩ ⁨weeks⁩ ago

        You’re a clown and a fool if you think they still don’t log that shit.

        Don’t be naive.

  • rdri@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    I also facepalm often when that guy writes stuff.
