
Google quietly released an app that lets you download and run AI models locally

247 likes

Submitted 2 days ago by Pro@programming.dev to technology@lemmy.world

https://github.com/google-ai-edge/gallery

Comments
  • Wazowski@lemmy.world 1 day ago

    Excellent, I will be sure not to use this, like all Google shit.
    • bizzle@lemmy.world 1 day ago

      In a few years you won’t be able to anyway
      • Kolanaki@pawb.social 1 day ago

        I’m just reaching the end game faster, then.
  • cupcakezealot@lemmy.blahaj.zone 18 hours ago

    god i can’t wait for the ai bubble to pop
  • Allero@lemmy.today 18 hours ago

    There is already GPT4All.

    Convenient graphical interface, any model you like, fully local, easy to opt in or out of data collection, and no fuss to install - it’s just a Linux/Windows/macOS app.

    For Linux folks, it is also available as a Flatpak for your convenience.
  • rickyrigatoni@lemm.ee 1 day ago

    All the time I spent trying to get rid of Gemini, just to now download this. Am I stupid?
    • JustARegularNerd@lemmy.dbzer0.com 1 day ago

      I wouldn’t think so - it depends on your priorities.

      The open-source and offline nature of this, without the pretense of “Hey, we’re gonna use every query you give as a data point to shove more products in your face,” seems very appealing over Gemini. There’s also the fact that Gemini is constantly being shoved in our faces and preinstalled, whereas this is a completely optional download.
  • AmbiguousProps@lemmy.today 2 days ago

    Why would I use this over Ollama?
    • Greg@lemmy.ca 2 days ago

      Ollama can’t run on Android
      • AmbiguousProps@lemmy.today 2 days ago

        That’s fair, but I think I’d rather self-host an Ollama server and connect to it with an Android client in that case. Much better performance.
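        For anyone curious what that setup looks like: a minimal sketch of querying a self-hosted Ollama server from any client on the network, using only the Python standard library. The LAN address and model name here are hypothetical placeholders; Ollama’s HTTP API does listen on port 11434 by default.

        ```python
        import json
        import urllib.request

        # Hypothetical LAN address of a self-hosted Ollama server
        # (Ollama's HTTP API listens on port 11434 by default).
        OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

        def build_request(model: str, prompt: str) -> dict:
            """Build the JSON body Ollama's /api/generate endpoint expects."""
            # stream=False asks for one complete JSON reply instead of chunks.
            return {"model": model, "prompt": prompt, "stream": False}

        def ask(model: str, prompt: str) -> str:
            """Send a prompt to the remote Ollama server and return its reply text."""
            body = json.dumps(build_request(model, prompt)).encode("utf-8")
            req = urllib.request.Request(
                OLLAMA_URL,
                data=body,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]
        ```

        The phone client then just points at the same endpoint and does no inference itself, so performance is whatever the server’s hardware allows.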
      • Euphoma@lemmy.ml 2 days ago

        You can use it in Termux
      • Diplomjodler3@lemmy.world 1 day ago

        Is there any useful model you can run on a phone?
      • gens@programming.dev 1 day ago

        Llama.cpp (which Ollama runs on) can. And many chat programs for phones can use it.
      • pirat@lemmy.world 1 day ago

        Try PocketPal instead
  • toastmeister@lemmy.ca 2 days ago

    Duck.ai doesn’t data mine, and has o3-mini, which I have found to be very good. It’s got some extra functionality, like lines to break up text.
    • Kuma@lemmy.world 1 day ago

      Nice! I saw Mozilla also added an AI chat in the browser recently (not in the phone version that I have seen, though).

      It is too bad duck.ai only runs the small models. GPT-4o-mini is not very good; it can be very inaccurate and very inconsistent :( I would like to see 4.1-mini instead: faster, better, and it’s got function calling, so it can do web searches, for example. o3 can’t, so it only knows what it knows up until 2023.

      But thanks for the information, I will be looking out for when 4.1 is added!
    • Libra@lemmy.ml 1 day ago

      I’ve been using duck.ai recently myself and quite like it. My only complaint is that the chats have a length limit, so if you’re working on complex projects you can run into those limits pretty quickly. I use it for worldbuilding for a novel I’m working on, and I have to use ChatGPT for thematic stuff because it has a better memory, but otherwise it’s great for quick/small things.
    • stardust@lemmy.ca 2 days ago

      Yeah, duck is all I’ve bothered with since it came out, since you don’t even need to log in to use it.
  • Obelix@feddit.org 13 hours ago

    Google hosting their shit on Microsoft’s servers and telling you to sideload instead of using their own software distribution method for their own OS is kind of crazy, if you think about it
  • KeenFlame@feddit.nu 17 hours ago

    Wonder what this has over its competitors; I hesitate to think they released this just for fun, though
  • rirus@feddit.org 2 days ago

    Alibaba also provides an open-source app; it even has support for their multimodal voice-chat model Qwen2.5-Omni: github.com/alibaba/MNN
  • RizzoTheSmall@lemm.ee 1 day ago

    You never heard of Ollama or Docker Model Runner?
    • fmstrat@lemmy.nowsci.com 1 day ago

      Android and iOS.
  • NGC2346@sh.itjust.works 2 days ago

    Enclave on iOS does the trick for the rare times I need a local LLM
    • th3dogcow@lemmy.world 2 days ago

      Didn’t know about this. Checking it out now, thanks!
  • moonlight6205@lemm.ee 2 days ago

    Is the chat uncensored?
    • ofcourse@lemmy.ml 2 days ago

      Censoring is model-dependent, so you can select one of the models without guardrails.