GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is markedly better at vision and audio understanding than existing models.
Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode uses a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter or singing, or express emotion.
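In pseudocode, that pipeline has roughly this shape (the function names are illustrative placeholders, not OpenAI’s implementation):

```python
# Illustrative sketch of the old Voice Mode pipeline; these are
# placeholder stubs, not OpenAI's actual models or API.

def transcribe(audio: bytes) -> str:
    """Model 1: speech-to-text. Tone, speaker identity, and
    background sound are discarded at this step."""
    raise NotImplementedError

def generate_reply(text: str) -> str:
    """Model 2: GPT-3.5 or GPT-4; it only ever sees plain text."""
    raise NotImplementedError

def synthesize(text: str) -> bytes:
    """Model 3: text-to-speech; it can't laugh, sing, or emote."""
    raise NotImplementedError

def voice_mode(audio: bytes) -> bytes:
    return synthesize(generate_reply(transcribe(audio)))
```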
GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We’ll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.
Sabata11792@kbin.social 6 months ago
I can't wait till someone does this, but open source and running on non-billionaire hardware.
Dyf_Tfh@lemmy.sdf.org 6 months ago
If you didn’t already know, you can run some small models locally with an entry-level GPU.
For example, I can run Llama 3 8B or Mistral 7B on a 1060 3GB with Ollama. It is about as bad as GPT-3.5 Turbo, so overall mildly useful.
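A minimal sketch with the ollama Python package, assuming the Ollama server is running and you’ve already pulled the model with `ollama pull llama3:8b`:

```python
# Assumes a local Ollama server and a pulled llama3:8b model.
import ollama

response = ollama.chat(
    model="llama3:8b",
    messages=[{"role": "user", "content": "Summarize what quantization does."}],
)
print(response["message"]["content"])
```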
Although there is quite a bit of controversy over what counts as an “open source” model, most are only “open weight.”
abhibeckert@lemmy.world 6 months ago
Emphasis on “small” models. The large ones need about $80,000 worth of RAM.
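Back-of-the-envelope, counting weights alone (a hypothetical helper; real usage also needs KV cache and runtime overhead):

```python
# Weights only; ignores KV cache, activations, and framework overhead.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

print(weight_memory_gb(8, 2))    # Llama 3 8B at fp16  -> ~14.9 GB
print(weight_memory_gb(70, 2))   # Llama 3 70B at fp16 -> ~130 GB
print(weight_memory_gb(8, 0.5))  # 8B at 4-bit quant   -> ~3.7 GB
```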
Sabata11792@kbin.social 6 months ago
I've been playing with the Mistral 7B models. About the most my hardware can reasonably run... so far. Would love to add vision and voice, but I'm just happy it can run.
Dran_Arcana@lemmy.world 6 months ago
I have this running at home. oobabooga/automatic1111 for the LLM/SD backends, vosk + mimic3 for STT/TTS. A little bit of custom Python to tie it all together. I certainly don’t have latency as low as theirs, but it’s definitely conversational when my sentences are short enough.
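The glue is roughly this shape (a simplified sketch; the ports are the defaults for oobabooga's OpenAI-compatible API and the mimic3 server, and the model/wav paths are placeholders for your own setup):

```python
# Sketch of the glue: vosk (STT) -> oobabooga's OpenAI-compatible
# API (LLM) -> mimic3 web server (TTS). Ports are the defaults;
# the vosk model dir and wav paths are placeholders.
import json
import wave

import requests
from vosk import KaldiRecognizer, Model

def listen(wav_path: str, vosk_model_dir: str) -> str:
    """Transcribe a 16-bit mono wav with vosk."""
    wf = wave.open(wav_path, "rb")
    rec = KaldiRecognizer(Model(vosk_model_dir), wf.getframerate())
    while data := wf.readframes(4000):
        rec.AcceptWaveform(data)
    return json.loads(rec.FinalResult())["text"]

def think(prompt: str) -> str:
    """Send the transcript to the local LLM backend."""
    r = requests.post(
        "http://127.0.0.1:5000/v1/chat/completions",
        json={"messages": [{"role": "user", "content": prompt}]},
    )
    return r.json()["choices"][0]["message"]["content"]

def speak(text: str, out_path: str = "reply.wav") -> None:
    """Synthesize the reply with the mimic3 server."""
    r = requests.post("http://127.0.0.1:59125/api/tts", data=text.encode())
    with open(out_path, "wb") as f:
        f.write(r.content)

speak(think(listen("input.wav", "vosk-model-small-en-us-0.15")))
```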
Sabata11792@kbin.social 6 months ago
Check out the vladmandic fork of auto1111. It seems to be much quicker with new model support.
Been wanting to try voice cloning and totally not cobble together a DIY AI waifu.
Holzkohlen@feddit.de 6 months ago
I can’t tell if you are for real or joking with those concatenations of letters. Have you tried the new Oongaboonga123? I hear it’s got great support for bpm°C
randon31415@lemmy.world 6 months ago
Look up oobabooga, and then play with all the fun extensions.
Sabata11792@kbin.social 6 months ago
I never had any luck with most of the extensions, let alone figuring out how to format a prompt for the API. I'm just making shit up as I go.
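For anyone else stuck on the same thing, the raw template Mistral's instruct models expect is just this (a sketch against oobabooga's OpenAI-compatible completions endpoint on its default port):

```python
# Sketch: Mistral 7B Instruct's raw prompt template, sent to
# oobabooga's OpenAI-compatible completions endpoint (default port 5000).
import requests

prompt = "<s>[INST] Write a haiku about GPUs. [/INST]"

r = requests.post(
    "http://127.0.0.1:5000/v1/completions",
    json={"prompt": prompt, "max_tokens": 200},
)
print(r.json()["choices"][0]["text"])
```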