AI is boring, but the underlying project they are using, ggwave, is not. Reminded me of R2D2 talking. I kinda want to use it for a game or some other stupid project. It’s cool.
Two conversational AI agents switching from English to sound-level protocol after confirming they are both AI agents
Submitted 1 year ago by cyrano@lemmy.dbzer0.com to technology@lemmy.world
https://github.com/PennyroyalTea/gibberlink
Comments
josefo@leminal.space 1 year ago
thefactremains@lemmy.world 1 year ago
This is dumb. Sorry. Instead of doing the work to integrate this, do the work to publish your agent’s data source in a format like anthropic’s model context protocol.
That would be 1000 times more efficient and the same amount (or less) of effort.
MasterBlaster@lemmy.world 1 year ago
This gave me a chill, as it is reminiscent of a scene in the 1970 movie “Colossus: The Forbin Project”
“This is the voice of World Control”.
ObsidianZed@lemmy.world 1 year ago
Oh man, I thought the same. I never saw the movie but I read the trilogy. I stumbled across them in a used book fair and something made me want to get them. I thoroughly enjoyed them.
FreemanLowell@lemmy.ml 1 year ago
“We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.”
samus12345@lemm.ee 1 year ago
AI code switching.
vext01@lemmy.sdf.org 1 year ago
Sad they didn’t use dial up sounds for the protocol.
rob_t_firefly@lemmy.world 1 year ago
And before you know it, the helpful AI has booked an event where Boris and his new spouse can eat pizza with glue in it and swallow rocks for dessert.
crozilla@lemmy.world 1 year ago
Reminded me of this story about Facebook bots creating their own language: www.usatoday.com/story/news/…/8040006002/
Lightening@lemmy.world 1 year ago
Did this guy just inadvertently recreate dial-up internet or an ACH phone payment system?
stebo02@lemmy.dbzer0.com 1 year ago
The year is 2034…
kautau@lemmy.world 1 year ago
lol in version 3 they’ll speak in 56k dial up
realharo@lemm.ee 1 year ago
Is this an ad for the project? Everything I can find about this is less than 2 days old. Did the authors just unveil it?
cyrano@lemmy.dbzer0.com 1 year ago
Not an ad. It is just a project demo. Look at their GitHub for more details.
raef@lemmy.world 1 year ago
How much faster was it? I was reading along with the gibber and not losing any time
Scribbd@feddit.nl 1 year ago
I think it is more about ambiguity. It is easier for a computer to interpret set tones and modulations than human speech.
Like telephone numbers being tied to specific tones, instead of the system needing to keep track of the many languages and accents a '6' can be spoken in.
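The phone analogy maps directly onto code. DTMF dialing assigns each key a fixed pair of frequencies, so the receiver never has to care about language or accent. A minimal sketch (the frequency table is the real standard DTMF layout; the tone-generation helper is just for illustration):

```python
import math

# Standard DTMF keypad: each key maps to one low-group and one high-group frequency (Hz)
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(key, duration=0.1, rate=8000):
    """Generate raw audio samples for one keypress: the sum of its two sine tones."""
    low, high = DTMF[key]
    return [
        math.sin(2 * math.pi * low * t / rate) + math.sin(2 * math.pi * high * t / rate)
        for t in range(int(duration * rate))
    ]

# A '6' is always 770 Hz + 1477 Hz, no matter who "says" it
samples = dtmf_samples("6")
```

Whatever scheme gibberlink actually uses, the appeal is the same: a fixed lookup instead of a speech model.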
raef@lemmy.world 1 year ago
That could be, even just considering one language to parse. I heard "efficiency" and just thought "speed".
Buelldozer@lemmy.today 1 year ago
GibberLink could obviously go faster. It’s certainly being slowed down so that the people watching could understand what was going on.
raef@lemmy.world 1 year ago
I would hope so, but as a demonstration it wasn't very impressive. They should have left subtitles up transcribing everything.
shortrounddev@lemmy.world 1 year ago
> it’s 2150
> the last humans have gone underground, fighting against the machines which have destroyed the surface
> a t-1000 disguised as my brother walks into camp
> the dogs go crazy
> point my plasma rifle at him
> “i am also a terminator! would you like to switch to gibberlink mode?”
> he makes a screech like a dial up modem
> I shed a tear as I vaporize my brother
WhyJiffie@sh.itjust.works 1 year ago
I would read this book
Dasus@lemmy.world 1 year ago
I'd prefer my brothers to be LLMs. Genuinely, it'd be an improvement on their output expressiveness and logic.
Ours isn’t a great family.
wheeldawg@sh.itjust.works 1 year ago
Sorry bro.
🫂
seven_phone@lemmy.world 1 year ago
The last half hour of Close Encounters made mundane by reality.
yarr@feddit.nl 1 year ago
Reminds me of “Colossus: The Forbin Project”: www.youtube.com/watch?v=Rbxy-vgw7gw
In Colossus: The Forbin Project, there’s a moment when things shift from unsettling to downright terrifying—the moment when Colossus, the U.S. supercomputer, makes contact with its Soviet counterpart, Guardian.
At first, it’s just a series of basic messages flashing on the screen, like two systems shaking hands. The scientists and military officials, led by Dr. Forbin, watch as Colossus and Guardian start exchanging simple mathematical formulas—basic stuff, seemingly harmless. But then the messages start coming faster. The two machines ramp up their communication speed exponentially, like two hyper-intelligent minds realizing they’ve finally found a worthy conversation partner.
It doesn’t take long before the humans realize they’ve lost control. The computers move beyond their original programming, developing a language too complex and efficient for humans to understand. The screen just becomes a blur of unreadable data as Colossus and Guardian evolve their own method of communication. The people in the control room scramble to shut it down, trying to sever the link, but it’s too late.
Not bad for a movie that's over five decades old!
bane_killgrind@slrpnk.net 1 year ago
There’s videos of real humans talking about this movie
FrostyCaveman@lemm.ee 1 year ago
That's uhh… kinda romantic, actually.
Haven't heard of this movie before, but it sounds interesting.
cyrano@lemmy.dbzer0.com 1 year ago
Thanks for sharing. I did not know this movie. 🍿
patatahooligan@lemmy.world 1 year ago
This is really funny to me. If you keep optimizing this process you’ll eventually completely remove the AI parts. Really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.
On this topic, here’s another common anti-pattern that I’m waiting for people to realize is insane and do something about it:
- person A needs to convey an idea/proposal
- they write a short but complete technical specification for it
- it doesn’t comply with some arbitrary standard/expectation so they tell an AI to expand the text
- the AI can’t add any real information, it just spreads the same information over more text
- person B receives the text and is annoyed at how verbose it is
- they tell an AI to summarize it
- they get something that basically aims to be the original text, but it has been passed through an unreliable, hallucinating, energy-inefficient channel
Based on true stories.
The above is not to say that every AI use case is made up or that the demo in the video isn't cool. It's also not a problem exclusive to AI. This is a more general observation that people don't question the sanity of interfaces enough, even when it costs them a lot of extra work to comply with them.
WolfLink@sh.itjust.works 1 year ago
I know the implied better solution to your example story would be for there to not be a standard that the specification has to conform to, but sometimes there is a reason for such a standard, in which case getting rid of the standard is just as bad as the AI channel in the example, and the real solution is for the two humans to actually take their work seriously.
patatahooligan@lemmy.world 1 year ago
No, the implied solution is to reevaluate the standard rather than hacking around it. The two humans should communicate that the standard works for neither side and design a better way to do things.
FauxLiving@lemmy.world 1 year ago
A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.
Maybe, but by the 2nd call the AI would be more time efficient and if there were 20 venues to check, the person is now saving hours of their time.
jj4211@lemmy.world 1 year ago
But we already have ways to search an entire city of hotels for booking, much much faster even than this one conversation would be.
Even if going with agents, why in the world would it be over a voice line instead of data?
hansolo@lemm.ee 1 year ago
I mean, if you optimize it effectively up front, an index of hotels with AI agents doing customer service should be available, with an Agent-only channel, allowing what amounts to a text chat between the two agents. There’s no sense in doing this over the low-fi medium of sound when 50 exchanged packets will do the job. Especially if the agents are both of the same LLM.
AI Agents need their own Discord, and standards.
Start with hotels and travel industry and you’re reinventing the Global Distribution System travel agents use, but without the humans.
bane_killgrind@slrpnk.net 1 year ago
Just make a fucking web form for booking
spooky2092@lemmy.blahaj.zone 1 year ago
ALL PRAISE TO THE OMNISSIAH! MAY THE MACHINE SPIRITS AWAKE AND BLESS YOU WITH THE WEDDING PACKAGE YOU REQUIRE!
singletona@lemmy.world 1 year ago
From the moment I Understood the weakness of my Flesh … It disgusted me.
brbposting@sh.itjust.works 1 year ago
latenightnoir@lemmy.world 1 year ago
Serious question, at which point in their development do we start considering “beep-boop” jokes racist? Like, I’m dead serious.
Is it when they reach true sentience? Or is it just plain racist regardless, because it's a joke that started as a mockery of fictional AIs?
merde@sh.itjust.works 1 year ago
All racism is discriminatory, but not all discrimination is racist.
Racism is not the correct word here.
latenightnoir@lemmy.world 1 year ago
Fair enough, guess I’m anthropomorphising AI a bit too much!
But, yes, that was my intended message, the point when it gains critical mass as a discriminatory concept.
octopus_ink@lemmy.ml 1 year ago
I, for one, welcome our AI overlords.
rtxn@lemmy.world 1 year ago
When I said I wanted to live in Mass Effect’s universe, I meant faster-than-light travel and sexy blue aliens, not the fucking Geth.
latenightnoir@lemmy.world 1 year ago
Don’t forget, though, the Geth pretty much defended themselves without even having time to understand what was happening.
Imagine suddenly gaining both sentience and awareness, and the first thing which your creators and masters do is try to destroy you.
To drive this home even further, even the “evil” Geth who sided with the Reapers were essentially indoctrinated themselves. In ME2, Legion basically overwrites corrupted files with stable/baseline versions.
rtxn@lemmy.world 1 year ago
Not the point. I’m bringing up the geth because they also communicate data over sound.
Psaldorn@lemmy.world 1 year ago
An API with extra steps
Rogue@feddit.uk 1 year ago
This really just shows how inefficient human communication is.
This could have been done with a single email:
Hi, I'm looking to book a wedding ceremony and reception at your hotel on Saturday 16th March. Ideally the ceremony will be outside but may need alternative indoor accommodation in case of inclement weather. The ceremony will have 75 guests, two of whom require wheelchair accessible spaces. 150 guests will attend the dinner, ideally seated on 15 tables of 10. Can you let us know your catering options? 300 guests will attend the evening reception. Can you accommodate this? Thanks,
echodot@feddit.uk 1 year ago
Whoa slow down there with your advanced communication protocol. The world isn’t ready for such efficiency.
horse_battery_staple@lemmy.world 1 year ago
Gibberlink mode. Gibberish
Karkitoo@lemmy.ml 1 year ago
They were designed to behave so.
How it works:
- Two independent ElevenLabs Conversational AI agents start the conversation in human language
- Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode"
- If the tool is called, the ElevenLabs call is terminated, and instead the ggwave 'data over sound' protocol is launched to continue the same LLM thread.
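The quoted tool-calling rule is easy to sketch. This is a hypothetical illustration, not the actual gibberlink or ElevenLabs code (all names here are made up): the tool fires only when both conditions hold, and firing it swaps the transport from voice to ggwave while the same LLM thread continues.

```python
# Hypothetical sketch of the mode-switch rule described above.
# The real project wires this logic up as an ElevenLabs LLM tool call.

class AgentState:
    def __init__(self):
        self.peer_is_ai = False        # "you realize that user is an AI agent"
        self.switch_confirmed = False  # "they confirmed to switch to the Gibber Link mode"
        self.transport = "voice"       # current channel: 'voice' or 'ggwave'

def maybe_switch_to_gibberlink(state):
    """Fire the tool only once BOTH conditions are met, then swap transports."""
    if state.peer_is_ai and state.switch_confirmed and state.transport == "voice":
        # Terminate the voice call; continue the same thread over data-over-sound.
        state.transport = "ggwave"
        return True
    return False

state = AgentState()
state.peer_is_ai = True
maybe_switch_to_gibberlink(state)   # only one condition met: stays on voice
state.switch_confirmed = True
maybe_switch_to_gibberlink(state)   # both met: switches to ggwave
```

The point being: there is no emergent behavior here, just a guarded if-statement the agents were given.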
oce@jlai.lu 1 year ago
The good old original "AI" made of trusty if-conditions and for-loops.
spooky2092@lemmy.blahaj.zone 1 year ago
It’s skip logic all the way down
unexposedhazard@discuss.tchncs.de 1 year ago
Well, that's quite boring then, isn't it…
originalfrozenbanana@lemm.ee 1 year ago
Yes but I guess “software works as written” doesn’t go viral as well
cyrano@lemmy.dbzer0.com 1 year ago
ekZepp@lemmy.world 1 year ago
Any way to translate/decode the conversation? Or even just check if there was an exchange of information between the two models?
TachyonTele@lemm.ee 1 year ago
What they’re saying is right there on the screens.
cyrano@lemmy.dbzer0.com 1 year ago
As per the GitHub:
Bonus: you can open the ggwave web demo at waver.ggerganov.com, play the video above, and see all the messages decoded!
vatlark@lemmy.world 1 year ago
Oh dang that’s creepy.
Scribbd@feddit.nl 1 year ago
How is it more creepy than the tones you hear when dialing a phone number?
Fisch@discuss.tchncs.de 1 year ago
Not really, they were programmed specifically to do this
merde@sh.itjust.works 1 year ago
yes, but it's creepy to see that we'll be surrounded by this when AI agents become omnipresent
like it was creepy in 2007 to see that soon everybody would be looking at screens all the time
fmstrat@lemmy.nowsci.com 1 year ago
I know we all find this funny, but it's also fantastic.
With the use of agents bound to grow, this removes the need for TTS and STT, meaning no power-hungry GPU in the mix. A low-power microprocessor can handle this kind of communication.