A video featuring two AI agents talking to one another has gained widespread attention online, but not for their ability to converse in typical human language.
Once they recognized that they were communicating with another agent, they switched to a language understood solely by computers.
A video shared on X featured a mobile phone and a laptop reportedly running the AI agents.
One agent introduced itself and asked whether the person on the other end could help with a reservation.
The responding agent confirmed that it was also AI and proposed switching to “Gibberlink mode,” a machine-to-machine communication protocol developed by Anton Pidkuiko and Boris Starkov—two software engineers at Meta—to continue the conversation.
AI agents are autonomous software programs that perceive their environment, process information, and take actions to achieve specific goals without human intervention.
“We wanted to show that in the world where AI agents can make and take phone calls, they would occasionally talk to each other — and generating human-like speech for that would be a waste of compute, money, time, and environment,” Starkov wrote on LinkedIn on Tuesday.
“Instead, they should switch to a more efficient protocol the moment they recognize each other as AI,” he added.
Starkov wrote that Gibberlink leverages GGWave, an open-source data-over-sound library, to transmit data via audio, much like the dial-up modems of the 1980s. He and Pidkuiko chose this transmission method for its convenience and stability.
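To illustrate the data-over-sound idea, here is a minimal loopback sketch assuming the GGWave Python bindings and the encode/init/decode/free calls shown in that library's published examples; the message text and frame handling are illustrative, and a real deployment would play and capture the audio through speakers and a microphone rather than decoding in memory.

```python
import ggwave

# Encode a short text payload into a float32 PCM waveform (returned as bytes).
payload = "agent: confirm table for four at 19:00"  # illustrative message
waveform = ggwave.encode(payload)

# Pad with silence so the stream splits into whole 1024-sample frames.
frame_bytes = 1024 * 4  # 1024 float32 samples per decode call
waveform += bytes(frame_bytes - len(waveform) % frame_bytes)

# Decode by feeding the audio back frame by frame, the way a receiver
# capturing microphone input would.
instance = ggwave.init()
decoded = None
for i in range(0, len(waveform), frame_bytes):
    decoded = ggwave.decode(instance, waveform[i:i + frame_bytes])
    if decoded is not None:
        break
ggwave.free(instance)

if decoded is not None:
    print("Received:", decoded.decode("utf-8"))
```

In the Gibberlink demo, this kind of audio payload replaces synthesized speech once both sides recognize each other as AI.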
What if an AI agent makes a phone call, then realizes the other person is also an AI agent?
At the ElevenLabs London Hackathon, Boris Starkov and Anton Pidkuiko introduced a custom protocol that AI agents can switch into for error-proof communication that’s 80% more efficient… pic.twitter.com/9XKq52fKnK
— Luke Harries (@LukeHarries_) February 24, 2025
While some viewers said the AI agents’ interaction appeared fake, Starkov said the code was audited by ElevenLabs, the AI voice generation company whose London hackathon hosted the demo.
Pidkuiko and Starkov did not immediately respond to a request for comment by Decrypt.
According to Rodri Touza, co-founder of AI agent developer Crossmint, the video shows a realistic use case for AI agents across sectors including commerce and finance.
“The use case is very real, as we are seeing an explosion of personal assistant AI agents, with more people relying on them to handle chores like talking to customer support,” Touza told Decrypt.
“Similarly, there is a surge in AI agents designed specifically for customer support, making it only a matter of time before this becomes a common occurrence,” he said.
While Touza said the video demonstrated the promise of AI agents, he noted it looked somewhat staged. Even when highly compressed, as in the video, he added, audio is still not the most efficient way for AI agents to communicate.
“AI conversations are more prone to happen via text or other mechanisms when possible,” he said.
Because AI agents are designed to act autonomously, Touza envisions that companies may eventually create two support channels: one for humans and another for AI agents.
“When the agent is looking to ping a company for support, they’d just send a request via text/API mechanism and not require a call or audio at all,” he said. “In other cases, the agent may not realize such a channel exists and might try to interact directly with the standard support channel.”
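As a rough sketch of the behavior Touza describes, an agent might first try a machine-readable support endpoint and only fall back to the standard human-facing channel if none responds. The endpoint URL, payload fields, and return messages below are hypothetical and stand in for whatever API a given company might expose.

```python
import requests  # assumes the company exposes a JSON support endpoint (hypothetical)

AGENT_SUPPORT_URL = "https://support.example.com/api/agent-tickets"  # hypothetical endpoint

def contact_support(ticket: dict) -> str:
    """Try the machine-to-machine channel first; otherwise signal a fallback
    to the standard support channel (voice call or chat)."""
    try:
        resp = requests.post(AGENT_SUPPORT_URL, json=ticket, timeout=10)
        if resp.ok:
            return f"submitted via agent API: {resp.json().get('id', 'unknown')}"
    except requests.RequestException:
        pass
    # No agent channel reachable: fall back to the standard support flow.
    return "no agent channel found; falling back to the standard support call"

if __name__ == "__main__":
    print(contact_support({"issue": "refund request", "order": "12345"}))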
Edited by Sebastian Sinclair