Google Launches DolphinGemma to Understand Dolphin Communication
On National Dolphin Day, Google teamed up with Georgia Tech and the Wild Dolphin Project (WDP) to launch DolphinGemma, an AI model developed to analyze and generate dolphin vocalizations.
This marks a big step forward in using AI to better understand how dolphins communicate, and it shows how technology can support marine biology.
What is DolphinGemma?
DolphinGemma is an AI model that can analyze and generate dolphin vocalizations (sounds). It helps scientists study how dolphins “talk” to each other.
Technology and Data Behind DolphinGemma
DolphinGemma is built on Google’s open Gemma foundation model and fine-tuned specifically for dolphin vocalizations.
DolphinGemma uses Google’s SoundStream audio tokenizer, which converts dolphin sounds into a discrete form the model can process.
DolphinGemma is trained on years of underwater video and audio recordings collected by the Wild Dolphin Project, focusing on Atlantic spotted dolphins in the Bahamas.
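To make the tokenization step concrete, here is a minimal, purely illustrative sketch of the general idea: an audio waveform is chopped into frames and each frame is mapped to a token ID that a language model can consume. The toy energy-based quantizer below is an assumption for illustration only; it is not Google’s SoundStream, which learns this mapping with a neural codec.

```python
# Illustrative sketch only: shows the general idea of turning an audio
# waveform into discrete tokens a language model can consume, in the
# spirit of SoundStream-style tokenization. NOT Google's actual code.
import numpy as np

def tokenize_audio(waveform: np.ndarray, frame_size: int = 320, num_tokens: int = 1024) -> np.ndarray:
    """Chop a mono waveform into frames and map each frame to a token ID.

    A toy quantizer (frame energy bucketed into num_tokens bins) stands in
    for the learned neural codec a real system would use.
    """
    n_frames = len(waveform) // frame_size
    frames = waveform[: n_frames * frame_size].reshape(n_frames, frame_size)
    energy = (frames.astype(np.float64) ** 2).mean(axis=1)
    # Bucket each frame's energy into a discrete vocabulary of token IDs.
    bins = np.linspace(energy.min(), energy.max() + 1e-9, num_tokens)
    return np.digitize(energy, bins)

# Example: a 1-second synthetic "whistle" at 16 kHz becomes a short token sequence.
t = np.linspace(0, 1, 16000)
whistle = np.sin(2 * np.pi * 4000 * t) * np.hanning(16000)
tokens = tokenize_audio(whistle)
print(tokens[:10], "...", len(tokens), "tokens")
```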
How It Works
The model identifies patterns in dolphin vocal sequences and generates realistic dolphin-like sounds.
With around 400 million parameters, DolphinGemma is small enough to run on mobile devices like Google Pixel phones, making it suitable for real-time field research.
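The core idea behind this analyze-and-generate loop is sequence modeling: learn which sound tokens tend to follow which, then sample new sequences token by token. The sketch below is only a conceptual stand-in; DolphinGemma itself is a ~400M-parameter Gemma-based model, and the tiny bigram counter and toy token data here are assumptions made for illustration.

```python
# Illustrative sketch only: a tiny bigram model stands in for DolphinGemma
# to show next-token prediction over sound-token sequences.
from collections import Counter, defaultdict
import random

def train_bigram(token_sequences):
    """Count how often each token follows another across recorded sequences."""
    follows = defaultdict(Counter)
    for seq in token_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=20):
    """Sample a new token sequence by repeatedly predicting the next token."""
    seq = [start]
    for _ in range(length - 1):
        options = follows.get(seq[-1])
        if not options:
            break
        tokens, counts = zip(*options.items())
        seq.append(random.choices(tokens, weights=counts)[0])
    return seq

# Toy "vocalization" token sequences; in practice these would come from
# tokenized Wild Dolphin Project recordings.
recordings = [[3, 7, 7, 2, 9], [3, 7, 2, 2, 9], [5, 7, 7, 9]]
model = train_bigram(recordings)
print(generate(model, start=3))
```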
Why It Matters
DolphinGemma helps scientists get closer to decoding dolphin “language.”
It could lead to new discoveries in interspecies communication, support marine conservation, and expand our understanding of intelligent life in the ocean.
News Gist
Google launched DolphinGemma, an AI model developed with Georgia Tech and WDP, to analyze and generate dolphin sounds.
Built on Google’s Gemma and SoundStream, it helps decode dolphin communication using the Wild Dolphin Project’s field recordings, supporting marine research and conservation efforts.