Google’s AI Robotics Leap: DeepMind Unveils “Gemini Robotics” and “Gemini Robotics-ER”
Google DeepMind has introduced two new AI models, Gemini Robotics and Gemini Robotics-ER (Embodied Reasoning), that greatly improve how robots think and move.
These models help robots handle more real-world tasks with better accuracy and flexibility.
Key Points
- According to Google, the Gemini Robotics and Gemini Robotics-ER models improve how robots interact with their surroundings, making them more responsive and adaptable.
- Google has enhanced three major areas—generality, interactivity, and dexterity—using a single AI model.
- Gemini Robotics is a model that helps robots understand and act in new situations, even without prior training. It enables precise tasks such as folding paper or opening a bottle cap.
- Gemini Robotics-ER is a more advanced model that helps robots navigate and interact with complex environments. It can work with existing robotic controllers, making AI-powered robotics development easier.
- Gemini Robotics-ER can assess whether an action is safe before performing it.
- DeepMind is also introducing new benchmarks and tools to improve AI safety in robotics.
DeepMind’s Vision for the Future of Robotics
Google’s Gemini Robotics models are built on the Gemini 2.0 foundation and designed to connect how robots perceive the world with how they physically interact with it.
DeepMind considers Gemini Robotics a major step toward developing versatile robots that can adapt to real-world situations on their own.
To advance humanoid robotics, Google DeepMind is collaborating with leading robotics companies, including Apptronik, Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools.
News Gist
Google DeepMind’s Gemini Robotics and Gemini Robotics-ER enhance robotic intelligence, adaptability, and safety.
These AI models improve perception, dexterity, and interactivity, enabling robots to handle real-world tasks with greater precision and flexibility.