
Meta’s TRIBE Model Predicts Brain Reactions to Videos

Meta’s FAIR team has unveiled TRIBE (TRImodal Brain Encoder), a powerful 1-billion-parameter AI model that can predict human brain responses to movies without requiring any brain scans.

It analyzes video, audio, and dialogue to estimate which brain regions would activate while watching.

Key Features of TRIBE

Three-in-one input: Combines visual, auditory, and textual cues.

Scan-free predictions: No physical brain scans needed for estimation.

High accuracy: Over 50% prediction accuracy across more than 1,000 mapped brain regions.

Multimodal edge: Outperforms single-modality models by up to 30%, especially in integration hubs such as the frontal cortex.

How It Works

Trained on 80 hours of films and TV shows paired with matching fMRI recordings, TRIBE learned to map a clip's video, audio, and dialogue to the neural activity it evokes. It can now predict responses across more than 1,000 brain regions with over 50% accuracy.

It excels particularly in brain areas tied to attention, emotion, and decision-making, where it delivers up to 30% better predictions than models that use only one type of input.
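Meta has not published TRIBE's internals in this article, but the general recipe for such encoding models — extract features per modality, fuse them, read out one prediction per brain region, and score against measured fMRI with a correlation metric — can be sketched in plain Python. Everything below (feature sizes, the random "encoders", the fake fMRI signal) is illustrative, not TRIBE's actual implementation:

```python
import math
import random

random.seed(0)

N_REGIONS = 8   # TRIBE covers 1,000+ regions; kept tiny here
DIM = 4         # per-modality feature size (illustrative)

def encode(_clip, dim=DIM):
    # Stand-in for a real video/audio/text encoder: random features.
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

def fuse(video_feat, audio_feat, text_feat):
    # Simplest trimodal fusion: concatenate the three feature vectors.
    return video_feat + audio_feat + text_feat

def predict_regions(fused, weights):
    # One linear readout per brain region (weights would be learned
    # from the film/fMRI training pairs).
    return [sum(w * x for w, x in zip(row, fused)) for row in weights]

def pearson(a, b):
    # Encoding models are typically scored by correlating predicted
    # and measured activity (real evaluations do this per region over
    # time; here it is one correlation for a single toy clip).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Random readout weights and a fake noisy "fMRI" signal, just to run
# the pipeline end to end.
weights = [[random.gauss(0.0, 1.0) for _ in range(3 * DIM)]
           for _ in range(N_REGIONS)]
fused = fuse(encode("video"), encode("audio"), encode("dialogue"))
predicted = predict_regions(fused, weights)
measured = [p + random.gauss(0.0, 0.5) for p in predicted]
print(round(pearson(predicted, measured), 3))
```

The concatenate-then-linear-readout step is the simplest possible fusion; the article's note that multimodal input helps most in integration hubs like the frontal cortex suggests the real model learns richer cross-modal interactions than this sketch captures.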

Achievement

TRIBE earned first place in the Algonauts 2025 brain modeling competition, an open-science initiative focused on advancing models that predict neural responses.

What's Next

In the coming years, Meta and other research institutions are expected to explore the ethical and practical applications of TRIBE-like AI.

For content creators, such systems could become advanced analytics tools, offering unprecedented insights into audience reactions beyond traditional metrics.

Future versions may deliver real-time predictions of brain responses, enabling iterative content refinement.

While commercial rollout timelines remain uncertain, this research signals a transformative shift in digital experience design within the next 3–5 years.

News Gist:

Meta’s TRIBE AI predicts brain responses to videos without scans, using video, audio, and text to achieve over 50% accuracy across 1,000+ brain regions.

The breakthrough raises both creative possibilities and ethical concerns, with potential real-world use in 3–5 years.
