
From Intention to Expression
Every time someone smiles, grimaces, or flashes a look of surprise, the movement feels effortless – but behind the scenes the brain is running a complex choreography. A new study in Science shows that facial gestures are not controlled by two separate systems (one for voluntary expressions and one for emotional ones), as was long believed. Instead, multiple face‑control regions in the brain work together using different kinds of signals: some fast and constantly changing, like real‑time choreography, and others steadier, like a sustained intention.
These neural patterns appear before the face even moves, meaning the brain prepares a gesture in advance, shaping it not just as a movement but as a socially meaningful message. This deeper understanding of how facial expressions are built in the brain may eventually guide new ways to restore or interpret facial communication after injury or in conditions that affect social signalling.
When someone smiles politely, flashes a grin of recognition, or tightens their lips in disapproval, the movement is small, but the message can be enormous. Facial gestures are among the most powerful forms of communication in primates, conveying emotion, intention and social meaning in fractions of a second. The new study, “Facial gestures are enacted via a cortical hierarchy of dynamic and stable codes”, reveals how the brain prepares and produces these gestures through a temporally organised hierarchy of neural “codes”, including signals that emerge well before any visible movement begins.
A Continuous Network, Not Two Separate Systems
The research was led by Prof Winrich A. Freiwald of The Rockefeller University in New York and Prof Yifat Prut of the Edmond & Lily Safra Center for Brain Sciences (ELSC) at the Hebrew University of Jerusalem, working with Dr Geena R. Ianni and Dr Yuriria Vázquez at Rockefeller and clinical collaborators in Kansas and Rochester. For decades, neuroscience was guided by a neat division: lateral cortical areas in the frontal lobe were thought to control deliberate, voluntary facial movements, while medial areas were held responsible for emotional expressions, a view supported by clinical observations in patients with focal brain lesions.
By directly recording activity from individual neurons across both lateral and medial face areas, the researchers found something striking: both regions encode both voluntary and emotional gestures, and they do so in ways that are distinguishable well before any visible facial movement occurs. In other words, facial communication is orchestrated not by two independent systems, but by a continuous cortical hierarchy in which different regions contribute information at different time‑scales—some fast‑changing and dynamic, others stable and sustained.
Dynamic and Stable Codes Working Together
The team discovered that the brain uses area‑specific timing patterns that form a continuum:
Dynamic neural activity reflects the rapid unfolding of facial motion, akin to the shifting muscle choreography of an expression. These signals change quickly as the face moves through time.
Stable neural activity acts more like a sustained “intent” or “context” signal, persisting over longer periods to help ensure the gesture fits the social situation.
Working together, these dynamic and stable patterns allow the brain to generate coherent facial gestures that match the moment: deliberate or spontaneous, subtle or pronounced, socially calibrated and ready to be read by others.
Why It Matters for Brain and Behaviour
Facial gestures are not just physical movements; they are social actions, and the brain appears to treat them as such. This work offers a new framework for understanding how facial gestures are coordinated in real time, how communication‑related motor control is organised in the cortex, and what may go wrong in disorders where facial signalling is disrupted, whether due to neurological injury or conditions that affect social communication. It also reframes facial expression as something more sophisticated than a simple reflex or binary decision: it is the product of a coordinated neural hierarchy that links emotion, intention and action. By showing that multiple cortical regions work in parallel, each contributing different timing‑based codes, the study opens new avenues for exploring how the brain produces socially meaningful behaviour and how clinicians might one day restore lost forms of facial communication.
“Facial gestures may look effortless,” the researchers note, “but the neural machinery behind them is remarkably structured and begins preparing for communication well before movement even starts.”
The study, “Facial gestures are enacted via a cortical hierarchy of dynamic and stable codes”, is published in Science.
Researchers: Geena R. Ianni, Yuriria Vázquez, Adam G. Rouse, Marc H. Schieber, Yifat Prut and Winrich A. Freiwald.
Institutions:
Laboratory of Neural Systems, The Rockefeller University, New York
Department of Neurosurgery and Department of Cell Biology & Physiology, University of Kansas Medical Center
Department of Neurology, University of Rochester Medical Center, Rochester, NY
Edmond & Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem
The Price Family Center for the Social Brain, The Rockefeller University, New York