Whilst a two-month-old might look like they are simply absorbing the world around them in a blur of faces, toys and formula milk, a new study by Trinity scientists suggests their brains are far more sophisticated than we previously thought. It turns out that even months before they show any kind of adult-like recognition behaviour, babies can already sort what they see into meaningful categories.
In their paper “Infants have rich visual categories in ventrotemporal cortex at two months of age”, the neuroscientists report the results of an experiment that used awake functional magnetic resonance imaging (fMRI) to scan over 100 two-month-old babies, following up on 66 of them at nine months. Because baby brain scans are notoriously difficult (they wriggle, they get bored, they fall asleep), the team used attention-grabbing “looming” images and fun nursery songs to keep them engaged.
The study’s main finding is that the infant ventral visual cortex – part of the brain’s object-recognition pathway – already shows clear “categorical structure”. In layman’s terms, the activity patterns in the babies’ brains aren’t random: they group objects into categories, much like how adult brains cluster the visual world. The researchers found early signs of high-level groupings, such as distinguishing animate from inanimate objects and even real-world size (small things vs big things). At nine months these representations become more refined, but the fact that they exist in infants as young as two months is remarkable.
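To give a flavour of what “categorical structure” means in practice: neuroscientists often look at whether brain activity patterns evoked by items from the same category (say, two animals) resemble each other more than patterns evoked by items from different categories. The sketch below is purely illustrative, with simulated “voxel” patterns rather than anything from the study; the stimulus names and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel patterns: two "animate" and two "inanimate" stimuli.
# Same-category patterns share a common component, so they correlate more.
n_voxels = 100
animate_base = rng.normal(size=n_voxels)
inanimate_base = rng.normal(size=n_voxels)

patterns = {
    "dog":   animate_base   + 0.5 * rng.normal(size=n_voxels),
    "cat":   animate_base   + 0.5 * rng.normal(size=n_voxels),
    "chair": inanimate_base + 0.5 * rng.normal(size=n_voxels),
    "car":   inanimate_base + 0.5 * rng.normal(size=n_voxels),
}

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

within = (corr(patterns["dog"], patterns["cat"]) +
          corr(patterns["chair"], patterns["car"])) / 2
between = float(np.mean([corr(patterns[a], patterns[i])
                         for a in ("dog", "cat")
                         for i in ("chair", "car")]))

print(f"within-category similarity:  {within:.2f}")
print(f"between-category similarity: {between:.2f}")
# Categorical structure shows up as within-category similarity
# reliably exceeding between-category similarity.
```

Finding that kind of reliable within-versus-between gap in two-month-old cortex is, in essence, what the paper reports.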

What makes this so groundbreaking is that it calls into question the common theory of bottom-up development – the idea that babies start with only simple visual features (e.g. edges, colours and basic shapes) and build richer categories later, through extensive experience of the world. This new study, however, suggests that category information is present in ventral regions from two months, while a lateral object-selective region known as LO looks less mature and shows little consistent category structure at that age. The authors argue this isn’t simply a measurement problem: LO’s signal quality looked very similar to that of early visual cortex, pointing to a different developmental timetable than previously thought. Rather than a neat step-by-step simple-to-complex ladder, brain development appears to be “non-hierarchical”.
The neuroscientists also turned to AI to better understand how the human brain develops. They compared the infants’ brain maps with representations from deep neural networks (DNNs), and found that trained computer-vision DNNs matched the infant brain data better than untrained ones. They also tested self-supervised models (trained without labels), which learn in a way closer to how babies do, and found similar broad correspondences.
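The article doesn’t spell out how such brain-to-model comparisons are made, but a standard approach in the field is representational similarity analysis: for the same set of stimuli, build a “dissimilarity matrix” from the brain patterns and another from a network’s features, then correlate the two. The sketch below uses simulated data as a stand-in for both brain and model; every name and number is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the correlation
    between each pair of stimulus patterns (one pattern per row)."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (Pearson here for
    simplicity; rank correlation is also common in this literature)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

# Hypothetical setup: 8 stimuli, simulated "brain" patterns carrying
# category structure, and model features that either share that structure
# (a stand-in for a trained DNN layer) or don't (an untrained one).
n_cond, n_feat = 8, 200
category = np.repeat([0, 1], 4)                      # animate / inanimate
signal = rng.normal(size=(2, n_feat))[category]      # shared category signal

brain = signal + 0.7 * rng.normal(size=(n_cond, n_feat))
trained_model = signal + 0.7 * rng.normal(size=(n_cond, n_feat))
untrained_model = rng.normal(size=(n_cond, n_feat))  # no category structure

brain_rdm = rdm(brain)
trained_score = rdm_similarity(brain_rdm, rdm(trained_model))
untrained_score = rdm_similarity(brain_rdm, rdm(untrained_model))
print("trained model vs brain:  ", round(trained_score, 2))
print("untrained model vs brain:", round(untrained_score, 2))
```

In this toy version, the trained-style model’s dissimilarity structure lines up with the brain’s while the untrained one’s does not, which mirrors the direction of the result the researchers describe.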
The big open question is when this category framework gets set up: is it rapidly learned in the first weeks of life, or does the brain arrive with more scaffolding than we assumed? Either way, the study offers a tantalising bridge between infant neuroscience and next-generation AI. If we can work out what kind of “training signal” real babies effectively use, and why some brain regions mature earlier than others, it could inspire machine-vision systems that learn more efficiently, with less data, and in more human-like ways.