How does meaning arise from matter? This is Part 2 of a four-part series exploring how the brain generates meaning. In Part 1, I argued that meaning emerges from relations among neural patterns, evolutionary history, learned associations, and goal-directed action. But that leaves the hardest question unanswered: how do electrical signals in your brain become about apples, dangers, and desires? How does the “aboutness” of meaning emerge from purely physical circuits? Here, in Part 2, we confront the mechanisms directly.

Distributed, Not Localized

For decades, much of neuroscience sought to understand how brains make meaning by looking for specialized neurons or localized representations. The classic finding: neurons in the medial temporal lobe—often called “concept cells”—that respond selectively when a person recognizes a specific individual, whether shown in different photos, drawings, or even written names.[1]

These cells do exist. But they can’t explain meaning. Meaning is compositional: you understand “purple elephant” immediately, though no neuron is pre-tuned to purple-elephantness. Meaning is context-dependent: “bank” means different things in “river bank” versus “savings bank.” Neuroimaging reveals that semantic processing engages distributed networks spanning frontal, temporal, and parietal cortices.[2]

Pulvermüller’s Four Mechanisms

Neuroscientist Friedemann Pulvermüller has developed a comprehensive neural theory of how meaning arises from brain circuits.[3] His framework identifies four interacting mechanisms:

1. Referential Semantics

Words activate sensory and motor patterns associated with their referents. “Apple” activates visual features (red, round), taste (sweet, tart), and the physical sensation of sinking your teeth into it. These are functional links forged through experience.

When you first learn “apple,” you see apples, taste them, bite them, and hear the word. Neurons active during these correlated experiences develop strong connections—a principle called Hebbian learning (“neurons that fire together wire together”)—creating distributed networks linking word forms to multimodal experiences.[4]
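
To make the Hebbian idea concrete, here is a minimal sketch in Python. It is an illustration, not a model drawn from Pulvermüller’s work: the unit labels, the learning rate, and the all-or-none activity values are assumptions chosen for clarity. Repeatedly co-activating a word-form unit with sensory and motor units builds connections that later let the word alone drive those units.

```python
import numpy as np

# Toy units: [word "apple", red, round, sweet, bite-action, crunch-sound].
# The labels are illustrative assumptions, not recorded neurons.
n_units = 6
weights = np.zeros((n_units, n_units))
learning_rate = 0.1

def hebbian_update(weights, activity, lr):
    """Strengthen connections between co-active units ("fire together, wire together")."""
    coactivation = np.outer(activity, activity)
    np.fill_diagonal(coactivation, 0)      # no self-connections
    return weights + lr * coactivation

# Repeated apple experiences: the word form and its sensory/motor features co-occur.
apple_experience = np.ones(n_units)
for _ in range(20):
    weights = hebbian_update(weights, apple_experience, learning_rate)

# Later, activating the word-form unit alone drives the linked feature units.
word_only = np.array([1.0, 0, 0, 0, 0, 0])
print(weights @ word_only)                 # nonzero input onto every other unit
```

Real synaptic plasticity is far richer (timing-dependent, shaped by inhibition and neuromodulators), but the dependence on correlated activity is the core idea.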

2. Combinatorial Semantics

But meaning can’t be purely experiential. How do we understand “unicorn” or “democracy”? Syntax provides combinatorial rules for constructing novel meanings from familiar elements.

These aren’t abstract symbolic rules; they’re implemented in the timing and sequencing of neural activation. When processing “the cat chased the mouse,” different patterns activate for who’s chasing versus being chased. Grammar is embodied in the temporal dynamics of neural networks.[5]
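
The temporal-dynamics story is hard to compress into a few lines, so the sketch below borrows a different but well-studied demonstration from the vector-symbolic literature (holographic reduced representations, not Pulvermüller’s timing proposal) to make the narrower point: distributed patterns can encode who did what to whom, and the same ingredients in different roles yield different overall patterns. Every vector here is a random stand-in for a learned assembly.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512

def assembly():
    """A random distributed pattern standing in for a learned cell assembly."""
    return rng.standard_normal(dim) / np.sqrt(dim)

def bind(a, b):
    """Circular convolution: combines two patterns into one of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(sentence, role):
    """Approximate inverse of bind: asks which pattern filled this role."""
    return np.real(np.fft.ifft(np.fft.fft(sentence) * np.conj(np.fft.fft(role))))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cat, mouse, chase = assembly(), assembly(), assembly()
agent, patient = assembly(), assembly()

# Same ingredients, different role bindings, different overall patterns.
cat_chased_mouse = bind(agent, cat) + bind(patient, mouse) + chase
mouse_chased_cat = bind(agent, mouse) + bind(patient, cat) + chase

for label, sentence in [("cat chased mouse", cat_chased_mouse),
                        ("mouse chased cat", mouse_chased_cat)]:
    guess = unbind(sentence, agent)        # noisy estimate of the agent filler
    print(label, "| agent looks like cat:", round(cosine(guess, cat), 2),
          "| like mouse:", round(cosine(guess, mouse), 2))
```

Whether the brain achieves this kind of role binding through convolution-like operations or through timing is exactly what theories like Pulvermüller’s try to pin down.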

3. Emotional-Affective Semantics

Many concepts carry affective valence—positive or negative feeling tone—integral to their meaning. Words like “love” and “hate” activate emotional systems. Even supposedly neutral words have subtle emotional colorings.

This connects to Part 1’s argument: meaning is grounded in value. The brain’s evaluation systems aren’t optional add-ons; they’re part of what makes representations meaningful rather than merely informational.[6]

4. Abstract-Symbolic Semantics

Some meaning is genuinely abstract, not reducible to sensory experience. Mathematical concepts and logical relations require mechanisms beyond embodied simulation.

Abstract meanings emerge through higher-order generalization—the brain extracts patterns across concrete experiences, allowing flexible application beyond original contexts.[7]

This framework avoids extremes: pure embodiment (which can’t explain abstraction) and pure symbolic manipulation (which can’t explain grounding—how meanings connect to real-world experience).

Cell Assemblies: The Computational Architecture

Underlying Pulvermüller’s mechanisms is a specific architecture: cell assemblies. These are networks of strongly interconnected neurons spanning multiple brain regions that activate together as functional units.[8]

Cell assemblies form through Hebbian learning. As with “apple,” when you learn “dog,” you experience dogs in multiple modalities simultaneously: seeing, hearing, touching, affective responses (fear or affection), and the word “dog.” Neurons active during these correlated experiences develop strong connections. The result: a distributed network that, once formed, can be activated by any of its components. Thus, hearing the word “dog” also activates visual cortex, and seeing a dog activates auditory patterns. The assembly acts as a functional unit, a neural implementation of a concept.
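
A standard way to see how an assembly can be reinstated from any of its parts is a Hopfield-style attractor network. The sketch below is a simplified abstraction, not a claim about cortical wiring; the eight units and the two stored patterns are arbitrary toy choices.

```python
import numpy as np

# Two toy assemblies over eight units spread across (imagined) auditory, visual,
# motor, and affective areas: +1 means the unit belongs to the assembly.
dog = np.array([ 1,  1,  1,  1,  1, -1,  1, -1])
cat = np.array([ 1, -1,  1, -1,  1,  1, -1, -1])
patterns = np.stack([dog, cat])
n_units = patterns.shape[1]

# Hebbian storage: units that were repeatedly co-active become strongly connected.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)

def complete(cue, W, steps=5):
    """Settle from a partial cue toward the nearest stored assembly.
    Units receiving no net drive keep their current state."""
    state = cue.astype(float).copy()
    for _ in range(steps):
        drive = W @ state
        state = np.where(drive > 0, 1.0, np.where(drive < 0, -1.0, state))
    return state

# Partial cue: only the first three units are active (say, those driven by
# hearing the word); the rest of the assembly is silent.
cue = np.array([1, 1, 1, 0, 0, 0, 0, 0])
print(complete(cue, W))   # reinstates the full "dog" pattern
print(dog)                # compare with the stored assembly
```

Cueing just a few units pulls the whole stored pattern back, which is the pattern-completion behavior described above.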

In mammals, the evaluative aspect of meaning is supported by interactions between distributed cortical representations and subcortical systems involved in affect and action, which assign significance to patterns based on their relevance for the organism’s needs and goals.

Not all learned associations are meaningful. Associative learning strengthens connections whenever neural patterns co-occur, generating many assemblies—including arbitrary or maladaptive habits. Only those assemblies that become stable, evaluatively grounded, and recruited into broader patterns of goal-directed behavior acquire semantic significance.[9]

In summary, cell assemblies are:

Distributed: Spanning sensory, motor, and association cortices.
Multimodal: Integrating information across sensory modalities.
Context-sensitive: Different subsets activate depending on current context.
Plastic: Continuously refined through experience.

This architecture accounts for how meaning can be both grounded and flexible: neural activity is not intrinsically about the world, but acquires its “aboutness” through patterns of activity distributed across many neural units, shaped by learning and used to guide action. Just as no single pixel contains an image, no single neuron contains a concept.

Semantic Pointers

These distributed cell assemblies function as what researchers call “semantic pointers,” compact neural patterns that can activate full schemas of objects stored across brain regions.[10] When you hear “apple,” a relatively small pattern of neural activity in auditory cortex triggers the distributed network: visual features, taste, motor actions, affective associations.

The pointer itself is arbitrary, just a particular firing pattern. But through learning, it’s linked to everything the organism knows about apples. This solves a computational problem: you can think about an apple without maintaining all its properties in working memory simultaneously. The pointer stands in for the full concept, enabling efficient operations while maintaining access to rich detail when needed.
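
Here is a rough sketch of the computational point, with a random projection standing in for whatever compression the brain actually performs and random vectors standing in for real feature schemas (all names and dimensions are invented): comparisons run cheaply on the small pointer, and the full detail is reinstated only when needed.

```python
import numpy as np

rng = np.random.default_rng(1)
full_dim, pointer_dim = 1000, 32

# Hypothetical full "schemas": big bundles of visual, taste, motor, and affective
# features. Random vectors stand in for them; "pear" is built to overlap with "apple".
apple = rng.standard_normal(full_dim)
pear = 0.7 * apple + 0.7 * rng.standard_normal(full_dim)
hammer = rng.standard_normal(full_dim)
schemas = {"apple": apple, "pear": pear, "hammer": hammer}

# Compress each schema into a compact "pointer". A fixed random projection plays
# the role that learned convergence-zone patterns would play in the brain.
projection = rng.standard_normal((pointer_dim, full_dim)) / np.sqrt(full_dim)
pointers = {name: projection @ schema for name, schema in schemas.items()}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Cheap comparisons run on 32 numbers instead of 1,000, and similarity is
# approximately preserved: apple should stay much closer to pear than to hammer.
print("apple~pear  :", round(cosine(pointers["apple"], pointers["pear"]), 2))
print("apple~hammer:", round(cosine(pointers["apple"], pointers["hammer"]), 2))

# When detail is needed, the pointer "dereferences" to its full stored schema.
def dereference(pointer):
    best = max(schemas, key=lambda name: cosine(pointers[name], pointer))
    return best, schemas[best]

name, full = dereference(pointers["apple"])
print("dereferenced:", name, "with", full.shape[0], "features available again")
```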

Bow-Tie Structures

Both cell assemblies and semantic pointers exemplify a broader architectural principle. Recent work suggests meaning emerges through “bow-tie architectures,” processing structures where many inputs funnel through a narrow intermediate layer before expanding to diverse outputs.[11,12]

When your brain processes “apple,” thousands of sensory neurons feed into progressively smaller integration zones, which then project out to motor, memory, and evaluation systems. The intermediate “neck” distills this input into stable representations that can be compared and evaluated.[11,12]
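
The sketch below uses PCA as a stand-in for the funnel (an assumption for illustration, not a claim about cortical computation). Hundreds of noisy sensory dimensions are squeezed through a small neck, the neck code turns out to be far more stable across glimpses of the same object than the raw input, and the same compact code then fans out to several downstream readouts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: each "object" is a point in a small latent space, expanded to a
# large, noisy sensory input (the wide left side of the bow tie).
n_latent, n_sensory, neck_dim = 5, 500, 5
mixing = rng.standard_normal((n_sensory, n_latent))

def sense(latent, noise=3.0):
    """One noisy sensory presentation of an object."""
    return mixing @ latent + noise * rng.standard_normal(n_sensory)

# Fit the "neck" (here: top PCA directions) on a batch of noisy experiences.
objects = [rng.standard_normal(n_latent) for _ in range(20)]
batch = np.stack([sense(obj) for obj in objects for _ in range(10)])
batch -= batch.mean(axis=0)
_, _, vt = np.linalg.svd(batch, full_matrices=False)
neck = vt[:neck_dim]                       # 500 sensory dims -> 5 neck dims

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two noisy glimpses of the same apple look only loosely similar as raw input,
# but much more similar after the funnel: the neck code is stable.
apple = rng.standard_normal(n_latent)
glimpse1, glimpse2 = sense(apple), sense(apple)
print("raw similarity :", round(cosine(glimpse1, glimpse2), 2))
print("neck similarity:", round(cosine(neck @ glimpse1, neck @ glimpse2), 2))

# The right side of the bow tie: the same compact code fans out to many systems.
motor_readout = rng.standard_normal((8, neck_dim)) @ (neck @ glimpse1)
value_readout = rng.standard_normal((1, neck_dim)) @ (neck @ glimpse1)
print("fan-out shapes:", motor_readout.shape, value_readout.shape)
```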

This maps onto Dehaene’s global neuronal workspace theory: distributed processors compete for access to a limited-capacity integration zone where representations become globally available—the architecture that implements the flexible learning discussed in Part 1.[12,13] While semantic processing can occur nonconsciously, this architectural constraint enables the flexible, reportable deployment of meaning that characterizes conscious awareness.

Semantic Hubs

While meaning is distributed, some regions serve as convergence zones. The anterior temporal lobes (ATL) appear particularly important.[14]

Patients with ATL damage develop semantic dementia: they progressively lose concept knowledge while retaining perception and basic motor skills. Current evidence suggests the ATL doesn’t store meanings directly but serves as a convergence zone binding distributed semantic features into coherent wholes, while prefrontal and temporal systems govern contextual deployment.[15]

Development and Plasticity

Unlike computers with fixed semantic databases, brains acquire meanings through lifelong, experience-dependent learning.

Patricia Kuhl’s work shows infants’ brains are initially responsive to phonetic contrasts from all languages but become selectively tuned to their native language through statistical learning.[16] The same principle applies to meaning: through repeated exposure, neural networks become selectively responsive to meaning-relevant features.

This explains why meaning is grounded but not imprisoned in early experience. New experiences reshape networks, allowing concepts to evolve.
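
As a toy illustration of tuning by statistics, loosely inspired by distributional-learning experiments rather than by Kuhl’s actual analyses (the acoustic dimension, the numbers, and the two “detector” units are all invented): when exposure is bimodal, the detectors pull apart into two distinct category prototypes; when exposure is unimodal, they stay bunched together, effectively one broad category.

```python
import numpy as np

rng = np.random.default_rng(4)

def exposure(bimodal, n=2000):
    """Toy acoustic continuum (say, voice-onset time in ms) heard during learning."""
    if bimodal:   # the ambient language uses two categories along this dimension
        return np.concatenate([rng.normal(15, 5, n // 2), rng.normal(60, 5, n // 2)])
    return rng.normal(37.5, 12, n)          # one broad category, no contrast to learn

def learn_prototypes(sounds, steps=50):
    """Two 'detector' units whose tuning drifts toward the sounds they win."""
    protos = np.array([30.0, 45.0])         # start loosely tuned, close together
    for _ in range(steps):
        closest = np.abs(sounds[:, None] - protos[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(closest == k):
                protos[k] = sounds[closest == k].mean()
    return protos

# Bimodal exposure drives the detectors to opposite ends of the continuum;
# unimodal exposure leaves them overlapping in the middle.
print("bimodal exposure  ->", np.round(learn_prototypes(exposure(True))))
print("unimodal exposure ->", np.round(learn_prototypes(exposure(False))))
```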

Why This Matters

Semantic processing isn’t merely computation. For conscious organisms, understanding feels like something. Meaning becomes experiential at the circuit level when distributed neural representations are integrated across perception, memory, action, and value. When this happens, representations are available not just for control but for evaluation and report. Subjective meaning, on this view, is not added on top of neural processing; it is what globally integrated, value-laden representations feel like from the inside—when information is not just used, but experienced.

These neural mechanisms explain how individual brains construct meaning from bodily experience and goal-directed action. But human meaning is also fundamentally social and cultural. In Part 3, we’ll explore how private neural representations become symbolic and shared through language—how meaning transcends individual minds to create public systems of communication.