From Von Neumann to Neural Manifolds: How Our Picture of the Brain’s “Notation” Evolved

Posted on 2025-08-21 12:21


John von Neumann famously argued that the brain does not compute in the same symbolic language as logic and mathematics. Instead, it appears to use spatiotemporal spike patterns, noisy analog mechanisms, and adaptive, distributed codes. Since mid‑20th‑century computing borrowed its metaphors from formal logic and digital circuits, this claim set up a lasting tension: Is thought fundamentally symbolic, or does it arise from dynamical, probabilistic processes with only emergent symbols on top?

Von Neumann’s Challenge

In The Computer and the Brain, von Neumann highlighted mismatches between biological computation and digital logic: neurons communicate through spikes and chemical gradients, are slow and noisy individually, yet collectively produce robust, flexible intelligence. He concluded that the brain’s internal “notation” must differ from the explicit symbols manipulated by formal systems.

Symbolic AI and the Language of Thought

The first great response was the symbolic paradigm. Newell & Simon’s Physical Symbol System Hypothesis proposed that intelligent behavior consists of rule-governed symbol manipulation. Jerry Fodor’s Language of Thought (LOT) hypothesis added that cognition occurs in an internal, compositional “mentalese.” On this view, the brain’s code is essentially symbolic, and reasoning is akin to theorem proving over structured representations.

Connectionism: Distributed Representations

Parallel Distributed Processing (PDP) models showed that networks of simple units can learn patterns, categories, and sequences without explicit symbols. Meaning lives in vector patterns across many units rather than in single, token-like symbols. Fodor & Pylyshyn countered that systematic, combinatorial reasoning still seems to require structured representations; the resulting debate seeded today’s hybrid accounts.
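
To make the distributed-coding idea concrete, here is a toy sketch in Python (all vectors and names are hypothetical): each concept is a pattern of activity over many units, and relatedness shows up as overlap between patterns rather than identity of a single token.

    import numpy as np

    rng = np.random.default_rng(42)

    # Each "concept" is a pattern over 50 units, built from shared feature
    # vectors, so related concepts overlap across many units.
    animal = rng.normal(size=50)
    canine = rng.normal(size=50)
    feline = rng.normal(size=50)
    dog = animal + canine
    cat = animal + feline
    car = np.sqrt(2) * rng.normal(size=50)  # unrelated concept, similar scale

    def similarity(a, b):
        # Cosine similarity: meaning-as-overlap between distributed patterns.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(similarity(dog, cat))  # roughly 0.5: shared "animal" component
    print(similarity(dog, car))  # near zero: no shared structure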

Embodiment and Dynamical Systems

A second wave emphasized that cognition unfolds in time as continuous dynamics tightly coupled to body and world. Neural activity is better described as trajectories in a state space than as steps in a proof. This shift reframed the brain’s “notation” as process-based rather than token-based.
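
A minimal sketch of that reframing, with hand-picked weights and hypothetical names: a two-unit leaky network whose activity is naturally read as a continuous trajectory through state space, not a sequence of discrete proof steps.

    import numpy as np

    # Two-unit leaky network: dx/dt = (-x + tanh(W @ x)) / tau.
    W = np.array([[0.0, -1.2],
                  [1.2,  0.0]])  # rotational coupling between the units
    tau, dt = 0.1, 0.001         # time constant and Euler step (seconds)

    x = np.array([1.0, 0.0])     # initial state
    trajectory = [x.copy()]
    for _ in range(2000):
        x += dt / tau * (-x + np.tanh(W @ x))  # damped rotation
        trajectory.append(x.copy())

    # Shape (2001, 2): a curve spiraling in toward rest, not a list of steps.
    trajectory = np.array(trajectory)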

The Bayesian and Predictive Turn

Modern theories recast neural computation as probabilistic inference. On predictive coding and related Bayesian frameworks, the brain represents beliefs and uncertainties, updates them with prediction errors, and minimizes long-run surprise. If there is a “native format,” it looks statistical: probability distributions and precision-weighted errors, not categorical symbols.
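
The core arithmetic can be shown with a scalar toy model: a Gaussian belief, summarized by a mean and a precision, nudged by precision-weighted prediction errors. This is the textbook conjugate-Gaussian update standing in for richer predictive-coding schemes; all names and numbers are illustrative.

    import numpy as np

    mu, pi = 0.0, 1.0   # prior belief: mean and precision
    pi_obs = 4.0        # sensory precision (a reliable channel)

    rng = np.random.default_rng(0)
    true_value = 2.0
    for _ in range(20):
        obs = true_value + rng.normal(scale=pi_obs ** -0.5)
        error = obs - mu               # prediction error
        gain = pi_obs / (pi + pi_obs)  # precision-weighted gain
        mu += gain * error             # belief moves toward surprising data
        pi += pi_obs                   # certainty accumulates with evidence
    print(f"posterior mean {mu:.2f}, precision {pi:.0f}")  # ~2.00, 81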

Geometry and Neural Manifolds

Large-scale recordings reveal that population activity often lives on low-dimensional manifolds embedded in high-dimensional neural space. Tasks correspond to controlled movements along these manifolds, shaped by network connectivity. This geometric picture deepens the dynamical view: the brain’s “alphabet” is not discrete tokens, but structured flows and shapes in neural state space.
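
The analysis idea behind that picture can be sketched with synthetic data (not a real recording): drive a 100-neuron population from two latent factors, then check with PCA that two components capture nearly all of the variance.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 2 * np.pi, 500)

    # 100 "neurons" driven by 2 latent factors: the population activity
    # lives on a 2-D manifold embedded in a 100-D neural space.
    latents = np.stack([np.sin(t), np.cos(2 * t)], axis=1)  # (500, 2)
    mixing = rng.normal(size=(2, 100))                      # latents -> rates
    rates = latents @ mixing + 0.05 * rng.normal(size=(500, 100))

    # PCA via SVD: the top two components explain almost all the variance.
    centered = rates - rates.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    var_explained = s ** 2 / (s ** 2).sum()
    print("variance in first 2 PCs:", var_explained[:2].sum())  # ~0.99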

Where Symbols Still Matter

Humans write proofs, program computers, and speak languages with overtly symbolic structure. A plausible reconciliation is emergence: non-symbolic neural substrates implement learning and control, while higher-level practices (mathematics, logic, language) stabilize symbol systems in culture and cognition. Symbols are real at the cognitive-behavioral level even if the neural implementation is sub-symbolic.

Implications for Neuromorphic Computing

  • Spikes and locality: Event-driven spiking captures the brain’s timing-based, sparse communication (see the sketch after this list).
  • In-memory computing: Co-locating memory and compute mirrors synapses-as-storage and reduces data movement.
  • Low precision, high robustness: Embracing noise and variability can yield efficient, fault-tolerant systems.
  • Learning as inference: On-chip probabilistic and online learning align with predictive, adaptive brains.
  • Geometry-aware control: Designing controllers over low-dimensional latent dynamics echoes neural manifolds.
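
To ground the first bullet, here is a leaky integrate-and-fire neuron, a standard workhorse of spiking models; the parameter values are illustrative, not tuned to any particular chip.

    # Leaky integrate-and-fire neuron: the membrane voltage v leaks toward
    # rest, integrates input, and emits a discrete spike event at threshold.
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # mV, illustrative
    tau_m, dt = 10.0, 0.1                            # ms
    v, spike_times = v_rest, []

    for step in range(1000):
        i_in = 20.0 if 200 <= step < 800 else 0.0   # square input pulse
        v += dt / tau_m * (-(v - v_rest) + i_in)    # leaky integration
        if v >= v_thresh:                           # threshold crossing
            spike_times.append(step * dt)           # event time in ms
            v = v_reset                             # reset after the spike

    print(len(spike_times), "spikes at", spike_times)

The output is a handful of discrete event times rather than a dense activation vector; that sparsity is exactly what event-driven hardware exploits.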

Concise Timeline

  1. 1950s: Von Neumann—brain ≠ logic’s symbol game; expect statistical, distributed codes.
  2. 1960s–1970s: Symbolic AI & LOT—intelligence as rule-governed symbol manipulation.
  3. 1980s: Connectionism—distributed vectors and learned structure challenge strict symbolism.
  4. 1990s: Embodied & dynamical cognition—continuous, context-bound neural processes.
  5. 2000s: Bayesian brain—probabilistic beliefs, prediction errors, and adaptive inference.
  6. 2010s–present: Neural manifolds—geometry and dynamics of population activity; hybrid views.

Key Takeaways

  • The brain’s “native notation” is best modeled as spiking, distributed, probabilistic, and dynamical.
  • Symbols likely emerge at higher cognitive-cultural levels, implemented by sub-symbolic substrates.
  • Neuromorphic designs that embrace spikes, locality, low precision, and online learning are philosophically and technically well-motivated.

Open Questions

  • How do symbolic competencies (proof, logic, language) arise from non-symbolic neural dynamics?
  • Can we formalize a unifying framework that integrates probabilistic inference, dynamics, and geometry?
  • What learning rules most faithfully capture biological plasticity while remaining tractable in hardware?

Bottom line: Von Neumann’s intuition holds up remarkably well. Our most compelling accounts now treat neural computation as probabilistic dynamics on geometric structures, with symbols riding on top when—and only when—tasks and cultures demand them.

