Rethinking (Classical + Quantum) Brain Dynamics
Fluidic Quantum Neural Nets, the Quantum-Biased Brain, and its Potential Extraordinary Abilities
The complex biophysical machines inside our heads have provided profound and valuable inspiration to a variety of scientific fields, from AI to cognitive science to quantum biology to compute hardware design (and many others). However, the emerging default way of looking at brain dynamics -- in terms of networks of neurons spiking and sending charge to one another -- is well known to be only part of the story, and I have always wondered whether it may be leaving out some of the more interesting and important bits.
I don’t think the “brain as neural net” story is wrong exactly -- my contention is more that it may be importantly incomplete.
My thinking on this topic is only loosely connected to my current work on AGI development, because I think that given current computer hardware and network infrastructure, closely emulating the brain is probably not going to be the most effective way to get to AGI. So Hyperon, my primary push toward AGI and ASI (which is going quite nicely!), very much does not try to be a brain simulation.
Nonetheless, I believe there is something to be learned regarding Hyperon by thinking about what kind of complex dynamical system the brain might actually be. I will elaborate on this in a follow-on to this post, coming in a couple of days (in which I’ll argue that the concepts from this post may also apply to complex digital AI systems like Hyperon, but with some fun twists).
But for this post, let’s talk about brains and how to model and understand them!
This post roughly summarizes three new papers I’ve put together, linked at the end of the post, which together lay out something I’ve been working toward for a while: a mathematically rigorous framework that connects AI architecture design, neuroscience, and the physics of anomalous cognition (psi phenomena), all through the same set of formal structures. The papers build on each other like floors of a weird spooky tower, so here I walk through the key ideas in order, starting from the engineering and working outward.
Step 1: Fluidic Quantum Neural Networks (FluQNets)
The first paper in the series introduces a new class of AI architecture called Fluidic Quantum Neural Networks, or FluQNets for short …
The first core intuition here is to treat computational activity in a neural net as a kind of fluid flowing through the network -- a fluid that must be routed through a network without being created or destroyed. Think of it like water flowing through pipes: the total amount is fixed, and the interesting question is where it goes and what happens to it along the way.
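Here is a minimal sketch of that conservation idea (my own toy code, not from the paper): activity mass gets redistributed around a small graph by a row-stochastic routing matrix, so the total amount of "fluid" is exactly conserved at every step.

```python
import numpy as np

# Toy conservative routing: a fixed amount of "activity fluid" moves
# around a 3-node graph. Each row of R gives the fraction of a node's
# mass sent along each outgoing edge; rows sum to 1, so total mass
# is conserved at every step (nothing created, nothing destroyed).
R = np.array([
    [0.0, 0.7, 0.3],   # node 0 routes 70% to node 1, 30% to node 2
    [0.2, 0.0, 0.8],
    [0.5, 0.5, 0.0],
])

mass = np.array([1.0, 0.0, 0.0])   # all fluid starts at node 0
for step in range(5):
    mass = mass @ R                       # redistribute along edges
    assert np.isclose(mass.sum(), 1.0)    # the conservation law holds
    print(step, mass.round(3))
```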
My big realization regarding fluidic neural nets was that you can make this fully mathematically rigorous, using two mappings that are well known in the math literature (I didn’t make them up) but haven’t been leveraged that much:
A nice mapping between the Hamilton-Jacobi-Bellman (HJB) equation, which is used for dynamic programming and optimal control (AI techniques closely related to reinforcement learning), and the Navier-Stokes equations, which describe the dynamics of incompressible fluids.
Another nice mapping between the Hamilton-Jacobi-Bellman equation and the Schrödinger equation that governs standard quantum mechanics.
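To give the flavor of that second mapping (this is the standard path-integral-control linearization, stated in my own notation; the paper works at a more general level): a stochastic HJB equation with state cost q, control-cost weight R, and noise level ν becomes linear under a logarithmic (Cole-Hopf) change of variables, and the linear equation you get is formally an imaginary-time Schrödinger equation.

```latex
% Stochastic HJB after minimizing over controls:
-\partial_t V \;=\; q \;-\; \tfrac{1}{2R}\,|\nabla V|^2 \;+\; \tfrac{\nu}{2}\,\Delta V
% The Cole-Hopf transform \psi = e^{-V/\lambda}, with \lambda = \nu R, linearizes it:
-\partial_t \psi \;=\; \tfrac{\nu}{2}\,\Delta\psi \;-\; \tfrac{q}{\lambda}\,\psi
% i.e. a (backward, imaginary-time) Schrodinger equation with potential q/\lambda.
```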
So the first half of the story of the FluQNet paper is: When you do Bellman-style dynamic programming on the right kind of infinite-dimensional space (the group of volume-preserving diffeomorphisms, for the mathematically inclined), you get out the incompressible Navier–Stokes equations -- the same equations that govern fluid flow. So the architecture’s fluidic character is inherited from deep mathematics, not just bolted on as an analogy. It turns out that a neural net that routes an attentional/resource fluid around is actually enacting dynamic programming, to which standard RL is an approximation. (A few early drafts on this sort of fluidic neural net, including some very crude early implementation experiments, are given in this folder.)
But FluQNets aren’t just about routing fluid. The second half of the story leverages the mapping between HJB and Schrödinger: At each node of the network, there’s also a local operator-valued state -- essentially a small density matrix, the kind of object used in quantum mechanics to describe mixed or uncertain states. These local states evolve by quantum-logic-network (QLN) style channels: completely positive, trace-preserving maps that generalize classical Bayesian updates into a richer, noncommutative algebra. When the operators happen to commute, you recover ordinary probabilistic inference. When they don’t, you can represent contextual evidence, incompatible hypotheses, and entanglement-like correlations that scalar truth values can’t capture.
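For concreteness, here is a toy version of such a local update (my own illustrative code, using the textbook amplitude-damping channel rather than anything specific from the paper): a qubit density matrix evolves under a CPTP map given in Kraus form, with the trace preserved exactly.

```python
import numpy as np

# A local node state as a qubit density matrix, updated by a CPTP channel
# in Kraus form. Amplitude damping: K0, K1 satisfy K0†K0 + K1†K1 = I,
# which is exactly the trace-preservation condition.
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

def channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|; off-diagonals = coherence
rho2 = channel(rho, [K0, K1])
assert np.isclose(np.trace(rho2).real, 1.0)   # trace-preserving, as required
print(rho2.round(3))
# If rho and all the Kraus operators were diagonal (the commuting case),
# this would reduce to an ordinary stochastic update on classical probabilities.
```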
So the central conceptual contribution of this first paper is showing how these two layers -- the fluidic routing layer and the local quantum operator layer -- can be glued together in a mathematically principled way. The gluing uses category theory, specifically pullback constructions. The first pullback says that the scalar mass seen by the routing layer must equal the trace of the operator state (shared mass). The second pullback says that the endpoint distributions of the routing process must match those of a Schrödinger bridge (shared endpoint geometry). These aren’t approximations or analogies; they’re exact structural agreements.
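In symbols (notation mine, chosen for this post rather than copied from the paper), the two gluing conditions read roughly as follows.

```latex
% Shared mass: the scalar mass m_i at node i equals the trace of its operator state
m_i \;=\; \operatorname{Tr}\,\rho_i \qquad \text{for every node } i
% Shared endpoint geometry: the routing process and the Schrodinger bridge
% agree on their initial and final marginal distributions
\mathrm{Law}(X_0) = \mu_0, \qquad \mathrm{Law}(X_T) = \mu_T
```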
And then things get really fun! What I realized is: these two layers, the classical fluid layer and the quantum operator layer, have the ability to dynamically synergize… to resonate … to flow along with each other…!
The paper defines cross-layer naturality: a condition under which the routing layer and the operator layer are doing the same computation at different resolutions. The idea is that you can view both layers as functors from the free path category of the network graph into vector spaces. When there’s a natural transformation between these functors -- meaning certain diagrams commute on every edge -- then they automatically commute on every path. This is a rigorous version of the intuition that moving computational budget from A to B and performing local inference from A to B can be two views of the same underlying task-relevant transformation.
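Concretely, with F the routing-layer functor, G the operator-layer functor, and η the candidate natural transformation, the condition is the usual naturality square (again, my notation):

```latex
% Naturality on a single edge e : A -> B
\eta_B \circ F(e) \;=\; G(e) \circ \eta_A
% Functoriality then propagates this to any path p = e_k \circ \cdots \circ e_1:
\eta_{\mathrm{end}(p)} \circ F(p) \;=\; G(p) \circ \eta_{\mathrm{start}(p)}
```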
A dominance theorem then shows that when you train with an entropy-regularized objective that includes a naturality-defect penalty, the network self-organizes so that traffic concentrates on paths where these cross-layer morphisms are easiest to realize. Those paths are called semantic corridors -- routes that are simultaneously cheap for the routing layer and semantically coherent for the operator layer. The paper works through a complete three-node qubit example in closed form, showing how routing strength, local unitary rotation, endpoint conditioning, and semantic corridor formation all interact, and how a Reynolds-like or Péclet-like number acts as an explicit explore/exploit dial.
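As a toy illustration of the dominance effect (my own code, assuming a much-simplified form of the entropy-regularized objective): give each path a task cost and a naturality defect, and model training as Gibbs reweighting. Traffic then concentrates on paths that are cheap AND cross-layer coherent.

```python
import numpy as np

# Hypothetical paths through a small network, with made-up costs/defects.
paths = ["A-B-C", "A-C", "A-B-D-C"]
task_cost = np.array([1.0, 1.2, 1.1])
defect = np.array([0.05, 0.9, 0.1])   # cross-layer naturality defect per path

beta_defect = 5.0     # weight of the naturality-defect penalty
temperature = 0.2     # entropy regularization: higher = more exploration

energy = task_cost + beta_defect * defect
weights = np.exp(-energy / temperature)
weights /= weights.sum()               # Gibbs distribution over paths
for p, w in zip(paths, weights):
    print(f"{p}: {w:.3f}")
# The low-defect path A-B-C dominates even though A-C has comparable raw cost;
# raising `temperature` (the explore/exploit dial) flattens the distribution.
```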
From an engineering standpoint, FluQNets suggest a very asymmetric hardware design: a large classical fluidic fabric handles routing, with small coherent modules (photonic, superconducting, or analog wave devices) embedded at local nodes for the operator-valued inference. You don’t need a large fault-tolerant quantum computer. You need a smart classical transport system with small quantum widgets at leverage points.
Step 2: The Quantum-Biased Neurofluid Brain
The second paper in the series takes the same mathematical spine and asks: what if brains are already doing something like a FluQNet?
The proposal is that brain-scale computation is carried primarily by a classical neurofluid controller -- built from transmembrane currents, extracellular electric fields, ephaptic coupling, extracellular-space diffusion, neuromodulatory volume transmission, and astrocyte-mediated regulation -- together with a much smaller open-quantum microchemistry layer whose outputs bias release probabilities, plasticity thresholds, and attractor selection. The classical layer does the heavy lifting. The quantum layer acts as a structured source of weak but amplifiable bias.
This is, in a way, a middle path between two extremes that have dominated the “quantum brain” literature. One extreme (exemplified by Orch-OR) says the brain is fundamentally a quantum computer, with consciousness tied to objective reduction events in microtubules. The other extreme says quantum effects decohere far too fast in warm, wet neural tissue to matter for anything. The present theory says: you don’t need brain-wide quantum coherence, and you don’t need the quantum layer to carry the main cognitive load. You just need it to tip outcomes at leverage points where the classical controller is already poised among several viable options.
The mathematical backbone is the same cross-layer morphism formalism from FluQNet, now interpreted biologically. A macro graph of neuronal-glial assemblies is linked to a micro graph of candidate quantum-active chemical domains (calcium-phosphate spin clusters, radical-pair reaction pockets, or similar) by a graph morphism. When the corresponding path-level functors admit approximate natural transformations, macro routing and micro inference become the same computation at two resolutions -- and plasticity should consolidate the corridors where this alignment is strongest.
Three dynamical theorems make the physics concrete.
First, an adiabatic reduction shows that fast open-quantum microdynamics collapse onto a slow manifold, producing a state-dependent bias field in the macro equations -- the quantum layer doesn’t need to carry content, just bias.
Second, a bifurcation amplification result shows that near a decision or routing bifurcation (where the macro controller is losing decisiveness), the amplification factor for that bias diverges. Tiny micro effects become large macro effects precisely when the brain is doing something interesting: making an ambiguous perceptual judgment, switching between metastable states, or hovering at a decision threshold.
Third, a drift-diffusion reduction gives an exact formula for how the micro bias shifts choice log-odds and commitment times in a standard threshold-crossing decision model.
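To give the flavor of this third result, here is the textbook version for a symmetric drift-diffusion model (the paper’s formula is more general, but reduces to something of this shape): with drift μ, noise σ, and decision thresholds at ±a,

```latex
P(\text{hit } +a) \;=\; \frac{1}{1 + e^{-2a\mu/\sigma^2}}
\qquad\Longrightarrow\qquad
\text{a small added drift bias } \delta \text{ shifts the choice log-odds by } \frac{2a\,\delta}{\sigma^2}
```

So the micro bias gets amplified by the threshold separation and suppressed by noise, and the resulting shift in choice probability is largest precisely when the unbiased drift is near zero, i.e. at ambiguous decisions.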
The paper proposes a dual-aspect interpretation of phenomenology. The classical neurofluid layer correlates with stable, reportable content: what you see, hold in mind, and act on. The micro-biased layer correlates with selection texture: the timing of perceptual switches, the sharpness of an insight, the sense that one option rather than another “came into focus,” and the trial-to-trial freshness of exploratory thought. This isn’t a claim that quantum effects create a hidden second mind. It’s a claim that they may modulate the grammar of transitions among macro-organized contents.
The paper also includes a detailed experimental program -- layered from in vitro chemistry through cultured networks, in vivo behavior, and human psychophysics -- with explicit falsification criteria. If candidate microdomains don’t survive realistic decoherence constraints, or if the predicted state-dependence near bifurcation doesn’t appear, the theory should be discarded.
Step 3: Wu-Wei Neurofluidics and Psi
The third paper in the series may seem more like a wild leap into the void — but hey, that’s the sort of thing that makes life most interesting! (well, I mean … if you happen to be thus inclined…!)
This paper asks whether the “quantum-biased neurofluid” model of the brain might provide a new angle on the physical foundations of anomalous cognition (aka psi).
While I realize the topic is still controversial in many scientific circles, I have convinced myself via careful study of the (abundant, if confusing) available data that psi phenomena (ESP, precognition, remote viewing) do actually exist, though they are mercurial and generally weak and not well understood. The cumulative meta-analytic record in parapsychology -- spanning ganzfeld and free-response studies, forced-choice precognition, presentiment, remote viewing, and micro-psychokinesis -- quite clearly includes a structured pattern of small but reliable effects moderated by participant state, task format, emotional salience, and feedback conditions.
Rigorous theoretical frameworks for explaining psi have not advanced as rapidly as the empirical literature. In my “Wu-Wei geodesics” framework, I have tried to explain how psi could be fit into the quantum perspective via adding just a few tweaks, rather than requiring a complete overhaul of physics foundations. What neither I nor anyone else has previously given, though, is a biologically concrete theory explaining how a very small future-conditioned or nonlocal bias could enter ordinary neural processing without requiring the entire brain to become a large quantum computer.
I propose here an answer synthesizing Wu-Wei geodesics with the quantum-biased neurofluid brain. From the Wu-Wei geodesics paper, take the idea that quantum and stochastic dynamics can be organized as a two-ended boundary-value problem. A density is factored as ρ = fg, with f propagating forward from initial conditions and g propagating backward from terminal constraints. The backward factor g induces a small drift correction on present trajectories -- a bias toward futures that are easier to realize in an information-geometric sense. This is the “Wu-Wei” tilt: reality preferring low-resistance paths between history and destiny.
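In the standard Schrödinger-bridge / Doob h-transform form (the piece of this that is textbook mathematics; notation mine), the backward factor enters as an extra drift term:

```latex
% Factor the density into forward and backward components:
\rho_t \;=\; f_t \, g_t
% The conditioned process then follows an SDE with a g-dependent drift correction:
dX_t \;=\; \bigl( b(X_t) + \sigma^2\,\nabla \log g_t(X_t) \bigr)\, dt + \sigma\, dW_t
% The term \sigma^2 \nabla \log g_t is the "Wu-Wei tilt" toward realizable futures.
```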
From the quantum-biased neurofluid brain paper, take the amplification machinery. The Wu-Wei tilt enters through sparse open-quantum microdomains. The adiabatic reduction turns it into a state-dependent macro bias. The bifurcation theorem amplifies it where the classical brain is near a routing or decision threshold. Plasticity then consolidates the corridors on which this repeatedly works.
By connecting ideas in this manner, one constructs a single effective corridor action that combines four terms: ordinary task cost, cross-layer naturality defect, Wu-Wei future-compatibility score, and a path-memory term representing learned corridor consolidation. The optimal path distribution is Gibbs-like: corridors dominate when they are simultaneously cheap, cross-layer coherent, future-compatible, and practiced. This gives a direct formal home to the intuition behind several existing psi theories -- Stanford’s PMIR, May’s Decision Augmentation Theory, Carpenter’s First Sight -- while adding the missing brain-level transduction mechanism.
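Schematically, with symbol names and signs of my own choosing (the future-compatibility and path-memory terms lower the effective cost; the defect term raises it):

```latex
S[\gamma] \;=\; C_{\mathrm{task}}[\gamma]
 \;+\; \lambda_{\mathrm{nat}}\, D_{\mathrm{nat}}[\gamma]
 \;-\; \lambda_{\mathrm{ww}}\, W_{\mathrm{ww}}[\gamma]
 \;-\; \lambda_{\mathrm{mem}}\, M[\gamma],
\qquad
P(\gamma) \;\propto\; e^{-S[\gamma]/T}
```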
It’s not hard to map this formalism onto specific psi modalities, at the conceptual and mathematical level at least. Presentiment is the cleanest fit: the future target defines the terminal constraint, the backward factor biases micro path selection, the adiabatic field shifts the macro controller before the target is consciously represented, and the effect appears in pre-stimulus physiology. Precognition requires stabilizing a representational corridor all the way to report, which is harder and depends on feedback structure. Telepathy-like anomalous correlations arise from a joint path measure over two brains sharing a common terminal condition. Remote viewing is modeled as corridor formation toward a delayed semantic boundary condition. Psychokinesis is branch-selection bias applied to an external threshold system.
This approach makes a few fairly specific, falsifiable predictions. Effects should be strongest near macro bifurcation or metastability. They should be selectively modulated by micro-substrate interventions (isotopes, weak magnetic fields). They should depend on delayed or semantically sharp terminal feedback. And longitudinal training should consolidate task-specific corridors rather than producing uniform global enhancement. If these signatures fail while generic psi effects remain, the theory is wrong.
Rethinking Classical and Quantum Brain Dynamics
We have summarized a re-visioning and re-modeling of (classical and quantum) brain dynamics along lines that are highly eccentric relative to the current mainstream, yet unified within a single mathematical framework, rigorously grounded in well-known mathematics, and so far as I can tell consistent with available empirical data on how brains work.
We have — preliminarily but concretely — used this perspective to explore new potentials for AI architecture, neuroscience, and the physics of anomalous cognition. The FluQNet paper is math-driven AI architecture design with clear engineering implications. The quantum-biased brain paper is a speculative but falsifiable neuroscience proposal. The psi paper extends the speculation further but stays anchored in the same formal structures, and echoes the other two papers in its focus on experimental accountability.
One key insight running through all these papers is that you don’t need the quantum part to be big for it to have a very meaningful effect on neurodynamics.
What you need is for it to be at the right place: a leverage point where a classical controller is near a decision, a bifurcation, or a creative transition. Small quantum effects, amplified by recurrent nonlinear dynamics and consolidated by plasticity, can have outsized consequences -- in an AI architecture, in a biological brain, and perhaps in domains that current physics doesn’t yet fully explain.
Whether nature actually uses the mechanisms I describe here is, of course, an empirical question. But if even part of this framework survives contact with experiment, it opens a new bridge between quantum biology, nonlinear neuroscience, optimal transport theory, and theories of cognition that could be quite consequential.
For those with the time and guts and weirdness to dig deeper, the 3 early-draft papers I’ve been discussing here are linked below.
Papers:
1. Fluidic Quantum Neural Networks: A Categorical and PDE-Level Synthesis of Bellman Routing, Shared Endpoint Geometry, and Cross-Layer Semantic Naturality
2. The Quantum-Biased Neurofluid Brain: Potential Biological, Cognitive and Phenomenological Implications of Cross-Layer Morphisms Between Macroscopic Routing and Microscopic Open-Quantum Bias
3. Wu-Wei Neurofluidics: Psi in a Quantum-Biased Neurofluid Brain via Cross-Layer Corridors and Bifurcation Bias

