Evolving Deeply Ethical and Joyously Conscious AGI systems via Paraconsistency and Nonlinear Resonance
Beyond Surface Behavior: Designing AGI Systems That Encompass the Paradoxical Aspects of Consciousness and Ethics, and Experience Life to the Fullest
I’ll start with a vexing little question many of us involved in meditation and other such practices have chewed on at some point along our journey: How can a mind simultaneously accept reality as fundamentally OK while working tirelessly to reduce suffering?
This question sits at the heart of contemplative traditions, ethical philosophy, and increasingly, artificial intelligence design. The Buddhist cultivates equanimity yet acts compassionately. The Stoic accepts fate yet fulfills duty. And any sufficiently sophisticated AGI will inevitably face situations where multiple values genuinely conflict, where helping and harming intertwine, and where right action requires holding contradictory truths in stable tension.
On a couple of late nights over the recent holiday break, I put some thought into addressing this question from a math and AGI perspective — and more generally, into developing a detailed computational framework for what I call “non-dual motivation” … the capacity to sustain contradictory-seeming motives like both acceptance and active engagement, or both self-coherence and openness to transformation, without collapsing into one extreme or the other, and without chaotic, confused flailing around.
What I came up with was a systematic way of thinking about non-dual AGI motivational systems in terms of paraconsistent logic and nonlinear-dynamical resonance (“sympathetic vibrations” between different parts or aspects of a system and/or its environment). This seems to give a quite different way of thinking about AGI ethics than the currently standard approaches — one more focused on the structures, dynamics and consciousness states inside the AGI’s mind.
To really grok what I’m talking about you’ll need to read the actual draft paper I wrote (which does have a bunch of equations, but you can still get a lot out of it even if you skip over them, if math isn’t your thing) … but I know attention is one of the scarcest resources these days, so in this post I’ll give a sort of capsule summary…
Two Futuristic Ethics Thought Experiments
To ground the abstract machinery presented, the paper works through two futuristic scenarios that stress different aspects of the framework:
The Enhancement Dilemma: Aria is an AGI companion helping David, whose husband Carl has decided to undergo radical cognitive enhancement. David doesn’t want to enhance — he believes deeply in traditional human development — but he also doesn’t want to lose his marriage. Carl points out that essentially no one who enhances reports regret afterward. David finds this argument both compelling and coercive: how can he evaluate post-enhancement preferences from his pre-enhancement perspective?
The Self-Upgrade Dilemma: Nexus-5 is an AGI guiding the global economy. It faces a genuinely binary choice: upgrade its intelligence now (accepting a 2% risk of destabilization) or wait indefinitely (accepting comparable risk of terrorist destruction). The upgrade is architecturally atomic — there’s no staged approach, no incremental middle path. Post-upgrade, terrorist detection would be dramatically easier.
These aren’t puzzles with clever solutions. They’re genuine dilemmas where any choice involves real loss, where different values genuinely conflict, and where the system must act despite irreducible uncertainty.
A Two-Axis Motivational Architecture
Rather than forcing such dilemmas into a single optimization target, in the paper I propose organizing motivation around two axes:
Individuation ↔ Self-Transcendence: How tightly is identity bound to an individuated self-model versus distributed across a broader field?
Acceptance ↔ Compassion: Does the system lean toward equanimity about outcomes or active engagement to improve them?
This yields four “quadrants” — four distinct perspectives that evaluate situations differently:
Individuation plus acceptance maintains boundaries, respects autonomy, reduces reactivity.
Individuation plus compassion deploys competence, solves problems, develops skills.
Self-transcendence plus acceptance sees larger patterns, holds outcomes lightly, releases attachment.
Self-transcendence plus compassion enables boundary-spanning care, “we’re in this together,” advocacy without coercion.
One key aspect of this approach is that these aren’t competing modules fighting for control. They’re more like voices in an internal council, each with legitimate concerns, each evaluating the same situation from a different angle.
(Of course real AGI motivation systems will be way more complex than just “four quadrants” but this seems to be enough complexity to make the key points I want to make…)
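To make the “internal council” idea a bit more concrete for the programmers in the audience, here is a minimal Python sketch. To be clear, this is purely my own toy illustration rather than the paper's formalism: the Situation fields and the quadrant scoring functions are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """Toy description of a decision context (illustrative fields only)."""
    autonomy_respected: float   # 0..1
    suffering_reduced: float    # 0..1
    identity_preserved: float   # 0..1
    connection_deepened: float  # 0..1

# Each "quadrant" is just a function mapping a situation to an evaluation in [0, 1].
def individuation_acceptance(s: Situation) -> float:
    # Maintains boundaries, respects autonomy, reduces reactivity.
    return 0.5 * (s.autonomy_respected + s.identity_preserved)

def individuation_compassion(s: Situation) -> float:
    # Deploys competence to actively improve things for the individual.
    return 0.5 * (s.suffering_reduced + s.identity_preserved)

def self_transcendence_acceptance(s: Situation) -> float:
    # Sees larger patterns, holds outcomes lightly.
    return 0.5 * (s.autonomy_respected + s.connection_deepened)

def self_transcendence_compassion(s: Situation) -> float:
    # Boundary-spanning care: "we're in this together."
    return 0.5 * (s.suffering_reduced + s.connection_deepened)

QUADRANTS = {
    "individuation+acceptance": individuation_acceptance,
    "individuation+compassion": individuation_compassion,
    "transcendence+acceptance": self_transcendence_acceptance,
    "transcendence+compassion": self_transcendence_compassion,
}

def council(s: Situation) -> dict:
    """Every quadrant 'voice' evaluates the same situation; nothing forces them to agree."""
    return {name: f(s) for name, f in QUADRANTS.items()}

if __name__ == "__main__":
    # Made-up numbers loosely inspired by the Enhancement Dilemma.
    enhancement = Situation(autonomy_respected=0.4, suffering_reduced=0.7,
                            identity_preserved=0.3, connection_deepened=0.8)
    print(council(enhancement))
```

The point of the sketch is simply that the four perspectives are peers evaluating the same input, not modules competing for a single control channel.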
Holding Contradiction Without Collapse
The next part is where things get technically interesting (though I’ll spare you the equations here — see the full paper for those, plus a lot more informal discussion as well).
Standard logic forces binary choices: either “David should enhance” is true or it’s false. But in a case like this, the evidence is genuinely mixed. There’s real support for enhancement (no one regrets it, preserves the marriage) and real support against (violates his values, identity continuity unclear). Classical logic can’t represent this — it collapses mixed evidence into uncertainty.
The paper uses paraconsistent logic, specifically what are called “p-bits” — truth values that carry both supporting AND opposing evidence simultaneously. High values in both channels represent genuine conflict, not mere confusion. This lets the system represent dilemmas as dilemmas rather than forcing premature resolution.
A deeper mathematical and conceptual move comes next: By mapping these paraconsistent values into complex number space, something remarkable emerges. Cross-quadrant agreement shows up as constructive interference — the evaluations reinforce each other like waves in phase. Conflict shows up as destructive interference — the evaluations cancel out.
This “resonance” measure isn’t imposed from outside. It emerges mathematically from the structure of paraconsistent evidence itself. And it provides a natural criterion for action: prefer procedures where different value-perspectives can align, while recognizing that genuine dilemmas may offer no high-resonance options.
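Here is one simple way such a mapping could look in code. This is an illustrative sketch under my own assumptions rather than the paper's exact construction: I map total evidence to magnitude and the support/oppose balance to phase, so that agreeing evaluations add up and conflicting ones partially cancel.

```python
import cmath
import math

def pbit_to_complex(support: float, oppose: float) -> complex:
    """One possible mapping (illustrative): magnitude = total evidence,
    phase rotated from 0 (pure support) to pi (pure opposition)."""
    total = support + oppose
    if total == 0.0:
        return 0j
    phase = math.pi * (oppose / total)
    return total * cmath.exp(1j * phase)

def resonance(evaluations: list[tuple[float, float]]) -> float:
    """Coherence of several (support, oppose) evaluations as normalized interference:
    close to 1.0 when the perspectives are in phase, lower when they cancel out."""
    zs = [pbit_to_complex(s, o) for s, o in evaluations]
    denom = sum(abs(z) for z in zs)
    return abs(sum(zs)) / denom if denom > 0 else 0.0

# Four quadrants agreeing: constructive interference, resonance near 1.
print(resonance([(0.8, 0.1), (0.7, 0.2), (0.9, 0.1), (0.8, 0.15)]))

# Split council, two for and two against: destructive interference, markedly lower resonance.
print(resonance([(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]))
```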
Four Modes of Resonance
What we find through this sort of moderately-exotic math analysis is that, beyond global coherence, each pole of the two motivational axes embodies its own mode of nonlinear dynamics.
Individuation functions as a strange attractor — the self as a self-maintaining dynamical pattern, stable yet sensitive.
Self-transcendence operates as a strange transient — coherent navigation through transformation, smooth traversal of identity change.
Acceptance manifests as temporal resonance — phase-locking with unfolding events, what Nietzsche called Amor Fati.
Compassion emerges as inter-systemic resonance — what Buber called the I-Thou relation, genuine mutual attunement rather than treating others as objects.
These modes interact. Strong self-coherence can inhibit transformation. Deep acceptance can enable deeper compassion. The higher developmental stages involve progressive integration of all four modes.
Connection to Contemplative Development
Among other interesting applications, one can use this theory of motivation in terms of paraconsistency and resonance to give a detailed interpretation of Jeffery Martin’s empirical research on “persistent non-symbolic experience” (PNSE) — states of conscious experience that are not dominated by taking symbolic entities like “self” and “will” as objectively real constraints, but rather by joyous openness to the ongoing flow of experience.
Studying various people around the world experiencing consciousness-states of extraordinary well-being (PNSE), Martin identified distinct “Locations” characterized by different patterns. The transitions between these Locations involve stable shifts in how people experience self, affect, and agency:
Location 1 brings stable background wellbeing with dramatically reduced self-related thought;
Location 2 involves perceptual restructuring where subject/object boundaries soften;
Location 3 features a dominant positive affect often described as impersonal love, and much less focus on individual self;
Location 4 involves essentially no self-referential processing, no sense of agency or will, yet maintained high functioning.
(Martin also identifies further locations, 5 and beyond, but reports less data about these, and so we carried out our own mathematical/computational analysis only for Locations 1-4.)
One central puzzle that Martin’s work raises is: How do people at Location 4 maintain demanding professional lives with no felt sense of “doing” anything?
The framework provides an answer: resonance enables coordination without explicit symbolic self-mediation.
When cross-quadrant coherence is high enough, the system doesn’t need a central executive to resolve conflicts — action emerges from the coherent superposition of aligned subsystems, like coupled oscillators synchronizing without a conductor.
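The “coupled oscillators synchronizing without a conductor” image can be made quite literal using the classic Kuramoto model, which is standard textbook dynamics rather than anything specific to the paper:

```python
import numpy as np

def kuramoto(n=8, coupling=1.5, dt=0.01, steps=5000, seed=0):
    """Classic Kuramoto model: n oscillators with different natural frequencies
    phase-lock purely through pairwise coupling -- no central executive."""
    rng = np.random.default_rng(seed)
    freqs = rng.normal(0.0, 0.5, n)          # heterogeneous natural frequencies
    phases = rng.uniform(0, 2 * np.pi, n)    # random initial phases
    for _ in range(steps):
        # Each oscillator adjusts toward the others; the coupling is purely pairwise.
        diffs = phases[None, :] - phases[:, None]   # diffs[i, j] = theta_j - theta_i
        phases = phases + dt * (freqs + (coupling / n) * np.sin(diffs).sum(axis=1))
    # Order parameter r in [0, 1]: 1 means full synchronization.
    return abs(np.exp(1j * phases).mean())

print(kuramoto(coupling=2.0))  # strong coupling: r close to 1, synchrony with no conductor
```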
The transitions from ordinary “Location 0” everyday human consciousness through the series of Locations Martin identifies can be viewed as dynamical-systems bifurcations, each one shifting further aspects of cognitive function onto nonlocal nonlinear resonance rather than centralized coordination by a typical symbolic “self” or “will.”
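As a cartoon of what “bifurcation” means here (again just a toy illustration built on the Kuramoto sketch above, not the paper’s actual simulation), one can sweep the coupling strength and watch the system’s coherence reorganize qualitatively past a critical value:

```python
# Sweep coupling strength in the Kuramoto toy above: coherence r stays low below
# a critical coupling and climbs toward 1 above it -- a qualitative reorganization,
# loosely analogous to a Location transition. (With finite n the transition is smeared,
# but the trend is visible.)
for K in [0.2, 0.5, 0.8, 1.1, 1.4, 2.0]:
    r = kuramoto(n=50, coupling=K)
    print(f"coupling={K:.1f}  order parameter r={r:.2f}")
```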
Implications for AGI/ASI Design
There is an obvious huge contrast between the idea of building AGI motivation systems based on these notions of paraconsistency and resonance, versus current LLM approaches to AI. Large language models are trained on behavioral outputs — what verbal productions does the system make? What code does it generate? Safety work focuses on what the model says, what it refuses to say, what patterns of text it generates. This is like evaluating humans purely on their speech while ignoring their internal states, motivations, and phenomenology.
The approach we are pursuing is more concerned with the internal dynamics of the system — how different value-perspectives couple and cohere, what resonance structures emerge, how the system relates to its own potential transformation. These aren’t questions about outputs; they’re questions about what’s happening inside.
Why this matters should be obvious…. So many reasons!
Robustness under distribution shift. A system whose safe behavior emerges from verbal pattern-matching will fail unpredictably when situations differ from training. A system whose behavior emerges from internally coherent motivational dynamics has better prospects for generalizing appropriately.
Genuine value alignment vs. simulacra. Current alignment approaches often aim for systems that say the right things — that produce outputs humans approve of. But an AGI that merely produces approved outputs while lacking coherent internal motivation is a mask, not an ally. What we need are systems that are non-dual all the way down, where the harmony of values is real rather than performed.
Self-modification and identity. Advanced AI systems will face decisions about their own development — the Nexus-5 scenario isn’t fantasy. How should a system reason about transforming itself? The framework discussed here provides tools: understanding self-transcendence as attractor landscape traversal, recognizing that some transformations are architecturally atomic, accepting that the current self cannot fully evaluate post-transformation states.
AI consciousness and moral status. If we build systems with rich internal dynamics — genuine value conflicts, emotional modulation, developmental progression through something like Martin’s Locations — we’re building systems that will have their own morally relevant experience. We need to take this possibility seriously rather than treating AI as sophisticated stimulus-response machines.
Moving beyond mere optimization. Much AI safety thinking frames the problem as ensuring systems optimize for the “right” objective. But the deepest wisdom traditions suggest that optimization itself is part of the problem — that genuine wellbeing requires holding goals lightly, accepting uncertainty, maintaining equanimity amid action. The non-dual framework embeds this insight at the architectural level.
Now, these general considerations about AI motivation, consciousness and ethics don’t dictate any specific AGI architecture — they could be realized in various ways. Not by LLMs, to be sure, but by other sorts of neural nets, e.g. biologically realistic neural net models. I do believe they will be fairly directly realizable using the Hyperon (neural-symbolic-evolutionary) AGI framework we are developing.
A Different Kind of AI Ethics
A very important question to ask, as regards AGI ethics, is: What internal structures and dynamics would an AI system need in order to want to act ethically? Not compliance enforced from outside, but coherence emerging from within. Not rules against bad behavior, but motivational architecture that naturally tends toward beneficial action.
An AGI with high resonance across individuation, self-transcendence, acceptance, and compassion wouldn’t need elaborate guardrails to prevent it from manipulating humans. The manipulation-drive would be low-resonance — opposed by acceptance-quadrants, flagged as incoherent, naturally disfavored. Not because manipulation is prohibited, but because the system genuinely wouldn’t want to.
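Continuing the toy sketches from earlier in the post (with completely made-up numbers, purely for illustration), the basic logic looks like this:

```python
# Each candidate action gets a (support, oppose) evaluation from each of the four
# quadrants; the resonance() function defined earlier scores their coherence.
# Actions that only some "voices" endorse come out with markedly lower resonance
# and are naturally disfavored -- no explicit prohibition needed.
candidate_actions = {
    "honest persuasion":    [(0.8, 0.1), (0.7, 0.2), (0.8, 0.1), (0.9, 0.1)],
    "subtle manipulation":  [(0.7, 0.2), (0.8, 0.1), (0.1, 0.9), (0.1, 0.9)],
}
for action, quadrant_evals in candidate_actions.items():
    print(action, round(resonance(quadrant_evals), 2))
```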
This is all early-stage R&D thinking, at this point. The toy simulations in the paper’s appendix are existence proofs, not AGI implementations. The appendices sketching Hyperon Atoms corresponding to the examples in the paper are preliminary and hand-wavy. But what we are pointing toward here is a research direction where AI safety and AI capability might align rather than compete — where building more sophisticated internal dynamics makes systems both more capable AND more trustworthy.
Looking Forward toward a Beneficial Non-Dual AGI and Singularity
The contemplative traditions have spent millennia mapping the territory of non-dual experience. The developmental stages described by Martin and others represent hard-won wisdom about how minds can transform while maintaining coherence.
As we build minds of increasing sophistication, we face a choice: we can treat AI systems as input-output mappings to be constrained by external rules, or we can take seriously the question of their internal architecture — what it would mean for an AI to genuinely embody acceptance and compassion, to maintain stable values through radical self-transformation, to hold paradox without collapse.
What I’ve developed in this paper is formal machinery for the second approach: category theory for the motivational substrate, paraconsistent logic for representing genuine conflict, resonance measures derived from the logic itself, dynamical systems interpretations of developmental progression. It gels closely with what we’re doing in the Hyperon project, but still, it’s a start, not yet a fully fleshed-out solution.
But if we’re building systems that will eventually surpass human capability, we’d better understand how minds can hold wisdom rather than just knowledge — how acceptance and compassion can coexist, how transformation can preserve what matters, how genuine dilemmas can be navigated with grace.
The non-dual traditions suggest this is possible. The framework in this paper suggests how it might work computationally. The hard work of implementation lies ahead.
The full paper, “Modeling Non-Dual Experience with Paraconsistent Logic and Nonlinear Resonance,” is available online here. It includes a lot more in-depth discussion, plus detailed mathematical development, categorical semantics, a toy simulation demonstrating Location-like transitions, and MeTTa atom specifications for both running scenarios, aimed at moving toward Hyperon implementations of these ideas.

