It’s been a while since I last wrote here — I haven’t been writing so much, but I’ve been thinking a lot!
In this post I’m going to introduce a new-ish concept that has started to play a significant role in my thinking on various topics: the “paraconsistent interzone.”
In brief: A paraconsistent interzone is a space in which limited forms of paradox and inconsistency are permitted to flourish, and which serves in part as a medium for interoperation and conversion between other spaces that are, in themselves, more narrowly consistent.
I have found the paraconsistent interzone concept very useful for thinking about cognitive architectures and AGI system architectures such as OpenCog Hyperon. Hyperon’s Atomspace knowledge-store may usefully be viewed as a paraconsistent interzone joining the consistent logical systems corresponding to various Hyperon subsystems (e.g. explicit logical reasoning, procedure learning, perceptual pattern recognition…).
But the paraconsistent interzone concept also seems — if one opens one’s mind a bit — to have broader relevance to philosophy of mind and consciousness, and maybe even further afield to things like unified physics, quantum biology, psi and energy medicine. So… brace yourself for some reasonably out-there thinking — all of course in the spirit of creative brainstorming and robust humor!
This is a pretty dense post and I apologize in advance for not managing to make it more expository and simplified with more diagrams and metaphors etc., but honestly I’m happy just to have found the time to write these ideas (which have been bouncing quite energetically around my head the last couple months) out at all!
To Consistency and Beyond!
Consistency is, in many cases, a very positive thing to have in a cognitive network like a mind, community or belief system. But, like anything else, consistency is not all-powerful and not always desirable. Sometimes a bit of inconsistency is what you need.
In the domain of formal logic, “consistency” has a special meaning: roughly, a logic system is considered consistent if the ability to prove a statement implies the inability to prove its negation. Like consistency more generally conceived, this is in many contexts a valuable property for a logic system to have. But in logic as in life in general, total consistency is not always what you want.
Paraconsistent logics allow various forms of limited inconsistency without having so much inconsistency that they become totally trivial and uninteresting. This can be useful for a variety of different reasons — for one thing because everyday human reasoning is plainly closer to paraconsistent than strictly consistent.
In a traditional consistent logic system, introducing one contradiction automatically introduces all contradictions (logicians call this the “principle of explosion”). So e.g. in a fully consistent common-sense reasoning system, if you introduce “I love my girlfriend and I also hate her” then you can logically derive “Chickens are made of mud” and “I have a head and I do not have a head”, and any other formally expressible statement. On the other hand, human minds are able to hold some contradictions without this necessarily leading them to hold ALL contradictions. Logicians have developed a number of “paraconsistent” logic systems that also have this property — they are able to manipulate contradictory statements, and draw conclusions from them, without unintuitive properties like the implication from one contradiction to all contradictions.
Viewed from the perspective of “helping systems do complex things with limited resources,” the core merit of logical consistency is that it’s a fantastic pruning heuristic. If one has to make a binary choice of X versus not-X, then once one has made that choice, one can throw out the conflicting contradictory set of possibilities. Whereas if one doesn’t have that sort of dichotomy and one can allow both X and not-X to co-exist, one may have to ongoingly make room in one’s mind for both possibilities. For just a single X this is not necessarily a big deal, but if one has 1000 variables then making a binary choice regarding each of them narrows one down to 1/2^(1000) of the possible combinations.
“Logics” may seem abstract and disconnected from everyday experience, but the issue could be framed just as easily as being about concepts. If a mind is dealing with a concept-system in which each concept is rigidly either/or (say, an emotion is either LOVE or not-LOVE but can’t be both; a person is either ALIVE or not-ALIVE but can’t be both; etc.), then reasoning using this concept-system is going to be most naturally expressed using a conventional consistent logic. This will help bring certain reasoning chains to a relatively rapid conclusion (though it certainly doesn’t make inference control and the pruning of reasoning chains easy), but it will also restrict the system’s ability to easily model various sorts of situations that occur in everyday life and in science and mathematics (which are more naturally modeled using concepts possessing some degree of self-contradiction).
I believe consistency is a valuable pruning heuristic that has a role to play in cognitive systems that need to operate with significantly limited resources. However, I think that for systems that need to come to grips with the full complexity of the world, and/or that need to be profoundly generative and creative, paraconsistency also has a large role to play.
Paraconsistent Interzones…
The main idea I want to explore in this post is the role that paraconsistent cognitive spaces can play as intermediaries or “interzones” between various different cognitive spaces that have their own, different and diverse forms of consistency.
Given multiple different cognitive spaces with different vocabularies and systems and each with their own forms of consistency, how can one bring it all together?
There are multiple possibilities…
One mode of cross-cognitive-space cognition is unification in the logical sense -- i.e. find a Unifying Logic so that each of the logics of the various cognitive spaces involved can be viewed as a projection of the Unifying Logic, under certain simplifying assumptions. However, this is not always possible or tractable to do.
Another approach is to allow the cross-cognitive-space Unifying Logic to let go of traditional notions of consistency.
What I call a Paraconsistent Interzone is a paraconsistent logical space that projects into a number of different consistent logical spaces, allowing concepts and relationships from the various consistent logical spaces to interact, but without forcing them to be fully consistent or compatible.
Of course this might seem totally trivial -- isn’t it too easy to unify two conflicting forms of reasoning in a common space, if one just allows this common space to host various contradictory things freely? But paraconsistency isn’t quite like that -- rather, one allows inconsistencies with some limitations and guidance regarding how they can propagate. This is where things get interesting!
“Paraconsistent interzone” can be developed as a fully technical math/logic notion, however fleshing out something so subtle in a fully rigorous sense is not a job for a blog post.
The paraconsistent interzone is also intriguing and important as a purely philosophical concept.
And for those of us building practical AI systems incorporating logical inference engines and other tools usefully viewed as equivalent to logical inference engines (e.g. program learning systems), the paraconsistent interzone is also a pragmatic AI software design principle.
In this post I’ll kinda veer willy-nilly between these various aspects of the paraconsistent interzone concept -- willy-nilly is part of the fun of writing a blog post rather than something more formal, after all…. If I should contradict myself a little here or there I can always chalk it up to paraconsistency, right??
Context, Fringe and Focus
One reason paraconsistent interzones interest me so much is I believe they play a key role in the human mind, and have a key role to play in various other cognitive architectures, including human-like ones and also less human-like ones (such as say AGI theorem provers).
For instance, I think the concept of paraconsistent interzone can be instrumental in understanding the relationship between different aspects or portions of an intelligent system’s consciousness.
Context, Contexts Everywhere….
To formulate the relationship between paraconsistent interzones and consciousness in the crispest and most general way, let’s start with another key concept: CONTEXT.
The most basic insight to be hammered home by the postmodernist philosophers of the 20th century -- and the earlier proto-postmodernists like Nietzsche -- was surely: There is no such thing as “context-free.”
Everything -- every concept, every judgment, every perception, every distinction -- is relative to some context.
This is the core thing objectivists and materialists don’t like about postmodernism (setting aside differences regarding communicative style). And it’s also, rather correlatedly, the core thing that many of us DO like about postmodernism.
In the different but related domain of interpretations of quantum mechanics, Carlo Rovelli’s “relational interpretation” holds that: there is no sense in speaking of an observation on its own; it only makes sense to speak of (observer, observed) pairs.
In other words: there is no such thing as an observation outside of some observer-defined context.
If one accepts this general perspective (which I do), it follows that understanding the nature of contexts becomes highly important. Of course, in the spirit of contextualism, we need to acknowledge that any understanding of contexts is going to be more relevant and meaningful in some contexts than others -- but that’s no reason not to push ahead anyways.
Now we come to a key concept I want to put forth here, which I believe is fairly critical to how paraconsistent interzones operate in practice: Most powerful and useful contexts seem to have an internal structure consisting of a focus and a fringe.
The focus comprises those things that are clearly within the context.
The fringe comprises those things that, to a mind operating in that context, are recognizable as “existing” in some sense, but also recognizable as not directly perceptible or manipulable within that context.
The fact that contexts have fringes is critical, among other reasons, because it is key to transitions between contexts.
To move from contexts toward consciousness, it suffices to observe that the state of consciousness of a system over some interval of time comprises one sort of context.
My views on consciousness in humans and human-like systems have been articulated in a prior paper and I won’t repeat them here. Numerous subtleties exist both regarding experiential aspects (qualia) and empirical structures and dynamics. But for the present purposes it will suffice to think about consciousness in terms of the “moving bubble of attention” or “theater of explicit awareness” associated with a human mind in standard consciousness states.
The application of the focus/fringe dichotomy to human consciousness states goes back at least to William James. The fringe of an everyday human state of consciousness is the set of images, notions, feelings, etc. at the “edge of one’s mind” -- but not fully in conscious focus. It’s sort of the peripheral vision of the inner eye. The fringe of consciousness is the main way we know, phenomenologically and experientially, that there are parts of the mind that are outside the main conscious focus but still exist -- we know this because we sense these things in the fringe of our awareness.
The fringe, as I generally interpret it, is basically: how a context dimly sees outside itself, which is part of what leads to a context switching or segueing into another context.
Many powerful and useful contexts are also graded, in both their foci and their fringes. That is:
Given two distinctions A and B that are both in the focus of a context, one of the two may be more intensively present in the context
Given two distinctions A and B both in the fringe of a context, one may be more intensively there.
(BTW I’ve chosen the term “distinction” to refer to general stuff that may be in a context, instead of say “thing” or “entity”, with some care… there is reference to Spencer-Brown’s Laws of Form here as well as to Deleuze and to Weaver’s reading of Deleuze in his work on Open-Ended Intelligence. The core intuition is that “distinction” is an elementary concept, in that any other concept you posit also makes a distinction, but a distinction in itself doesn’t do anything besides make a distinction.)
One key difference between the focus and the fringe is: The distinctions in the focus tend to have greater intensity.
And winding finally back to our main focus here: another difference, at least for many contexts, seems to be that the focus of a context tends to have a relatively high level of consistency and coherence.
Recalling that consistency is a way of pruning out possibilities, and that focus by its nature has lower capacity than fringe, the association of consistency with focus and inconsistency/paraconsistency with fringe makes total sense just via quantitative arguments.
One relevant kind of consistency commonly found in context foci is: Clarity of comparative intensity values.
For instance, intensity levels within the focus of a context tend to be totally ordered (so that, e.g., where I(X) means the intensity of X, we have: if I(A) > I(B) and I(B) > I(C), then I(A) > I(C)).
On the other hand: The fringe of a context tends to be fairly inconsistent and incoherent.
As part of this general tendency toward inconsistency, intensity levels within the fringe of a context need not be comparable nor transitively comparable.
So, in the fringe, it can commonly be unclear whether A is more intense than B or vice versa; and one can have I(A) > I(B) and I(B) > I(C) without having I(A) > I(C). This underlies intransitivity of preferences, a well known aspect of human taste and judgment that is one of many factors rendering most of classical economic theory only partially relevant to human behavior.
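To make the focus/fringe contrast concrete, here is a tiny Python sketch (all names hypothetical): focus intensities are modeled as numbers, which forces transitive comparisons, while fringe intensities are modeled as a bare relation that may happily contain cycles.

```python
# A toy illustration (names hypothetical): focus intensities form a total
# order, while fringe "intensities" are a bare relation that may be cyclic
# or leave pairs incomparable.

focus_intensity = {"A": 0.9, "B": 0.6, "C": 0.3}   # totally ordered by >

# Fringe: an explicit relation rather than numbers; note the cycle A > B > C > A
fringe_more_intense = {("A", "B"), ("B", "C"), ("C", "A")}

def focus_prefers(x, y):
    return focus_intensity[x] > focus_intensity[y]   # always transitive

def fringe_prefers(x, y):
    return (x, y) in fringe_more_intense             # may be intransitive

assert focus_prefers("A", "B") and focus_prefers("B", "C") and focus_prefers("A", "C")
assert fringe_prefers("A", "B") and fringe_prefers("B", "C") and not fringe_prefers("A", "C")
```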
A Combinatory Model of Consistency
To dig further into all this requires a bit more exploration of the deeper meaning of the concept of “consistency.”
It seems worthwhile to elaborate what “consistency” means in a general conceptual manner — stepping away from the mathematical particulars of formal logic — just thinking about contexts and distinctions in the most general and simplistic way possible. (Of course all these considerations can be cast in terms of formal logic as well as one wishes…)
Compositionality is a key notion here. Most interesting contexts will contain distinctions that have a compositional relationship with other distinctions, i.e. so that one distinction A can be viewed as the result of combining say two other distinctions B and C, or three other distinctions B, C and D, etc. The combination operation will often have some order to it, so that combining B and C is different from combining C and B.
(A context comprising compositional distinctions like this falls under the general purview of the combinational computational model I considered in my paper on Simplicity Theory.)
Many interesting contexts also have a local temporal direction associated with them, so one can look at the elements of a context’s focus or fringe as either appearing, disappearing or persisting relative to that focus or fringe.
One important sort of combination is annihilation: if combining Z with A within the context tends to lead to A disappearing from the context, then we can say Z is an annihilator of A. Often two distinctions will be mutual annihilators in a certain context.
After a bunch of reflection on the deeper meaning of “consistency” I ultimately came to the conclusion that the deepest core meaning of “consistency” is: the use of annihilation to guide the growth-dynamics of the combinatory system of elements within (some portion of) a context.
What I’m calling here “annihilation” is of course what an AI algorithm designer would think of as “pruning.” One is eliminating possibilities from the scope of consideration.
Logical consistency is one example of this broader notion of consistency. One may consider the notion of an “evidentially-consistent” logical system -- defined as a system for doing reasoning based on evidence about propositions, in which items of positive or negative evidence about a certain proposition are annihilators for each other.
So in an evidentially-consistent system, if one has a bunch of positive evidence about some proposition in a context at a certain point in time (along a certain assumed timeline), one can predict that in the future this context will contain only distinctions that tend to be consequent from positive evidence about that proposition, and NOT distinctions that tend to be consequent from negative evidence about that proposition. This is a powerful form of focusing, which is not nearly so straightforwardly there in inconsistent or paraconsistent systems in which positive and negative evidence about the same proposition can peacefully co-exist.
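To make the annihilation-as-pruning idea concrete, here is a minimal Python sketch (all names hypothetical): a context is just a set of distinctions, certain pairs are declared mutual annihilators (e.g. positive and negative evidence about the same proposition), and each growth step prunes any such pair that meets.

```python
# A minimal sketch (all names hypothetical) of "consistency as annihilation":
# a context is a set of distinctions, and certain pairs annihilate on contact.

from itertools import combinations

# Mutual annihilator pairs, e.g. positive vs. negative evidence about P
annihilators = {frozenset({"evidence_for_P", "evidence_against_P"}),
                frozenset({"X", "not_X"})}

def step(context: set) -> set:
    """One growth step: remove any pair of distinctions that annihilate each other."""
    ctx = set(context)
    for a, b in combinations(context, 2):
        if frozenset({a, b}) in annihilators:
            ctx.discard(a)
            ctx.discard(b)
    return ctx

ctx = {"evidence_for_P", "evidence_against_P", "X", "Y"}
print(step(ctx))  # {'X', 'Y'} -- the conflicting evidence has pruned itself away
```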
A slightly different way to slice it is to say -- Given a combinatory system: A set of distinctions is mutually consistent with respect to that combinatory system if the elements of the set can inter-combine with each other freely without annihilating each other.
In a paraconsistent logic, annihilation is harder to come by and a great variety of distinctions are mutually consistent in this sense … but in a standard consistent logic, mutual consistency is much harder to come by.
I’m definitely playing a bit fast and loose here … but it feels like it wouldn’t be a big challenge to prove the following under fairly relaxed assumptions:
In a combinatory system where there’s lots of annihilation going on, the probability distribution of possible future context-states, conditioned on a particular current context-state, is generally going to be relatively low-entropy (as it must be focused on future context-states that are consistent with the current context-state, i.e. whose elements won’t be annihilated by those of the current context-state).
OTOH in a combinatory system without so much annihilation, the probability distribution of possible future context-states, conditioned on a particular current context-state, is generally going to be much higher-entropy, as regardless of the current state, in the future “almost anything goes” and all sorts of new things may pop up to join the context without regard to their consistency with the current contexts.
All this clarifies what is meant by the proposition that the focus of consciousness tends to be way more consistent than the fringe of consciousness.
The focus is limited in scope by its very nature -- that’s what focus is -- and in order to get meaningful stuff done within the confines of this limited scope, some rigorous pruning of the contents is going to be necessary. This is basically equivalent to annihilation, and explains why if one models the combinatory dynamics of focus-elements as a logic system, one would naturally wind up with a consistent logic system. Given limited bandwidth, entropy needs to be strongly constrained.
In the fringe, on the other hand, more resources for parallel processing can be assumed, and thus keeping various mutually inconsistent possibilities around concurrently poses less of an issue. Annihilation is far less critical.
Paraconsistent Fringe Logic
These differences in consistency between the focus and fringe have implications for the sorts of reasoning that are typically going to be most effective within these two parts of a context. To wit:
Uncertain reasoning over the focus of a context will often be naturally done via making a consistent probability model over the distinctions in the focus -- which, when one gets into evidence-counting for propositions, boils down to using standard real-number probabilities.
Uncertain reasoning over the fringe of a context will often be naturally done via making a paraconsistent probability model over the distinctions in the fringe.
Ba-da-bing!
There are many ways to do uncertain paraconsistent reasoning. Probably the simplest is the one explored in my recent paper on Paraconsistent Foundations, according to which each distinction in the fringe would be assigned separate positive-evidence and negative-evidence values, each of which may vary independently (e.g. between 0 and 1). So one could then have
(1,0) -- true
(0,1) -- false
(0,0) -- neither true nor false
(1,1) -- both true and false
Or any intermediate value like
(.7, .8) -- true to degree .7, false to degree .8
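As a minimal sketch of these four-cornered truth values in code (the operator choices below, min/max conjunction and component-swapping negation, are common options in this style of logic but should be read as illustrative assumptions):

```python
# A minimal sketch of the four-cornered truth values described above
# (operator choices are illustrative assumptions, not a fixed standard).

from dataclasses import dataclass

@dataclass(frozen=True)
class PTruth:
    pos: float  # degree of positive evidence, in [0, 1]
    neg: float  # degree of negative evidence, varying independently of pos

    def __invert__(self):           # NOT swaps the evidence components
        return PTruth(self.neg, self.pos)

    def __and__(self, other):       # one common choice: min on pos, max on neg
        return PTruth(min(self.pos, other.pos), max(self.neg, other.neg))

TRUE, FALSE = PTruth(1, 0), PTruth(0, 1)
NEITHER, BOTH = PTruth(0, 0), PTruth(1, 1)

x = PTruth(0.7, 0.8)
print(x & ~x)   # PTruth(pos=0.7, neg=0.8) -- the contradiction survives, no explosion
```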
Given the work I did last year which suggests that a paraconsistent probability model (over the fringe of a context, or anything else) is roughly isomorphic to a complex-valued quantum probability model — the current direction would seem to suggest that fringes are naturally modeled via quantum probabilistic logic, whereas foci are naturally modeled using classical probabilistic logic. (Ba-da-bing ba-da-bing??)
The OpenCog Atomspace as a Paraconsistent Interzone
The main reason I started thinking about the use of paraconsistent logics as intermediaries between multiple consistent logics was the OpenCog Hyperon design.
The core AGI design issue at play here is: How do you deal with an intelligent system that needs to both use logic, and figure out what logic to use?
Reasoning within a specific axiom system is a major technical and cognitive challenge for an AI or AGI system. However, any AGI system that’s really worth its GI salt should also be able to modify its underlying axioms based on its perceptual experience or its social interactions.
Once you go this far, though, you realize that an AGI system undergoing ongoing radical development and improvement should be able to include subnetworks that reason using different axiom systems, even if these are incompatible. It should be able to evolve the ensemble of axiom systems its subnetworks utilize, as part of its ongoing growth process.
Among the multiple ways in which this is important, consider that a human-like AGI system has got to learn:
Systems for representing procedures (aka programs) for doing things
Systems for reasoning about language
Systems for meta-reasoning about the system itself and its potential modifications
Systems for representing and abstracting from observed sensory data patterns
It has got to experimentally evolve all these systems, and in a flexible and distributed, somewhat decentralized-control-based way. Approximately morphic mappings between these various systems must exist within the overall system’s cognitive architecture. However, maintaining tight consistency among all the in-progress evolving systems addressing each of these goals would present a major constraint and resource demand. Much better if these different systems can evolve in a loosely coupled way, trying to stay relatively coordinated with each other but also able to experimentally deviate from each other and become a little bit inconsistent with each other (while still mapping into each other approximatively).
How should we think about this? Should we view such an AGI system as a set of modular black boxes, each of which utilizes a separate axiom system, and then an overall controller that selects among black boxes, assigns resources to them, and destroys useless ones and creates new ones?
Definitely not — modular approaches are nearly always too inefficient for AGI systems with realistically limited resources. There is too much potential for sharing of knowledge and hypotheses between subnetworks operating using different, incompatible logics. One needs mutually inconsistent logics to be able to swap knowledge judiciously, yet without fully polluting or overtaking each other.
Which means, precisely, paraconsistency.
And this means that a common meta-representational fabric for gluing together these various representational and (implicitly or explicitly) logical systems must be, precisely, a paraconsistent interzone.
Hyperon Atomspace, the knowledge metagraph used in Hyperon to represent and cross-connect the knowledge of the system’s various subsystems, must be a reasonably efficient way to implement both the representations and logics of these subsystems, and the paraconsistent representations and operations in which they are allowed to interact while they explore their diverse compatibilities and incompatibilities.
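To illustrate the design principle (and only the principle; this is a toy Python sketch, not the actual Hyperon/Atomspace API), consider a shared store in which different subsystems attach their own evidence to a common atom, and contradictions between subsystems are retained rather than exploded or erased:

```python
# Purely a toy sketch (not the actual Hyperon API): an "interzone" store where
# different subsystems attach their own evidence to a shared atom, and
# contradictions between subsystems are retained rather than erased.

from collections import defaultdict

class InterzoneStore:
    def __init__(self):
        self.evidence = defaultdict(dict)   # atom -> {subsystem: (pos, neg)}

    def assert_evidence(self, atom, subsystem, pos, neg):
        self.evidence[atom][subsystem] = (pos, neg)

    def views(self, atom):
        """Each subsystem's own locally-consistent view of the atom."""
        return dict(self.evidence[atom])

store = InterzoneStore()
store.assert_evidence("(raining now)", "perception", pos=0.9, neg=0.1)
store.assert_evidence("(raining now)", "logic",      pos=0.2, neg=0.8)
print(store.views("(raining now)"))   # both contradictory views coexist
```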
A Paraconsistent Interzone Between the General-Relativistic and Quantum Worlds?
My proximal purpose in fleshing out the paraconsistent interzone concept has been my work on OpenCog Hyperon — but I want to note in passing a potential additional (and in the end fundamentally closely related) application in the domain of fundamental physics.
It’s well known that our two premier fundamental physics theories — General Relativity as a theory of gravitation, and the Standard Model which gives a unified quantum field theoretic treatment to the electromagnetic/weak/strong forces — are logically incompatible and inconsistent with each other.
Attempts to unify the two theories have so far led either to failure or to massive mathematical complexities that strain the capacity of the human brain and thus lead to frustratingly slow progress without clear expectations regarding outcome. String theory, loop quantum gravity and other more recent related innovations have revealed some amazing mathematical properties in the vicinity of “unified GR and QFT”, yet without making quite clear that such a unification can fully work, and without giving empirical predictions that we can validate using currently available measurement tools.
It may be that one or more of these current directions will ultimately be good enough, and once we have automated theorem provers to help crunch the math and more powerful particle accelerators to test the predictions, we will see that e.g. some version of loop quantum gravity is exactly correct.
However there is still enough confusion and ambiguity bouncing around that it seems worthwhile to also consider more out-there possibilities. Which is the context in which I’ve mused a bit about potential applications of paraconsistent interzones to unified physics…
It would be pointlessly glib to just say “Let’s unify physics by taking these two inconsistent things and saying they can coexist in a paraconsistent interzone, where contradictions are allowed so the fact that they’re contradictory doesn’t matter.”
But there may be theories lying in this direction that are less glib and more useful.
Lucien Hardy’s work, e.g. on operational formulations of GR, could provide some clues here. Among the several related avenues toward unifying GR and QFT that he considers is the “ontological approach”, in which one posits a third model of reality into which the GR model and QFT model can be morphed.
Elsewhere in the same paper, Hardy gives a probabilistic version of GR … and we know the complex-probability aspect of QFT. He waves his hands about mapping probabilistic GR into complex QFT, but there are acknowledged hacks in his mapping. He doesn't connect his (IMO somewhat wacky) quantitative mapping with his idea of an ontological approach.
But there are some pieces in Hardy’s paper that could perhaps be reassembled in different ways, to interesting and meaningful effect. As noted above, last year I suggested how one might morph real-number probabilities into complex probabilities elegantly using uncertain paraconsistent truth values as an intermediary — and this provides some clues as to how to connect Hardy’s ontological approach with Hardy’s desire to map between real and complex probabilities…
So — what if we took as an intermediate world-model between GR and QFT: A model in which a given observer sees, in any given subjective moment, a collection of possibly mutually inconsistent GR observables?
GR then is isomorphic to the case when all the observable-sets are internally consistent.
QFT manages the case where the observed situations contain inconsistencies ... the paraconsistent truth values describing the occurrence of various observables map into complex number truth values and QM/QFT ensues ... but QFT assumes flat spacetime...
So the new idea here regarding unified physics is: the fundamental physical reality is an inconsistent one, and GR is an approximation applicable when there is approximate consistency, whereas QM/QFT is an approximation that encompasses inconsistency but is applicable only when spacetime is flat...
Translating this highly speculative and in some regards novel approach to unified physics into the language of consciousness with its focus and fringe, we may say:
General Relativity (extending classical mechanics and electromagnetism) is a model of our physical world as it appears in the focus of consciousness of systems interacting in this world. Quantum Mechanics (and its extensions such as the Standard Model) is a model of our physical world as it appears in the fringe of consciousness of systems interacting in this world.
One interesting potential approach to formally unifying GR and QM is to bridge them via a “paraconsistent interzone” in which a situation (presumed within the fringe of some context assumed by the mind of some system operating in the physical world) is characterized by a potentially inconsistent set of GR models. This paraconsistent set of GR models is then closely approximated by a quantum superposition of GR models.
To see how much sense there really is to this direction, one would need to do the math to morph cases of inconsistent observation-sets involving curved spacetime into complex-probability-land. Which is fascinating but not especially easy, and it remains to be seen whether it’s more tractable than the monstrous-yet-beautiful borderline-transhuman mathematical messes/higher-dimensional-cathedrals that string theory and loop quantum gravity have led us into…
I certainly don’t have time to flesh out these ideas, trying to create beneficial decentralized AGI and help create a positive Singularity while also living a rewarding human life is already just about enough for this one dude as projected into this particular spacetime continuum — but hey, maybe someone whose immediate life-path is more physics-y will find these musings inspiring and turn them into a portion of something rigorous. Any takers out there?
Psi in the Paraconsistent Interzone
OK if we’re going to get crazy and speculative, why not go all out?
If we’re going to unify physics using paraconsistent interzones, why not explain psychic powers along the way? ;D
But I have only limited time this morning — too much AGI work to get back to … so, hmm, detailed consideration of the potential implications of a paraconsistent interzone approach to unified physics for the relationship between physics and psi will be left for another blog post. But I’ll take time to type a few paragraphs!
Readers of my prior thoughts on the foundations of psi will know my views on how morphic resonance type explanations of psi connect closely with “precedence principle” (cf Smolin) explanations of quantum mechanics. Smolin’s Precedence Principle basically says that patterns observed along a certain timeline are especially likely to be observed again along that timeline, more than straightforward probabilistic independence assumptions would lead one to predict. This can be used as a key ingredient in the derivation of the Schrödinger Equation, and is also, intriguingly, basically equivalent to the principle Charles Peirce called the “One Law of Mind”, the “tendency to take habits.” Viewed probabilistically, this principle basically says that along timelines in our physical universe, the probability distribution of patterns is pointy — a relatively few patterns get a lot of probability.
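A toy analogue of this pointiness (emphatically not Smolin’s actual formalism) is a Pólya-urn style process, where each pattern drawn along a timeline boosts its own probability of being drawn again:

```python
# A toy analogue (not Smolin's actual formalism) of the "tendency to take
# habits": patterns already observed along a timeline are drawn again with
# boosted probability, typically yielding a "pointy" distribution.

import random
from collections import Counter

random.seed(0)
timeline = ["a", "b", "c", "d"]   # initial pattern occurrences
for _ in range(1000):
    # precedence principle: past occurrences weight the future draw
    timeline.append(random.choice(timeline))

print(Counter(timeline).most_common())   # typically a few patterns hog most of the mass
```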
Continuing that thread, in the approach sketched here, one could look at precedence-principle type phenomena in the paraconsistent interzone. So: When a particular timeline arises in a particular context, if a precedence principle (aka Peirce’s “tendency to take habits”) occurs along that timeline, then one may expect to see psi type phenomena popping up in that context.
This leads us to the intriguing hypothesis that: Maybe the tendency to take habits occurs more in the fringe than in the focus.
Which seems glaringly intuitively obvious. And also connects very closely with James Carpenter’s “First Sight” theory of psi , which views psi as a core aspect of the unconscious scanning of the world that all organisms do as they go about their lives.
Quantum Consciousness / Biology / Measurement and the …
(In figuring out how to organize the various pieces of this blog post, I was confronted with the aesthetic question of whether the relationship of consciousness and quantum measurement is crazier than the physics underpinnings of psi, or vice versa. A real puzzler. Culturally it seems psi is currently viewed as crazier, yet from a fundamental perspective I think the consciousness/quantum-measurement connection may actually be wackier and more revolutionary. I couldn’t solve the puzzle so I just put psi first in the blog post for (quantum?)-random reasons…)
The notion of a paraconsistent fringe may also potentially be a useful model for understanding aspects of quantum biology, and for digging deeper into the relation between quantum theory and consciousness than is normally done.
My take on consciousness is first of all panpsychist — I think everything (and every non-thing) is conscious in its own special way. However, this stance doesn’t wipe away all the puzzles of consciousness, it just reframes them as problems about the relationships between different sorts of consciousness. And between different aspects of consciousness — e.g. focus and fringe — in different situations.
Let’s consider for instance the relationship between quantum theory and consciousness.
The connection between consciousness and quantum measurement is usually framed in a "Eugene Wigner" sort of way, focused on "consciousness as causing the collapse of the wave function." I think this framing has some truth to it, but is insufficiently subtle by a long shot. Yes, there IS consciousness associated with the transition from higher-superposition-entropy to lower-superposition-entropy states (aka "collapse") — but there is also consciousness associated with the higher-superposition-entropy ("uncollapsed") and lower-superposition-entropy ("collapsed") states as well.... There is consciousness all around and placing super much emphasis on the consciousness associated with the "collapse" event doesn't make so much sense to me.
Also, ascribing causality to the relation between conscious acts and "collapse" doesn't seem directly to-the-point, because an ascription of causality is always relative to some observing mind. So you have to say "From the view of mind A, the high-intensity-of-consciousness events in the mind of observer B were assessed as causally related to a decrease in superposition-entropy of system C".... This certainly happens (e.g. when human A watches human B who is operating quantum-research lab machine C) but I'm not sure why it makes sense to obsess on it among so many other dynamics also happening in real-world systems.
In quantum biology systems (e.g. parts of the human brain), there is nothing like collapse, rather there are numerous weakly-coupled systems operating in intermediate superposition-entropy states ... without any of them "collapsing" each other (i.e. without any of them experiencing a sudden decrease in superposition-entropy). How important such states are to human brain function is still an open question, with funky hypotheses like Matthew Fisher’s bouncing around unresolved.
This is related to what is known as "weak quantum measurement", which enables extraction of information in some cases from a quantum system without causing any sort of "collapse" -- again, by close coupling of an observer with an observed in a case where both are in a condition of intermediate superposition-entropy ...
It happens that in the classic quantum lab experiments one has extreme conditions where the superposition-entropy of systems is either super high or super low. However, cells and molecules within biological systems tend not to operate in these extreme conditions, which means that the "collapse" approximation is less applicable there and the mapping of consciousness into quantum events has to be subtler if one wants to understand the relation between consciousness and quantum biology.
In a weak measurement situation, one has a paraconsistent interzone between the two weakly-coupled systems. The two systems are both coupled and not coupled — they are both one common system and two distinct systems. In the case where two systems are bound into a coherent whole, there is — from the point of view of an external observer — effectively just one system. In the case where the two systems interact by means of a decohered intermediary system, there is plainly a pair of two systems. In the case where two systems are weakly coupled, there is a paradoxical situation and the systems are both one and not one.
An interesting area for investigation is: Could this be a case where versions of paraconsistent uncertain logic are a more directly useful tool than quantum probability logic? Quantum probabilities are the right way to model what happens inside a fully coherent quantum system. Classical real probabilities are the right way to model a fully decohered system. But how to model systems that are balanced partway between coherence and decoherence? Could it make sense to use algebras of uncertainty that lie partway along the mapping from classical to complex probability algebra — e.g. mappings of paraconsistent uncertain logic using the Schweizer-Sklar t-norm as considered in my Paraconsistent Foundations of Quantum Probability paper?
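For concreteness, the Schweizer-Sklar family mentioned here is easy to write down; this small Python sketch shows how a single parameter p slides the conjunction operator between familiar algebras (the example values are mine, purely illustrative):

```python
# The Schweizer-Sklar t-norm family, which interpolates between familiar
# conjunction operators as the parameter p varies (sketch for p != 0,
# with arguments assumed strictly positive when p < 0).

def schweizer_sklar(a: float, b: float, p: float) -> float:
    """T_p(a, b) = max(a^p + b^p - 1, 0)^(1/p); the p -> 0 limit is the product t-norm."""
    return max(a**p + b**p - 1.0, 0.0) ** (1.0 / p)

print(schweizer_sklar(0.7, 0.8, 1.0))    # p = 1: Lukasiewicz t-norm, 0.5
print(schweizer_sklar(0.7, 0.8, 0.001))  # near p = 0: approx product, ~0.56
print(schweizer_sklar(0.7, 0.8, -20.0))  # large negative p: approaches min, ~0.7
```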
Ironically, the relation of QM to traditional Chinese medicine probably lies here, in the weak coupling of systems with intermediate superposition-entropy and the consciousness-dynamics associated with this. But this sort of situation is explicitly and incorrectly ruled out by the "collapse" paradigm.
And how does the relation of focus vs. fringe work, in the context of weak coupling of systems with intermediate superposition-entropy? This — as a final speculative postscript to the barrage of funky speculations the patient reader has by now waded through — seems closely related to Qigong and such techniques, which are about focused consciousness and its impact on weakly-coupled quantum biology systems.
Qigong needs a sort of weak coupling of the healer's consciousness with the weakly-coupled quantum-biological networks in the healee's body... which requires a sort of focusing that doesn't fully collapse the healee's body-networks' quantum state but does partially guide this state toward definiteness in some respects in accord with healing intention.
That is, the focus of the healer’s attention is on part of the healer’s body, and the fringe of the healer’s attention spreads through a broader portion of the healee’s mind and body networks as well as through swaths of the broader universe the healer is connected to. The healer’s body and the healee’s body, normally distinct, are connected in a sort of paraconsistent interzone — not quite unified and not quite separate. In this contradictory space healing energy is able to move from the healer to the healee, exploiting both the oneness of the two systems (which allows the energy transmission to happen) and their separation (which allows the healer to remain healthy and keep projecting their health).
Paraconsistent Vibrations
The potential connection with energy medicine reminds me of a mathematical connection that’s been bouncing around in my mind for some time now: The connection between paradoxical logical expressions and waveforms.
Spencer-Brown in Laws of Form elaborates fascinatingly on the mapping between
X = not-X
and the discrete time-series
…., True, False, True, False, ….
As Kauffman and Varela elaborated in their work on waveform logic, looking at the four waveforms
…, True, True, True, True,…
…., True, False, True, False, ….
…., False, True, False, True,…
…., False, False, False, False,…
one reconstructs the four truth values of Constructible Duality logic, the form of paraconsistent logic that I’ve mapped into OpenCog’s PLN probabilistic logic and (approximatively) into quantum probabilistic logic. E.g. one can map via
BOTH T and F = …, True, True, True, True,…
T = …., True, False, True, False, ….
F = …., False, True, False, True,…
NEITHER T nor F =…., False, False, False, False,…
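Here is a small Python sketch of this waveform reading: iterating X = not-X unfolds the paradox into an oscillation in time, and each of the four periodic True/False waveforms can be classified as one of the four truth values above (the encoding and phase convention here are illustrative choices):

```python
# A small sketch of the waveform reading of paradox: X = not-X unfolds into
# an oscillation, and the four periodic waveforms line up with the four
# truth values listed above (phase convention is an illustrative choice).

def iterate_paradox(x: bool, steps: int = 6):
    """Spencer-Brown's X = not-X, unfolded in discrete time."""
    series = []
    for _ in range(steps):
        series.append(x)
        x = not x
    return series

print(iterate_paradox(True))   # [True, False, True, False, True, False]

def classify(wave):
    """Map a periodic True/False waveform to one of the four truth values."""
    if all(wave):      return "BOTH True and False"
    if not any(wave):  return "NEITHER True nor False"
    return "True" if wave[0] else "False"

print(classify(iterate_paradox(True)))   # "True" under this phase convention
```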
What we see in this simple example is a reflection of a more general class of mappings between paradoxical logical equations, ensembles (or I prefer to think of them as “choruses”) of waveforms, and paraconsistent truth values.
If one has paradoxical equations in uncertain paraconsistent logic, e.g.
X = probably not X
Or
X = Y with probability .3 and not-Y with probability .7
Y = X with probability .2 and not-X with probability .8
or more complex equations weaving multiple variables, with uncertain conjunction/disjunction formulas, then one gets more complex and often formally chaotic waveforms.
The standard textbook example used to explore basic chaos theory is repeated iteration of
f(x) = r x (1-x)
(for various values of r), which embodies a stretching and folding dynamic (multiplying by r > 1 stretches in the sense that it increases the distance between inputs, then feeding into x(1-x) folds in the sense that it maps x and 1-x into the same number).
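For instance, at r = 4 the logistic map is chaotic, and the sensitive dependence on initial conditions is easy to see numerically:

```python
# The textbook logistic map, iterated at r = 4 where the dynamics are chaotic;
# nearby starting points diverge rapidly (sensitive dependence).

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)   # stretch by r, fold via x(1-x)

x, y = 0.3, 0.3 + 1e-9         # two almost-identical initial conditions
for step in range(40):
    x, y = logistic(x), logistic(y)
print(abs(x - y))              # typically of order 1 after ~40 steps
```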
Suppose one has an operation such as
f(X) = R( X AND NOT-X )
where AND is an uncertain conjunction operation, and R is an operation that tends to spread truth values concentrated near “Both True and False” more broadly across the CD truth value space. This sort of function f(), if one iterates it repeatedly from an initial uncertain paraconsistent truth value X, yields a chaotic series of uncertain truth values
X, f(X), f(f(X)), …
which is conceptually similar to the
…., True, False, True, False, …
type series obtained from Spencer-Brown’s crisp logical paradoxes.
An example of an operation R of this nature is the PLN “Rule of Choice”, which in the face of conflicting positive and negative evidence in favor of a statement, simply prunes the search space by choosing one or the other. So in its simplest form, given a truth value (10,6) representing 10 items of positive and 6 items of negative evidence, the Rule of Choice would yield a truth value (10,0). This is a stretching operation in that two similar truth values like (10,9) and (9,10) may be mapped into extremely different truth values like (10,0) and (0,10). There are also uncertain, less extreme approximations of the Rule of Choice of course, which discount evidence of the minority type, but not all the way to zero … so that e.g. (10,9) and (9,10) might map into (9,1) and (1,9).
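Here is a hedged toy version of this iteration in Python. The AND and the spreading operator R below are deliberately engineered so that the positive-evidence component follows the logistic map exactly, making the resulting truth-value series demonstrably chaotic; they are illustrative choices, not the precise PLN formulas.

```python
# A hedged toy version of f(X) = R(X AND NOT-X) on degree-valued truth values
# (pos, neg). Operator choices are engineered so the iteration is demonstrably
# chaotic -- illustrative choices, not the precise PLN formulas.

def NOT(tv):
    pos, neg = tv
    return (neg, pos)

def AND(a, b):
    # product t-norm on the positive components: the "folding" step, since
    # X AND NOT-X sends both x and 1-x to the same value x*(1-x)
    return (a[0] * b[0], max(a[1], b[1]))

def R(tv, r=4.0):
    # a toy "spreading" operator: stretch the positive evidence by r, then
    # keep the value definite by setting neg = 1 - pos
    pos = min(r * tv[0], 1.0)
    return (pos, 1.0 - pos)

X = (0.3, 0.7)
for _ in range(8):
    X = R(AND(X, NOT(X)))
    print(round(X[0], 4), round(X[1], 4))   # wanders chaotically, logistic-style
```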
The operation f() suggested above is somewhat related to the “cognitive equation” I proposed in my 1995 book Chaotic Logic, which models mind in general as a repeated iteration of combination and filtering. Given a collection C of combinatory entities C1, C2, …, the combination operation considered is one that forms the collection of all reactive pairs Ci * Cj, and the filtering operation is one that chooses only some of these pairs to get to react and create a product Ck= Ci * Cj according to the “multiplication table” of combinatory entities. If the set of combinatory entities is a set of statements and their negations, and the filtering operator R embodies “Rule of Choice” type mechanisms, then one has an iteration
g(C) = R( C AND C)
where AND is a vectorial operation. Since C includes many pairs X and not-X, this iteration includes many instances of f() as sub-cases, meaning its overall internal trajectory includes many chaotic sub-trajectories. In this more general setting, R is still stretching, but the combination operation is only folding for some portions of the combination set C.
The operation of projecting an uncertain paraconsistent logic expression to a time-series of paraconsistent truth values can also be viewed quite pragmatically as a way of generating vectorial embeddings from sets of paraconsistent logic expressions. In recent research with the OpenCog system, my SingularityNET colleagues and I have explored various ways of transforming OpenCog knowledge-hypergraph nodes and links (Atoms) representing uncertain logic terms and relationships into vector representations, including DeepWalk, GraphCNN and Kernel PCA. Translating OpenCog Atoms into vectors via taking the logic expressions they represent into (often chaotic) time-series would be an alternative approach with some potential advantages.
This is a big and weird enough topic to deserve a whole blog post of its own (and more) — and a bunch of funky pictures of attractors and resonance patterns — and it will get one before long — but the reason I’m briefly mentioning these ideas here is: Via this example we see how paraconsistent logic equations can be mapped into ensembles of chaotic waveforms. The example I’ve sketched here (and will elaborate more in a later post) is 1D but higher dimensional examples can be straightforwardly extrapolated.
One can then look at resonances among various chaotic waveforms. If one has a resonance pattern (“resonance mode”) wAB among waveforms A and B, and another resonance pattern wBC among waveforms B and C, then putting A, B and C all together, the resonance patterns wABC include some patterns emerging via combination of the resonance patterns wAB and wBC. One then can ask: What is the relation between the resonance patterns wABC and the logical result of combining the paradoxical expressions underlying A, B and C in a paraconsistent logic?
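As a very rough numerical probe of what “resonance” between such waveforms might mean (purely illustrative; the spectral-overlap measure below is my own stand-in, not a derived quantity):

```python
# A rough numerical probe (illustrative only) of "resonance" between chaotic
# waveforms: compare the power spectra of two series and measure their overlap.

import numpy as np

def chaotic_wave(x0, n=1024, r=4.0):
    """Generate a chaotic waveform by iterating the logistic map."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return np.array(xs)

def spectral_overlap(a, b):
    """Cosine similarity of power spectra -- one crude stand-in for 'resonance'."""
    pa = np.abs(np.fft.rfft(a - a.mean())) ** 2
    pb = np.abs(np.fft.rfft(b - b.mean())) ** 2
    return float(pa @ pb / (np.linalg.norm(pa) * np.linalg.norm(pb)))

A, B, C = chaotic_wave(0.3), chaotic_wave(0.31), chaotic_wave(0.7)
print(spectral_overlap(A, B), spectral_overlap(A, C))
```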
This leads us directly into the mathematical formulation of Akashic fields put forth by legendary chaos theory pioneer and beautiful raving mystic genius Ralph Abraham in his book Demystifying the Akashic Field and other works. Abraham points out, with copious examples, that gorgeous, complex 2D patterns — resembling the patterns people tend to see in altered states of consciousness, often with rich symbolic spiritual meanings — can emerge from nonlinear chaotic lattice dynamics. He identifies this as related to the Akashic fields and related realms noted by various spiritual traditions as corresponding to a more fundamental layer of reality than our everyday spacetime continuum. It seems that patterns similar to the ones Abraham finds — and wilder higher-dimensional analogues — would naturally emerge from sets of paradoxical logical expressions according to the route briefly indicated here.
Paraconsistent interzones become viewed as equivalent to choruses of chaotic waveforms with complex structures — related to and combining the logical structures in the consistent realms that the interzone connects — emerging and dissolving as part of the nonlinear resonance modes between these waveforms. The chaos and complexity emergent from the waveforms is tied with the paraconsistency and will fade as the conflicting options are selected among and consistency is enforced — which is part of the reason why so many complex archetypal patterns emerge in the fringe (the way the “unconscious” manifests itself in consciousness) and then dissolve in the different, more coherent/consistent light of the focus. But the repeated process of consistent-izing and then paradoxicalizing — as one sees in the repetition of the rule of choice R and the paradoxical operation X AND NOT-X, and more generally in the cognitive equation as applied to paraconsistent statement-sets — can itself serve a role of generating the chaos that generates complex forms via resonance.