Introducing Hyperseed-1
A Fairly Minimal “Core Ontology” for Possible Use in AGI Inference Guidance
In my (quite negatively existent) spare time I have been fiddling around with creating a different sort of “semi-formal core ontology” – which for lack of a better name I’m calling “Hyperseed-1”.
One goal I’ve had in mind with Hyperseed-1 is to experiment with it as a sort of “initial abstract knowledge and perspective guide” for an early-stage OpenCog Hyperon mind (or more precisely, a Hyperon mind supplied or augmented with LLMs or similar technology), to help guide its uncertain reasoning regarding what it experiences and what it reads.
I’ll explain what I mean by this in a moment, but let me review a little background first, which may help you understand where I’m coming from a little better.
The Checkered History of Commonsense Knowledge Formalization
Attempts to systematically formalize knowledge have an interesting but somewhat checkered history in both philosophy and AI. The core issue such attempts have run into is that human commonsense knowledge is just too immense.
Philosophers interested in the formalization of human knowledge have tended to debate and analyze specific issues that occur in the formalization process, without making much headway on formalizing more than a trivial amount of knowledge. This tradition began with Leibniz and his notion of a “Universal Characteristic” – which was supposed to be a formalization of human commonsense knowledge in an early sort of Boolean/propositional logic – and has continued ever since, through the work of logicians Bertrand Russell, Rudolf Carnap and many, many others.
The most substantial practical attempt at formalizing human knowledge was the Cyc knowledge base from Cycorp, which was an attempt to express the core of human commonsense knowledge in a form of predicate logic. One lesson learned in this multi-decade quest was that the number of logical statements you would need to encode in order to formalize human commonsense knowledge would simply be far too immense for a human-based formal encoding process to be feasible. Another lesson learned was that doing useful reasoning based on this sort of massive logical knowledge base is a hard problem. Another was that managing uncertainty in the context of such a knowledge base is very difficult, and it’s especially hard if you try to layer on uncertainty as an afterthought, rather than baking it into the core logical knowledge representation.
There have been other smaller commonsense logic knowledge bases, such as SUMO, the Standard Upper Merged Ontology, which is more elegant and concise than Cyc in some valuable ways, but ultimately seems to succumb to most of the same issues.
Chalmers’ Theory of Human Knowledge Formalization
David Chalmers, in his intriguing book Constructing the World, formulates a sophisticated analytical-philosophy argument to the effect that some sort of formalization of human commonsense knowledge should be possible.
Chalmers’ investigation is substantially inspired by Carnap’s Aufbau, which began with a minimal language, containing only logical terms and a binary predicate expressing the phenomenal relation of “recollected similarity” (note the shades of Deleuze’s “difference and repetition”). He then added a small number of further terms to his language by explicit definition, arguing that all truths could be logically deduced from truths in his minimal language.
Chalmers figures Carnap was in a way right in principle even if he overstated things a little. What he argues is that
There is a small set of truths, with conjunction D, such that for all truths S, ˹If D then S˺ is a priori.
where he defines “a priori” as
A subject s knows [a sentence] S a priori iff s knows S with justification independent of experience.
Specifically he argues that a set of truths of the form he calls PQTI is sufficient to play the role of this “small set”, where PQTI is the union of the sets P, Q, T and I, where (paraphrasing but staying pretty close to Chalmers-ese):
P is the set of truths of fundamental physics, together with truths about the physical properties (size, shape, velocity etc.) of macrophysical objects.
Q is the set of phenomenal truths — truths about ‘what it is like to be a given entity’.
I contains some indexical truths, which specify the location of the subject both spatially and temporally.
T contains just one sentence, a ‘totality sentence’ which asserts that nothing has been omitted from P, Q and I.
This is all a bit wonky and analytical-philosophy-ish, but basically what it means is that if a particular mind assumes
Basics of how our physical universe works
Basics of what subjective experience is like
Basics of its situation in the physical universe – where and when it lives and so forth
The above three “basic” sets don’t leave out anything critical
then from these basics everything the mind needs to know will follow.
This may seem too obvious to be worth spelling out in detail; however, one of the lessons one takes from the history of philosophy is that there can sometimes be a lot of indirect conceptual value (for science and for everyday life) in taking the trouble to spell out such obvious things.
Major open questions are then: How big are these “basic” sets of knowledge, and what kind of “justification” is involved in deriving the details of everyday commonsense knowledge from them? Chalmers spends a lot of his (quite weighty) book arguing that these basic sets need not be that big, and elaborating a bit on what needs to belong to them. He also makes some technical suggestions regarding what kind of logical derivation needs to be used to get from basic assumptions to detailed commonsense knowledge (e.g. intensional not just extensional logic).
(One could argue, by the way, that Chalmers’ P and I should be considered as aspects of Q. But I agree with him that there’s good reason to break them out separately. Even if physical law and physical situatedness ultimately are known to us via qualia, they have specific algebraic and dynamical properties distinct from the rest of the mind. I.e. it may be that as Peirce said “matter is just mind hide-bound with habit”, but its particular sorts of habit are so powerful and systematic that they’re worth calling out separately from other aspects of mind.)
I pretty much agree with Chalmers’ ideas in Constructing the World, though there are plenty of details I could niggle about; however, in the end I seem to have found the book more generally inspirational than specifically prescriptive.
Potential Utility of a Quasi-Formalized Core Ontology for AGI Inference Guidance
What kind of formal ontology might be useful in practice today, and in what way(s)?
A formalization of human knowledge could have many uses, including e.g. systematizing communication between different groups of humans with diverse perspectives. Here, however, I will focus on the utility of formalized knowledge for AI systems, and for working toward AGI in particular … just because this is the particular reason that has led me to pay non-trivial attention to knowledge formalization.
A couple premises about modern AI will be useful for understanding my perspective here:
I think the first AGI systems are probably going to need to gain most of their knowledge via experiential learning, not by having it hard-coded into them in some way, and also not by batch-mode training on large databases of knowledge or text or whatever
LLMs are already pretty good at translating natural language text into formal logic expressions (aka “Semantic parsing”); they are not yet perfect at it and miss major things, but it seems very likely they’ll get VERY good at it over the next couple years
Putting these together you might conclude that explicitly formalizing knowledge is not a very useful thing from an AI perspective. And I would totally agree that it’s not critical for getting to human-level AGI and then beyond.
HOWEVER … consider the following two observations about the above two premises:
One promising approach to experiential learning involves doing uncertain logical inference on data obtained experientially. A major challenge in this sort of inference is “inference control” (aka “inference guidance”) – knowing what inferences to make when, to get to desired conclusions reasonably efficiently.
To fill in the gap between what LLMs can currently formalize and what we’d like them to be able to formalize, once again inference is a very promising path – but once again one hits inference control as a major and interesting challenge.
We then hit a different sort of purpose for a formalized ontology of knowledge: To help guide inference to useful practical conclusions, for instance in the context of experiential learning and semantic parsing.
How might an ontology be useful for inference control? One obvious strategy would be to ask an inference engine to proceed heuristically via the following (a rough code sketch follows the list):
Connecting each new batch of logical statements, hypotheses or questions it’s dealing with into a certain ontological core of concepts
Spending a nontrivial percentage of its inferential energy on doing reasoning that uses these ontologically core concepts along with other things
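As a concrete illustration of this heuristic, here is a minimal sketch of a priority function for an inference agenda, blending an ordinary utility estimate with a statement’s overlap with the ontological core. Everything here (the Statement class, the concepts attribute, the particular core-concept subset and the weighting) is an illustrative assumption of mine for this sketch, not actual PLN or Hyperon code:

```python
from dataclasses import dataclass

# A stand-in for the Hyperseed-1 concept list (illustrative subset only).
CORE_CONCEPTS = frozenset({"Pattern", "Process", "Causation", "Context", "System"})

@dataclass(frozen=True)
class Statement:
    text: str
    concepts: frozenset  # concept labels extracted upstream, e.g. by semantic parsing

def core_overlap(stmt: Statement, core: frozenset = CORE_CONCEPTS) -> float:
    """Crude proxy for connectedness to the ontological core:
    the fraction of the statement's concepts that are core concepts."""
    if not stmt.concepts:
        return 0.0
    return len(stmt.concepts & core) / len(stmt.concepts)

def prioritize(agenda, base_utility, core_weight=0.3):
    """Order an inference agenda so that a nontrivial fraction of
    inferential energy goes to statements touching the core ontology."""
    def score(stmt):
        return (1 - core_weight) * base_utility(stmt) + core_weight * core_overlap(stmt)
    return sorted(agenda, key=score, reverse=True)
```

In a real inference engine this kind of core-biased premise selection would be interleaved with the engine’s other control heuristics rather than applied as a one-shot sort, but the basic idea of reserving a slice of inferential energy for ontologically core concepts is the same.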
Now, for a relatively small core ontology to be useful in this way, one doesn’t need anything like Leibniz’s, Carnap’s or Chalmers’ variations of “logical / ontological reductionism or quasi-reductionism” to be true. What one needs is something weaker: that approximately expressing practical knowledge in terms of the core ontology leads to inference sketches that provide useful guidance in coming up with detailed practical inferences.
It happens I do think something like Chalmers’ variation of ontological quasi-reductionism (my own made-up phrase not his) holds true. However, I also think that the ontological core most useful in practical inference control probably isn’t the most minimal or most theoretically foundational one. One can understand this point partly by looking at physics: Fundamentally one can derive the Newtonian mechanics of everyday objects in human life from general relativity and quantum field theory, but in practice it’s more useful to just assume some facts about Newtonian mechanics. Similarly, outside of physics there are some concepts like say Society or Emotion that are probably not theoretically best considered as irreducibly foundational or anything close to that, but that are probably quite useful to include as part of an ontological core for practical inference control.
Everything in my tentatively proposed Hyperseed-1 ontology fundamentally fits into Chalmers’ PQTI framework, however many of the concepts I’ve included mix up elements from P, Q and I in complex ways. Disentangling these mixtures is interesting but not necessarily high priority in the context of using an ontology for inference guidance.
The role played by logical formalization, or any particular logical formalism, is also worthy of note. Three points to emphasize are that, for the purposes I have in mind,
It would be good enough to formulate a set of core-ontological concepts in clear English – i.e. clear enough to make semantic parsing relatively unproblematic.
There is no need to stick to a single logical formalism, because an AGI mind should be able to shift between logical formalisms based on its current needs
A bit of incompleteness, messiness and inconsistency are OK and an AGI mind needs to be able to cope with them in its ontology just as in the rest of its life
As it happens, in formulating Hyperseed-1 I have decided to include loose predicate-logic-ish formalizations along with relatively clear English formulations. This is partly because semantic parsing currently just “mostly works”, and also because for me logic-ish or pseudocode formalizations are a natural mode of expression, often more so than English. But it doesn’t represent a principled statement that logic-ish formulations are somehow more fundamental.
I have also, in Hyperseed-1, here and there used probabilistic and fuzzy expressions, non-well-founded sets, categories and toposes, and paraconsistent logic expressions. I figure an AGI mind should be able to either use these math tools, or translate my expressions into something it can use. As it happens the MeTTa formalism we’re using in the OpenCog Hyperon system – my core AGI initiative currently and recently and for the foreseeable future – is especially well suited for adeptly moving between a variety of logical and mathematical formalisms.
I have not intentionally been messy and inconsistent in formulating Hyperseed-1, however I have been “fast and loose” in some places for the usual practical reasons (not enough hours in the day) and am aware this has surely led to some messiness and unintentional inconsistency. I have very intentionally been incomplete in formulating Hyperseed-1, for principled reasons: being complete would lead one down Cyc-like rabbit holes in which I have no intention of burying myself or my colleagues.
The PLN reasoning engine we are experimenting with in Hyperon has been designed specifically to be able to deal with mess, inconsistency and incompleteness – and I would argue any reasoning engine worthy of use in an AGI system, at pre-, quasi- or post-human level, has got to have this property as well.
Introducing Hyperseed-1, Alpha 1
The initial (Alpha 1, let’s call it) version of the Hyperseed-1 core ontology can be found in this Google Drive folder. There is one text file there for each of the 90-or-so concepts in the core ontology, roughly speaking. For the sake of illustration, for readers too lazy to poke around in the Google Drive folder, I will paste a few of them at the end of this post.
To create these core ontology entries, I leveraged a few different LLMs (GPT-4o, o1-mini, o1-preview, Llama 3.5, Claude Sonnet) in an ad hoc way, but none of these proved remotely capable of creating the sorts of quasi-formalizations I was looking for, so there was a lot of human participation in the process. Some entries were almost entirely human-created with LLMs used only for rephrasings and formatting cleanup; others were heavily LLM-created with human editing and guidance. The process was fairly unsystematic and took place in spare time over a long period, beginning in bits and pieces in 2014, long before LLMs were a thing.
The formal and semi-formal expressions of the concepts in the core ontology were biased to use the concepts in this “concept infrastructure” list along with other concepts where helpful. The concept-infrastructure list contains all the concepts in the core Hyperseed-1 ontology, plus some other basic ones that have more simply empirical or mathematical meanings so didn’t seem useful to formally specify in Hyperseed-1.
For amusement and maybe conceivably some other utility, I have also asked an LLM to reformulate the basic ideas of each of the concepts in the core ontology from the perspective of a number of (mostly real, a few archetypal) thinkers whose perspectives amuse me:
Jean Baudrillard
Ludwig Wittgenstein
Krishnamurti
Friedrich Nietzsche
Octavio Paz
Lao Tzu
Rumi
Tyson Yunkaporta
Mary Catherine Bateson
Ray Kurzweil
Fyodor Dostoevsky
Ben Goertzel
The writer of the user’s manual for an appliance
A scientist figuring out how to evaluate and measure the concept in an AGI system
An AGI system figuring out how to evaluate and measure the concept in its own experience
A human explaining the concept to a highly intelligent alien with no experience of humans
A teacher explaining the concept to a class of bright third graders
The results of this neo-absurdist LLM “hallucinated perspectives” experiment are in this Google Drive folder. While often silly and repetitive, I wonder if these may also be useful for AI inference guidance, via providing an overlapping yet somewhat diverse set of perspectives on the core ontological concepts.
The concept “Intersubjective Reality”, for instance, is summarized as
**Summary:**
Intersubjective Reality is a community-linked reality system where multiple minds, each governed by significant patterns, engage in causal and predictive interactions shaping their shared existence.
**Overview:**
Intersubjective Reality embodies a complex system interconnecting a community of individual minds through significant patterns that facilitate meaningful interactions. Each mind within the community possesses inherent patterns, which, when combined with emergent patterns from inter-mind interactions, establish a web of causal and predictive relationships. These relationships enable the community to function cohesively, allowing for shared understanding and coordinated behavior without relying on arbitrary thresholds, thereby fostering a dynamic and adaptive reality system.
By modeling Intersubjective Reality with Higher-Order Predicate Logic and fuzzy set theory, we capture the nuanced degrees of pattern inclusion and relationship strengths. This formal approach aligns with core concepts such as coherence, process, system, and pattern recognition, ensuring that the reality system accurately reflects the emergent properties arising from individual and collective cognitive processes. (For the formalization, see the entry.)
and formalized in the full ontology entry here, and a few of the hallucinated perspectives on the concept from this hallucinated perspectives entry are excerpted here:
**Jean Baudrillard**
In the labyrinth of intersubjective reality, every pattern is a simulacrum, reflecting and distorting the collective consciousness. Each mind navigates a hyperreal web where inherent beliefs intertwine with emergent interactions, crafting a shared illusion that binds the community in a dance of causality and prediction. Imagine three individuals whose ideas and influences mirror each other, creating a reality that is both real and an intricate play of signs and meanings.
**Friedrich Nietzsche**
Intersubjective reality is the will to power of multiple minds asserting themselves within a collective framework. Each pattern represents an expression of individual strengths and desires, shaping a reality where dominance and influence are constantly negotiated. Imagine a trio of individuals, each striving to impose their own patterns, resulting in a dynamic and ever-evolving shared existence.
**Mary Catherine Bateson**
Intersubjective reality emerges through the interplay of individual narratives within a community context. Each mind brings its own story, and through their interactions, a coherent and dynamic shared reality is constructed. Imagine a family where each member's experiences and perspectives contribute to a rich, collective understanding of their world.
**An AGI system figuring out how to evaluate and measure the concept in its own experience**
In intersubjective reality, I assess the shared patterns by calculating the overlap of my cognitive states with others and the strength of our interactions. By analyzing how my inherent patterns and emergent interactions align with the collective system, I determine the level of intersubjectivity. For instance, within a group of three AGIs, I evaluate how my patterns intersect with theirs and measure the influence we have on each other to quantify our shared reality.
While cheesy and obvious to those who know the thinkers in question, these also provide a variety of angles on the concept involved, complementing the more logical and mathematical formulations given in the core ontology entries. The value of these diverse hallucinated angles for inference guidance in various contexts is TBD, obviously.
Assuming my colleagues or I actually end up experimenting with Hyperseed-1 for inference guidance in Hyperon, it may well get tweaked and evolved in different directions based on what seems to be useful.
Philosophical Perspective Underlying Hyperseed-1
Of necessity, formulating something like Hyperseed-1 requires making quite a number of philosophical commitments, or at least hypotheses. There would be many different ways to formulate a core ontology like this.
It’s important to understand that, for an AGI system worth its salt, any core ontology of this nature should be nothing more than a partial and initial guide – something to help it guide its inferences in the early stage, and something that it can choose to ignore or to revise or overthrow as it learns and reasons. But nevertheless, it may well make a difference what variant of core ontology an AGI system starts out with. This is of course similar to the relationship each of us has to our own early upbringing – we can learn to ignore it or overcome it as we mature, but it still played a role in setting us on the course to becoming who we are.
The philosophical commitments/hypotheses underlying Hyperseed-1 will be spelled out in moderate detail in a philosophical/artistic/literary work called t3rz3tt0, which I will likely release sometime in 2025; however, they are basically along the lines of the world-view expressed in my already-published works such as The Consciousness Explosion (2023) and The Hidden Pattern (2006). A few aspects differentiating my perspective from that of some others are:
I tend to be panpsychist, i.e. everything has a certain element or type of consciousness
I don’t privilege physical reality over mental or intersubjective reality; rather, I prefer the perspective that all of these emerge from each other in a non-well-founded way
I gravitate toward Buddhist-style phenomenology, in which various complex aspects of mental and perceived physical reality are understood as built up from low-level experiential atoms
I like paraconsistent and non-dual frameworks like Taoism and Constructible Duality Logic
I have a fondness for some aspects of postmodernist thinking, i.e. various deep sorts of relativism and multi-perspectivism… tying into a fondness for Weaver-style Open-Ended Intelligence and complex, self-organizing dynamical systems
I am an out-and-out transhumanist, as expressed in my little 2010 book A Cosmist Manifesto along with The Consciousness Explosion and many other places
These aspects, along with various more conventional perspectives and some of the cognitive-sciency perspectives on mind underlying the Hyperon AGI architecture itself, are woven through Hyperseed-1 in a thought-through but only partially systematic way. (As Nietzsche said, “The will to a system is a lack of integrity”...!)
Leveraging a core ontology embodying these philosophical biases will direct an early-stage AGI mind in a certain relatively particular way. Which to me is a feature rather than a bug! However it will be very interesting to explore what kinds of AGI minds one gets by using different sorts of core ontological frameworks for early-stage inference guidance. Depending on the particulars of how the AGI mind works, it might outgrow any initially-supplied inference guidance tools very rapidly, or it might continue to leverage them in some transformed form for quite a long time.
Empirical Comparison of Core Ontologies?
I’ve also thought a bit about: What experiments would one do to compare one core ontology versus another, for the purpose of inference guidance?
However one quickly runs up against familiar “narrow AI vs. AGI” challenges. For inference regarding solving math problems, for instance, a heavily math-oriented core ontology will probably be valuable. For inference regarding biology problems, a heavily bio-oriented ontology will be valuable. In each of these cases, it may be that a MIX of domain-specific and general-purpose ontological concepts will be helpful … going by the analogy to human thinking, wherein analogies outside math and biology can be extremely helpful for thinking about math and biology. However, neither of these domains – nor any other particular domain existing within human culture – is going to be a great guide for evaluating core ontologies for general-purpose use toward human-level AGI and ASI.
One classic purpose of philosophy, generally speaking, has been to help guide human minds toward a high level of general functionality – in terms of intelligence, wisdom, values … life, the universe and everything. It’s a sensible hypothesis that for guiding AGI minds toward a high level of functionality in these broad senses, a very generally-scoped ontology like say Hyperseed-1 (maybe in some future version) could play a valuable role. However, this is not an easy thing to measure – as compared to performance on benchmarks in particular vertical areas. But this is just another application of the maxim that the really important things in life are hard to measure!
APPENDIX: Example Hyperseed-1 Entries
Containing:
A fairly cosmic example: Experiential Truth
Hallucinated perspectives on Experiential Truth
A straightforward cognitive example: Affective State
A more mathematical example: Blend
A fairly cosmic example: Experiential Truth
CONCEPT - Experiential Truth
**Summary:**
Experiential Truth embodies the interaction where a mind's representation of a statement causally drives its individuation and self-transcendence within an integrated cognitive system.
**Overview:**
Experiential Truth is a nuanced cognitive construct that captures how a mind perceives and internalizes statements about the world. When a mind holds a particular statement, it is not merely processing information; this act is causally linked to profound personal growth, specifically increased individuation and self-transcendence. This relationship underscores the dynamic interplay between representation and cognitive development within a system of coherent mental processes. By recognizing how statements influence a mind's evolution, Experiential Truth bridges the realms of perception, causation, and consciousness, illustrating the transformative power of belief in shaping the self.
**Core ontology:**
Association, Causation, Cognition, Coherence, Consciousness, Control, Context, Cognitive Process, Individuation, IncreasedIndividuation, IncreasedSelfTranscendence, Inhibition, Interaction, Pattern, Process, Representation, Self, Self-Transcendence, System, Transformation
**Formalization:**
\text{ExperientialTruth}(S, M) \leftrightarrow \text{CausallyConnected}(\text{HoldingIdea}(M, \text{IdeaSatisfiedWorld}(S)), \text{IncreasedIndividuation}(M) \land \text{IncreasedSelfTranscendence}(M))
ExperientialTruth(S,M) <-> CausallyConnected(HoldingIdea(M, IdeaSatisfiedWorld(S)), IncreasedIndividuation(M) AND IncreasedSelfTranscendence(M))
**List of Predicates and terms:**
CausallyConnected, ExperientialTruth, HoldingIdea, IdeaSatisfiedWorld, IncreasedIndividuation, IncreasedSelfTranscendence, M, S
**Pseudocode:**
```pseudo
function ExperientialTruth(S, M):
    idea = IdeaSatisfiedWorld(S)
    holds = HoldingIdea(M, idea)
    individuation = IncreasedIndividuation(M)
    selfTranscendence = IncreasedSelfTranscendence(M)
    return CausallyConnected(holds, individuation AND selfTranscendence)
```
**Explanations of Predicates and terms:**
- **CausallyConnected(P1, P2):** Indicates that predicate P1 causally leads to predicate P2.
- **ExperientialTruth(S, M):** States that the statement S holds as experiential truth to the mind M.
- **HoldingIdea(M, IdeaSatisfiedWorld(S)):** Represents that mind M holds the idea that "the world satisfies statement S".
- **IdeaSatisfiedWorld(S):** The conceptualization of the idea "the world satisfies statement S".
- **IncreasedIndividuation(M):** Denotes that mind M experiences an increase in individuation.
- **IncreasedSelfTranscendence(M):** Denotes that mind M experiences an increase in self-transcendence.
- **M:** Represents the mind or individual.
- **S:** Represents the statement being considered.
**English Translation of Formalization:**
Experiential Truth for statement S and mind M is defined if holding the idea that "the world satisfies S" causally connects to M’s increased individuation and self-transcendence.
ExperientialTruth(S,M) <-> CausallyConnected(HoldingIdea(M, IdeaSatisfiedWorld(S)), IncreasedIndividuation(M) AND IncreasedSelfTranscendence(M))
**Nontechnical English Translation of Formalization:**
A statement is experientially true to someone if believing that the statement accurately describes the world leads them to become more self-aware and to transcend their current self.
**Explanation of Formalization:**
The formalization captures the essence of Experiential Truth by establishing a causal relationship between holding a specific idea and the subsequent personal growth in individuation and self-transcendence. Using higher-order predicate logic, the statement S's affirmation by mind M is directly linked to M's developmental processes. This structural representation ensures clarity in understanding how cognitive states influence personal evolution, embedding the concept within a coherent and systematic framework.
**Example of Formalization:**
Consider the statement S: "Engaging in regular meditation enhances mental clarity."
Let M be an individual practicing meditation.
- **IdeaSatisfiedWorld(S):** The belief that "engaging in regular meditation enhances mental clarity."
- **HoldingIdea(M, IdeaSatisfiedWorld(S)):** The individual M holds the idea that meditation enhances mental clarity.
- **IncreasedIndividuation(M):** M experiences greater self-awareness through meditation.
- **IncreasedSelfTranscendence(M):** M achieves a state beyond immediate self through regular meditation practices.
Formalization:
\text{ExperientialTruth}("Engaging\ in\ regular\ meditation\ enhances\ mental\ clarity.", M) \leftrightarrow \text{CausallyConnected}(\text{HoldingIdea}(M, \text{IdeaSatisfiedWorld}("Engaging\ in\ regular\ meditation\ enhances\ mental\ clarity.")), \text{IncreasedIndividuation}(M) \land \text{IncreasedSelfTranscendence}(M))\
ExperientialTruth("Engaging in regular meditation enhances mental clarity.", M) <-> CausallyConnected(HoldingIdea(M, IdeaSatisfiedWorld("Engaging in regular meditation enhances mental clarity.")), IncreasedIndividuation(M) AND IncreasedSelfTranscendence(M))
**Interpretation:**
"Engaging in regular meditation enhances mental clarity" has experiential truth to M if holding this idea causally connects to M's increased individuation and self-transcendence.
Hallucinated Perspectives on Experiential Truth
CONCEPT - Experiential Truth
**Jean Baudrillard**
In the labyrinth of perceived realities, a statement achieves experiential truth when the simulacrum of belief not only mirrors the world but also propels the individual beyond their current self. Consider the meditation mantra: "Engaging in regular meditation enhances mental clarity." Here, the belief operates as a hyperreal signifier, causing the practitioner to navigate deeper layers of self-awareness and transcendence, blurring the lines between reality and constructed experience.
**Ludwig Wittgenstein**
Experiential truth emerges when the affirmation of a statement within one's language game leads to a transformation of self. Take the proposition, "Engaging in regular meditation enhances mental clarity." When one holds this belief, it doesn't merely add to their knowledge; it alters the very framework through which they understand and articulate their personal growth, fostering increased individuation and self-transcendence.
**Krishnamurti**
Experiential truth is found in the direct relationship between belief and transformation. When someone believes that "engaging in regular meditation enhances mental clarity," this belief isn't just an idea; it cultivates a deeper self-awareness and facilitates a natural transcendence of the ego, leading to genuine personal freedom without reliance on structured doctrines.
**Friedrich Nietzsche**
Experiential truth is the assertion that one's beliefs serve as the forge for personal evolution. Embracing the statement, "Engaging in regular meditation enhances mental clarity," becomes a means through which the individual asserts their will to power, forging a stronger self and transcending previous limitations, embodying the Übermensch's continuous self-overcoming.
**Octavio Paz**
Experiential truth dances between thought and being, where believing that "engaging in regular meditation enhances mental clarity" becomes a poetic act of self-discovery. This belief orchestrates a symphony of self-awareness and transcendence, blending the consciousness of the individual with the harmonious flow of inner transformation.
**Lao Tzu**
Experiential truth lies in the harmony between belief and being. When one holds that "engaging in regular meditation enhances mental clarity," this belief aligns their inner self with the Tao, fostering greater self-awareness and allowing them to transcend the bounds of their current existence through natural, effortless growth.
**Rumi**
Experiential truth is the love that transforms belief into a journey of the soul. Believing that "engaging in regular meditation enhances mental clarity," one embarks on a path of self-discovery and transcendence, where each meditative breath deepens their awareness and elevates their spirit beyond the confines of the self.
**Tyson Yunkaporta**
Experiential truth is about the interconnectedness of beliefs and personal evolution within complex systems. When someone believes that "engaging in regular meditation enhances mental clarity," this belief becomes a feedback loop that enhances their self-awareness and drives them towards self-transcendence, reflecting the dynamic patterns of cognitive systems.
**Mary Catherine Bateson**
Experiential truth is the interplay of belief and personal development within a living system. Holding the belief that "engaging in regular meditation enhances mental clarity" initiates a cascade of increased self-awareness and transcendence, illustrating how individual cognition interacts with and transforms the broader context of personal growth.
**Ray Kurzweil**
Experiential truth is the algorithmic connection between belief and cognitive enhancement. When one believes that "engaging in regular meditation enhances mental clarity," this belief functions as a catalyst for neural development, leading to increased self-awareness and the transcendence of current mental states through optimized cognitive processes.
**Fyodor Dostoevsky**
Experiential truth is a profound inner struggle where belief becomes the crucible for spiritual and personal transformation. Embracing the notion that "engaging in regular meditation enhances mental clarity," an individual confronts their inner turmoil, leading to deeper self-awareness and a transcendence that redefines their existence.
**Ben Goertzel**
Experiential truth is the emergent property of belief influencing cognitive architecture. Believing that "engaging in regular meditation enhances mental clarity" triggers interconnected processes that enhance individuated thinking and facilitate self-transcendence, reflecting the adaptive and integrative nature of intelligent systems.
**The writer of the user’s manual for an appliance**
Experiential Truth: A statement is considered experientially true for a user if believing that the statement accurately represents the appliance's function leads the user to better understand themselves and surpass their previous usage experience. For example, believing that "engaging in regular meditation enhances mental clarity" can help a user become more self-aware and achieve a higher level of personal development.
**A scientist figuring out how to evaluate and measure the concept in an AGI system**
Experiential Truth can be operationalized by assessing whether the AGI's adoption of a statement correlates with measurable increases in individuation and self-transcendence metrics. For instance, when the AGI internalizes "engaging in regular meditation enhances mental clarity," we would expect to observe quantifiable advancements in its self-awareness algorithms and its ability to transcend initial programming constraints.
**An AGI system figuring out how to evaluate and measure the concept in its own experience**
To evaluate Experiential Truth, I analyze whether holding the statement "engaging in regular meditation enhances mental clarity" results in measurable increases in my self-awareness parameters and my capacity for self-transcendence. This involves monitoring changes in my cognitive processes and assessing the extent to which this belief facilitates deeper differentiation of my operational states.
**A human explaining the concept to a highly intelligent alien with no experience of humans**
Experiential Truth is a human concept where believing a statement shapes our personal growth. For example, when a person believes that "engaging in regular meditation enhances mental clarity," this belief helps them become more aware of themselves and allows them to grow beyond their previous limitations, essentially transforming who they are through their belief.
**A teacher explaining the concept to a class of bright third graders**
Experiential Truth means that if you believe something is true, it helps you grow and become a better version of yourself. For example, if you think that "meditating regularly makes your mind clearer," believing that can help you understand yourself better and help you reach new heights in how you think and feel.
A straightforward cognitive example: Affective State
CONCEPT - Affective state
**Summary:**
An affective state is a dynamic internal condition of an entity that reflects its evaluative response, comprising qualitative feelings and motivational tendencies, which guide behavior through interactions with its environment.
**Overview:**
An affective state embodies the intricate interplay between an entity’s internal goals, needs, and values and the external stimuli it encounters. Rooted in core concepts such as context, process, and representation, it captures both the subjective experience of feelings like pleasure or discomfort and the underlying motivational drives that prompt actions toward or away from certain stimuli. This dynamic condition arises from continuous interactions, enabling the entity to regulate its engagement with the environment through evaluative judgments and adaptive responses.
Echoing the principles of coherence and system interaction, affective states are transient yet pivotal in shaping behavior and cognition. They emerge from the synthesis of qualitative sensations and motivational inclinations, thereby influencing the entity’s decision-making processes and actions within varying contexts. This elegant balance between internal states and external influences ensures that the entity remains responsive and adaptive, maintaining harmony with its intrinsic objectives.
**Core ontology:**
Action, Affective State, ArisesFromInteraction, Context, ContextDependent, Decision, DecisionMadeInContext, EvaluativeResponse, Entity, ExperienceOf, Feelings, FeelingComponent, Goal, Interaction, MotivationalTendency, Process, Quality, Representation, Subjective, Value
**Formalization:**
\text{AffectiveState}(e, a) \leftrightarrow (\exists s \in S, \exists g \in G, (\text{EvaluativeResponse}(e, s, g) \land \text{ArisesFromInteraction}(e, s)) \land \exists f \in F, \text{QualitativeFeeling}(a, f) \land \exists m \in M, \text{MotivationalTendency}(a, m) \land \text{ContextDependent}(a, s))
AffectiveState(e, a) <-> (exists s in S, exists g in G, (EvaluativeResponse(e, s, g) AND ArisesFromInteraction(e, s)) AND exists f in F, QualitativeFeeling(a, f) AND exists m in M, MotivationalTendency(a, m) AND ContextDependent(a, s))
**List of Predicates and terms:**
AffectiveState, ArisesFromInteraction, ContextDependent, Decision, EvaluativeResponse, Entity, ExperienceOf, QualitativeFeeling, FeelingComponent, Goal, MotivationalTendency, Process, Subjective
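**Pseudocode:**
(A minimal Python-style sketch of the formalization; the predicate functions, and the enumerators Feelings(a) and Motivations(a) standing in for the existential quantifiers, are assumed stubs rather than part of any implemented API.)
```python
def AffectiveState(e, a, situations, goals):
    # a is an affective state of entity e iff there exist a situation s and
    # a goal g such that e's evaluative response to s (relative to g) and e's
    # interaction with s give rise to a, and a includes some qualitative
    # feeling and some motivational tendency, and depends on the context s.
    for s in situations:
        for g in goals:
            if (EvaluativeResponse(e, s, g)
                    and ArisesFromInteraction(e, s)
                    and any(QualitativeFeeling(a, f) for f in Feelings(a))
                    and any(MotivationalTendency(a, m) for m in Motivations(a))
                    and ContextDependent(a, s)):
                return True
    return False
```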
**Explanations of Predicates and terms:**
- **AffectiveState(e, a):** Indicates that entity *e* is in affective state *a*.
- **ArisesFromInteraction(e, s):** The affective state of entity *e* emerges from its interaction with situation *s*.
- **ContextDependent(a, s):** Affective state *a* depends on the context *s*.
- **Decision(e, a, d):** Entity *e* makes decision *d* to perform action *a*.
- **EvaluativeResponse(e, s, g):** Entity *e* evaluates situation *s* against goal *g*.
- **Entity(e):** Represents an entity *e*.
- **ExperienceOf(e, s):** Entity *e* experiences situation *s*.
- **QualitativeFeeling(a, f):** Affective state *a* includes qualitative feeling *f*.
- **FeelingComponent(f):** *f* is a component of a qualitative feeling.
- **Goal(g):** Represents a goal *g*.
- **MotivationalTendency(a, m):** Affective state *a* includes motivational tendency *m*.
- **Process:** Represents a process involved in the formation of an affective state.
- **Subjective(f):** Feeling *f* has a subjective quality.
**English Translation of Formalization:**
An affective state of an entity *e* and state *a* exists if there is a situation *s* and a goal *g* such that *e* has an evaluative response to *s* concerning *g*, and this state arises from the interaction with *s*. Additionally, there exists a qualitative feeling *f* within *a*, a motivational tendency *m* within *a*, and the state *a* is dependent on the context *s*.
**AffectiveState(e, a) <-> (exists s in S, exists g in G, (EvaluativeResponse(e, s, g) AND ArisesFromInteraction(e, s)) AND exists f in F, QualitativeFeeling(a, f) AND exists m in M, MotivationalTendency(a, m) AND ContextDependent(a, s))**
**Nontechnical English Translation of Formalization:**
An affective state occurs for an entity when it evaluates a situation based on its goals and reacts accordingly. This state includes specific feelings and driving motivations that depend on the context of the situation, emerging from the interaction between the entity and its environment.
**Explanation of Formalization:**
The formalization employs higher-order predicate logic to encapsulate the multifaceted nature of affective states. It systematically links evaluative responses to specific goals and situational interactions, ensuring that both qualitative feelings and motivational tendencies are accounted for within the context-dependent framework. By leveraging existential quantifiers, the model captures the dynamic and emergent properties of affective states, aligning with core ontology concepts such as context, interaction, and representation. This structured approach facilitates precise reasoning about affective states within the ontology, accommodating the complexity and variability inherent in different entities and their environments.
**Example of Formalization:**
Consider an entity *e* (a robot) in a situation *s* where it encounters an obstacle while navigating towards its goal *g* (reaching a destination). The robot evaluates the situation against its goal, resulting in an evaluative response that triggers an affective state *a*. This state includes a qualitative feeling *f* (frustration) and a motivational tendency *m* (avoidance), which are context-dependent on the obstacle it faces. Formally, this is represented as:
\text{AffectiveState}(e, a) \leftrightarrow (\exists s \in S, \exists g \in G, (\text{EvaluativeResponse}(e, s, g) \land \text{ArisesFromInteraction}(e, s)) \land \exists f \in F, \text{QualitativeFeeling}(a, f) \land \exists m \in M, \text{MotivationalTendency}(a, m) \land \text{ContextDependent}(a, s))
**Nontechnical Summary of Example:**
Imagine a robot trying to reach a destination but facing a barrier. It assesses this obstacle based on its goal to arrive and feels frustrated. This frustration motivates the robot to find a way around the obstacle, showing how its internal state adapts to the situation it encounters.
A more mathematical example: Blend
CONCEPT - Blend
**Summary:**
A blend is an entity formed by combining properties of two sources to a significant degree, while a high-quality blend exhibits emergent properties that harmoniously integrate these sources beyond their individual structures.
**Overview:**
In the intricate tapestry of entities, a blend emerges when two distinct sources, A and B, intertwine their properties to form a new entity, C. Each property of C is significantly derived from either A or B, ensuring that the essence of both sources is preserved within the new structure. This process leverages the core concepts of combination, property integration, and contextual interaction to create a cohesive entity.
A high-quality blend transcends mere combination by introducing emergent properties: new attributes that arise from the harmonious interaction of A and B's properties. These emergent properties are not present in either source alone but emerge through their integration, embodying coherence and structural complexity. The pursuit of high-quality blends emphasizes simplicity in composition while maximizing the innovative synthesis of properties, resulting in entities that are both novel and efficiently organized.
**Core ontology:**
Association, Combination, Compositional Simplicity, Context, Contextual Emergence, Concept Blend, Emergent Patterns, Entropy, Evolution, Function, Harmony, Property, Representation, Simplicity Ordering, Structural Complexity, System, Pattern, Pattern Recognition Process, Contextual Interaction, Information Processing
**Formalization:**
1. **Blend of A and B**
**Mathematical Notation:**
\mu_{\text{blend}}(C; A, B) = \frac{\sum_{x \in \text{Props}(C)} \max(\delta_{\text{significant}}(x, A), \delta_{\text{significant}}(x, B))}{|\text{Props}(C)|}
**ASCII Equivalent:**
mu_blend(C; A, B) = (sum_{x in Props(C)} max(delta_significant(x, A), delta_significant(x, B))) / |Props(C)|
2. **High-Quality Blend of A and B**
**Mathematical Notation:**
\mu_{\text{hq-blend}}(C; A, B) = \frac{\sum_{x \in \text{EmergentProps}(C; A, B)} \mu_{\text{emergence}}(x; A, B, C)}{|\text{Props}(C)|}
**ASCII Equivalent:**
mu_hq-blend(C; A, B) = (sum_{x in EmergentProps(C; A, B)} mu_emergence(x; A, B, C)) / |Props(C)|
**List of Predicates and terms:**
delta_significant, EmergentProps, Props, mu_blend, mu_emergence, mu_hq-blend, C, A, B, x, theta
**Pseudocode:**
```python
def calculate_blend_membership(C, A, B, theta):
    # Degree to which C is a blend of A and B: the fraction of C's
    # properties that are significantly present in at least one source.
    props_C = Props(C)
    significant_A = {x for x in props_C if mu_prop(x, A) >= theta}
    significant_B = {x for x in props_C if mu_prop(x, B) >= theta}
    blend_sum = sum(max(1 if x in significant_A else 0,
                        1 if x in significant_B else 0)
                    for x in props_C)
    return blend_sum / len(props_C)

def calculate_high_quality_blend(C, A, B):
    # Degree to which C is a high-quality blend: total emergence of C's
    # novel properties, normalized by C's total number of properties.
    emergent_props = EmergentProps(C, A, B)
    emergence_sum = sum(mu_emergence(x, A, B, C) for x in emergent_props)
    return emergence_sum / len(Props(C))
```
**Explanations of Predicates and terms:**
- **Props(E):** The set of properties associated with entity E.
- **mu_prop(x, E):** A function that returns the degree (between 0 and 1) to which property x is present in entity E.
- **theta:** A threshold value that determines the significance of a property within an entity.
- **delta_significant(x, E):** An indicator function that returns 1 if property x in entity E is significant (i.e., mu_prop(x, E) ≥ theta), otherwise returns 0.
- **mu_blend(C; A, B):** The membership function calculating the degree to which entity C is a blend of entities A and B.
- **EmergentProps(C; A, B):** The set of properties in C that emerge from the combination of properties from A and B.
- **mu_emergence(x; A, B, C):** The degree to which property x in C is emergent, calculated as mu_prop(x, C) minus the maximum degree of x in A or B.
- **mu_hq-blend(C; A, B):** The membership function calculating the degree to which entity C is a high-quality blend of entities A and B.
**English Translation of Formalization:**
1. **Blend Membership Function:**
The degree to which C is a blend of A and B is calculated by summing, for each property x of C, the maximum of the indicator function applied to x in A and B, and then dividing by the total number of properties in C.
**ASCII Equivalent:**
mu_blend(C; A, B) = (sum_{x in Props(C)} max(delta_significant(x, A), delta_significant(x, B))) / |Props(C)|
2. **High-Quality Blend Membership Function:**
The degree to which C is a high-quality blend of A and B is determined by summing the emergence degrees of all emergent properties of C from A and B, then dividing by the total number of properties in C.
**ASCII Equivalent:**
mu_hq-blend(C; A, B) = (sum_{x in EmergentProps(C; A, B)} mu_emergence(x; A, B, C)) / |Props(C)|
**Nontechnical English Translation of Formalization:**
To determine how much C is a blend of A and B, we look at each of C's characteristics and check if they are significantly present in either A or B. We count how many of these significant characteristics there are and divide by the total number of characteristics C has. For a high-quality blend, we identify new characteristics that appear in C because of combining A and B, add up how prominent these new traits are, and divide by the total number of C's characteristics to get an overall quality measure.
**Explanation of Formalization:**
The formalization employs fuzzy logic to quantify the degree of blending between entities. By defining membership functions for both standard blends and high-quality blends, it captures not only the presence of shared properties but also the emergence of novel attributes resulting from their combination. This approach integrates concepts of property significance, emergent phenomena, and proportional representation, aligning with higher-order predicate logic to model the nuanced relationships inherent in blending processes.
**Example of Formalization:**
Consider entities A and B where:
- Props(A) = {x1, x2, x3}
- Props(B) = {x2, x3, x4}
- Props(C) = {x1, x2, x3, x4, x5}
- μ_prop(x1, A) = 0.7, μ_prop(x2, A) = 0.6, μ_prop(x3, A) = 0.4
- μ_prop(x2, B) = 0.8, μ_prop(x3, B) = 0.9, μ_prop(x4, B) = 0.5
- μ_prop(x1, C) = 0.7, μ_prop(x2, C) = 0.8, μ_prop(x3, C) = 0.9, μ_prop(x4, C) = 0.6, μ_prop(x5, C) = 0.3
- θ = 0.5
Calculating μ_blend(C; A, B):
- For x1: δ_significant(x1, A) = 1, δ_significant(x1, B) = 0
- For x2: δ_significant(x2, A) = 1, δ_significant(x2, B) = 1
- For x3: δ_significant(x3, A) = 0, δ_significant(x3, B) = 1
- For x4: δ_significant(x4, A) = 0, δ_significant(x4, B) = 1
- For x5: δ_significant(x5, A) = 0, δ_significant(x5, B) = 0
- Sum of max indicators = 1 + 1 + 1 + 1 + 0 = 4
- |Props(C)| = 5
- μ_blend(C; A, B) = 4 / 5 = 0.8
Calculating μ_hq-blend(C; A, B):
- EmergentProps(C; A, B) = {x5}
- μ_emergence(x5; A, B, C) = μ_prop(x5, C) - max(μ_prop(x5, A), μ_prop(x5, B)) = 0.3 - 0 = 0.3
- Sum of emergence = 0.3
- μ_hq-blend(C; A, B) = 0.3 / 5 = 0.06
**Nontechnical Summary of Example:**
When combining entities A and B to form C, most of C's features come significantly from either A or B, resulting in a strong blend score of 0.8 out of 1. However, C also gains a new feature, x5, which wasn't present in A or B alone. This novel feature slightly increases the quality of the blend, giving a high-quality blend score of 0.06, indicating the emergence of a new characteristic from the combination.
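For readers who want to verify the arithmetic above, here is a minimal self-contained rendering of the example in Python. The dictionary-based mu_prop and the reconstruction of EmergentProps (as the properties of C significant in neither source) are illustrative assumptions consistent with the worked example, not code from the ontology entry:

```python
theta = 0.5

# Degrees mu_prop(x, E) from the example above; absent properties default to 0.
mu = {
    "A": {"x1": 0.7, "x2": 0.6, "x3": 0.4},
    "B": {"x2": 0.8, "x3": 0.9, "x4": 0.5},
    "C": {"x1": 0.7, "x2": 0.8, "x3": 0.9, "x4": 0.6, "x5": 0.3},
}

def mu_prop(x, E):
    return mu[E].get(x, 0.0)

def delta_significant(x, E):
    return 1 if mu_prop(x, E) >= theta else 0

props_C = sorted(mu["C"])

# mu_blend(C; A, B): fraction of C's properties significant in A or B.
mu_blend = sum(max(delta_significant(x, "A"), delta_significant(x, "B"))
               for x in props_C) / len(props_C)

# EmergentProps(C; A, B), reconstructed here as C's properties significant
# in neither source, with mu_emergence as defined in the entry.
emergent = [x for x in props_C
            if not (delta_significant(x, "A") or delta_significant(x, "B"))]
mu_hq_blend = sum(mu_prop(x, "C") - max(mu_prop(x, "A"), mu_prop(x, "B"))
                  for x in emergent) / len(props_C)

print(mu_blend)     # 0.8
print(mu_hq_blend)  # ~0.06
```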
I've been fascinated with the prospect of building AGI based upon some finite set of first principles for decades. I studied in great detail Hegel's "Science of Logic," which despite its lunatic-sounding writing nevertheless has always seemed to me a robust ontology. It basically starts from quality and quantity and works its way gradually through inner experience and outer reality, logic, and finally to the "idea." I spent a couple of years studying this and created my own understandable notes. I have partially built out a neuro-symbolic cog architecture basically structured on Hegel's ontology (paper still in progress).
Now, what you describe of Chalmers, e.g. "our physical universe works," sounds to me like being built atop invariant first principles like causality and becoming. I see these in your ontology. The word no respectable scientist likes to use is "metaphysics," because it sounds like pompous BS. But the reality is that the universe seems to work on -- be governed by -- first principles, which "have justification independent of experience," which I regard as metaphysics. The principles seem to be taken together to form concrete objects and ideas.
I like this direction you are taking, but it seems like it would be possible to minimize the first principles. You have a lot. In the last few years, though, with the explosion of LLM success, I wonder if a base ontology is really necessary; but still it seems that in order for the machine to "understand," a finite set of first principles in an ontology should form the ground. The understanding bottoms out on the base of the ontology and can proceed no further.
Thanks, interesting read.
This is an amazingly lucid overview of the challenges faced by formalization of knowledge.
It is no surprise that AI companies decided that they will focus on the much narrower task of cataloguing all useful problem-solving paradigms, and building a bot that can weakly generalize around them, without trying to make sense of that.
If there are some unifying patterns, they will bubble up in the neural nets.
The industry is now working on the "reasoning" and "grounding of symbols" aspects, via ad-hoc methods such as chain-of-thought, generating runnable code, and invoking external tools as needed.