I've been fascinated for decades with the prospect of building AGI upon some finite set of first principles. I studied Hegel's "Science of Logic" in great detail, which, despite its lunatic-sounding prose, has always seemed to me a robust ontology. It basically starts from quality and quantity and works its way gradually through inner experience and outer reality, logic, and finally to the "idea." I spent a couple of years studying this and wrote up my own readable notes. I have partially built out a neuro-symbolic cognitive architecture structured largely on Hegel's ontology (paper still in progress).
Now, what you describe from Chalmers, e.g. that "our physical universe works," sounds to me like it rests on invariant first principles such as causality and becoming. I see these in your ontology. The word no respectable scientist likes to use is "metaphysics," because it sounds like pompous BS. But the reality is that the universe seems to work on -- to be governed by -- first principles which "have justification independent of experience," and that is what I regard as metaphysics. Taken together, the principles seem to form concrete objects and ideas.
I like the direction you are taking, but it seems it should be possible to minimize the first principles; you have a lot of them. With the explosion of LLM success over the last few years I wonder whether a base ontology is really necessary, but it still seems that, in order for the machine to "understand," a finite set of first principles in an ontology should form the ground. Understanding bottoms out on the base of the ontology and can proceed no further.
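Just to make "bottoming out" concrete, here is a toy sketch of my own construction (the ground set and definitions are purely illustrative, not your ontology or mine): concepts are unfolded recursively through their definitions until only ground terms remain, at which point the expansion, i.e. the "understanding," can proceed no further.

```python
# Toy illustration only: "understanding" as recursive definition lookup that
# bottoms out at a small set of ground (primitive) concepts.
GROUND = {"quality", "quantity", "becoming"}     # hypothetical first principles
DEFINITIONS = {                                  # everything else reduces to them
    "measure": ["quality", "quantity"],
    "change": ["becoming", "quality"],
    "object": ["measure", "change"],
}

def unfold(concept: str) -> set[str]:
    """Expand a concept until only ground terms remain; expansion can go no further."""
    if concept in GROUND:
        return {concept}
    parts = DEFINITIONS.get(concept)
    if parts is None:
        raise ValueError(f"{concept!r} is neither ground nor defined")
    result = set()
    for part in parts:
        result |= unfold(part)
    return result

print(unfold("object"))   # {'quality', 'quantity', 'becoming'}
```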
This is an amazingly lucid overview of the challenges involved in formalizing knowledge.
It is no surprise that AI companies decided to focus on the much narrower task of cataloguing all useful problem-solving paradigms and building a bot that can weakly generalize around them, without trying to make sense of it all.
If there are some unifying patterns, they will bubble up in the neural nets.
The industry is now working on the "reasoning" and "grounding of symbols" aspects, via ad-hoc methods such as chain-of-thought, generating runnable code, and invoking external tools as needed.
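For what it's worth, here is a minimal, purely illustrative sketch of the kind of loop these methods amount to: the model "thinks" in text, optionally requests an external tool, and the tool's result is fed back as an observation. The call_llm stub, the TOOLS table, and the ACTION:/FINAL: protocol are made-up names for illustration, not any particular vendor's API.

```python
# Illustrative sketch of a "reason, then call a tool" loop; the model call is stubbed out.
import json

TOOLS = {
    # toy external tools the model may request
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "lookup": lambda key: {"pi": "3.14159"}.get(key, "unknown"),
}

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just scripts two canned steps."""
    if "OBSERVATION" not in prompt:
        return 'ACTION: {"tool": "calculator", "args": "2 + 2"}'
    return "FINAL: the answer is 4"

def reasoning_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\nThink step by step."
    for _ in range(max_steps):
        reply = call_llm(prompt)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("ACTION:"):
            request = json.loads(reply[len("ACTION:"):])
            result = TOOLS[request["tool"]](request["args"])
            # feed the tool result back as an observation, chain-of-thought style
            prompt += f"\n{reply}\nOBSERVATION: {result}"
    return "no answer within step budget"

print(reasoning_loop("What is 2 + 2?"))
```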
Thanks, finally a read that brought some inner peace and focus in a complex environment and during complex times.
You write: "Depending on the particulars of how the AGI mind works, it might outgrow any initially supplied inference guidance tools very rapidly, or it might continue to leverage them in a transformed form for quite a long time."
Do you think these outgrowing events lead to systems that diverge exponentially in both their internal workings and their phenomenology in the real world, or do they diverge in their internal workings while settling into more or less equilibrated interactions with the real world, or do they perhaps converge toward a master state characterized by either equilibrated or more or less chaotic phenomenologies?
The reason I think this is a relevant question is twofold: first, it directly affects how we experience and interpret these processes and systems; second, it raises the possibility that we may need to interact with these systems - through a mixture of feeding and feedback - within some precise ranges (and scales) before the outgrowing events occur (or are 'let happen', or 'deployed', assuming some degree of quantifiability or measurability). Such engagement could potentially induce a form of "interactive scarring" that nudges their phenomenologies toward states more congruent with our own physiology and homeostasis: if their ontologies have some congruence with ours, we can argue for the possibility of human-AGI and human-ASI congruence.
I think the ideas around scarring may become relevant in the future of AGI and ASI. Let me know what you think. They may even have strong theoretical underpinnings (stable orbits, attractor dynamics, quantum scarring).
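To make the attractor intuition a bit more concrete, here is a deliberately toy model (entirely my own construction, with arbitrary parameters): a state sitting on the ridge between the two basins of a double-well potential will fall into one of them at random, but a brief period of early feedback, the "scarring," reliably biases which basin it settles in.

```python
# Speculative toy model of "interactive scarring": early feedback biases which
# attractor basin a noisy gradient-descent state ends up in. Parameters are arbitrary.
import random

def step(x: float, feedback: float = 0.0, lr: float = 0.05) -> float:
    # gradient of the double-well potential V(x) = (x^2 - 1)^2, plus feedback and noise
    grad = 4 * x * (x * x - 1)
    return x - lr * (grad - feedback) + random.gauss(0, 0.02)

def run(feedback_steps: int) -> float:
    x = 0.0                                       # start on the ridge between basins
    for t in range(500):
        fb = 1.0 if t < feedback_steps else 0.0   # early "interactive scarring"
        x = step(x, feedback=fb)
    return x

random.seed(0)
print("no early feedback:  ", round(run(0), 2))    # may settle near -1 or +1
print("with early feedback:", round(run(50), 2))   # reliably settles near +1
```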
Thanks, interesting read.