I want to say a few words today about where I think we stand on the pathway to beneficial AGI — meaning AI systems that can think, learn, and reason imaginatively across diverse domains roughly as well as a human mind can (or better), including in domains and tasks not well represented in their training data.
How is the OpenCog Hyperon project going? When might we expect it to disrupt the AI industry?
The right collective architecture may increase the chance of building beneficial AGI, but beneficial AGI becomes reliable only when governance is embedded at the execution boundary.
Will existing open source LLMs be part of this solution?
I agree that AGI will not be about any single technique, but about the orchestration of various tools.

The big question is how to build the orchestrator. That is where the LLM-based approach outcompetes any blockchain-based or engineered approach, simply because it can imitate almost anything without understanding how it works. This is not cluelessness; it is respect for the complexity of the world.
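As a minimal sketch of that idea (the tool names and the keyword-overlap routing heuristic here are my own illustration, standing in for an LLM's judgment, not any existing system's API): the orchestrator only matches a request against tool descriptions and never needs to know how a tool works internally.

```python
# Hypothetical tool orchestration sketch: the orchestrator routes a
# request to whichever registered tool's description best matches it,
# treating every tool as a black box.
from typing import Callable, Dict


class Orchestrator:
    def __init__(self) -> None:
        self.tools: Dict[str, Callable[[str], str]] = {}
        self.descriptions: Dict[str, str] = {}

    def register(self, name: str, description: str,
                 fn: Callable[[str], str]) -> None:
        self.tools[name] = fn
        self.descriptions[name] = description

    def route(self, request: str) -> str:
        # Stand-in for an LLM's choice: score each tool by word overlap
        # between the request and the tool's description.
        def score(name: str) -> int:
            words = set(self.descriptions[name].lower().split())
            return len(words & set(request.lower().split()))
        best = max(self.tools, key=score)
        return self.tools[best](request)


orch = Orchestrator()
orch.register("calc", "arithmetic math numbers", lambda r: "42")
orch.register("search", "lookup facts documents", lambda r: "found it")
print(orch.route("do some arithmetic with numbers"))  # → 42
```

A real orchestrator would let a language model do the routing, but the shape is the same: imitation of a reasonable dispatch, not comprehension of the tools.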
Very few problems are truly novel. And when they are, existing engineered frameworks are unlikely to do better than brute-force improvisation.

Humans, at the end of the day, solve new problems not by being smart, but by being persistent, diligently applying the little toolbox of skills nature gave us and the tools we build along the way.
A system must first of all be grounded in reality and receive frequent feedback and corrections from its actions. Then some hallucination along the way is fine: it gets corrected quickly, and will likely occur less often once the paths through a domain are well-trodden.
In very specialized areas, sure: once we have figured out the algorithmic machinery, we can code it up and add it as a plugin.
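One way to picture that (a hypothetical plugin registry of my own, not any specific project's API): once an algorithm is well understood, it is coded exactly and registered alongside the learned components, so the orchestrator can call it like any other tool.

```python
# Hypothetical plugin registry: exact algorithmic machinery is coded
# directly and registered, bypassing any learned imitation of it.

PLUGINS = {}


def plugin(name: str):
    """Decorator that registers a function under a tool name."""
    def wrap(fn):
        PLUGINS[name] = fn
        return fn
    return wrap


@plugin("gcd")
def gcd(a: int, b: int) -> int:
    # Exact machinery: Euclid's algorithm, no approximation needed.
    while b:
        a, b = b, a % b
    return a


print(PLUGINS["gcd"](48, 36))  # → 12
```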