How we’re gearing up to use tensor logic to help symbolic-logical-inference and neural-nets-on-GPUs synergize super-tightly together within the Hyperon AGI project
This is a brilliant approach to the neural-symbolic integration problem. The insight that logical inference is basically tensor contraction is elegant, and it solves the serialization bottleneck in a way that feels inevitable once you see it. I've watched so many hybrid systems struggle with the translation layers between symbolic and neural components, and RAPTL's resource tracking alongside uncertainty makes this far more practical than treating tensor logic as a purely theoretical exercise.
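To make the "logical inference is tensor contraction" point concrete, here is a minimal toy sketch of my own, assuming nothing about Hyperon or RAPTL internals: a Datalog-style rule over a binary relation becomes a single einsum over the variable shared by its body atoms.

```python
# Toy sketch (not Hyperon/RAPTL code): a logic rule expressed as a tensor contraction.
import numpy as np

# parent[x, y] == 1.0 means "x is a parent of y" over a toy domain of 4 people.
parent = np.zeros((4, 4))
parent[0, 1] = 1.0  # person 0 is a parent of person 1
parent[1, 2] = 1.0  # person 1 is a parent of person 2
parent[1, 3] = 1.0  # person 1 is a parent of person 3

# Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# Joining on the shared variable Y is exactly a contraction over the y index.
grandparent = np.clip(np.einsum('xy,yz->xz', parent, parent), 0.0, 1.0)

print(np.argwhere(grandparent > 0))  # -> [[0 2], [0 3]]: person 0 is a grandparent of 2 and 3
```

Keeping the entries in {0, 1} recovers crisp logic; letting them take continuous values gives the soft/probabilistic reading, and the contraction itself runs natively on GPU tensor kernels.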
Great insights and visionary blood, Ben! We will have human-level AGI within a few years, though I believe we will see multiple iterations of small AGIs before a major one eventually emerges... The challenge is whether it is open or centralised, and whether it comes from a more humanistic school or from a military-autocratic creation base. The lines are blurring, and the important thing is how we deal with it now!
A fundamental aspect of true intelligence is the ability to learn autonomously, that is, the ability to find dependencies and cause-and-effect relationships.
Modern LLMs completely lack such capabilities, and experiments with systems based on symbolic data manipulation use only the most primitive approaches to self-learning, based on estimating statistical probabilities and on reinforcement learning, neither of which is adequate for real-world conditions.
Progress toward AGI is possible only if well-known algorithms for finding dependencies and cause-and-effect relationships are integrated (one of their hallmarks being radical active experimentation); whether to use GPUs/TPUs for this is a secondary question, which essentially boils down to deciding whether it is better to deliver a pizza with five passenger cars or with one truck.
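As a toy illustration of my own (not drawn from any particular system) of why active experimentation matters for finding cause-and-effect relationships: passive observation cannot distinguish a genuine cause from a hidden confounder, but an intervention that sets the variable directly can.

```python
# Toy sketch: correlation vs. intervention for causal discovery.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Ground truth: a hidden confounder H drives both X and Y; X has no effect on Y.
H = rng.normal(size=n)
X = H + 0.1 * rng.normal(size=n)
Y = H + 0.1 * rng.normal(size=n)
print("observational corr(X, Y):", round(np.corrcoef(X, Y)[0, 1], 2))  # ~0.99, looks causal

# Intervention: the agent sets X itself (do(X)), cutting the link from H to X.
X_do = rng.normal(size=n)
Y_do = H + 0.1 * rng.normal(size=n)  # Y still depends only on H
print("interventional corr(do(X), Y):", round(np.corrcoef(X_do, Y_do)[0, 1], 2))  # ~0, no causal effect
```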
Your work on tensor logic arrives at a moment when the field is finally recognizing what you've been arguing for years: pure scaling won't get us to AGI; symbolic-neural hybridization is necessary. It's gratifying to see this validation, even if belated.
Coming from metaphysics and category theory, I'm struck by a possible resonance with Olivia Caramello's program on toposes as 'bridges' between mathematical theories. Her core insight: the geometric (continuous) can be faithfully translated into the algebraic (discrete) without structural loss, via sites and sheaves.
Do you see potential dialogue between tensor logic and topos theory? Both address the same fundamental problem—bridging different modes of representation while preserving structure—from different mathematical traditions.
And a more speculative question: shouldn't we think from the right mathematics toward hardware yet to be invented, rather than being constrained by what current GPU/TPU architectures happen to do well?
Huawei seems to think so...
We're exploring runtime metacognition strategies that enable LLMs to delay token commitment, allowing multiple reasoning trajectories to be considered in parallel. I wonder whether this might handle the uncertainty aspect. It's interesting to imagine how these approaches might stack toward wiser AI.
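Purely as an illustrative sketch of what delayed token commitment could look like (my own toy; `toy_next_token_probs` is a made-up stand-in for a real LLM, not any actual API): keep several candidate trajectories alive and commit to a prefix only once enough probability mass has concentrated on it.

```python
from collections import defaultdict

def toy_next_token_probs(prefix):
    """Stand-in for an LLM: a hand-made next-token distribution per prefix."""
    table = {
        (): {"the": 0.85, "a": 0.15},
        ("the",): {"cat": 0.5, "dog": 0.5},  # genuinely ambiguous -> defer here
        ("a",): {"dog": 1.0},
    }
    return table.get(prefix, {"<eos>": 1.0})

def expand(trajectories):
    """Grow every live trajectory by one token, tracking joint probabilities."""
    grown = defaultdict(float)
    for prefix, p in trajectories.items():
        for tok, q in toy_next_token_probs(prefix).items():
            grown[prefix + (tok,)] += p * q
    return dict(grown)

def committed_prefix(trajectories, threshold=0.8):
    """Longest prefix whose total probability mass exceeds the threshold."""
    best = ()
    for k in range(1, max(len(t) for t in trajectories) + 1):
        mass = defaultdict(float)
        for traj, p in trajectories.items():
            mass[traj[:k]] += p
        prefix, p = max(mass.items(), key=lambda kv: kv[1])
        if p < threshold:
            break
        best = prefix
    return best

beams = {(): 1.0}
for _ in range(2):
    beams = expand(beams)

# Commits to ("the",) -- 0.85 of the mass -- but defers the cat/dog choice,
# keeping both reasoning trajectories alive until the ambiguity resolves.
print(committed_prefix(beams))
```

With a commitment threshold of 0.8, the toy commits to "the" while keeping both the "cat" and "dog" continuations alive, which is the deferred-commitment behaviour described above; an uncertainty-aware scorer could stand in for the raw probabilities.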