Three Things the World Doesn't Yet Grok About AGI
I want to say a few words today about where I think we stand on the pathway to beneficial AGI — meaning AI systems that can think, learn, and reason imaginatively across diverse domains roughly as well as the human mind can (or better), including domains and tasks not well represented in their training data.
This pronouncement won’t shock anyone who is a regular reader of this blog — but I continue to believe the impending emergence of AGI is the most important thing happening on our planet at this moment!
In fact it feels almost unreal that the notion that AGI may be near – not just a possible thing, but likely to emerge into our everyday reality during the next few years – is now being widely appreciated …not only by a variety of tech CEOs and even national leaders, but by so many members of the general public. This growing public visibility of the AGI concept and vision is dramatic for everyone, but exciting in a special way to those of us who’ve been working on AGI and toward beneficial AGI for too long.
But still – in spite of all the talk about the subject, there are two key points about AGI that I think are not well enough understood. And they connect to a third point that may matter most of all.
NOTE: The first half of this post goes over somewhat familiar ground to those who have been following me, but the second half – which muses a bit about collective consciousness in open-source projects and networks – is a bit more novel. So if you feel you already grok my views on the path to AGI well enough and aren’t interested in modest updates in this regard, skip ahead to the “Thing 3” part about Jeffery Martin’s “consciousness locations” and their implications for organizations building AGI. There will also be a whole follow-on post just enlarging on this latter aspect, which I’ll post sometime soon.
Thing 1: The Current “Default Path” to AGI Is Not the Only Path – and Maybe Not Even a Viable Path
The first point I feel obliged to drive home is: the default path toward AGI being pursued by the big tech companies of the world today is not the only approach, and IMO not even the most likely approach, to get us to AGI.
To understand why, it helps to know a little about how today’s leading AI systems actually work under the hood. The large language models (LLMs) behind tools like ChatGPT and Claude are, at their core, massive neural networks — software systems loosely inspired by the brain, made up of billions of adjustable numerical parameters. These systems learn by a process called backpropagation: during training, you feed the network enormous amounts of text, it makes predictions about what word comes next, and then an algorithm works backward through the network adjusting all those billions of parameters to make the predictions slightly more accurate. Do that trillions of times with trillions of words of text, and you get something impressively fluent and knowledgeable. That’s the training phase. Then in the inference phase — when you actually ask the model a question — it draws on everything it learned during training to generate a response.
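To make the train-then-infer loop described above concrete, here is a deliberately tiny toy in Python — not a real LLM, just a bigram next-token model on a two-letter "corpus" — showing the same shape: a training phase that repeatedly nudges numerical parameters so predictions get slightly more accurate, and an inference phase that uses the trained parameters to answer a query. All names here are illustrative inventions, not any real library's API.

```python
# Toy illustration of the train/infer loop: a bigram next-token predictor
# trained by gradient descent on cross-entropy loss (the core mechanism
# that backprop applies at vastly larger scale in real LLMs).
import math
import random

corpus = "abababab"                    # stand-in for "trillions of words"
vocab = sorted(set(corpus))            # ['a', 'b']
idx = {ch: i for i, ch in enumerate(vocab)}
V = len(vocab)

random.seed(0)
# One parameter table: logits[i][j] = score that token j follows token i.
logits = [[random.gauss(0, 0.1) for _ in range(V)] for _ in range(V)]

def softmax(row):
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

# --- Training phase: adjust parameters to make predictions more accurate ---
lr = 0.5
for epoch in range(200):
    for a, b in zip(corpus, corpus[1:]):
        i, j = idx[a], idx[b]
        p = softmax(logits[i])                     # predicted distribution
        for k in range(V):                         # gradient of cross-entropy
            grad = p[k] - (1.0 if k == j else 0.0)
            logits[i][k] -= lr * grad              # the parameter-update step

# --- Inference phase: use the trained parameters to answer a query ---
def predict_next(ch):
    p = softmax(logits[idx[ch]])
    return vocab[p.index(max(p))]

print(predict_next("a"))  # the model has learned that 'b' follows 'a'
```

The point of the toy is only the shape of the process: everything an LLM "knows" at inference time was compressed into those numerical parameters during training.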
I think it’s increasingly understood that just scaling up LLMs isn’t going to get us to AGI. But there’s still a lot of hope that tweaking LLMs a little bit will get us there. And I think that’s not the right way to look at it. Gary Marcus, for his part, overstates things a bit in the other direction when he says LLMs are just faking it and are useless. But still — even if LLMs can serve as a significant part of the story, they’re not 75% of the story. They’re a valuable knowledge resource and input into an AGI mind, but they can’t be the crux of it. An LLM is fundamentally a next-token predictor: a fantastically sophisticated one, but one that lacks the ability to truly reason about novel situations, reflect on its own thinking, set and pursue its own goals, or build up understanding the way a developing mind does. Those are capacities you need for genuine general intelligence.
Most people don’t realize that there are alternatives out there under very active development, also looking promising. Backpropagation-based deep neural nets may be just the first of the historical approaches from the AI community to be accelerated by modern compute tech — but not the last. The AI research community has been developing a whole zoo of different approaches for decades: predictive coding networks (which learn by trying to anticipate their own sensory input, more like the way brains seem to actually work), logic-based reasoning systems (which can do the kind of step-by-step deduction and knowledge manipulation that neural nets struggle with), evolutionary learning (which breeds and mutates populations of AI programs the way natural selection breeds organisms), and cognitive architectures (integrated frameworks that combine multiple types of learning and reasoning into a unified artificial mind). These ideas have been around for a long time. What’s new is that modern hardware and software infrastructure are finally powerful enough to run them at serious scale — just as happened with backprop-based neural nets five to ten years ago.
My own team’s Hyperon project is an example of this kind of alternative approach. Hyperon is a cognitive architecture — think of it as a blueprint for a complete artificial mind, not just a single learning algorithm. At its core is something called an Atomspace metagraph: a large, flexible knowledge store where concepts, relationships, procedures, and perceptions are all represented as interconnected nodes and links, a bit like a massively complex version of a mind map. Sitting on top of this knowledge store are multiple AI algorithms that work together: a probabilistic logic system called PLN that handles reasoning under uncertainty, an attention allocation system called ECAN that decides what’s worth focusing on at any given moment, an evolutionary program-learning system called MOSES, and a neural-symbolic integration layer that can pull in the knowledge captured by LLMs and other neural networks. All of these are coordinated using a new programming language we built called MeTTa, designed specifically for this kind of multi-algorithm cognitive work. The idea is that general intelligence isn’t one trick — it’s an orchestra of different cognitive processes working together, and Hyperon is the concert hall and conductor.
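To give a feel for the metagraph idea, here is a toy sketch in Python — emphatically not the real Atomspace or MeTTa API, just a few dozen lines of my own invention — showing the basic pattern: concepts and relationships stored as interconnected typed atoms, with different algorithms able to traverse the same shared store.

```python
# Toy sketch of a metagraph-style knowledge store (NOT the real Atomspace
# API): "atoms" are either nodes (named concepts) or links (typed
# connections whose targets may themselves be atoms, including other links).

class Atom:
    def __init__(self, atom_type, name=None, targets=()):
        self.type = atom_type          # e.g. "Concept" or "Inheritance"
        self.name = name               # set for nodes
        self.targets = tuple(targets)  # set for links

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

    def links(self, atom_type):
        """All links of a given type in the store."""
        return [a for a in self.atoms if a.type == atom_type and a.targets]

space = AtomSpace()
cat = space.add(Atom("Concept", "cat"))
animal = space.add(Atom("Concept", "animal"))
space.add(Atom("Inheritance", targets=(cat, animal)))  # "a cat is an animal"

# A reasoner (a crude stand-in for something like PLN) can now traverse
# the shared store to answer a question no single atom states directly:
parents = [link.targets[1].name
           for link in space.links("Inheritance")
           if link.targets[0].name == "cat"]
print(parents)  # ['animal']
```

The design point is that the store itself is algorithm-neutral: a logic engine, an attention allocator, and a neural-symbolic bridge can all read and write the same atoms, which is what lets multiple cognitive processes cooperate on one body of knowledge.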
It’s not so much that most people have looked at Hyperon and decided it’s doomed to fail. It’s more like they don’t even imagine that such a thing exists. They assume that ChatGPT and Claude and Gemini and their Chinese variations are all there is.
Thing 2: Decentralizing AI Is Highly Viable
The second point that’s not really widely understood is that decentralizing AI is viable. It’s possible. And this partly ties in with the first point.
Right now, the way backpropagation is being deployed for scaling up deep neural nets requires enormous centralized infrastructure. Training a state-of-the-art LLM means running calculations across thousands of specialized chips (GPUs or TPUs) that all need to communicate with each other at blazing speed, which means they need to be physically close together in massive data centers. This is why the AI race right now is largely a race to build the biggest server farms — and why it’s dominated by a handful of companies with the capital to build them.
But I think this is not the only way that AGI systems can work. Once you move beyond the algorithms being used in LLMs today by big tech companies, you open yourself up to a wide variety of other AI algorithms, many of which can run effectively on more heterogeneous, decentralized infrastructure. A cognitive architecture like Hyperon, for instance, involves many different subsystems doing different kinds of work — some doing logical inference, some doing pattern recognition, some doing evolutionary search — and these don’t all need to be crunching the same giant matrix on the same tightly coupled hardware. They can be spread across many different machines in many different places, coordinating over a network.
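To illustrate the loose coupling, here is a small hypothetical sketch (names and logic entirely my own, not Hyperon code): three very different kinds of workers — stand-ins for logical inference, pattern recognition, and evolutionary search — running concurrently and coordinating only through shared state, with no giant synchronized matrix computation anywhere.

```python
# Sketch of loosely coupled cognitive subsystems: each worker does a
# different kind of job and they coordinate via a shared "blackboard"
# (a dict here, but it could just as well be a network service).
from concurrent.futures import ThreadPoolExecutor

blackboard = {"observations": ["sky is dark", "ground is wet"]}

def logical_inference(board):
    # Step-by-step deduction over symbolic facts.
    if "ground is wet" in board["observations"]:
        return ("inference", "it probably rained")

def pattern_recognition(board):
    # Statistical summarization of raw input.
    return ("pattern", f"{len(board['observations'])} observations clustered")

def evolutionary_search(board):
    # Pick the "fittest" candidate plan (fitness = length, for the toy).
    candidates = ["plan-a", "plan-bb", "plan-ccc"]
    return ("evolved", max(candidates, key=len))

# The workers run independently; none needs to see the others' internals.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda f: f(blackboard),
                            [logical_inference,
                             pattern_recognition,
                             evolutionary_search]))
print(results)
```

Because each subsystem only needs occasional access to shared state rather than lockstep access to one tensor, swapping the thread pool for machines scattered across a network changes the plumbing, not the architecture.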
And this gets into the role of blockchain. It’s a commonly held perception that blockchain is about finance, is about cryptocurrency. You may love cryptocurrency as a way to make money, as a novel avenue for entrepreneurship, or you may hate it as a methodology for scamming people out of their money. But what’s not really appreciated is that blockchain is not intrinsically about crypto or money.
Here’s what blockchain actually is, at bottom: it’s a technology for getting a network of computers to coordinate and agree on what happened — who did what, in what order, with what results — without any single company or authority being in charge. It does this by having the computers in the network collectively verify each other’s work and record everything in a shared, tamper-proof ledger. The financial applications (cryptocurrency, DeFi, and so on) are just one use of this capability. The deeper capability is controlling networks of computational processes without a central owner or controller.
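The "tamper-proof shared ledger" part can be sketched in a few lines. This toy shows only the hash-chaining core — each record embeds the hash of the record before it, so rewriting history is immediately detectable — while real blockchains add consensus among many machines, digital signatures, and networking on top.

```python
# Minimal hash-chained ledger: altering any past entry invalidates every
# hash after it, which is what makes the record tamper-evident.
import hashlib
import json

def block_hash(block):
    # Canonical JSON so the same block always hashes the same way.
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, event):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "event": event})

def verify(chain):
    # Every block must reference the true hash of its predecessor.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, "alice ran inference job 1")
append(chain, "bob verified job 1 result")
print(verify(chain))                       # True: history checks out

chain[0]["event"] = "mallory ran job 1"    # tamper with the past...
print(verify(chain))                       # False: the chain breaks
```

Notice that nothing in the sketch is about money: the ledger is just an agreed-upon record of who did what, in what order — the same capability can coordinate a network of AI processes as easily as a network of payments.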
While many of the things done in the crypto world have dubious ethics, blockchain can actually be a solution for ethical AI — in that it can allow AI to be run in a way that can’t be controlled centrally by anyone. No single corporation decides what the AI can and can’t do. No single government can shut it down or bend it to its purposes. The rules are written into the protocol and enforced by the network itself.
From Ideas to Infrastructure
There are specific mathematical frameworks and software codebases that make these points quite well, although they haven’t yet reached a high level of maturity. Only in the last six months have the core infrastructure systems underlying our Hyperon AGI framework finally begun operating at scale. Now we’re working hard to leverage these to make scalable implementations of our various AGI ideas, algorithms, and architectures.
We’ve had a decentralized AI infrastructure for a while with the SingularityNET platform — which has been letting developers publish, discover, and run AI services on a decentralized network since 2018 — and there have been others like that of our partner Fetch.ai. But we’re only just now launching the next generation: a really sophisticated decentralized AI platform called ASI Chain. The key advance with ASI Chain is that it lets you run sophisticated AI processes on the blockchain itself — meaning the AI computations are actually verified and coordinated by the decentralized network — rather than just running in ordinary cloud containers that are sort of attached to the blockchain and leverage it only for payment or external communication.
This is the sort of thing I’ve been talking about for a long time. Within the last few months, we have finally gotten scalable infrastructure to back up all the ideas.
Of course, what will get the world in general interested in this stuff is not some cool code sitting in GitHub repos. It’s scaled-up launch products that everybody can use, which is exactly what we’re working toward. On the other hand, it will be great to get more people to understand what we’re doing and get excited about it right now — because to the extent we can do that, we can pull in more developer energy, more partners, to help push things forward faster. There’s got to be a process where we build enthusiasm now to help build community and build energy, and with all this energy and effort we can then move faster toward launching the scalable products that will blow people away the way Google and ChatGPT did — but much, much more so.
Thing 3: The Level of Consciousness That Creates AGI Will Influence the AGI’s Level of Consciousness
The two insufficiently well understood points about AGI I’ve reviewed so far connect with a third point, which is also not widely appreciated: the kind of Singularity we get is going to depend to a significant measure on the collective consciousness that we all display.
This wouldn’t be so true if AGI were going to be launched by big tech companies based on LLMs — because then it’s already done, it’s being rolled out. We’re the users, we’re the product, but we’re not really the creators. At best, we’re supplying raw materials.
On the other hand, if the choice of which AI algorithms are going to be used hasn’t been made yet, and if the AGI is going to be controlled only by a vast participatory swath of the population — that’s a different story. Then our mindset, our psychology, will play a role in what sort of AGI we create.
This is something I’ve been thinking about quite a lot. To frame it in a fairly precise way, I will draw here on the work of my friend Jeffery Martin, a researcher who has spent over a decade conducting large-scale scientific studies of people who report sustained states of deep well-being — not just fleeting peak experiences, but lasting shifts in how they experience everyday life. Jeffery’s research led him to map out what he calls a “continuum of fundamental well-being”: a series of identifiable psychological locations that people can inhabit, each with its own characteristic relationship to the self, emotions, and the world.
Most people mostly live at what Jeffery calls “location zero.” That’s ordinary, everyday modern human consciousness. You worry about yourself. You’re worried about your life, your family, your reputation, keeping enough resources going to support yourself and the people you love. You’re upset when something seems to threaten you or your livelihood. You’re happy when you get something you wanted or thought you needed. Your sense of well-being is conditional — it depends on circumstances going your way.
But then there’s also the potential for states of human consciousness that are different from this everyday emotional wrap-up in the rat race of ego and resources. There are consciousness locations that are more centered in fundamental well-being and simply the joy of being alive. Jeffery identified these as locations 1, 2, 3, and onward along the continuum. What distinguishes these locations from each other is the degree and flavor of the shift, but what all of them have in common is this: one is no longer so wrapped up in the emotional ups and downs of everyday life. It’s not that you don’t feel them, but you’re not traumatized by them. If you lose something that you value, you may feel regret, but you’re not going to feel anguish that debilitates you. You’re not so firmly attached to things that may come and go, because you always have a sense that you’re connected with the universe at large, and things are fundamentally full of joy and goodness at the base.
In a state of Fundamental Well-Being, one basically always feels it’s wonderfully good to be alive – even though not every single thing within life will always be good. In more typical Location 0 states of consciousness, a person will often be temporarily overwhelmed and feel cut off from the broader goodness of everything — but in higher locations one doesn’t lose track of this connection; it is always right there as part of the fabric of felt experience.
What makes Jeffery’s framework useful here is that it gives us a concrete, empirically grounded vocabulary for something that spiritual traditions have talked about for millennia but that usually gets dismissed as vague or mystical. These are measurable, reportable psychological states, and they matter because they change how people relate to each other and to the projects they undertake.
Collective Consciousness Is a Vibrant Reality
You can also think about different levels of consciousness for collective systems — organizations, communities, whole societies — and this is one place Jeffery’s line of thinking connects extremely directly to the question of what kind of AGI we build.
You can see there are psychopathic organizations that don’t care about anything except their own benefit and are destructive — some autocratic nations, some corporations have been that way. These might be at Location -1 in Martin-esque parlance, perhaps like certain highly problematic humans.
Then there’s what you might think of as ordinary, everyday-consciousness-level organizations. They don’t want to be harmful to the world around them, but they’re concerned with their own egoic pursuits — making more money than the other companies around them, getting more members than the other organizations around them. They’re disturbed when they lose enough money that they have to lay off staff, and excited when they gain money and can expand, taking market share from their competitors. These are “location zero” organizations, you might say — their well-being is conditional on beating the competition.
Then you can envision — and there may not be many of these — organizations operating at a higher level of consciousness. They’re not so egoic, not so concerned with their own welfare relative to other organizations, and they’re flexibly able to morph their goals and ways of doing things according to the circumstance. Their sense of purpose comes from something deeper than competitive advantage.
I’d say the open source community has been like this to a significant extent. The Linux community has been like this. The global community of science has to a large degree been like this, which is part of why science has been growing and flourishing so wonderfully over the last 150 years or so. Not that open source and scientific communities are perfect — there are plenty of issues all over the place — but I think they’ve been growing and expanding in a fairly flexible and not-so-egoic way. You could argue they’re experiencing a joy in existing and doing what they’re doing, which resonates with the pure joy displayed by the individual members of these communities.
Now, what’s really interesting here — and I think this is an underappreciated and quite profound point — is why open source communities exhibit these higher-consciousness properties. It’s not because they attract unusually enlightened people. If you actually look at the demographics and psychology of open source contributors, you find pretty ordinary human beings with pretty ordinary egos — career anxieties, status-seeking behavior, heated arguments on mailing lists, the whole works. Anyone who’s watched a flame war erupt over a merge decision knows that individual ego is alive and well in these spaces.
And yet the collective behavior of the system looks nothing like what those same individuals produce when organized into corporations or government agencies. The system is generous (permissive licensing converts any contribution, regardless of motive, into a public good). The system is non-attached (when a disagreement becomes irreconcilable, the project forks, and both forks continue — no existential crisis, no hostile takeover, just adaptive divergence). The system is transparent (all work happens in public, so the kind of information-hoarding and political maneuvering that drives corporate ego-competition becomes structurally impossible). And the system is non-coercive (because people can leave at zero cost, the community has to continuously produce conditions that feel intrinsically worthwhile, or it simply dies).
What this means is that higher-consciousness collective behavior is emergent — it arises from the organizational architecture, not from the individual psychology of the participants. Take a bunch of ordinary Location 0 humans, put them in an institutional structure with fork-ability, permissive commons, radical transparency, and voluntary participation, and the collective starts behaving like a Location 1 entity: flexible, non-defensive, intrinsically motivated, operating from a kind of foundational okayness. The individuals haven’t changed. The architecture is doing the work. It’s consciousness by design, not consciousness by selection.
This is a reasonably important insight for the AGI question, because it means institutional design is a technology for consciousness — it can produce collective behavioral signatures of higher consciousness without requiring any individual participant to be operating at a higher consciousness level. And it raises a tantalizing follow-up question: if the structural features of open source communities accidentally produce this effect, what would happen if we deliberately designed the institutions around AGI development to amplify it? What if we structured the governance, incentive systems, and collaboration practices of projects like Hyperon and ASI Chain not just to produce good code, but to actively cultivate the non-egoic, flexible, joyful collective psychology that we want the resulting AGI to embody? The medium, as Marshall McLuhan told us, really is the message — and the way we organize ourselves to build AGI may matter as much as the algorithms we write.
What about the internet itself? While some of the content on the internet is pretty ethically questionable and not psychologically healthy, the global internet as a phenomenon and community has been growing in a fantastically open-ended and beautiful way, displaying a different sort of trans-egoic consciousness than traditional organizations. Nobody owns the internet. Nobody designed it from the top down. It grew because millions of people contributed to it out of curiosity, generosity, and the sheer pleasure of building something together. That’s what a higher-consciousness collective looks like in practice.
The collective consciousness of open-source projects and associated human and compute networks is a fascinating and important topic and I will dig into it a bit more in the sequel post!
The (Huge) Challenge Before Us
What we have as a challenge now is this: through concerted technological effort, we need to develop alternative AGI algorithms that are not just more capable of moral agency, genuine compassion, and understanding of self and others, but also more amenable to being operated on decentralized infrastructure. And then we need to open our own minds and consciousnesses up beyond our own egos to the various possibilities of the future.
The technology and the consciousness are not separate issues — they feed each other. If we build AGI using centralized infrastructure controlled by a few corporations, the psychology of those corporations will be baked into the result, and the rest of us will have no meaningful say. But if we build AGI using open, decentralized systems that a broad community participates in, then the consciousness of that community — its openness, its generosity, its capacity for joy and wonder — will shape what gets built. The medium really is part of the message.
We have the ability to use the transition to AGI to uplift the collective intelligence of our society to these higher consciousness locations. Framing the emerging AGI revolution in terms of the consciousness location of our collective humanity is perhaps a highfalutin way to put it, but I think it’s an important way to look at it — in the sense that it’s a different framing from “us versus them” or “evil AI versus good AI.” This is, in my view, the sort of perspective we need as we move forward through the last few years of the pre-Singularity era.

