OpenBGI is Building its Initial Network — and We’re Looking for a Few Good Humans
The post-LLM, post-scaling AGI moment has opened a twelve-to-eighteen-month window, and the network that will deploy the first human-level AGI through it isn’t yet fully built.
TL;DR — OpenBGI is pulling together an initial network of senior operators, heterodox researchers, regulated-industry leaders, infrastructure builders, and aligned capital. If that’s you and the narrative below makes sense to you, email partners@openbgi.com. Details at the end...
For the last several years, I’ve opened up the annual AGI Research Conference — which I’ve been leading since 2006 — by saying that there’s never been a more exciting time to be working on AGI. The funny thing is that it’s been true every year, because things keep getting more and more exciting, and the rate at which they’re getting more exciting is itself accelerating. It’s frankly insane how fast things are unfolding now. I tried to take a week off to be in nature with my wife and kids last month, and during that one week Anthropic released Mythos, my own team at SingularityNET had a string of breakthroughs with the OmegaClaw agent framework, and three or four other things happened, any one of which would have been the headline of the month a couple of years ago. You can’t blink without some wild new development landing.
Every few months a wave of media nonsense washes through about how AI is slowing down, AI is over, the bubble has popped — and I always think, well, wait a couple more months and the next amazing thing will land and everyone will be excited again. This is exactly what you’d expect in the last few years before the singularity: things happening faster and faster and faster. Not everything — socks and forks are about the same as they always were; large parts of geopolitics haven’t budged in decades, unfortunately — but the technologies actually relevant to the singularity itself, to the advance of general intelligence toward superhumanity, are compounding on a curve that bends harder every quarter.
What I want to write about here is one specific shift within this overall dynamic, because I think it’s the most strategically consequential one of the last eighteen months, and I don’t think it’s been named clearly enough yet. And it’s extremely relevant to my own efforts at creating decentralized cross-paradigm AGI for the benefit of humanity and all sentient beings.
What just broke
The mainstream narrative about AGI two years ago was simple. Three or four well-capitalized labs in the US (and then, yeah, China…); one core architectural bet, scaled-up transformers; an arms race over compute; and a finish line measured largely in training budgets. Everything else was considered a footnote.
Most of us in the AGI R&D world never bought this story; we understood from the moment of its launch that ChatGPT, while genuinely amazing, was not an AGI architecture and not really even a cognitive architecture. In some abstract theoretical sense next-token prediction can be a route to superhuman general intelligence — given unlimited resources — but in practice, with the resource constraints we actually have, it isn’t. To build an AGI that works in the world, it may be valuable to use transformer neural nets as a key component, but they’re not the whole architecture, and not even the main one.
For a while, though, the mainstream tech world and the commercial AI world really did seem to be operating under the illusion that just scaling LLMs bigger and bigger was the road to human-level AGI. That illusion is now visibly fading. Even Sam Altman and others who’ve historically been very strongly transformer-pilled have come out saying we need more than just scaling — we need architectural changes, fundamentally better memory, different ways to ground reasoning in observation, and so forth.
There’s much less consensus on exactly what we need to get to AGI, or on whether the answer is a radical new direction or a few clever additions on top of transformers … but there’s now a decent subset of the AI and tech world that thinks AGI is probably coming soon — within one to five years — and that scaling transformers alone won’t get us there.
Cracks in the transformer-scaling orthodoxy
The cracks are visible at four different layers, and they reinforce each other.
The empirical crack is the most concrete. The compute-to-capability curve has flattened measurably across the last two model generations. Tenfold increases in training compute now produce roughly twofold improvements on benchmark tasks, and the improvements that actually show up are not the kinds of generalization that working AGI researchers care about. Reasoning over long horizons is improving slowly. Memory across sessions is being achieved by hacked-together scaffolding rather than by improving the foundation models themselves. Real-world grounding — the ability to track something as concrete as a complex molecular interaction across multiple sessions and predict its behavior under novel conditions — is not being improved by scaling and may not be improvable by scaling at all.
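To make the flattening concrete: if benchmark capability scales as a power law in compute, then a tenfold compute increase buying only a twofold improvement pins the exponent near 0.3, and watching that exponent shrink from one generation to the next is exactly what a flattening curve means. A back-of-envelope sketch, using only the rough figures quoted above (the contrast number at the end is an illustrative invention, not a measurement):

```python
import math

def implied_scaling_exponent(compute_ratio: float, capability_ratio: float) -> float:
    """If capability ~ compute**alpha, solve for alpha from one observed jump."""
    return math.log(capability_ratio) / math.log(compute_ratio)

# The rough figures quoted above: 10x training compute -> ~2x benchmark gain.
print(f"implied exponent: {implied_scaling_exponent(10.0, 2.0):.2f}")   # ~0.30

# For contrast, a hypothetical earlier-era jump where 10x compute bought 5x
# capability (an invented number, for illustration) implies a steeper curve.
print(f"steeper-era exponent: {implied_scaling_exponent(10.0, 5.0):.2f}")  # ~0.70
```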
Joelle Pineau, the former head of FAIR at Meta, put it about as plainly as anyone in her position can: the timeline to AGI through current methods is “a little bit longer than we thought.” That’s more polite than Yann LeCun’s “LLMs are an off-ramp on the path to AGI” … and it’s increasingly clear that the truth lies somewhere in between.
The talent crack is even more telling. Senior departures from frontier labs over the past twelve months have moved well past the level explainable by ordinary career churn. The chief research officer of OpenAI, the head of FAIR at Meta, the head of Amazon’s AGI lab, the head of AI at Databricks — all of them have left their roles in roughly the same window. Some have started new ventures, some have taken research-only positions at heterodox institutions, some are still in stealth. Dileep George, whose company Vicarious was acquired by Alphabet, is now leading Astera Institute’s $1B+ committed AGI research program — and Astera’s mission statement reads almost like a paraphrase of arguments my OpenCog colleagues and I were making 15-20 years ago. The departures are not random; they cluster around researchers who’ve spent enough time inside the scaling regime to be confident about its limits, and around bets that share a common feature: each is trying to build something scaling alone is not going to produce.
The regulatory crack pushes in the same direction. The EU AI Act, US executive orders, the emerging multilateral frameworks — all are converging on demands for open governance, auditable reasoning, institutional accountability, structural separation of safety from commercial pressure. None of these are demands closed proprietary labs are well-positioned to satisfy with their huge opaque LLMs trained on everybody’s data without consent. Governments and regulated-industry buyers are increasingly looking for AGI infrastructure they can verify rather than trust, and the supply side of that demand is currently almost empty.
The commercial crack is the one that compounds the rest. The capital intensity of pure-scaling approaches is forcing labs into financial structures that are increasingly hard to sustain. Compute commitments at the hundred-billion-dollar scale require revenue trajectories that even the best labs are struggling to articulate. The waste built into transformers — internal representations that store near-duplicate copies of the same thing over and over — is drawing backlash on both the sustainability side and the unit-economics side. It’s clear that if you can move toward general intelligence by abstracting knowledge more effectively, you can deliver the same or better functionality with less energy. But to do that you have to break from standard transformer architectures, which is exactly what the labs whose business models depend on those architectures cannot easily do.
Any one of these cracks would be a significant signal. All four together describe a paradigm in transition.
The substrate nobody is building
What comes next is less clear than what’s ending, but the requirements that are emerging across the heterodox research directions are now easy enough to enumerate, at least if you come from an AGI background. Among other things:
You need a memory substrate that persists across episodes — structured, retrievable, with full provenance — rather than context-window reconstruction dressed up as memory. (A toy sketch of this requirement follows the list.)
You need reasoning machinery that can compose primitives into novel behaviors and inspect its own intermediate steps, rather than producing answers from opaque single-pass inference, and that can deal with symbolic and quantitative data directly rather than via brittle bolt-ons.
You need a representational layer where structure and probability live together natively — where linguistic, symbolic, and quantitative knowledge can interoperate without translation losses.
You need an attention-allocation mechanism that distributes computational resources flexibly across goals over time.
You need language models — or something similar — that are trained in the context of agency, rather than agency being bolted onto models trained for non-agentic next-token prediction.
You need a self-modeling approach that enables a system to interpret its actions and interactions in the context of who it is and who it’s interacting with, rather than merely a brief, disembodied recent context.
You need a distributed execution layer designed for AGI workloads from the ground up, rather than commandeered from infrastructure originally built for matrix multiplication at scale.
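Since the first of these requirements is the easiest to make concrete, here is a deliberately minimal sketch of what "memory with provenance" means structurally. It's plain Python, with every name invented for this post; it is not the Hyperon API or anything OpenBGI has shipped:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MemoryItem:
    """One persisted piece of knowledge, carrying its own provenance."""
    content: str
    episode: str                 # which session/episode produced it
    derived_from: tuple = ()     # items this one was inferred from
    recorded_at: str = ""

class EpisodicStore:
    """Persists across episodes; nothing here depends on a context window."""
    def __init__(self):
        self._items: list[MemoryItem] = []

    def record(self, content: str, episode: str, derived_from=()) -> MemoryItem:
        item = MemoryItem(content, episode, tuple(derived_from),
                          datetime.now(timezone.utc).isoformat())
        self._items.append(item)
        return item

    def retrieve(self, keyword: str) -> list[MemoryItem]:
        return [i for i in self._items if keyword in i.content]

    def provenance(self, item: MemoryItem) -> list[MemoryItem]:
        """Walk the derivation chain back to the raw observations."""
        chain, frontier = [], list(item.derived_from)
        while frontier:
            parent = frontier.pop()
            chain.append(parent)
            frontier.extend(parent.derived_from)
        return chain

# An observation recorded in one episode grounds an inference made in a later
# one, and the inference can always be traced back to what it rests on.
store = EpisodicStore()
obs = store.record("compound X bound receptor Y at 3nM", episode="session-14")
inf = store.record("compound X is a plausible Y agonist", episode="session-31",
                   derived_from=[obs])
assert store.provenance(inf) == [obs]
```

The point of the toy is the shape, not the code: every remembered item is a first-class object with a source episode and a derivation chain, so "why do you believe this?" is always answerable, which is exactly what context-window reconstruction can't offer.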
These requirements are not specific to any one AGI paradigm or cognitive architecture. They’re cross-cutting requirements of the post-scaling research direction. Continual learning needs them. Long-horizon agent capability needs them. Active inference needs them. Neural-symbolic cognition needs them. Neuroscience-informed architectures need them. The biology-efficient compute thesis needs algorithms that map onto this kind of substrate naturally rather than through translation layers.
This substrate is not being built by any closed lab, and there are structural reasons for this beyond the tendency of big companies, VCs and venture-funded startups to follow trends rather than break out in new directions. Building a genuine human-level-intelligence-capable cognitive architecture is trickier in fundamental ways than building a next-token-predictor or a chatbot — even a very smart chatbot. If you’re constantly chasing short-term rewards it’s very hard to pursue an approach that has so many moving parts and whose big intelligence payoff comes only when all the parts are effectively working together.
What is cool, though, is that my colleagues and I have been working on this sort of substrate for several decades now – figuring out theory, building prototypes, publishing papers… and critically, during the last few years, finally building scalable infrastructure that will allow us to roll out our novel AGI approach at the needed magnitude.
We now have ingredients such as:
The Hyperon Atomspace — a typed metagraph for representing any kind of knowledge in a way that lets many different kinds of knowledge connect together flexibly
MeTTa, the programming language of thought … the AGI language that lets different kinds of knowledge interoperate efficiently in Atomspace
PLN, Probabilistic Logic Networks, the reasoning engine designed to work alongside neural nets and sensory data as well as structured logical and mathematical information (see the deduction sketch just after this list)
ECAN — Economic Attention Networks — an attention-allocation mechanism designed for distributing cognitive resources across diverse processes and applications
Evolutionary learning and algorithmic chemistry systems implemented in MeTTa and enabling radical creativity beyond training distributions
Distributed, decentralized infrastructure for Atomspace and MeTTa and everything implemented in it, developed by SingularityNET, NuNet, and ASI:Chain
OmegaClaw, an OpenClaw-esque agent with Hyperon components embedded, bringing effective long-term memory and symbolic reasoning along with agentic LLM capability
FabricPC, a software framework for predictive and causal coding neural models — an alternative approach to neural training and inference that is stronger than standard backpropagation for continual learning, transfer learning, lifelong learning, and other AGI-related capabilities
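Since several of these ingredients are abstract, here is one made concrete: the core rule PLN uses to chain Inheritance links, written out in plain Python. The strength formula is the independence-based deduction rule from the published PLN work; confidence values, the Atomspace representation, and the actual MeTTa implementation are all elided here, and the example numbers are invented for illustration:

```python
def pln_deduction_strength(sAB: float, sBC: float, sB: float, sC: float) -> float:
    """Strength of A->C given A->B and B->C, under PLN's independence assumption.
    sB and sC are node strengths (roughly, term probabilities); the confidence
    component of PLN truth values is omitted in this sketch."""
    if sB >= 1.0:
        return sC  # degenerate case: B covers everything
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# Toy example with invented numbers: Inheritance(cat, mammal) has strength 0.95,
# Inheritance(mammal, animal) has strength 0.98, P(mammal)=0.10, P(animal)=0.20.
s = pln_deduction_strength(0.95, 0.98, 0.10, 0.20)
print(f"Inheritance(cat, animal) strength ~ {s:.3f}")  # ~0.937
```

In the actual Hyperon stack this kind of rule runs as MeTTa code over Atomspace links rather than as a standalone Python function, but the arithmetic conveys the flavor: uncertain links chain into uncertain conclusions with explicitly tracked strengths.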
None of this was built in response to the post-scaling moment. It long predates it. The diagnosis the mainstream is now finally arriving at is the same one I’ve been making for what feels almost like centuries but has only been decades.
Why open, decentralized, and mission-irrevocable
Another thing I was concerned about decades ago that is only now getting mainstream attention is the little topic of AGI ethics.
I’m not extremely concerned with naive science-fictional predictions of inevitable AGI doom, nor do I think it will be a tragedy if AGI ultimately liberates humanity from working for a living (we can find better things to do with our time!). However, I do think that, even if the Singularity ultimately turns out to be near-utopic due to the abundance enabled by superintelligence, the transitional period from here to there may be more or less painful depending on how the path from here to AGI to ASI unfolds.
I don’t think we should trust any one party – including my own sweet self – with something as consequential as the first human-level or superhuman thinking machine. AGI should be developed freely and openly like Linux and the Internet. Of course there are risks with this, but a rational analysis pretty quickly indicates that the risks of the “proprietary AGI silos wrapped up in a geopolitical arms race” approach are much greater.
One interesting point is that there are both business and ethical arguments pointing in this same free and open direction.
The business argument is centered on procurement. Institutional buyers — governments, regulated industries, sovereign deployments, large enterprises — will hesitate to commit to AGI infrastructure they can’t audit, or that can be unilaterally redirected. Every JV partner, every government contractor, every biotech firm building therapeutic pipelines on top of the substrate is making a long-term bet on its integrity. If that bet can be undone by a single board decision or an acquisition, many of them simply won’t make it. Open and decentralized is not just idealism — it’s a robust way to earn the institutional commitment this sort of technology needs in order to scale.
The ethical argument is centered on an extremely obvious observation: AGI infrastructure that can be captured by a single faction will — with very high probability — be captured.
The last five years have been a controlled experiment on this, and the results are in. Regarding OpenAI, I could see from the vague statements about openness on their initial website exactly how open they were going to be … and I could see from the moment they pivoted to “capped profit” that what you had there was basically a commercial company plus a nonprofit with no money and no teeth, and that once the software started to work, the money would take over in its usual short-sighted way. Not because money is necessarily evil — it was one of our species’ fantastic inventions, for sure — but because money in the modern capitalist system tends to be short-sighted, and as the Singularity gets nearer it’s getting in many ways more short-sighted rather than less.
The same sort of dynamic, in less dramatic forms, has played out at every other major AGI lab. There’s no version of “trust the founders” that reliably survives sustained capital pressure at sufficient scale. The only approach that can robustly work to guide AGI toward ethical outcomes is structural — irrevocable governance, distributed control, and a commercial architecture that profits from openness rather than working against it.
The business and ethical arguments converge because they’re answers to the same underlying question: what organizational approach produces an outcome that’s durably beneficial across the multi-decade timeline AGI deployment will actually inhabit?
What we’re building, and what we need
All this is why I’m working with a number of colleagues to create a new AGI organization called OpenBGI — BGI for Beneficial General Intelligence.
OpenBGI will work closely with SingularityNET — the organization I founded in 2017 to provide decentralized infrastructure for AGI — but it’s a new entity with two pieces:
OpenBGI Foundation holds irrevocable governance for the beneficial-AGI mission and the open-source AGI R&D code (Hyperon and predictive-coding neural nets, to be developed together with SingularityNET and a broad open source community).
OpenBGI Labs handles commercial execution, with a JV model that enables vertical deployment across domains like medical, robotics, software development, and others to come.
SingularityNET and the ASI Alliance ecosystem will provide decentralized distribution and, where needed, token economics across the network.
The technical infrastructure we need now exists in early but scalable form, and my AGI-dev colleagues and I in the SingularityNET and Hyperon communities are working hard to build out a full human-level AGI cognitive architecture on top of it, guided by our long corpus of theory, plans and prototypes.
The operating organization that will deploy this infrastructure at the scale this moment requires is not yet built — and that’s the gap I’m forming OpenBGI, and writing this piece, to address.
OpenBGI is going to need broad participation from a lot of people, and I will issue a separate, broader call — to the global community of researchers, developers, and contributors who want to help build beneficial open AGI — sometime in the next few weeks. That call is the more important one in the long run; the substrate exists to be built on, and the people who will build on it number in the thousands.
This particular post looking for a “few good (actually super-awesome) humans” is narrower but, I believe, tactically critical — addressed to the smaller subset of contributors whose participation is hardest to source through ordinary channels, and whose early presence shapes everything that follows.
Among the many kinds of people we need to pull into the OpenBGI network at this formative stage are:
Senior operators who’ve built and exited a serious technology business — not necessarily an AI business — and who are looking to leverage their expertise for something unprecedentedly important.
Senior researchers in cognitive architecture, probabilistic programming, neuro-symbolic systems, active inference, or related lineages — species of AI that are out at the edges of the mainstream now but are rapidly going to become the mainstream.
Commercial leaders who’ve run regulated-industry P&Ls — biotech, financial services, defense, sovereign deployment — and who can structure JV partnerships that meet institutional procurement standards.
Decentralized infrastructure operators who’ve built and scaled real businesses in distributed compute, blockchain, or open-source coordination.
Capital allocators positioned to deploy meaningful capital — institutional or family-office — into the most strategically positioned bet in the open-AGI space, on terms that respect the mission.
People whose form of contribution this list doesn’t predict, but who recognize themselves in the argument above and the gap it describes.
I am casting the net pretty broadly here, because my goal is to learn from who shows up and what they bring. I believe I know how to build a human-level AGI … and not only that, one that will be broadly beneficial to humanity and other sentient beings … and will be able to progressively, rationally and ethically upgrade itself to superintelligence. However, I have been around long enough to realize that scientific, technical and conceptual knowledge is not all it takes to pull off a feat of this magnitude.
If you have experience in one of the areas mentioned above – or something comparable – please email us at partners@openbgi.com. Tell us, briefly:
What you’ve built, led, or funded that you consider your most consequential or interesting work.
What about the argument above resonates — and what if anything you think is wrong about it.
How you think you might be able to contribute to the OpenBGI effort: the Foundation, the Labs organization, or the broader initiative and network.
What kind of engagement would fit you best.
Every response that engages seriously with the ideas or opportunity presented here will get a serious reply.
On timing
I believe we could see the emergence of human-level AGI as soon as 12-18 months from now. It could easily be 36-48 months instead, and there are scenarios where things move faster still.
But if this thinking is anywhere near correct – and I’m not the only expert seriously considering such timelines – it means the window for OpenBGI to position itself as the free and open substrate underneath the post-scaling research direction is not super long.
We want to move rapidly and leverage what we’ve built so far toward the goal of making sure the first AGI on the planet is open, beneficial, and good for humanity and all sentient beings. That’s the reason for writing now, rather than waiting for the next AGI conference, or the next product release. The strategic moment is real, and it’s moving oh so rapidly, and we are very well poised to capture it, but we could use your help.
— Ben Goertzel