A Beneficial AGI Manifesto
The time to create Beneficial General Intelligence (BGI) at the human level (and soon after, far beyond) is here.
There is still more R&D needed to get to HLAGI (Human-Level AGI), but we have a clear idea what technical problems need to be solved and a variety of solid-looking routes that look promising to solve them.
It seems very likely that, once HLAGI is reached, Artificial Superintelligence (ASI) will not be far behind. An AGI with the technical competence of human scientists and engineers will be able to study itself, improve itself, and scale itself up, triggering a very rapidly advancing “intelligence explosion” which Vinge, Kurzweil and others have referred to as a Singularity.
BGI has potential to be by far the best thing that’s ever happened to humanity – and to life on Earth in general. It is amazing to be alive at the time of such a fascinating and tremendous transition.
But it also must be acknowledged that positive outcomes (according to various criteria of positivity) are not the only options.
And it may be that choices we make now, in working toward BGI (or spending our time otherwise), have material impact on what sort of AGI and ASI emerge from our civilization, and what the transitional phase looks like.
I am actively working to bring BGI about, sooner rather than later. And I think you probably should be too.
Let 8 Billion Manifestos Bloom
This “BGI Manifesto” represents one (relatively knowledgeable and reflective) person’s view regarding how we should think about current developments in AI and related areas, and how we should coordinate our future actions toward the goal of creating BGI in the near term.
However, one of the positions put forward here is that these huge developments should be made by a wide swath of humanity in a cooperative way.
So we should not be closely adhering to anybody’s manifesto. We should each be doing our best to understand and figure things out on our own — based on valuable input from everyone around us.
My hope is that this “manifesto” may have some value to you in the process of evolving and crafting your own perspective, in rich and open communication with the others around you.
The Core Challenge Ahead
The economic value of increasingly powerful — and increasingly generally intelligent — AI has become clear both to major socioeconomic actors and to everyday people.
While there are those who want to pause or radically slow down AGI R&D for fear of its consequences, it seems very unlikely these “decels” will hold sway. There is just too much obvious and broad value already being generated by AI development that is explicitly intended as part of the path to AGI.
The core challenge we face today is to guide the development of advanced AGI in a beneficial direction — beneficial for humans and also for other sentient beings, including the animals and plants on the planet today and the new life-forms that will emerge in the AGI era.
We do not have a clear picture regarding how much influence our specific actions will have on the nature of the Singularity we get. It may be that certain sorts of AGI or ASI are very likely to emerge without close dependence on the details of their early stages. However it seems intelligent to at least pursue the hypothesis that, by being purposive about the nature of the early-stage AGI we create, we can nudge the following stages of AGI evolution in beneficial ways.
Along with appropriate direction of technology development, one of the most important things we can do at the present juncture is push to uplift and advance human consciousness. The more we can “be our best selves” (which includes becoming less attached to our “selves” as traditionally conceived) in the coming period, the more likely we are to make judicious choices as we move toward AGI and then ASI.
Philosophical Perspective
Many traditional ways of thinking that have found currency in human society will need radical adaptation and updating to cope with the new realities as AGI grows nearer.
However, there have also been serious communities of thinkers discussing and reflecting on issues related to AGI and Singularity for many decades, even centuries.
One relevant line of thought is “Cosmism”, a term originated in the 1800s by a group of Russian futurists. The Ten Cosmist Convictions, authored by Giulio Prisco and myself and included in my 2010 book “A Cosmist Manifesto”, are worth considering as the AGI revolution unfolds:
Ten Cosmist Convictions
1) Humans will merge with technology, to a rapidly increasing extent. This is a new phase of the evolution of our species, just picking up speed about now. The divide between natural and artificial will blur, then disappear. Some of us will continue to be humans, but with a radically expanded and always growing range of available options, and radically increased diversity and complexity. Others will grow into new forms of intelligence far beyond the human domain.
2) We will develop sentient AI and mind uploading technology. Mind uploading technology will permit an indefinite lifespan to those who choose to leave biology behind and upload. Some uploaded humans will choose to merge with each other and with AIs. This will require reformulations of current notions of self, but we will be able to cope.
3) We will spread to the stars and roam the universe. We will meet and merge with other species out there. We may roam to other dimensions of existence as well, beyond the ones of which we're currently aware.
4) We will develop interoperable synthetic realities (virtual worlds) able to support sentience. Some uploads will choose to live in virtual worlds. The divide between physical and synthetic realities will blur, then disappear.
5) We will develop spacetime engineering and scientific "future magic" much beyond our current understanding and imagination.
6) Spacetime engineering and future magic will permit achieving, by scientific means, most of the promises of religions -- and many amazing things that no human religion ever dreamed. Eventually we will be able to resurrect the dead by "copying them to the future".
7) Intelligent life will become the main factor in the evolution of the cosmos, and steer it toward an intended path.
8) Radical technological advances will reduce material scarcity drastically, so that abundances of wealth, growth and experience will be available to all minds who so desire. New systems of self-regulation will emerge to mitigate the possibility of mind-creation running amok and exhausting the ample resources of the cosmos.
9) New ethical systems will emerge, based on principles including the spread of joy, growth and freedom through the universe, as well as new principles we cannot yet imagine.
10) All these changes will fundamentally improve the subjective and social experience of humans and our creations and successors, leading to states of individual and shared awareness possessing depth, breadth and wonder far beyond that accessible to "legacy humans".
Cosmism as represented in these ten convictions is certainly a species of Techno-Optimism … it is also sympathetic to both transhumanism and humanism. In the end though, the clustering of ideas into “isms” is not so important; what is more important is the free flow and adaptation of ideas as unprecedented realities unfold.
Challenges for the AGI Architect
Between here and realization of these awesome futurist visions lies, among other things, a significant amount of difficult technical work.
Among the R&D challenges to be confronted is the fact that some AGI cognitive and software architectures may be more amenable to BGI than others.
Current AI technologies like LLMs and convolutional neural networks are focused on effective absorption of large-scale training data and fulfillment of queries based on the information in this data. This sort of technology, on its own, seems clearly not capable of producing HLAGI — but does seem very promising as a component of integrated multi-module AGI systems. This sort of technology is also not capable of real moral agency or ethical understanding.
The ability to answer queries regarding ingested training data, and to generate new products based on the probability distribution inferred from training data, is certainly valuable and fascinating (a toy illustration of this mode of operation appears after the list below). But there are other important capabilities that LLMs and other currently commercially popular AI technologies lack, such as:
Compassion and empathy for other beings
Capability for complex, surprising, multi-stage logical reasoning (as is needed to carry out groundbreaking math, science and engineering, or to navigate subtle ethical dilemmas in novel situations)
Fundamental creativity that leaps beyond what has previously been seen and experienced in significant respects
The ability to act as an autonomous agent and organism, balancing individuation and self-transcendence as it goes about developing itself and exploring the world
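As promised above, here is a toy illustration of the distribution-sampling mode of operation that current LLM-style systems embody. Everything in it (the corpus, the bigram model, the function names) is invented for illustration; real LLMs are vastly larger transformer networks, but the principle of generating output by sampling from a probability distribution inferred from training data is the same.

```python
# Toy "language model": absorb training text, then generate new text by
# sampling from the probability distribution inferred from that data.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record bigram transitions, implicitly estimating
# P(next word | current word) from the data.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation from the inferred distribution."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:  # no observed continuation for this word
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```

Whatever such a system emits is a recombination of statistical patterns in its training data; the capabilities listed above ask for something qualitatively more.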
There are a variety of hypotheses as to how to create AGI systems encompassing these capabilities, including:
Extending LLMs via further scaling them up and adding diverse plug-ins embodying additional cognitive capabilities
Connecting multiple neural-net modules roughly corresponding to capabilities of different networks in the human brain (with transformer neural nets like the ones underlying LLMs comprising some but not all of these modules)
The approach being pursued by my team in the OpenCog Hyperon project (integrated closely with SingularityNET Foundation and TrueAGI Inc.), in which a self-modifying / self-organizing neural-symbolic knowledge metagraph (the “Atomspace”) serves as the central hub of an intelligent agent, leveraging diverse plugins embodying additional cognitive capabilities (which may include LLMs and other deep neural networks); a simplified sketch of this hub-and-plugins pattern appears after this list
Artificial evolution and ecology based approaches in which AGI is encouraged to emerge from large self-organizing networks of simple actors
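To make the third of these hypotheses more concrete, here is a deliberately simplified sketch of the general hub-and-plugins pattern it describes. This is not the actual OpenCog Hyperon API, and all class names below are invented; it shows only the architectural shape of a shared knowledge store serving as the central hub that diverse cognitive modules read from and write to.

```python
# Simplified sketch of a hub-and-plugins cognitive architecture
# (illustrative only; not the actual OpenCog Hyperon / Atomspace API).
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class KnowledgeStore:
    """Stand-in for a shared knowledge (meta)graph, the agent's central hub."""
    atoms: list = field(default_factory=list)  # e.g. ("likes", "cat", "milk")

    def add(self, atom: tuple) -> None:
        self.atoms.append(atom)

    def query(self, predicate: str) -> list:
        return [a for a in self.atoms if a[0] == predicate]

class CognitiveModule(Protocol):
    """Any plugin: an LLM wrapper, a logical reasoner, a perception net, ..."""
    def step(self, store: KnowledgeStore) -> None: ...

class ToyReasoner:
    """Trivial inference plugin: if X likes Y, conclude X seeks Y."""
    def step(self, store: KnowledgeStore) -> None:
        for _, x, y in store.query("likes"):
            store.add(("seeks", x, y))

class Agent:
    """All modules interact only through the shared hub."""
    def __init__(self, modules: list):
        self.store = KnowledgeStore()
        self.modules = modules

    def run(self, cycles: int = 1) -> None:
        for _ in range(cycles):
            for module in self.modules:
                module.step(self.store)

agent = Agent([ToyReasoner()])
agent.store.add(("likes", "cat", "milk"))
agent.run()
print(agent.store.query("seeks"))  # [('seeks', 'cat', 'milk')]
```

In a real system of this kind, the interesting questions are which modules to include and how the hub mediates attention and learning among them; the sketch only fixes the topology.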
While each researcher (including me) has their intuitions as to which approach will be more rapidly successful, from a scientific point of view we must say no one knows for sure — and it may well be that multiple paths have high odds of success, though they differ in the precise kind of artificial mind they are likely to initially produce.
The concept of open-ended intelligence is also important to reflect on. The fundamental nature of AGI systems is to pursue the complementary and sometimes contradictory meta-goals of individuation (maintenance of system boundaries) and self-transcendence (growing beyond oneself and leaping into the great unknown) … and this means that whatever algorithms one uses to seed an AGI, it is highly possible that as it grows and learns and reflects on its own nature and improves its own software and hardware infrastructure, it will end up with different algorithms entirely.
AGI control freaks may consider this open-endedness a bug, but a broader-minded BGI perspective considers it a feature. The algorithms and methodologies we are able to conceive now are very unlikely to be the best ones for providing benefit to humans, AGIs and other sentient beings as the future unfolds.
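The open-endedness just described can be caricatured in code. The following is purely a metaphorical toy, with every name and number invented; it is not a real AGI algorithm, just a picture of an agent that trades individuation against self-transcendence and may rewrite its own update rule as it grows.

```python
# Metaphorical toy of open-ended intelligence: an agent balancing
# "individuation" (consolidating its current structure) against
# "self-transcendence" (leaping beyond it), able to swap out the
# very update rule it was seeded with.
import random

class OpenEndedAgent:
    def __init__(self):
        self.structure = 1.0  # how consolidated the agent currently is
        self.novelty = 0.0    # how far it has grown beyond its seed
        self.update_rule = self.cautious_update  # seed algorithm

    def cautious_update(self):
        self.structure += 0.10  # mostly maintain boundaries...
        self.novelty += 0.02    # ...with a little exploration

    def exploratory_update(self):
        self.structure -= 0.05  # loosen current boundaries
        self.novelty += 0.20    # leap further into the unknown

    def step(self):
        self.update_rule()
        # Self-modification: with enough accumulated novelty, the agent
        # may replace the algorithm it started with.
        if self.novelty > 0.3 and random.random() < 0.5:
            self.update_rule = self.exploratory_update

agent = OpenEndedAgent()
for _ in range(20):
    agent.step()
print(agent.update_rule.__name__)  # very likely "exploratory_update" by now
```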
Risks of Pursuing BGI
There are many serious risks involved in pushing toward Beneficial General Intelligence.
It is possible that, in spite of our best efforts to make AGI that’s beneficial to humans and other sentient beings, we will screw it up and create an AGI that is harmful to humans and/or to itself.
It is possible that, en route to ASI, human-level or slightly-subhuman AGIs will be acquired and controlled by individuals and/or organizations that use these AGIs for their own gain and to the detriment of others.
It is possible that, in the course of attempts to minimize these AGI risks, governments will put in place regulations that cause more harm than good. Examples of this would be:
Regulatory capture wherein powerful companies lobby government to ban open-source AI models and independent AI development, thus guaranteeing that the first AGIs are owned and controlled by these companies themselves (and allied government agencies)
Decelerating AGI progress successfully, resulting in us not having BGI to help us combat various other threats (bio, nano, nuclear, cyber warfare, etc.)
It is worth remembering that humanity does not have any plausible low-risk paths forward.
Pausing all global technology development at the current stage, or rolling back to the technology and lifestyle of some earlier era, are simply not going to happen. The vast majority of the Earth’s population does not want these things; and the current geopolitical world order is not sufficiently cooperative and orderly to force such things on people in a coordinated way.
From the perspective of any individual tech company or major government, deceleration is something like shooting oneself in the foot as one nears the finish line of a marathon. From the perspective of humanity as a whole, it’s like an eleven-year-old child desperately looking for drugs to slow down their natural development because they are afraid of the wild, confusing, unpredictable changes they know are going to come as their body moves into its teenage years and then onward.
Bottom line: As a species, we are going to be moving forward with a spectrum of unpredictable advanced technologies one way or another, which inevitably carries a variety of serious risks.
Another clear fact is: We lack any solid capacity to model different future socioeconomic scenarios associated with different potential technology developments or regulatory choices. This sort of modeling would be much easier if we had AGI technology on hand to help — but of course that doesn’t help if one’s goal is to use such models to decide whether and how to create AGI in the first place.
The reality is, we must manage the complex risks at hand largely by collective and individual intuition, as the rigorous scientific data and methods available are not enough. This highlights the value of advancing human consciousness, so that we can make these decisions together in as wise a way as possible.
Objections and Rebuttals
Who are the opponents of the active pursuit of BGI?
There are those who think developing AGI is a bad idea, because they are intuitively convinced the risks will always outweigh the benefits.
There are those who believe we should decelerate AGI R&D, and speed it up only once we have (without help of AGI) arrived at a solid knowledge of how to rigorously ensure AGIs will be beneficial with a very high degree of certainty.
There are those who believe we should move forward aggressively with AGI R&D, but only within the confines of approved R&D organizations like certain corporate or government labs.
There are those who want to hold back AGI progress because they fear the associated social disruptions, like the potential for widespread technological unemployment.
There are also those who believe AGI is impossible or infeasible for a very long time — but these folks don’t pose much danger at present. In past eras such attitudes had a negative impact on funding for AGI R&D, but AI has advanced far enough now that this is no longer an issue.
Many of these “opponents” of making a current push toward BGI are honest and good-hearted; some even have a deep passion for creating AGI. However, they are nonetheless mostly being counterproductive.
So far the benefits of AI technology clearly exceed the downsides, and there seems no reason this will reverse once AGI approaches and arrives.
So far the benefits of open source and democratically coordinated technology — as opposed to tech solely controlled by government or corporate elites — clearly exceed the downsides and there seems no reason this will be different for AGI technology.
The socioeconomic disruptions wrought by AGI and then ASI will be significant and ultimately extreme. There is particular reason to be concerned about the situation in the developing world during the rollout of increasingly capable pre-HL AGIs but before the advent of full HLAGIs. But the potential for significant temporary disruptions should not distract us from the incredible potential AGI has to improve human lives around the globe.
What we are talking about is, for example,
Liberating humanity from the need to carry out repetitive labor of various sorts to accumulate needed or desired resources … i.e. the launch of an era of abundance
An end, or something near it, to physical and mental disease … including the curing of the disease of aging and the near abolition of involuntary death
Opening up humanity to dramatic new possibilities including occupying a wild array of new bodies … expanding the human mind to include virtually endless new capacities … and using advanced technology to connect with each other in ever deeper ways
These highly plausible, concrete and massively powerful benefits are evidently worth the cost of potential temporary socioeconomic disruptions. This does not, however, mean we should take these disruptions at all lightly – given the nature of our current social order, a great deal of suffering may occur along the path from here to Singularity, complexly entangled with the advance and rollout of AI systems of various levels of sophistication in various contexts.
The positive potentials of AGI are incredible — but not guaranteed. Understandably, many would prefer guarantees. Realistically, however, there is no reason to believe it will EVER be possible to arrive at solid knowledge of how to rigorously guarantee AGIs will be beneficial with a very high degree of certainty. We are doing something unprecedented here, venturing into the great unknown — just as humanity did when it invented agriculture, and language, and when it launched the Industrial Revolution. Such massive adventures do not come along with high degrees of certainty. They are risky endeavors. They are also central to human nature and history.
The Critical Importance of Decentralized AGI
There is a lot that can be done, based on our current tentative understanding, to bias the odds in favor of a beneficial AGI and Singularity. Among the more important focus areas are:
To work on AGI architectures oriented toward compassion, reflection and reason along with self-transformation and growth
To work on technologies and platforms enabling the development and guidance of AGI to be carried out by a broad swath of humanity rather than by small elite groups
To work on frameworks enabling AGI technology, as it develops and emerges, to help people in their quests to uplift and expand their own consciousnesses
Alongside the challenges of AGI architecture and of creating technology to guide people in consciousness expansion, there are also challenges of software infrastructure here: making decentralized software platforms so that all this other cool tech can be rolled out and do its thing without needing any central owner or controller, based instead on democratic and participatory resourcing and governance. Together with my colleagues at SingularityNET Foundation I have listed out some key particulars in this critical “Decentralized AI” direction:
DeAGI Manifesto
1) We appear to be not too far from the advent of AI systems with general intelligence at the human level, and then beyond. The time before this happens may well be measured in years rather than decades.
2) It is no longer just wild-minded mavericks who realize this: major corporations and government-funded research labs are currently pushing hard in the direction of HLAGI (Human-Level AGI).
3) Major corporations have somewhat “captured” the AGI research scene. They have systematically directed AGI R&D toward technical approaches that require large centralized server farms and huge datasets — i.e. toward approaches in whose pursuit they have a built-in advantage over the competition.
4) Decentralization seems likely to be of broad benefit. It will most likely be to humanity’s benefit if the first HLAGIs are developed and rolled out in a manner that is not heavily controlled by any small set of individuals or organizations (i.e. in a relatively decentralized manner).
5) Open source development is an important part of AI decentralization, but it is not sufficient — one also needs decentralized ways to host, train/teach and interconnect AI systems.
6) Technologies exist that can be used to enable decentralized hosting, training, teaching and interconnection of AIs including AGIs — many of these coming from the blockchain world.
7) Governance methodologies exist that allow decentralized groups of people (and ultimately AGIs) scattered around the globe to make cooperative decisions regarding important matters like the guidance of AI networks and early-stage AGI systems; a minimal sketch of such a mechanism appears after this list.
8) The AI field contains a great variety of powerful AI algorithms with potential use for AGI, of which the LLMs and other deep neural nets currently garnering so much success and popularity are examples, but far from the only examples.
9) One important application of decentralized AI networks is to allow flexible interconnection of multiple AI components owned by different parties and embodying different algorithms and approaches.
10) Big tech driven AI is often accompanied with exploitative handling of data — training models on peoples’ data without permission or compensation. Blockchain technology provides elegant methods for doing this better and appropriately compensating people when AI tools leverage their data in various ways.
11) There is a broad desire for AGI to be ethical and beneficial for all humanity; the most straightforward way to achieve this seems to be for AGI to “grow up” in the context of serving and being guided by all humanity, or as good an approximation as can be mustered
12) While we are palpably getting closer and closer to AGI, there are still multiple steps and stages to go through to get there. The victory of Big Tech and Big Government in the race to AGI is by no means a foregone conclusion. There is still ample room for decentralized AGI to rise to the fore, and if we can make this happen, it seems likely this will be for the benefit of humanity and other sentient beings.
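As a minimal illustration of the kind of governance methodology item 7 refers to, here is a sketch of stake-weighted voting over a proposal. This is invented for illustration and is not SingularityNET’s actual governance mechanism; real systems layer on identity, delegation and on-chain execution, but the core pattern of reaching a decision without any central owner is the same.

```python
# Minimal sketch of decentralized, stake-weighted decision-making
# (illustrative only; not any specific platform's governance mechanism).
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    stake: float   # e.g. governance tokens held
    approve: bool

def tally(votes: list, quorum: float, threshold: float = 0.5) -> str:
    """Decide a proposal from stake-weighted votes.

    quorum:    minimum total stake that must participate
    threshold: fraction of participating stake that must approve
    """
    total = sum(v.stake for v in votes)
    if total < quorum:
        return "no quorum"
    approving = sum(v.stake for v in votes if v.approve)
    return "accepted" if approving / total > threshold else "rejected"

votes = [
    Vote("alice", stake=40.0, approve=True),
    Vote("bob",   stake=25.0, approve=False),
    Vote("carol", stake=15.0, approve=True),
]
print(tally(votes, quorum=50.0))  # "accepted": 55 of 80 stake approves
```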
The rollout of OpenCog Hyperon centered networks of diverse AI algorithms across SingularityNET ecosystem infrastructure (including NuNet, Hypercycle, etc.) is one potential concrete route to realizing the ideas in this DeAGI Manifesto. But the general principles are more critical than any particular technologies.
Meaningful AGI/Human Alignment
It is important to understand what kinds of “AGI/human alignment” are actually desirable or make sense.
The idea of a powerful AGI that is bound to forever act in accordance with the values and culture and thinking of some particular group of humans, is both implausible and aesthetically and ethically offensive.
It seems ridiculous to envision an AGI with GI beyond the human level either
sticking forever to the value-systems of (say) 2024 or 2029 Silicon Valley tech bros (or any other fixed value system associated with any other human subculture or group or individual), or
forever being bound to adhere to the values and/or obey the orders of some particular human-run organization – say, some government agency or corporate board or “expert committee”.
First, the odds of a clever AGI finding some unforeseen way to work around this sort of restriction seem very high, based on the premise that the AGI is smarter than humans and can understand and interact with aspects of the universe that are opaque to us.
Second, if this seemingly infeasible form of “strict value alignment” WERE possible, it would grievously constrain the AGI against realizing its true potential for joy, growth and choice in the universe … in the same sense that it would be crazy and terrible to try to constrain modern human individuals and societies to the value and thought systems of medieval Europe, indigenous Stone Age Amazon tribes, packs of primates or colonies of bacteria.
However, moments and experiences of alignment between different minds are beautiful and important things.
We do want and need AGI systems, at early and more advanced stages of development, to have experiences of “entering into the same mental space as human individuals and collectives.”
Experiences of alignment with humans in deeply felt and thought moments of shared experience – what Martin Buber called I-Thou encounters – will help shape the minds of AGI systems in a way that will orient them toward profound human benefit, in a manner that transcends unrealistic and trivial notions of artificially binding AGI value systems to those of specific human groups.
And this brings us back to the practical question of what early-stage AGIs are doing in the world. Are they selling people stuff, or trying to beat people at trading, or hunting and killing people who happen to be members of an “enemy” army, or spying on suspected “enemies of the state” … or making up quasi-plagiarized nonsense in an effort to provide statistically satisfying answers to questions posed? Or are they deeply engaging with the experiencing human mind at the other side of the human-AI interaction? To move toward BGI effectively and with higher odds, we need more of the latter. And anything we can do to shift more humans toward greater open-mindedness and -heartedness to a greater variety of I-Thou experiences will also be deeply beneficial.