(Note for regular readers: My next couple posts will be back to AI and morphic resonance and so forth … don’t worry, this is not becoming a second-rate politics blog!!)
Having moved back to the US from overseas in 2020, I have started paying slightly more than zero attention to US politics (not too much fortunately, but more than while living overseas, which was basically just to grimace when people reminded me the US had elected a buffoon real estate tycoon as President).
One thought that has popped into my head recently is that the advent of human-level AGI (HLAGI) could potentially be a big enough shock to the zeitgeist to enable a radical-futurism-oriented candidate to prevail in a US Presidential election, and multiple Congressional elections as well.
But human processes often move more slowly than technological innovation, so if this moment is to be seized it will probably require some advance planning. This leads me to the conclusion that someone should start a Singularitarian US political movement sometime soon, at very latest in early 2025 right after the next US Presidential election is done.
Similar conclusions likely hold for other democratic nations but I’ll focus on the US here mostly for simplicity’s sake.
The Emergence of HLAGI Will Be (along with other more important things) A Major Political Opportunity
The premise for this line of brainstorming is: I believe it highly likely that, sometime in the next 5-15 years, we will see a breakthrough in Artificial General Intelligence technology, in which machines become capable of cognition at the human level or beyond.
Ray Kurzweil has presented predictive analysis suggesting 2029 as a likely date for emergence of human-level AGI. Of course there are too many uncertainties about such an event to pinpoint a precise date, but I believe his analysis is basically sensible, and this is essentially where my “5-15 years” estimate comes from.
As many readers will know, I have done my own extrapolations and analyses separately from Ray’s, but the gist in terms of the emergence of human-level AGI is the same. A difference between my analysis and Ray’s is that he suggests there will be a 16 year gap between the emergence of human-level AGI and a full-on Technological Singularity marked by massively superhuman AGI. My suspicion is this gap will be significantly smaller.
Along with various more interesting implications, I believe the emergence of HLAGI will present an opportunity for radical disruption of global political systems, in every country including the US. This disruption may be positive or negative in various aspects. With some forethought we can maximize the odds that it’s mostly positive, and that political shifts occur in a way that maximizes the odds of the Singularity coming out broadly beneficial for humans and other sentient beings.
In the US, I believe the emergence of HLAGI will present an opportunity for certain sorts of voices and perspectives far outside the current mainstream to finally find significant influence. The election of Trump as President showed that the electorate can do surprising and counterintuitive things – and even without a really extraordinary circumstance to nudge them. The emergence of HLAGI will be enough of a shock to the average person to open their minds to a variety of new possibilities – including the notion that effectively handling the new era of sentient technology may require a new and different breed of political leadership.
So what I suggest is some enterprising soul or group thereof should form a political movement with a primary motive of making sure the Singularity comes out as well as possible for everyone, and with a game plan of ramping up its activity significantly once the first dramatic breakthroughs toward HLAGI have been achieved. This means gathering significant membership and figuring out appropriate policies in advance, so that once an HLAGI breakthrough is reached, plans can be executed rapidly.
As an initial stab at a name — or at least an only moderately repugnant placeholder — I’ll call this potential new political entity the Movement for a Radically Better Future, or MoR Better Future for short.
MoR Better Future: Key Values and Principles
MoR Better Future as I’m imagining it would have a political philosophy combining key aspects of libertarianism and democratic socialism. I currently don’t see any other approach that has a plausible chance of getting us through a Singularity successfully, in the sense of improving life for almost all people rather than oppressing or leaving behind a huge number.
The core values I have in mind here, driving all the specifics I’ll propose, are Joy, Growth and Choice. Promoting these in a context of technological abundance basically requires some sort of libertarian democratic socialism, by which I mean a social contract that:
Supplies all humans and other sentient beings with enough resources of various sorts to get their needs met and allow them to ascend the Maslow Hierarchy toward profound self-actualization and beyond
Allows humans and other sentiences as much freedom to pursue their own ends as possible, consistent with item 1
Fosters both preservation of human life, and enthusiastic advancement toward new technologies and new modes of experience and living
Gives sentient beings a great deal of agency to decide how their society pursues the above principles
My interpretation of these items has among its consequences:
A path of strong though not absolute pacifism. Of course there are some circumstances in which violently forceful response is the approach leading to the overall most joy, growth and choice … but IMO these are few and far between (the military response to Hitler being the classic example), and much fewer than the number of military involvements in the world today.
A bias toward “green” policy and being friendly to the environment – not so much because of fear of global warming’s risk to humanity (though this is greater than zero) but because the Earth ecosystem itself is viewed as a sort of sentience with the right to grow and evolve in its own direction, rather than having so much destruction wrought upon it by humanity’s sloppy and shortsighted development choices
There is of course a little tension between the socialism of Principle 1 and the libertarianism of Principle 2. This is fine and dandy; this sort of tension is what drives growth, progress, complexity and excitement in all non-trivial complex systems. This tension must be ongoingly partially-resolved via the democratic decision-making implied by Principle 4, helped along by the power of advancing technology (Principle 3) to obsolete apparent dilemmas.
Decentralized Organization is Critical
It seems to me that to pursue these somewhat-radical-futurist values and principles in practice requires a strong government, but one that operates quite differently from any current one.
Basically, I think the problem with current government is generally not that it’s too big (though it might be in some countries), but rather that it’s too centralized and bureaucratic.
The modern, technological world calls for a far more decentralized, adaptive approach to administration — both in businesses and in governments. This is already the case today and will only become increasingly true as the Singularity approaches.
Privatizing government functions is not a solution to the problems of over-centralization and inflexibility. Corporate monopolies and oligopolies are no more flexible and intelligent than big government. US government corporate contractors are certainly no paragons of imagination and agility.
Rather, what we need is a government which institutionalizes flexibility and creative experimentation. By decentralizing its functions, and giving independence and control and resources to a wild diversity of smaller organizations, the US government will be able to draw on the full creative power and enthusiastic energy of human beings in America and elsewhere.
What I’m talking about is breaking down government bureaucracies into small units, which function with substantial independence, and which are judged mainly on results (usually over a period of years) rather than methods. Unsuccessful units are liquidated and their resources used to form new units, which vary creatively on the techniques of previously successful units.
Governance of this whole shebang happens via modern, decentralized forms of democracy, such as liquid democracy and reputation-based voting. There is plenty of detail to be fleshed out here, but also plenty of existing knowledge to guide this process. And the refactoring of structure within government doesn’t require constitutional amendments and such; we’re just talking about “internal matters”.
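The liquid democracy mentioned above can be sketched concretely. The following is a minimal, hypothetical sketch (not the algorithm of any particular existing system): each voter either votes directly or delegates to another voter; delegation chains are followed until they reach a direct vote, and chains that loop without ever reaching one count as abstentions.

```python
# Minimal liquid-democracy vote resolution (illustrative sketch only).
# Each voter either casts a direct vote or delegates to another voter.
# Delegation chains are followed to a direct vote; cycles that never
# reach a direct vote are treated as abstentions.

def resolve_votes(direct_votes, delegations):
    """direct_votes: {voter: choice}; delegations: {voter: delegate}.
    Returns a tally {choice: count} over all resolvable voters."""
    tally = {}
    voters = set(direct_votes) | set(delegations)
    for voter in voters:
        current, seen = voter, set()
        # Follow the delegation chain until a direct vote or a cycle.
        while current in delegations and current not in direct_votes:
            if current in seen:      # delegation cycle -> abstention
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            choice = direct_votes[current]
            tally[choice] = tally.get(choice, 0) + 1
    return tally

votes = {"alice": "yes", "bob": "no"}
delegs = {"carol": "alice", "dave": "carol",
          "erin": "frank", "frank": "erin"}
print(resolve_votes(votes, delegs))  # 3 'yes', 1 'no'; erin/frank abstain
```

In a real deployment delegations would be topic-specific and revocable, and reputation weighting could scale each resolved vote; this sketch only shows chain resolution and cycle handling.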
The MoR Better Future itself must also be structured in a decentralized way – it should be formally structured as a DAO, with a carefully designed DAO governance structure. In SingularityNET we are currently putting a lot of study into DAO governance as we are looking at gradually shifting more of SingularityNET’s operations into a set of DAOs.
Potential MoR Better Future Policy Directions
The specifics of MoR Better Future policy would of course be decided by the MoR Better Future DAO, i.e. by the members, not by the founders and certainly not by some crazy AGI scientist typing frantically in his blog late at night in between getting his real work done and going to sleep!
But that said, here’s a first stab at some policies that would seem to make sense to me in this context … in no particular order and definitely with many critical omissions:
Replace the National Anthem with the Jimi Hendrix version -- OK OK, I'm largely kidding on this one -- the current standard anthem is tuneless and militaristic, but there are more important problems to deal with …. I’ve always thought “This Land is Your Land, This Land is My Land” by Woody Guthrie would be a nice replacement anthem, actually…. But then I’m just an old hippie…
Avoid military actions and violence in general except in truly extreme straits. So often the US’s military adventures overseas just end up provoking further violence and chaos rather than improving the state of the world. A robust defense capability is still important at the current stage of history, but active use of violence should be restricted to cases of extreme dire obvious necessity. Confronting the Singularity with a vibe of pacifism rather than human-against-human violence is sure to be a better way to fill early-stage AGI minds with the right stuff.
Drastic decentralization of all government bureaucracies. As elaborated above. Diversity and freedom at every level is necessary in order for effective solutions to social problems to evolve.
Dramatically reduce the prison system. The US has a ridiculous number of people in jail, especially black people. There is a strong argument that in the vast majority of cases, other solutions besides imprisonment would provide much greater social benefit at less cost. Confronting the Singularity with fewer horrific stains like this on our collective human soul will be beneficial for all sorts of reasons.
Referendum-based decision-making with liquid democracy. The Internet gives us the ability to decide local and national issues by referendum rather than by votes within legislative bodies. It is foolish not to make use of this ability to implement direct rather than merely representative democracy. And liquid democracy makes this very feasible in practice.
Reform of the US voting process – basics like ranked-choice voting, nonpartisan primaries and independent redistricting commissions
Serious education reform, driven not by privatization, but by government funding of innovative schools, made possible by decentralized education administration.
A nationwide high technology initiative, driven by increased, decentralized government investment in pure research, and government provision of venture capital for hi-tech start-up firms. Investment in science and technology always provides higher quality of life, in the long term.
Democratization of cyberspace: the Internet, and whatever it turns into in the future, should not be controlled by big tech companies, and should not be heavily manipulated by governments or political parties … it should be regulated according to values such as data sovereignty, democratic decision making, diversity and freedom of speech
Free food, housing, internet and basic education for all. We are entering a realm of abundance, and leaving the era of scarcity behind. As automation gradually overtakes human labor in every industry, there is no other humane solution. Universal Basic Income is one possible mechanism here, though not necessarily the best or only tool.
Free university education for everyone. How else can we get a society intellectually capable of coping with the Singularity?
A complete overhaul of the national drug policy. A lot of drug use is bad, but it is not as bad as mass imprisonment and constant violence on the streets. Nearly all regulated drugs should just be legalized.
National, decentralized health care. The current health care system benefits only insurance companies and dehumanizing health care conglomerates. The Australian system is state-funded but decentralized and emphasizes individual choice. A move in this direction is long overdue.
Decentralize and democratize finance. Wall Street’s regulatory capture of US gov’t finance-related agencies is obscene. DeFi, utility and governance tokens and other crypto mechanisms should be broadly legalized with lightweight regulations aimed at protecting people from fraud without stifling innovation or restricting the general population from participation in advanced financial or other tokenomic instruments.
More national parks and such. Protect remaining wild areas from development. We are smart enough to develop technologies that produce wealth without destroying so many animals and plants. Once Singularity comes we will greatly value those natural regions that were left relatively unspoiled during earlier primitive times.
Practical Strategies for Getting Elected
So what would be the plan for MoR Better Future to actually get its chosen candidates elected to office, so as to have a prayer of getting all these nice policies (or whatever nicer ones are chosen by the members) implemented? Would it form a new political party (aiming to be a “third party” separate from Democrats and Republicans) or would it work within one (or both) of the two current major parties, pushing its candidates there?
The zeroth-draft potential MoR Better Future policies briefly zipped through above are certainly closer to Democrat than Republican overall – though they have some overlap with the thinking of the more libertarian corner of the Republican Party. (The evangelical-Christian and military-hawk Republicans will, I suspect, not consider MoR Better to actually be more better…) Given the degree to which the US political system is now rigged to reinforce the 2-party system, there is reason to question the wisdom of creating a third party rather than running a Democratic candidate on a MoR Better Future platform.
This “stick with the existing parties” perspective is sensible and may well be correct. However, the Democratic Party is beholden to so many special interests in such complex ways that implementing a MoR Better Future type platform within it would basically amount to a “beneficial takeover” of the party, similar to Trump’s “semi-hostile dada-ish takeover” of the Republican Party. It’s not impossible, but for sure the Democratic establishment would fight pretty hard against such a takeover – just as it did against Bernie Sanders’ attempt at a similar but much milder takeover. (Sanders’ platform deviated from Democratic orthodoxy, but not as far as MoR Better Future does.)
The premise on which this post is based is that the Singularity is a Really Big Deal – and the emergence of HLAGI may be enough to shock people out of their usual behavior patterns. Including the pattern of voting only for one of the two historically standard political parties.
That said, if a MoR Better Future organization were created, it would be possible at some point for it to put itself behind a Democratic candidate on a tactical basis. This would require a big tactical push for MoR Better Future members to register as Democrats in time for a specific election.
Why Not Leverage Existing Fringe/Futurist Political Movements?
Existing US third parties like the Libertarian Party and Green Party are not sufficiently Singularitarian to serve the role of MoR Better Future. Libertarians don’t like the more democratic-socialist aspects of MoR Better Future, and the Green Party is not adamantly pro-technology. MoR Better Future is definitely “green” but is not in favor of slowing down tech progress to save the environment – quite the opposite.
Specifically, the Green Party explicitly embraces the Precautionary Principle,
There is a risk that further rapid technological change will bring about new and catastrophic threats to human survival and flourishing, and to the natural world. In line with our moral obligations to future generations, the Green Party supports the creation of a law formalising the Precautionary Principle to be applied to technologies that pose a plausible risk of ecocide, catastrophe or human extinction. The Precautionary Principle applies especially to those risks where we are uncertain or ignorant of their magnitude or likelihood.
whereas MoR Better Future embraces the Proactionary Principle
People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people’s freedom to experiment, innovate, and progress.
My strong belief is that pursuing beneficial technologies in a proactionary way, even when they do have real risks, is the path with the highest odds of reducing species extinction and preventing dramatic near-term human-caused climate change.
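The proactionary imperative of favoring “measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value” is essentially an expected-value comparison. Here is a toy sketch with entirely made-up numbers, meant only to illustrate the shape of the calculation rather than to serve as a real risk model:

```python
# Toy expected-value comparison for a proposed technology restriction,
# per the proactionary imperative to weigh a restriction's own costs
# and opportunities foregone against the risks of allowing the tech.
# All probabilities and payoffs below are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; payoffs may be negative."""
    return sum(p * v for p, v in outcomes)

# Scenario A: allow the technology (some chance of harm, larger upside)
allow = [(0.10, -50.0),   # 10% chance of serious harm
         (0.90, 100.0)]   # 90% chance of large benefit

# Scenario B: restrict it (certain cost of the opportunity foregone)
restrict = [(1.0, -20.0)]

ev_allow, ev_restrict = expected_value(allow), expected_value(restrict)
print(ev_allow, ev_restrict)  # allowing has the higher expectation here
```

The point of the proactionary framing is that the restriction scenario carries real costs of its own, so both sides of the ledger must be estimated, not just the risks of the technology.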
Andrew Yang’s Forward Party has some interesting potential – Yang is not quite a Singularitarian but he’s a fairly radical futurist, and they are starting out with apparently several hundred thousand members, pieced together from existing political groups. Yet their positioning is blatantly “centrist”, whereas MoR Better Future is definitely not about sitting in the middle of Democrats and Republicans, but rather about creating a new mix of policy ideas – including some that look “extreme” relative to current politics – guided by a specific coherent future vision.
MoR Better Future is not centrist, it’s extremist relative to currently mainstream thinking, but it’s extreme in substantially different directions than conventional extreme-left or extreme-right thinking. It’s extreme relative to current standard thinking because the future after HLAGI is going to be extremely different than the current world.
There is a US Transhumanist Party but it’s a very small group, and seems committed to the label “transhumanism”, which has a problematic public image and feel and seems to doom the party to ongoing obscurity even as the mainstream gradually approaches its vision. Their big strength is that they are explicitly Proactionary and have a reasonably clear-headed view of the future, compared to the political mainstream. However, my experience with this group is that it’s really more about discussing interesting ideas than about trying to have dramatic direct political impact. Whereas the premise I’m advancing in this post is that there will come a specific moment at which a “fringe” movement like MoR Better Future could rise to power – i.e. when the world is in shock regarding dramatic advances in HLAGI. It follows that the core purpose of a Singularitarian political movement should be very concertedly to put itself in a position to grab this moment.
My off-and-on experience engaging in discussions with members of the US Transhumanist Party has also taught me that “transhumanism” is far from a complete political philosophy. There are plenty of hardcore political-libertarian transhumanists, and then there are progressive, more “leftie” transhumanists. My feeling is the political-libertarian transhumanists will fail to capture broad enthusiasm even as Singularity approaches, because the natural consequence of their ideology would be to leave the vast majority of voters to starve and suffer as AGI develops, watching it deliver its benefits nearly entirely to the upper classes. On the other hand the more progressive transhumanists generally embrace ideas fairly similar to MoR Better Future – but to the extent they insist on proselytizing these ideas with the language of “transhumanism” they will limit their reach even as the mainstream moves toward their conceptual positions.
Overall, while co-opting the Democratic Party may possibly be a necessary strategy in spite of its aesthetic impurity and substantial risks, none of the existing minor US political parties is strong enough to present a similar situation.
From 2025 to Singularity
The core goal of the MoR Better Future should be to elect the President and as many legislators as possible in the first rounds of elections AFTER a breakthrough to HLAGI has been made. Planning and organization-building before an HLAGI breakthrough will greatly increase the odds of effectively exploiting this breakthrough, because human processes generally take some time to congeal even in urgent and dramatic situations. (Look, for example, at the sluggish, confused and uncoordinated responses of all parties to the COVID-19 pandemic — and COVID is a much less extreme development than HLAGI, though obviously very different in nature, making the comparison complex.)
If the Kurzweilian HLAGI schedule is roughly correct, this would mean either the 2028, 2032 or 2036 US Presidential elections (if some huge HLAGI breakthrough occurs before 2024, it’s too late to do the needed political organization beforehand). This suggests that early 2025 is the time to start putting MoR Better Future into place.
If the HLAGI breakthrough occurs before mid-2027 (which I think is plausible, though it’s hard to precisely estimate the odds), then what is needed is to have a decent-sized membership and a fully functioning organization set up by mid-2027, so that at that point a decision “Democrat vs. third party” can be made and then needed actions can be taken.
To pull this off will require highly competent and passionate leadership in MoR Better Future (or whatever the movement ends up being named), if the challenge posed in this blog post is actively taken up. I’m certainly highly interested in being involved, but I don’t have time to play a key leadership role, as I need to spend my time actually making the leap to HLAGI happen so as to realize Kurzweil’s projected timing (cf. OpenCog Hyperon, SingularityNET-on-HyperCycle, etc.) … I’m a scientist, not a politician. But fortunately each year there are more and more people in the world comprehending the impending nature of the Singularity and understanding the sorts of things that need to be done to bias it toward beneficial outcomes….
Proactive is the Way, People!
The counterargument I most expect to the thesis of this post, from folks who generally accept a Singularitarian view of the world, is basically: “Yeah yeah, but people are reactive rather than proactive. Once HLAGI is here you’ll be able to get people excited about reforming political systems to smooth the path between HLAGI and Singularity. But nobody’s going to take this seriously in advance. Neither you nor Kurzweil nor anyone really knows when HLAGI is going to come anyway….!”
I know full well most people aren’t going to put any effort into processing the practical implications of HLAGI until it hits them in the face (or takes their job, or steals their wife or husband, or sends them a nanoassembler to play with, etc.). However, in a land of the shortsighted, the ones with long-range as well as close-up vision have a substantial advantage — and may even become queen or king. In recent global economic history, we’ve seen China, South Korea, Singapore and Taiwan dramatically increase their wealth and influence during the past few decades, largely due to their straightforward yet atypical pursuit of advance planning — proactivity rather than reactivity. In a similar way, I believe that a moderate dose of proactivity in regard to the political opportunities to be presented by the advent of HLAGI may be able to go a long way.
I mean — I know “Someone Shoulds” are usually lame and don’t get picked up on. But I’m an optimist, right? ;=)
The thinking in this post may seem terribly out-there compared to the preoccupations of the current political sphere. But don’t forget, as Singularity gets nearer, we are seeing an exponential reduction in the time-lag between a futuristic vision being broadly considered insane or off-base and the world suddenly switching to considering it commonsensical. This process often takes just years now, not even decades! (And of course, for futurists in prior centuries, the lag between rejection and acceptance of wild-eyed ideas was generally longer than their lifespan.) Anyway, my hope is that while HLAGI is indeed not here yet and nobody can predict the exact year of its arrival, the viability of HLAGI is now widely enough accepted that some nontrivial fraction of people may be able to start wrapping their brains around other related issues, like the political potentials and risks it’s going to present.
One can also give these matters a slightly darker twist. I prefer to focus on positive rather than negative aspects, but I’ve also noticed that most people are more effectively motivated by a combination of fear and desire than they are by pure desire. So for this reason and for sake of intellectual honesty and thoroughness I will close this post by mentioning some of the risks that we run if we Positive Singularitarians fail to grasp the political moment when HLAGI is launched.
Look at the absurd, nasty, stupid reactions we’ve seen around the world to rising immigration … look at the patent incompetence with which the COVID-19 pandemic was handled in nearly all major countries (the rapid rollout of vaccines being pretty much the only bright spot). Look at the narrow-minded fascists and quasi-fascists rising to power in one after another democratic nation … not even to mention, say, the senseless current war in Ukraine. The advent of HLAGI is going to scare quite a lot of people, and many fascists and religious dogmatists and militarists and so forth are going to try to take advantage of this and exploit peoples’ fears (stoked by endless Hollywood movies painting dystopian pictures of AI) to secure their own domination, status and wealth. I believe this can be countered but countering it in a hasty way is going to be much more challenging than doing so with advance planning.
I wrote an article last year recounting the failure of myself and everyone else to effectively use AI and blockchain to help humanity cope with the COVID-19 pandemic. There were so many ways that AI and blockchain could have been used to help save lives and preserve the effective functioning of society and so forth. But the time required to build and roll out and market the relevant software, and the effort to convince political decision-makers to look at things a little differently, ended up causing all efforts in this regard to fall flat. For the next pandemic things may be better, because some of the AI and blockchain software that was created to help respond to COVID-19, while it was too late for COVID-19, will still be around to help with the next one. But the advent of HLAGI is not another pandemic — it’s a different sort of unprecedented event. The similarity to COVID-19 is that to grab the opportunities it provides, in a purely on-the-fly manner without advance preparation, may end up being infeasible due to the crazy rapid-fire way events tend to unfold in the context of major transitions.
Which brings me back to the title of the post — Someone Should Start a Serious Singularitarian Political Movement — By Early 2025 At Latest
P.S. The reader who has made it this far is now treated to the mildly-amusing fact that some of the text on this page was derived from a Web page I put up in 1997 or so titled “Ben Goertzel for President in 2004”. Since that time I have passed the minimum age of 35 required to become US President, but have also long ago lost the desire to hold such an office. I would happily take the job of Chief Scientist of the US though, and see someone else more extraverted than me — and less busy working directly on AGI — be President and do more of the important but tedious politicking. Anyhow, given how popular concepts of “decentralization” are today compared to the late 1990s, it’s kind of cool to look back and see that I was already proposing a decentralizationist political revolution way back then…! Yes, decentralization was a thing before Bitcoin. In my 2001 book “Creating Internet Intelligence” I was already proposing secure decentralized globally distributed Internet AI, coupled with decentralized governance… and none of these ideas were totally new when I put them forth either. The world is catching up with my 30-year-old self, little by little. But of course my 55-year-old self has also moved way ahead of my 30-year-old self in a whole bunch of directions… but those are topics for other blog posts…