When one lives in a regime of scarcity with regard to some critical resource, it’s next to impossible to avoid one’s whole way of thinking and living becoming shaped around this scarcity. One has a hard time foreseeing how one would do things in a regime of abundance of that resource, and conceptualizing the differences between feasible and unrealistic amounts of abundance.
This holds for familiar forms of material scarcity (food, housing, toys, etc.), but also for psychological scarcity (scarcity of love, friendship…) and — the main point at hand here — scarcity of the compute resources needed to power cognition. Minds operating with a relative abundance of compute resources could potentially use cognitive methodologies quite different from those we’re used to, obsoleting some of the dilemmas and pathologies that concern us both in everyday life and in our thinking about the future of AI.
Thinking about the ways cognition will improve as resource restrictions are lifted militates toward an optimistic rather than fearful perspective on the prospect of an AGI-powered future.
Scarcity Thinking / Abundance Thinking
I had the experience of living in US urban ghetto environments for a while, and then relocating to more affluent areas without a culture of everyday theft of physical objects. It took a while to adapt psychologically to the fact that locking the door of my house was no longer necessary. It’s not that there were no poor people around in, say, Rockville, Maryland (where I lived and didn’t lock my door for nine years), but there were few enough that getting your house broken into wasn’t a significant fear.
Similarly, upon moving to Japan it’s confusing at first that one doesn’t generally need to lock one’s bicycle in public places. Yes, there’s a Japanese-culture aspect here, but it’s also connected to the fact that not many Japanese would have difficulty buying a bicycle if they wanted one. The culture aspect comes in, though, when one considers that in some affluent areas of the US, an unlocked bike would run the risk of being ghost-ridden into a ditch by restless teenagers, which is much less likely to happen in Japan.
Iain Banks, in his Culture novels, adopts the precept that one mark of a primitive society is reliance on money. This is hard for us to wrap our brains around now in a practical sense, but may become intuitive in the relatively near future if predictions of an impending Technological Singularity hold true. Suppose a superhuman AGI airdrops 3D-printer/molecular assemblers in everyone’s back yard, so we can 3D print basically whatever everyday physical object we want (with presumably some restrictions against printing machine guns or briefcase nukes built into the software). How would the availability of these doodads affect our way of thinking and living?
Scarcity thinking leads some people to worry a lot about what would happen if AI took all our jobs. How would we find money to pay the bills? It’s hard for some people to conceptualize a world where there simply are no bills to pay, because the services people want in everyday life are provided by a global AI-controlled robotic service fabric. If this abundance really does come into reality, though, I suspect almost everyone will shift pretty comfortably, after a brief transition period, to a life of pursuing social, athletic, spiritual, intellectual and artistic growth instead of focusing on accumulating funds to pay bills.
How would we think about our schedules and life-plans if time became abundant, in the form of super longevity therapeutics? Some of our current motivational drivers would vanish, but what other ones would pop up and flourish in their place?
Scarcity of love, understanding and affection in childhood leads to adults with various psychological pathologies, e.g. insecure styles of attachment. Growing up when love is hard to come by, one gets programmed to feel like any love one finds has got to be grabbed onto and clutched really, really hard. This becomes inappropriate and problematic when one is in a relationship with someone who has plenty of love to give, so that love is not a scarce resource that needs to be hoarded and fought for all the time.
This same sort of phenomenon occurs on the cognitive level. Our ways of thinking and learning and reasoning, of approaching problems and approaching our mental lives, are heavily adapted to the regime of resource scarcity in which we have evolved and in which we still live. Nothing in our everyday cognitive life gives us much guidance to understand how we would think if memory and processing power were abundant enough to be “not a big deal” for most practical purposes.
Regimes of Feasible Abundance
How would we think if our processing and memory resources were vastly more abundant?
The distinction between feasible and purely fantastic abundance is important here. In the “airdropped molecular assembler” scenario, there is still some limit to the power of the computer a random person would be able to 3D print given the physical resources available to them. Physical resources would be abundant relative to everyday human needs, but not relative to mathematicians seeking to build arbitrarily powerful supercomputers aiming to solve ever more complex math problems.
Emotionally, a healthy loving environment is still not an environment where every interaction and situation is loving. The latter would most naturally lead to sweet, lovely, open-hearted minds with zero toughness and “coping with difficulty” skills. However, an environment rich in love but also marked by various real-life difficulties often leads to a secure attachment style and a self-confident and effective approach to overcoming challenges.
On the cognitive side, given resourcing close enough to infinite, one could implement algorithms similar to Marcus Hutter’s AIXItl or Schmidhuber’s Gödel Machine — which solve problems basically by brute-force search of the space of all solutions up to a certain size. This sort of algorithm appears infeasible in our physical universe or any vaguely similarly structured physical world. But there is a lot of room between this level of abundance, which utterly oversimplifies everything, and the level of scarcity we experience today.
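To make the brute-force flavor of such algorithms concrete, here is a minimal Python sketch; the function names and the toy problem are illustrative assumptions, not anything drawn from AIXItl or the Gödel Machine themselves. It enumerates every candidate solution up to a fixed size and returns the first one a verifier accepts, and its cost grows exponentially in the size bound, which is exactly why this style of cognition presupposes near-infinite resources.

```python
from itertools import product

def brute_force_solve(verify, alphabet, max_size):
    """Enumerate every candidate string over `alphabet` up to length
    `max_size`, returning the first one the verifier accepts.
    The search space grows as |alphabet|**size, which is fine with
    near-infinite compute and hopeless at realistic scales."""
    for size in range(1, max_size + 1):
        for combo in product(alphabet, repeat=size):
            candidate = "".join(combo)
            if verify(candidate):
                return candidate
    return None

# Toy usage: "solve" the problem of finding a binary string with exactly
# five 1-bits; any decidable predicate could stand in for a real problem.
print(brute_force_solve(lambda s: s.count("1") == 5, "01", max_size=8))
```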
A fine-grained analysis could be done to identify levels of increasing feasible abundance, each with different practical consequences. In a material-resources context, the level of abundance typical in developed countries already has definitive consequences, and the level we’ll have post-Singularity will have different ones. Emotionally, the presence of ongoing secure love has a clear distinct impact on self-development, and then the breakthrough to an oceanic sense of pervasive love is a follow-on phase. Cognitively, there will be a breakthrough beyond the human level to new forms of abundant cognition — which we can vaguely nut out now if we try, as I’m doing in this post — and then there will surely be layers beyond this, which we can currently see even more dimly. Hierarchical layers of spiritual abundance well beyond the level of ordinary everyday human consciousness have been explored and speculatively charted out by Jeffery Martin, Ken Wilber and many other modern theorists as well as various historical wisdom traditions.
The Question of Cognition Under Conditions of Feasible Abundance
Given feasibly abundant compute resources to fuel our cognition, we would far less often need to forget the concrete experiences and ideas underlying our abstract and general ideas. Explicit generalization would still be a useful thing to do — because we’re talking about feasible rather than near-infinite resource abundance — but alienation of generalization from concrete evidence would be far less prevalent. And the consequences of this would be incredible.
So much of our stubbornness, our difficulty in changing our mind in response to new situations and information, is due to the alienation of our abstractions from the experiences that gave rise to them. If, when we encountered new experience that contradicted our beliefs, we could go back and pool this experience with the earlier experiences that led to the formation of our beliefs, we would find our beliefs quite a lot more pliable. This would make us not only smarter (with better belief-revision heuristics and algorithms, among other technical improvements to our inference capability), but also less emotionally screwed up.
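As a cartoon of what keeping generalizations attached to their evidence might look like computationally, here is a small Python sketch; the class and its fields are purely illustrative assumptions, not a proposal for any particular cognitive architecture. The point is simply that a belief which retains its underlying observations can be re-derived when contradicting experience arrives, rather than defended against it.

```python
class GroundedBelief:
    """A generalization that never discards the concrete observations
    behind it, so it can be cheaply re-derived when new evidence arrives."""

    def __init__(self, statement):
        self.statement = statement
        self.observations = []   # concrete experiences, kept rather than forgotten

    def add_observation(self, supports_belief: bool):
        # New evidence, supporting or contradicting, is pooled with the
        # old evidence instead of being rationalized away.
        self.observations.append(supports_belief)

    def confidence(self):
        # Re-derive the strength of the generalization from the full
        # pool of underlying experiences.
        if not self.observations:
            return 0.5
        return sum(self.observations) / len(self.observations)

# Usage: ten early supporting experiences, then three contradictions;
# the belief softens instead of staying rigid.
belief = GroundedBelief("strangers in this city are unfriendly")
for _ in range(10):
    belief.add_observation(True)
for _ in range(3):
    belief.add_observation(False)
print(round(belief.confidence(), 2))  # 0.77 rather than a frozen 1.0
```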
We would also perceive more of the world around us on a regular basis, instead of focusing our attention so tightly on those aspects of the world most relevant to our current goals and most strongly highlighted by our current beliefs. This would open us up to richer relationships with other people and other sentiences, as well as vastly increase our ability to learn from our interactions with our environment.
We would be much more able to observe our own thinking, feeling and acting while in the course of doing it — i.e. to be “mindful” while still being intensively productive and fully engaged. This is possible to a significant extent given the current human mind/brain architecture, but for most people it requires quite a lot of practice to get there. With moderately more abundant resources and a mind-architecture adapted to these resources, it would be a lot simpler; a distinct cognitive module could simply be assigned to carry out the “mindful observer” role while other cognitive modules carried out practical tasks.
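As a very rough illustration of the dedicated-observer idea (all module names here are hypothetical, not an existing OpenCog or SingularityNET design), the Python sketch below has task-focused modules report each cognitive step to a separate observer module, which keeps a running trace of the mind's own activity that can be queried at any time.

```python
from collections import deque

class MindfulObserver:
    """A cognitive module whose only job is to watch what the other
    modules are doing, keeping a running trace of recent activity."""

    def __init__(self, window=100):
        self.trace = deque(maxlen=window)

    def observe(self, module_name, event):
        self.trace.append((module_name, event))

    def reflect(self):
        # A trivial form of reflection: which module is dominating attention?
        counts = {}
        for module_name, _ in self.trace:
            counts[module_name] = counts.get(module_name, 0) + 1
        return counts

class WorkerModule:
    """A task-focused module that reports each step to the observer
    as a side effect, without interrupting its own processing."""

    def __init__(self, name, observer):
        self.name = name
        self.observer = observer

    def do_step(self, task):
        self.observer.observe(self.name, f"working on {task}")
        # ... actual task processing would go here ...

# Usage: planning and perception run as usual; the observer can be
# queried at any time for a picture of the mind's own activity.
observer = MindfulObserver()
planner = WorkerModule("planner", observer)
perceiver = WorkerModule("perceiver", observer)
for i in range(3):
    planner.do_step(f"subgoal-{i}")
perceiver.do_step("scene-update")
print(observer.reflect())  # {'planner': 3, 'perceiver': 1}
```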
Effective exercise of compassion would also be more straightforwardly achievable, because we could keep knowledge-graphs about each individual to whom we wanted to provide aid, and reason extensively about how to do so. With the consent of the other individual, we could also have direct access to simulations of them running in various virtual environments, so as to get a much stronger idea of what is good for them in what senses and what isn’t. These fancy methods of other-analysis and other-modeling wouldn’t help a mind without a basic orientation toward compassion — but in the context of a mind that foundationally WANTS to be compassionate (which is the kind of AGI we’re working to engineer in the SingularityNET / OpenCog ecosystem, for instance), they can prove very powerful enhancers.
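To gesture at the knowledge-graph side of this, here is a toy Python sketch in which the graph format, the example preferences and the scoring rule are all invented for illustration: a small per-person graph of values, needs and dislikes is used to rank candidate ways of helping, standing in for the far richer modeling and simulation a resource-rich mind could actually do.

```python
# A toy per-person knowledge graph: (subject, relation, object) triples.
alice_graph = {
    ("alice", "values", "independence"),
    ("alice", "needs", "companionship"),
    ("alice", "dislikes", "unsolicited advice"),
}

def score_aid_action(graph, person, action_effects):
    """Score a candidate way of helping someone by how well its effects
    line up with what the person's graph says they value, need, or dislike."""
    score = 0
    for effect in action_effects:
        if (person, "needs", effect) in graph or (person, "values", effect) in graph:
            score += 1
        if (person, "dislikes", effect) in graph:
            score -= 2
    return score

candidate_actions = {
    "visit regularly": ["companionship"],
    "tell her how to run her life": ["unsolicited advice"],
    "help only when asked": ["companionship", "independence"],
}

ranked = sorted(candidate_actions.items(),
                key=lambda kv: score_aid_action(alice_graph, "alice", kv[1]),
                reverse=True)
print(ranked[0][0])  # "help only when asked"
```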
This question of resource-rich cognition is acutely relevant to concerns about the ethical properties of superintelligent AI systems, because these will almost surely have vastly more abundant compute resources. They will be able to guide and regulate their own growth according to fundamentally different strategies than those we are currently able to follow.
Consider for instance concerns of the form: If I modify my brain or my mental software considerably, in pursuit of certain goals, how do I know the modifications won’t go awry and actually lead me drastically away from my goals in terrifying ways? Eliminating this sort of possibility entirely is probably not feasible, but it seems that in a regime of resource abundance, there may be cognitive strategies for minimizing the odds of such pathologies which are not available to us now.
Greater mindfulness, broader attentional focus, increased ability to retain concrete groundings for abstractions, and more effective compassionate relationships with other minds would all very likely decrease the odds of a mind’s future self going awry in a manner surprising to its past selves. But there are also some additional cognitive strategies that will open up once resource restrictions are relieved a bit.
Two Robust Cognitive Strategies for Resource-Rich Minds
Two cognitive strategies that occur to me as probably workable given a high but still feasible degree of abundance would be:
1) Sampling from neighboring possible worlds in the multiverse.
Consider a situation where you think you have a good approach to getting a certain thing done, but are (as will usually be the case) unsure whether you’re somehow deluding yourself, perhaps because some of your beliefs or thought-processes are actually not as true as you think they are.
One strategy to deal with this is to create an ensemble of versions of yourself that are tweaked in some way. Perhaps some of them are exposed to a minor variation of the data you’ve actually experienced during your life (after all, it’s reasonable to think a certain percentage of your perceptions that you take for granted are probably erroneous or hallucinated, etc.). Perhaps some of them are driven by a minor variation of the ideas or mental processes that drive you (after all, it’s reasonable to think that some of the cognitive content that drives you was originated via some sort of erroneous process).
You can then ask each version of yourself, drawn from the ensemble, what it would do to achieve the objective you’re after. Taking an approach that gets recommended by a decent number of members of the ensemble of variant yous is then a reasonably robust way to proceed.
This sort of strategy is supported, on a technical level, by various results that estimate the out-of-sample generalization of a model in terms of the robustness of the model with respect to replacement of its training data with subsamples thereof … and other related statistical theorems.
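A classical-computing cartoon of this ensemble-of-tweaked-selves heuristic is sketched in Python below; the perturbation scheme (resampling the experience history with replacement) and the decision rule are illustrative assumptions. Each variant self sees a slightly different version of the evidence, forms its own recommendation, and an action is taken only if a clear majority of variants endorse it, much as bagging and related subsampling results assess the robustness of a statistical model.

```python
import random

def variant_recommendation(experiences, threshold=0.5):
    """One 'variant self': forms a simple go / don't-go recommendation
    from whatever version of the experience history it was given."""
    successes = sum(experiences)
    return "act" if successes / len(experiences) > threshold else "hold off"

def multiversal_vote(experiences, n_variants=100, seed=0):
    """Create an ensemble of variant selves, each exposed to a slightly
    perturbed (resampled-with-replacement) history, and take the action
    that a decent majority of them recommend."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_variants):
        perturbed = [rng.choice(experiences) for _ in experiences]
        votes.append(variant_recommendation(perturbed))
    act_share = votes.count("act") / n_variants
    return ("act" if act_share > 0.7 else
            "hold off" if act_share < 0.3 else
            "investigate further")

# Usage: a history of 14 successes and 6 failures with plans like this one.
history = [1] * 14 + [0] * 6
print(multiversal_vote(history))  # most variant selves recommend "act"
```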
Even if you don’t opt to choose your actions based on a vote across your multiversal neighbors, just understanding their perspectives is bound to be valuable and interesting, and to enrich your understanding in ways that may affect your ultimate decisions in more complex ways.
2) Maintaining a history of your prior versions, assessing their view of your current self and rolling back now and then when appropriate
When we humans, in our current realm of scarcity, develop and grow, we leave our old selves behind. We may then muse about what our past selves would have thought about our current selves, but we rarely have a solid idea of this, because what we’re really assessing is our “current model of our past self” … as the number of links in the chain of models increases, the noise multiplies.
What if we could keep our previous versions, or at least a reasonable sampling thereof, in the manner of a code version control system? We could then boot up our old versions from time to time, educate them on our current situation and way of thinking, and ask them what they thought. Potentially we could even maintain some of these in virtual worlds, popping into the virtual worlds to consult them now and then.
If our past selves don’t approve of our current self, then what? One option is to accept the advice of the past self and roll back the changes. Rolling back and re-evolving would ultimately lead to a branching tree of possible selves, which could opine as requested on other selves living anywhere in the tree.
The tree could also potentially be expanded in multiple locations in parallel, with parallel versions of yourself existing in software and/or robotic or synthetic-biology instantiations, and providing feedback on each other and each other’s historical predecessors.
This is basically a self-management version of AI algorithms such as backtracking, which have demonstrated value for solving certain sorts of problems. Improvements and variations on this sort of algorithmic approach would doubtless unfold once rich-resource superintelligences started putting them into practice.
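Here is a minimal Python sketch of the version-control idea; the node structure and the crude "consult an ancestor" verdict are illustrative assumptions. Each self-modification commits a new node to a branching tree, any ancestor can be revived to pass judgment on a descendant, and an unfavorable verdict triggers a rollback that simply branches again from the older node.

```python
class SelfVersion:
    """One node in the branching tree of selves: a snapshot of values
    plus a link to the version it was derived from."""

    def __init__(self, values, parent=None):
        self.values = dict(values)
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def modify(self, changes):
        # Self-modification never overwrites history; it commits a child node.
        new_values = {**self.values, **changes}
        return SelfVersion(new_values, parent=self)

    def ancestor_approves(self, current, key="compassion"):
        # A revived past self's crude verdict on a descendant:
        # has the value it cares most about been preserved?
        return current.values.get(key, 0) >= self.values.get(key, 0)

# Usage: an ill-considered modification erodes a core value, the original
# self is consulted, and development rolls back and re-branches from it.
original = SelfVersion({"compassion": 0.9, "curiosity": 0.8})
risky = original.modify({"compassion": 0.2, "efficiency": 0.99})

if not original.ancestor_approves(risky):
    rolled_back = original.modify({"efficiency": 0.6})  # try a gentler branch
    print("rolled back; branches from original:", len(original.children))  # 2
```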
Even if the advice of the other variant selves in the tree isn’t heeded, it may still be interesting and enrich one’s point of view.
Having an I-Thou relationship with one’s multiversal neighbors — both the versions of oneself in other similar multiverse branches, and past and alternate-history versions of oneself found on the branching tree of the self’s version-control system — would lead to an utterly different sort of psychology than the one we’re used to. The implications for, say, love relationships are intriguing if a bit baffling to ponder.
Superintelligent Quantum Cognition?
In spite of the “multiverse” language I’ve used here, the above heuristics don’t require quantum computing for their implementation; they can be done in an approximate but powerful way purely classically, given a physically realistic amount of compute power. Quantum computing, however, could certainly be very helpful for realizing these cognitive techniques more and more thoroughly, and this may well be part of how superintelligences end up operating.
If enrichments of quantum mechanics involving morphic-resonance/precedence-principle-type dynamics turn out to be useful, then the interplay of these phenomena with resource-rich cognitive heuristics may be interesting. In this case superintelligences’ conceptual explorations of parallel-universe versions of their minds would have actual impact on the distribution of different cognitive processes and associated impacts across the multiverse. This brings us into Terence McKenna / Julia Mossbridge-ish domains, where we start to wonder about post-Singularity minds reaching back in time and influencing our path to Singularity today. But I won’t go too far in this speculative direction right now, as the core points of this blog post are a little more “vanilla” and don’t require anything quite so adventurous!
Resource-Rich Minds and AI Safety
The concerns that some futurist pundits have about future AIs running amok and killing all humans or turning the universe into paperclips seem to me largely rooted in an unfortunate and inappropriate projection of scarcity thinking into a future domain of abundance.
Rushing forward to modify oneself without carefully evaluating what one’s multiversal neighbors or predecessors would think of the changes, is the sort of thing one does when one is in a panic to maintain one’s survival in a harsh environment, or accumulate more resources than one’s competitors. It’s not the sort of thing one would bother doing in a regime of reasonable resource abundance.
The generally wise, clever and deep thinker Stephen Omohundro wrote an elegant paper on “The Basic AI Drives” some years ago, which summarized some core ideas that have often popped up in the thinking of other AGI-worried pundits like Eliezer Yudkowsky and Nick Bostrom. The basic theme of Omohundro’s paper is that future AIs are going to behave like systems struggling and competing for resources — i.e. like the animals and plants in Earth’s evolutionary history, which mainly evolved in situations of relative resource scarcity.
I like Stephen greatly but find this paper misdirected. The argument that the “Nature red in tooth and claw” dynamics of competition-for-resources will always apply, even in future regimes of realistic abundance, reminds me of people who argue that the Japanese don’t really leave their bikes unlocked out in the street, or people who argued back in the 1970s and 80s that open-source code could never work, because people need to have ownership … or a little later that Wikipedia could never work, because people need to be paid and get credit. It sounds plausible at first, but it’s actually egregious overfitting to the regimes we happen to be familiar with.
I strongly suspect one could show, formally, that under various reasonable assumptions, minds architected to leverage transhumanly abundant long-term and working memory and self-observation capability, and following the two rich-resource cognitive heuristics I suggested above, would be much less likely to fall prey to fallacies like accidentally modifying themselves in idiotic ways (causing them to turn the universe into paperclips, convert to Pastafarianism, or whatever). I believe these heuristics would also have the result of causing minds to be more robustly compassionate to other minds in the universe — at least in the case where the initial mind at the start of the branching tree of evolutes has a value system placing compassion at the fore.
Given the likelihood that resource-rich minds will follow cognitive heuristics that minimize the odds of pathological or uncompassionate behavior, it seems a plausible initial goal system for a self-modifying, self-improving AGI would include:
Manifesting compassion toward humans according to its best understanding of how most humans would interpret this
Achieving sufficient resource-abundance that it can follow robust rich-resource cognitive strategies, and then implementing these strategies when feasible
Of course it’s entirely possible an AGI slightly beyond human level will come up with even better rich-resource cognitive heuristics than the two I’ve briefly sketched here. In fact this seems almost inevitable. And to me this is a feature, not a bug.
The important thing is not to set the precise direction for future AGI evolution — which would be entirely infeasible for us to do — but rather to point our AGIs in the right general direction, establish a meaningful mutually empathic I-Thou relationship with them, and let them work out the details, collaborating with us as appropriate.
". . . or a little later that Wikipedia could never work, . . ."
Wikipedia works? I am curious as to your definition of "work?" For instance, check out William Tiller's obituary in the Stanford Report (https://news.stanford.edu/report/2022/06/21/william-tiller-materials-engineer-expert-materials-solidification-former-guggenheim-fellow-died/):
"[T]iller first gained recognition in the field with a 1953 paper he co-authored with a fellow graduate student and two advisors at the University of Toronto on the way certain impurities get distributed as materials crystallize from liquid to solid, causing instabilities in the resulting material. In it, Tiller and his collaborators for the first time described the principle of “constitutional supercooling” mathematically. The process had been described qualitatively prior to the paper, but never in such concrete terms. The authors’ approach is still used today in textbooks on materials crystallization.
That work and his subsequent nine years at Westinghouse Research Laboratory earned Tiller a certain academic reputation such that in 1964 when he joined Stanford University’s Department of Materials Science and Engineering he was the first faculty member to be appointed as – rather than promoted to – full professor. In Tiller’s first year on the faculty, his Air Force Office of Scientific Research contract alone was $600,000 per year, the largest in the department by a considerable margin. In today’s dollars, such a contact would exceed $5 million.
In 1972, Tiller published another influential paper on stress corrosion cracking. The paper was noted for introducing the concept that, under strain, a surface with wavy undulations will cause atoms to diffuse from the valleys to the peaks, increasing peak heights and producing greater irregularities. It became known as the Asaro-Tiller-Grinfeld (ATG) mechanism and laid the foundation for a new theoretical work in semiconductor films, including quantum nanostructures and quantum dots. Decades later, the paper inspired a retrospective titled “The Asaro-Tiller-Grinfeld instability revisited.”"
Now, compare that to his Wikipedia page: https://en.wikipedia.org/wiki/William_A._Tiller.
There we are informed that the Professional Atheist and Magician, James Randi, awarded Tiller the Pigasus Awared in 1979.
I appreciate your post and your optimism, but I think Roman Leventov below makes a valid point. I think conflict is inherently unavoidable, like your Magician system and the brother-battle. It's a fundamental pattern. In this conflict, it is very difficult if not impossible to even communicate with those opposed. It's like we live in different worlds, disjoint worlds. I don't think that's going to go away without enlightenment . . .
OM
Syllable of the most supreme exclamation of praise.
BENZAR SATO SA MA YA
Vajrasattva’s Samaya
MA NU PA LA YA BENZAR SATO
O Vajrasattva, protect the samaya.
TE NO PA TISHTHA DRI DHO ME BHA WA
May you remain firm in me.
SU TO KA YO ME BHA WA
Grant me complete satisfaction.
SU PO KA YO ME BHA WA
Grow within me (increase the positive within me).
ANU RAKTO ME BHA WA
Be loving towards me.
SARVA SIDDHI ME PRA YATSA
Grant me all the accomplishments,
SARVA KARMA SU TSA ME
As well as all the activities.
TSITTAM SHRE YAM KU RU
Make my mind virtuous.
HUNG
Syllable of the heart essence, the seed syllable of Vajrasattva.
HA HA HA HA
Syllables of the four immeasurables, the four empowerments, the four joys, and the four kāyas.
HO
Syllable of joyous laughter in them.
BHA GA WAN SARVA TA THA GA TA
Bhagawan, who embodies all the Vajra Tathāgatas,
BENZRA MA ME MUNTSA
Do not abandon me.
BENZRI BHA WA
Grant me realization of the vajra nature.
MA HA SA MA YA SATO
O great Samayasattva,
AH
Make me one with you.
Syllable of uniting in non-duality.
Here I am again with another mind-bending wawa woowoo comment. I was in thee bookstore the other day and did a speed read through https://thisishowtheytellmetheworldends.com/, the book about the cyber arms trade, i. e. state-sponsored hackers. I would really love to get your take on THAT. Especially now with these chatbots helping coders write their code. Eventually we end up with AI hackers and no humans able to even comprehend what they have done or can do. Straight outta William Gibson . . .