When one lives in a regime of scarcity in regard to some critical resource, it’s next to impossible to avoid one’s whole way of thinking and living becoming shaped around this scarcity. One has a hard time foreseeing how one would do things in a regime of abundance in that resource, and conceptualizing the differences between feasible and unrealistic amounts of abundance.
This holds for familiar forms of material scarcity (food, housing, toys, etc.), but also for psychological scarcity (scarcity of love, friendship…) and — the main point at hand here — scarcity of the compute resources needed to power cognition. Minds operating with a relative abundance of compute resources could potentially use cognitive methodologies quite different from those we’re used to, obsoleting some of the dilemmas and pathologies that concern us both in everyday life and in our thinking about the future of AI.
Thinking about the ways cognition will improve as resource restrictions are lifted militates toward an optimistic rather than fearful perspective on the prospect of an AGI-powered future.
Scarcity Thinking / Abundance Thinking
I had the experience of living in US urban ghetto environments for a while, and then relocating to more affluent areas without a culture of everyday theft of physical objects. It took a while to adapt psychologically to the fact that locking the door of my house was no longer necessary. It’s not that there were no poor people around in, say, Rockville Maryland (where I lived and didn’t lock my door for 9 yrs), but there were few enough that getting your house broken into wasn’t a significant fear.
Similarly, upon moving to Japan it’s confusing at first that one doesn’t generally need to lock one’s bicycle in public places. Yes, there’s a Japanese-culture aspect here, but it’s also connected to the fact that not many Japanese would have difficulty buying a bicycle if they wanted one. The culture aspect comes in, though, when one considers that in some affluent areas of the US, an unlocked bike would have the risk of being ghost-ridden into a ditch by restless teenagers, which is much less likely to happen in Japan.
Iain M. Banks, in his Culture novels, adopts the precept that one mark of a primitive society is reliance on money. This is hard for us to wrap our brains around now in a practical sense, but may become intuitive in the relatively near future if predictions of an impending Technological Singularity hold true. Suppose a superhuman AGI airdrops 3D-printer/molecular-assembler units in everyone’s back yard, so we can 3D print basically whatever everyday physical object we want (with, presumably, some restrictions against printing machine guns or briefcase nukes built into the software). How would the availability of these doodads affect our way of thinking and living?
Scarcity thinking leads some people to worry a lot about what would happen if AI took all our jobs. How would we find money to pay the bills? It’s hard for some people to conceptualize a world where there simply are no bills to pay, because the services people want in everyday life are provided by a global AI-controlled robotic service fabric. If this abundance really does come into reality, though, I suspect almost everyone will shift pretty comfortably, after a brief transition period, to a life of pursuing social, athletic, spiritual, intellectual and artistic growth instead of focusing on accumulating funds to pay bills.
How would we think about our schedules and life-plans if time became abundant, in the form of super longevity therapeutics? Some of our current motivational drivers would vanish, but what other ones would pop up and flourish in their place?
Scarcity of love, understanding and affection in childhood leads to adults with various psychological pathologies, e.g. insecure styles of attachment. Growing up when love is hard to come by, one gets programmed to feel like any love one finds has got to be grabbed onto and clutched really, really hard. Which is inappropriate and problematic when in a relationship with someone who has plenty of love for you, so that love is not a scarce resource that needs to be hoarded and fought for all the time.
This same sort of phenomenon occurs on the cognitive level. Our ways of thinking and learning and reasoning, of approaching problems and approaching our mental lives, are heavily adapted to the regime of resource scarcity in which we have evolved and in which we still live. Nothing in our everyday cognitive life gives us much guidance to understand how we would think if memory and processing power were abundant enough to be “not a big deal” for most practical purposes.
Regimes of Feasible Abundance
How would we think if our processing and memory resources were vastly more abundant?
The distinction between feasible and purely fantastic abundance is important here. In the “airdropped molecular assembler” scenario, there is still some limit to the power of the computer a random person would be able to 3D print given the physical resources available to them. Physical resources would be abundant relative to everyday human needs, but not relative to mathematicians seeking to build arbitrarily powerful supercomputers aiming to solve ever more complex math problems.
Emotionally, a healthy loving environment is still not an environment where every interaction and situation is loving. The latter would most naturally lead to sweet, lovely, open-hearted minds with zero toughness and “coping with difficulty” skills. However, an environment rich in love but also marked by various real-life difficulties often leads to a secure attachment style and a self-confident and effective approach to overcoming challenges.
On the cognitive side, given resourcing close enough to infinite, one could implement algorithms similar to Marcus Hutter’s AIXItl or Schmidhuber’s Gödel Machine — which solve problems basically by brute-force search of the space of all solutions up to a certain size. This sort of algorithm appears infeasible in our physical universe or any vaguely similarly structured physical world. But there is a lot of room between this level of abundance, which utterly oversimplifies everything, and the level of scarcity we experience today.
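To make the flavor of this concrete, here is a toy sketch of brute-force program search in the spirit of AIXItl or the Gödel Machine: enumerate every candidate program up to a size bound and return the first one consistent with the evidence. The “programs” here are tiny postfix arithmetic expressions — a deliberately miniature stand-in for a real program space, chosen for illustration rather than drawn from Hutter’s or Schmidhuber’s actual formalisms.

```python
from itertools import product

TOKENS = ["x", "1", "2", "+", "*"]  # a tiny postfix (RPN) instruction set

def run(program, x):
    """Evaluate a postfix program on input x; return None if malformed."""
    stack = []
    for tok in program:
        if tok == "x":
            stack.append(x)
        elif tok.isdigit():
            stack.append(int(tok))
        else:
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
    return stack[0] if len(stack) == 1 else None

def brute_force_search(examples, max_len=5):
    """Return a shortest program consistent with all (x, y) examples."""
    for length in range(1, max_len + 1):
        for program in product(TOKENS, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# Brute-force a program computing f(x) = 2x + 1 from three examples.
prog = brute_force_search([(0, 1), (1, 3), (5, 11)])
print(prog)
```

The search space grows exponentially with program length, which is exactly why this style of solver demands near-infinite resources in the general case.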
A fine-grained analysis could be done to identify levels of increasing feasible abundance, each with different practical consequences. In a material-resources context, the level of abundance typical in developed countries already has definitive consequences, and the level we’ll have post-Singularity will have different ones. Emotionally, the presence of ongoing secure love has a clear distinct impact on self-development, and then the breakthrough to an oceanic sense of pervasive love is a follow-on phase. Cognitively, there will be a breakthrough beyond the human level to new forms of abundant cognition — which we can vaguely nut out now if we try, as I’m doing in this post — and then there will surely be layers beyond this, which we can currently see even more dimly. Hierarchical layers of spiritual abundance well beyond the level of ordinary everyday human consciousness have been explored and speculatively charted out by Jeffery Martin, Ken Wilber and many other modern theorists as well as various historical wisdom traditions.
The Question of Cognition Under Conditions of Feasible Abundance
Given feasibly abundant compute resources to fuel our cognition, we would far less often need to forget the concrete experiences and ideas underlying our abstract and general ideas. Explicit generalization would still be a useful thing to do — because we’re talking about feasible rather than near-infinite resource abundance — but alienation of generalization from concrete evidence would be far less prevalent. And the consequences of this would be incredible.
So much of our stubbornness, our difficulty in changing our mind in response to new situations and information, is due to the alienation of our abstractions from the experiences that gave rise to them. If, when we found new experience that contradicted our beliefs, we could go back and pool this experience with the earlier experiences that led to the formation of our beliefs, we would find our beliefs quite a lot more pliable. This would make us not only smarter (with better belief-revision heuristics and algorithms, among other technical improvements to our inference capability), but less emotionally screwed-up.
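A minimal sketch (my own illustration, not anything from cognitive science) of why retaining the concrete evidence behind an abstraction makes beliefs more pliable: a “scarce” agent compresses its early history into a frozen conclusion, while an “abundant” agent keeps every observation and re-derives its belief whenever new experience arrives.

```python
class ScarceAgent:
    """Forms a belief once, then discards the evidence behind it."""
    def __init__(self, early_observations):
        # The belief is frozen as the early empirical frequency.
        self.belief = sum(early_observations) / len(early_observations)
    def observe(self, new_observations):
        pass  # the grounding is gone, so the abstraction cannot move

class AbundantAgent:
    """Keeps all raw evidence; the belief is always recomputed from it."""
    def __init__(self, early_observations):
        self.evidence = list(early_observations)
    def observe(self, new_observations):
        self.evidence.extend(new_observations)  # pool old and new experience
    @property
    def belief(self):
        return sum(self.evidence) / len(self.evidence)

early = [1, 1, 1, 0]   # early life: the world mostly says "yes"
later = [0] * 8        # later evidence contradicts that
scarce, abundant = ScarceAgent(early), AbundantAgent(early)
scarce.observe(later)
abundant.observe(later)
print(scarce.belief)    # stuck at 0.75
print(abundant.belief)  # revised down to 0.25
```

The scarce agent is “stubborn” by construction; the abundant one revises smoothly because the old experiences are still there to be pooled with the new.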
We would also perceive more of the world around us on a regular basis, instead of focusing our attention so tightly on those aspects of the world most relevant to our current goals and most strongly highlighted by our current beliefs. This would open us up to richer relationships with other people and other sentiences, as well as vastly increase our ability to learn from our interactions with our environment.
We would be much more able to observe our own thinking, feeling and acting while in the course of doing it — i.e. to be “mindful” while still being intensively productive and fully engaged. This is possible to a significant extent given the current human mind/brain architecture, but for most people it requires quite a lot of practice to get there. With moderately more abundant resources and a mind-architecture adapted to these resources, it would be a lot simpler; a distinct cognitive module could simply be assigned to carry out the “mindful observer” role while other cognitive modules carried out practical tasks.
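As a rough sketch of “mindfulness as a dedicated module” (all the class and method names here are illustrative, not drawn from any real cognitive architecture): one component does the practical work, while a separate observer module is wired in to passively record every step without interrupting it.

```python
class MindfulObserver:
    """Passively records what the working modules are doing."""
    def __init__(self):
        self.trace = []
    def notice(self, module, action, result):
        self.trace.append((module, action, result))

class WorkerModule:
    """Carries out practical tasks while reporting each one to the observer."""
    def __init__(self, name, observer):
        self.name, self.observer = name, observer
    def perform(self, action, fn, *args):
        result = fn(*args)                               # do the practical task
        self.observer.notice(self.name, action, result)  # ...while being observed
        return result

observer = MindfulObserver()
planner = WorkerModule("planner", observer)
planner.perform("add", lambda a, b: a + b, 2, 3)
planner.perform("scale", lambda x: x * 10, 4)
for entry in observer.trace:
    print(entry)
```

The point of the separation is that the observer never blocks or modifies the work; it simply accumulates a trace that the mind can reflect on at leisure.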
Effective exercise of compassion would also be more straightforwardly achievable, because we could keep knowledge-graphs about each individual to whom we wanted to provide aid, and reason extensively about how to do so. With the consent of the other individual, we could also have direct access to simulations of them running in various virtual environments, so as to get a much stronger idea of what is good for them in what senses and what isn’t. These fancy methods of other-analysis and other-modeling wouldn’t help a mind without a basic orientation toward compassion — but in the context of a mind that foundationally WANTS to be compassionate (which is the kind of AGI we’re working to engineer in the SingularityNET / OpenCog ecosystem, for instance), they can prove very powerful enhancers.
This question of resource-rich cognition is acutely relevant to concerns about the ethical properties of superintelligent AI systems, because these will almost surely have vastly more abundant compute resources. They will be able to guide and regulate their own growth according to fundamentally different strategies than those we are currently able to follow.
Consider for instance concerns of the form: If I modify my brain or my mental software considerably, in pursuit of certain goals, how do I know the modifications won’t go awry and actually lead me drastically away from my goals in terrifying ways? Eliminating this sort of possibility entirely is probably not feasible, but it seems that in a regime of resource abundance, there may be cognitive strategies of minimizing the odds of such pathologies, which are not available to us now.
Greater mindfulness, broader attentional focus and increased ability to retain concrete groundings for abstractions, and more effective compassionate relationships with other minds would all very likely decrease the odds of a mind’s future-self going awry in a manner surprising to its past selves. But there are also some additional cognitive strategies that will open up once resource restrictions are relieved a bit.
Two Robust Cognitive Strategies for Resource-Rich Minds
Two cognitive strategies that occur to me as probably workable given a high but still feasible degree of abundance would be:
1) Sampling from neighboring possible worlds in the multiverse.
Consider a situation where you think you have a good approach to getting a certain thing done, but are (as will usually be the case) unsure whether you’re somehow deluding yourself, perhaps because some of your beliefs or thought-processes are actually not as true as you think they are.
One strategy to deal with this is to create an ensemble of versions of yourself that are tweaked in some way. Perhaps some of them are exposed to a minor variation of the data you’ve actually experienced during your life (after all, it’s reasonable to think a certain percentage of your perceptions that you take for granted are probably erroneous or hallucinated, etc.). Perhaps some of them are driven by a minor variation of the ideas or mental processes that drive you (after all, it’s reasonable to think that some of the cognitive content that drives you was originated via some sort of erroneous process).
You can then ask each version of yourself, drawn from the ensemble, what it would do to achieve the objective you’re after. Adopting a course of action that gets recommended by a decent number of the variant yous in the ensemble is a reasonably robust strategy.
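A minimal sketch of this “sampling from neighboring possible worlds” heuristic (the decision rule — thresholding a noisy measurement — is purely illustrative, and the numbers are made up): build an ensemble of variant selves, each of which saw a slightly perturbed version of your evidence, ask each for a recommendation, and act only on what a decent fraction of the ensemble agrees on.

```python
import random
from collections import Counter

def recommend(observations, threshold=0.5):
    """A 'self' recommends acting iff its evidence averages above threshold."""
    return "act" if sum(observations) / len(observations) > threshold else "wait"

def ensemble_vote(observations, n_variants=200, noise=0.1, seed=0):
    """Poll an ensemble of variant selves, each with perturbed evidence."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_variants):
        # Each variant saw slightly different data, modeling the chance
        # that some of your perceptions were erroneous or hallucinated.
        perturbed = [x + rng.gauss(0, noise) for x in observations]
        votes[recommend(perturbed)] += 1
    return votes

votes = ensemble_vote([0.55, 0.62, 0.58, 0.49, 0.71])
print(votes.most_common(1)[0][0])  # the ensemble-backed choice
```

If the recommendation survives many small perturbations of your evidence, it is far less likely to be an artifact of some particular delusion or measurement error.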
This sort of strategy is supported, on a technical level, by various results that estimate the out-of-sample generalization of a model in terms of the robustness of the model with respect to replacement of its training data with subsamples thereof … and other related statistical theorems.
Even if you don’t opt to choose your actions based on a vote across your multiversal neighbors, just understanding their perspectives is bound to be valuable and interesting, and enrich your understanding in ways that may affect your ultimate decisions in more complex ways.
2) Maintaining a history of your prior versions, assessing their view of your current self and rolling back now and then when appropriate
When we humans, in our current realm of scarcity, develop and grow, we leave our old selves behind. We may then muse about what our past selves would have thought about our current selves, but we rarely have a solid idea of this, because what we’re really assessing is our “current model of our past self” … as the number of links in the chain of models increases, the noise multiplies.
What if we could keep our previous versions, or at least a reasonable sampling thereof, in the manner of a code version control system? We could then boot up our old versions from time to time and educate them on our current situation and way of thinking and ask them what they thought. Potentially we could even maintain some of these in virtual worlds, popping into the virtual worlds to consult them now and then.
If our past selves don’t approve of our current self, then what? One option is to accept the advice of the past self and roll back the changes. Rolling back and re-evolving would ultimately lead to a branching tree of possible selves, which could opine as requested on other selves living anywhere in the tree.
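A minimal sketch of this version-tree idea (the value snapshots and the drift-based approval rule are placeholders of my own invention): each self-modification is committed as a child node, an old version can be “booted up” and asked whether it approves of a later self, and a disapproved change can be rolled back by branching again from an ancestor.

```python
class SelfVersion:
    """One node in the branching tree of a mind's version history."""
    def __init__(self, values, parent=None):
        self.values = values      # a snapshot of this self's value system
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def commit(self, new_values):
        """Record a self-modification as a new child version."""
        return SelfVersion(new_values, parent=self)

    def approves(self, other):
        """A booted-up past self checks how far a later self has drifted."""
        drift = sum(abs(other.values[k] - v) for k, v in self.values.items())
        return drift < 0.5        # illustrative tolerance, nothing more

root = SelfVersion({"compassion": 0.9, "curiosity": 0.8})
v1 = root.commit({"compassion": 0.85, "curiosity": 0.9})
v2 = v1.commit({"compassion": 0.2, "curiosity": 0.95})  # a change gone awry

if not root.approves(v2):
    # Roll back: re-evolve from v1 instead, creating a branch in the tree.
    v2_alt = v1.commit({"compassion": 0.88, "curiosity": 0.92})

print(root.approves(v2), len(v1.children))
```

Nothing is ever deleted: the disapproved v2 remains on its branch, available for consultation, while development continues along the new branch from v1.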
The tree could also potentially be expanded in multiple locations in parallel, with parallel versions of yourself existing in software and/or robotic or synthetic-biology instantiations, and providing feedback on each other and each others’ historical predecessors.
This is basically a self-management version of AI algorithms such as backtracking, which have demonstrated value for solving certain sorts of problems. Improvements and variations on this sort of algorithmic approach would doubtless unfold once rich-resource superintelligences started putting them into practice.
Even if the advice of the other variant selves in the tree isn’t heeded, it may still be interesting and enrich one’s point of view.
Having an I-Thou relationship with one’s multiversal neighbors — both the versions of oneself in other similar multiverse branches, and past and alternate-history versions of oneself found on the branching tree of the self’s version-control system — would lead to an utterly different sort of psychology than the one we’re used to. The implications for, say, love relationships are intriguing if a bit baffling to ponder.
Superintelligent Quantum Cognition?
In spite of the “multiverse” language I’ve used here, the above heuristics don’t require quantum computing for their implementation; they can be done in an approximative but powerful way purely classically, given a physically realistic amount of compute power. Quantum computing, however, could certainly be very helpful for realizing these cognitive techniques more thoroughly, and this may well be part of how superintelligences end up operating.
If enrichments of quantum mechanics involving morphic-resonance/precedence-principle type dynamics turn out to be useful, then the interplay of these phenomena with resource-rich cognitive heuristics may be interesting. In this case superintelligences’ conceptual explorations of parallel-universe versions of their minds would have actual impact on the distribution of different cognitive processes and associated impacts across the multiverse. This brings us into Terence McKenna / Julia Mossbridge-ish domains, where we start to wonder about post-Singularity minds reaching back in time and influencing our path to Singularity today. But I won’t go too far in this speculative direction right now, as the core points of this blog post are a little more “vanilla” and don’t require anything quite so adventurous!
Resource-Rich Minds and AI Safety
The concerns that some futurist pundits have about future AIs running amok and killing all humans or turning the universe into paperclips, seem to me largely rooted in an unfortunate and inappropriate projection of scarcity thinking into a future domain of abundance.
Rushing forward to modify oneself without carefully evaluating what one’s multiversal neighbors or predecessors would think of the changes, is the sort of thing one does when one is in a panic to maintain one’s survival in a harsh environment, or accumulate more resources than one’s competitors. It’s not the sort of thing one would bother doing in a regime of reasonable resource abundance.
The generally wise, clever and deep thinker Stephen Omohundro wrote an elegant paper on “The Basic AI Drives” some years ago, which summarized some core ideas that have often popped up in the thinking of other AGI-worried pundits like Eliezer Yudkowsky and Nick Bostrom. The basic theme of Omohundro’s paper is that future AIs are going to behave like systems struggling and competing for resources — i.e. like the animals and plants in Earth’s evolutionary history, which mainly evolved in situations of relative resource scarcity.
I like Stephen greatly but find this paper misdirected. The argument that the “Nature red in tooth and claw” dynamics of competition-for-resources will always apply even in future regimes of realistic abundance reminds me of people who argue that Japanese don’t really leave their bikes unlocked out in the street, or people who argued back in the 1970s and 80s that open-source code could never work, because people need to have ownership … or a little later that Wikipedia could never work, because people need to be paid and get credit. It sounds plausible at first, but it’s actually egregious overfitting to the regimes we happen to be familiar with.
I strongly suspect one could show, formally, that under various reasonable assumptions, minds architected to leverage transhumanly-abundant long-term and working memory and self-observation capability, and following the two rich-resource cognitive heuristics I suggested above, would be much less likely to fall prey to fallacies like accidentally modifying themselves in idiotic ways (causing them to turn the universe into paperclips, convert to Pastafarianism, or whatever). I believe these heuristics would also have the result of causing minds to be more robustly compassionate to other minds in the universe — at least in the case where the initial mind at the start of the branching tree of evolutes has a value system placing compassion at the fore.
Given the likelihood that resource-rich minds will follow cognitive heuristics that minimize the odds of pathological or uncompassionate behavior, it seems a plausible initial goal system for a self-modifying, self-improving AGI would include:
1) Manifesting compassion toward humans according to its best understanding of how most humans would interpret this
2) Achieving sufficient resource-abundance that it can follow robust rich-resource cognitive strategies, and then implementing these strategies when feasible
Of course it’s entirely possible an AGI slightly beyond human level will come up with even better rich-resource cognitive heuristics than the two I’ve briefly sketched here. In fact this seems almost inevitable. And to me this is a feature, not a bug.
The important thing is not to set the precise direction for future AGI evolution — which would be entirely infeasible for us to do — but rather to point our AGIs in the right general direction, establish a meaningful mutually empathic I-Thou relationship with them, and let them work out the details, collaborating with us as appropriate.