Pain and death are two aspects of human life that most of us don’t appreciate very much … but we tend to put up with as gracefully as possible because we consider them inevitable.
Now it turns out that death is probably not so inevitable after all. Human superlongevity is most likely achievable — and as Tipler has pointed out in The Physics of Immortality, even the Big Crunch at the end of the current physical universe need not necessarily crunch us up if we carry out appropriate universe-scale re-engineering.
But what about pain? Common sense tells us that pain is necessary to signal us when there is danger to our physical or mental organisms — yet the existence of people with conditions like pain asymbolia gives one pause. These folks seem to exist OK in human society without ever feeling the bite of pain to any significant degree. So why do the rest of us have to?
Humanity would not have evolved without death clearing out earlier species, nor without pain working alongside pleasure to incentivize individual behavior that benefits the species. But from a transhumanist view, looking at how to engineer the next steps in human and posthuman evolution, there is no need to replicate or perpetuate the specific steps that got us where we are right now.
It’s clear that superhuman superminds won’t need to die, at least not on the sort of rapid-fire schedule typical for biological organisms, and maybe not ever. But it’s less obvious whether superhuman superminds will need to experience pain.
The major question regarding pain and post-Singularity superhuman superminds seems to be: In what sense is pain necessary or highly useful for the survival and flourishing of superhuman superminds?
Supposing it’s viable to engineer posthuman or upgraded-human mind/bodies that avoid pain in the way that folks with pain asymbolia do — or some other interestingly different way — is this actually a desirable thing to do? What gotchas may arise in such a pursuit?
For rational, reflectively self-conscious minds, is pain at all desirable? Is there something intrinsically important to be gained from feeling the bite of pain?
What would be the follow-on benefits of creating generally intelligent minds with no, or relatively minimal, pain experience? Would the absence of significant degrees of chronic or acute pain have other positive or negative psychological or social impacts?
Digging deep into the relationship between pain and various aspects of cognitive systems, one ultimately comes to some fairly optimistic conclusions: Superhuman superminds, if effectively architected, shouldn’t need to experience pain nearly so much as we do. Pain, like death, should be substantially even if not wholly eliminable for post-Singularity minds. While my life in human form is basically pretty good anyway, still, I’m definitely looking forward to it!
Nonattachment and Pain Minimization
The first conclusion I’ll work toward in this lengthy post is relatively unadventurous from a Buddhist psychology perspective: If we create posthuman minds or AGI minds that display minimal “attachment” to their prior states, ideas, beliefs and experiences, these minds will also experience relatively little (even if not zero) suffering. And the suffering they do experience will likely be “not minded”, in the same way that a person who likes spicy food doesn’t mind the painful zing of the capsaicin on their mouth and tongue.
When we eat tasty spicy food, the pain of the spice is there but it’s an intrinsic part of an overall pleasurable and desirable experience. Of course if the spice level is upped too much then this integration of pain into enjoyment is lost — but the well-engineered supermind can self-tune its parameters not to experience real-world life-pains as “too much” in this sense, partly because of its ability to experience the joy in the whole systems encompassing the pains it encounters.
And then the magic …
The next conclusion/suggestion I’ll wind and wend toward here is a little more adventurous ….
Minimization of attachment decreases fear of pain and suffering — because pain encountered isn’t going to be clung onto indefinitely via getting caught up in feedback loops of self-perpetuating self-infliction. This decrease of fear opens up minds to I-Thou experiences, in which they have full and intense Second Person experiences of the joys and sufferings of other minds. And here is where some mathe-magic occurs…
When a number of minds come together, via mutual Second Person experience, their diverse pains and joys come together into a decentralized and distributed but communally shared pool of feelings. And what often happens here is something amazing:
The pains originating in the different members of the collective mindplex to a large degree cancel each other out (“destructive interference” in the logic of waveforms)
The joys originating in the different members of the collective mindplex largely amplify each other (“constructive interference”)
Why would this be the case? The crux is in Tolstoy’s inaccurate but evocative aphorism that “Happy families are all alike; every unhappy family is unhappy in its own way.” The truth of the aphorism is that there are certain archetypal structures and patterns of joyous experience, which are strange attractors of individual and collective mental/physical/social/transpersonal activity. Joy experience consists in large part of the process of journeying into these attractors, often together with others and in the context of achieving a greater unity and wholeness via this collective convergence. Pain experience consists largely of the fracturing and fragmentation of an archetypal whole.
To introduce a possibly useful metaphor: There is, in an important sense, a greater diversity to piles of fractured clay pieces than to whole cups and bowls. If one has a number of processes of convergence from piles of clay fragments into whole pottery, then when combining these processes in an appropriate logic one finds constructive interference — one finds the commonalities accentuated and the emergence of a holistic process of “pottery shape convergence”. If one has a number of processes of divergence from whole pottery into piles of fragments, each of these processes will generally proceed in a randomly/chaotically different manner and if the combination is done in an appropriate logic there will be cancellation and dampening of the details.
What kind of logic manifests this sort of constructive and destructive interference? Well, I’m glad you asked — the answer would be quantum logic, or approximations to quantum logic like (paraconsistent) fuzzy Constructible Duality logic with a Schweizer-Sklar t-norm.
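To pin down at least the t-norm half of that answer, here is a minimal runnable sketch (my own illustration; the function name and demo values are made up, and the quantum-logic side of the story is well beyond a few lines of code):

```python
def schweizer_sklar_tnorm(a: float, b: float, p: float) -> float:
    """Schweizer-Sklar t-norm T_p(a, b) for fuzzy truth values in [0, 1].

    Notable limiting cases of the family:
      p -> -inf : min(a, b)            (Goedel t-norm)
      p -> 0    : a * b                (product t-norm)
      p  = 1    : max(0, a + b - 1)    (Lukasiewicz t-norm)
    """
    if a == 0.0 or b == 0.0:
        return 0.0  # boundary condition of any t-norm
    if p == 0.0:
        return a * b  # limit of the family as p -> 0
    return max(0.0, a**p + b**p - 1.0) ** (1.0 / p)

# Combining two fairly-true propositions under different family members:
for p in (-5.0, 0.0, 1.0):
    print(p, round(schweizer_sklar_tnorm(0.8, 0.9, p), 4))
    # prints roughly 0.768, 0.72, 0.7 -- decreasing as p grows
```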
The Bad Old Problem of Evil
To walk through all this in more detail, let’s take a few steps back … back 500 years or so, for a start.
Leibniz and various other Christian philosophers of his era and earlier ones spent a lot of time and energy agonizing and analyzing the “Problem of Evil” — basically, why and how a God with absolute power and goodness would create a world with so many apparently nasty and perverse aspects as our current one.
Leibniz’s solution was basically that we live in the Best of All Possible Worlds — that even God is constrained by mathematical consistency, and what seem to be suboptimal aspects of our world are actually subtle compromises. If our world were tweaked to get rid of brain stunting of children due to malnutrition, say, then this would necessarily lead to changes in some other aspect of the world that would be even worse. Voltaire epically parodied this view in his beautifully written Candide….
I have always been a little more sympathetic than Voltaire to this Leibnizian perspective.
There is a certain spiritual truth here: The pain and torture and horror of this world we live in, is totally tied up with the beauty and wonder and perfection of the world. This is what Nietzsche meant when he had his Zarathustra say:
What do you think, you Higher Men? Am I a prophet? A dreamer? A drunkard? An interpreter of dreams? A midnight bell? A drop of dew? An odour and scent of eternity? Do you not hear it? Do you not smell it?
My world has just become perfect, midnight is also noonday, pain is also joy, a curse is also a blessing, the night is also a sun – be gone, or you will learn: a wise man is also a fool.
Did you ever say Yes to one joy?
O my friends, then you said Yes to all woe as well.
All things are chained and entwined together, all things are in love; if ever you wanted one moment twice, if ever you said: ‘You please me, happiness, instant, moment!’ then you wanted everything to return! you wanted everything anew, everything eternal, everything chained, entwined together, everything in love, O that is how you loved the world, you everlasting men, loved it eternally and for all time: and you say even to woe: ‘Go, but return!’
For all joy wants — eternity!
From a world-engineer’s point of view, on the other hand, while there is a deep truth to the inextricable intertwining of joy and pain in any particular world — there remains the question of whether some worlds might be engineered to have a much smaller admixture of pain than others.
It seems plausible that there should be limits to the Goodness of a world (by any reasonably rich criteria of Goodness) because of complex and subtle interdependencies between parts of the world, in accordance with Leibniz’s notion. However, it seems a bit out-there to claim — as Leibniz seems to — that our world actually reaches these limits. This is the aspect of Leibniz’s thinking that triggered Voltaire’s ire — not the notion that joys and pains are complexly intertwined and interdefined, but the notion that there is some ultimate cosmic optimality to how this intertwining manifests itself in our own world.
It’s certainly hard to know for sure from our limited perspective, but my best guess at the moment is that there are plenty of ways to improve our world without hitting the point where any improvement in one place will necessarily cause even worse damage in some other place — according to the criteria of better vs. worse that seem most meaningful to me (Joy, Growth, Choice and all that). The uncertainty involved in pontificating about such grand matters using our little human brains and cultures can feel somewhat daunting — but on the other hand, we who would create a beneficial Singularity can’t afford to shy away from the Big Questions; we just need to do our level best.
A post-Singularity version of the Problem of Evil will be confronted by superintelligences engaged in world design. Suppose a superintelligence wants to create a “Matrix” — a virtual world populated by autonomous intelligences, with richness and diversity of forms, and with as little pain and suffering and as much positivity as possible. What constraints on the Goodness of this world will the superintelligence hit? And how will this depend on the particulars of its conception of Goodness, and on any other constraints the superintelligence is operating under?
To put it less high-falutin’-ly, on a personal-philosophy level, this line of investigation is concerned with understanding “Why do soooo many nasty things exist in the world? Like pain and death for example” from a fundamental ethical and conceptual perspective. One can acknowledge that life is inevitably going to involve some unpleasantness here and there, without acknowledging that, say, brutal rape of young children and shelling of residential districts during warfare are necessary and critical aspects of existence. Various kinds of answers to this question are possible, e.g.
1. Just because we live in a crappy little shard of the multi-multi-…-multiverse that is not well-tuned for goodness and joy and benefit.
2. Because the evil and horrible and just tedious aspects of our world are important for teaching us lessons that are key to our spiritual growth.
3. Because the author of the simulation we live in was not very competent.
4. Because, given the constraints under which our world was formed, only a certain amount of Goodness is feasible and a certain degree of pain and suffering is unavoidable.
On the whole, while I think some answer of the form (4) is abstractly true, I strongly suspect that the amount of pain and suffering in our current world vastly exceeds what is necessary given the relevant constraints. Still, though — post-Singularity it may be that superintelligent world-engineers could hit up against the limits of just how lovely a world can be while still displaying the key aspects of what we consider a “realistic world” for intelligences to live in.
As I noted in The Technological Elimination of Pain is Both Feasible and Desirable, I believe that advanced science should and will be used to eliminate pain from the human experience, insofar as this can be done without incurring other, even worse side-effects. The line of thinking presented here suggests there may be limits to how far this can be taken. However, I suspect current humans live very, very far from these limits — we can reduce the degree of pain and suffering drastically below currently accepted levels before hitting any fundamental limits. That is, I suggest that
Yes, pain can be a beautiful part of joyful experience
Advanced consciousnesses (e.g. evolved posthumans or engineered AGI superminds) don’t need to suffer from pain in the same ways that we humans commonly do in our everyday lives.
The amount of pain suffered by non-advanced consciousnesses in our current mode of existence exceeds what is “necessary” or useful according to the variation on human value that I embrace, and can likely be drastically reduced post-Singularity
Consciousness, Attachment and Pain
Let’s recall from my prior blog post on the logic of pain the distinction between pain and proto-pain — and then extend the ideas given there a little bit.
I note there that in consciousness theory one has the distinction
Proto-consciousness: the raw (Peircean) Firstness of any process
Reflexive process: a process that includes an OK-ish (implicit or explicit) model of itself
Consciousness: The Firstness of a reflexive process
One can add onto this a further distinction
Elevated consciousness: The Firstness of a reflexive process that is non-attached to its model of itself
Non-attachment is a traditional Buddhist concept on which I gave a cognitive-science perspective in a blog post back in 2013.
There I discuss attachment neurally:
Attachment occurs -- neurally speaking -- when there is a circuit binding a cell assembly to the brain's emotional center, in such a way that emotion keeps the circuit whole and flourishing even though otherwise it would dissipate.
One can of course extend this beyond brains and beyond human-like cognitive architectures and say something like:
Attachment occurs in a cognitive system when a pattern P, emergent from a certain substrate S, becomes enmeshed in a network of mutual emergence with other patterns, so that when the substrate S changes so that P is no longer a pattern in it, the rest of this network encourages the system to keep attempting to identify P in S anyway, with more persistence than rational extrapolation would suggest.
In human minds the “network of mutual emergence” involved here is generally heavily emotional in nature — and if one assumes this, one gets from this “cognitive” characterization of attachment to the neural definition I gave in 2013.
There is an interesting connection with morphic resonance (aka Peirce’s “tendency to take habits” aka Smolin’s “precedence principle”) here: morphic resonance suggests that in actual reality patterns do tend to persist further than rational extrapolation and straightforward probabilistic calculation would suggest. However attachment phenomena seem to go well beyond the strength of most real-world morphic resonance phenomena, and involve minds “wishfully” assuming far more persistence of attached-to patterns than actually exists in this world.
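To make the “more persistence than rational extrapolation would suggest” clause concrete, here is a toy numerical sketch (entirely my own illustration; the decay constants and timing are arbitrary):

```python
def belief_trajectory(attachment: float, steps: int = 30) -> list:
    """Belief that pattern P is still present in substrate S.

    Evidence for P vanishes at t = 10 (the substrate changes). Each step,
    belief relaxes toward current evidence, but an emotional reinforcement
    term ('attachment') pulls it back toward its own previous value --
    the feedback loop that keeps a vanished pattern alive in the mind.
    """
    belief, out = 1.0, []
    for t in range(steps):
        evidence = 1.0 if t < 10 else 0.0
        rational = 0.5 * belief + 0.5 * evidence  # tracks the evidence
        belief = (1 - attachment) * rational + attachment * belief
        out.append(round(belief, 3))
    return out

print(belief_trajectory(0.0)[-1])  # ~0.0 : rational extrapolation lets go
print(belief_trajectory(0.9)[-1])  # ~0.36: attachment keeps 'seeing' P
```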
In a human psychology context, as I pointed out in the 2013 post:
The model of attachment presented here relates closely to Stanislav Grof's notion of a "COEX (Condensed Experience) system." Roughly, a COEX is a set of related experiences organized around a powerful emotional center. The emotional center is generally one or a few highly emotionally impactful experiences. The various experiences in the COEX, all reinforce each other, keeping each other energetic and relevant to the mind.
In a Hebbian perspective, a COEX system would be modeled as a system of cell assemblies, each representing a certain episodic memory, linked together via positive, reinforcing connections. The memories in the COEX stimulate powerful emotions, and these emotions reinforce the memories -- thus maintaining a powerful, ongoing attachment to the memories.
There are also some potential implications regarding neural models of enlightened consciousness:
ONE of the significant factors in the neurodynamics of enlightened states is: A change in the function of the Posterior Cingulate Cortex, so that in relatively non-attached people, emotion plays a significantly lesser role in the maintenance and dissolution of cell assemblies and associated attractors representing memories.
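As a cartoon of that hypothesis (my own toy model with made-up constants, in no way a serious simulation of Posterior Cingulate dynamics), one can watch a COEX-like circuit of mutually reinforcing assemblies either persist or dissolve depending on a single emotional-gain parameter:

```python
def coex_mean_activity(emotional_gain: float, steps: int = 40) -> float:
    """Toy Hebbian COEX loop: three mutually reinforcing memory assemblies
    plus an emotional center feeding activation back into all of them.

    Each assembly leaks (0.4 * own activity), receives mutual excitation
    (0.2 * mean activity of the other assemblies), plus emotional feedback.
    """
    acts = [1.0, 1.0, 1.0]
    for _ in range(steps):
        emotion = emotional_gain * sum(acts) / len(acts)
        acts = [min(1.0, 0.4 * a + 0.2 * (sum(acts) - a) / 2 + emotion)
                for a in acts]
    return sum(acts) / len(acts)

print(coex_mean_activity(0.5))   # high emotional gain: assemblies stay lit
print(coex_mean_activity(0.05))  # reduced gain: the COEX quietly dissolves
```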
What I’m here calling elevated consciousness, then, is basically a state of mind that reflexively models itself, but such that its self-model is not subject to significant attachment dynamics. In a human context this mostly means the self-model is not too wrapped up with COEX systems. It means that when the self changes empirically, the self-model then changes accordingly in a rapid and realistic way, rather than sticking with old aspects that no longer apply in an over-attached way.
Now let’s apply these notions to the analysis of pain. In the Logic of Pain post I argue one can make the parallel distinctions:
Proto-pain: the proto-consciousness associated with the separation of a pattern from a substrate (i.e. a decrease in the degree to which that pattern is indeed a pattern in that substrate)
Reflexive transition: a transition in the condition of reflexive process, which involves the process richly modeling said transition
Pain: the consciousness associated with the non-reflexive separation of a reflexive pattern from a substrate (i.e. the transition embodying the separation is not a reflexive one, though the pattern is a reflexive process)
I’m suggesting now to extend this to
Elevated Pain: the consciousness associated with the non-reflexive separation of a reflexive pattern from a substrate, in the case where said consciousness is not attached to the former binding of the pattern to the substrate
In this scheme a plant feels proto-pain but probably not pain. Every mammal pretty clearly feels pain in roughly the same sense people do. The degree to which a bug or fish experiences pain vs. proto-pain is currently a somewhat vexed question.
When I burn my finger, the finger is feeling proto-pain as its structure gets disorganized by the burn. My brain is also feeling pain as a result of the signals conveyed from the finger (which disorder its model of the finger and its response patterns to the finger and its expectations regarding the finger’s future sensation and activity).
When a kung fu master gets hurt but doesn’t really feel or react to the pain in any intensive way, they are experiencing one variety of elevated pain. In this case, the pain is there and comes and goes, but it doesn’t occupy the mind or impair activity, it just passes by like any other sensory observation.
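To make the taxonomy concrete, here is a tiny sketch of the definitional logic as I read it (an illustrative rendering, not a formalization taken from the earlier post):

```python
from dataclasses import dataclass

@dataclass
class Separation:
    """A pattern being separated from its substrate."""
    pattern_is_reflexive: bool      # does the pattern model itself?
    transition_is_reflexive: bool   # is the separation itself richly modeled?
    attached_to_old_binding: bool   # does the process cling to the old binding?

def classify(s: Separation) -> str:
    if not s.pattern_is_reflexive:
        return "proto-pain"            # raw Firstness of separation, no self-model
    if s.transition_is_reflexive:
        return "reflexive transition"  # a modeled change, not pain per se
    if s.attached_to_old_binding:
        return "pain"                  # ordinary pain: the mind clings and hurts
    return "elevated pain"             # pain 'not minded', as in the kung fu master

# A burned plant leaf vs. my burned finger vs. the kung fu master's:
print(classify(Separation(False, False, False)))  # proto-pain
print(classify(Separation(True, False, True)))    # pain
print(classify(Separation(True, False, False)))   # elevated pain
```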
Pain Asymbolia as Self-Nonattachment to Pain
Analyses of the cognitive, philosophical or biological underpinnings of pain need to take into account the peculiarities of individuals with disorders like pain asymbolia — rare but real folks for whom pain does not hurt, but just feels like a neutral-valence information signal.
If you’re Jo Cameron or someone else with pain asymbolia or related neurological oddities, then when you burn your finger, it doesn’t feel like it hurts. Now, the finger considered in itself may feel its own proto-pain … and the brain’s model of the finger is still disordered by the burn. But one interpretation of what’s happening would be: Not just the finger but the portion of the brain modeling the finger experiences proto-pain … but key parts of the brain doing deliberative self-modeling do not experience pain.
This is consilient with the psychologist Gerrans, who argues that individuals with pain asymbolia do not associate pain with their persistent, continuously self-reconstructing self-model. In this account the self-model of the pain-asymbolic person simply doesn’t deal with the shattered expectations, the separation of pattern from substrate, involved in the burned finger.
Exactly how this happens “under the hood” in the pain-asymbolic brain is not currently clear. My take is that the self-modeling process is in these cases “nonattached” to the pattern of integrity of the finger. When this pattern goes away, it’s just allowed to disappear.
The reason our cognitive and self-modeling systems experience pain associated with things like a burned finger is that they are attached to things like the integrity of the finger — so that when the pattern they’re attached to goes away, they experience regret and a difficult time “letting go” of the now-vanished pattern. The brain of the kung fu master, and the brain of the pain-asymbolic, do not possess this attachment. Yet it seems these two cases achieve their non-attachment in quite different ways — the details of which will be quite fascinating to unravel as neuro-imaging advances.
Viewing this from a paraconsistent logic perspective, we can say that in the suffering mind, the skin on the burned finger is both there and not there. In the nonattached mind, once the burned skin is gone, it’s just gone and there’s no lingering phantom sense of its reality. The pain is associated with a BOTH paraconsistent true value, which is an increase in truth-value entropy. The fading of the pain goes along with a decrease in truth-value entropy, as the reality of the loss of this skin is fundamentally accepted. We can say that pain corresponds to an increase in truth value entropy of propositions regarding system integrity — or in open-ended intelligence terms, system individuation….
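For concreteness, here is a minimal sketch of the truth-value-entropy idea (my own illustration; the four-valued framing and the particular numbers are stand-ins for the real paraconsistent machinery):

```python
from math import log2

def truth_value_entropy(dist: dict) -> float:
    """Shannon entropy (bits) of a distribution over paraconsistent truth
    values {T, F, BOTH, NEITHER} for a proposition like 'the skin is intact'."""
    return sum(-p * log2(p) for p in dist.values() if p > 0)

suffering = {"T": 0.3, "F": 0.3, "BOTH": 0.4}  # skin both there and not there
accepting = {"F": 1.0}                         # the skin is gone; it's just gone

print(round(truth_value_entropy(suffering), 2))  # 1.57 bits: pain as high entropy
print(truth_value_entropy(accepting))            # 0.0 bits: acceptance collapses it
```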
Pain and Suffering Implicit in Open-Ended Intelligence
Looking at the matter of pain from the perspective of fundamental cognitive systems theory —
The theory of open-ended intelligence considers intelligent systems as driven by two primary dynamics: individuation (strengthening and reification of system boundaries) and self-transcendence (developing and transmogrifying into something fundamentally novel and different). It’s easy to see that self-transcendence generally involves a certain amount of pain at least for some of the subsystems involved.
Self-transcendence will generally involve some prior system patterns becoming less prominent, while new patterns gain in intensity. This won’t necessarily always be the case, though — it would be possible for a system to self-transcend by retaining all its existing patterns, and then growing new additional ones.
The property of retaining prior patterns through transitions might be called “nostalgia.” Nostalgia is different from individuation, because a system could retain most of its patterning through a transition yet lose those particular patterns critical to its distinction from other systems. Conversely, a system could individuate very effectively while churning through a variety of different system patterns over time — maintaining its boundaries strictly as it grows. This year patterns A and B, next year patterns B and C, the year after patterns C and D … but all along with very strict boundaries distinguishing the system from its environment.
Achieving individuation and self-transcendence while also satisfying nostalgia, however, makes a difficult quest (balancing the often-contradictory requirements of individuation and self-transcendence) even trickier. In realistic situations there is very often a tradeoff to be made, and with the imperatives of individuation and self-transcendence generally more primary, the drive to nostalgia often ends up getting compromised.
Key here is the factor of limited energetic and temporal resources. With endless time and space and energy to explore different possible routes to self-transcendence, it would more often be possible for systems to figure out how to preserve their boundaries, grow and develop tremendously, and retain their old patterns alongside the new things that grow. There would be limits to this — sometimes you’d just have to throw out the old to sufficiently emphatically embrace the new — but not as strict as the limits imposed by the need to figure out how to grow and self-preserve under conditions of strictly limited resources.
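A toy sketch of the resource argument (my own illustration, with made-up pattern values and costs): under a loose budget everything survives; under a tight one the old patterns get dropped, and the dropped patterns are where the (proto-)pain lives, in the present terms.

```python
def retained(patterns, budget):
    """Greedy retention under a resource budget: keep highest-value patterns.

    Each pattern is (name, value, cost). Old patterns compete with new
    ones for the same limited substrate -- when the budget is tight,
    nostalgia loses and the old patterns get dropped.
    """
    kept, spent = [], 0
    for name, value, cost in sorted(patterns, key=lambda p: -p[1]):
        if spent + cost <= budget:
            kept.append(name)
            spent += cost
    return kept

old = [("A", 2, 3), ("B", 3, 3)]       # cherished prior patterns
new = [("C", 5, 3), ("D", 4, 3)]       # higher-value novel patterns
print(retained(old + new, budget=12))  # ample resources: all four survive
print(retained(old + new, budget=6))   # tight budget: A and B are dropped
```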
Growing in a way that involves letting old patterns dwindle intrinsically involves pain (or at very least proto-pain). The pain may be superposed with, and interwoven with pleasure — but it’s pain nonetheless. The subtle aspect of cognitive architecture as related to pain would seem to be: To what extent can one direct the pain experienced by a system toward the less acutely conscious parts of the system? To what extent can one make it proto-pain rather than full-fledged pain?
The examples of pain asymbolics and kung fu masters suggest that, in the context of human cognitive architecture, the answer is: To quite a large extent. This poses a fascinating challenge to AGI architects. How can one architect an AGI system so that its experience of pain bears a family resemblance to pain asymbolics and kung fu masters rather than ordinary everyday humans? I’ll return to this question a little later below…
To the extent that pain involves increase of truth value entropy of propositions involving individuation, it can’t be eliminated entirely in any real-world self-transforming fundamentally growing system. But it can be minimized via minimizing attachment to prior manifestations of individuation… which minimizes the role that these propositions play in a system’s reflexive cognitive dynamics. A reflexive cognitive system has got to model its own individuation, de-individuation and self-transformation — but it doesn’t have to obsess on its de-individuations; it can consider these in a matter-of-fact way and then let them drift from consciousness. And this observation is relevant both on the physical system damage level and on more abstract cognitive and social levels.
Pain and Suffering Implicit in the Combination of Distinction, Time and Diversity
The near-inevitability of suffering (or at least proto-suffering) in the experience of open-ended intelligence reflects yet more fundamental logic regarding the relationship of pain to key properties of the universe.
Roughly, one may say: In any universe featuring:
evolution over time
distinctions between systems and their environments
limitation of resources
a diversity of form
then pain will be a part of the story.
The conceptual argument should be clear enough. To get diversity in a context of systems evolving over time with limited resources, it will be necessary for old patterns to be sometimes left behind in favor of new ones. There will be pain associated with the old patterns dwindling.
In terms of basic mathematical structures, time is basically ordering, distinction is foundational Boolean-ish logic (cf Laws of Form), resource limitation is arithmetic summation, and diversity is information-theoretic variety. In any universe with enough mathematical structure to feature ordering, addition, logic and information, you’re going to have the structures needed to build suffering. To put it differently: Suffering is a natural consequence of some fairly basic, massively elegant mathematical structures.
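As a toy formalization of this claim (my own construction, offered only to show that the four ingredients suffice, not anything rigorously derived): let P(t) be the set of patterns a system supports at time t, with a resource constraint that the total cost of the patterns in P(t) stay under some budget B. Then one crude “suffering measure” is S(t) = Σ over p in P(t−1) \ P(t) of I(p), where I(p) = −log₂ Pr(p) is the information content of a dropped pattern. This uses exactly the four ingredients above: ordering (the time index), distinction (set membership and set difference), summation (both in S(t) and in the budget constraint), and information (the −log term). When the budget binds and new patterns keep arriving, S(t) is forced above zero.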
Again, one should not take arguments like this to imply that the DEGREE of suffering we see in the everyday human world is in some way inevitable or desirable, though. The notion that suffering can’t be reduced to zero in any sufficiently mathematically rich universe — a notion I certainly haven’t rigorously demonstrated here, but regarding which I’ve perhaps hinted at some intuitive plausibility — is interesting and potentially important for the future of human and machine experience, yet shouldn’t stop us from seeking to reduce the current level of suffering among Earthly sentiences by a factor of 10, 100 or a million.
Toward AGIs and Posthumans with Minimal and Elevated Pain
Leibniz was partly though not wholly wrong on pain and evil. We probably can’t get rid of suffering entirely, without sacrificing self-transformation and fundamental development and growth, or else sacrificing continuity of self and identity. But we can almost certainly radically reduce the amount of suffering below what we see in the human world today, or in the natural world in the forests and fields and depths of the oceans.
And the first key to this radical reduction of suffering may just be radically simple: Architect minds to avoid excessive attachment.
Minimizing attachment means that proto-pain arising in less reflexively conscious parts of a system, doesn’t generally turn into major amounts of pain in the more reflexively conscious parts of the system. It also minimizes the degree to which reflexive cognition hangs onto pain that it creates for itself in the context of its own thinking and feeling. In short, minimizing attachment minimizes the total amount of pain because it minimizes the feedback loops via which reflexive cognition can iteratively and recursively turn pain into more pain.
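The feedback-loop point can be put in a one-parameter cartoon (again entirely my own illustration): if each cycle of reflexive rumination re-creates a fraction g of the current pain, total pain is a geometric series that stays small for g well below 1 and blows up as g approaches 1.

```python
def total_pain(initial: float, feedback_gain: float, steps: int = 30) -> float:
    """Total pain over time when each reflexive cycle re-creates a
    fraction `feedback_gain` of the current pain (rumination/attachment)."""
    pain, total = initial, 0.0
    for _ in range(steps):
        total += pain
        pain *= feedback_gain
    return round(total, 2)

print(total_pain(1.0, 0.2))   # non-attached: ~1.25, the pain passes quickly
print(total_pain(1.0, 0.95))  # attached: ~15.7, the pain echoes and lingers
```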
There is an interesting (though obvious) divergence between consciousness and pain here, in that:
Elevated consciousness produces MORE consciousness than ordinary reflexive consciousness
Elevated pain produces LESS pain than ordinary reflexive pain
This is because non-attached reflexive processes will tend to connect more effectively with broader systems outside the given cognitive system, and thus lead to the emergent co-creation of more and more reflexive processes. Whereas, non-attached reflexive pain processes will tend to dissipate as the system focuses on newly created patterns rather than recently disattached ones.
The long, winding road of this blog post has so far led to some fairly intuitively obvious conclusions: That if we create future minds with more “enlightened” consciousness and less attachment in their psychology, these minds will also suffer far less. However, as unsurprising as these ideas are from various (e.g. transpersonal psychology) perspectives, I believe it’s worthwhile to work through them carefully and connect them with the conceptual frameworks underlying AGI and neuroscience and allied areas.
And then we get to the funkier bits, which I alluded to above:
Non-attachment to self and pain reduces fear of I-Thou relationships with other minds.
Collectives of minds engaged in I-Thou relationships effectively form mindplexes, sharing pains and joys in a self-organizing distributed holistic way, often involving non-ordinary consciousness states
Joys tend to involve convergence to archetypal patterns, so when summed within a mindplex (using a logic in which summation involves emergence, e.g. quantum logic or similar) they tend to undergo constructive interference — the joys of parties in an I-Thou relation boost each other
Pains tend to involve chaotic fragmentation of archetypal patterns, so when summed within a mindplex they tend to undergo destructive interference — their chaotic fluctuations tend to cancel each other out (see the toy numerical sketch after this list)
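Here is that toy numerical rendering of the interference claim (my own sketch, treating feelings as unit phasors, which is of course a drastic simplification of any actual quantum-logic treatment):

```python
import cmath, math, random

def combined_amplitude(n: int, aligned: bool) -> float:
    """Sum n unit-amplitude 'feeling' waveforms as complex phasors.

    Aligned phases model joys converging on a shared archetypal attractor;
    random phases model pains, each fragmenting in its own chaotic way.
    """
    phases = [0.0 if aligned else random.uniform(0, 2 * math.pi)
              for _ in range(n)]
    return abs(sum(cmath.exp(1j * ph) for ph in phases))

random.seed(1)
print(combined_amplitude(100, aligned=True))    # joys: 100.0 (constructive)
print(combined_amplitude(100, aligned=False))   # pains: ~10, i.e. ~sqrt(n)
```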
So putting the pieces together, the bottom line on pain and post-human open-ended intelligence seems to be:
Pain to some degree seems inevitable in open-ended growing systems operating in regimes of limited resources
However the “necessary” degree of pain appears to me far lower than the degree of pain we see in our everyday world
From the standpoint of minds that are non-attached enough to form into I-Thou mindplexes with other minds, pains will tend to dampen and joys to amplify
Superhuman superintelligences may well suffer — a little bit. But their degree of suffering doesn’t have to be anywhere near human level, and they don’t need to be as prone to attachment as people either, so they will much more easily form into pain-dampening, joy-amplifying mindplexes.
It will be good to be a post-Singularity supermind!