BGI Summit & Unconference, Coming to Panama City and the Internet/Metaverse, Feb 27-March 1
The AI field has picked up steam remarkably in recent times, with exciting new developments announced weekly. There is an increasingly widespread sense that AGI is drawing near, and commensurately growing concern about what this will mean for humanity.
For sure, at the moment Artificial General Intelligence (AGI) at the human level remains a bit out of reach. Modern Large Language Models are incredible, yet they lack autonomy, fundamental creativity, and the capability for complex multi-stage reasoning. However, research aimed at addressing all these weaknesses is energetically underway and backed by significant resources. Various avenues are being explored, including improving and expanding LLMs as well as quite different approaches involving AI methods like symbolic reasoning and evolutionary learning or more realistic biological simulation.
Whichever research approach succeeds at making the next big leap, once we have an AGI with general intelligence at the human level, my own intuition is that it probably won’t be much longer before we have Artificial Superintelligence (ASI). A human-level AGI worthy of the name will already have the ability to improve its own software and hardware, leading to a rapidly accelerating process of snowballing capability that I.J. Good in 1965 called “The Intelligence Explosion.”
Fears regarding the potential implications of this sort of development are understandable. Although Hollywood exaggerates the dark potentials of advanced AI, the risks are nonetheless very real.
An AGI beyond the human level could lead us into an era of abundance, conquering death, disease, and poverty, removing the need to work for a living and opening up a currently incomprehensible variety of astounding options for humans and other sentient beings. Counterbalancing this, the current economic and control structure of the AI field is not entirely a cause for positive hope. The large corporations and major governments that house the majority of the planet’s applied AI work have their own particular interests in mind, alongside and sometimes contradicting the overall interests of humanity.
So what’s the solution to the uncertainty associated with the exciting and unprecedented position we find ourselves in?
As things opened up after the pandemic and I started going to events again at a more measured pace, I got a strong reminder of the unique value that comes from gathering together with diverse others in a common space. Gathering in virtual spaces can be extremely worthwhile, and keeps getting better as the technology advances. But at this point, there's still some powerful, unique value in getting bodies together in a common physical space to share visions, ideas, and reactions in a holistic way.
It was with a bit of trepidation that I decided to put together a conference on Beneficial General Intelligence (BGI).
Is this by far the most critical issue facing the human species at this point in time? Absolutely. Any other technology capable of doing good can likely be completed and rolled out faster and better by a Beneficial AGI. Any other technology capable of doing harm can likely be deployed and inflicted more effectively by a NON-beneficial AGI, working under its own direction or that of malevolent or selfish humans.
But how can a few more talks and panels, whether in person or online, help address these issues? Talks and panels on AI ethics have been going on for decades, and plenty of interesting conversations have been triggered. Meanwhile, AI development and deployment continue largely orthogonally to all this discussion.
To deal with these new challenges, we do need to listen and to discuss, but we also need more than that—we need to come together and re-invent our future in a new and better way.
The story of how AGI and humanity co-exist in the future has yet to unfold. The organizations and dynamics that now seem so indomitable could eventually go the way of Gregg shorthand, Blockbuster Video, Honeywell, land lines, and horse-drawn carriages. Current AI technology is amazing, but there are still a few radical inventions to be made between here and human-level AGI, and it's not yet written who will make these inventions, what form they will take, or what kind of organization will roll them out.
We need to make plans, strategies, and tactics, knowing full well that they will all need to be re-thought time and time again as technology and society evolve in unpredictable and rapid ways. We need to form agile, flexible organizations that can reshape themselves in accordance with shifting realities without losing sight of the critical mission of catalyzing the emergence of Beneficial AGI (BGI).
I couldn’t think of a single structure for a BGI event that would really suit the purpose, so I decided to go for multiple formats.
On February 27th to 28th, there will be a BGI Summit, with a limited number of participants selected for experience and expertise in areas related to BGI, but also for diversity and representation of a breadth of walks of life, conceptual perspectives, and geographical regions.
The Summit will take place at the Hilton Panama, selected because of its beautiful location and also because of Panama’s relatively accepting visa policy, enabling participants from around the world to join in person without uncertainty or hassle.
I don’t like elitism, and frankly, the notion of having a small, hand-picked group bothers me a little, but from a practical standpoint, there is a special value to having a relatively small group interact intensively F2F.
During the Summit, the morning sessions will feature talks and panels, while the afternoons will be more working group-oriented, with the goal of creating concrete plans for coordinated actions in various areas relevant to BGI.
The Summit will immediately be followed by a BGI Unconference, Feb 29 and March 1, where the Summit participants will be joined by anyone else who’s up for it. Here, the agenda is wide open—follow-ons to Summit sessions will be a natural ingredient, but it will be disappointing if we don’t see any surprising new directions emerge alongside.
The Unconference will have a F2F aspect at the same location as the Summit and will also have a virtual aspect, which will involve traditional video conferencing platforms and also some experimental sessions in online virtual “metaverse” worlds.
Even an extraordinary event like this one is not going to solve all the problems of catalyzing Beneficial AGI. Although it seems probable that human-level AGI is near and ASI (Artificial Superintelligence) may unfold not long after that, we can't yet know the precise form the first human-level AGIs will take, nor what practical pursuits or human communities they will be most involved with.
What we can do, however, is…
Bring together a diversity of perspectives to openly, sincerely, and imaginatively flesh out plausible future scenarios and possibilities.
Form friendships, networks, and alliances oriented toward dynamically conceiving and taking practical actions aimed at beneficial AGI.
Open our minds and work on our own selves and understandings so that we can confront the unfolding future with a minimum of counterproductive preconceptions, biases, and attachments.
Think together about the relevant issues with clear minds and compassionate hearts, setting aside our historical belief systems and personal interests.
Conceive specific plans and strategies for maximizing the odds that, as AGI is developed, it emerges as a beneficial force for humans and other sentient beings, understanding that these plans are almost surely going to morph multiple times and in multiple ways before human-level AGI is reached.
It is our humble hope that the BGI Summit & Unconference will be able to help at least a little bit toward these goals.
Please see the conference website for more details on the event … and more and more info will be added there as February 27 approaches!