Discussion about this post

Giulio Prisco

Hi Ben, I read the book attentively, cover to cover, and I mostly agree with your review. Mine is here: https://magazine.mindplex.ai/post/sorry-mr-yudkowsky-well-build-it-and-everything-will-be-fine

In particular, I totally agree with "the most important work isn’t stopping AGI - it’s making sure we raise our AGI mind children well enough." I don't think we can be 100% sure that a well-raised AGI won't become a psychopath, but then we can't be 100% sure of that even with our organic human children. But good parenting helps in both cases.

After reading the book, my main concern is this: over the years Yudkowsky has demonstrated a certain ability to attract weak-minded people to his personality cult. Therefore, his thesis will probably be amplified and receive more attention than it deserves.

Myself, I think giving birth to our AGI and then ASI mind children is our cosmic destiny and duty, and I think the universe will provide for both them and us.

Aaron Turner

"By treating beneficial AGI as impossible..." - to be fair, this is not exactly what the book says, which is: "If any company or group, anywhere on the planet, builds an ASI using anything remotely like current technologies...". In other words, beneficial AGI is impossible if it's fundamentally based on NNs/SGD (i.e. "grown", not "crafted"), with which I broadly agree.

Without wishing to be cast out as a radical heretic, I have for some years believed it to be extremely unlikely (1% subjective probability) that any LLM-based system will ever achieve reliable human-level AGI (irrespective of how much money is spent on scaling, or how many cognitive fudges such as RAG or CoT are somehow bolted on).

Even more radically, I have strongly suspected that neural nets (being "grown" rather than "crafted") are essentially unalignable (to the degree required for human-level or greater AGI).

I cannot objectively prove either of these assertions (they are to a significant degree motivated by ~40 years of personal thought and research pertaining to AGI and machine cognition, which is subjective and not easily shared).

That said, there is a growing minority of AI “grey-hairs” (such as Emily Bender, Yann LeCun, Gary Marcus, Melanie Mitchell, Richard Sutton, and Stuart Russell) who seem to broadly agree with me re LLMs. In “If Anyone Builds It, Everyone Dies” (2025), Yudkowsky and Soares conclude, as I do, that NNs are effectively unalignable (to the degree required for ASI). Plus there is a growing cacophony of alarm bells re the major AI labs’ insatiable need for VC funding being unsustainable, implying that they may well run out of cash before reaching reliable human-level AGI.

If we grey-hairs are correct, then (1) the vast sums currently being spent chasing AGI via LLMs are effectively being wasted (a depressing side-effect of which is that alternative approaches are seriously underfunded), (2) there *is* no imminent AI safety emergency necessitating that all AI safety research be NN/LLM-focussed, and (3) we need alternative approaches that do not fundamentally rely on LLMs or even NNs.

If by some chance any readers of this comment happen to be in broad agreement, and have the time to do so, please see “TTQ: An Implementation-Neutral Solution to the Outer AGI Superalignment Problem” (preprint: https://doi.org/10.5281/zenodo.16876832), which is the first of four planned papers outlining my personal research agenda for "Gold-Standard" AGI.

The TTQ paper has so far been downloaded over 800 times, but I have only had serious feedback from a single person, Professor Steve Young CBE FRS at the University of Cambridge, who kindly provided the following testimonial: "The TTQ paper is certainly a tour de force. Aaron sets out a carefully argued process for producing an AGI in as safe a manner as possible. I hope that people read it and at minimum use it as a check list of things to consider."

If by any chance you have time to read it (it's not short - apologies in advance), I'd love to know your thoughts!
