5 Comments

I agree that current models are useful but not... interesting? It’s also telling that these AIs are OK at generating things but fairly awful at collaborating. What I’m interested in are ways for humans to be the creativity engines, with AI augmenting that. I’m never surprised when talking to a chatbot, except by the things it still fails to do. As a musician, wouldn’t it be cool to come up with ideas and be able to play around with them without being either subsumed or having to spell everything out in great detail?

I think (still, after all these years) that without a somatic sense, muscle memory, and feelings, many aspects of the way we operate aren’t “better-able” using AI. Not because humans are special, but because biological complexity is special. And everything is connected. When models are built on massive training data, a lot of dross creeps in. When the companies building AIs are profit-focused in an environment where externalities are ignored, the AIs will reflect their designers’ worldviews.


I definitely agree with the major thrust of this post. General intelligence is tightly coupled with evolution, or what I would call open-ended innovation: the dog must be capable of learning new tricks when challenged by new, unpredictable circumstances. LLMs are not evolutionary architectures. They do possess certain aspects of evolution, namely the ability to combine and recombine vast amounts of information, even recursively to a certain degree, yet at their core these do not amount to evolution. I do not see a trajectory where the quantity of bigger models and more training miraculously turns into the magical quality of evolution, which is the true mark of general intelligence. Betting on a self-modifying architecture (plus sub-symbolic/symbolic computation) is definitely a game changer in the right direction toward AGI, yet I am still wondering whether it is the final missing piece in the puzzle of achieving AGI.


I can only barely follow this, but whatever Ben says is what’s up...


“It does very poorly at our MeTTa language (a novel AGI language, part of our OpenCog Hyperon would-be AGI framework), even after lots of creative attempts to teach it...”

I feel this is the biggest weak spot of today’s AI: you can’t teach it anything, because it doesn’t learn.


I tend to agree that LLMs are not sufficient for HLAGI, but I see your business colleague’s point and I’m rethinking this. I’ll try to put my inner dialogue into precise words and reply.
