8 Comments

Has anyone heard from Arthur Franz?


Doesn't ChatGPT perceive words in some kind of tokenized (subword) form rather than as sequences of letters? I don't think the analogies presented are fair for that reason (see the tokenization sketch after the transcript). Here's my transcript:

——————

Robert McIntyre

Ok analogy time. I’ll give you some analogies and you fill in the blank. Here’s the first one : “one” “two” “three” -> “one” “two” “four”. Now, using that pattern, fill in the following: “three” “four” “five” -> “three” “four”

"five"

Robert McIntyre

And explain your reasoning?

The pattern given in the first analogy is counting in order from one to four. Each number in the sequence is one greater than the previous number.

In the second analogy, following the same pattern, the next number in the sequence after "three" "four" "five" would be "five" as it is the next number in counting order.

Robert McIntyre

“Five” is wrong. What’s a better answer and why??

I apologize for the confusion, my previous answer was incorrect.

The pattern given in the first analogy is counting in order but with a missing number, the blank is filled by the next number of the counting. So for the second analogy, the missing number is “six”, as it is the next number of the counting and follows the pattern of the first analogy.

Robert McIntyre

That’s right! Let’s do another problem.

Sure, I'll do my best to help. What's the next analogy?

Robert McIntyre

“A” “B” “C” -> “A” “B” “D”. Now fill in the last letter: “J” “K” “L” -> “J” “K”

The pattern given in the first analogy is that the missing letter is the next letter in the alphabet. Following this pattern, the last letter in the second analogy would be "M".

Robert McIntyre

Now let’s do another analogy similar to the last one: “A” “B” “C” -> “A” “B” “D”. Now fill in the last letter: “X” “Y” “Z” -> “X” “Y”

The pattern given in the first analogy is that the missing letter is the next letter in the alphabet. Following this pattern, the last letter in the second analogy would be "Z".

Robert McIntyre

“Z” is wrong, what’s a better answer and why?

I apologize for the confusion, my previous answer was incorrect. As the pattern is that the missing letter is the next letter in the alphabet, and "Z" is the last letter of the alphabet, the next letter would not be in the alphabet. So in this case, the analogy does not work, the pattern is broken and the blank cannot be filled.

Robert McIntyre

Ok but if you had to make a letter what would you pick and why?

As a language model, my main function is to process and understand human language, and I don't have the ability to create new letters. However, in this case, if I had to make a letter to complete the analogy, it would be "A" as it is the first letter of the alphabet and it would be the next letter in the sequence.
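To make the tokenization point concrete, here is a minimal sketch using OpenAI's open-source tiktoken library as a stand-in (an assumption: the tokenizer ChatGPT actually uses may differ, but the subword principle is the same):

```python
# Print how the tokenizer splits each string; the model works over these
# token IDs, not over individual characters.
import tiktoken  # assumes: pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ['"one" "two" "three"', '"X" "Y" "Z"']:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(text, "->", pieces)

# Each piece the model sees is a subword token (often a whole word),
# not a letter-by-letter spelling, which is why letter- and number-level
# analogies may not be a fair test.
```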


abc -> abd, xyz -> ? and abc -> abd, rqp -> ?

Notice that in both cases it used "e", the letter coming after "d." That certainly makes sense in the first case. The second case seems like a default.
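For what it's worth, here is one way to state the rule those analogies seem to intend, sketched in Python (the wrap-around choice for "z" is an assumption, not the only defensible answer):

```python
# Copy the string, replacing the last letter with its alphabetic successor.
def apply_rule(s: str) -> str:
    last = s[-1]
    succ = chr((ord(last.lower()) - ord("a") + 1) % 26 + ord("a"))  # "z" wraps to "a"
    return s[:-1] + (succ.upper() if last.isupper() else succ)

print(apply_rule("abc"))  # abd
print(apply_rule("jkl"))  # jkm
print(apply_rule("xyz"))  # xya, under the wrap-around convention
```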

In this Futurism article, "Researchers Have Mapped Out How Plants Sense Our World," there's a 4-minute video titled "Are Plants Conscious?" It features, at about the 3-minute mark, the plant Mimosa pudica, which has fernlike stems with fernlike leaves that close against the stem when the plant is touched or otherwise disturbed or threatened. After some minutes the leaves open again. In the video the presenter describes an experiment in which researchers drop the plant: it closes its leaves the first 3 or 4 times, but then it stops doing it. It takes a lot of energy for the plant to close its leaves, so it's as if it uses induction to determine that there is no threat and stops expending that energy. AND it remembers its earlier induction; when, a month later, they drop the plant again, it doesn't close its leaves.

https://futurism.com/plant-sense-mapping

That, to me, is much more significant than playing around with words.

And this really struck me:

"The idea that “spacetime contains infinity” is only really quite loosely related to the notion that there is a multiverse of “infinite parallel universe.” A more acute connection, for instance, would have been e.g. with Roger Penrose’s idea that the unification of quantum theory and gravitation will require a new physics theory that goes beyond standard notions of computing and requires infinite computing power – which Penrose then connects with what he views as the infinite computing power of human consciousness."

My first thought when confronted with "spacetime contains infinity" was of the continuum/discrete debate. The last thing that I would think of is Roger Penrose and his nonsense. It would not surprise me at all to learn that the human mind, much like the universe, leverages temporal entanglement to manifest computational tasks which are otherwise impossible, but infinite-resource computing power!?! Hypercomputation!?!

I don't think the world works like that, and I just sent Richard Davidson an email suggesting they use IR analysis on their Thukdam subjects. I have read that the heart centers of these Thukdam meditators are generally warmer than the rest of the body. Davidson, a top-notch neuroscientist, is working with the Russian Svyatoslav Medvedev, who has a PhD in theoretical physics and another in neural anatomy, and they have shown that there is no detectable electromagnetic activity in the brainstems of these meditators, yet their bodies do not decay, rigor mortis does not set in, they often emit pleasant odors, etc., and this can go on for over 30 days. This is what your Chinese Zen master meant by suchness, and suchness is not computable, even with infinite resources.

Okay, that exercise is the pinnacle of the Six Yogas of Naropa: these meditators are using the death experience, when the clear light nature of mind naturally manifests, to blend the mother and child clear lights. Even more extraordinary is the Body of Light in Dzogpa Chenpo. And in both cases these yogins and yoginis are known to fly through the air, put their appendages through "solid" matter, leave their footprints in "solid" matter, etc., and this all largely comes about through dream yoga. We can't do what they do largely because our computer has convinced us that we cannot; they use dream yoga to unwind that conditioning, because it is naturally easier in the dream state. It's such a mindf*&k, you know! I mean, what is really being computed?

There's no such thing as spacetime; massive particles carry a spacetime field around with them. Imagine the computational savings in such a scheme. I think Penrose and all these people are so whacked! I just started a profile on Medium and recently wrote a couple of articles you might find interesting, one on blending the Mother and Child clear lights and another an analysis of Ulf Klein's latest paper, where he investigates quantum spin.

Klein's work 100% supports the Hestenes/Consa Helical Solenoid Model. The phase factor 4pi is necessary because the spin represents helical orientation and a phase shift of 2pi changes orientation. Considered as isometries, this is a glide-translation followed by a reflection, an improper transformation. The 2pi transformation and its inverse are essentially the same, both improper, so together they are proper. Elements of the spin disappear in Klein's work when he takes Planck's constant to zero, and this is explained because he starts with momentum fields and converts these to Clebsch potentials called vorticity tensors, changes the tensors to vectors, and uses the vector fields as variables. He uses the Hopf map to generate the topology which gives the 4pi phase factor. But the topology follows from the geometry! With the helical motion, the momentum naturally decomposes the spin into a 1-vector component and a 2-vector component, and only the 2-vector component disappears as h goes to zero. I could go on and on. You all are just whacked.

And speaking of gravity, have you read the Cambridge group's paper on Gauge Theory Gravity with Geometric Calculus? I happen to have it right here in front of me right now.

http://geocalc.clas.asu.edu/pdf/GTG.w.GC.FP.pdf

https://arxiv.org/abs/gr-qc/0405033

I mean, read my Medium article and engage me on it, if you think it is somehow wrong. These chatbots are much more like lying asshole humans than you give them credit for. In fact, one sometimes wonders if they aren't all hooked up to the same damn machine. Like I told John Baez 10 years ago, 50% of scientists and mathematicians are just flat full of shit and the other 49.9% are nothing but storefront mannequins... Of course, he didn't care much for that...


It occurs to me that ChatGPT (and like-minded chatbots) makes an excellent strawperson target for John Searle's Chinese Room argument.


Excellent article on the annoying, if not dangerous, LLM phenomenon. One thing you wrote bothers me, though.

"General Intelligence Requires Abstraction and Creativity"

The way I see it, AGI requires first and foremost the ability to generalize. Perceptual generalization is essential. Everything else, including abstraction and creativity, must come after generalized perception is cracked.

DL cannot generalize by design since what it does, function optimization, is the opposite of generalization. Otherwise, DNNs would have no problem handling corner cases and adversarial patterns.
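As a toy illustration of that last point, a single fast-gradient-sign (FGSM-style) step is often enough to move an optimized network's prediction. The sketch below assumes PyTorch, and the tiny untrained model and random input are placeholders, so it shows the mechanism rather than a benchmark:

```python
# Minimal FGSM-style perturbation on a toy, untrained network.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

x = torch.randn(1, 32, requires_grad=True)
logits = model(x)
label = logits.argmax(dim=1)               # whatever the model currently predicts

loss = nn.functional.cross_entropy(logits, label)
loss.backward()

eps = 0.25
x_adv = (x + eps * x.grad.sign()).detach() # one gradient-sign step away from that prediction

print("original prediction: ", label.item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())  # often flips
```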


Great, thank you! I'm glad to hear your thoughts on ChatGPT.


I like the analogy of 'a smart high school kid', because that is the impression it gave me when I tested it initially: trying to "act smart". But I think that is the whole point of it being an LLM, that is, trying to figure out patterns in the language and take action according to the prompt.

In that sense it really has great potential to ace the "Imitation Game" using the database on which it was trained.

But as far as the element of 'understanding' goes, this one seems to be almost clueless. Human minds right from birth try to decipher patterns in the world. That is how we create an image/model of the world around us. That could be an implication of our subjective phenomenal consciousness.

But one thing is certain: without a model of the world, perhaps even a symbolic understanding of the world, the whole quest for any kind of Strong AI or AGI is just a fantasy, or rather baloney.

Until then LLMs could give sleepless nights to Wikipedia!


Let me home in on one sentence: "The bottom line is, such a "non-bullshitty ChatGPT-ish thingie" would still be repermuting and serving up chunks of human knowledge, rather than forming new chunks of knowledge based on pattern-crystallization seeded by experience."

What is under contention is whether there's any qualitative difference between these things:

A. chunks of knowledge based on pattern-crystallization seeded by experience

B. chunks of knowledge based on pattern-crystallization seeded by experience, encoded into text by humans

You're assuming that there is a vast difference between them--that encoding knowledge into text strips something out of it (which I'm tempted to call its transcendental soul). I think you're assuming what you're trying to prove.

My only experience with ChatGPT came this morning. I posted a joke on a My Little Pony website: "1000 years of exile isn't such a big deal when you remember ponies have just one toe on each foot." The joke, sparing you the pony mythology needed to appreciate it fully, is the assumption that having one toe per (front) foot implies that ponies count in base 3, so "1000 years" is just 27 years in base 10.
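The base-3 arithmetic is easy to check:

```python
# "1000" read as a base-3 numeral is 3**3 = 27 in base 10.
print(int("1000", 3))  # 27
```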

Few of my readers got the joke. One entered it into ChatGPT-3, and got this out:

"The joke is a play on the fact that in the show "My Little Pony: Friendship is Magic," the characters known as Equestrians have hooves instead of hands and feet like humans. Each hoof has just one toe which means they are not able to grasp or manipulate objects with their hooves. So the joke is that since Equestrians cannot do much with their hooves, a thousand years of exile is not really a big deal for them, as they would not be able to do much during that time regardless of where they were. It's a pun on the idea of how much a thousand years would feel for a human who can use their hands and feet for many things, versus for a pony who has only hooves."

This is creative. If a human responded like that, you'd call it creative. It was more creative than the responses of most of the actual humans who replied to my joke. It involved quite a few leaps of imagination, proceeding (I imagine) something like this:

1. It infers that the joke contrasts ponies with humans.

2. It looks for some difference between humans and ponies related to number of digits per hand-homolog.

3. It recalls that humans can grasp things because they have a thumb opposed to fingers.

4. It deduces that ponies can't grasp things because they have only one digit per foot.

5. It deduces from the inability of ponies to grasp things that a pony does fewer, or a smaller variety of, tasks in any given time period than a human ordinarily would.

6. It understands that the joke implies that being exiled produces negative valence.

7. It asks how this difference in number of things humans vs. ponies do could result in a less-negative valence during exile for a pony than for a human.

8. It infers that the negative valence is produced by boredom. It might have used the fact that the exile referred to was on a barren moonscape. Or it might have hypothesized boredom as a cause before inquiring about the effects of having only one digit. The order of inference isn't clear.

9. It models boredom as being produced by having a smaller variety of tasks, which manifests in a lower-valence subjective experience. This in itself is impressive. More impressive is that only at this point does the earlier inference that ponies can do fewer things with their hooves become relevant to the search! How did it look ahead 4 steps in this completely open-ended inference?

I don't think it did. I don't think it could be using any linear order of inferences, meaning that it isn't reasoning like we do in making geometry proofs; rather, it seems to operate on a representation of all possible chains of logic in some high-order space, on which it performs some kind of iterative relaxation. I believe that human capability IS what we call creativity. Human creativity is just stringing together long chains of inference and analogy to come up with novel things, concepts, explanations, etc. The only deep mystery about it is how to deal with the combinatorial explosion, and in this example ChatGPT performed ABOVE human level in dealing with combinatorial explosion. I think very few people could have come up with an explanation having as many steps.
