Thu, 05 Feb 2026
John Haugeland on the failure of micro-worlds
One of the better books I read in college was Artificial Intelligence: The Very Idea (1985) by philosopher John Haugeland. One of the sections I found most striking and memorable was about Terry Winograd's SHRDLU. Around 1970, SHRDLU could carry on a discussion in English in which it manipulated imaginary colored blocks in a “blocks world” displayed on a computer screen. The operator could direct it to “pick up the pyramid and put it on the big red cube” or ask it questions like “what color is the biggest cylinder that isn't on the table?”. Haugeland was extremely unimpressed (p. 190, and more generally 185–195):
He imagines this exchange between the operator and SHRDLU:
What does Haugeland say he would like to have seen?
On this standard, at least, an LLM is a smashing success. It does, in fact, have a model of trading, acts, property, and water pistols. We might criticize the model's accuracy or usefulness, but it certainly exists. The large language model is a model of the semantics of trading, acts, property, water pistols, and so on. Curious to see how it would go, I asked Claude to pretend it had access to a SHRDLU-like blocks world:
I asked it a few SHRDLU-like questions about the blocks, then asked it to put a block on a pyramid. It clearly understood the point of the exercise:
SHRDLU could handle this too, although I think its mechanism was different: it would interact with the separate blocks world subsystem and actually try to put the block on the pyramid; the simulated physics would simulate the block falling off the pyramid, and SHRDLU would discover that its stacking attempt had been unsuccessful. With Claude, something very different is happening; there is no physics simulation separate from Claude. I think the answer here demonstrates that Claude's own model includes something about pyramids and something about physics.
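For concreteness, here is a minimal sketch, in Python, of the command–simulate–inspect loop I am describing. It is purely illustrative: SHRDLU was actually written in Lisp and Micro-Planner, and every name here (BlocksWorld, put_on, settle) is my own invention. The point is only the architecture: the language system issues a command, a separate simulated physics plays out, and the system discovers the result by inspecting the world afterward.

```python
class BlocksWorld:
    """Toy stand-in for SHRDLU's separate blocks-world subsystem."""

    def __init__(self):
        # Maps each block to whatever it is currently resting on.
        self.resting_on = {"green cube": "table", "red pyramid": "table"}
        self.shape = {"green cube": "cube", "red pyramid": "pyramid"}

    def put_on(self, mover, target):
        # The language system's command: attempt the placement,
        # then let the simulated physics run.
        self.resting_on[mover] = target
        self.settle()

    def settle(self):
        # Toy physics: anything placed on a pyramid slides off to the table.
        for block, support in self.resting_on.items():
            if support != "table" and self.shape[support] == "pyramid":
                self.resting_on[block] = "table"

world = BlocksWorld()
world.put_on("green cube", "red pyramid")  # the stacking attempt

# The dialogue system learns the outcome only by inspecting the world.
if world.resting_on["green cube"] == "red pyramid":
    print("OK, the green cube is on the red pyramid.")
else:
    print("I tried, but the green cube fell off the red pyramid.")
```

In this arrangement the knowledge that blocks don't balance on pyramids lives entirely in the simulation, not in the dialogue system; Claude, by contrast, produced the same conclusion with no simulation at all.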
Then I made the crucial offer:
Would Haugeland have been satisfied in 1985 if SHRDLU had said this? I think so, certainly. Haugeland wanted SHRDLU to respond to the offer directly, as the beginning of a negotiation. Claude's response is one level better than that: it not only recognizes that I am negotiating, it recognizes that actually negotiating for the squirt gun would not make sense, and it offers a sensible workaround. I pushed it a little farther:
Mostly I just tried this for fun. The Haugeland discussion of SHRDLU has been knocking around my head for forty years, but now it has knocked against something new, and I wanted to see what would actually happen. But I do have a larger point. Haugeland clearly recognized in 1985 that a model of the world was a requirement for intelligence:
and later:
Are there any people who are still saying “it's not artificial intelligence, it's just a Large Language Model”? I suppose there probably are. But as a “Large Language Model”, Claude necessarily includes a model of the world in general, something that has long been recognized as an essential prerequisite for artificial intelligence. Five years ago a general world model was science fiction. Now we have something that can plausibly be considered an example. And second: maybe this isn't “artificial intelligence” (whatever that means) and maybe it is. But it does the things I wanted artificial intelligence to do, and I think this example shows pretty clearly that it does at least one of the things that John Haugeland wanted it to do in 1985.
My complete conversation with Claude about this.
Addendum
I don't want to give the impression that Haugeland was scornful of Winograd's work. He considered it to have been a valuable experiment:
(p. 195)