BY MATTHEW HERBERT
So there I was, just reading some improving books by Malcolm X and Andrea Dworkin a few weeks ago, when I was taken by a fit to smarten up on artificial intelligence.
Okay, it wasn’t really a fit. I noticed an essay writing contest about disruptive technology and the future of work. Bada bing, bada bang, as they say, the next thing you know I’m reading mad AI, with doses of nanotech and biotech thrown in.
I’d been intrigued by AI ever since I read Daniel Dennett’s 1984 paper in which he lays out why it will be so monstrously hard to create an artificial general intelligence (AGI), a thinking machine with the full human range of cognitive abilities. Dennett called this challenge the “frame problem.” You can train a machine to become massively intelligent at a predefined task, but training it to navigate and understand the fluid, borderless scenarios that make up life is a different matter entirely. Machines can’t frame a situation ex nihilo, something we do effortlessly. Being an ordinary human is orders of magnitude harder than being a really smart machine.
Here’s the opening paragraph of Dennett’s article, “Cognitive wheels: the frame problem of AI,” which kind of gives you a flavor:
Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room, and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT (Wagon, Room, t) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 knew that the bomb was on the wagon in the room, but didn’t realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.
Ordinary life is beset with looming implications much more complex than the ones R1 has to cope with, but you get the point. Knowing what to attend to and what to ignore in a given scenario comes automatically to people, or at least it seems to.
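R1’s failure can be sketched in a few lines of code. This is a toy illustration of the underlying point, not Dennett’s actual formalism: the state representation and everything except the name PULLOUT are my own assumptions. A classical planner describes each action by the effects its designers thought to enumerate; anything left off the list is silently assumed unchanged, which is exactly how the bomb gets left out of the calculation.

```python
# Toy illustration of R1's predicament: an action model that lists only
# the effects its designers wrote down. Facts not mentioned in the model
# (the bomb riding on the wagon) are silently assumed to stay put.
# State representation is my own invention; only PULLOUT is from Dennett.

state = {
    "wagon in room",
    "battery on wagon",
    "bomb on wagon",
    "battery in room",
    "bomb in room",
}

def pullout(state):
    """PULLOUT(Wagon, Room, t): the effects R1's designers enumerated."""
    new_state = set(state)
    new_state.discard("wagon in room")
    new_state.discard("battery in room")  # the battery rides out on the wagon
    # Missing effect: the bomb is also on the wagon, so it comes out too.
    # A faithful model would have to spell out this side effect -- and every
    # other one -- explicitly. That bookkeeping burden is the frame problem.
    return new_state

after = pullout(state)
print("bomb in room" in after)  # True: by the model's lights, the bomb stayed behind
```

According to the model, the bomb is still safely in the room; in the story, of course, it went off next to the battery. Scaling this explicit bookkeeping to every implication of every action in ordinary life is the computational burden Dennett is pointing at.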
I pretty much buy Dennett’s arguments to this effect on their technical merits alone. But my experience for the better part of 25 years as an expat trying to survive using foreign languages also helped convince me that Dennett is right. Here’s what occurred to me just looking back at my last 12 years living in Germany.
My German was never great, but here’s a list of business that I could and did conduct auf Deutsch:
– Applying for a loan and buying a house
– Seeking and undergoing major surgery
– Putting three kids into local schools and kindergartens
– Routinely going to city hall to get documents and various administrative clarifications from bureaucrats
– Fending off a collection action from a business that failed to deliver contracted services
– Explaining to dentists the ways I did not wish to be hurt
Never did I accomplish these tasks with high style, but I got them done. But guess what I never did in Germany? I never made an acquaintance that was anything like a friend. This was not entirely due to my standoffishness. I like having friends and am willing to put a certain amount of effort into making them.
What doomed me was that I was essentially an AI trying to fit into life as led by normal human beings. It was the language. Once I had done something like ask some neighborhood moms if one of my kids could walk to school with their kids’ group, they could sense that my script for that scenario had run its course and they were now dealing with a robot. I was basically like R1 in Dennett’s paper: I had trained myself to get through a handful of narrowly defined tasks like the ones in my bullet list. What I couldn’t do was small talk. I had no feel for the changing frames of ordinary life. I probably gave those moms a mild laugh, which is nice to think.
But back to AI. Dennett’s argument about why it will be so hard to create an AGI is essentially that all the ceteris paribus clauses that simplify our lives (e.g., crossing a street does not directly implicate questions of poetry or particle physics) have to be built up in an AI using raw computational power. And that’s how I had to (try to) get through life speaking German: I was basically a sentence-forming machine trying to generate grammatically permissible strings of words appropriate to the task at hand. I worked hard at it. I could march, sprint, or lurch toward one goal at a time, but never could I wend my way through normal life. Water under the bridge, though. Who gets to live a normal life?
In the next few weeks I’ll try to write a few things about why AI seems a little more robust today than Dennett made it out to be in 1984. Machines may not achieve general intelligence any time soon, but we have compensated for their deficits over the last 30 years by setting up our lives in ways that give even narrow AIs agency and influence. Machines may not be capable of replacing us on the basis of brute computational power alone, but we are building systems (e-commerce, AI-enhanced medicine, and so on) that seem likely to lead to that replacement anyway.