Eric Larson's recent book The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do is both a compelling look into the history of artificial intelligence research and an exploration of the shortcomings of contemporary approaches to creating true AI.
As a stark counterpoint to the heady optimism of previous popular titles such as Ray Kurzweil's How to Create a Mind (Viking Penguin, 2013), Larson's book might feel to some readers like a powerful shot of naloxone, pulling them out of their techno-utopian overdoses. Indeed, the goal is to bring researchers and the public alike back to their senses by looking carefully at the field of AI and examining a myth that he identifies as all-pervasive in current AI research.
The book examines the beginnings of AI, seeking to reframe Alan Turing's and Jack Good's views of machine intelligence. Turing's ideas and legacy remain vital to the field of AI to this day, and Larson's aim at the start of the book is to call these ideas into question, accusing early researchers and thinkers of committing an "intelligence error": errors in their understanding, not of machines, but of human intelligence.
As Larson puts it: "Early errors in understanding intelligence have led, by degrees and inexorably, to a theoretical impasse at the heart of AI" (p. 31). The myriad subtleties of human intelligence were necessarily reduced at the very outset of the study of artificial intelligence, and research proceeded apace with a simplified world view and a narrow conception of human intelligence. The result, after more than sixty years, is a generation of narrow and shallow AI paradigms that play god-like chess yet cannot understand the meaning behind even the most basic examples of natural-language prose.
The proposed missing ingredient comes in the form of a type of inference first identified in the nineteenth century by the legendary philosopher Charles Sanders Peirce: abduction, sometimes called retroduction. This special kind of inference is heralded as the key to unlocking machine intelligence and moving machines from narrow problem-solving tasks towards true AI, or Artificial General Intelligence. This, then, is the central theme of the book: to understand what abductive inference is and why it is so important to human understanding.
What follows is a highly illustrative exploration of abductive inference. The book then turns to today's AI technologies and AI-inspired research, and the inability of these modern AI systems to understand in the true sense of the word. Larson finishes with a scathing critique of the current culture of Big Data-driven scientific research, ultimately calling for a kind of renaissance in scientific enquiry, with more value placed on the individual and the inimitable human intellect itself.
After reading the book, one can't help but marvel at one's own mental faculties: the ability to understand nuances of meaning such as irreverence or sarcasm within a given text, to glean with ease the true meaning of a passage of prose, or to use common sense. This is the real beauty of Larson's project: to showcase the abilities of our minds when it comes to natural language, and to explain the current narrowness of machine understanding in relation to it. Chapter 14 (Inference and Language II) is perhaps the best in the book, forming a short essay all of its own. In it we are treated to an exposition of the different forms of inference (deduction, induction and abduction) as they relate to a seemingly mundane story about a man buying groceries in a store (pp. 212–213). Who would have thought that a story about a man shopping for food could be so riveting?
Notice how natural language is the main theme here. For Larson, natural language understanding is tantamount to true AI: AGI, human-level AI, essentially the kind of machine intelligence that could pass a Turing test. The main argument of the book can then be boiled down to this (rendered semi-formally after the list):
1. Human-level intelligence requires abductive inference.
2. No one knows how to program abductive inference.
3. Therefore, computers don't have human-level intelligence.
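Since the discussion below refers back to these premises by number, it may help to see the argument's shape explicitly. The following is a semi-formal sketch of my own, not Larson's notation, and it reads premise 2 charitably as "no current computer performs abduction":

```latex
\begin{align*}
\text{P1:}\;& \forall x\,\bigl(\mathrm{HLI}(x) \rightarrow \mathrm{Abduces}(x)\bigr)
  && \text{human-level intelligence requires abduction}\\
\text{P2:}\;& \neg\,\mathrm{Abduces}(c)\ \text{ for any current computer } c
  && \text{abduction has not been programmed}\\
\text{C:}\;& \neg\,\mathrm{HLI}(c)
  && \text{by modus tollens}
\end{align*}
```

So stated, the argument is valid; everything turns on whether the premises hold.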
The myth referred to in the title of the book is the idea that human-level intelligence is achievable, even inevitable, through deductive/inductive inference alone. This directly contradicts premise 1 of Larson's argument.
What basis do we have for asserting premise 1? Most modern cognitive scientists, philosophers, AI researchers and others agree that abductive reasoning is a characteristic of human thought. However, what is absolutely not clear from the literature is how we use abduction, or even what we mean when we refer to abductive inference: a seemingly slippery, context-dependent miracle of cognitive powers, working sometimes this way, sometimes that. As Igor Douven observes, "the exact form as well as the normative status of abduction are still matters of controversy" ("Abduction," Stanford Encyclopedia of Philosophy, Summer 2021 Edition). Indeed, Larson seems to use its vagaries of meaning as a kind of catch-all for human intelligence: "Abduction is inference that sits at the center of all intelligence" (p. 174). How justified such statements are is open to debate.
Turning to premise 2, we can readily concede that programming abductive inference into a system is perplexing, to say the least. Sometimes defined as the logical fallacy of affirming the consequent, abductive inference allows for utterly open-ended choices when selecting a best explanation for any given consequent. A programming nightmare. Here I want to point out that, while Larson makes a compelling case for the difficulties involved in incorporating such inference into a machine, he never provides a solid argument that it isn't possible at all. Thus, the subtitle of the book, Why Computers Can't Think the Way We Do, might be better phrased Why Computers Don't Think the Way We Do.
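To make the open-endedness concrete, here is a toy sketch of my own (not an example from the book; the rule set is invented for illustration). Deduction runs a rule forward from cause to effect and returns one determinate answer; abduction runs the rules backward from an observed effect to a set of rival explanations, with nothing in the machinery to say which is best.

```python
from __future__ import annotations

# Toy causal rules, cause -> effect. A real reasoner would need an
# open-ended, context-sensitive store of world knowledge -- which is
# precisely the unsolved part.
RULES = {
    "it rained overnight": "the grass is wet",
    "the sprinkler ran": "the grass is wet",
    "a water main burst": "the grass is wet",
}

def deduce(cause: str) -> str | None:
    """Deduction (modus ponens): from a cause and a rule, conclude the effect."""
    return RULES.get(cause)

def abduce(effect: str) -> list[str]:
    """Abduction: from an observed effect, propose causes that would explain it.

    As pure logic this is affirming the consequent; as explanation it is
    open-ended. Nothing here ranks the candidates, and any ranking would
    depend on unbounded background context.
    """
    return [cause for cause, eff in RULES.items() if eff == effect]

print(deduce("it rained overnight"))  # -> 'the grass is wet': one answer
print(abduce("the grass is wet"))     # -> three rival explanations; which is best?
```

The forward direction is mechanical; the backward direction bottoms out in a ranking problem that adding more rules never settles, which is just Larson's point about why abduction resists programming.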
This leads me to a strange recurrent theme that pervades the book, namely Larson's vacillation between criticizing the kitsch-like ideas of human-level machine intelligence (think HAL in 2001: A Space Odyssey) and fully acknowledging human-level machine intelligence as an "emotional lighthouse by which we navigate the AI topic" (p. 76). Throughout the book he continually refers to human-level AI as the goal or the dream, and while he is highly critical of the fictional figures that embody this dream, such as Ava in Ex Machina, he insists that this Turing Test-passing kind of machine is the only true AI.
To be fair to Larson, his explicit criticism is not that human-level machine intelligence is a myth, but that its inevitability is a myth. Sometimes this subtle distinction gets lost, and the reader is left in confusion about where Larson really stands, both intellectually and emotionally, on the prospects of a future human-level AI.
I recently attended a technology conference where Larson talked about the book. There I got the impression that he was more convinced of the futility of the whole enterprise of trying to create a mind, and the book certainly offers plenty of reasons to support this view. Despite all of this, however, Larson still seems to hold onto the dream, the guiding light: nothing less than the prospect of conversing intelligently with a machine, by the fireside, so to speak.
Larson's concern is that the path towards this kind of human-level AI is currently on a dead-end track; I suspect this alone was the major motivation behind his writing the book. He dedicates much of it to criticizing the narrowness of current AI, the futility of inductive/deductive inference approaches, and the failures of AI Big Data projects, all with the intention of putting AI research onto another track, one that might finally realize the dream.
Whether you agree with Larson's criticisms of current AI or not, the book's winning qualities lie not so much in the many criticisms as in the wonderful exposition of what it really means to understand natural language, not as a machine, but as a human being.