Recent decades have seen machines make tremendous inroads against the superiority of human intelligence. The most notable conquests have come in games, from Deep Blue’s victory over Garry Kasparov in 1997 to the more recent victories by Google DeepMind in the ancient game of Go and by DeepMind and OpenAI in the highly complex real-time strategy games of StarCraft II and Dota 2. These advances have been taken as harbingers of the eventual dominance that AI will have over humans, leading the way to the development of Artificial General Intelligence (AGI) and the Singularity—an AGI system so advanced that it surpasses human capabilities and can continue to improve itself, compounding previous gains in intelligence without limit.

The oracles of Silicon Valley often speak of this in inevitable and reverent terms. Some have even gone so far as to found a church based on the worship of this eventual AI-god (see Mark Harris, “Inside the First Church of Artificial Intelligence,” Wired, November 15, 2017). George Gilder has unfortunate news for the acolytes of the Singularity: the dream of AGI is not only infeasible, but impossible.

Impossible is not a term that is frequently thrown around in tech circles. Yet Gilder makes six arguments across the fifty-six pages and eight chapters of his concise book that put the prospects of AGI in serious doubt. A chapter is devoted to each argument, and Gilder summarizes them at the end as The Modeling Assumption, The Big Data Assumption, The Binary Reality Assumption, The Ergodicity Assumption, The Locality Assumption, and The Digital Time Assumption (p. 50). Each of these, Gilder argues, is taken for granted by proponents of the AGI hypothesis when they claim that a superhuman mind will emerge from our technological progress, yet without any one of them, no silicon-based mind is possible.

Gilder begins his critique by taking aim at the idea that computers are reasonable proxies for the human mind. Computers are built on the von Neumann architecture, which separates memory from processing. This imposes a limitation commonly referred to as the von Neumann bottleneck, which constrains the input/output speed between memory and the processing unit. It is a limitation governed by physics, and it is only exacerbated as machines and their connections scale up. Unfortunately, the connections scale faster, leading to communication constraints as computers and networks of computers are linked together.
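To make the bottleneck concrete, here is a rough back-of-envelope sketch of my own (the hardware figures are illustrative assumptions, not numbers from the book): for a data-heavy workload, the time spent shuttling operands between memory and processor dwarfs the time spent computing on them.

```python
# A toy model of the von Neumann bottleneck. The throughput figures below are
# assumed, illustrative values, not measurements or figures from Gilder.

flops_per_second = 1e12        # hypothetical processor: 1 trillion operations/s
bytes_per_second = 1e11        # hypothetical memory bus: 100 GB/s
bytes_per_operand = 8          # one double-precision value

n = 10**9                      # a billion values to process
ops = n                        # one arithmetic operation per value

compute_time = ops / flops_per_second                     # time spent computing
transfer_time = n * bytes_per_operand / bytes_per_second  # time moving data

print(f"compute:  {compute_time:.3f} s")    # 0.001 s
print(f"transfer: {transfer_time:.3f} s")   # 0.080 s
# For this low-arithmetic-intensity workload, memory traffic, not arithmetic,
# dominates: the processor mostly waits on the bus.
```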

Underpinning all of this are further fundamental differences in power consumption between the human brain and the most advanced AI systems. The former requires roughly 12 watts to contemplate the mundane and the profound alike, whereas the latter require millions of times that to learn and process data.

This scaling issue becomes more troubling when you consider the amount of data and processing in the world in comparison to the human brain. As Gilder points out (p. 34), in 2017 we entered the Zettabyte Era—the point at which all of the data on the internet exceeded one zettabyte (10²¹ bytes). The human brain, in all of its complexity, would also require one zettabyte of data to map. This renders the task of developing a complete connectome of a human brain a monumental challenge, in terms of both data and energy requirements.
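To give a sense of that scale, a quick calculation of my own (using assumed, off-the-shelf storage and bandwidth figures rather than anything from the book) shows how far one zettabyte sits beyond everyday computing:

```python
# Back-of-envelope scale of one zettabyte (10**21 bytes). The drive capacity
# and link speed are illustrative assumptions, not figures from Gilder.

zettabyte = 10**21                      # bytes Gilder cites to map a human brain

drive_capacity = 16 * 10**12            # a hypothetical 16 TB hard drive
drives_needed = zettabyte / drive_capacity
print(f"{drives_needed:,.0f} drives")   # ~62,500,000 drives

link_speed = 10**9 / 8                  # a hypothetical 1 Gbit/s link, in bytes/s
seconds = zettabyte / link_speed
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years over a single such link")  # ~250,000 years
```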

Moreover, even if it were possible to build a human brain in silicon, what good would it do us? Connectome research into C. elegans—a roundworm famous for having the only fully mapped neural circuitry to date, with 302 neurons and over 7,000 connections—highlights many of the limitations. Simply knowing which neurons connect to which others tells us nothing about their firing patterns, their neuromodulators (chemicals that change neuron behavior), the strength of their connections, or precisely what they do and how they are controlled.

Gilder proceeds to attack AGI’s Binary Reality Assumption on philosophical grounds by invoking C.S. Peirce’s triadic logic. In Peirce’s system of semiotics, there are three components to thought: the sign, the object, and the interpretant. Gilder argues that AGI relies on the first two components while ignoring the crucial third.

In Peirce’s system, signs are representations of objects. However, the interpretant is required to provide the link between the two. Gilder sums this up as “the map is not the territory” (p. 37): the symbols on a map are signs that represent territory, but the symbols are clearly not the territory itself.

This is significant because computers and AI systems are sign-processing machines. Signs are taken as inputs, processed according to whatever procedure the program defines, and outputs are produced. These signs are meaningless without an interpretant to connect them to the objects themselves.

Consider a deep learning model—the type of machine-learning model that has ignited the current AI renaissance—tasked with classifying images. It takes pixels as input and, at least initially, assigns labels essentially at random. As training continues, the model improves by adjusting its internal weights according to an algorithm that minimizes its error. In this case, the map is the territory. The signs are processed algorithmically, and there is no meaning, no interpretation, internal to the system itself.
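A minimal sketch of the kind of training loop described above (my own toy illustration on random stand-in data, not an example from the book) makes the point visible: the system does nothing but shuffle numbers until an error score shrinks.

```python
# A toy softmax image classifier trained by gradient descent. The "images" and
# "labels" are random stand-ins; the point is that the whole process is
# arithmetic on arrays, with no interpretation anywhere inside the system.

import numpy as np

rng = np.random.default_rng(0)

n_samples, n_pixels, n_classes = 200, 64, 3      # assumed toy dimensions
images = rng.random((n_samples, n_pixels))       # fake pixel inputs
labels = rng.integers(0, n_classes, n_samples)   # fake class labels

weights = np.zeros((n_pixels, n_classes))        # the model's only "knowledge"

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for step in range(500):
    probs = softmax(images @ weights)                # signs in, signs out
    one_hot = np.eye(n_classes)[labels]
    grad = images.T @ (probs - one_hot) / n_samples  # gradient of cross-entropy loss
    weights -= 0.5 * grad                            # nudge weights to shrink error

# At no point does the system "know" what any image depicts; it only transforms
# one array of numbers into another so that an error score falls.
```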

The same goes for deep reinforcement learning systems like AlphaGo. It receives input from the board—a collection of black and white stones in a given spatial arrangement. From a physical perspective, there is no meaning in one arrangement of stones versus another (p. 36); only an interpretant can supply the meaning that determines wins and losses. That is not something AlphaGo, as a purely physical system, has any relation to.
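The same point can be put in code (a toy sketch of my own, bearing no relation to AlphaGo’s actual design): the “win” is just a number handed to the learner by a scoring rule imposed from outside the system.

```python
# A toy illustration: the reward that drives a game-playing learner is assigned
# by an external rule. Inside the system, board and reward are only numbers.

import numpy as np

board = np.zeros((19, 19), dtype=int)   # 0 = empty, 1 = black, -1 = white

def external_referee(board):
    """Stands in for the interpretant: a rule imposed from outside that declares
    which arrangements of stones count as a win. (Toy rule, not real Go scoring.)"""
    return 1.0 if board.sum() > 0 else -1.0

board[3, 3] = 1                  # place a black stone
reward = external_referee(board)
print(reward)                    # 1.0: a bare number, carrying no sense of "victory" in itself
```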

While quantum computing holds great promise for fields such as optimization, machine learning, and physics—famed physicist Richard Feynman’s original use case for quantum computing was as a quantum simulator—it fares no better at achieving the grand visions of AGI. Peirce’s problem of the interpretant becomes even clearer when we step into the quantum world. As Schrödinger’s famous feline thought experiment illustrates, quantum mechanics confronts a paradox: the system needs something external to itself, an observer, which a purely quantum system would lack.

Gilder presents a series of tight and thought-provoking arguments against the possibility of producing a mind in silico. Moreover, the prose flows well throughout the short book, taking the reader seamlessly from quantum computing to neuroscience, from philosophical discussions of meaning and representation to vivid images of the vast web of wires and cables that links nearly every corner of the globe. On one hand, the quick pace is a great strength: readers can breeze through the chapters and acquaint themselves with the arguments surrounding one of the most important technical developments of our time. On the other hand, the book moves too quickly for the reader to properly digest the arguments presented. Deep topics and concepts are raised in one sentence and dropped in the next (e.g., cellular automata, the quantum Zeno effect, Heisenberg’s Uncertainty Principle) with no further elaboration to draw out their implications or to supply supporting evidence.

Take Gilder’s argument from evolutionary simulations and the work of Gregory Chaitin, who built on Gödel’s famous incompleteness theorems. Gilder argues that creativity is a required input into mechanistic computer systems (p. 46). We are assured that Chaitin has a “mathematics of creativity” in which creativity is depicted “as high-entropy, unexpected bits” (p. 49). Computers are not creative in this sense; therefore, they cannot be creative in the way humans can be. That’s it—no further information or explanation of why this may be the case is offered to the reader.
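For readers unfamiliar with the information-theoretic framing, a crude illustration of my own (not Chaitin’s or Gilder’s) is that “unexpected” bit strings resist compression while rule-generated ones collapse to almost nothing; compressibility here stands in loosely for the algorithmic incompressibility Chaitin studies.

```python
# A rough proxy for "high-entropy, unexpected bits": how well a bit string
# compresses. This is an illustrative stand-in, not Chaitin's formal machinery.

import random
import zlib

def compressed_ratio(bits: str) -> float:
    data = bits.encode()
    return len(zlib.compress(data, 9)) / len(data)

predictable = "01" * 4096                                       # rigid, rule-generated pattern
random.seed(0)
unexpected = "".join(random.choice("01") for _ in range(8192))  # coin-flip bits

print(compressed_ratio(predictable))   # tiny: the pattern compresses to almost nothing
print(compressed_ratio(unexpected))    # far larger: the random string resists compression
```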

This is the book’s greatest flaw. The nail is rarely driven home, and the arguments, while generally sound, lose their force as topics race by and feel disjointed. There are many powerful threads here, but it is left to the reader to pull on them and work out how they are interconnected.

It is often taken for granted that AGI is possible because we’re simply “meat machines,” with the “materialist superstition” reigning supreme in Silicon Valley and academia (p. 20). As Eliezer Yudkowsky puts it (“Biology-Inspired AGI Timelines: The Trick That Never Works,” Less Wrong, December 1, 2021), “we shall at some point see Artificial General Intelligence. . .the problem must be solvable in principle because human brains solve it, eventually it will be solved; this is not a logical necessity, but it sure seems like the way to bet.” And therein lies the crux of the debate: are human minds just a product of material forces? If so, then AGI is possible, even if it is not feasible given the current technical limitations and challenges Gilder describes. If not, then AGI is impossible even in principle.

These philosophical arguments are the strongest ones against AGI, because the techno-optimists will always push back on objections about our current capabilities, asking us to just wait until the hurdles of data, connections, and architectures are overcome. Then you’ll see it!

While Gilder’s facts check out, the book can feel like a Gish gallop of heady concepts to those unfamiliar with such a wide range of topics. In that respect, it is more a primer for the typical reader looking to quickly acquaint himself or herself with the arguments against the possibility of AGI; arguments which, at their core, turn on the tenuous materialist assumption underlying AGI.

Christian Hubbs