How do you define intelligence? We each have our own notions of what it means to be intelligent—perhaps being skilled in math or adept in social situations. But providing a general definition is surprisingly difficult. Herein lies the challenge for artificial intelligence, or AI: how do we structure scientific study around a term that is typically reserved for humans?
While progress has been made in mimicking aspects of human intelligence, the human-focused origins of the field of AI may be limiting the scope of our scientific pursuit. As we move forward, perhaps looking beyond ourselves for inspiration will provide a more comprehensive definition of intelligence, or a new concept altogether.
The concept of ‘intelligence’ comes from human psychology, where it is measured using IQ tests. As one cannot directly measure an abstract concept like intelligence, these tests instead evaluate performance on a range of tasks, from reasoning to memory and verbal comprehension. Not surprisingly, when AI adopted the term ‘intelligence’ over half a century ago, a focus on performing similar cognitive tasks came with it.
[Image credit: Wikimedia Commons, CC BY-SA 3.0]
As a famous example, the Turing test pits a machine against a human, with the machine using written conversation to attempt to convince the human that it is also human. Similar motivations fuel modern-day efforts to use computers to master board games like Go and classic Atari video games, human pursuits that require coordinated actions over many steps. Human influences are also present in tasks like processing language or identifying objects in images. In the absence of a clear definition of intelligence, these approaches implicitly assume that human tasks are a proxy for human intelligence, with the hope that a machine capable of performing these tasks and more will attain ‘artificial general intelligence,’ becoming flexible enough to perform any task.
[Image credit: Krizhevsky et al., 2012]
While such generally intelligent machines do not yet exist, we have made advances in many cognitive tasks, impacting society through applications like self-driving cars, facial recognition, and language translation. This progress has been largely the result of deep neural networks, mathematical models which are loosely inspired by the biological neurons in brains. Through a process of learning to map input data (e.g. a photo) to corresponding outputs (e.g. what objects that photo contains), machines are now becoming capable of tasks like recognizing, reasoning about, and manipulating objects. Mastering many of these basic human cognitive capabilities now seems on the horizon. However, it remains unclear whether such machines would unlock the mysteries of our own range of capabilities or those of other organisms, let alone general intelligence. Rather than exploring broader, fundamental principles underlying intelligent systems, the field of AI has been, in effect, teaching to the Turing Test—focusing on mimicking our own human capabilities at ever-increasing levels of sophistication.
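The input-to-output learning described above can be illustrated with a minimal sketch: a single artificial "neuron" trained by gradient descent on a toy labeling task. Real deep networks stack many layers of such units and train on data like photos; the task, names, and numbers here are purely illustrative.

```python
import math
import random

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed by a
    sigmoid, giving an output between 0 and 1. Deep networks stack
    many layers of units like this."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy supervised task: map a 2-D input to a 0/1 label (a stand-in for
# mapping a photo to the objects it contains). The label is simply
# whether the two coordinates sum to more than 1.
random.seed(0)
data = []
for _ in range(200):
    x = [random.random(), random.random()]
    y = 1.0 if x[0] + x[1] > 1.0 else 0.0
    data.append((x, y))

weights, bias = [0.0, 0.0], 0.0
lr = 0.5  # learning rate

# Training loop: for each example, nudge the weights so the neuron's
# output moves toward the correct label (stochastic gradient descent).
for _ in range(100):
    for x, y in data:
        err = neuron(x, weights, bias) - y
        for i in range(len(weights)):
            weights[i] -= lr * err * x[i]
        bias -= lr * err

correct = sum((neuron(x, weights, bias) > 0.5) == (y == 1.0) for x, y in data)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

After training, the neuron classifies most of the toy examples correctly; the point is only that "learning" here is an incremental adjustment of numeric parameters, not anything resembling general intelligence.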
As AI keeps advancing, this adherence to a human-centric view of intelligence could have major consequences. I’m reminded of the Copernican revolution: for centuries, astronomers placed the Earth at the center of the universe, with our desire for significance guiding our conception of reality. However, when observations did not align with this theory, we came to understand that the Earth orbits the Sun. In a similar way, I feel that we have placed humans at the center of our definition of intelligence. Clearly, we have unique capabilities, just as our planet is unique among its neighbors. Yet, any comprehensive definition of intelligence should account not only for our own capabilities, but those of other entities as well. Looking to other biological and human-made entities will also help us see ourselves within a broader scope of intelligence, like studying the Earth in the context of other planets.
[Image credit: Wikimedia Commons, CC BY-SA 2.5]
When we look at biology, we see systems that sense and respond to their surroundings. One such system is the cell, which has sensors for chemicals, as well as actions it can take in response, like going into “hibernation”. Entire multi-cellular organisms can also be considered as systems. Animals, from the smallest insect to the largest whale, interpret and interact with their environments in a multitude of ways.
Plants are attuned to sensory inputs like sunlight, moisture, and temperature, prompting responses like orienting leaves, extending roots, and releasing seeds. And groups of organisms, from forests of trees to colonies of ants, collectively sense and respond to their environments in ways that we are still just beginning to understand. Ultimately, all of these processes share a common form: they convert energy into actions, affecting themselves and their environments to promote the survival of genes.
Our technological inventions can also be viewed from this systems perspective. The tools of our early ancestors, like spears and boats, expanded the ways in which they could respond to their environments. More recent inventions, like radios and cameras, have similarly expanded the ways in which we sense our environments. Modern advances in computing, and now AI, have taken this trend further, creating systems that can sense and respond to their environments largely independently of human input. This is a world in which power grids can automatically sense and respond to supply and demand, and vehicles can automatically sense and respond to obstacles.
The progress we have made in AI certainly has the power to positively impact society, and interacting with humans clearly requires human capabilities, such as speech and vision. However, there is a growing sentiment among researchers that focusing too narrowly on human tasks is ultimately limiting. There is a place for studying human capabilities, but this should not define the field. Recasting AI from a broader systems perspective requires integrating knowledge across many existing areas, from biology to control theory to computer science and physics. Doing so will create a scientific discipline that studies systems at multiple levels, from the molecular level up to the collective behavior of networked systems spanning the globe and beyond. Uniting these perspectives could open new frontiers. It might allow us to develop adaptive materials capable of complex behaviors such as healing, or to coordinate massive distributed systems such as networks of receiving and transmitting satellites. While the behaviors of these systems hardly resemble the cognitive tasks that we currently consider intelligent, they are arguably just as impressive.
As we explore, my hope is that we will arrive at a more complete view of intelligence. With the proper context, we will more fully appreciate the impressive capacity of humans to flexibly alter our sensory-response patterns, and in the process, we will more thoroughly understand ourselves.