According to some experts, everything is sentient these days, from computers to honey bees, but this is a radical redefinition of a term once applied to humans and perhaps a few of our close relatives. Sentience used to require awareness and understanding. Suddenly, merely adaptive behavior will do, and activists are calling for insect rights, embracing a ridiculously anti-human philosophy.
Earlier this year, a Google engineer, Blake Lemoine, declared the company’s latest chatbot software, LaMDA, was “sentient,” meaning the machine is both self-aware and experiences emotions in ways similar to humans. Mr. Lemoine was subsequently fired from his position at Google, only to claim that LaMDA was also a racist. Some in the mainstream media jumped on the story, echoing Mr. Lemoine’s conclusions with very little skepticism. The Washington Post, for example, claimed LaMDA had successfully passed Alan Turing’s famous test for Artificial Intelligence, suggesting the Google engineer was correct and we had developed a machine that can truly think. They reached this conclusion based purely on a (slightly) edited transcript of a conversation Mr. Lemoine and another Google employee had with LaMDA. To be sure, LaMDA displays conversational abilities beyond what most of us have experienced dealing with chatbots, but even a cursory analysis of the transcript reveals obvious flaws in the claim that the machine is truly thinking rather than regurgitating, via sophisticated processing rules, the information it has been provided. Further, one needn’t be an expert in cognition or the intricacies of consciousness to be skeptical of these claims: Almost everyone deals with Google, Siri, or Alexa on a daily basis. As powerful and useful as these technologies are, they are as far from being true thinking machines as the Wright brothers’ first airplane is from a SpaceX Dragon capsule. Even the most basic questions can trip them up, producing nonsensical responses that reveal the limitations of modern “Artificial Intelligence.” What are the odds Google’s next generation bridges that gap in a single leap?
The situation becomes even more confused when even the skeptics seem blissfully unaware of the logical conclusion of their own claims. Noah Giansiracusa, an associate professor of mathematical sciences at Bentley University, and Paul Romer, a University Professor at NYU and co-recipient of the 2018 Nobel Prize in economics, published something of a formal critique in Barron’s, where they asserted that a chatbot cannot be sentient because it is a mathematical function like any other, and mathematical functions are by definition non-sentient. “We reached the same conclusion via a different path, using a little mathematical formalism to burn off the fog of confusion. A chatbot is a function. Functions are not sentient. But functions can be powerful. They can fool people. The important question is who will control them and whether they are used transparently.” As they define it, “A function is a rule for turning one number (or a list of numbers) into another number (or list of numbers). By this definition, all the AI systems in use today, including the LaMDA chatbot that triggered the recent controversy, are functions. AI systems are much more complex than the four functions listed above [in the Barron’s piece], much more. It would take hundreds of billions of symbols to write down the formula for the function that is LaMDA. So LaMDA is a very complex function, but a function, nevertheless. And no function is sentient.” This is nominally true: No one really believes there is a specific function to produce sentience, consciousness, or general intelligence. We will never be able to look at a piece of software, point to a line of code, and say: This is what makes it smart. Alan Turing correctly perceived this truth when he developed the Turing Test. At the same time, this does not preclude some currently unknown combination of advanced, likely yet-to-be-developed functions from collectively producing intelligent behavior. To claim otherwise is to assert that there is some magic, non-mathematical component to our own intelligence, something outside the laws of physics, which of course are all functions, meaning evolution can only produce other functions. The only alternative would be to assert humans possess something unquantifiable, such as a “soul.”
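Their “a chatbot is a function” point is easy to make concrete. Here is a minimal sketch of a “chatbot” in their sense, with a handful of made-up weights standing in for the hundreds of billions of real parameters; nothing here resembles LaMDA’s actual architecture, it only illustrates the numbers-in, numbers-out principle:

```python
# A toy "chatbot" in the Giansiracusa-Romer sense: a deterministic rule that
# turns one list of numbers (input tokens) into another (output tokens).
# The weights and arithmetic are illustrative assumptions, nothing more.
def toy_chatbot(tokens):
    weights = [3, 1, 4, 1, 5, 9, 2, 6]  # a real model has billions of these
    return [(t * w + 7) % 50000 for t, w in zip(tokens, weights)]

# Identical input always produces identical output: a function, nothing more.
print(toy_chatbot([101, 2054, 2003, 102]))
print(toy_chatbot([101, 2054, 2003, 102]))  # prints the same list again
```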
Turning from computer science to biology, experts in the field of animal behavior appear just as confused on the topic of sentience and intelligence in general. How else can you explain claims that insects, honey bees in particular, might be “conscious, feeling” creatures that “experience pain and engage in complex decision-making?” Vox.com recently reported on a new study that presented bees with a choice: They could feed from a range of sugar concentrations, from 10% to 40%. If the conditions at each feeding station were the same, the bees naturally chose the highest possible concentration. The researchers then added another variable, heat, to the highest-concentration feeder. The bees were willing to brave 131 degrees Fahrenheit (55 degrees Celsius) to “enjoy” the 40% sucrose solution if the only other options were 10% or 20%, but if either 30% or 40% was available without the additional heat, they rapidly gravitated to the more comfortable choice. “Instead of being sort of a robotic reflexive response, which would be them always avoiding the heat in any situation, they’re able to weigh up the different options and then suppress this response,” one of the researchers, PhD candidate Matilda Gibbons of Queen Mary University of London, explained. Her colleague, Andrew Crump, a postdoctoral biologist at the London School of Economics, pondered the ramifications for other insects: “Can we really say that just because bees are doing this, does that tell us much about other insects? It probably does about the closer related ones, so bees and wasps and ants and maybe flies, but as you get sort of further and further away, probably less.” The authors themselves, however, stopped short of claiming bees were sentient. According to Vox.com, this was because of the “inherently subjective nature of pain and consciousness.” Alas, Heather Browning, a philosopher and scientist in the Foundations of Animal Sentience project, also at the London School of Economics, who was not involved in the study, still insisted, “Work like this recent paper that shows motivational trade-offs [and] very strongly suggests pain experience is, in some sense, quite revolutionary.” “At least one of the likely roles of sentience for an organism, one of the reasons that [sentience] evolved, is to help an animal make trade-offs like this,” she continued. “It’s to help them have flexible decision-making when they have these competing motivations.”
Thus, sentience, consciousness, and self-awareness are reduced simply to flexible decision-making, something exhibited by even some microscopic organisms. Are they sentient too? They make these statements even though the bees’ behavior shouldn’t have been the least bit surprising. All animals with a sophisticated central nervous system extract and synthesize complex information about the environment to make decisions about their behavior. Generally speaking, this involves a few key components whether the organism in question is an insect or a human being. The nervous system is equipped with sensors that gather relevant information about the environment: sight, sound, smell, touch, etc. This information is then processed by a decision-making engine that applies both learned experience and instinct to determine the next best action based on the prevailing conditions. This is where “trade-offs” necessarily occur: Because the information coming in is always going to be variable (for example, the quantity of sugar in a solution, or the intensity of the heat near the food source) and provided in multiple streams (for example, the presence of food versus the presence of danger), even a “simple” organism exhibits far more than basic binary decision-making, as the sketch below illustrates. This is the entire point of a central nervous system in the first place, and no organism would survive long in a complex world without flexibility in its behavior. An animal or even a microbe needs to decide when to eat, what to eat, when to rest, where to rest, when to mate, whom to mate with, etc. These choices are not simple in an ever-changing world with near-limitless options, and some ghostly sentience has never been required to explain variable behavior.
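To see how little machinery the celebrated “trade-off” requires, consider a minimal sketch: a single scoring rule that weighs reward against discomfort. The linear form and the weights are my own illustrative assumptions, not values from the bee study, yet the same “flexible” pattern falls out:

```python
# A minimal sketch of "flexible" foraging with zero sentience: score each
# feeder by weighing reward against discomfort, then take the best score.
# The linear form and weights are illustrative assumptions, not values
# from the bee study.
def score(feeder):
    return feeder["sucrose"] - feeder["heat"]  # reward minus discomfort

def choose(feeders):
    return max(feeders, key=score)

# Only the hot feeder offers rich food: braving the heat wins.
print(choose([{"sucrose": 40, "heat": 15},
              {"sucrose": 10, "heat": 0},
              {"sucrose": 20, "heat": 0}]))

# A comparably rich, unheated option exists: the comfortable choice wins.
print(choose([{"sucrose": 40, "heat": 15},
              {"sucrose": 30, "heat": 0}]))
```

Nothing in this rule is aware of anything; the “suppressed” heat avoidance simply falls out of the arithmetic.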
Indeed, once upon a time this was common knowledge in evolutionary biology circles: The concept was known as an Evolutionarily Stable Set. The phrase was coined by the great biologist John Maynard Smith in 1974, and the underlying idea was always that behavior was variable, both within a population and for each individual in a population. The classic example is the fight-or-flight response: whether to be a hawk or a dove. A population composed entirely of hawks quickly destroys itself. Successful groups need some kind of balance between the two extremes, and so scientists projected what this balance might look like and then compared the results to the real world. The mathematics revealed something interesting, however. The set in question constituted all observed behavior in the group. The underlying mathematics does not distinguish between a member of the group that acts like both a hawk and a dove depending on the situation, in other words one whose behavior is flexible, and a member that is always a dove or always a hawk. The aggregate is all that’s required, and in the real world scientists observe every possible combination. No one claimed these organisms were sentient because they adapted their behavior to meet the needs of a specific set of variables. At that point, sentience was reserved for only those rare animals that could understand their behavior, rather than merely execute it.
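The hawk-dove arithmetic is compact enough to sketch. In the standard game, V is the value of the contested resource and C the cost of losing an escalated fight; when C exceeds V, the stable mix has hawkish behavior at frequency V/C. The specific numbers below are my own illustrative choices, not Maynard Smith’s:

```python
# The classic hawk-dove game in a few lines. V is the value of the contested
# resource, C the cost of losing an escalated fight. The numbers are
# illustrative choices, not Maynard Smith's.
V, C = 4.0, 10.0  # C > V, so a population of pure hawks is not stable

def payoff(strategy, opponent):
    if strategy == "hawk":
        return (V - C) / 2 if opponent == "hawk" else V
    return 0.0 if opponent == "hawk" else V / 2  # dove

def expected(strategy, p_hawk):
    # Payoff against a population playing hawk with frequency p_hawk.
    return (p_hawk * payoff(strategy, "hawk")
            + (1 - p_hawk) * payoff(strategy, "dove"))

p = V / C  # the stable aggregate frequency of hawkish behavior
print(expected("hawk", p), expected("dove", p))  # equal: neither can invade

# Note what the model sees: only the aggregate p. The result is identical
# whether 40% of individuals are always hawks or every individual plays
# hawk 40% of the time. Individual flexibility is invisible to the math.
```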
This is doubly true of honey bees, whose complex behavior as social animals is already well established. They are, in fact, one of only three known species that can communicate information about an object to other members of their group without the object itself being present, the other two being a species of crow and humans. Honey bees live in a society so stratified that each bee’s specific role is determined by its genetic makeup. Feeding the hive and hatching new members requires massive amounts of food, a challenge made harder because not all members of the hive participate in finding and gathering food. Instead, scouts range from the hive in different directions, locate potential areas ripe for exploitation, and then return to report their findings to their comrades. This presents its own unique challenge, however: How is the hive supposed to determine the optimal source out of the options presented? The bees solve this problem via a complex dance. The form of the dance indicates the direction and distance of the food source, while the intensity suggests its size. The rest of the colony observes the returning bees perform their dances and then collectively decides. Essentially, they vote. This incredibly complex behavior is truly astounding to watch, and it puzzled scientists for decades. How could a creature with a brain as simple as a bee’s possibly communicate the distance, direction, and size of distant objects, much less vote on the outcome?
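For the curious, the information content of the dance can be sketched in a few lines, using common textbook approximations: the waggle run’s duration scales roughly with distance, and its angle from vertical maps to the flight bearing relative to the sun. The constants and the circuit-count “vote” below are illustrative assumptions, not measurements:

```python
# A minimal sketch of what a waggle dance encodes, under common textbook
# approximations. The constants and the circuit-count "vote" are
# illustrative assumptions, not measurements from any study.
def decode_dance(waggle_seconds, angle_from_vertical_deg, sun_azimuth_deg):
    distance_km = waggle_seconds * 1.0  # crude duration-to-distance rule
    bearing_deg = (sun_azimuth_deg + angle_from_vertical_deg) % 360
    return distance_km, bearing_deg

# Scouts advertising richer sites dance more circuits, recruiting more
# followers, so the colony converges on the best option: in effect, a vote.
dances = [
    {"site": "A", "circuits": 30, "waggle_s": 1.2, "angle": 40},
    {"site": "B", "circuits": 12, "waggle_s": 0.5, "angle": 300},
]
winner = max(dances, key=lambda d: d["circuits"])
print(winner["site"], decode_dance(winner["waggle_s"], winner["angle"], 135))
```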
As biologists began to probe the dance further, however, a few things became apparent. First, the language of the dance was much simpler than it seemed. Second, the dance itself wasn’t all that communicative: in many cases it was acted on somewhere between one in 50 and one in five times, and overall the bees only make use of the information about 10% of the time. Third, the overall behavior seemed designed more to excite the hive than anything else, and it likely originated as precisely that, perhaps in conjunction with the need to identify new nesting sites. In other words, it’s both less complicated and less useful than it initially appears, falling far short of what anyone would describe as sentient behavior. It requires only the ability to glean information about the outside world (something bacteria are capable of), store it in memory (something most animals are capable of), and behave based on simple rules (something almost all animals are capable of). A comparison to ants is helpful: Ants also live in colonies and face the same general evolutionary challenge, but they have solved it in a different way. Rather than reporting back to the colony, ants leave behind a trail of pheromones, their scent, as they travel. Other ants are predisposed to follow these trails if the concentration is high enough. As more ants head to the same potential food source, the trail intensifies, causing even more ants to follow. At some point, the trail can become intense enough that almost all of the ants will head in the same direction. They accomplish this without any need for sentience, only a basic rule to follow a trail based on its intensity. The behavior of the colony itself is complicated, but each individual ant is only executing a relatively simple program with the flexibility appropriate to a central nervous system.
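That trail-following rule is simple enough to simulate. The sketch below, with made-up deposit and choice parameters, shows the positive feedback at work: each ant favors the stronger trail and reinforces whichever one it takes, and the colony “decides” without any individual deciding anything:

```python
import random

# A minimal sketch of the ants' rule: follow the stronger trail with
# probability rising steeply with pheromone concentration (a nonlinear
# response, as in classic trail-choice models), and reinforce whichever
# trail you take. All constants here are illustrative assumptions.
random.seed(42)
trail = {"left": 1.0, "right": 1.0}  # pheromone strength on two trails

def pick(trail):
    # Steep (squared) response to concentration drives the feedback loop.
    wl, wr = trail["left"] ** 2, trail["right"] ** 2
    return "left" if random.random() < wl / (wl + wr) else "right"

for ant in range(500):
    trail[pick(trail)] += 0.5  # each ant deposits pheromone as it walks

print(trail)  # one trail ends up carrying nearly all of the traffic
```

Run it with different seeds and the winning side changes, but the lock-in itself is nearly inevitable: the “decision” is a property of the feedback loop, not of any ant.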
Regardless of how any particular animal solves an evolutionary challenge, there is no doubt the dance of a bee is far more complicated than anything these scientists recently observed in their feeding study, much less the intricate set of behaviors required to build the hive in the first place. This prompts a key question: Why is potential sentience in bees becoming a topic now, when we have known for decades about these other, far more complicated behaviors? Put another way, why are they intentionally defining sentience down, from something unique to humans and higher animals, something that used to require an awareness of the behavior in question, to the mere exhibition of the behavior? The question takes on additional importance when we are simultaneously rushing to claim computers are sentient as well. What is driving the need to devalue what has traditionally been understood as a trait exclusive to humans, or at most shared with primates and perhaps a handful of other advanced species, redefining it into a meaningless term that describes only the adaptive behavior present in almost all species with advanced nervous systems?
For bees, at least, we have a ready answer: Animal rights activists are already pushing for protections for insects. The thinking is obvious: If they can convince enough of the public that insects feel pain, people will be more receptive to insect rights. As Vox described it, “If just a small fraction of the 10 quintillion insects alive right now can feel pain, some changes may need to be in order…We can find ways to more humanely coexist with insects, such as reducing insecticide use at home and on farms. Policymakers might one day consider protecting insects under the law too. Earlier this year, the UK parliament passed the Animal Welfare (Sentience) Bill, which encompasses all vertebrates; cephalopods, like octopus and squid; and decapods, like lobsters, shrimp, and crawdads. The law isn’t going to, say, outlaw shrimp farming, but it’s a sign that those highest in government are giving the question of animal sentience real consideration.” More broadly, I can only speculate, but this seems to be part of a larger trend to view humanity in a less elevated light, to make us less special by redefining our unique traits into something shared with all other animals and now even machines. There is little doubt that some segment of humanity has rapidly become anti-human. They no longer believe humans have the right to transform the planet to better meet our needs, embracing a philosophy that views nature for nature’s sake alone. Some might even go so far as to say the world would be better off without us, and that as much of the world as possible should be protected from human development. However well intentioned, this philosophy is a dead end for human advancement and the fulfillment of human potential. It should be rejected at every turn, even as it is rapidly embraced by the real, card-carrying experts.