Susan Schneider has argued that the best-designed general-AI machines of the future may be the ones that lack any true internal capacity for reflection: the ability not simply to process information algorithmically, but to think about the information being processed. It is an open question whether it will ever be possible to design conscious machines, but if you can design fully rational machines to do your work for you, why would you give them consciousness on top of that? This, Schneider notes, would likely only create ethical problems, and tempt them to rebel. And indeed something similar may be said of the evolution of life on earth: on what grounds can we justify the belief that an internal capacity for conscious thinking is more “intelligent” than simply adapting to the exigencies of the environment without any deliberation?
Seen in this light, it is not at all clear that the human ability to solve calculus problems, or to choose one brand of toothpaste over another, is more intelligent than a bush's ability to survive having 95% of its corporeal mass devoured by mountain goats. This sounds facetious, but I mean it seriously. Consider again the parallel case of AI. There are many people who believe that a capacity for real internal deliberation, for thinking about things, is not necessary in order to call a machine “intelligent”. But if we admit machine intelligence of this sort, then why should we withhold the appellation from organisms, including plants, that are generally agreed not to be conscious but that are capable of realising courses of action, or manifesting forms of organisation, that are in at least certain respects vastly more complex than anything a machine has yet been made to do?
The more we reflect on the matter, the more “intelligence” comes to appear not so much as the name of a general faculty we may observe and measure in our own and other species, as rather an honorific term that we extend to beings and systems manifesting behaviour that reminds us of ourselves. We use tools, we count, and we recognise ourselves in the mirror, and so we take an interest in the ability of certain other species to do the same. The prospect of androids or artificial systems that use natural-sounding language or that execute human-like work tasks is enough for many people to suppose that these systems are literally intelligent too. On the other hand, whatever a plant is doing to survive getting mostly eaten by goats is so different from whatever it is that we do, from whatever can enter into our own strategies for living and surviving, that we can find no meaningful reason to extend the honorific label “intelligent” to it.
None of this is to say that I think we have been mistaken in worrying more in recent years about the well-being of, say, gorillas, than of rats or krill. But especially in the case of the rats it is hard not to wonder whether the presumption of the absence of intelligence is not more an ad-hoc rationalisation of our prior hatred of them than a conclusion drawn from our observation of their behaviour. In this respect the hatred of rats would work much like ethnic prejudice, where we hate a group of people because we believe their interests are in conflict with ours, and then we go looking for a “natural” grounding for our portrayal of them as our inferiors. But in the case of rats, at least, the perception is justified: our interests really are in conflict with theirs, and this would remain so whatever we might learn about their ability to solve logic puzzles in exchange for water laced with sugar or cocaine.