Will AI be a threat to humans? Will it take over our jobs? These questions are popping up in newspapers, blogs and journals everywhere. They are really important questions, and I want to discuss them, but I would like to take a slightly different route. In this article, I will lay out the major reasons behind these fears, and then explain why I think there is a possibility of AI faltering in its course.
But first, there is that difficult job of defining AI systems.
Many authors who predict an AI victory over humans opt to skip defining what they mean by AI. And it is understandable. I have been working on AI systems for more than three years now: understanding, building and putting AI systems to use. But if you ask me to define AI systems, I am confounded. When it comes to calling something AI, there are some peculiar phenomena that I have observed:
No longer AI: A system loses its ‘AIness’ as it becomes more and more familiar. The chess program that beat the then world champion in 1997 no longer enjoys a place in the AI club. Recently the AlphaGo system built by Google defeated the world champion in the complex and ancient game of Go. AlphaGo still wears its AI tag with pride, but I am sure it will lose its shine in a few short years.
It’s not AI, it’s X: Bertrand Russell, the famous philosopher and mathematician, compared knowledge to a vast filing cabinet, with drawers marked Physics, Mathematics and so on. The last drawer is ‘Don’t Know’, which is the realm of the philosophers. Once a topic becomes well defined enough, it is given a name of its own, like Physics. The same might be true in the case of AI. Once a topic in AI becomes well known, for example language processing, it becomes a subject of its own: ‘It’s not AI, it’s NLP’.
That is AI, too: This is the opposite effect, in which subjects that were earlier independent are pulled under the AI umbrella. Statistical learning methods got the name Machine Learning, and are now mentioned in the same breath as AI (‘AI, Machine Learning…’), making it look like a part of AI.
To avoid getting trapped in this maze, some writers take a simpler route – defining AI by its function. It reads something like ‘AI is teaching machines to develop human capabilities’. But this definition leads us into another minefield.
‘Earlier, accounting was done by human beings. Now it is done by accounting software. So can we call accounting programs AI?’
While nobody is foggy enough to award the AI medal to accounting software, the question is certainly difficult to answer. One can even push this line of argument backwards and claim that the automatic weaving machine is AI because it has some human capabilities. We really need to come out of this hole that we seem to have dug for ourselves.
Frustrated by my inability to even define the thing I am working on, I devised my own acronym to describe AI systems. After a lot of thought, I invented the term Systems with Human Aspirations (SHA). Though exhausted by this supreme effort, I continued tirelessly and gave SHA a short meaning: ‘a machine, program or any future artefact that aspires to emulate or surpass one or more of the human capabilities’.
One thing this does is allow the accounting software, and even the lowly weaving machine, into the SHA club. If this sounds like digging a really magnificent hole, I totally agree. So as a remedy, I devised a classification1.
Called the SHA Classification, it divides the whole SHA universe into four classes based on three characteristics:
- Capability Type: The area of capability in which the SHA aspires to compete with humans – is it an area of human strength, or an area of weakness?
- Performance Level: Is the SHA better or worse than humans in the capability it aspires to?
- Basis of Usefulness: Why is the SHA useful to humans?
There are two ways in which an SHA can be useful to human beings:
1. Usefulness by Extension: SHAs do not have some constraints that human beings have. Two are particularly notable – energy and psychology.
Energy: An automated weaving machine is not as good as a human weaver, but it can burn a lot of energy. A modest 1.5 kW power loom will burn over 10,000 kilocalories in an eight-hour shift, while humans spend around 2,500 kilocalories in a whole day!
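The loom's energy figure is easy to verify with a back-of-the-envelope calculation. A minimal sketch, assuming the standard conversion of roughly 860 kilocalories per kilowatt-hour:

```python
# Energy throughput of a 1.5 kW power loom over an 8-hour shift,
# compared with a typical human daily energy budget.
POWER_KW = 1.5
SHIFT_HOURS = 8
KCAL_PER_KWH = 860  # 1 kWh is approximately 860 kilocalories

loom_kwh = POWER_KW * SHIFT_HOURS       # 12 kWh per shift
loom_kcal = loom_kwh * KCAL_PER_KWH     # about 10,320 kcal per shift
human_kcal = 2500                       # typical human daily expenditure

print(f"Loom burns {loom_kcal:,.0f} kcal per shift")
print(f"That is {loom_kcal / human_kcal:.1f}x a human's daily budget")
```

So the loom, despite being a humble machine, burns about four times the energy a human spends in an entire day.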
Psychology: The capability of SHAs can be augmented simply by employing more SHAs. For reasons we will soon see, this method of augmentation doesn’t work very well with humans.
SHAs which are not as good as humans, but which have the Lack of Constraints (LoC) advantage, are useful as Extensions to human beings.
2. Usefulness by Substitution: If an SHA has better capability than human beings in a certain area, it can substitute humans. An example is the calculator of earlier days. It is decidedly better than us at calculating, say, 29 × 17, and is useful by substitution.
And the classes are:
- Class 1: SHAs that compete in an area of human weakness but are still not good enough. They are useful by extension because of LoC. Our power loom falls in this class.
- Class 2: SHAs that compete in an area of human weakness and are better than human beings. They thus substitute humans, like the accounting software.
- Class 3: SHAs that aspire to compete in areas of human strength, like language. They are not as good as humans, but the LoC advantage makes them useful by extension. Most SHAs that are called AI today fall in this class.
- Class 4: SHAs that compete in areas of human strength and become better than humans. They can be useful (and threatening) by substitution.
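The four classes above follow mechanically from the first two characteristics; the basis of usefulness is then implied by the performance level. A minimal sketch of the mapping (the function name and boolean parameters are my own shorthand, not part of any published scheme):

```python
def sha_class(competes_in_human_strength: bool, better_than_humans: bool) -> int:
    """Map the two SHA axes to a class number (1-4).

    Usefulness follows from performance: worse-than-human SHAs are
    useful by extension (Lack of Constraints advantage), while
    better-than-human SHAs are useful by substitution.
    """
    if not competes_in_human_strength:          # area of human weakness
        return 2 if better_than_humans else 1
    return 4 if better_than_humans else 3       # area of human strength

# The article's examples:
print(sha_class(False, False))  # power loom -> Class 1
print(sha_class(False, True))   # accounting software -> Class 2
print(sha_class(True, False))   # most of today's "AI" -> Class 3
print(sha_class(True, True))    # the threatening kind -> Class 4
```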
It is evident that we do not have to worry much about Class 1–3 SHAs. They can be compared to cows and horses: useful and harmless. We start getting worried when it comes to Class 4.
A slightly different and vaguer classification is the ANI–AGI classification mentioned in much of the current literature on AI. The development of Artificial General Intelligence (AGI) is considered the beginning of our existential threat. AGI is defined as the capability of a machine to learn any task, as compared to Artificial Narrow Intelligence (ANI), which learns one or a few specific tasks.
But the emergence of an AGI system may not be the only way for machines to begin their domination. Many Class 4 SHAs working together can be powerful enough to threaten humans. Considering that the development of AGI is still an unknown and uncertain prospect, the possibility of Class 4 SHAs working in tandem may pose the greater risk to us.
This is a good time to inspect the psychological aspects in which SHAs score over humans. This is crucial to our discussion: once SHAs come near human performance, the lack of psychological constraints can give them an immediate advantage.
What are the psychological constraints that hold humans back? We can divide them into two types – personal constraints and team constraints.
Personal constraints have to do with an individual. Some of them are:
Biases: The human mind is prejudiced by nature. It is plagued by various biases such as ‘confirmation bias’, ‘loss aversion’ and ‘stereotyping’. In fact, we are far more biased than we would care to admit2. These biases affect our judgement and reasoning.
Boredom: The human mind cannot do the same activity over long stretches of time.
Likes and dislikes: Humans prefer to do some things and avoid others. This is partly related to biases.
But human limitations are really highlighted when it comes to working together. To anyone naïve enough not to understand human nature, it would seem that putting humans together will multiply their capability – one way to overcome the energy limitation. But human beings don’t work together well. When they are put together in a team, they exhibit certain traits:
Cheating: Claiming an unfair advantage for oneself at the cost of others. It comes in many forms, from basics such as not doing your work properly all the way up to outright theft and bribery.
Groupism: Where a few members of a team band together and consider the others as different from them. Loyalty to the group and hostility towards non-members are its basic requirements.
Alpha male domination: The tendency of one member of a group to claim control over the behavior of the entire group. Sometimes useful in the short term, but its long-term effects on the team are always harmful.
The commons problem: Where individual members consume a disproportionate amount of common property. Can be considered a type of cheating.
Conflicts: When two team members hold grievances against each other, mostly due to competing claims on a real or imaginary resource.
Revenge: A way of seeking psychological closure for an outstanding conflict that, in fact, leaves the conflict outstanding for the future.
Blame: The tendency of the human mind to allocate responsibility for a failure somewhere, anywhere.
In any human team that comes together for a purpose, these and other such traits rear their ugly heads sooner or later. Human beings are really dumb when it comes to working together.
This is the area where SHAs can beat human beings. SHAs don’t have minds and so they can work together well. If you want one reason to be afraid of SHAs, then it is the psychology – ours.
Thus the danger of Class 4 SHAs with different capabilities coming together, unhindered by psychological issues, and substituting humans seems clear and present.
But is it real? Are we missing something?
I have reasons to believe that this scenario is not likely. Let me try and describe why.
The capabilities of human beings have developed through evolution and natural selection. Each new capability, even in its basic form, proved to be an advantage to its owner. This advantage made the owner’s offspring populous, pushing others into extinction. It is in this cut-throat, winner-takes-all world that we got our language and reasoning. Can SHAs get them without going through a similar process?
Something tells me it is unlikely. Going by past evidence, abilities comparable to those of existing species will develop only through a process similar to natural selection. Even today, scientists are working on training methods based on competition and are using algorithms based on evolution. The most likely way to Class 4 is through these methods.
And in that case, is it possible that SHAs will also evolve the traits that afflict us so much? After all, the biases and the selfish traits all have their origins in the evolutionary struggle3. Who is to say that SHAs facing a similar struggle will not be chained by similar attributes?
To be useful, then, SHAs have to perform a delicate balancing act. While competing with humans in a particular area, they have to improve their performance to a certain level, at which they become useful due to the Lack of Constraints advantage (remember energy and psychology). But pushing up to a level comparable to humans might involve the kind of evolutionary struggle that implants in them the same undesirable traits that humans have. This means there is a strong possibility of SHAs getting trapped in an equilibrium zone.
That, in short, is my conjecture: SHAs will come very close to equalling human beings, but may never quite get there. Class 4 SHAs may appear, but they will not pose a coordinated threat to humans.
1. The SHA Classification was first proposed in my article ‘A method of classification for AI systems’ in the CSI Communications, June 2016 issue. You can read the article at http://www.csi-india.org/digital_magazine/CSIJune16/mobile/index.html#p=10
2. Just go through the list of biases at https://en.wikipedia.org/wiki/List_of_cognitive_biases and you will see what I mean.
3. The disciplines that study psychology in the light of evolution are called Neo-Darwinism and Evolutionary Psychology. A good place to start is the iconic book ‘The Selfish Gene’ by Richard Dawkins. A wonderful exposition of cognitive biases is found in ‘Thinking, Fast and Slow’ by Daniel Kahneman.
Founder and CEO, Cere Labs.