In March 2016, “AlphaGo”, a computer program developed by Google DeepMind, played against Lee Sedol, a 9-dan professional Go player regarded by many experts as the best Go player in the world. Before the match with Lee Sedol, AlphaGo had gained fame as the first computer program to beat a professional Go player, and had extended that fame by winning 70 matches against other professional players, a win rate of 100%. The match against Lee Sedol drew worldwide attention under the billing “Man vs. A.I.”, attracting viewers who had previously shown no particular interest in Go.
AlphaGo was not specifically trained or designed to compete with Lee Sedol or any other particular player. Rather, it was designed simply to make, in any position, the move that would give it the highest probability of winning, whether the opponent’s previous move was random or the choice of the best professional Go player in the world. Given this, and given that AlphaGo had maintained a 100% win rate in its 70 matches against other top-level professional Go players, many experts expected an easy win for AlphaGo, since it had been built through “deep learning” – a machine learning process that imitates the way the human brain processes data and forms patterns for use in decision making. The general public, on the other hand, wanted Lee Sedol to win, believing that his defeat would amount to the ultimate defeat of mankind by artificial intelligence.
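For intuition only, the decision rule described above – choose whichever legal move carries the highest estimated probability of winning – can be sketched in a few lines of Python. The names `legal_moves` and `estimate_win_probability` are hypothetical placeholders, not AlphaGo’s actual interface, which combines deep neural networks with Monte Carlo tree search.

```python
# Illustrative sketch only: a greedy "pick the move with the highest
# estimated win probability" rule, the decision principle described above.
# `legal_moves` and `estimate_win_probability` are placeholder functions
# supplied by the caller, not AlphaGo's real components.

def choose_move(board_state, legal_moves, estimate_win_probability):
    """Return the legal move whose estimated win probability is highest."""
    best_move, best_prob = None, -1.0
    for move in legal_moves(board_state):
        prob = estimate_win_probability(board_state, move)  # value in [0, 1]
        if prob > best_prob:
            best_move, best_prob = move, prob
    return best_move
```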
Contrary to the general public’s hopes, Lee Sedol seemed helpless against AlphaGo’s near-perfect play in the first three games, and many fans were shocked by the projection that mankind might, in the near or distant future, prove inferior to machines, with artificially intelligent machines taking dominance of the world from humans. The problem with this projection, however, lies here: although computer programs like AlphaGo may surpass humans at playing games such as Go, which have fixed rules that can be represented mathematically, it would be far harder, and perhaps impossible, for a program to surpass people at, for example, holding a conversation attuned to the other person’s mood, since a program can hardly “understand” a person’s feelings as well as another person can from nothing more than mathematical representations of that person’s facial expressions.
Pursuing a similar idea, John R. Searle introduced the terms “strong A.I.” and “weak A.I.”. As the terms are commonly used today, they distinguish forms of artificial intelligence like AlphaGo, which approach or surpass human ability in one limited area, from forms that approach human ability in all areas.
Earlier, Alan Turing had devised the “Turing Test” as an attempt to measure a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. Put simply, the test judges a machine’s conversational skill: a computer program passes if its responses are human-like enough to fool a human judge into believing it is a person.
The Turing Test has since been developed further in many variants. One well-known version is the so-called Standard Interpretation (described, for example, in the Stanford Encyclopedia of Philosophy), which amends the original test only slightly. In this version, interrogator C puts questions to two respondents, computer A and person B. The interrogator’s task is to determine, through questioning alone, which respondent is the human and which is the computer program; based on the speed and quality of their responses, C decides which is which. The procedure is repeated several times, and computer A passes the test if it is identified as the human more often than B, the actual human, is.
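As a rough illustration of the pass criterion just described – not a faithful implementation of the test, since the real interrogation is an open-ended conversation – the following Python sketch tallies how often each respondent is judged to be human over repeated rounds. The `interrogate` function here is a stand-in that simply guesses at random.

```python
import random

# Toy simulation of the pass criterion described above, under the simplifying
# assumption that each round ends with the interrogator labelling exactly one
# respondent as "human".

def interrogate():
    """Return which respondent the interrogator judges to be human: 'A' or 'B'."""
    return random.choice(["A", "B"])  # stand-in for a real question-and-answer session

def run_test(rounds=100):
    a_judged_human = sum(1 for _ in range(rounds) if interrogate() == "A")
    b_judged_human = rounds - a_judged_human
    # Computer A "passes" if it is identified as the human more often than
    # the actual human B, per the criterion given in the text.
    return a_judged_human > b_judged_human

print("Computer A passes:", run_test())
```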