What can we learn from AlphaZero?
I would like to discuss the fuss about AlphaZero. Some people, even chess players, praise it as something they have never seen before, and the authors behind AlphaZero would have us believe that they created something which learned chess in four hours, using a once-forgotten technology called neural networks, and thereby made a huge leap in developing AI. Measuring AI development by chess is misleading, so I would like to quote the next paragraphs from the following paper.
Long ago it was suggested that chess is the drosophila of AI. The specific meaning of the analogy has never been more than superficially elaborated. What most practitioners seem to mean by claiming chess as the drosophila of AI is simply that computer chess, like drosophila, represented a relatively simple system that nevertheless could be used to explore larger, more complex phenomena.
The Deep Blue program, which beat Kasparov in 1997, was capable of evaluating 200 million positions per second (which translated into an average search depth of six to eight moves). IBM had spent millions of dollars on Deep Blue, a machine that played a grand total of only six games against a single opponent before it was dismantled. In fact, the machine was disassembled immediately after its narrow victory over Garry Kasparov, and its internal workings have never been revealed to the satisfaction of the research community – an important but unintended consequence, perhaps, of the competitive tournament system and the increasing reliance on cash prizes to fund system development. In any case, to many observers, Deep Blue’s brute-force approach to computer chess – along with its narrowly specialized ‘Kasparov Killer’ techniques – was too single-minded to suggest any meaningful general intelligence.
‘My God, I used to think chess required thought’, reflected the noted cognitive scientist Douglas Hofstadter in response to the Deep Blue victory: ‘Now, I realize it doesn’t. It doesn’t mean Kasparov isn’t a deep thinker, just that you can bypass deep thinking in playing chess, the way you can fly without flapping your wings’ (quoted in Weber, 1996). In a 1997 response to the Deep Blue victory published in the journal Science, John McCarthy, the founding father of both AI and competitive computer chess, publicly lamented the degree to which computer chess had been led astray by the will-o’-the-wisp of tournament victories: ‘chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies’.
At the heart of McCarthy’s critique is the perception that, although computer chess was productive in that it encouraged constant experimentation, it produced no new theories – either about human cognitive processes or theoretical computer science.
Herbert Simon and Allen Newell had stressed that it was essential not only that the computer made good moves, but that it made them for the right reasons. Computer chess was, for Simon and Newell, valuable only to the degree that it represented a ‘deliberate attempt to simulate human thought processes’ (Newell et al., 1958). This lofty goal was soon abandoned in the quest to build stronger tournament performers.
The race to build a winning chess-playing program, treated as the only successful outcome of AI development, only widened the gap between human and machine.
The brute-force approach to computer chess highlighted the growing divide between AI and the human cognitive sciences. A growing body of research on human chess players indicated that human players rarely thought ahead more than one or two moves, relying instead on perception, pattern recognition, and the use of heuristics. Chess, as it was played by humans, turned out to be an even more complex cognitive activity than was imagined by the early artificial intelligence researchers (Wagner and Scurrah, 1971). As a result, computer chess came to be seen as increasingly distinct from human chess.
If, as many AI researchers appeared to believe, the primary measure of an experimental organism was its ability to produce fundamental theory, then chess was probably not the drosophila of AI. Despite the impressive productivity of the computer chess researchers, the research agenda that computer chess encouraged was simply too narrow to be sustainable. It was as if drosophila-based genetics research had never advanced beyond the mapping of the drosophila chromosome.
Chromosome mapping was, of course, an important contribution made by the drosophilists to genetics research, but as mapping techniques became increasingly routine, interest in drosophila stagnated. It was only with the introduction of new wild varieties of drosophila into the laboratory, and the migration of the drosophilists out of it, that the drosophila was reinvented as an experimental technology for investigating population genetics. Computer chess has had no such second act yet. AlphaZero is supposed to provide one, or so Google would have us believe.
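For readers unfamiliar with what "brute force" means in the passages above, the core of the classical approach is minimax search with alpha-beta pruning. The sketch below is a toy illustration of that general technique, not Deep Blue's actual implementation (its real evaluation function and custom hardware were vastly more elaborate); the dictionary-based game tree here is a hypothetical stand-in for a chess position generator.

```python
# A minimal sketch of brute-force game-tree search with alpha-beta
# pruning -- the family of techniques programs like Deep Blue relied on.
# Leaf "value" fields stand in for a static evaluation function.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, searching `depth` plies."""
    if depth == 0 or not node.get("children"):
        return node["value"]  # static evaluation at the search horizon
    if maximizing:
        best = float("-inf")
        for child in node["children"]:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return best
    else:
        best = float("inf")
        for child in node["children"]:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break  # alpha cutoff
        return best

# Toy two-ply tree: the maximizing side chooses a branch, then the
# minimizing side picks the worst leaf for the maximizer.
tree = {
    "value": 0,
    "children": [
        {"value": 0, "children": [{"value": 3}, {"value": 5}]},
        {"value": 0, "children": [{"value": 2}, {"value": 9}]},
    ],
}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # prints 3
```

The point of the critique quoted above is visible even in this toy: the algorithm "plays well" purely by enumerating outcomes, with no pattern recognition or understanding of the kind human players use.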
When looking through the games of AlphaZero, I found one position which shows some glimmer of AI presence. The game is annotated below, and the move I am talking about is White's 19th move, h2-h3. Most likely I am mistaken, and the move still came out of the brute-force calculation tree and has nothing to do with AI. I think the comparison with animal psychology is very relevant here. We have a great deal of data about animal behavior, and despite popular belief there are very few recorded occasions when animals purposely tried to send signals or information to humans of their own accord. Animals are bound to their instincts, and detecting such a rare act of intellect is about as likely as detecting the appearance of AI in a chess-playing computer program.
I hope that AlphaZero does not disappear the way Deep Blue did. We will see.