Impact of Computers on Chess
Abstract
This paper examines the impact of computer engineering on the game of chess. It explores how computers have shaped opening theory, game analysis, endgame study, and the overall quality of chess play. Its goal is to show that computers can be used to learn about things in the real world when they are effectively programmed with prior human knowledge about those topics.
Introduction
Chess is a game whose objective is checkmate. This occurs when one player attacks the opponent's king in such a way that it cannot escape capture on the next move. Chess has been around since at least the fifth century, first being played in India. Important concepts in chess strategy are king safety, central control, material advantages, spatial advantages, and piece activity. Human players form their strategy around these general guidelines and base their play on the specific moves available in a position. Prophylactic moves may be played to limit the opponent's ability to form a strong response.
The earliest paper chess algorithms were written in 1941 in Konrad Zuse's Plankalkül programming formalism. Plankalkül was the first high-level programming language to be designed. However, because many of his machines were destroyed by Allied bombing, his work was not published until the 1970s. As a side note, Zuse also built the first programmable computer (Wikipedia on Zuse).
After the turmoil of the Second World War, three founding figures of computer science, Norbert Wiener, Claude Shannon, and Alan Turing, wrote about the game of chess (Wikipedia on Computer Chess). Wiener's book Cybernetics describes how a chess program could be developed using a depth-limited minimax search algorithm. Shannon then published "Programming a Computer for Playing Chess", and Turing was the first to develop, on paper, a program capable of playing a full game of chess. Turing's algorithm was not very effective, and as chess computing evolved it became clear that increasing the depth of a minimax search was the most effective way to strengthen a chess program.
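To make the early idea concrete, here is a minimal sketch (in Python, my choice; the pioneers' papers do not specify an implementation) of the depth-limited minimax search Wiener described. The Position interface with its is_terminal, evaluate, legal_moves, and play methods is hypothetical; a real engine would have to supply concrete implementations of each.

```python
# A minimal depth-limited minimax sketch. The Position interface
# (is_terminal, evaluate, legal_moves, play) is hypothetical.

def minimax(position, depth, maximizing):
    """Return the best evaluation reachable within `depth` plies."""
    if depth == 0 or position.is_terminal():
        return position.evaluate()  # static evaluation at the search horizon
    if maximizing:
        # The maximizing side picks the continuation with the highest score.
        return max(minimax(position.play(m), depth - 1, False)
                   for m in position.legal_moves())
    else:
        # The minimizing side picks the continuation with the lowest score.
        return min(minimax(position.play(m), depth - 1, True)
                   for m in position.legal_moves())
```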
In 1956, a program developed by Paul Stein and Mark Wells for the MANIAC I computer became the first to play a chess variant, Los Alamos chess, which is played without bishops (anti-clerical chess) (Wikipedia on Computer Chess). In 1962 the first program to play credibly, Kotok-McCarthy, was created at MIT. In 1966-67 the first match between computer programs was played, in which a program from the Moscow Institute for Theoretical and Experimental Physics beat the Kotok-McCarthy program at Stanford University, by telegraph over nine months. In 1992, microprocessors achieved a significant feat when a microcomputer won the seventh World Computer Chess Championship ahead of supercomputers, mainframes, and special-purpose hardware. This shows that chess can be used to follow the evolution of computing, and that many hardware advances are first tested on the game of chess before being extended to other purposes.
Alexander Kotov wrote "Think Like a Grandmaster" because he thought he should teach humans to think like computers. He made a point that while other grandmasters, such as World Chess Champion Mikhail Botvinnik, were humans teaching computers about chess, he would teach humans to play like computers. He wrote about the "analysis tree" and "candidate moves", concepts that are still taught to players today. This shows how computers have changed chess literature and the way the game is taught.
Today, prevalent chess engines include Stockfish, Fritz, Houdini, Komodo, Rybka, AlphaZero, and Leela Zero. The first five are older engines and can be classified as "brute force" engines because they evaluate millions of positions every second. AlphaZero and Leela Zero evaluate far fewer positions per second (on the order of tens of thousands), relying instead on reinforcement learning, in which the playing algorithm is refined based on the results of previous self-play. More will be said about these engines and their differences later.
Orienting Material
The Deep Blue vs. Garry Kasparov matches in 1996 and 1997 are the most notable and well-known instances of a World Chess Champion facing a computer in official matches. IBM developed Deep Blue with the help of Grandmaster Joel Benjamin. Kasparov won the first match 4-2, but Deep Blue won the rematch 3.5-2.5. The program was written in C and ran under the AIX (Unix) operating system. IBM did not let Kasparov study Deep Blue's previous games, nor did it let him look at changes to the log files between games.
This leads to another impact of computers on chess: the creation of chess cheating. Kasparov believed that the Deep Blue team cheated by giving the computer opening moves he thought the program did not have in its opening book. After the computer countered his dubious opening with a known but obscure response, the psychological shock led him to put up little resistance for the rest of the match. This instance of alleged cheating is unique in modern chess, because it involves humans being accused of giving a computer an unfair advantage using their own knowledge. Nowadays, chess cheating more often involves humans gaining an unfair advantage using computers' analysis of chess games.
IBM did not allow a second rematch and disassembled the machine. A 2003 documentary suggested that the match was a ploy to boost IBM's stock value rather than to offer the entertainment of competition or a true and fair assessment of the world's best human player against the world's best chess program of that time. Deep Blue could evaluate 200 million positions per second and was, in terms of raw search speed, the strongest computer ever to face a world chess champion.
Body
Nowadays even stronger engines exist, based more on software than on special hardware, that play at a strength against which even the best humans do not stand a chance, evaluating around 70 million positions per second. Perft (performance test, move path enumeration) is used by modern engines to count how many nodes (move paths) are reachable to a given depth. Engines also use the number of moves until a foreseeable checkmate as one method of evaluating how strong chess moves are. I know these facts from playing chess and from programming my own chess engine, so I do not cite them here.
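As a sketch of what perft computes, the snippet below counts move paths to a fixed depth. It uses the third-party python-chess library for move generation, which is my assumption for illustration; my own engine's internals are not shown here.

```python
import chess  # third-party python-chess library, assumed here for illustration

def perft(board: chess.Board, depth: int) -> int:
    """Count every move path (leaf node) reachable within `depth` plies."""
    if depth == 0:
        return 1
    nodes = 0
    for move in board.legal_moves:
        board.push(move)                  # make the move
        nodes += perft(board, depth - 1)  # count continuations recursively
        board.pop()                       # unmake the move
    return nodes

# Well-known reference values from the starting position:
# depth 1 -> 20, depth 2 -> 400, depth 3 -> 8,902.
print(perft(chess.Board(), 3))
```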
The Stockfish vs. AlphaZero match, and similar matches in Go and shogi, perhaps represent a turning point in the history of artificial intelligence. Google's DeepMind developed AlphaZero, which played these matches in late 2017. In the first 100-game match, AlphaZero won with a stunning score of 28-0-72 (28 wins, 0 losses, and 72 draws), a win-to-draw ratio far higher than in most computer vs. computer matches. Stockfish relied more heavily on brute force and was run with 64 threads and a 1 GB hash size, settings its creators criticized as suboptimal. AlphaZero used parallel neural networks on 5,000 first-generation tensor processing units (TPUs) to learn chess in just 9 hours. Each computer was given one minute per move, which the Stockfish creators also criticized, because Stockfish is well suited to identifying in-game "critical points" at which a good move may be much better than the second-best move, and to allocating extra thinking time there. Humans have also taken up this approach in their own games, showing another impact of computers on chess play. These complaints make it fair to question why the Stockfish creators accepted the match on Google's terms. Other criticisms of the match include the limited availability and reproducibility of the software, the fact that the effective training time was far more than 4 hours because the 5,000 TPUs ran concurrently, and the fact that only 10 of the 100 games were published.
The two programs differ fundamentally in their style of computing. AlphaZero used reinforcement learning algorithms to play chess (Wikipedia AlphaZero). Stockfish used alpha-beta search and bitboards as its core algorithm and data structure. AlphaZero used parallel computing (sharing a common network of learned knowledge across TPUs), and its Monte Carlo tree search examined around 80,000 positions per second. Stockfish used distributed computing, and its search examined around 70 million positions per second. Thus, although Stockfish examined far more positions per second per device, AlphaZero's 80,000 positions per second drew on 4 hours of stored information in a "database" that was actually the optimization of its chess-playing algorithm via reinforcement learning.
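For contrast with the minimax sketch above, here is a minimal alpha-beta sketch in negamax form, the pruning refinement at the heart of Stockfish's search. It reuses the same hypothetical Position interface (with evaluate() now scoring from the side to move's perspective) and omits the bitboards, transposition tables, and many other refinements a real engine adds.

```python
# A minimal alpha-beta sketch in negamax form. Position is the same
# hypothetical interface as in the minimax sketch; evaluate() here must
# return the score from the perspective of the side to move.

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf")):
    if depth == 0 or position.is_terminal():
        return position.evaluate()
    for move in position.legal_moves():
        # Negate and swap the window: the opponent's best is our worst.
        score = -alphabeta(position.play(move), depth - 1, -beta, -alpha)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # the opponent would avoid this line; prune the rest
    return alpha
```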
Chess engines have deeply impacted the way chess is played by professionals and strong players alike. Preparation for a chess match begins with work on a player's opening repertoire, where candidate moves are chosen based on the player's style and on the engine's evaluations of the strength of the resulting positions. Many openings once thought unplayable have been shown by computers to be playable, and many openings once thought playable have been concretely refuted by computers, all the way to checkmate. Chess literature based on such analysis has proliferated in the 21st century, with many books published each week. Very rarely is a book not at least checked against engine evaluations; since deep checking takes a long time, authors sometimes label evaluations as "unclear". Many complicated openings have variations whose evaluations change over time, and these changes are noted by grandmasters. Sometimes games end in results that were completely analyzed by one or both grandmasters before the game was played. The impact of computers on the opening phase of the chess game is thus immense.
Besides opening analysis, computing has also enabled the transmission of information through databases, which players often use to study their opponents' play before rounds in open tournaments. Players can also store their own games in these databases to analyze their own play. The most famous database program is ChessBase. This kind of study extends into middlegame analysis.
Endgame tablebases have analyzed positions of up to seven pieces. The database containing all of the positions, and the moves to draw or mate, with seven pieces or fewer is 140 terabytes in size (Wikipedia Endgame tablebase). Six-piece tablebases are downloadable for free and require 1.2 terabytes of storage. This shows that chess is theoretically solvable, but significant optimizations are needed to make solving it practical. If Moore's law continues to hold, we may see that day in the not-so-distant future.
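As a sketch of what probing a tablebase looks like in practice, the python-chess library can query downloaded Syzygy files; the library choice, the ./syzygy directory, and the example position are my assumptions for illustration, and the probe only works once the tablebase files are actually present.

```python
import chess
import chess.syzygy  # Syzygy probing module of the python-chess library

# "./syzygy" is a hypothetical local directory of downloaded tablebase files.
with chess.syzygy.open_tablebase("./syzygy") as tablebase:
    # A trivial K+Q vs K position, White to move.
    board = chess.Board("8/8/8/8/8/4k3/8/KQ6 w - - 0 1")
    wdl = tablebase.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss (side to move)
    dtz = tablebase.probe_dtz(board)  # signed distance to a zeroing move
    print(wdl, dtz)
```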
The increase in chess programs' playing strength is a corollary of Moore's law. Moore's law is the observation, made by Gordon Moore in 1965, that the number of transistors per square inch on an integrated circuit doubles each year. Ken Thompson experimentally discovered that each additional half-move (ply) of search depth increased a computer's playing strength by about 200 Elo rating points (Frederic Prost, 5).
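Purely as illustrative arithmetic for Thompson's finding, the snippet below extrapolates rating linearly from search depth; the calibration point is a hypothetical example, and the linear trend held only over the depths Thompson actually tested.

```python
# Illustrative only: roughly +200 Elo per extra ply of search depth.
# The calibration point (depth 4 ~ 1300 Elo) is hypothetical, not measured.
def estimated_elo(depth: int, base_depth: int = 4, base_elo: int = 1300) -> int:
    return base_elo + 200 * (depth - base_depth)

for depth in range(4, 11):
    print(depth, estimated_elo(depth))  # e.g. 4 -> 1300, 10 -> 2500
```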
As mentioned before, the advent of computers has created computer vs. computer and computer vs. human matches. The internet allows players to face other players online, for example for free on Chess.com; the Alexa ranking of chess.com places it within the top 1,000 most frequently visited sites in the world. Computers face each other every year in the TCEC (Top Chess Engine Championship), a multistage event. The "Play Magnus" app lets you play a computer representing the playing strength and style of the World Chess Champion, Magnus Carlsen, at any age from 5 to his current age (I am proud to say that I once drew against the age-20 version). Super-strong computers can also be played on the internet by those looking for the toughest challenge. Unlike AlphaZero, which DeepMind did not release to the public, Leela Zero is an open, community-developed (GitHub) reinforcement learning chess engine that anyone can play against.
As mentioned before, chess engines can be used to analyze mistakes and possible improvements. Cheating has also evolved as an issue in chess. In 2006, Veselin Topalov accused Vladimir Kramnik of cheating in the World Chess Championship, saying that Kramnik went to the bathroom to look at computer analysis of the game. This is ironic, because in the same year Kramnik played and lost against Fritz in the last official match to date between a computer program and a World Chess Champion. In 2010, three French players and their coach cheated at the Chess Olympiad. Their elaborate scheme used the online relay of moves from Russia to France, where another French grandmaster analyzed the positions for the next best move. These moves were communicated to the French coach, who took frequent trips to the bathroom to learn them in secret, and then passed to the players through gestures and signals the coach made when he walked into the playing area. The most infamous alleged chess cheater is Borislav Ivanov, who allegedly cheated in several tournaments from 2012 to 2014. Anti-cheating measures include prohibiting electronic devices from being turned on in the playing area or the bathrooms after the round starts.
Conclusion
Computers have significantly impacted the chess world by changing the way players study and play the game. At the pinnacle of chess, the World Chess Champion and the challenger hire seconds to do extensive computer analysis of positions they may play. For example, for the previous Carlsen vs. Karjakin match, Carlsen hired a coach, a sparring partner, and a manager to train him, as well as five other grandmasters to do computer analysis as his seconds. Thus, through training and computer analysis, a player's strength can be raised toward the world's best. Karjakin used exclusively Russian grandmasters and may have lost the match partly because he had fewer seconds providing quality engine analysis. This match was another "West" vs. "East" contest in which the West faced the Russian chess powerhouse and won. Before the advent of computers, tournament and world championship games that passed a certain amount of playing time were adjourned for later play. This practice died out with the advent of computer analysis, which would give players an unfair advantage during the adjournment or reveal that playing on was not worthwhile for the opposition.
Chess has led the way in revealing what computers are capable of performing. Charles Babbage and Ada Lovelace predicted that chess could be programmed using numbers to represent positions. Still, when computers were first built, no one knew their full scope and capabilities. Turing asked "Could one make a machine which would obey the rules of chess, i.e. one which would play random legal moves, or which could tell whether a given move is a legal one?" and "Could one make a machine that would have feelings like you and I do?" in section 1.2, "Chess", first published in Faster than Thought in 1953. The answer to the first question is "yes"; the second remains open to debate. This shows that computers can be used as discovery tools and that chess has often been the proving ground for computing innovation.
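Today Turing's first question can be answered affirmatively in a few lines. The sketch below, again assuming the python-chess library for rule handling, plays out a complete game of random legal moves.

```python
import random
import chess  # python-chess, assumed here for move generation and rules

# A machine that "would obey the rules of chess": play random legal
# moves until the game ends by checkmate, stalemate, or another rule.
board = chess.Board()
while not board.is_game_over():
    board.push(random.choice(list(board.legal_moves)))
print(board.result())  # e.g. "1/2-1/2"
```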
Humans have learned much more about chess thanks to the invention of computers. For one, a player can verify whether a position is technically won when the engine's display shows a forced mate within the plies it has searched. Computers can suggest the best lines of play in chess middlegames using their alpha-beta search algorithms. Computers are also often used to simulate other real-world events through application-specific programs. For example, one can simulate the impact of a meteoroid or other celestial body with Earth, such as the impact that formed Meteor Crater in Arizona. NASA routinely runs simulations when deciding whether its equipment can gather the information it wants.
Roller coaster engineers use computers to track the acceleration of a roller coaster under many conditions, using an accelerometer on a real track and basing their simulations on that data. Stockbrokers analyze the stock market using powerful computers and mathematics. The examples are numerous. This supports the central idea that computers can be used to learn about processes in the real world.
References
Collados, J. (2018). Is AlphaZero really a scientific breakthrough in AI?. [online] Medium. Available at: https://medium.com/@josecamachocollados/is-alphazero-really-a-scientific-breakthrough-in-ai-bf66ae1c84f2 [Accessed 5 Mar. 2018].
En.wikipedia.org. (2018). AlphaZero. [online] Available at: https://en.wikipedia.org/wiki/AlphaZero [Accessed 23 Apr. 2018].
En.wikipedia.org. (2018). Computer chess. [online] Available at: https://en.wikipedia.org/wiki/Computer_chess [Accessed 23 Apr. 2018].
En.wikipedia.org. (2018). Deep Blue versus Garry Kasparov. [online] Available at: https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov [Accessed 5 Mar. 2018].
En.wikipedia.org. (2018). Endgame tablebase. [online] Available at: https://en.wikipedia.org/wiki/Endgame_tablebase [Accessed 23 Apr. 2018].
En.wikipedia.org. (2018). Konrad Zuse. [online] Available at: https://en.wikipedia.org/wiki/Konrad_Zuse [Accessed 23 Apr. 2018].
En.wikipedia.org. (2018). Tensor processing unit. [online] Available at: https://en.wikipedia.org/wiki/Tensor_processing_unit [Accessed 23 Apr. 2018].
Prost, F. (2012). On the Impact of Information Technologies on Society: an Historical Perspective through the Game of Chess. <hal-00933764>
Pavlovic, M. and Greet, A. (2010). The open Sicilian. Glasgow: Quality Chess. https://www.qualitychess.co.uk/ebooks/CuttingEdge1.pdf