This is incredible.
Wow, this is great, I was wondering if they'd try chess.
It calculates 80 thousand positions per second vs Stockfish's 70 million positions per second.
Also interesting is that as it was training itself, it eventually started to prefer the English opening and Queen's gambit, and did not like the Sicilian.
Amazing! There's a graph of AlphaZero's Elo relative to Stockfish's, but unfortunately the paper doesn't seem to give a precise figure.
It scored 64%, so almost exactly 100 Elo stronger (although in the 1,200-game match it only scored 61%).
This list puts Stockfish at around 3389, so for the 100-game match the rating for AlphaZero would be roughly 3489. (Time control matters too; they played at 1 minute per move, but this is just for an estimate.)
Note, though, that it was equal to Stockfish after only 4 hours of self-training (300,000 self-training games) and played the 100-game match after 700,000 self-training games. Also note that it's not just a chess-playing program: it beat the best Shogi and Go programs too.
Also note that ratings are relative, so this isn't a perfect comparison to FIDE ratings.
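The "64% means roughly +100 Elo" arithmetic above comes from the standard logistic Elo model, which maps a match score (win fraction) to a rating difference. A minimal sketch (the function name is just for illustration):

```python
import math

def elo_diff(score: float) -> float:
    """Rating difference implied by a match score under the logistic Elo model.

    score is the fraction of points won (0 < score < 1); a positive result
    means the scorer is rated that many points above the opponent.
    """
    return -400 * math.log10(1 / score - 1)

# 64% over the 100-game match -> almost exactly +100 Elo
print(round(elo_diff(0.64)))  # -> 100
# 61% over the 1,200-game match -> roughly +78 Elo
print(round(elo_diff(0.61)))  # -> 78
```

So the two match scores quoted above imply slightly different rating gaps, which is why the estimate of "roughly 3489" only applies to the 100-game match.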
Okay, thanks, Sammy!
What depth did Stockfish reach? One minute per move is not much for current engines.
I'm not gonna believe in this until their engine appears in the CCRL 40/40 rating list.
Looks like an extraordinarily strong engine. I would have liked to see it compete in the Chess.com Computer Chess Championship (CCCC) to get an idea of its play style.
I wonder how many games it would take for Carlsen to get a win against DeepMind's AlphaZero.
It is important to be careful about assessing the result: different hardware was used, and we need to be sure that doesn't account for the 100-point strength difference. However, the paper indicates that AlphaZero would have had to be slowed down by a factor of about 30 to be 100 points weaker! Given that 4 TPUs were up against a very powerful computer, this suggests to me that AlphaZero is genuinely better.
It's also worth noting that asmFish (derived from Stockfish) is currently the highest rated engine (CCRL). The AlphaZero team did not use this derivative - they used the 2016 champion version.
Facts from the DeepMind paper.
A bit of fun: Lichess analysis of the game. Interesting trend at the end after a close first half.
Then again, chess is a drawn game. Give Stockfish more time, for example, and it will find good moves. AlphaZero's algorithm is clearly superior, which benefits it at short time controls. Then again, that is in a way the point, isn't it? Get more with less effort.
No, AlphaZero has more of an advantage with longer time controls. There is no reason this would become less with even longer time controls.
Eventually, such AI are going to be implanted into robots, and I can picture a Terminator scenario....for humans. Yikes.
1. d4 is best. See page 6.
So is there a record of the entire match? Also, in the thread www.chess.com/forum/view/general/stockfish-dethroned it is mentioned that a program called Elmo did in fact defeat AlphaZero. Is Elmo then the John Connor of the chess universe?
Extraordinary achievement. AlphaZero's style of play is much more human-like than regular engines', too.
However, there is one big blemish: they ran Stockfish on a massive 64 cores but gave it only 1 GB of hash memory, which is ridiculously low given the kind of hardware they were using.
Such a low hash ensures that Stockfish will be forced to cut longer lines, so AlphaZero would have the edge in long lines. This is exactly what we see in the games: Stockfish misevaluates lines.
13. Ncxe5 is particularly strange in game 1.
Also, 31. Qxc7 seems way too greedy, allowing ...Bh3 and giving Black counterplay.
Only 10 selected games have been released so far. Demis Hassabis tweeted that more would be coming, specifically early games from before AZ became superhuman. Here are the 10 games so far.
Elmo is a shogi playing engine.