
Stockfish dethroned

  • #1
    DeepMind has released a paper on a generalized AlphaZero that plays Go, shogi, and chess. It scored 28 wins with 0 losses in a 100-game series against Stockfish. Here is one of its wins with Black. Of note is that AlphaZero uses a different form of tree search than alpha-beta pruning, and uses no tablebases or opening book.
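
    For anyone wondering what "a different form of tree search" means in practice: the paper describes a Monte Carlo tree search guided by the network's move probabilities and value estimates instead of alpha-beta with a handcrafted evaluation. Below is a minimal sketch of the selection step in that style of search; the node layout and the exploration constant are illustrative assumptions, not DeepMind's actual code.

    import math

    C_PUCT = 1.5  # exploration constant (assumed value, not from the paper)

    class Node:
        def __init__(self, prior):
            self.prior = prior       # P(s, a): move probability from the network
            self.visit_count = 0     # N(s, a): how often this move was explored
            self.value_sum = 0.0     # accumulated value estimates from simulations
            self.children = {}       # move -> Node

        def q_value(self):
            return self.value_sum / self.visit_count if self.visit_count else 0.0

    def select_child(node):
        # Pick the child maximizing Q + U: exploitation plus a prior-weighted
        # exploration bonus that shrinks as the child gets visited more often.
        total_visits = sum(c.visit_count for c in node.children.values())
        best_move, best_score = None, float("-inf")
        for move, child in node.children.items():
            u = C_PUCT * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
            score = child.q_value() + u
            if score > best_score:
                best_move, best_score = move, score
        return best_move, node.children[best_move]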

     

  • #2

    This is incredible.

  • #3
  • #4

    Wow, this is great. I was wondering if they'd try chess.

    It calculates 80 thousand positions per second versus Stockfish's 70 million positions per second.

    Also interesting is that as it trained itself, it eventually came to prefer the English Opening and the Queen's Gambit, and did not like the Sicilian.
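
    Back-of-the-envelope, that quoted speed gap is roughly three orders of magnitude; a quick check of the arithmetic (just restating the numbers above):

    alphazero_nps = 80_000       # positions per second, as quoted above
    stockfish_nps = 70_000_000   # positions per second, as quoted above

    print(stockfish_nps / alphazero_nps)   # ~875x more positions per second for Stockfish
    print(alphazero_nps * 60)              # ~4.8 million positions per 1-minute move
    print(stockfish_nps * 60)              # ~4.2 billion positions per 1-minute move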

  • #5

    Amazing! There's a graph of AlphaZero's Elo relative to Stockfish's, but unfortunately the paper doesn't seem to give a precise figure.

  • #6
    Rocky64 wrote:

    Amazing! There's a graph of AlphaZero's Elo relative to Stockfish's, but unfortunately the paper doesn't seem to give a precise figure.

    It scored 64%, so almost exactly 100 Elo stronger (although in the 1200-game match it only scored 61%).

    This list puts SF at around 3389, so for the 100-game match the rating for AlphaZero would be roughly 3489. (Time control matters too; they played at 1 minute per move, but this is just an estimate.)

    Note though that it was equal to Stockfish after only 4 hours of self-training (300,000 self-training games) and played the 100-game match after 700,000 self-training games. Also note that it's not just a chess-playing program: it beat the best shogi and Go programs too.

    Also note that ratings are relative, e.g. this isn't a perfect comparison to FIDE ratings.
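
    For reference, here is the standard Elo arithmetic behind those figures (the 3389 baseline is just the list rating quoted above):

    import math

    def elo_diff(score):
        # Elo difference implied by a match score (fraction of points won),
        # from the standard logistic expected-score formula.
        return 400 * math.log10(score / (1 - score))

    print(round(elo_diff(0.64)))          # ~100 Elo  (100-game match)
    print(round(elo_diff(0.61)))          # ~78 Elo   (1200-game match)
    print(round(3389 + elo_diff(0.64)))   # ~3489 against the quoted SF rating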

  • #7

    Okay, thanks, Sammy!

  • #8

    What depth did SF reach? One minute per move is not much for current engines.

     

  • #9

    I'm not gonna believe in this until their engine appears in the CCRL 40/40 rating list.

  • #10

    Looks like an extraordinarily strong engine. I would have liked to see it compete in the Chess.com Computer Chess Championship (CCCC) to get an idea of its play style.

  • #11

    I wonder how many games it would take for Carlsen to get a win against DeepMind.

  • #12

    It is important to be careful about assessing the result. Different hardware was used, and it is necessary to be sure this does not account for the 100-point strength difference. However, the paper indicates that AlphaZero would have had to be slowed by a factor of about 30 to be 100 points weaker! Given that 4 TPUs were up against a very powerful computer, this indicates to me that AlphaZero is genuinely better.

    It's also worth noting that asmFish (derived from Stockfish) is currently the highest rated engine (CCRL). The AlphaZero team did not use this derivative - they used the 2016 champion version.

    Notes:

    • The 1200-game match was handicapped by using known openings. With such strength it seems best to rely 100% on self-learning!
    • Concerning the time limit, AlphaZero increased its edge over the entire range of time controls from 0.1 s per move to 1 minute per move, so the result would surely have been at least as good with even more time. Stockfish was stronger only when the time per move was reduced below 1/3 of a second!

    Facts from the DeepMind paper.

    A bit of fun: Lichess analysis of the game. Interesting trend at the end after a close first half.
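
    A rough way to read that factor-of-30 figure: if a 30x slowdown costs about 100 Elo, and we assume (as a simplification, not a claim from the paper) that the Elo loss is roughly even per halving of thinking time, it comes out to only about 20 Elo per time doubling:

    import math

    slowdown_factor = 30    # slowdown needed to give up ~100 Elo, per the paper
    elo_given_up = 100

    doublings = math.log2(slowdown_factor)   # ~4.9 halvings of thinking time
    print(round(elo_given_up / doublings))   # ~20 Elo per doubling (rough estimate)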

  • #13

    Then again, chess is a drawn game. Give SF more time, for example, and it will find good moves. AlphaZero's algorithm is clearly superior, which benefits it at short time controls. Then again, that is in a way the point, isn't it? Get more with less effort.

  • #14

    No, AlphaZero has more of an advantage at longer time controls. There is no reason to think this would lessen with even longer time controls.

  • #15

    Eventually, such AIs are going to be implanted into robots, and I can picture a Terminator scenario... for humans. Yikes.

  • #16

    1. d4 is best. See page 6.

  • #17

    So is there a record of the entire match? Also, in the paper www.chess.com/forum/view/general/stockfish-dethroned it is mentioned that a program called Elmo did in fact defeat AlphaZero. Is Elmo then the John Connor of the chess universe?

  • #18

    Extraordinary achievement. The style of play for Alpha is much more human-like than that of regular engines too.

    However, there is one big blemish: they used a massive 64 cores for the Stockfish engine but gave it only 1 GB of hash memory, which is ridiculously low given the kind of hardware they are using.

    Using such low hash memory ensures that Stockfish is forced to cut longer lines short, and thus Alpha would have the edge in long lines. This is exactly what we see in the games: Stockfish misevaluates lines.

    13. Ncxe5 in game 1 is particularly strange.

    Also, 31. Qxc7 seems way too greedy, allowing ...Bh3 and giving Black counterplay.
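
    To put that 1 GB in perspective, here is a rough capacity estimate (the bytes-per-entry figure is an assumed ballpark for a transposition-table entry, not Stockfish's exact layout):

    hash_bytes = 1 * 1024**3        # 1 GB of hash
    bytes_per_entry = 16            # assumed rough size of one table entry
    nodes_per_second = 70_000_000   # Stockfish's search speed as quoted in this thread

    entries = hash_bytes // bytes_per_entry
    print(entries)                        # ~67 million entries
    print(entries / nodes_per_second)     # capacity reached after ~1 second of search

    # Not every searched node is stored, but over a 1-minute move the table gets
    # overwritten many times, so deep lines lose their cached evaluations.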

  • #19

    Only 10 selected games have been given so far. Demis Hassabis tweeted that more would be coming, specifically early games from before AZ became superhuman. Here are the 10 games so far.

    Elmo is a shogi-playing engine.

  • #20
    This is pretty wild. It's far worse at playing Black, though: only 3 wins out of 50 games, with the rest drawn. As White it manages 25 wins and 25 draws.
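
    Those per-colour numbers line up with the 64% overall score mentioned in #6:

    white_wins, white_draws = 25, 25
    black_wins, black_draws = 3, 47

    wins = white_wins + black_wins        # 28 wins, 0 losses
    draws = white_draws + black_draws     # 72 draws
    print((wins + 0.5 * draws) / 100)     # 0.64 overall score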