Objectively Speaking, Is Magnus a Patzer Compared to StockFish and AlphaZero?

Avatar of SeniorPatzer
SmyslovFan wrote:

SP, my comment about chess not being dead was a direct response to your post #42. 

 

I'm reading comments by GMs on Facebook saying that this is a great day for chess because of how much AlphaZero could teach us about the game!

For every Eeyore shaped cloud, there's a silver lining.

 

Look how FM Mike Klein opened up his essay:  "Chess changed forever today."

 

Then a commenter just posted this:  "Most comments treat the DeepMind machine as an engine. That is entirely wrong. There is no chess engine at all. There is an AI algorithm which trained itself for 9 hours, starting from the bare chess rules. Leave it to train for 100 days, and you might expect something much more powerful. Admittedly, after some time, it would refine its evaluations to an extent that further training would not improve it at all."

 

Wowwwww.  Triple Exclam.  It's what I thought back in '97 or so when Deep Blue beat WC Garry Kasparov.  The brute number crunching will just get faster and deeper.

 

Now we have a self-trained AI algorithm, where the programmers can either keep improving the algorithm or simply let it train much longer.

 

Yowsa!  AlphaZero's chess strength will be "God-Like".

 

But AlphaZero will not turn into a sentient life-form.  Thank goodness!

Avatar of Elroch
batgirl wrote:

I'm rather illiterate on the subject of computers, but I do know that in the 1950s developers were hoping chess could be the construct for understanding AI, and visionaries such as Mikhail Botvinnik saw computers learning to play chess like humans, only better. But rather quickly number crunching became the way to go, and AI development - in relation to chess at least - fell by the wayside. Pure calculation did, however, prove to be impossibly strong, especially as computers themselves became faster along with improved algorithms and better evaluations.
Is this so-called AlphaZero a sort of vindication of the early computer-chess ideas?

As someone who has quite a few years' knowledge and experience of this area (at least the parts of it known as "machine learning"), I would say yes, you are right: the success of AlphaZero is a vindication of the idea that artificial intelligence can produce entities that do amazing things (better than conventional computer programs). Deep Blue (and its offspring Deeper Blue) was not an AI: it was a traditional computer system where humans put in the intelligence and the computer did the hard work, plus some specialised hardware to make it run faster. Conventional chess engines have a tree search algorithm and a positional evaluation function, and the program is designed to implement those algorithms efficiently. Deep Blue incorporated the wisdom of professional chess players in its evaluation function, and used custom ASICs designed for running chess code: that is what made it better than other programs of its time (but still a long way short of the best PC-based programs of today).
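To make the contrast concrete, here is a minimal sketch of that conventional recipe: a hand-written evaluation function plus a plain fixed-depth tree search that applies it (illustrative Python using the python-chess library, and of course nothing like Deep Blue's actual code):

```python
# Toy "conventional engine": hand-coded evaluation plus a plain tree search.
# Requires the python-chess library (pip install chess); purely illustrative.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def evaluate(board: chess.Board) -> int:
    """Hand-written evaluation: simple material count from the side to move's view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def negamax(board: chess.Board, depth: int) -> float:
    """Fixed-depth search that simply trusts the hand-written evaluation at the leaves."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best
```

All of the "intelligence" lives in evaluate(); the search just applies it exhaustively, which is exactly the part that specialised hardware can speed up.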

By contrast, AlphaZero uses a form of artificial intelligence called reinforcement learning, which basically involves playing chess in order to learn very efficiently from experience how to play better and better.

It does have algorithms in it corresponding to the tree search and positional evaluation, but it writes them itself based on experience and keeps on tweaking them as it plays.

One clever trick is called bootstrapping. Suppose an AI evaluates a position and works out the best move, and a few moves later, with the apparent best moves played, the evaluation is different. This happens quite often even in games between top engines: it is a necessary part of how games get won!

Without the game having ended, the AI now has reason to believe that for reasons of consistency its evaluation function is not quite right, so it adjusts it with a little nudge that would have made the earlier evaluation more similar to the later one. The useful thing is that because the evaluation routine is general, it will now tend to evaluate other positions that are somewhat similar a little differently too (and the hope - and empirically the truth - is that that will make it play better). This process repeated for a few tens of millions of moves (in 600,000 games if I recall) was sufficient to produce what may well be the strongest chess player ever.
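In code terms, that "little nudge" is essentially a temporal-difference update: move the parameters so that the earlier evaluation creeps toward the later one. A toy sketch, assuming a hypothetical linear evaluation over some made-up position features (nothing like DeepMind's actual implementation):

```python
# Bootstrapping in miniature: nudge the earlier evaluation toward the later one.
# `params`, `features` and `evaluate` are hypothetical stand-ins for the network.
LEARNING_RATE = 0.01

def evaluate(params, features):
    """Toy linear 'evaluation network': a weighted sum of position features."""
    return sum(w * f for w, f in zip(params, features))

def td_nudge(params, earlier_features, later_features):
    """Shift the weights so the earlier evaluation agrees a bit more with the later one."""
    error = evaluate(params, later_features) - evaluate(params, earlier_features)
    # For a linear evaluation, the gradient w.r.t. each weight is just that feature,
    # so a small step in that direction shrinks the inconsistency.
    return [w + LEARNING_RATE * error * f
            for w, f in zip(params, earlier_features)]
```

Because the same weights are used for every position, each nudge also shifts the evaluation of other, similar positions - which is the point.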

Avatar of USArmyParatrooper
Debistro wrote: 

Yes we are. Never thought I'd see the day when Skynet would be a reality and here we are.

Very soon, this AI will learn how to drive cars and then we will have driverless cars, etc.

Become soldiers that never miss a shot.

This AI will take over our jobs. https://www.huffingtonpost.com/stowe-boyd/robots-jobs-purpose-humans_b_5689813.html

But since we are talking about chess here, I have to admit that even though Caruana won again last night, and he could be regaining his old form, it didn't excite me any more. Human chess suddenly looks a lot less exciting.

Here is a machine that is playing exciting chess, sacrificing material, and winning. It is probably putting an end to "Chess Styles", which are a characteristic of humans; it doesn't have any, it just goes for the most direct route to a win.

 

We already have one 😉💪

Avatar of SmyslovFan

BBC recently reported that ~800 million jobs will be lost to automation by 2030. 

Avatar of admkoz
SeniorPatzer wrote:

Mick, Smyslov Fan, MickyNJ, Debistro, et al,

 

I bought quite a few chess books, and I am/was planning to storm the mountain of 2000 and 2200 after a 30-year layoff.  I'm not yet 60, and I wanted to make it by age 65.

 

Now I'm wondering if OTB chess is dead if/when AlphaZero "solves" chess.

 

If so, then I'd like to make an early determination on whether to stop this Don Quixote quest to attain 2000/2200.

 

Moreover, I have a 3rd grader who just got into chess.  It would be cool if he made NM, but if AlphaZero "solves" chess, what's the point of going further?

Seems to me like AZ didn't try to "solve" chess.  "Solving" would be infeasible.  There are far too many positions - and the full game tree has well more nodes than the number of electrons in the universe - so there would be no way to store the "solution" even if it were calculated.  
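A rough back-of-the-envelope check of that storage point (both figures below are loose order-of-magnitude assumptions, not exact numbers):

```python
# Back-of-the-envelope: storage needed for a tablebase-style "solution" of chess.
# Assumptions (rough orders of magnitude, not exact): ~1e44 legal positions,
# a single bit stored per position (not even enough for win/draw/loss),
# and ~1e23 bytes as a generous guess at all digital storage on Earth.
LEGAL_POSITIONS = 1e44
BITS_PER_POSITION = 1
WORLD_STORAGE_BYTES = 1e23

solution_bytes = LEGAL_POSITIONS * BITS_PER_POSITION / 8
print(f"A 'solution' would need about {solution_bytes:.0e} bytes,")
print(f"roughly {solution_bytes / WORLD_STORAGE_BYTES:.0e} times all the storage on Earth.")
```

However you shuffle the assumptions, the answer stays absurd by dozens of orders of magnitude.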

Avatar of admkoz
Elroch wrote:
batgirl wrote:

Is this so-called AlphaZero a sort of vindication of the early computer-chess ideas?

As someone who has quite a few years' knowledge and experience of this area (at least the parts of it known as "machine learning"), I would say yes, you are right [...] This process repeated for a few tens of millions of moves (in 600,000 games if I recall) was sufficient to produce what may well be the strongest chess player ever.

What I am curious about is whether it "figures out" things like "don't give up a free queen", or does it really just have to figure that out again every time such an option presents itself?  

Avatar of SeniorPatzer
SmyslovFan wrote:

BBC recently reported that ~800 million jobs will be lost to automation by 2030. 

 

Wow.  That's a lot of jobs.

Avatar of batgirl
SmyslovFan wrote:

BBC recently reported that ~800 million jobs will be lost to automation by 2030. 

That was just an automated report.

Avatar of batgirl
Elroch wrote:
batgirl wrote:

 

As someone who has quite a few years' knowledge and experience of this area (at least the parts of it known as "machine learning"), I would say yes, you are right,...

Yay!

Avatar of SeniorPatzer
batgirl wrote:
SmyslovFan wrote:

BBC recently reported that ~800 million jobs will be lost to automation by 2030. 

That was just an automated report.

 

Very funny Batgirl, lol.

 

SmyslovFan wrote:  "I'm reading comments by GMs on Facebook saying that this is a great day for chess because of how much AlphaZero could teach us about the game!

For every Eeyore shaped cloud, there's a silver lining."

 

Adapting slightly:  "I'm reading comments by Executives on Facebook saying that this is a great day for Profits because of how much AlphaZero could teach us about automation, cost-cutting, and efficiency!

For every Eeyore shaped cloud, there's a silver lining."

Avatar of IpswichMatt
batgirl wrote:
SmyslovFan wrote:

BBC recently reported that ~800 million jobs will be lost to automation by 2030. 

That was just an automated report.

Genius!

Avatar of SmyslovFan

Batgirl's post is an excellent argument in favor of installing "likes" on chess.com!

Avatar of chessgm003

I don't think so.

Avatar of Elroch
admkoz wrote:

What I am curious about is whether it "figures out" things like "don't give up a free queen", or does it really just have to figure that out again every time such an option presents itself?  

That's a good question.

If my understanding is correct, when AlphaZero started to teach itself how to play chess by practice, it didn't even have knowledge of the rough value of pieces. What it had was a way of determining the legal moves in a position and a way of representing any chess position.

The neural networks that estimate (1) the probability that a move is best and (2) the expected score starting from a specific position started off with no knowledge. Usually neural networks are initialised with small random parameters rather than the obvious "all moves are equally likely to be best" and "all positions are worth about 0.5 points", so it would arbitrarily think some moves and positions were better than others, but it would start off doing no better than blind guessing.

From there its experience improves these networks, and after a while it would learn that positions with a queen missing tended not to have such a good expected result. Well, actually it would get a general idea that more material is better, learning about the values of all of the pieces (and positional factors) in parallel. It would also learn that moves dropping a queen have a lower probability of being best. But later it would learn from its exploring that a forcing queen sacrifice might be worth looking at (i.e. its probability of being best is not too tiny), and so on.
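As a deliberately crude illustration of how piece values can emerge "in parallel" purely from results, here is a toy stand-in: a tiny logistic model fed nothing but material counts and invented game outcomes (this is only a sketch of the general idea, not anything from the AlphaZero paper):

```python
# Toy illustration: piece "values" emerging purely from game outcomes.
# Features are material differences (pawn, knight, bishop, rook, queen);
# the label is whether that side went on to win. All the data is invented.
import numpy as np

rng = np.random.default_rng(0)
TRUE_VALUES = np.array([1.0, 3.0, 3.0, 5.0, 9.0])   # used only to fake the outcomes

# Fake "self-play" data: random material imbalances and correspondingly noisy results.
X = rng.integers(-2, 3, size=(5000, 5)).astype(float)
win_prob = 1.0 / (1.0 + np.exp(-0.5 * X @ TRUE_VALUES))
y = (rng.random(5000) < win_prob).astype(float)

# Logistic regression by plain gradient ascent: the weights start at zero
# ("no knowledge") and drift toward something like the familiar piece values.
w = np.zeros(5)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)

print("learned piece weights, relative to the pawn:", np.round(w / w[0], 1))
```

Nothing tells the model the relative worth of the pieces; an ordering very much like the familiar 1 / 3 / 3 / 5 / 9 scale falls out of the results alone, and a deep network does the same thing with far richer positional features.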

I have put this crudely, but basically a big neural network learns to encapsulate concepts that can be very sophisticated, and outputs numbers that summarise the combined impact of the states of all of these concepts on the objective (i.e. the objectives of finding the best move and of estimating the likely result).

One property of a sufficiently large and complex neural network trained in the way used by AlphaZero is that it learns to evaluate positions better and better without exploring the moves at all! This is how humans get to be able to play very fast chess surprisingly well: instinctive evaluation that is not bad. The better the evaluation routine, and the better the network used for estimating the probability that a move is best, the less actual calculation needs to be done and the greater the precision for a given amount of calculation.

It seems clear that AlphaZero "thinks" a lot more like humans than most computer programs do, and one consequence of this is that its standard of play increases more rapidly with the amount of time to think/compute (conventional engines don't improve as much as humans with extra time, presumably because their search is not selective enough).
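To make the "selective search" point concrete: AlphaZero-style systems choose which move to explore next with a PUCT-type rule, in which the learned prior steers effort toward a handful of plausible moves rather than everything. A schematic sketch (the constants and bookkeeping are simplified, not DeepMind's exact formula):

```python
# Schematic PUCT move selection: the learned prior P focuses search on a few
# promising moves, which is what makes the tree search so selective.
import math

def select_move(stats, c_puct=1.5):
    """stats maps each move to (visit_count, mean_value, prior_probability)."""
    total_visits = sum(n for n, _, _ in stats.values())
    def score(item):
        _, (n, q, p) = item
        # Exploitation (q) plus prior-weighted exploration that decays with visits.
        return q + c_puct * p * math.sqrt(total_visits + 1) / (1 + n)
    return max(stats.items(), key=score)[0]

# A high-prior move gets explored even before it has a single visit.
stats = {"e2e4": (0, 0.0, 0.55), "a2a3": (0, 0.0, 0.01), "d2d4": (3, 0.52, 0.30)}
print(select_move(stats))   # -> e2e4
```

A sharper prior means fewer wasted visits, which is why extra thinking time converts into playing strength so efficiently.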

Avatar of Elroch
batgirl wrote:
Elroch wrote:
batgirl wrote:

 

As someone who has quite a few years' knowledge and experience of this area (at least the parts of it known as "machine learning"), I would say yes, you are right,...

Yay!

Thanks for quoting the less dull part of my post!

Avatar of Lyudmil_Tsvetkov

Looking forward to Alpha telling me this place is not the best one to post...

Avatar of usmansk

From an algorithmic point of view, Stockfish was evaluating 80 million positions in one minute and AlphaZero only 700k. This was a huge advantage for Stockfish. The real fun will come when both are allowed to evaluate 80 million positions in one minute.

Avatar of SeniorPatzer
Lyudmil_Tsvetkov wrote:

Looking forward to Alpha telling me this place is not the best one to post...

 

Looking forward to seeing Patzer GMs annotating AlphaZero's brilliancies for the benefit of other patzers, lol.

Avatar of SeniorPatzer
SeniorPatzer wrote:
Lyudmil_Tsvetkov wrote:

Looking forward to Alpha telling me this place is not the best one to post...

 

Looking forward to seeing Patzer GMs annotating AlphaZero's brilliancies for the benefit of other patzers, lol.

 

GM Peter Svidler on AlphaZero:

 

"Also... I don't like wearing tinfoil hats, but I looked through some games. The games were absolutely fantastic, phenomenal. But it was said that neither chess engine had opening books. Alpha Zero won several incredible games in Queen's Indian, with Qc2, c5, d5. This is central theory... (Laughs) The central theory that was developed by Borya Gelfand, Lyova Aronian and others from the ground up ten or so years ago. And now a computer just plays like that on its own? This is absolutely central theory!

It just improves theory...

Yes. It's an absolutely central theoretical line. We're told that it has no opening book, it's just so devilishly strong that after training for a few hours it's able to replicate things that took humans years to develop. This was a breakthrough in Queen's Indian, you remember. This line was a breakthrough. I was in awe of the machine's games, but I was just astonished when I saw openings. I thought, "Damn, if it can actually..." I can believe that it can play equal positions greatly, but if deep learning can actually replicate opening lines and improve upon them, it's just stunning."

The GM Patzer Svidler is in awe of AlphaZero.  Thus Senior Patzer is similarly dumbstruck.

Avatar of admkoz
Elroch wrote:
admkoz wrote:

What I am curious about is whether it "figures out" things like "don't give up a free queen", or does it really just have to figure that out again every time such an option presents itself?  

 

From there its experience improves these networks, and after a while it would learn that positions with a queen missing tended not to have such a good expected result. Well, actually it would get a general idea that more material is better[...]

I have put this crudely, but basically a big neural network learns to encapsulate concepts that can be very sophisticated[...]

So you're saying it DOES figure out that "more material is better", meaning that it can evaluate positions it has never seen before on that basis.  

 

You and I can glance at a board, see that there are no immediate threats, see that Black is up a rook, and figure Black has it in the bag, even if an actual mate is 30+ moves away.  We'll be right 999,999 times out of a million.  Can AlphaZero do that?  
