Was AlphaZero's victory over Stockfish one big publicity stunt?


Like many, I was stunned to hear reports of an AI destroying the world's strongest engine after a mere 4 hours of learning chess. The game has arguably become somewhat stale: computers completely dominate human players, and the only remaining unknown is whether computers could ever actually solve chess, so this news certainly shook things up. In short, the idea that a chess-playing entity that much stronger than Stockfish could actually exist made us question whether there are still unexplored depths to the game that even our best engines have yet to comprehend.


However, I've since read reports of the various concessions Stockfish was required to make for this match: an obsolete version of the program running on inferior hardware, an arbitrary limit of one minute of thinking time per move, and, if I'm not mistaken, no opening book or endgame tablebases. These handicaps are surely significant enough to cast doubt on the validity of the result. If Google wanted to prove that their toy was all it's hyped up to be, why not agree to a fair fight?


It seems to me that the motivation may have been to generate impressive publicity for themselves, in tandem with reigniting some spark of public interest in a game that has lost much of the mystique it once held. I just can't help wondering whether this news was truly as shocking as it first seemed, and whether the result might have been far more mundane (i.e. all games drawn) if Stockfish had been allowed to perform at full strength.


It seems to me that it was just an experiment in artificial intelligence and machine learning. It was not intended to settle anything.

Google didn't really care in the slightest about publicity; they just wanted to test their AI, and beating even an inferior version of Stockfish (still probably over 3200 strength) with only 4 hours of machine learning is a pretty big breakthrough.

Just like when IBM beat Kasparov, computers weren't actually better than humans yet, but they got their publicity and declined all rematches.


I appreciate that Google are too big to care much about 'exposure', but they presumably wouldn't want their product to look bad either. Could this be why they imposed certain restrictions that to an extent neutered Stockfish? I'm sure it was still very strong, but also feel that many in the chess world would be curious to see a rematch under fairer conditions. Not so much to see which engine is better, but from the viewpoint of studying the game.


If Google had not cared about publicity, they would not have made public press releases about private testing done behind closed doors. To believe otherwise is pure denial.


As far as I know, what the Super GM Nakamura said is true.

I myself experimented with the strongest Stockfish engine on chess.com and other online supercomputing servers, and found the moves to be just nonsense: moves that Stockfish wouldn't play even at depth 15.


Here is the topic I created:




Stockfish 260318 beat Houdini 6.03 in the TCEC Season 11 superfinal (2018) by 59-31.




Houdini 6 is slightly stronger than Stockfish 8.



So, by comparison, the latest Stockfish is roughly equal to AlphaZero (the 9-hour training version was used in the match against Stockfish 8, not the 4-hour version, despite what some of you said and what most comments on YouTube vlogs claim) at the match hardware and time control.


Give it equal processing hardware and it will show who is the boss.
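For calibration, that 59-31 score can be converted into an approximate Elo gap using the standard logistic expected-score formula (a quick sketch; it takes the quoted score at face value and ignores the split between draws and decisive games):

```python
import math

def elo_gap(points_won, games_played):
    """Elo difference implied by a match score, via the logistic model:
    expected_score = 1 / (1 + 10 ** (-diff / 400))."""
    p = points_won / games_played
    return -400 * math.log10(1 / p - 1)

# 59-31 over 90 games, as quoted for the TCEC s11 superfinal above
print(round(elo_gap(59, 90)))  # roughly a 112-point Elo gap
```

So even a convincing-looking superfinal score translates into a gap of only about a hundred Elo points, which is why "slightly stronger" engine comparisons like the one above are so hard to settle without equal conditions.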



Google did not publish PGN files of all the games: neither the whole 100 games (without opening book)

nor the 1200 games (from 12 opening positions from super GMs).


There has been quite a lot of discussion about that. I think it doesn't matter, because this approach opens up a whole new world of possibilities, not only for chess.


Also, follow Leela Chess Zero, an open-source chess AI.

It's learning by playing against itself, like AlphaZero.

The interesting thing is that you can use LCZero in the Arena GUI, on your own PC!

It's currently at about 2400 human level, but you can play against it at your own Elo level.

Play it in the browser (not all Elo levels):



Useful blog post: http://www.bhagwad.com/blog/2018/personal/how-to-run-leela-chess-engine-lczero-in-arena-at-a-specific-elo.html/

Smositional wrote:

There has been quite a lot of discussion about that. I think it doesn't matter, because this approach opens up a whole new world of possibilities, not only for chess.

I agree with that: AI will have a profound impact on human civilization in the future.

But did you guys see those headlines like "AlphaZero masters the complete history of chess" or "AlphaZero learns more than what humans have learnt in a thousand years of chess (including brute-force engines)"?


These have been complete bluffs.

I have also seen a chess YouTuber post a video claiming "AlphaZero refutes 1.e4".

The reality (for now) is nowhere near that level of chess achievement.



Quite simple, really: as a company that makes 90% of its income from free, globally available services, the fact that Google hasn't published A0 for open play means that their claim that it is stronger than SF is insincere/misleading. Even if that is arguable, there's no doubt in my mind they published the games for publicity.


It was significant but hyped as well. Before AlphaZero, this kind of AI was not a viable chess engine; AlphaZero proved that it is.


There is something people could get wrong: the "after a mere 4 hours of learning chess" framing is kind of messed up.

It's NOT 4 hours in human time; I mean, it's not like you asked someone to practice chess for 4 hours. Those hours represent tens of millions of self-play games.

Also, Google ran AlphaZero on their own specialized hardware to compute its moves, which was not the case for Stockfish. A fair match would use the same hardware for both engines.

That said, I think Google mainly wanted to show the power of deep learning.
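To make the "hours vs. games" distinction concrete, here is a toy self-play learner for tic-tac-toe. This is NOT AlphaZero's actual algorithm (which uses a deep network plus Monte Carlo tree search); it is just a tabular sketch of the same learn-from-your-own-games idea, with all function names and parameters invented for illustration:

```python
import random

# Toy sketch: a tabular self-play learner for tic-tac-toe. The point is
# that "training time" is really a count of complete self-play games,
# each one nudging a value estimate toward the observed outcome.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def play_one_game(values, epsilon=0.2, alpha=0.5):
    """Play one self-play game; update values[state] toward the result for X."""
    board, player, history = "." * 9, "X", []
    while True:
        history.append(board)
        legal = moves(board)
        if not legal:
            result = 0.5  # board full with no winner: draw
            break
        def value_after(m):
            nxt = board[:m] + player + board[m + 1:]
            v = values.get(nxt, 0.5)          # unseen states start at 0.5
            return v if player == "X" else 1.0 - v
        # epsilon-greedy: mostly pick the best-looking move, sometimes explore
        m = random.choice(legal) if random.random() < epsilon else max(legal, key=value_after)
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w:
            history.append(board)
            result = 1.0 if w == "X" else 0.0
            break
        player = "O" if player == "X" else "X"
    # back up the final result into every state visited during the game
    for state in history:
        v = values.get(state, 0.5)
        values[state] = v + alpha * (result - v)
    return result

random.seed(0)
values = {}
n_games = 5000
for _ in range(n_games):
    play_one_game(values)
print(len(values), "states seen after", n_games, "self-play games")
```

Even this toy version touches thousands of distinct positions in a few seconds on one CPU core; scaled up to chess on Google's hardware, a few hours of wall-clock time corresponds to an enormous number of games.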


What happened with that Giraffe program, btw? Wasn't it somehow teaching itself too?

Pulpofeira wrote:

What happened with that Giraffe program, btw? Wasn't it somehow teaching itself too?

Giraffe (2015) evolved into AlphaZero in 2017; both were written by the same author, Matthew Lai. If you want more detail, ask him directly on talkchess.