Hi Jeff,
I too was stunned by this, and were it not for the fact that my friend is totally honest, I would have had trouble believing it.
Yeah. Some chess players may need some mental health counseling.
Sorry guys, but @Rsava's summary above is an accurate reflection of the tenor of discussions within the cheating forum.
Indeed, you can glean a very similar story from @Hicetnunc, a chess coach and long time (respected) member of that same forum.
On balance, few people have the time (or inclination) to think (or dig) deeply on the internet. That's unfortunate.
Maybe Alpha Zero already solved it and that's why they aren't releasing those other 90 games ??
Could be. AlphaZero definitely revealed some truths about chess that were previously overlooked. I learned a lot about chess theory from those 10 games.
It is not accurate, because chess.com has made huge advances in detecting cheats, and that's what matters. The group cannot be characterised in a simple way: it ranges from many people with no special knowledge to those who have demonstrated skill in detecting cheats and in developing techniques for doing so, and only the latter matter. Those people often contribute positively by identifying players who end up getting banned. If someone chooses to ignore this, or reveals their ignorance of it, I can't stop them.
The reason I drew attention to the group is that I thought anyone who had been a member would understand, from explicit examples, that there are ways to detect cheating. Unfortunately I overestimated Rsava.
Hicetnunc says that he reports everyone he suspects of cheating, and I am sure he does not do it without expecting some success.
NOPE. Not Accurate, Sorry.
It's so easy to toggle into and out of computer suggested moves that people are essentially undetectable at all rating levels.
In that forum, a U.S. Blitz speed champion got targeted by cheats, and a very long discussion ensued. Moreover, cheats effected a wholesale revision (by Chess.com) of the entire Rapid rating scale (because so many ratings were adversely affected by engine use and multiple accounts), to cite just a few areas.
On balance, you only need a few engine moves per game to get a significant material advantage. In my particular case, I got (just a bit) tired of winning 1-3 points of material in the first 20 moves of Game in 15/5, and then watching (somewhat helplessly) as my opponent proceeded to play like Superman for the rest of the game. Been there, done that, and those discussions are long over for me. I simply keep track (superficially now) of the threads in that forum, b/c I am still a member.
Everyone is free to draw their own conclusions. But (essentially) the same sentiment I expressed above has been voiced multiple times (over the past couple years) by reputable people in that forum, and by @Hicetnunc, repeatedly.
The simple fact is that you only need a 1-1/2 pawn advantage to acquire a "winning advantage" in a chess game. You only need a few (key) computer moves to achieve that advantage, then you simply play the rest of the game yourself. On balance, it's only the cheating outliers that get caught. And they are quickly back on the site under another name and account.
It's too expensive to police millions of accounts effectively. Hell, the site is still straddling V2 and V3 problems, along with the usual hardware and software complaints. Why expense the cheats? It surely costs too much to pay someone a coder's salary to oversee that process, effectively.
The other dirty little secret is that some cheats participate with verve in those Forum discussions as well. Neat trick. Apparently there is "safety" as well as "wisdom" in crowds. Not my problem.
I think the site is great, even if the hardware and software is a bit twitchy, and perhaps too often freezes up when I play Game in 5/5.
Best Wishes. Now back to "Solving Chess," whatever that means, substantively.
I'm guessing a Hidden Markov Model would be very helpful for finding cheating in this context, even for just a few moves a game. These work better than you might expect, although they are certainly not perfect. I don't see any literature on it, though; possibly chess is too complex for this sort of approach.
The tests actually used are presumably more straightforward, like comparing the strength of your moves to your actual Elo.
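The toggling idea can be sketched with a toy two-state HMM. Everything below is assumed for illustration only (the two hidden states, the made-up transition and emission probabilities, the coarse move-quality labels); it just shows how Viterbi decoding could flag a suspiciously accurate stretch of moves in an otherwise ordinary game.

```python
# Toy sketch: a two-state HMM ("human" vs "engine") over move-quality
# labels. All probabilities are invented for illustration; real
# detection systems are far more sophisticated.

import math

STATES = ["human", "engine"]

START = {"human": 0.9, "engine": 0.1}

# Toggling between states is assumed to be rare.
TRANS = {
    "human":  {"human": 0.95, "engine": 0.05},
    "engine": {"human": 0.10, "engine": 0.90},
}

# Emission probabilities over coarse move-quality labels
# ("best" = matches the engine's top choice).
EMIT = {
    "human":  {"best": 0.30, "good": 0.45, "inaccuracy": 0.25},
    "engine": {"best": 0.90, "good": 0.09, "inaccuracy": 0.01},
}

def viterbi(observations):
    """Return the most likely hidden-state sequence for a list of
    move-quality labels, using log probabilities for stability."""
    V = [{s: math.log(START[s]) + math.log(EMIT[s][observations[0]])
          for s in STATES}]
    back = []
    for obs in observations[1:]:
        row, ptr = {}, {}
        for s in STATES:
            best_prev = max(STATES,
                            key=lambda p: V[-1][p] + math.log(TRANS[p][s]))
            row[s] = (V[-1][best_prev] + math.log(TRANS[best_prev][s])
                      + math.log(EMIT[s][obs]))
            ptr[s] = best_prev
        V.append(row)
        back.append(ptr)
    # Trace back the best path from the most likely final state.
    state = max(STATES, key=lambda s: V[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

# A game whose middle stretch is suspiciously accurate.
moves = ["good", "inaccuracy"] + ["best"] * 6 + ["good", "inaccuracy"]
print(viterbi(moves))
# ['human', 'human', 'engine', 'engine', 'engine', 'engine',
#  'engine', 'engine', 'human', 'human']
```

With these invented numbers, a run of about five or more engine-matching moves is what it takes to outweigh the low toggling probability; shorter bursts stay labeled "human", which mirrors the worry that a few key moves per game may slip through.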
Join the forum and go down the rabbit hole, if you have the time and inclination.
End of Story, and Best Wishes.
I think they use the CAPS system now to determine the percentage of moves that match Stockfish's moves.
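CAPS itself is proprietary, so the following is only a hypothetical sketch of the underlying idea: align the player's moves with an engine's first choice in each position and report the match percentage. The move lists below are made up for illustration.

```python
# Illustrative only: a bare-bones "engine match rate", the statistic the
# CAPS idea is built around. In practice the engine's best move would
# come from analysing each position; here both lists are simply given.

def engine_match_rate(played_moves, engine_best_moves):
    """Fraction of the player's moves that coincide with the engine's
    first choice. The two lists are aligned by move number."""
    if len(played_moves) != len(engine_best_moves):
        raise ValueError("move lists must be the same length")
    matches = sum(1 for played, best in zip(played_moves, engine_best_moves)
                  if played == best)
    return matches / len(played_moves)

# Hypothetical game fragment in algebraic notation.
played = ["e4", "Nf3", "Bb5", "Ba4", "O-O", "Re1"]
engine = ["e4", "Nf3", "Bb5", "Bxc6", "O-O", "Re1"]
print(engine_match_rate(played, engine))  # 5 of 6 moves match
```

A raw match rate alone is of course crude: strong players legitimately match the engine often in forced positions, which is presumably why any real system would weight positions by difficulty.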
zborg, you are mathematically wrong. What you describe would balance ineffectiveness against easy statistical detection. Try to work out why. Don't get me wrong - it is certainly possible for someone to get an insignificant advantage without being detectable, but they'd be better off reading a book on chess.
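The "easy statistical detection" point can be made concrete with a rough sketch. The baseline match rate and sample sizes below are assumptions for illustration, but they show how a modest, sustained excess of engine-matching moves becomes essentially impossible to explain by luck over a large sample.

```python
# Sketch of why "a few engine moves per game" adds up statistically.
# Suppose players of a given rating are expected to match the engine's
# top choice on 35% of moves (an assumed baseline), but one player
# matches on 45% over a large sample of moves.

from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of seeing at
    least k matches by chance under the baseline rate."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

n = 1000          # moves sampled across many games
baseline = 0.35   # assumed match rate for this rating band
observed = 450    # observed matches: 45%

p_value = binomial_tail(n, observed, baseline)
print(p_value)    # vanishingly small: the excess is hard to blame on luck
```

A 10-percentage-point excess over 1000 moves is more than six standard deviations out, which is the trade-off being described: the fewer engine moves you use, the less you gain; the more you use, the faster the statistics give you away.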
Jeff, why would you assume their system was limited to that?
hairhorn, I have often thought of the idea of using an AI to detect cheats. HMMs are a model that might be applied, but would not be my choice.
I am puzzled why anyone would think chess.com would not be keen to kick every cheater off the site.
As you probably know, DeepMind has recently published a paper on AlphaZero [1], a system that learns by itself and is able to master games like chess or Shogi.
Before getting into details, let me introduce myself. I am a researcher in the broad field of Artificial Intelligence (AI), specialized in Natural Language Processing. I am also a chess International Master, currently the top player in South Korea, although practically inactive for the last few years due to my full-time research position. Given my background, I have tried to build as constructive and reasoned an opinion on the subject as I could. For obvious reasons, I have focused on chess, although some arguments are general and may be extrapolated to Shogi or Go as well. This post represents solely my view, and I may have misinterpreted some particular details on which I am not an expert; if so, I apologize in advance.
Chess has arguably been the most widely studied game in the context of “human vs machine” and AI in general. One of the first breakthroughs in this area was the victory of IBM's Deep Blue in 1997 over the then world champion, Garry Kasparov [2]. At that time machines were considered inferior to humans in the game of chess, but from that point onwards the “battle” has been clearly won by machines.
On a related note, DeepMind released AlphaGo a couple of years ago, a Go engine able to beat some of the best human Go players [3]. Note that the complexity of Go is significantly larger than that of chess. This has been one of the main reasons why, even with the more advanced computational power available nowadays, Go was still a game in which humans were stronger than machines, so this may be considered a breakthrough in itself. This initially impressive result was improved upon by AlphaGo Zero which, as claimed by the authors, learnt to master Go entirely through self-play [4]. More recently came AlphaZero, a similar model that trains a neural network architecture with a generic reinforcement learning algorithm, and which has beaten some of the best engines in Shogi and chess [1].
This feat has been extensively covered by mass media [5,6] and chess-specialized media [7,8], with bombastic notes about the importance of the breakthrough. However, a careful reading of AlphaZero's paper raises reasonable doubts about the validity of its overarching claims. Some of these concerns may not seem important on their own and may be explained by the authors. Nevertheless, taken together they cast reasonable doubt on the current scientific validity of the main claims. In what follows I enumerate some general concerns:
We should scientifically scrutinize alleged breakthroughs carefully, especially in the period of AI hype we live in now. It is the responsibility of researchers in this area to describe and advertise our achievements accurately, and to try not to contribute to the growing (often self-interested) misinformation and mystification of the field. In fact, this early December at NIPS, arguably the most prestigious AI conference, some researchers raised important concerns about the lack of rigour of this scientific community in recent years [15].
In this case, given the relevance of the claims, I hope these concerns will be clarified and resolved so that we can accurately judge the actual scientific contribution of this feat, a judgement that is not possible to make right now. Probably, with a better experimental design and an effort at reproducibility, the conclusions would turn out a bit weaker than originally claimed. Or probably not, but it is hard to assess unless DeepMind puts some effort in this direction. I personally have a lot of hope in the potential of DeepMind to achieve relevant discoveries in AI, but I hope these achievements will be developed in a way that can be easily judged by peers and can contribute to society.
— — — — —
[1] Silver et al. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.” arXiv preprint arXiv:1712.01815 (2017). https://arxiv.org/pdf/1712.01815.pdf
[2] https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
[3] https://www.theguardian.com/technology/2016/mar/15/googles-alphago-seals-4-1-victory-over-grandmaster-lee-sedol
[4] Silver et al. “Mastering the game of go without human knowledge.” Nature 550.7676 (2017): 354–359. https://www.gwern.net/docs/rl/2017-silver.pdf
[5] https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours
[6] http://www.bbc.com/news/technology-42251535
[7] https://chess24.com/en/read/news/deepmind-s-alphazero-crushes-chess
[8] https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match
[9] http://chessok.com/?page_id=27966
[10] https://hunonchess.com/houdini-is-tcec-season-10-champion/
[11] https://www.chess.com/news/view/alphazero-reactions-from-top-gms-stockfish-author
[12] Link to reproduce the 10 games of AlphaZero against Stockfish: https://chess24.com/en/watch/live-tournaments/alphazero-vs-stockfish/1/1/1
[13] https://www.twitch.tv/videos/207257790
[14] https://medium.com/@karpathy/alphago-in-context-c47718cb95a5
[15] Ali Rahimi compared current Machine Learning practices with “alchemy” in his talk at NIPS 2017 following the reception of his test of time award: https://www.youtube.com/watch?v=ORHFOnaEzPc
It seems ridiculous to think that computers will not solve chess. Now ... if you want to put a time limit on things, that could change things. For instance, I don't think it will be solved within the hour.
One problem with this is that the sun will expand and wipe out all life and computers on Earth before computers have time to solve chess.
Suggesting that a computer HAS to use a brute force approach (analyze every possible move) is somewhat facile.
It is quite possible that some improvement of the alpha-beta pruning that already exists will be sufficient to solve chess.
With some openings analyzed nearly 30 moves deep, endgame positions solved for 5 pieces (or soon more), and ever-increasing processor speeds and processor counts, the actual number of positions which need to be analyzed is far, far smaller than the numbers which have been touted here.
If it's so "facile", go ahead and show us how to eliminate, say, 30 orders of magnitude in the number of positions. Tablebases are "solved" for 7 pieces, not 5, by the way.
If it's so "facile", go ahead and show us how to eliminate, say, 30 orders of magnitude in the number of positions...
@chessspy doesn't need to do this now to prove that it might be done in the future.
The fact that something has not been done does not prove that it cannot be done.
"If it's so "facile", go ahead and show us how to eliminate, say, 30 orders of magnitude in the number of positions..."
What is facile is to suggest that the number of positions a program must assess in order to 'solve' chess is somehow related to the number of atoms in the universe (or insert any other googol-sized number of your choice; America's foreign debt level, for example).
I have just thought of an interesting, perhaps slightly novel angle on the issue of complexity.
One can consider two players who each choose a strategy for playing chess, which may be deterministic or stochastic. A constraint on these strategies is that they achieve the optimal result (probably a draw) against any optimal opposing strategy.
This surely gives a large set of strategies for each player, but we can order these strategies in an interesting way: by the minimax total number of positions that can be reached. The idea is that a player may choose to make things as difficult as possible for the opponent's strategy (not the opponent's play in individual games) by choosing a strategy with some combination of surviving a long time in games and visiting a large variety of positions.
Some symbols may help.
Suppose the first player is trying to keep things simple and the second player is trying to make them complicated. For each pair of strategies S1 and S2 for the two players, these two strategies played against each other indefinitely will visit some subset of all the legal chess positions. We can call the number of positions in this subset N(S1, S2).
Now, if player 2 is trying to be a pain, he should choose his strategy S2 so that the minimum of N(S1, S2), over all possible strategies S1 for the first player, is as high as possible. Such a strategy makes player 1's strategy necessarily as complex as possible, in the sense that it has to deal with the maximum number of positions.
Actually, it is extremely easy to choose such a strategy for player 2: simply choose each move uniformly at random from the legal moves. This means that the total number of positions any strategy for player 1 has to deal with is maximised in all cases. It doesn't matter that player 2's play is abysmal most of the time and doesn't always achieve the optimal result. He plays good moves some of the time in every position, and this is enough to make player 1's task as hard as possible.
If player 2 is restricted to playing perfectly, his strategy to make things as complex as possible is just as easy in principle: he just randomly picks his moves from all of those that are optimal in every position.
To solve chess requires being able to take advantage of bad moves as well as to play safely against good ones, so the first strategy for player 2 is the one that matters to the complexity of chess, and it is why the game's complexity is irreducibly high. The only thing that helps the first player is that bad moves make the game rather shorter, and a succession of bad moves met by good ones makes it shorter still. But even if there are typically only, say, 2 optimal moves for player 2, the complexity is impractically high even without counting the contribution from having to deal with suboptimal moves.
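As a toy illustration of this argument (my own example, not from the discussion above), one can count in a miniature game how many positions player 1's fixed optimal strategy must be prepared for, against an opponent who may play any legal move, versus against one fixed deterministic opponent. The game used here is a small Nim variant: take 1 or 2 stones, and taking the last stone wins.

```python
# Toy version of the "random opponent maximises reachable positions"
# argument, using Nim (take 1 or 2 stones; taking the last stone wins)
# instead of chess.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the side to move wins with optimal play."""
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

def reachable(stones, to_move, opponent_moves, seen):
    """Collect the positions player 1's strategy must handle. Player 1
    commits to one fixed optimal move; player 2's options are given by
    opponent_moves."""
    if stones == 0 or (stones, to_move) in seen:
        return
    seen.add((stones, to_move))
    if to_move == 1:
        for take in (1, 2):
            if take <= stones and not wins(stones - take):
                reachable(stones - take, 2, opponent_moves, seen)
                return
        reachable(stones - 1, 2, opponent_moves, seen)  # no winning move: take 1
    else:
        for take in opponent_moves(stones):
            reachable(stones - take, 1, opponent_moves, seen)

all_moves = lambda s: [t for t in (1, 2) if t <= s]  # unpredictable: anything legal
take_one  = lambda s: [1]                            # one fixed deterministic opponent

seen_all, seen_one = set(), set()
reachable(22, 1, all_moves, seen_all)   # 22 stones: a won position for player 1
reachable(22, 1, take_one, seen_one)
print(len(seen_all), len(seen_one))     # 22 vs 15 positions
```

Even in this trivially small game the unpredictable opponent forces player 1's strategy to cover more positions (here 22 against 15); in chess, with its enormous branching, the same effect is what keeps the solving task irreducibly large.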