Elo vs the Rest of the World is a competition that aims to discover whether other approaches can predict the outcome of chess games more accurately than the workhorse Elo rating system. Scheduled to end on November 14th, the competition has reached its halfway mark, and the best entries have outperformed Elo by over 8 per cent.
The competition
The Elo rating system was invented half a century ago by Hungarian-born physicist and chess master Arpad Elo. It is the most famous technique for rating chess players and is used throughout the chess world. It has been applied to many other contests as well, including other board games, sports, and video games. However, it has never really been demonstrated that the Elo approach to calculating chess ratings is superior. Elo's formula was derived theoretically, in an era without large amounts of historical data or significant computing power. With the benefit of powerful computers and large game databases, we can easily investigate approaches that might do better than Elo at predicting chess results.

There are several alternatives to the Elo approach. Professor Mark Glickman developed the Glicko and Glicko-2 systems, which extend the Elo system by introducing additional parameters to represent the reliability and volatility of player ratings. Ken Thompson uses a linearly weighted average of a player's last 100 results to calculate a weighted performance rating. Jeff Sonas (who put together this competition) developed Chessmetrics ratings to maximize predictive power. More details are available on the hints page.

Kaggle wants to see if somebody out there can do even better. Competitors train their rating systems using a training dataset of over 65,000 recent results for 8,631 top players. Participants then use their method to predict the outcomes of a further 7,809 games.

Kaggle is a platform that allows companies, researchers, governments and other organizations to post their problems and have statisticians worldwide compete to predict the future (produce the best forecasts) or predict the past (find the best insights hiding in data).
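For readers unfamiliar with the system the competitors are trying to beat, here is a minimal sketch of the standard Elo expected-score formula and rating update. The K-factor of 10 is just an illustrative choice (real federations use different values depending on a player's rating and experience), and the function names are ours, not part of any official specification.

```python
def elo_expected(r_a, r_b):
    """Expected score for player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=10):
    """Return updated ratings after one game.

    score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss.
    The update moves each rating by K times (actual - expected) score.
    """
    e_a = elo_expected(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b
```

Note that the update is zero-sum: whatever rating the winner gains, the loser gives up. It is exactly this fixed, theoretically derived formula that systems like Glicko (which adds a reliability parameter) and Chessmetrics (which is tuned for predictive power) try to improve on.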
No free hunch
By Jeff Sonas

We have just passed the halfway mark of the “Elo vs the Rest of the World” contest, scheduled to end on November 14th. The contest is based upon the premise that a primary purpose of any chess rating system is to accurately assess the current strength of players, and that we can measure the accuracy of a rating system by seeing how well its ratings predict players’ results in upcoming events. The winner of the contest will be the one whose rating system does the best job of predicting the results of a set of 7,800 games played recently among players rated 2200+.

So far we have had an unprecedented level of participation, with 162 different teams submitting entries to the contest! There is also a very active discussion forum to promote the free flow of ideas, although many teams are still hesitant to share too many details about their approach (especially considering that the winner will receive a copy of Fritz signed by Garry Kasparov, Viswanathan Anand, Anatoly Karpov, and Viktor Korchnoi). Both ChessBase and Kaggle have donated generous prizes, to be awarded to top-performing participants who are willing to share their methodology publicly.

A wide range of approaches has been tried, including almost every known chess rating system as well as entries drawing on neural networks, machine learning, data mining, business intelligence tools, and artificial intelligence. In fact, over 1,600 submissions have been made so far, and we anticipate many more as the competition heats up over the final seven weeks.

The #1 spot is currently held by Portuguese physicist Filipe Maia, who confesses to little knowledge of statistics or chess ratings but is nevertheless managing to lead the competition! He is also the author of El Turco, the first-ever Portuguese chess engine.
Out of the current top ten teams on the leaderboard, seven use variants of the Chessmetrics rating system, two use modified Elo systems, and one uses a “home-grown variant of ensemble recursive binary partitioning”. That last approach belongs to the #3 team on the public leaderboard, “Old Dogs With New Tricks”, a collaboration between Dave Slate and Peter Frey, both prominent leaders in computer chess for many years.

Read Jeff Sonas' full blog post here at Kaggle.com.