Tech Talk
Designed the logo myself in Canva!


Hello, and welcome to this post! The title says it all: today we are going to discuss chess engines. For each engine, I am going to give its history and its current place in the CCC (the Chess.com Computer Chess Championship).

First off,

Leela Chess Zero

Rating: 3656
Version: net 42282
Developed by:

History

The Leela Chess Zero project was first announced on TalkChess.com on January 9, 2018. The announcement introduced Leela Chess Zero as the open-source, self-learning engine it would come to be known as, with the goal of creating a strong chess engine. Within the first few months of training, Leela Chess Zero had already reached the Grandmaster level, surpassing the strength of early releases of Rybka, Stockfish, and Komodo, despite evaluating orders of magnitude fewer positions while using MCTS.

In December 2018, the AlphaZero team published a new paper in Science magazine revealing previously undisclosed details of the architecture and training parameters used for AlphaZero. These changes were soon incorporated into Leela Chess Zero and increased both its strength and training efficiency.

The work on Leela Chess Zero has informed the similar AobaZero project for shogi.

The engine has been rewritten and carefully iterated upon since its inception, and now runs on multiple backends, allowing it to effectively utilize different types of hardware, both CPU and GPU.[11]

The engine supports the Fischer Random Chess variant, and a network is being trained to test the viability of such a network as of May 2020.

Program and use

The method its designers use to make Leela Chess Zero self-learn and play chess above human level is reinforcement learning, a machine-learning approach mirrored from AlphaZero, in which the Leela Chess Zero training binary maximizes reward through self-play. As an open-source distributed computing project, Leela Chess Zero relies on volunteer users to run the engine and play hundreds of millions of games, which are fed to the reinforcement learning algorithm. In order to contribute to the advancement of the engine, the latest non-release-candidate (non-rc) version of the engine as well as the client must be downloaded. The client is needed to connect to the current Leela Chess Zero server, where all of the information from the self-play games is stored, to obtain the latest network, generate self-play games, and upload the training data back to the server.
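To make that loop more concrete, here is a minimal conceptual sketch of self-play reinforcement learning in Python. It is not the real Lc0 client or training code: the "network" is a stand-in stub that plays uniformly at random, the upload step is only a hypothetical comment, and the python-chess package is assumed to be available.

```python
# Conceptual sketch of a self-play training loop (NOT the real Lc0 client).
# Assumes the python-chess package; the "network" is a stand-in stub.
import random
import chess

def network_policy(board):
    """Stub for the neural network: a uniform policy over legal moves."""
    moves = list(board.legal_moves)
    return {m: 1.0 / len(moves) for m in moves}

def play_self_play_game(max_plies=200):
    """Play one game with the current 'network' and record training examples."""
    board = chess.Board()
    examples = []                       # (position FEN, policy target) pairs
    while not board.is_game_over() and board.ply() < max_plies:
        policy = network_policy(board)
        examples.append((board.fen(), policy))
        # In the real pipeline, moves come from an MCTS-improved policy.
        move = random.choices(list(policy), weights=policy.values())[0]
        board.push(move)
    outcome = board.result(claim_draw=True)   # "1-0", "0-1", "1/2-1/2", or "*"
    # Each recorded position is labelled with the final game outcome.
    return [(fen, policy, outcome) for fen, policy in examples]

def training_loop(num_games=5):
    """Generate self-play games; a real client would upload them to the server."""
    training_data = []
    for _ in range(num_games):
        training_data.extend(play_self_play_game())
    print(f"generated {len(training_data)} training positions")
    # upload_to_server(training_data)   # hypothetical step, not implemented here

if __name__ == "__main__":
    training_loop()
```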

In order to play against the Leela Chess Zero engine on a machine, two components are needed: the engine binary and a network (the engine binary is distinct from the client, in that the client is used as a training platform for the engine). The network contains Leela Chess Zero's evaluation function, which is needed to evaluate positions. Older networks can also be downloaded and used by placing them in the folder with the lc0 binary.
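For instance, once an lc0 binary and a network file are in place, the engine can be driven through its UCI interface. Below is a minimal sketch using the python-chess library; the binary path and weights file name are placeholders for whatever your local setup uses.

```python
# Sketch: querying an lc0 binary for a move over UCI with python-chess.
# ENGINE_CMD uses placeholder paths; adjust them to your own setup.
import chess
import chess.engine

ENGINE_CMD = ["./lc0", "--weights=weights_run2_42282.pb.gz"]  # placeholder paths

def best_move(engine, board, nodes=800):
    """Ask the engine to search a fixed number of nodes and return its move."""
    result = engine.play(board, chess.engine.Limit(nodes=nodes))
    return result.move

if __name__ == "__main__":
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_CMD)
    try:
        board = chess.Board()        # starting position; push your own moves here
        print(best_move(engine, board))
    finally:
        engine.quit()
```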

Self-Play Elo

Self-play Elo is used to gauge relative network strength, to look for anomalies and general changes in network strength, and as a diagnostic tool when there are significant changes. Through test match games played with minimal temperature-based variation, lc0 engine clients test the most recent version against other recent versions from the same network's run; the results are then sent to the training server to create an overall Elo assessment.

Standard Elo formulae are used to calculate relative Elo strength between the two players. More recent Self-Play Elo calculations use match game results against multiple network versions to calculate a more accurate Elo value.
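As a rough illustration of those standard formulae, the Elo difference implied by a match score follows from the logistic expectation. The snippet below is generic Elo arithmetic, not the training server's actual code.

```python
# Generic Elo arithmetic: the rating difference implied by a match result.
# Illustrative only; not the Lc0 training server's implementation.
import math

def expected_score(elo_diff):
    """Expected per-game score of a player rated elo_diff above the opponent."""
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

def elo_diff_from_score(wins, draws, losses):
    """Invert the logistic expectation to get an Elo difference from a match."""
    games = wins + draws + losses
    score = (wins + 0.5 * draws) / games
    if score in (0.0, 1.0):
        return float("inf") if score == 1.0 else float("-inf")
    return -400.0 * math.log10(1.0 / score - 1.0)

if __name__ == "__main__":
    # Example: a +14 =79 -7 hundred-game match implies roughly a 24 Elo edge.
    print(round(elo_diff_from_score(14, 79, 7), 1))
```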

There are several unintended consequences of the self-play approach to gauging strength:

  • Differing scales of initial Elo inflation in training runs due to periods of lower/higher self-improvement and adversarial play.
  • Strength measured this way is not objective and is relative to previous networks, allowing for a false illusion of gained strength since networks are trained to beat and anticipate the actions of their past selves.
  • Overfitting against previous network versions of Lc0 continuously adds small amounts of Self-Play Elo to the cumulative measured Elo. Overfitting in this manner is generally seen more clearly when training smaller networks.
  • There is no direct 1-to-1 correlation between self-play Elo and strength against alpha-beta engines, and no known correlation to strength against humans.
  • Behavioral changes in networks between runs affect inflation.

An example of self-play Elo inflation is the Test 71.4 run (named as 714xxx nets), a Fischer Random Chess run, which reached nearly 4000 cumulative self-play Elo only 76 nets after the start of its run. Self-play Elo estimates of this run can be roughly compared with other runs to gauge the impracticality of pure cumulative self-play Elo. A pure self-play Elo comparison with one of the Test 60 networks 3000 nets into the run reveals that net 63000 can consistently beat net 714070 in head-to-head matches at most, if not all, "fair" time controls. Yet net 63000 from the Test 60 run has a self-play Elo around 2900, while the self-play Elo of early Test 71.4 is already near 4000. This contradiction is enough to support the claim that self-play Elo is not an objective measure of strength, nor one that allows easy comparison of network strength to human strength.

Self-play rating for the engine could be used as a rough approximation of conventional human Elo ratings; however, no universal conversion formula exists, for many reasons. These include, but are not limited to, the scale of initial inflation of self-play Elo and the late-term self-play Elo inflation between training runs, differing time controls, differing systems of Elo measurement between chess tournament platforms, the resources allocated to the engine, network size and structure, a network's training data set, and the many factors of strength contributed by the engine binary itself.

Setting up the engine to play a single node, with `--minibatch-size=1` and `go nodes 1` for each played move, creates deterministic play, and self-play Elo on such settings will always yield the same result between two copies of the same network from the same start position: always a win, always a loss, or always a draw. Self-play Elo is not reliable for determining strength in these deterministic circumstances.
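A small sketch of that deterministic setup, driven from python-chess, is below; it assumes an lc0 binary is available on your path and uses only the flag and UCI command mentioned above.

```python
# Sketch: the deterministic single-node setup described above, via python-chess.
# Assumes an lc0 binary named "lc0" is available on the system path.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci(["lc0", "--minibatch-size=1"])
try:
    board = chess.Board()
    # Limit(nodes=1) makes python-chess issue "go nodes 1" for the move.
    result = engine.play(board, chess.engine.Limit(nodes=1))
    print(result.move)   # same network + same position -> same move every time
finally:
    engine.quit()
```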

Variants

In season 15 of the Top Chess Engine Championship, the engine AllieStein competed alongside Leela. AllieStein is a combination of two different spinoffs from Leela: Allie, which uses the same evaluation network as Leela, but has a unique search algorithm for exploring different lines of play, and Stein, an evaluation network which has been trained using supervised learning based on existing game data featuring other engines (as opposed to the unsupervised learning which Leela uses). While neither of these projects would be admitted to TCEC separately due to their similarity to Leela, the combination of Allie's search algorithm with the Stein network, called AllieStein, is unique enough to warrant it competing alongside mainstream Lc0 (The TCEC rules require that a neural network-based engine has at least 2 unique components out of 3 essential features: The code that evaluates a network, the network itself, and the search algorithm. While AllieStein uses the same code to evaluate its network as Lc0, since the other two components are fresh, AllieStein is considered a distinct engine). 

Competition results

In April 2018, Leela Chess Zero became the first neural network engine to enter the Top Chess Engine Championship (TCEC), during season 12 in the lowest division, division 4. Leela did not perform well: in 28 games, it won one, drew two, and lost the remainder; its sole victory came from a position in which its opponent, Scorpio 2.82, crashed in three moves. However, it improved quickly. In July 2018, Leela placed seventh out of eight competitors at the 2018 World Computer Chess Championship. In August 2018, it won division 4 of TCEC season 13 with a record of 14 wins, 12 draws, and 2 losses. In Division 3, Leela scored 16/28 points, finishing third behind Ethereal, who scored 22.5/28 points, and behind Arasan on tiebreak.

By September 2018, Leela had become competitive with the strongest engines in the world. In the 2018 Chess.com Computer Chess Championship (CCCC), Leela placed fifth out of 24 entrants. The top eight engines advanced to round 2, where Leela placed fourth. Leela then won the 30-game match against Komodo to secure third place in the tournament. Concurrently, Leela participated in the TCEC cup, a new event in which engines from different TCEC divisions can play matches against one another. Leela defeated higher-division engines Laser, Ethereal and Fire before finally being eliminated by Stockfish in the semi-finals.

In October and November 2018, Leela participated in the Chess.com Computer Chess Championship Blitz Battle. Leela finished third behind Stockfish and Komodo.

In December 2018, Leela participated in season 14 of the Top Chess Engine Championship. Leela dominated divisions 3, 2, and 1, easily finishing first in all of them. In the premier division, Stockfish dominated while Houdini, Komodo and Leela competed for second place. It came down to a final-round game where Leela needed to hold Stockfish to a draw with black to finish second ahead of Komodo. It successfully managed this and therefore contested the superfinal against Stockfish. It narrowly lost the superfinal against Stockfish with a 49.5-50.5 final score.

In February 2019, Leela scored its first major tournament win when it defeated Houdini in the final of the second TCEC cup. Leela did not lose a game the entire tournament. In April 2019, Leela won the Chess.com Computer Chess Championship 7: Blitz Bonanza, becoming the first neural-network project to take the title.

In May 2019, Leela defended its TCEC cup title, this time defeating Stockfish in the final 5.5-4.5 (+2 =7 -1) after Stockfish blundered a 7-man tablebase draw. Leela also won the Superfinal of season 15 of the Top Chess Engine Championship 53.5-46.5 (+14 -7 =79) versus Stockfish.

Season 16 of TCEC saw Leela finish in 3rd place in premier division, missing qualification for the superfinal to Stockfish and new neural network engine AllieStein. Leela did not suffer any losses in the Premier division, the only engine to do so, and defeated Stockfish in one of the six games they played. However, Leela only managed to score 9 wins, while AllieStein and Stockfish both scored 14 wins. This inability to defeat weaker engines led to Leela finishing 3rd, half a point behind AllieStein and a point behind Stockfish. In the fourth TCEC cup, Leela was seeded first as the defending champion, which placed it on the opposite half of the brackets as AllieStein and Stockfish. Leela was able to qualify for the finals, where it faced Stockfish. After seven draws, Stockfish won the eighth game to win the match.

In Season 17 of TCEC, held in January-April 2020, Leela regained the championship by defeating Stockfish 52.5-47.5. It qualified for the superfinal again in Season 18, but this time was defeated by Stockfish 53.5-46.5. In the TCEC Cup 6 final, Leela lost to AllieStein, finishing 2nd.

Results summary

Top Chess Engine Championship (TCEC)
Season 12 (2018): Division 4 - 8th
Season 13 (2018): Division 4 - 1st; Division 3 - 3rd
Season 14 (2018): Division 3 - 1st; Division 2 - 1st; Division 1 - 1st; Division P - 2nd; Superfinal - 2nd
Season 15 (2019): Division P - 2nd; Superfinal - 1st
Season 16 (2019): Division P - 3rd
Season 17 (2020): Division P - 1st; Superfinal - 1st
Season 18 (2020): Division P - 2nd; Superfinal - 2nd

Top Chess Engine Championship Cup (TCEC Cup)
Event Result Opponent Score
Cup 1 (2018) 3rd Komodo 0-0
Cup 2 (2019) 1st Houdini 4.5-3.5
Cup 3 (2019) 1st Stockfish 5.5-4.5
Cup 4 (2019) 2nd Stockfish 4.5-3.5
Cup 5 (2020) 2nd Stockfish 2.5-1.5
Cup 6 (2020) 2nd AllieStein 2.5-1.5

Chess.com Computer Chess Championship (CCCC)
Event Year Time Controls Result Ref
CCC 1: Rapid Rumble 2018 15+5 3rd [41]
CCC 2: Blitz Battle 2018 5+2 3rd [42]
CCC 3: Rapid Redux 2019 30+5 2nd [43]
CCC 4: Bullet Brawl 2019 1+2 2nd [44]
CCC 5: Escalation 2019 10+5 2nd [45]
CCC 6: Winter Classic 2019 10+10 2nd [46]
CCC 7: Blitz Bonanza 2019 5+2 1st [31]
CCC 8: Deep Dive 2019 15+5 2nd [5]
CCC 9: The Gauntlet 2019 5+2, 10+5 3rd [47]
CCC 10: Double Digits 2019 10+3 3rd [48]
CCC 11 2019 30+5 1st [49]
CCC 12: Bullet Madness! 2020 1+1 1st [50]
CCC 13: Shapes 2020 3+2, 5+5, 10+5, 15+5 1st [51][52]
CCC 14 2020 15+5 1st [53]
CCC 15: Bullet II 2020 1+1 TBD

Notable games

Created 2018

Stockfish

Rating: 3679
Version: 130519 64 BMI2
Developed by: Tord Romstad, Marco Costalba, Joona Kiiski
https://stockfishchess.org/

Stockfish is a free and open-source chess engine, available for various desktop and mobile platforms. It is developed by Marco Costalba, Joona Kiiski, Gary Linscott, Stéphane Nicolet, Stefan Geschwentner, Joost VandeVondele, and Tord Romstad, with many contributions from a community of open-source developers.[2]

Stockfish is consistently ranked first or near the top of most chess-engine rating lists and is the strongest conventional chess engine in the world.[3] It won the unofficial world computer chess championships in seasons 6 (2014), 9 (2016), 11 (2018), 12 (2018), 13 (2018), 14 (2019), 16 (2019) and 18 (2020). It finished runner-up in season 5 (2013), 7 (2014), 8 (2015), 15 (2019) and 17 (2020).

Stockfish is derived from Glaurung, an open-source engine by Tord Romstad released in 2004.

Features

Stockfish can use up to 512 CPU threads in multiprocessor systems. The maximal size of its transposition table is 32 TB. Stockfish implements an advanced alpha–beta search and uses bitboards. Compared to other engines, it is characterized by its great search depth, due in part to more aggressive pruning and late move reductions.[4]
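To illustrate the alpha–beta idea in the abstract, here is a tiny textbook-style negamax search with alpha–beta pruning in Python, using python-chess for move generation and a material-only evaluation. This is a toy sketch, not Stockfish's heavily optimized C++ search.

```python
# Generic negamax search with alpha-beta pruning; an illustrative toy,
# not Stockfish's actual search. Uses python-chess for move generation.
import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def evaluate(board):
    """Material-only evaluation from the side to move's point of view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score if board.turn == chess.WHITE else -score

def negamax(board, depth, alpha, beta):
    """Alpha-beta pruned negamax: cut off branches that cannot affect the root."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)          # toy leaf eval (no mate scores)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1, -beta, -alpha))
        board.pop()
        alpha = max(alpha, best)
        if alpha >= beta:               # beta cutoff: opponent avoids this line
            break
    return best

if __name__ == "__main__":
    print(negamax(chess.Board(), 3, -float("inf"), float("inf")))
```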

Stockfish supports Chess960, which is one feature that was inherited from Glaurung.

The Syzygy tablebase support, previously available in a fork maintained by Ronald de Man, was integrated into Stockfish in 2014.[5] In 2018, support for the 7-man Syzygy tablebases was added, shortly after they became available.

History

The program originated from Glaurung, an open-source chess engine created by Romstad and first released in 2004. Four years later, Costalba, inspired by the strong open-source engine, decided to fork the project. He named it Stockfish because it was "produced in Norway and cooked in Italy" (Romstad is Norwegian, Costalba is Italian). The first version, Stockfish 1.0, was released in November 2008.[7][8] For a while, new ideas and code changes were transferred between the two programs in both directions, until Romstad decided to discontinue Glaurung in favor of Stockfish, which was the more advanced engine at the time.[9] The last Glaurung version (2.2) was released in December 2008.

Around 2011, Romstad decided to abandon his involvement with Stockfish in order to spend more time on his new iOS chess app.[citation needed]

On 18 June 2014 Marco Costalba announced that he had "decided to step down as Stockfish maintainer" and asked that the community create a fork of the current version and continue its development.[10] An official repository, managed by a volunteer group of core Stockfish developers, was created soon after and currently manages the development of the project.[11]

In June 2020, an efficiently updatable neural network (NNUE) fork called Stockfish NNUE was discussed by developers.[12][13] In July 2020, chess news outlets reported that Stockfish NNUE had "broken new ground in computer chess by incorporating a neural network into the already incredibly powerful Stockfish chess engine."[14] An NNUE merge into Stockfish has been announced, and development builds are available.[15][16]

"The NNUE branch maintained by @nodchip has demonstrated strong results and offers great potential, and we will proceed to merge ... This merge will introduce machine learning based coding to the engine, thus enlarging the community of developers, bringing in new skills. We are eager to keep everybody on board, including all developers and users of diverse hardware, aiming to be an inclusive community ...the precise steps needed will become clearer as we proceed, I look forward to working with the community to make this happen!"

— Joost VandeVondele, 25 July 2020[15]

Fishtest

Since 2013, Stockfish has been developed using a distributed testing framework named Fishtest, where volunteers can donate CPU time for testing improvements to the program.[17][18][19]

Changes to the game-playing code are accepted or rejected based on the results of tens of thousands of games played on the framework against an older "reference" version of the program, using sequential probability ratio testing. Tests on the framework are verified using the chi-squared test, and only if the results are statistically significant are they deemed reliable and used to revise the software code.
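The stopping rule can be sketched as follows, using a common normal-approximation form of the sequential probability ratio test over per-game scores. Fishtest's real implementation models game pairs and draw rates in more detail, and the elo0/elo1 hypothesis bounds below are made-up example values, so treat this purely as an illustration of the idea.

```python
# Illustrative SPRT stopping rule using a normal approximation of per-game
# scores. This is NOT Fishtest's actual code, which uses a richer model.
import math

def elo_to_score(elo):
    """Expected per-game score for a given Elo advantage."""
    return 1.0 / (1.0 + 10 ** (-elo / 400.0))

def sprt_llr(wins, draws, losses, elo0=0.0, elo1=5.0):
    """Approximate log-likelihood ratio of H1 (elo1) versus H0 (elo0)."""
    n = wins + draws + losses
    if n == 0:
        return 0.0
    mean = (wins + 0.5 * draws) / n
    # Per-game score variance estimated from the observed W/D/L frequencies.
    var = (wins * (1.0 - mean) ** 2 + draws * (0.5 - mean) ** 2
           + losses * (0.0 - mean) ** 2) / n
    if var == 0:
        return 0.0
    s0, s1 = elo_to_score(elo0), elo_to_score(elo1)
    return n * (s1 - s0) * (2 * mean - s0 - s1) / (2 * var)

def sprt_decision(wins, draws, losses, alpha=0.05, beta=0.05):
    """Accept, reject, or keep playing, based on Wald's thresholds."""
    lower = math.log(beta / (1 - alpha))    # about -2.94 for 5%/5% error rates
    upper = math.log((1 - beta) / alpha)    # about +2.94 for 5%/5% error rates
    llr = sprt_llr(wins, draws, losses)
    if llr >= upper:
        return "accept patch"
    if llr <= lower:
        return "reject patch"
    return "continue testing"

if __name__ == "__main__":
    print(sprt_decision(wins=3100, draws=14000, losses=2900))
```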

As of June 2018, the framework has used a total of more than 1200 years of CPU time to play over 840 million chess games.[20] After the inception of Fishtest, Stockfish experienced an explosive growth of 120 Elo points in just 12 months, propelling it to the top of all major rating lists.[21] In Stockfish 7, Fishtest author Gary Linscott was added to the official list of authors in acknowledgement of his contribution to Stockfish's strength.

Competition results

Stockfish versus Nakamura

Stockfish's strength relative to the best human chess players was most apparent in a handicap match with grandmaster Hikaru Nakamura (2798-rated) in August 2014. In the first two games of the match, Nakamura had the assistance of an older version of Rybka, and in the next two games, he received White with pawn odds but no assistance. Nakamura was the world's fifth-best human chess player at the time of the match, while Stockfish was denied use of its opening book and endgame tablebase. Stockfish won each half of the match 1.5–0.5. Both of Stockfish's wins arose from positions in which Nakamura, as is typical for his playing style, pressed for a win instead of acquiescing to a draw.[22]

An artificial-intelligence approach, designed by Jean-Marc Alliot of the Institut de recherche en informatique de Toulouse ("Toulouse Computer Science Research Institute"), which compares chess grandmaster moves against that of Stockfish, rated Magnus Carlsen as the best player of all-time, as he had the highest probability of all World Chess Champions to play the moves that Stockfish suggested.[23]

Participation in TCEC

Stockfish is a TCEC multiple-time champion and the current leader in trophy count. Ever since TCEC restarted in 2013, Stockfish has finished first or second in every season except one. In TCEC Season 4 and 5, Stockfish finished runner-up, with Superfinal scores of 23–25 first against Houdini 3 and later against Komodo 1142. Season 5 was notable for the winning Komodo team as they accepted the award posthumously for the program's creator Don Dailey, who succumbed to an illness during the final stage of the event. In his honor, the version of Stockfish that was released shortly after that season was named "Stockfish DD".[24]

On 30 May 2014, Stockfish 170514 (a development version of Stockfish 5 with tablebase support) convincingly won TCEC Season 6, scoring 35.5–28.5 against Komodo 7x in the Superfinal.[25] Stockfish 5 was released the following day.[26] In TCEC Season 7, Stockfish again made the Superfinal, but lost to Komodo with the score of 30.5–33.5.[25] In TCEC Season 8, despite losses on time caused by buggy code, Stockfish nevertheless qualified once more for the Superfinal, but lost the ensuing 100-game match 46.5–53.5 to Komodo.[25] In Season 9, Stockfish defeated Houdini 5 with a score of 54.5 versus 45.5.[25][27]

Stockfish finished third during season 10 of TCEC, the only season since 2013 in which Stockfish had failed to qualify for the superfinal. It did not lose a game, but was still eliminated because it was unable to score enough wins against lower-rated engines. After this blip, Stockfish went on a long winning streak, winning seasons 11 (59 vs. 41 against Houdini 6.03),[25][28] 12 (60 vs. 40 against Komodo 12.1.1),[25][29] and 13 (55 vs. 45 against Komodo 2155.00)[25][30] convincingly.[31] In Season 14, Stockfish faced a new challenger in Leela Chess Zero, but managed to eke out a win by one game (50.5-49.5).[25][32] Its winning streak was finally ended in season 15, when Leela qualified again and won 53.5-46.5,[25] but Stockfish promptly won season 16, defeating AllieStein 54.5-45.5, after Leela failed to qualify for the superfinal.[25] In season 17, Stockfish faced Leela again in the superfinal, losing 52.5-47.5; however, it qualified and defeated Leela in season 18, 53.5-46.5.[25]

Stockfish also took part in the TCEC cup, winning the first edition, but was surprisingly upset by Houdini in the semifinals of the second edition.[25][33] Stockfish recovered to beat Komodo in the third-place playoff.[25] In the third edition, Stockfish made it to the finals, but was defeated by Leela Chess Zero after blundering a 7-man endgame tablebase draw. It gained its revenge in the fourth edition, defeating Leela in the final 4.5–3.5.[25]

Computer chess tournament

Ever since chess.com hosted its first computer chess championship in 2018, Stockfish has been the most successful engine. It dominated the earlier championships, winning six consecutive titles before finishing second in CCC7. Since then, its dominance has come under threat from the neural-network engines Leelenstein and Leela Chess Zero, but it has continued to perform well, reaching at least the superfinal in every edition up to CCC11. CCC12 used a knockout format for the first time, with seeding placing CCC11 finalists Stockfish and Leela in the same half; Leela eliminated Stockfish in the semi-finals. However, in a post-tournament match against the loser of the final, Leelenstein, played in the same format as the main event, Stockfish won.

Chess.com Computer Chess Championship
Event Year Time Controls Result Ref
CCC 1: Rapid Rumble 2018 15+5 1st [34]
CCC 2: Blitz Battle 2018 5+2 1st [35]
CCC 3: Rapid Redux 2019 30+5 1st [36]
CCC 4: Bullet Brawl 2019 1+2 1st [37]
CCC 5: Escalation 2019 10+5 1st [38]
CCC 6: Winter Classic 2019 10+10 1st [39]
CCC 7: Blitz Bonanza 2019 5+2 2nd [40]
CCC 8: Deep Dive 2019 15+5 1st [41]
CCC 9: The Gauntlet 2019 5+2, 10+5 1st [42]
CCC 10: Double Digits 2019 10+3 2nd [43]
CCC 11 2019 30+5 2nd [44]
CCC 12: Bullet Madness! 2020 1+1 3rd [45]
CCC 13: Heptagonal 2020 5+5 2nd [46]

Stockfish versus AlphaZero

In December 2017, Stockfish 8 was used as a benchmark to test AlphaZero, developed by Google's DeepMind division, with each engine running on different hardware. AlphaZero was trained through self-play for a total of nine hours, and reached Stockfish's level after just four.[47][48][49] In 100 games from the normal starting position, AlphaZero won 25 games as White, won 3 as Black, and drew the remaining 72, with 0 losses.[50] AlphaZero also played twelve 100-game matches against Stockfish starting from twelve popular openings, for a final score of 290 wins, 886 draws and 24 losses, or a point score of 733:467.[51][note 1]

AlphaZero's victory over Stockfish sparked a flurry of activity in the computer chess community, leading to a new open-source engine aimed at replicating AlphaZero, known as Leela Chess Zero. By January 2019, Leela was able to defeat the version of Stockfish that played AlphaZero (Stockfish 8) in a 100-game match. An updated version of Stockfish narrowly defeated Leela Chess Zero in the superfinal of the 14th TCEC season, 50.5–49.5 (+10 =81 −9), but lost the superfinal of the next season to Leela 53.5-46.5 (+14 =79 -7). The two engines remain very close in strength to each other even as they continue to improve: Leela defeated Stockfish in the superfinal of TCEC Season 17, but Stockfish won TCEC Season 18.

Platforms

Release versions and development versions are available as C++ source code and as precompiled versions for Microsoft Windows, macOS, Linux 32-bit/64-bit and Android.

Stockfish has been a very popular engine for various platforms. On the desktop, it is the default chess engine bundled with the Internet Chess Club interface programs BlitzIn and Dasher. On the mobile platform, it has been bundled with the Stockfish app, SmallFish and Droidfish. Other Stockfish-compatible graphical user interfaces (GUIs) include Fritz, Arena, Stockfish for Mac, and PyChess. As of March 2014, Stockfish is the chess engine used by Lichess, a popular online chess site.

Authors

Founders and Maintainers
Author Name Role GitHub ID
Marco Costalba Founder of the project mcostalba
Joona Kiiski Founder of the project zamar
Gary Linscott Founder and Fishtest developer glinscott
Tord Romstad Author of Glaurung romstad
Stéphane Nicolet Maintainer since 2016 snicolet
Stefan Geschwentner Maintainer since Sep 2018 locutus2
Joost VandeVondele Maintainer since Jan 2020 vondele
Contributors to the Stockfish Project
Author Name GitHub ID
Aditya absimaldata
Adrian Petrescu apetresc
Ajith Chandy Jose ajithcj
Alain Savard Rocky640
Alayan Feh Alayan-stk-2
Alexander Kure
Alexander Pagel Lolligerhans
Ali AlZhrani Cooffe
Andrew Grant AndyGrant
Andrey Neporada nepal
Andy Duplain
Aram Tumanian atumanian
Arjun Temurnikar
Auguste Pop
Balint Pfliegel
Ben Koshy BKSpurgeon
Bill Henry VoyagerOne
Bojun Guo noobpwnftw, Nooby
Unknown braich
Brian Sheppard SapphireBrand, briansheppard-toast
Bryan Cross crossbr
Unknown candirufish
Unknown Chess13234
Chris Cain ceebo
Dan Schmidt dfannius
Daniel Axtens daxtens
Daniel Dugovic ddugovic
Dariusz Orzechowski
David Zar
Daylen Yang daylen
Unknown DiscanX
Unknown double-beep
Eduardo Cáceres eduherminio
Eelco de Groot KingDefender
Elvin Liu solarlight2
Unknown erbsenzaehler
Ernesto Gatti
Linmiao Xu linrock
Fabian Beuke madnight
Fabian Fichter ianfab
Unknown fanon
Fauzi Akram Dabat FauziAkram
Felix Wittmann
Unknown gamander
Gary Heckman gheckman
Unknown gguliash
Gian-Carlo Pascutto gcp
Gontran Lemaire gonlem
Goodkov Vasiliy Aleksandrovich goodkov
Gregor Cramer
Unknown GuardianRM
Günther Demetz pb00067, pb00068
Guy Vreuls gvreuls
Henri Wiechers
Hiraoka Takuya HiraokaTakuya
Unknown homoSapiensSapiens
Hongzhi Cheng
Ivan Ivec IIvec
Jacques B. Timshel
Jan Ondruš hxim
Jared Kish Kurtbusch
Jarrod Torriero DU-jdto
Jean Gauthier OuaisBla
Jean-Francois Romang jromang
Unknown Jekaa
Jerry Donald Watson jerrydonaldwatson
Jonathan Calovski Mysseno
Jonathan Dumale SFisGOD
Jörg Oster joergoster
Joseph Ellis jhellis3
Joseph R. Prostko
Unknown jundery
Justin Blanchard UncombedCoconut
Kelly Wilson
Ken Takusagawa
Unknown kinderchocolate
Kiran Panditrao Krgp
Unknown Kojirion
Leonardo Ljubičić
Leonid Pechenik lp--
Linus Arver listx
Unknown loco-loco
Lub van den Berg ElbertoOne
Luca Brivio lucabrivio
Lucas Braesch lucasart
Lyudmil Antonov lantonov
Maciej Żenczykowski zenczykowski
Malcolm Campbell xoto10
Mark Tenzer 31m059
Unknown marotear
Matthew Lai matthewlai
Matthew Sullivan Matt14916
Michael An man
Michael Byrne MichaelB7
Michael Chaly Vizvezdenec
Michael Stembera mstembera
Michael Whiteley protonspring
Michel Van den Bergh vdbergh
Miguel Lahoz miguel-l
Mikael Bäckman mbootsector
Unknown Mira
Miroslav Fontán Hexik
Moez Jellouli MJZ1977
Mohammed Li tthsqe12
Nathan Rugg nmrugg
Nick Pelling nickpelling
Nicklas Persson NicklasPersson
Niklas Fiekas niklasf
Nikolay Kostov NikolayIT
Nguyen Pham
Ondrej Mosnáček WOnder93
Oskar Werkelin Ahlin
Pablo Vazquez
Unknown Panthee
Pascal Romaret
Pasquale Pigazzini ppigazzini
Patrick Jansen mibere
Unknown pellanda
Peter Zsifkovits CoffeeOne
Praveen Kumar Tummala praveentml
Rahul Dsilva silversolver1
Ralph Stößer (Ralph Stoesser)
Raminder Singh
Unknown renouve
Reuven Peleg
Richard Lloyd
Rodrigo Exterckötter Tjäder
Ron Britvich Britvich
Ronald de Man syzygy1, syzygy
Ryan Schmitt
Ryan Takker
Sami Kiminki skiminki
Sebastian Buchwald UniQP
Sergei Antonov saproj
Sergei Ivanov svivanov72
Unknown sf-x
Shane Booth shane31
Stefano Cardanobile Stefano80
Steinar Gunderson sesse
Unknown Thanar2
Unknown thaspel
Unknown theo77186
Tom Truscott
Tom Vijlbrief tomtor
Tomasz Sobczyk Sopel97
Torsten Franz torfranz, tfranzer
Tracey Emery basepr1me
Unai Corzo unaiic
Uri Blass uriblass
Vince Negri cuddlestmonkey

Houdini

Rating: 3621
Version: 6.03
Developed by: Robert Houdart
http://www.cruxis.com/chess/houdini.h

Houdini is a UCI chess engine developed by Belgian programmer Robert Houdart. It is influenced by open-source engines IPPOLIT/RobboLito, Stockfish, and Crafty. Earlier versions are free for non-commercial use (up to version 1.5a), but later versions (2.0 and onwards) are commercial. As of October 2019, Houdini 6 is the fourth highest-rated chess engine on major chess engine rating lists, behind Stockfish, Leela Chess Zero and Komodo.[1]

Playing style

Chess commentator and video annotator CM Tryfon Gavriel compared Houdini's playing style to that of the Romantic Era of chess, where an attacking, sacrificial style was predominant.[2] According to Robert Houdart, Houdini's advantage against other top engines is in its handling of piece mobility, which is why it "favors aggressive play that tries to win the game".[3]

Version history

Version Release date Features[4]
1.0 May 15, 2010 First release
1.01 June 1, 2010 Bug fixes, improved search algorithm
1.02 June 18, 2010 SMP and hash collision bug fixes. Work-around for Shredder GUI.
1.03 July 15, 2010 Multi-PV, searchmove and large page support. Improved evaluation function.
1.03a July 17, 2010 Bug fix for Multi-PV
1.5 December 15, 2010 Improved search and evaluation. Gaviota Table Base Support.
1.5a January 15, 2011 Maintenance update with work-arounds for Fritz GUI and other minor improvements.
2.0 September 1, 2011 First commercial release. Improved analysis capabilities, enhanced search and evaluation. Houdini Pro version for high-end users with powerful hardware (multi-core support). Chess960 support. Strength limit feature. Position learning. Save hash to file, load hash from file, never clear hash.
2.0b November 7, 2011 Maintenance update with minor bug corrections and Nalimov EGTB support.
2.0c November 20, 2011 Maintenance update with minor bug corrections and new analysis options. MultiPV_cp option to limit multi-PV analysis to moves within a range of the best move. FiftyMoveDistance option to make the 50-move rule kick in earlier. UCI_Elo and UCI_LimitStrength options as UCI standard-compliant alternative to Strength option. Exit on detection with GUI exit.
3.0 October 15, 2012 Major new version. Improved search and evaluation (+50 Elo), Tactical Mode, Scorpio bitbases, accelerated Principal Variation Search "Smart Fail-High", optimized hash usage.
4.0 November 25, 2013 Major new version. Improved search and evaluation (+50 Elo), 6-men Syzygy table bases (coding provided by Ronald de Man).
5.0 November 7, 2016 Major new version, about 200 Elo stronger. Rewritten evaluation function, deeper search.
5.01 November 15, 2016 Maintenance update with some interface corrections and improvements.
6.0 September 15, 2017 Major new version. Improved search and evaluation (+50-60 Elo), enhanced multi-threading.
6.01 September 24, 2017 Maintenance update with Nalimov EGTB correction and new output option.
6.02 October 1, 2017 Maintenance update with Polyglot book support.
6.03 November 20, 2017 Correction for incorrect detection of stalemate in positions with white pawn capture moves.

The latest stable release of Houdini comes in two versions: Houdini 6 Standard and Houdini 6 Pro. Houdini 6 Pro supports up to 128 processor cores and 128 GB of RAM (hash) and is NUMA-aware, while Houdini 6 Standard only supports up to 8 processor cores and 4 GB of hash and is not NUMA-aware. As with many other UCI engines, Houdini comes with no GUI, so a chess GUI is needed for running the engine. Houdini 5 uses calibrated evaluations in which engine scores correlate directly with the win expectancy in the position.[4]

Competition results

Houdini is one of the most successful engines in the TCEC tournament, which is often regarded as the Unofficial World Computer Chess Championship, with four championship wins to date.[5]

Notable games

"Houdini Immortal"
Rybka - Houdini
Position after the 24th move. Houdini (Black) is three pawns down but has very active pieces and White's king is exposed. White couldn't avoid losing a piece 7 moves later.

Komodo

Rating: 3616
Version: 2333.00bmi2
Developed by: Don Dailey, Larry Kaufman, and Mark Lefler
https://komodochess.com/

Komodo is a UCI chess engine developed by Don Dailey and Mark Lefler, and supported by chess author and evaluation expert GM Larry Kaufman. Komodo is a commercial chess engine but older versions (11 and older) are free for non-commercial use. It is consistently ranked near the top of most major chess engine rating lists, along with Stockfish and Leela Chess Zero.

History

Komodo was derived from Don Dailey's former engine Doch in January 2010.[6] The first multiprocessor version of Komodo was released in June 2013 as Komodo 5.1 MP.[7] This version was a major rewrite and a port of Komodo to C++11. A single-processor version of Komodo (which won the CCT15 tournament in February earlier that year) was released as a stand-alone product shortly before the 5.1 MP release. This version, named Komodo CCT, was still based on the older C code, and was approximately 30 Elo stronger than the 5.1 MP version, as the latter was still undergoing massive code-cleanup work.[8]

With the release of Komodo 6 on 4 October 2013, Don Dailey announced that he was suffering from an acute form of leukaemia, and would no longer contribute to the future development of Komodo.[9] On October 8, Don made an announcement on the Talkchess forum that Mark Lefler would be joining the Komodo team and would continue its development.[10] The latest version, Komodo 14, was released on May 12, 2020.[11]

Komodo MCTS

On December 17, 2018, Larry Kaufman announced the release of Komodo 12.3 MCTS, a version of the Komodo 12.3 engine that uses Monte Carlo tree search instead of alpha–beta pruning/minimax.[12]
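For contrast with alpha–beta search, the heart of Monte Carlo tree search is a statistics-driven selection step. Below is a generic UCT (UCB1) selection sketch in Python; Komodo MCTS's actual implementation is not public, so none of this is its real code.

```python
# Generic UCT child-selection step, the heart of Monte Carlo tree search.
# A toy illustration; Komodo MCTS's real implementation is not public.
import math

def uct_score(child_visits, child_value_sum, parent_visits, c=1.4):
    """UCB1: exploit the average value, explore rarely visited children."""
    if child_visits == 0:
        return float("inf")              # always try unvisited moves once
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """Pick the child move with the highest UCT score.

    `children` maps a move string to (visits, value_sum)."""
    return max(children,
               key=lambda m: uct_score(children[m][0], children[m][1],
                                       parent_visits))

if __name__ == "__main__":
    stats = {"e2e4": (120, 66.0), "d2d4": (100, 54.0), "g1f3": (5, 2.0)}
    print(select_child(stats, parent_visits=225))
```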

Playing strength and style

Komodo heavily relies on evaluation rather than depth, and thus has a distinctive positional style. Its forte is to play when there is nothing to play.[13] Komodo author Don Dailey described it as such: "In positions that most engines would likely struggle or find it impossible to make progress, Komodo quietly prepares a break and ends up with the victory."[14]

Competition results

Komodo has played in the ICT 2010 in Leiden, and further in the CCT12 and CCT14. Komodo had its first tournament success in 2013, when it won the CCT15 with a score of 6½/7.[15] Komodo also fared very well in the TCEC competition, where in Season 4, it lost only eight out of its 53 games and managed to reach Stage 4 (Quarterfinals), against very strong competition which were running on eight cores (Komodo was running on a single processor).[16] In TCEC Season 5, it won the superfinal against Stockfish. It managed to reach the Superfinal in TCEC Season 6 again, but this time, it lost to Stockfish. Komodo regained the title in TCEC Season 7, defeating Stockfish in the superfinal. In TCEC Season 8, Komodo defeated Stockfish again in the superfinal.[17] Komodo won both the World Computer Chess Championship[18] and World Computer Software Championship[19] in 2016. Komodo once again won the World Computer Chess Championship[20] and World Blitz[21] in 2017. Komodo came third in TCEC Season 11 losing to Stockfish and Houdini, and came second in Season 12 losing to Stockfish.[22][23][24]

Notable games

Komodo vs Hannibal
Komodo plays the exchange sacrifice 33. Rxc6 and goes on to win the game, proving the superiority of its pieces over Black's two rooks.

Ethereal

Rating: 3544
Version: 11.38 (PEXT)
Developed by: Andrew Grant

Ethereal is an open-source chess engine written in C. It is greatly influenced by Crafty, Stockfish, TSCP, MadChess, and Fruit.

Created 2016

I know there are some bugs involving it displaying code instead of words, but I bet you can live with that!

I really hope you enjoyed this blog post!

Sincerely, Ring-tailed_lemur

*BONUS!!* Try to design the coolest engine logo. It can be for a made-up engine or a real one. Please post them down below!!

(Oh, and I copied this from Wikipedia, so credit to whoever put it there!)

I hope you enjoy SUPER LEMUR CHESS!!!