Chess (PlayStation game) was first released on November 29, 2001, developed by Success and published by A1 Games.
This PlayStation adaptation of the classic board game allows two-player play or single-player matches against nine different computer opponents. It also provides narrated video clips describing the history and rules of the game.
I had this game when I was a child; I used to play it with my brother and my father. Back then we barely knew how to play, and this particular game didn't entertain us much, so we didn't pay it much attention. However, I remember that its AI, especially at the highest level, was quite strong. But how strong, exactly…?
Thread below.
I decided to download a PS1 emulator and pit the game's AI against the current chess.com bots.
Keep in mind that the game was released in 2001 for the PlayStation 1 (a console launched in 1994). We're talking about a gap of more than 20 years.
My hypothesis: I don't know much about programming, so I couldn't quantify the PS1 AI's Elo in advance, but I hoped it would at least be capable of playing at an 1800-2000 level. If it played lower than that, this experiment would end very quickly, haha.
I should mention that I played more games than these, but since they weren't saved, I won't discuss them.
The matchups were:
PS1 AI Lvl 9 (White) vs Komodo 3200 (Black) = Komodo won. (Sure xD)
PS1 AI Lvl 9 (White) vs Nora-BOT 2200 (Black) = PS1 AI won.
PS1 AI Lvl 9 (White) vs Alexander-BOT 2450 (Black) = PS1 AI won.
PS1 AI Lvl 9 (White) vs Alexander-BOT 2450 (Black) = PS1 AI won.
Alexander-BOT 2450 (White) vs PS1 AI Lvl 9 (Black) = Alexander-BOT won.
PS1 AI Lvl 9 (White) vs Naroditsky-BOT 2650 (Black) = Naroditsky-BOT won.
PS1 AI Lvl 9 (White) vs Naroditsky-BOT 2650 (Black) = Draw.
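Out of curiosity, here is a rough way to turn those results into a single number. This is only a back-of-the-envelope sketch: it assumes the chess.com bot ratings are accurate, counts only the seven saved games above, and applies the standard Elo expected-score formula.

```python
# Rough performance-rating estimate for the PS1 AI from the seven saved
# games listed above. Assumes the bots' listed ratings are accurate and
# that the standard Elo logistic model applies.

def expected_score(rating, opponent):
    """Expected score of `rating` against `opponent` under the Elo model."""
    return 1 / (1 + 10 ** ((opponent - rating) / 400))

# (opponent rating, PS1 AI score): 1 = win, 0.5 = draw, 0 = loss
results = [
    (3200, 0),                          # Komodo
    (2200, 1),                          # Nora-BOT
    (2450, 1), (2450, 1), (2450, 0),    # Alexander-BOT
    (2650, 0), (2650, 0.5),             # Naroditsky-BOT
]

actual = sum(score for _, score in results)  # 3.5 out of 7

# Bisect for the rating whose expected total equals the actual total.
lo, hi = 1000.0, 3500.0
while hi - lo > 0.5:
    mid = (lo + hi) / 2
    if sum(expected_score(mid, opp) for opp, _ in results) < actual:
        lo = mid
    else:
        hi = mid

print(f"Rough performance rating: ~{round(lo)}")
```

On these seven games it comes out in the mid-2500s, though with a sample this small (and the unsaved games excluded) the number is only a curiosity.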
Note: You can see all these matches in my game history.
First Match, Komodo 3200: a chance to compare the maximum difficulty of the 2001 AI against chess.com's current strongest bot and appreciate the difference.
Conclusions: First pleasant surprise; the PS1 engine comes with a very complete opening book which, as we will see, it plays with a precision between 90 and 99%.
The first moves come out instantly; after that, it takes about a minute to think per move.
Result: The PS1 AI survives the opening. Once the middlegame arrives, Komodo simply wins a piece or builds up a decisive advantage. The difference in calculation depth is abysmal; Komodo reaches every endgame with a decisive advantage. (There is a sketch of why an engine behaves this way after the game link below.)
Note: I did not save that game, but the gap between the two engines is even more noticeable when Komodo plays White.
English opening: Four Knights, kingside fianchetto line.
Precision: 86.5 (PS1 AI) vs 92.9
Game score: 2400
Opening precision: 98.0
Middlegame precision: 65.5
Endgame precision: - (I resigned on the PS1 AI's behalf).
https://www.chess.com/game/computer/152484673
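As an aside on why an engine can play a near-perfect opening and then collapse: we don't know how Success actually implemented this engine, but the behavior described above (instant, book-perfect early moves, then roughly a minute per move and a steep drop in quality) matches the classic design of a hand-built opening book backed by a shallow fixed-depth search. A purely hypothetical sketch of that structure, with every name, book entry, and depth made up for illustration:

```python
import random

# Hypothetical sketch of a 2001-era engine's move loop -- NOT the actual
# PS1 code. The book contents, depth, and search function are made up.

OPENING_BOOK = {
    # position key (e.g. a FEN string) -> prepared replies
    "startpos": ["e2e4", "d2d4", "c2c4"],
    # ... a real book would hold thousands of hand-entered lines
}

def choose_move(position_key, legal_moves, search_fn, depth=6):
    """Answer instantly from the book while possible, otherwise search."""
    book_moves = OPENING_BOOK.get(position_key)
    if book_moves:
        return random.choice(book_moves)  # in book: zero thinking time
    # Out of book: a fixed-depth search. On PS1-class hardware a minute
    # of thought buys only a handful of plies, while a modern engine
    # like Komodo searches far deeper -- hence a middlegame collapse
    # right after a flawless opening.
    return search_fn(position_key, legal_moves, depth)
```

A split like this would explain both the instant early moves and why the opening precision sits at 90-99% right up until the book runs out.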
Second Match, Nora-BOT 2200: My hypothesis was that the engine would only stand out in the opening, and that a bot steering it off the most orthodox paths might cause it trouble.
In this game Nora-BOT brings the queen out early, but far from being a problem, the PS1 engine converts its middlegame advantages into a won endgame. Very spectacular: playing White, the PS1 AI can beat a current 2200 bot.
English opening: Agincourt Defense.
Precision: 88.9 (PS1 AI) vs 81
Game score: 2000
Opening precision: 81.1
Middlegame precision: 88.5
Endgame precision: 91.7
https://www.chess.com/game/computer/152773717
Third Match, Alexander-BOT 2450: Time to face a heavyweight.
Two games with White and two with Black (I did not save the second game with Black). Overall result: 2 - 2.
Hypothesis: Hard to call, since the PS1 AI had clearly beaten the 2200 bot.
Result: I found that both the chess.com bots and the PS1 AI effectively have one Elo rating when playing White and another when playing Black. Both engines are far stronger with White and have little to no chance with Black.
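To put that asymmetry in perspective, the Elo expected-score formula can be inverted to express a score percentage as a rating gap. A small sketch, assuming the standard logistic model (the 55% figure for White in human master play is a well-known ballpark, not something measured in these matches):

```python
import math

def score_to_elo_gap(score):
    """Invert the Elo expected-score formula: the rating gap implied by
    scoring `score` (a fraction strictly between 0 and 1)."""
    return -400 * math.log10(1 / score - 1)

print(score_to_elo_gap(0.55))  # ~35 Elo: White's edge in human master play
print(score_to_elo_gap(0.90))  # ~382 Elo: an engine that nearly always
                               # wins with White and loses with Black
```

If these engines really do win almost everything with White and lose almost everything with Black, the implied White/Black split is hundreds of Elo points, far beyond the ~35 points the first move is usually worth.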
Game 1: Very even, but full of errors and inaccuracies in the middlegame. It reaches a complex endgame of rook and four pawns versus bishop, knight, and two pawns. The PS1 AI, with the rook, wins the endgame.
Ruy Lopez opening: Morphy Defense, Columbus Variation.
Precision: 84.9 (PS1 AI) vs 75.8
Game score: 2050
Opening precision: 100
Middlegame precision: 79.7
Endgame precision: 84.9
https://www.chess.com/game/computer/152783889
Game 2: Alexander wins. After a somewhat superior middlegame, it reaches the endgame with a decisive advantage and a pawn about to promote. There is no record of this match. Sorry :/
Game 3: A very solid victory by the PS1 AI. Much superior in the opening, and with a decisive advantage by move 19 after a rook-for-bishop trade. A won endgame, played correctly.
Ruy Lopez opening: Bird Defense.
Precision: 90.4 (PS1 AI) vs 81.5
Game score: 2150
Opening precision: 95.6
Middlegame precision: 86.8
Endgame precision: -
https://www.chess.com/game/computer/153140685
Game 4: Alexander gains an advantage in the opening thanks to many inaccuracies by the PS1 AI. Around move 15 the PS1 AI cleanly blunders a piece, a serious mistake. Around move 25 I resign on the PS1 AI's behalf: Alexander is a piece up with a decisive advantage.
French Defense: Classical, Steinitz Variation.
Precision: 91 vs 79 (PS1 AI)
Game score: 1700
Opening precision: 93
Middlegame precision: 67.4
Endgame precision: -
https://www.chess.com/game/computer/154671651
Conclusion: I see it as very unlikely that either engine could win or even draw playing Black.
Which makes me wonder… how high an Elo can the PS1 AI reach with White, and how high with Black?
Time to face Daniel Naroditsky 2650.
Game 1: Very balanced; both sides gain slight advantages from opening inaccuracies, but on move 25 the PS1 AI makes a mistake. Between good moves and inaccuracies, Daniel N. reaches a completely won endgame and promotes pawns.
Modern Defense with 1.d4
Precision: 78.6 (PS1 AI) vs 88.4
Game score: 1900
Opening precision: 90.2
Middlegame precision: 81.7
Endgame precision: 66.1
https://www.chess.com/game/computer/160148099
Game 2: A very strange game. Daniel's bot dominates the middlegame against the PS1 AI, clearly surpassing it in calculation depth. From moves 20 to 40, Black is winning thanks to a passed pawn.
Around move 40, Black trades off the passed pawn and the game reaches a dead-drawn endgame.
English opening: Reversed Sicilian, Kramnik-Shirov Counterattack.
Note: The precision figures for this game are not reliable: the position is dead drawn by move 43, yet the game continues until move 94. In a dead-drawn position almost any move keeps the evaluation level, so dozens of shuffling moves get counted as accurate and inflate the numbers.
Precision: 91.9 (?) vs 91.9 (?)
Game score: 2250
Opening precision: 87.5
Middlegame precision: 85.7
Endgame precision: 94.2 (?)
https://www.chess.com/game/computer/160415361
The PS1 AI, released in 2001, is capable of standing up to a current 2650 engine and even holding a lost position against it.
Final Conclusion: I believe the programmers, testers, and the team in general must feel very satisfied with their work, and I am curious whether they had help from GMs in making the game.
That an AI released in 2001, with the limitations of its console, can hold its own against a current 2650 engine is quite impressive.
As for its maximum performance with Black, I will leave that for another time.
What do you think? Did you expect these results? Do you think you could analyze one of the games in depth and show us things that less experienced players miss? Did you find the games spectacular or plain? What do you think of the game itself? I'm really interested in your opinions!