Ken Thompson's Belle Chess Machine

meshka

In his book "Studying Chess Made Easy", GM Andrew Soltis claims that the machine could only see four plies into the future and still reached master level. From there he concludes that anyone who can foresee 2 1/2 moves ahead can reach master level. However, when I look at this game I am amazed by a sacrifice whose payoff only comes several moves later, so does this make Soltis' claim false? Thoughts?

[Event "ACM 1978"]

[Site "Washington D.C."]

[Date "1978.12.06"]

[Round "4"]

[White "Blitz 6.5"]

[Black "Belle"]

[Result "0-1"]

1.e4 e5 2.Nf3 Nc6 3.Nc3 Nf6 4.Bb5 Nd4 5.Bc4 Bc5 6.Nxe5 Qe7 7.Bxf7+ Kf8

8.Ng6+ hxg6 9.Bc4 Nxe4 10.O-O Rxh2 11.Kxh2 Qh4+ 12.Kg1 Ng3 13.Qh5 gxh5

14.fxg3+ Nf3# 0-1

MikeCrockett

I haven't read the claim Soltis made, so I will make no comment on it, but I recently came across a chess problem, White to play and win, where the computer insisted Black was ahead based on material advantage alone. The actual mate was beyond the computer's look-ahead horizon; all it could see was the material imbalance. All the moves were essentially forced, and it was a lowly bishop that delivered the fatal blow, yet the computer failed to follow the logical path to the end because it couldn't see it until it was too late.

If all we trusted was the computer's mechanical judgement, then the beauty of that checkmate would have been forever lost.  There is more to chess than how many plies you can see in the future.

wilford-n

MikeCrockett, I'd like to see that problem, and to know which engine claimed the forced mate was winning for the other side, as well as how the engine was configured. That claim stretches credulity beyond the breaking point, unless you're talking about something from the same era as meshka's game above; 1978 might as well be the Stone Age as far as computers are concerned.

All of the major engines now in use extend their search horizon for all forcing lines. A forced mate quite simply is not going to be missed by any modern engine given a reasonable amount of time, unless it's intentionally configured for sub-optimal play.
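To illustrate what an extension is (a toy sketch using the python-chess library, nothing like how a real engine is implemented): a fixed-depth search can treat checking moves as "free", so forcing lines get searched beyond the nominal horizon.

import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    # material balance from the point of view of the side to move
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score if board.turn == chess.WHITE else -score

def negamax(board, depth, ext_left=3):
    if board.is_checkmate():
        return -1000                      # the side to move has been mated
    if board.is_stalemate():
        return 0
    if depth <= 0:
        return material(board)
    best = -10**9
    for move in board.legal_moves:
        board.push(move)
        if board.is_check() and ext_left > 0:
            # the extension: a checking move does not use up a ply of depth
            # (real engines cap their extensions too, hence the small budget)
            score = -negamax(board, depth, ext_left - 1)
        else:
            score = -negamax(board, depth - 1, ext_left)
        best = max(best, score)
        board.pop()
    return best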

In fact, nearly all chess problems are at least checked by engines, and many are sourced from their analysis of master games; for example ChessTempo's entire library of tactical problems.

martinji

It would be fascinating to use top engines to come up with estimates for the maximum Elo possible given a look-ahead of n moves. E.g. take Stockfish unconstrained, Elo 3350 or whatever. Have it play itself hundreds of times, but with the second version limited to a maximum look-ahead of 15 moves (i.e. 30 plies), even in forcing lines, to get an estimate for the 15-move look-ahead Elo. Repeat, reducing the number of moves by one each time. Eventually you'd get an estimate for the Elo of a player who sees all lines perfectly up to their move limit and has a very good positional evaluation, but can only see, say, 3 moves ahead at any time. I'd find it really interesting, and potentially helpful, to learn that if I can't master looking more than 5 moves ahead, I could never be more than 2200 Elo (say).

Obviously, btw, in the successive matches the weaker engine would be used each time, i.e. the second match would be 15-move vs 14-move, not unconstrained vs 14-move.
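Something along these lines could run one such depth-limited match (a rough sketch using the python-chess library and a local Stockfish binary; the path, game count, and depths are placeholders, and a UCI depth limit is only an approximation of a strict look-ahead cap):

import math
import chess
import chess.engine

STOCKFISH = "/usr/local/bin/stockfish"   # placeholder path to a local binary
GAMES = 20                               # placeholder; hundreds would be better

def play_match(depth_white, depth_black):
    # play GAMES games and return White's average score
    total = 0.0
    with chess.engine.SimpleEngine.popen_uci(STOCKFISH) as white, \
         chess.engine.SimpleEngine.popen_uci(STOCKFISH) as black:
        for _ in range(GAMES):
            board = chess.Board()
            while not board.is_game_over():
                engine = white if board.turn == chess.WHITE else black
                depth = depth_white if board.turn == chess.WHITE else depth_black
                result = engine.play(board, chess.engine.Limit(depth=depth))
                board.push(result.move)
            total += {"1-0": 1.0, "1/2-1/2": 0.5, "0-1": 0.0}[board.result()]
    return total / GAMES

def elo_gap(score):
    # standard conversion from an expected score to an Elo difference
    # (guard against scores of exactly 0 or 1 in real use)
    return -400 * math.log10(1.0 / score - 1.0)

# e.g. first pairing: 15-move (30-ply) look-ahead vs 14-move (28-ply)
score = play_match(30, 28)
print(score, elo_gap(score))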

Blougram

I find that claim hard to believe. There is (or used to be) an engine on ICC that only searched four plies ahead (brute force, no extensions, I believe, though I'm no programmer). Its rating fluctuated between 1550 and 1650. I downloaded the source and changed the depth to 6 plies -- and yes, you had to play carefully so as not to lose to tactics. But 1) playing some of my favorite gambit lines in the Blackmar-Diemer allowed me to win by sacrificing and exploiting the horizon effect; 2) its endgame is abysmal. With 6 plies you can't even calculate simple pawn races.

meshka

So I was wondering if Soltis' claim is correct only if the 4 plies do not include forced lines (which the mate above has), but it seems that from the rook sac to the mate there are more than 4 non-forced plies. So I too call BS.

meshka

Agree with martinji, it would be a neat experiment. Probably easier to set it up on ICC or FICS to pit it against other engines and humans, and let it run wild for one day per ply of depth.

SilentKnighte5

Seems plausible to me.   Keep in mind that besides 5-ply perfect tactical vision, you would have good evaluation of the resulting positions and lights out endgame technique.

No losses in time scrambles.  No blowing completely won positions because you've been playing for 5 hours straight or you got lazy.

SilentKnighte5
Blougram wrote:

I find that claim hard to believe. There is (or used to be) an engine on ICC that only searched four plies ahead (brute force, no extensions, I believe, though I'm no programmer). Its rating fluctuated between 1550 and 1650. I downloaded the source and changed the depth to 6 plies -- and yes, you had to play carefully so as not to lose to tactics. But 1) playing some of my favorite gambit lines in the Blackmar-Diemer allowed me to win by sacrificing and exploiting the horizon effect; 2) its endgame is abysmal. With 6 plies you can't even calculate simple pawn races.

Yes, some crude ICC experimental engine isn't going to have the same finely tuned eval that an engine like Komodo does. A Komodo that can only see 5 plies ahead is still going to have tremendous positional play and understanding of material compensation.

It's not just about avoiding silly 3-move tactics.

mnag

I agree with Soltis' statement; however, I would like to add that just seeing 2 1/2 moves ahead is insufficient. You must also correctly evaluate the positions you see.

Blougram

Once again, I'm not a programmer, but looking at the end of some of the Stockfish lines on Chessbomb, it is clear that the positional evaluation is often at odds with the concrete realities on the board: won endgames (e.g. pawn races) evaluated as lost, etc.

This matters little if you do a selective 30-ply search, but it would severely handicap the engine in the case of 5 or 6 plies, and not make it much stronger in the endgame -- or indeed when it comes to 3+ move forcing tactics -- than the ICC engine (MSCP).

MikeCrockett

I didn't intend to imply the problem was a forced mate. There was a way to avoid mate at the cost of a serious loss of material. The key point, however, is that the computer's initial evaluation was incorrect due to the material imbalance. The machine obviously got swamped by the variations precisely because the moves were not forcing but still losing. I don't know how to post the position from a mobile phone, nor can I give credit to the composer; the problem was shown to me at a chess club. If you can follow Forsyth (FEN) notation, the problem is:

8/3P3k/n2K3p/2p3n1/1b4N1/2p1p1P1/8/3B4 w - - 0 1

White to move and win.

MikeCrockett

The point I'm making about the above position is that it doesn't really matter how fast your computer is or how deep it can search. There will always be winning ideas that lie beyond the machine's horizon, and when that happens it will make an incorrect decision. In this example, grabbing material instead of playing for mate would prove fatal. Granted, the faster and deeper the machine can go, the fewer mistakes it will make, but until it can search the entire game tree it will be vulnerable to mistakes in judgement.

wilford-n

Mike, I understood your point, but I'm inclined to think the human evaluation is usually where the error lies. For example, in the problem you give above, Black plays a very "human" move that turns out to be a game-losing blunder: 4...Nf7+?? (the irresistible royal fork), which Stockfish recognizes almost instantly as a mate in 11. That is a far cry from "the computer fail[ing] to follow the logical path to the end;" it is the human who missed the consequences of his material-grabbing error.

In fact, Stockfish evaluates the initial position as better for White from the very first ply, and its evaluation of 1.Nf6+ (+4.01 at one ply) remains remarkably consistent at 27 plies (+4.00). The closest it comes to favoring Black is at 8 plies, when it still gives White a one-pawn edge.
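If anyone wants to repeat that depth-by-depth check, something like this sketch with the python-chess library would do it (the Stockfish path is a placeholder, and the exact numbers will vary with engine version and hardware):

import chess
import chess.engine

FEN = "8/3P3k/n2K3p/2p3n1/1b4N1/2p1p1P1/8/3B4 w - - 0 1"
STOCKFISH = "/usr/local/bin/stockfish"   # placeholder path

board = chess.Board(FEN)
with chess.engine.SimpleEngine.popen_uci(STOCKFISH) as engine:
    for depth in range(1, 28):
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        best = info["pv"][0]              # the engine's preferred first move
        score = info["score"].white()     # evaluation from White's point of view
        print(depth, board.san(best), score)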

I frequently hear claims of computers giving incorrect evaluations. Most of the time when I ask for a concrete example I'm met with either silence or one from something like two decades ago, so I appreciate you taking the time to provide one, even if on further examination it turned out the computer was correct.

SilentKnighte5

There are a couple of cases where engines are prone to give incorrect evals:

1) Early opening

2) Locked middlegames

3) Some "obviously" drawn endgames, such as opposite-colored bishops, or positions with a fortress.

#2 is especially prevalent with Stockfish.

wilford-n

Silentknight: I probably should have specified that I was omitting #1 and #3... although if you have a 7-piece tablebase with extensions for more pawns, #3 becomes much less of a problem. (I also have a minor quibble with the contention that engines fail to recognize fortresses, which generally draw by repetition of position, something computers are quite adept at finding.)

#2 is a problem for most engines (although I don't think Stockfish is any worse than many of its competitors at positional chess, and its rating would imply that it's a good deal better than most), because in those types of games, the position often doesn't open up for 20 or 30 moves (40-60 plies). In those special circumstances, I agree with MikeCrockett's comment about the horizon effect.

On the other hand, as the "mysteries" of positional chess are boiled down to more and more concrete rules -- however complex -- computers will continue to become stronger in that area as well. One of the ways computer algorithms are compensating for poor positional awareness, for example, is to assign numerical values to things like mobility of pieces and control of certain squares (whose value can be weighted based on factors such as relationship to the enemy king, material value and "temporal distance" of enemy contestants for those squares, etc.). And the pace of improvement is accelerating. While computers are still a long way off from "solving" chess, this generation will live to see engines that are literally unbeatable by any human player. We're almost there now.
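As a toy illustration of what I mean by assigning numbers to positional features (made-up weights, using the python-chess library; real engines are vastly more sophisticated):

import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 300, chess.BISHOP: 300,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}
MOBILITY_WEIGHT = 4    # made-up: centipawns per square a piece attacks
KING_ZONE_WEIGHT = 8   # made-up: extra centipawns per attack next to the enemy king

def side_score(board, color):
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, color))
    enemy_king = board.king(not color)
    king_zone = board.attacks(enemy_king)      # squares adjacent to the enemy king
    for square in chess.SQUARES:
        piece = board.piece_at(square)
        if piece is None or piece.color != color:
            continue
        attacked = board.attacks(square)
        score += MOBILITY_WEIGHT * len(attacked)
        score += KING_ZONE_WEIGHT * len(attacked & king_zone)
    return score

def evaluate(board):
    # positive favours White, in centipawns
    return side_score(board, chess.WHITE) - side_score(board, chess.BLACK)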

MikeCrockett

@Wilford I used Stockfish too. What did your version recommend as White's first move?

It's one thing to point out that Stockfish finds the correct line 4 moves into the analysis, and another thing to note that it first needed to find the correct opening move in order to reach that 4th-move position at all.

My machine is older, hence slower, so perhaps your machine found the proper sequence, but it still boils down to the horizon effect being a serious issue.

As you pointed out, machines are rapidly approaching the point where humans cannot defeat them, but that is NOT the same thing as saying the machines play perfectly. At best they play just a bit more precisely than we do. That's not saying much.