Stockfish is known to prune much more aggressively in order to go much deeper on what it thinks are good continuations. The downside, as we've just seen here, is that it often won't find winning sacrifices.
In an average chess position there are 35 legal moves. If you want to apply the minimax/negamax algorithm to the game tree of chess, you need the following time as a function of depth (measured in plies):
Minimax/negamax analysis time (seconds):

| Depth (plies) | Nodes | Laptop (3500 kN/s) | Deep Blue (200 MN/s) |
|--------------:|------------------:|--------:|-------:|
| 1 | 35 | 0 | 0 |
| 2 | 1 225 | 0 | 0 |
| 3 | 42 875 | 0 | 0 |
| 4 | 1 500 625 | 0 | 0 |
| 5 | 52 521 875 | 15 | 0 |
| 6 | 1 838 265 625 | 525 | 9 |
| 7 | 64 339 296 875 | 18 383 | 322 |
| 8 | 2 251 875 390 625 | 643 393 | 11 259 |
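These numbers are nothing more than 35^depth divided by the analysis speed. A minimal Python sketch to reproduce the table (using the speeds quoted below) could look like this:

```python
# A minimal sketch reproducing the table: nodes = 35 ** depth (average
# branching factor of 35), time = nodes / analysis speed. The speeds are
# the ones quoted below (3500 kN/s for my laptop, 200 MN/s for Deep Blue).
LAPTOP_NPS = 3_500_000
DEEP_BLUE_NPS = 200_000_000

print(f"{'Depth':>5} {'Nodes':>18} {'Laptop (s)':>12} {'Deep Blue (s)':>14}")
for depth in range(1, 9):
    nodes = 35 ** depth
    print(f"{depth:>5} {nodes:>18,} "
          f"{round(nodes / LAPTOP_NPS):>12,} {round(nodes / DEEP_BLUE_NPS):>14,}")
```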
On my laptop Houdini 3 runs at an analysis speed of 3500 kN/s. Deep Blue analyzed at 200 MN/s, which is high even by today's standards: at TCEC, the site where top engines compete for the unofficial world championship title on high-end commercially available hardware, the typical analysis speed is around 50 MN/s.
The table shows that the realistic depth (that is, one reachable in a couple of minutes) for minimax/negamax is 6 plies on a laptop and 7 plies on Deep Blue.
Minimax/negamax analyzes every single position that can arise within the given number of plies, so every position in the tree gets its correct value. However, this is not necessary, because you can apply a trick. Once a move is refuted, minimax/negamax will still search on to find out whether it can be refuted even more convincingly; instead, you can stop searching the refuted move immediately, without examining any further responses to it. In this way you cut branches off the tree, and you can do the same thing at every level recursively. This algorithm is called alpha-beta. With good move ordering (using evaluations from shallow searches to order moves in deeper searches), enough cuts can be made for alpha-beta to reach roughly twice the depth that minimax/negamax can achieve. Since alpha-beta does not lose relevant information, there is no risk of cutting a meaningful branch of the tree: alpha-beta always finds the same best move that minimax/negamax would have found.
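To make the cut concrete, here is a minimal sketch of negamax with alpha-beta cutoffs. The `position` object and its `legal_moves()`, `make()`, `unmake()` and `evaluate()` methods are hypothetical placeholders, not any particular engine's API:

```python
# Minimal negamax with alpha-beta cutoffs. The position API below is a
# hypothetical placeholder, not any real engine's interface.
def negamax_alphabeta(position, depth, alpha, beta):
    if depth == 0:
        return position.evaluate()          # static eval from the side to move's view
    best = -float("inf")
    for move in position.legal_moves():     # move ordering determines how much gets cut
        position.make(move)
        score = -negamax_alphabeta(position, depth - 1, -beta, -alpha)
        position.unmake(move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                   # this move already refutes the opponent's
            break                           # previous move: stop searching its siblings
    return best
```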
By applying alpha-beta you can reach a search depth of 12 plies on a laptop and 14 plies on Deep Blue. This means that by today's standards a perfect search can only be performed to a depth of 12-14 plies, depending on the hardware; any search deeper than that runs the risk of cutting meaningful branches off the tree. Since today's engines reach a depth of at least 20 plies in a couple of minutes, it is evident that those searches cannot be perfect. Any move whose point only becomes visible beyond 12-14 plies can fall victim to pruning.
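As a rough check of where 12 and 14 come from: with ideal move ordering alpha-beta visits on the order of 35^(depth/2) nodes instead of 35^depth, which is the usual best-case approximation behind the "twice the depth" rule of thumb. A small sketch under that assumption shows a 12-ply alpha-beta search costing about the same as the 6-ply minimax row in the table:

```python
# Rough sketch of the "twice the depth" claim: with ideal move ordering,
# alpha-beta visits on the order of 35 ** (depth / 2) nodes instead of
# 35 ** depth, so the same node budget buys roughly double the depth.
def alphabeta_seconds(depth, nodes_per_second, branching=35):
    nodes = branching ** (depth / 2)   # best-case (perfect ordering) approximation
    return nodes / nodes_per_second

print(f"12 plies on the laptop: ~{alphabeta_seconds(12, 3_500_000):,.0f} s")
print(f"14 plies on Deep Blue:  ~{alphabeta_seconds(14, 200_000_000):,.0f} s")
```

This prints roughly 525 s and 322 s, matching the 6-ply and 7-ply minimax rows of the table above.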
Here's a position where the computer says White is winning by a lot, but it is in fact a draw.