How chess engines analyse positions

dude0812

I am trying to make a chess engine right now. My engine looks 2 moves ahead: 2 moves for itself and 2 moves for its opponent. At the end it counts the material and checks whether checkmate has occurred. However, my engine is very slow. For the first couple of moves it replies after only a couple of seconds, but once I reach positions with around 50 legal moves it takes over 20 seconds to make a move. If there are 50 moves on each turn, then my engine needs to generate legal moves for 50^3 = 125,000 chess positions and it needs to evaluate over 6 million positions. This is only at depth 2; modern chess engines analyse 20 moves deep or further. Since the number of chess positions grows exponentially, chess engines must somehow disregard moves; they do not look at all the legal moves. How do they do that? How do they choose which legal moves to look at?
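To show the blow-up I mean, here is a quick calculation (assuming a constant branching factor of 50, which real positions only approximate):

```rust
fn main() {
    let branching: u64 = 50;
    // Number of leaf positions in a full-width search of each depth,
    // assuming every position has exactly `branching` legal moves.
    for depth in 1..=4 {
        println!("depth {} plies: {} positions", depth, branching.pow(depth));
    }
}
```

At 4 plies (my 2 moves per side) that is 50^4 = 6,250,000 leaf positions, which matches the numbers above.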

Also, when can I consider the legal move generator done? Right now it can generate legal moves for 14,000 chess positions per second in positions with around 20 legal moves. Is that good enough? Should I try to optimize this function further, or should I leave it and instead improve which moves my engine looks at and how it evaluates the position at the end of a variation, when it calculates no further moves?

How fast was Deep Blue at finding legal moves?

I am making the engine in Rust, so I wouldn't gain any speed by changing languages. I know that there are transpositions in chess, and I tried using a hashmap to cache the evals of positions, but that only slowed my engine down. My guess is that it messed with branch prediction: if the current chess position exists in the hashmap one branch is executed, and if it doesn't exist then another branch is executed.
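The cache I tried was shaped roughly like this (a simplified sketch, not my actual code; a plain `u64` stands in for a Zobrist-style hash of the board, which my real engine would have to compute for every position):

```rust
use std::collections::HashMap;

// Sketch of an eval cache keyed by a position hash. In a real engine the
// key would be an incrementally updated Zobrist hash of the board; here a
// bare u64 stands in for it.
struct EvalCache {
    table: HashMap<u64, i32>,
}

impl EvalCache {
    fn new() -> Self {
        EvalCache { table: HashMap::new() }
    }

    // Return the cached eval for `key`, or compute and store it on a miss.
    fn eval(&mut self, key: u64, compute: impl FnOnce() -> i32) -> i32 {
        *self.table.entry(key).or_insert_with(compute)
    }
}

fn main() {
    let mut cache = EvalCache::new();
    // First call computes; second call for the same key is a cache hit.
    let a = cache.eval(0xABCD, || 42);
    let b = cache.eval(0xABCD, || panic!("should not recompute"));
    println!("{} {}", a, b); // prints "42 42"
}
```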

speedrun_for_fun

Yeah, engines like Stockfish are works of art.

Kronos288

Top engines do discard some moves, but they are mainly able to search to such high depths because of crazy efficiency and the alpha-beta pruning optimization on the minimax algorithm. They generate moves very fast and are really good at predicting which moves are worth looking at first. Good move ordering lets them quickly discard parts of the search space by establishing bounds on the true value of positions. Look at the Chess Programming Wiki, they've got lots of great resources.
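To make the idea concrete, here is a minimal negamax-with-alpha-beta sketch in Rust on a toy game tree (leaf values and the tree shape are made up for illustration; a real engine would walk actual board positions instead of a prebuilt tree):

```rust
// A node is either a leaf with a static evaluation (from the side to
// move's point of view) or an internal node with child positions.
enum Node {
    Leaf(i32),
    Branch(Vec<Node>),
}

// Negamax with alpha-beta pruning: `alpha` is the best score the side to
// move can already guarantee, `beta` is the best the opponent will allow.
// Once a child scores >= beta, the opponent would never enter this line,
// so the remaining siblings can be skipped (a "beta cutoff").
fn alphabeta(node: &Node, mut alpha: i32, beta: i32) -> i32 {
    match node {
        Node::Leaf(v) => *v,
        Node::Branch(children) => {
            let mut best = i32::MIN + 1;
            for child in children {
                // Scores flip sign between sides (negamax convention).
                let score = -alphabeta(child, -beta, -alpha);
                best = best.max(score);
                alpha = alpha.max(score);
                if alpha >= beta {
                    break; // prune the remaining siblings
                }
            }
            best
        }
    }
}

fn main() {
    use Node::*;
    // A 2-ply tree: three moves, each answered by three replies.
    let tree = Branch(vec![
        Branch(vec![Leaf(3), Leaf(12), Leaf(8)]),
        Branch(vec![Leaf(2), Leaf(4), Leaf(6)]),
        Branch(vec![Leaf(14), Leaf(5), Leaf(2)]),
    ]);
    println!("{}", alphabeta(&tree, i32::MIN + 1, i32::MAX)); // prints 3
}
```

This is exactly why move ordering matters: if the best move is searched first, `alpha` rises quickly and the cutoffs fire early, so whole subtrees never get generated at all.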