You would have to run Stockfish at a search depth of 1,050 ply for it to "prove" that the starting position is a forced win, so it's a fantastic test of the limits of alpha-beta search.
Does engine opening theory deviate/improve on human theory?
For example, starting from move 1, what if White forces a win with 1. g4? Stockfish would assume, like most of us, that the most promising lines lie in 1. d4 and 1. e4, and will tend to ignore/prune most of the analysis of moves that look relatively stupid, so it would miss a 1. g4 forced win unless it sat on the position for an extremely deep search showing that the initially promising moves were useless.
"Alpha-beta pruning uses a minimax tree. It's still a brute-force algorithm."
We can argue semantics all day ^_^ but at the end of the day, a pure brute-force method would require fully searching all possible moves, which becomes exponentially harder the deeper the analysis goes. Alpha-beta pruning does help narrow the processing requirements of the search, while leaving the potential for blind spots.
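To make the semantics argument concrete, here's a toy sketch of my own (not engine code): pure minimax versus the same search with alpha-beta cutoffs, on a tiny depth-2 tree. Both return the same value; alpha-beta just visits fewer nodes, which is the whole point of the pruning.

```python
INF = float("inf")

class Node:
    """Toy game-tree node; `score` is a leaf evaluation from the
    root player's point of view. Levels alternate whose turn it is."""
    def __init__(self, score=0, children=()):
        self.score = score
        self.children = list(children)

visited = 0  # global node counter, reset before each search

def minimax(node, maximizing):
    """Pure brute force: every node in the tree gets visited."""
    global visited
    visited += 1
    if not node.children:
        return node.score
    pick = max if maximizing else min
    return pick(minimax(c, not maximizing) for c in node.children)

def alphabeta(node, maximizing, alpha=-INF, beta=INF):
    """Same result, but we stop searching a branch once we can prove
    the opponent would never allow it (alpha >= beta)."""
    global visited
    visited += 1
    if not node.children:
        return node.score
    for c in node.children:
        v = alphabeta(c, not maximizing, alpha, beta)
        if maximizing:
            alpha = max(alpha, v)
        else:
            beta = min(beta, v)
        if alpha >= beta:
            break  # cutoff: this line "mathematically can't be better"
    return alpha if maximizing else beta

# 3 moves for each side, 9 leaves total
leaves = [[3, 5, 2], [1, 8, 6], [0, 7, 4]]
root = Node(children=[Node(children=[Node(s) for s in row]) for row in leaves])

visited = 0
mm = minimax(root, True)
mm_nodes = visited
visited = 0
ab = alphabeta(root, True)
ab_nodes = visited
print(mm, ab, mm_nodes, ab_nodes)  # -> 2 2 13 9
```

Same answer (2), but alpha-beta skipped 4 of the 13 nodes even on this tiny tree; the savings grow enormously with depth.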
Do you even understand alpha-beta pruning? It only discounts lines that mathematically CAN'T be better, but as I said in my first post, it all depends on the depth. If you have stockfish at depth 50 it's not gonna miss anything.
Mathematically better according to what? What if 1. e4 h5 2. Qxh5?? forces a win for White? Stockfish would never see it, because "it mathematically can't be better" at any depth before it becomes a converted win. Since alpha-beta pruning would ignore the move, it would never be seen.
"If you have stockfish at depth 50 it's not gonna miss anything."
Run Stockfish 10 as deep as you want through this, and get back to me on that. See how often it misses the critical line and draws. https://www.chess.com/forum/view/fun-with-chess/longest-mate-official---mate-in-545
1. Stockfish has an opening book.
2. Alpha-beta pruning discounts the lines that can't be better at the depth set, so again, as I said in my first comment, it's all about the depth.
3. Stockfish doesn't look into the Qxh5 line because its minimax tree is all about material or potential material gain, so a -8 on move 2 obviously can't be good.
4. We can argue, but please make logical arguments.
1. An opening book designed 1a) by flawed humans and 1b) by flawed previous iterations of chess engines. We can't build a perfect opening book from flawed players, so we know for a fact that there are still mistakes in our theory.
2. Let's say there are four variations. Variations 1, 2, and 3 all appear best at depth 10, and variation 4 appears to lose at depth 10. The engine has been programmed to prune all lines that look like a loss at depth 10, so it stops looking at variation 4 entirely. Variation 4 at depth 24 converts to a win, but variations 1, 2, and 3 all look drawn. The engine would still not see variation 4 at depth 24, because it shut off calculating variation 4 entirely at depth 10, you see? Only if variations 1, 2, and 3 all prove to be losses would this engine then broaden the search to the previously likely-losing lines.
3. See 2.
4. I do make logical arguments. Then we fail to understand each other and argue in circles. I have plenty of time to waste, so I'm happy to do it.
You continue to not see my point @ #2.
As for #1, who is to say A0 wasn't suggesting improvements to lines in the matches where it did defeat Stockfish? That would be worthwhile analysis. It definitely had some new ideas in the 1. c4 / 1. Nf3 lines.
I'll try to explain #2 better:
When we say "Stockfish analyzed this position to a depth of 25", what we actually mean is "it analyzed two or three main variations to a depth of 25, and anything less promising got shut off somewhere earlier". Does that make more sense?
@Play_e5
A couple of things. First, with a branching factor of 2 or 3 you definitely couldn't analyze to a depth of 100 with any computer.
Modern engines like SF already have branching factors around 2 and don't just instantly hit depth 100 (2^100 is still a very, very large number).
Second, the branching factor is so low for modern engines like SF because they are NOT merely AB engines.
While AB is completely mathematically "safe" in that it preserves the result a brute-force minimax search would get to the same depth, modern engines use a bunch of speculative reductions and extensions that are not similarly "safe".
They work much more often than they fail, which is why they're used, of course, but search in an engine like SF is most assuredly not even functionally equivalent to minimax.
It can miss things a minimax/pure AB search would find at the same "depth" (really just iterations of search in a modern engine, since some lines, most even, are not searched to the nominal "depth", but rather to something less or greater).
Cheers!
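To illustrate what "not safe" means in practice, here's a toy sketch of my own (a drastic simplification, not actual Stockfish code) of a crude late-move reduction bolted onto fail-soft negamax alpha-beta: moves ordered third or later are first searched one ply shallower, and only re-searched at full depth if they beat alpha. On a tree crafted so the third move's static eval looks terrible but its subtree wins two plies deeper, plain alpha-beta finds the win and the reduced search misses it.

```python
INF = float("inf")

class Node:
    """Toy game-tree node: `static` is the evaluation from the side
    to move at this node; leaves have no children."""
    def __init__(self, static, children=()):
        self.static = static
        self.children = list(children)

def alphabeta(node, depth, alpha=-INF, beta=INF):
    """Plain fail-soft negamax alpha-beta: provably returns the same
    value as full minimax to the same depth."""
    if depth == 0 or not node.children:
        return node.static
    best = -INF
    for child in node.children:
        score = -alphabeta(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # cutoff: opponent already has a refutation
            break
    return best

def lmr_search(node, depth, alpha=-INF, beta=INF):
    """Alpha-beta plus a crude late-move reduction. Speculative:
    unlike plain alpha-beta, it can miss the best move."""
    if depth == 0 or not node.children:
        return node.static
    best = -INF
    for i, child in enumerate(node.children):
        if i >= 2 and depth >= 2:
            # late move: try it one ply shallower first
            score = -lmr_search(child, depth - 2, -beta, -alpha)
            if score > alpha:  # looked promising after all: re-search
                score = -lmr_search(child, depth - 1, -beta, -alpha)
        else:
            score = -lmr_search(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

# Third move's static eval looks awful (-10), but its subtree
# turns out to win material two plies deeper.
deep_win = Node(-10, [Node(-10, [Node(-7)]), Node(-10, [Node(-7)])])
root = Node(0, [Node(-1), Node(-1), deep_win])

print(alphabeta(root, 3))   # 7: plain alpha-beta finds the deep line
print(lmr_search(root, 3))  # 1: the reduced search prunes it away
```

Real engines use far more careful reduction conditions (move ordering, history heuristics, etc.), so this fails far less often in practice, but the principle is the same: nominal "depth 20" does not mean every line was searched 20 plies deep.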
Thanks all for the insights!
I would like to credit the LeelaChess Discord channel. I actually referenced my discussion with Playe5 there, assuming I was correct. Then, to summarize, it became clear there are actually two different things we're referring to when we say "alpha-beta pruning"--
1) a mathematically flawless AB, using flawless evaluation to determine choices, but also
2) our reality's AB models, which are fed evaluations we know have yet to be perfected, and therefore also cannot be perfect alpha-beta pruning.
So @Playe5, I think we both were arguing two different things at the same time. Credit where it's due.
Adding on to what @Dyslexic_Goat said, and also from the Discord:
- SF doesn't use a perfect AB implementation. It cuts corners (prunes branches, based on other heuristics, that a perfect AB wouldn't), so when it says depth 20, it doesn't mean it has searched all possibilities to depth 20.
- This is done in basically every good/popular chess engine, since the search tree becomes so large.
Interesting thread, and, yes!
"The hundredth monkey effect is a hypothetical phenomenon in which a new behaviour or idea is claimed to spread rapidly by unexplained means from one group to all related groups once a critical number of members of one group exhibit the new behaviour or acknowledge the new idea."
I'm sticking with linear thinking and its priority of utopia... as I wrote in one of the previous 'will computers solve chess?' threads. It seems like an epidemic. I've read many things blamed for it... but at this point we are not yet forced to read it, so...
Engines are informative but that is all they can be. Engines using normal search techniques look for the best move, regardless of how difficult it would be for a human to actually find the continuations over the board.
This raises the question: could an engine be modified so that it remained strong, but used a different search technique in some respects, so as to flag which lines require a high search depth to manage their consequences?
Engines are informative but that is all they can be. Engines using normal search techniques look for the best move, regardless of how difficult it would be for a human to actually find the continuations over the board.
This raises the question: could an engine be modified so that it remained strong, but used a different search technique in some respects, so as to flag which lines require a high search depth to manage their consequences?
Yes and no. You can edit some parameters in Stockfish and its closest neighbors (Houdini & Komodo) that change its level of aggression (relatively), or how it treats certain things like pawns, king safety, etc.
You might be thinking of the setting known as "contempt", which alters how much the engine wants to play for a draw; higher contempt makes Stockfish ignore drawish chances and try to play for a win more. But this just changes the odds of drawing a given position, not necessarily telling the engine to avoid lines that require long search depths.
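For what it's worth, in the Stockfish of that era contempt was exposed as a plain UCI option measured in centipawns (it was removed in later versions), so you can experiment with it from any UCI console. The exact value here is just an illustration:

```
uci
setoption name Contempt value 50
isready
position startpos
go depth 20
```

A positive contempt biases the evaluation against draws with the side Stockfish is playing; it does nothing to steer the search toward or away from lines that need unusually deep calculation.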
Engines tend to be weakest in the opening and the endgame. For openings, it is the broad calculation necessary that can throw them off balance; for endgames, it is the much deeper horizon required to see a position convert to a win/loss, and/or the need to understand a material imbalance that might matter far more in the endgame than in the middlegame (four pawns vs. a rook, for example, or two knights plus a rook vs. a queen).
Engines have generally worked around these issues using memorized opening books plus endgame tablebases, but we're nowhere near a flawless setup in either area yet.
Does engine opening theory deviate/improve on human theory?
Sometimes.
Sometimes it suggests:
the human preferred move
An objectively worse move
An objectively better, but impractical move
An objectively better, and practical move
For top players this isn't so much about finding strong moves that gain an advantage; it's more about finding moves that in the past would have been considered dubious, but allow you to draw as Black. Or moves as White that give up equality on purpose to reach a middlegame where engines can't give you the final word.
So somewhat ironically, it's more about finding moves that make the position equal.
I do see what you mean to an extent - comp vs comp games will sometimes produce some interesting-looking ideas, but usually in relatively less explored lines. To survive in the top tournaments at 2750+ level, they have to be prepping very heavily with engines, especially in the razor-sharp stuff like the Najdorf.
There are exceptions of course, sometimes they'll play something a bit offbeat just to 'get a game', but even these lines have their theory engine tested.
I think this would have been an interesting question 10 or so years ago when engines were super strong but often lacked super GM level judgement. That maybe still happens, but much less so.
And this is another important distinction... there's a difference between a professional using an engine to prep, and most amateurs who run an engine on their phone or browser for 20 seconds and expect to get a good answer.
To use an engine correctly you have to understand its strengths and weaknesses, and then understand how it works so you can avoid the weaknesses... actually not that hard to do, mostly it takes time, but I'd guess most amateurs have no clue, and for example think letting Stockfish reach some stupidly high depth on move 5 will give them the best move.
I do see what you mean to an extent - comp vs comp games will sometimes produce some interesting-looking ideas, but usually in relatively less explored lines. To survive in the top tournaments at 2750+ level, they have to be prepping very heavily with engines, especially in the razor-sharp stuff like the Najdorf.
There are exceptions of course, sometimes they'll play something a bit offbeat just to 'get a game', but even these lines have their theory engine tested.
I think this would have been an interesting question 10 or so years ago when engines were super strong but often lacked super GM level judgement. That maybe still happens, but much less so.
And this is another important distinction... there's a difference between a professional using an engine to prep, and most amateurs who run an engine on their phone or browser for 20 seconds and expect to get a good answer.
To use an engine correctly you have to understand its strengths and weaknesses, and then understand how it works so you can avoid the weaknesses... actually not that hard to do, mostly it takes time, but I'd guess most amateurs have no clue, and for example think letting Stockfish reach some stupidly high depth on move 5 will give them the best move.
Then there's the third group: weird engine enthusiasts whose only interest is in finding new ideas from the machines, regardless of whether it improves our own (or any human's) play, just for the sake of finding new ideas. I claim to fall into that category.
"I would be far more interested in my own skill if I thought I could do something with it" -- Dyslexic_Goat
Continuing my earlier project of trying to find different lines for the Two Knights Defense--
22 moves in, it looks playable enough, but objectively White looks better in this line than in the 8... Bd6 mainlines. So while not a true "mistake", 8... Bd6 is still relatively dubious if it allows White to enter more profitable lines.
I followed some computer matches, for example Stockfish - Houdini and Stockfish - AlphaZero.
What I noticed is that the computer that plays reliable openings like the Nimzo-Indian, Queen's Indian, Ruy Lopez, etc. scores better than a computer that has dubious opening lines in its opening book, like the Traxler Gambit, the Schliemann Gambit, the Chigorin Defence, or even the King's Indian. That shows, in my opinion, that an opening taken randomly out of a book with dubious lines has a big influence on the match result. So openings are important for the computers. Maybe if you let them choose freely from move one they could become even better. And it must be possible to let them learn from analysed games, and the next time not play the losing move again but another move. This must be possible to program: that they keep a database of their own played games.