What's the internet slang thingy? TLDR?
I don't play engines, so they don't impact my tournament play.
Which internet slang thingy are you referring to? (:
Isn't TLDR internet slang/an abbreviation for "too long; didn't read"?
Ah. Well, carry on, citizen.
Thank You Sir!
I shall carry on, and you do the same.
The mainline theory for the Two Knights line begins --
Up to Ng5, it does not seem like either side can sanely deviate. It's after ...bxc6, however, that we start to get options.
The most-followed human theory is:
Black has compensation for the material disparity. So one question is, where in that move order is there room for improvement, if anywhere? If there are objectively "better" ideas to be played for either side, they may be worth exploring.
There are two side recommendations from the computer world:
The earlier a deviation can be found, the better. If we follow the latest recommendations from Stockfish 10, however, a different line is offered--
Engines are informative, but that is all they can be. Engines using normal search techniques look for the best move, regardless of how difficult it would be for a human to actually find the continuations over the board. Therefore, even if engines provided perfect guidance (and they certainly don't), you would follow their recommendations at your peril.
They are useful. But the lines they suggest need to be vetted for practical playability.
Correct. I view their thoughts as a kind of "alternate reality" to explore for inspiration, more than concrete guidance. And since I'm not any kind of master, by any means, my own thoughts are largely irrelevant unless I help uncover something unusual.
I'm not sure about something like the Two Knights (that's not the Fried liver btw) but most of the 'human' theory you will see in all the highly topical openings will essentially all be engine prep. I'm not too sure I see the distinction between the two.
That's an interesting thought. And thanks for the correction!
So, you're saying that human theory is already basically copycatting engine theory?
I do see what you mean to an extent - comp vs comp games will sometimes produce some interesting-looking ideas, but usually in relatively less explored lines. To survive in the top tournaments at 2750+ level, they have to be prepping very heavily with engines, especially in the razor-sharp stuff like the Najdorf.
There are exceptions of course; sometimes they'll play something a bit offbeat just to 'get a game', but even those lines have had their theory engine-tested.
I think this would have been an interesting question 10 or so years ago when engines were super strong but often lacked super GM level judgement. That maybe still happens, but much less so.
If so, then, do we credit the engine version that discovers/recommends a line? The difference between the recommendations of Stockfish 5 and Stockfish 9 is considerable, for example, as we know SF9 has evolved substantially. But what about SF8 vs SF9? Or even weirder, SF9 vs SF10, which is currently just a development version and might change depending on the day you download it from the developers? Lol.
If we don't know which engines are supporting human mainline theory, then this whole equation gets a whole lot messier, assuming we're copycatting them. We know that engines still do screw up, and some are indeed better than others. What if we humans are still following lines developed by Rybka 3???
This is fascinating, though. Assuming that engine opening theory is its own kind of arms race, slowly improving with more data from self-play over time, I'm interested to see how humans deviate from analysis, especially as our own machines have surpassed us. If we're just stealing from their ideas now, we're essentially admitting we don't even know how to properly play the lines we're being fed! Lol
Another thought is, with the new explosion of interest in machine-learning algorithms in chess (A0 and LC0), it could very well be that some of these established lines might be tested by an alien mindset. Stockfish and its brothers Komodo, Houdini, and others were all spoonfed by human programmers. Imagine the utter chaos that might happen soon if Leela reinvents the wheel o_o
Looking at current TCEC results, we're not quite there yet, at least statistically. Leela's ranked 3rd out of 8, behind the latest versions of Komodo and Stockfish, but it's only managed to pull ahead by beating up ranks 7 and 8 (Fire and Andscacs respectively). Against its main competitors, the other three of the top 4, we have seen many, many draws. Stockfish has been the only one able to squeeze out any kind of victories in the top-4 pairings.
It is worth asking, were the wins decided by any flaws in the openings, or later? One match lasted 49 moves, the other 88.
It all depends on the depth humans and engines are searching to; the more depth, the more precise.
Generally speaking, yes. But since modern engines aren't pure brute-force searchers, instead relying on alpha-beta pruning plus heuristic forward pruning to avoid over-analyzing "unpromising" lines, there are still plenty of blind spots that might or might not get resolved by deeper searching. Sometimes it works, sometimes it doesn't.
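For anyone curious what that pruning actually looks like, here's a minimal, self-contained Python sketch of minimax with alpha-beta cutoffs over a toy game tree. The tree, its values, and the function name are all made up for illustration; real engines add move ordering, transposition tables, and heuristic forward pruning on top of this skeleton:

```python
# Minimal alpha-beta sketch over a toy game tree.
# A node is either a number (a leaf evaluation, in centipawns)
# or a list of child nodes. All values here are illustrative.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):  # leaf: return the static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # cutoff: the minimizer already has a better
                break          # option elsewhere, so the remaining children
        return best            # of this node are never examined at all
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:  # symmetric cutoff for the minimizer
                break
        return best

# Two plies: the maximizer picks a branch, the minimizer picks a reply.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```

One point worth stressing for this thread: plain alpha-beta like the above is lossless. It always returns the same value a full minimax search would, just with fewer nodes visited. The blind spots people associate with engines come from the heuristic forward pruning and reductions layered on top, which really can discard lines a full search would have examined.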
The thing about brute-force algorithms is that they always work; they may require a lot of computational power, but they always work. If Stockfish analyzes a position at depth 20, it's always going to find the best move at that depth, but the best move at depth 20 may not be the best move at depth 30 (the horizon effect). AIs like AlphaZero rely on something more like experience, as humans do, which is the thing that actually doesn't always work; that's why Stockfish beats AlphaZero in Chess960 and other non-standard types of chess.
"The think about brute-force algorithms is that those always work, they may require a lot of computational power but they always work. If stockfish analyzes a position at depth 20 it's going to find the best move at that depth always but the best move at depth 20 may not be the best move at depth 30"
Stockfish is not a brute-force algorithm. It uses alpha-beta pruning.
It will not, however, always work. It still has the possibility of missing critical lines if it discounts them early in the analysis due to a seeming lack of potential. This happens less often in machine play now than it used to, but it's still a possibility that must be accounted for.
For example, what if White forces a win from move 1 with 1. g4? Stockfish would assume, like most of us, that the most promising lines begin with 1. d4 and 1. e4, and will tend to ignore/prune most of the analysis of moves that look relatively stupid, so it would miss a 1. g4 forced win unless it sat on the position for an extremely deep search, long enough to show that the initially promising moves were useless.
"Alpha-beta pruning uses a minimax tree. It's still a brute-force algorithm."
We can argue semantics all day ^_^ but at the end of the day, a pure brute-force method would require fully searching all possible moves, which becomes exponentially harder the deeper the analysis goes. Alpha-beta pruning narrows the processing requirements of the search, while leaving open the potential for blind spots.
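To put rough numbers on "exponentially harder": with branching factor b and depth d, pure minimax visits on the order of b^d positions, while alpha-beta with perfect move ordering approaches b^(d/2), roughly doubling the depth reachable for the same work. A quick back-of-envelope sketch in Python (b = 35 is a commonly cited average branching factor for chess; real engine numbers vary):

```python
# Back-of-envelope node counts: full minimax vs. best-case alpha-beta.
b = 35  # commonly cited average branching factor for chess
for d in (4, 8, 12):
    full = b ** d           # brute force: every move at every ply
    pruned = b ** (d // 2)  # alpha-beta with perfect move ordering
    print(f"depth {d:2}: minimax ~{full:.2e} nodes, alpha-beta ~{pruned:.2e}")
```

At depth 12 that's roughly 3.4 × 10^18 nodes versus about 1.8 × 10^9, which is why pruning of some kind isn't optional for deep search.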
This is going to be an increasingly interesting question as we face stronger computer opposition. Engines fight each other very often, relatively speaking; these AIs are happy to run 24/7 as long as the computer isn't interrupted, but a lot of engine matches will very likely never be seen by human eyes. A few have been interesting to the human public (AlphaZero and Leela, for example), but these are pretty much the exception to the norm.
So, this thread will be an attempt to explore opening lines and compare what the "human" repertoire advises versus the "computer" lines. Since I'm just an amateur enthusiast, take my exploration for whatever it's worth, and feel free to counter or build on it wherever you see fit.
Now, all that said:
First off, how do engines treat opening theory? It is worth noting that the approach is a bit different from human over-the-board fighting. Humans might choose lines to play against specific opponents, but in general, we search for opening lines that will award us positions we understand relatively well and can play with few(er) mistakes (than our opponent).
Computer analysis is a bit different. You generally see one of two approaches:
1) No opening book at all. Most engines without an opening book will, depending on search depth, go for a really dry starting sequence (think 1. e4 e5 2. Nf3 Nc6 3. Bc4 Bc5, or 1. d4 d5 2. Bf4 Nf6, etc.). This approach actually isn't usually very helpful: unless you run the analysis, per move, to a search depth at which they find something unusual, they will basically try to put their opponent to sleep. Note that this wasn't the case with AlphaZero, however, and I'll discuss that later.
2) The second approach is to combine computer analysis with an openings "database", preferably one updated by newly finished engine matches.
This gets very interesting. Bear with me while I try to explain:
Using this approach, you'd start with some preselected trees of openings. The engine would basically play by rote until reaching moves that are uncharted from previous play, following the statistically best lines. A few potential problems do arise, however:
2a) This basically eliminates the potential for any exploration before a player gets out of memorized theory;
2b) Assuming the lines are regularly updated, and the statistically best lines are chosen, the engines would eventually memorize a few select lines that force draws for both sides, 100% of the time, and always play them without any deviations. Whenever a potential opening line is discovered that has a high winning percentage, the opposing player would simply avoid letting it happen.
2c) Another issue, in general, to bear in mind with engine play is that they don't "imagine" the board the same way we do. To their eyes, the entire board is a graph of centipawns, to be evaluated within their search horizon along the "best" continuations they can find. This analysis is far from perfect, though, and what it translates into, for openings, is sometimes a very over-optimistic machine trying to hold onto an imaginary advantage that might not actually exist.
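To make 2c concrete, here's a deliberately naive static evaluator in Python that sees nothing but material, in centipawns. The piece values and board encoding are my own illustration, not any engine's actual code; real evaluation functions add piece activity, king safety, pawn structure, and many more terms:

```python
# Naive static evaluation: material only, in centipawns.
# Positive = good for White, negative = good for Black.
# The board here is a hypothetical dict of square -> piece letter,
# uppercase for White and lowercase for Black.

PIECE_VALUES = {"p": 100, "n": 320, "b": 330, "r": 500, "q": 900, "k": 0}

def evaluate(board):
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score  # +100 means White is "a pawn up" in the engine's eyes

# White has an extra pawn; material says +100 even if, in human terms,
# Black's activity fully compensates (the Two Knights situation exactly).
position = {"e4": "P", "d4": "P", "e5": "p", "g1": "N", "b8": "n"}
print(evaluate(position))  # -> 100
```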
So, with all that in mind, we can try to offer some solutions. Engine opening books might include some randomness in the selection of moves, for example, or disallow an engine from repeating previous lines. An openings book might also force an engine to analyze a position fresh, despite previously established lines, if an arbitrarily chosen minimum number of games hasn't yet been played in that line. There are still issues with all of this, so it's not likely to be an exact science anytime soon.
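As a sketch of what those fixes could look like inside a book's move-selection logic, here's a hypothetical Python fragment. The data layout, the MIN_GAMES threshold, and the weighting scheme are all invented for illustration; the point is just "pick among book moves with some randomness, and fall back to fresh analysis when a line is under-explored":

```python
import random

# Hypothetical book: position key -> list of (move, games_played, score_pct).
MIN_GAMES = 50  # arbitrary minimum before a line counts as established theory

def pick_book_move(book, position_key):
    entries = book.get(position_key, [])
    # Only trust lines with enough games behind them.
    trusted = [(move, games, score) for move, games, score in entries
               if games >= MIN_GAMES]
    if not trusted:
        return None  # out of book (or under-explored): the engine must analyze
    # A weighted-random choice by score keeps some variety, instead of
    # deterministically repeating the single statistically best line forever.
    moves = [move for move, _, _ in trusted]
    weights = [score for _, _, score in trusted]
    return random.choices(moves, weights=weights, k=1)[0]

book = {"start": [("e4", 900, 54.0), ("d4", 800, 53.0), ("g4", 3, 65.0)]}
print(pick_book_move(book, "start"))  # "e4" or "d4"; "g4" has too few games
```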
My own research will focus on one or two openings at a time, and I'll post thoughts as I see them. I hope to find, if not new ideas, then at least a few lines here and there where our established "mainlines" deviate from what computer analysis recommends, and, if so, why. Right now, I'm most interested in how our machines look at the Two Knights (edited) theory. It's an interesting equation, because Black sacrifices a pawn but still gets enough compensation to keep an equal position, according to human analysis. What if we're wrong, and it's not actually equal at all? If so, who actually wins?