
     There are lots of things an instructor can do during a lesson to help a student, such as reviewing games, answering questions, or having the student do "indicative" puzzles. But I also include three types of exercises which I periodically rotate for a student who has ongoing lessons:

  • a de Groot exercise, where the student "thinks out loud" in an interesting position, trying to find a move just as he would in a long time control game,
  • a game I play "out loud" against an intermediate computer, so the student sees in "real time" the thinking process I use on each move in order to play a strong game, and
  • a DATSCAN ("Dan-Assisted Thinking").
     Unlike the first two, which are solo acts, a DATSCAN combines those exercises and asks the student to help me find a move in a random position neither of us has seen before.
     The way I pick the position for a DATSCAN is easy: I go to a database of a recent event, such as Tata Steel 2013, and ask my student to pick a game randomly. So if there are 91 games in the "A" section, he/she picks a number from 1-91. Then I ask the student to pick a move number and a color, such as "White's 29th move".
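The selection procedure above is easy to simulate. Here is a minimal sketch of it in Python; in a real lesson the student makes these choices, and the middlegame move-number range used here is my own illustrative assumption, not part of the original procedure.

```python
import random

def pick_datscan_position(num_games):
    """Simulate the DATSCAN selection: a random game, move number, and color.

    In the lesson the student supplies these choices; here we draw them
    at random. The move-number range 15-40 is a hypothetical stand-in
    for "some middlegame move".
    """
    game = random.randint(1, num_games)        # e.g. 1-91 for the Tata Steel 2013 "A" section
    move = random.randint(15, 40)              # hypothetical middlegame range
    color = random.choice(["White", "Black"])  # whose move to analyze
    return game, move, color

game, move, color = pick_datscan_position(91)
print(f"Game {game}, {color}'s move {move}")
```

If the chosen move turns out to be forced (a recapture, say), the procedure in the text simply advances to that player's next real choice.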

     We go through the game quickly and head toward White's 29th move. If it turns out that this move is a forced recapture, or any kind of forced move where there is really no choice, we make that move and go to that player's next move (or the next one where there is a choice).

     At that point we treat the game as if it is our own. The first thing I ask the student to do with me is evaluate the position statically, which almost always starts with counting the material balance. The goal is to quickly determine "who is ahead, by how much, and why." In a real game we would already have a good feel for this, and it wouldn't necessarily be part of the thought process; but when you are thrown into a game "cold", static evaluation becomes a necessary step.

     After these preliminaries, we treat the position like any other, analyzing to find the best move we can in a reasonable period of time. Of course, a DATSCAN almost always takes longer than finding a move in a long time control game would, since we are discussing the ideas out loud, batting them back and forth. I won't get into details about how to analyze a position to find a move; I have written many articles on that. Suffice it to say that there is no single correct thought process that covers every type of position; you can reference many of my thought process articles via the links in Item #1, "Thought Process". Finally, there are entire books on the subject, like Soltis' recommended How to Choose a Chess Move or my The Improving Chess Thinker. The root book on this, more like a PhD thesis, is de Groot's Thought and Choice in Chess.

     After we are finished analyzing, my student and I each pick the move we think we would play in a long time control game. It doesn't have to be the same move - we could disagree. Then we let a strong engine pick out its top N moves to see where our choices place; I let the engine think for at least 2-3 minutes. Finally, we see what the grandmaster played and play out the remainder of the moves to see how the game finished.
     Many times my move has scored higher on the computer's list than the grandmaster's move; that may sound impressive, but you have to take into account that I am never in time trouble when doing the exercise and under no pressure. My student and I often take 20-40 minutes to discuss the analysis, while the grandmaster rarely has the luxury of taking that long. On the other hand, the grandmaster picks the higher-ranked move more often than I do, something you would absolutely expect!

     Here's a DATSCAN position from earlier today. Before reading on, I suggest you take a few minutes and figure out which move you would choose for Black if you were playing a long time control game and why, and which side you think is better.

[Diagram: the DATSCAN position, with Black to play his 18th move]
     My student and I discussed the many ways to address White's threat of 19.Rxb7. Placing a rook on b8 seemed out because of 19.Bf4, removing the guard (I trust you saw that - if you did not even attempt to refute 18...R(either)b8, that amounts to "Hope Chess"). Nor did we find any reasonable counter-attack.

     So that left defensive moves like 18...Ra7, 18...Re7, 18...b6, and 18...b5. My student felt that 18...Ra7 looked odd, but I pointed out that this gets ready to double rooks on the a-file, where White has an isolated pawn. This rook is also flexible in that it can swing over to the center, say Rae7, after a late move of the b-pawn. My student also thought that 18...b5 looked awkward due to the c-pawn, but my experience told me that grandmasters often make moves like this, making the a-pawn backward and letting the pawns guard themselves. However, some lines of analysis after 19.Nb4 had me questioning whether I wanted to put the pawn on b5 after all. My student liked the flexible 18...Re7 and, after a while, I liked that perhaps best too, although I thought that 18...Re7, 18...Ra7, and 18...b5 were all close, with 18...b6 trailing by a bit (but it had some ideas to recommend it, too). We finally both picked 18...Re7, me by just a tiny bit.

     It turned out Houdini 3 thought all four moves were reasonable - the spread was only about 0.15 pawns, which means choosing among them should not take a lot of time and is not critical. A fifth move, the strange 18...Nd7 (allowing 19.Rxb7 Ne5), was also a computer line. The computer's #1 choice at 26 ply was Anand's 18...Ra7, with 18...b6 a surprising but close second. The computer gave Anand a slight advantage at that point (so did we in our evaluation!), but Harikrishna went on to hold the draw.

    Over the years, many students have developed a preference for one exercise over the others. Interestingly, it seems spread about evenly: some prefer a de Groot, others like the games against the computer, and a third group the DATSCAN!