Help me design a tactics computer program


Hello !
Although I think this is a good idea, I must say ChessTempo already has everything you could think of as far as tactics training goes.
May I suggest another course of action? We badly need a 'positional trainer' rather than a tactical trainer.
Here is an idea to select a position eligible for positional training:
- use an engine with a "normal" evaluation function (Houdini, Rybka or Critter come to mind)
- scan games and look for positions where the move played meets the following criteria:
- 1st choice of the computer
- results in an eval between +0.3 and +1.0
- is superior to the 2nd choice by at least 0.2 and by less than 0.7 (to avoid saving tactics!)
Maybe you can tweak the numbers when the 1st choice of the computer is also the 1st choice of the human player, as there is a greater chance this is a 'natural' move that the trainee should play, but you get the idea.
This kind of trainer doesn't exist at the moment.
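
For what it's worth, here is a rough sketch of how that filter could be wired up with python-chess and any UCI engine. The engine path, search depth and exact thresholds are placeholders, not tested values:

```python
import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "stockfish"             # placeholder: any UCI engine will do
LIMIT = chess.engine.Limit(depth=18)  # placeholder search depth

def is_positional_candidate(board, played_move, engine):
    """True if the move played in the game meets the criteria listed above."""
    infos = engine.analyse(board, LIMIT, multipv=2)
    if len(infos) < 2:
        return False                  # only one legal move, nothing to compare
    pv = infos[0].get("pv")
    if not pv or pv[0] != played_move:
        return False                  # the game move is not the engine's 1st choice
    best = infos[0]["score"].pov(board.turn).score(mate_score=10000)
    second = infos[1]["score"].pov(board.turn).score(mate_score=10000)
    # eval between +0.3 and +1.0, and better than the 2nd choice by 0.2 to 0.7
    return 30 <= best <= 100 and 20 <= best - second < 70

def scan_pgn(pgn_path):
    """Print the FEN of every candidate training position found in a PGN file."""
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    try:
        with open(pgn_path) as handle:
            while (game := chess.pgn.read_game(handle)) is not None:
                board = game.board()
                for move in game.mainline_moves():
                    if is_positional_candidate(board, move, engine):
                        print(board.fen())
                    board.push(move)
    finally:
        engine.quit()

# scan_pgn("my_games.pgn")
```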

Really? You mean GTM? But it's not exactly the same idea I have in mind.
(...though GTM is top-notch, for sure!)
My idea is more along the lines of finding moves like rerouting a Knight, or penetrating on an open file, or restricting an opponent's piece - things like that...

Ah, sorry, I see: you mean specific positions where one move is vastly superior, though not primarily for tactical reasons, rather than just random positions. I get you, and that would be good if the positions are picked well.

Here is a sample of the kind of positions I have in mind -

I am open to the idea of a PT, but I don't see that your criteria (or any I can think of) would be effective. How would the user be convinced that her choice is worse than the computer's when the evaluations differ by such a small amount, especially when we cannot give the reasons for the two evaluations? My program, GTM, allows one to play out an existing game guessing the moves and seeing how the engine's choice differs from the user's (as well as the evaluation of the move actually played in the game). It thus gives the kind of comparison you are suggesting, but frankly I have found very little instructional value when the moves differ by such a small amount.
I have toyed with the idea of developing some sort of objective criteria such as control of the center, total squares controlled by the pieces, pieces guarded (not free), etc. I can think of a few more that would be actually calculable to assess two positions. The problem is: who would be convinced that my choice of criteria, scores, and the weights applied to them was actually indicative of a better "positional" move? All engines, ultimately, must make such judgments when they reach a quiescent/static position in order to pick the best move. I could make up my own evaluator (so that I could feed back the reason for the score), but I just don't see people buying into my judgement.
The principal reason that Chess Tempo does not meet my needs is that it does not drill me on the same puzzle until I get it right. I played around with mnemosyne, lashing it up (by hand) to CT-ART tactics. It was cumbersome, but I found the idea of spaced repetition a better learning experience than viewing the puzzle, getting it wrong or right, and then never seeing it again.

The more I think about hicetnuc's idea about a PT the more I like it. How about this as a criterion: if the evaluation of the position changes by +x, and the material is the same before and after the principal variation (pv), then we have a positional puzzle: the move is a very good one (+x), but there is no material gain or loss. This is easy to implement as another feature of the proposed TT. However, I am still uncertain about it because the user gets no explanation (except a raw engine score) as to why the move is a good one. Tactics are easy to judge: the move does or does not result in material or mate, so the user is easily satisfied that her guess was right or wrong.
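
A quick sketch of how the "material is the same before and after the pv" part of that test could be checked with python-chess; the 1/3/3/5/9 piece values are an assumption on my part:

```python
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_balance(board):
    """White material minus Black material, in pawns."""
    return sum(PIECE_VALUES[p.piece_type] * (1 if p.color == chess.WHITE else -1)
               for p in board.piece_map().values())

def pv_is_materially_quiet(board, pv):
    """True if playing out the engine's pv (a list of moves) leaves the material
    balance unchanged, i.e. any exchanges along it are balanced."""
    after = board.copy()
    for move in pv:
        after.push(move)
    return material_balance(after) == material_balance(board)
```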

ChessTempo now has a fully customizable SRS feature. You can select the kind of tactic you want to learn, the level of difficulty, the size of the set, the time between repetitions, etc.
Positional play is about slightly improving your position, so you should see that something has changed for the better. Usually, it's related to the scope of the pieces (you increase it). Here is an example from the positional test suite:

I think positional improvement falls into one of these categories:
- increasing the scope of your pieces (or restricting the scope of an opponent's piece) - e.g. centralizing a Knight, opening a diagonal for a bishop, penetrating on the 7th for a rook, etc.
- preventing your opponent from doing one of those (that would be prophylaxis)
- improving your pawn structure (or degrading the opponent's)
- making a favourable trade (e.g. you trade your poor bishop for his strong Knight)
- improving piece coordination (although this one is trickier...)

The more I think about hicetnuc's idea about a PT the more I like it. How about this as a criterion: if the evaluation of the position changes by +x, and the material is the same before and after the principal variation (pv), then we have a positional puzzle: the move is a very good one (+x), but there is no material gain or loss. This is easy to implement as another feature of the proposed TT. However, I am still uncertain about it because the user gets no explanation (except a raw engine score) as to why the move is a good one. Tactics are easy to judge: the move does or does not result in material or mate, so the user is easily satisfied that her guess was right or wrong.
I'm not sure how it works exactly, but doesn't this detect a positional mistake rather than a good positional move? Well, I guess I need an example, or how does it work in the above position?

It would be easy enough to program the computer to give an explanation of why the ending position is better than the starting one for black, at least in this case: count squares attacked by pieces, count isolated and backward pawns, might as well count pawn islands too, for both sides at start and end. We can think of other features to include. If the proposed program gave the new deficiencies of the loser's side and the new advantages of the winner's as a list (not just a numeric score), in addition to the computer evaluation, perhaps this would indeed be a good learning tool. I will dwell on this; thanks for the suggestion.
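
For what it's worth, most of those counts are straightforward with python-chess. A rough sketch (backward pawns are omitted here, and the particular feature set is only illustrative):

```python
import chess

def squares_attacked(board, color):
    """Number of squares attacked by at least one piece of the given side."""
    attacked = chess.SquareSet()
    for square, piece in board.piece_map().items():
        if piece.color == color:
            attacked |= board.attacks(square)
    return len(attacked)

def pawn_files(board, color):
    """Sorted list of distinct files occupied by the side's pawns."""
    return sorted({chess.square_file(sq) for sq in board.pieces(chess.PAWN, color)})

def isolated_pawns(board, color):
    """Pawns with no friendly pawn on an adjacent file."""
    files = set(pawn_files(board, color))
    return sum(1 for sq in board.pieces(chess.PAWN, color)
               if not ({chess.square_file(sq) - 1, chess.square_file(sq) + 1} & files))

def pawn_islands(board, color):
    """Number of groups of pawns on adjacent files."""
    files = pawn_files(board, color)
    return sum(1 for i, f in enumerate(files) if i == 0 or f - files[i - 1] > 1)

def feature_report(board, color):
    """Features to compare before and after the candidate move, so the trainer
    can list what changed instead of showing only an engine score."""
    return {
        "squares attacked": squares_attacked(board, color),
        "isolated pawns": isolated_pawns(board, color),
        "pawn islands": pawn_islands(board, color),
    }
```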

Our posts are crossing in the mail. For positional problems I think it might work like this (suppose white delivers the blow): at move 10 for white, the evaluation is about even. At move 11 for white, the evaluation is now +x for white. So, black made a mistake on his move 10. Furthermore, the pv for white at move 11 does not lead to a material delta (i.e. pieces may be exchanged, but the exchanges are balanced). So white's best move 11 is a telling blow, and the position ahead of white's 11th move is a candidate for a positional problem: make the best move and you get a positional but not material advantage.
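
A sketch of that scan, under the same python-chess/UCI assumptions as before; the "about even" and "+x" thresholds are placeholders, and the material test from the earlier sketch would then be applied to each position this flags:

```python
import chess
import chess.engine
import chess.pgn

EVEN = 30     # centipawns treated as "about even" (placeholder)
JUMP = 100    # centipawns for the "+x" telling blow (placeholder)
LIMIT = chess.engine.Limit(depth=18)

def candidate_positions(game, engine):
    """Yield (board, best_move) for each White-to-move position whose evaluation
    jumps from roughly equal to clearly better, i.e. Black's previous move was
    the positional mistake and White now has a telling blow to find."""
    board = game.board()
    prev_cp = None
    for move in game.mainline_moves():
        if board.turn == chess.WHITE:
            info = engine.analyse(board, LIMIT)
            cp = info["score"].white().score(mate_score=10000)
            pv = info.get("pv", [])
            if prev_cp is not None and abs(prev_cp) <= EVEN and cp >= JUMP and pv:
                yield board.copy(), pv[0]
            prev_cp = cp
        board.push(move)

# Usage:
#   engine = chess.engine.SimpleEngine.popen_uci("stockfish")
#   with open("games.pgn") as f:
#       game = chess.pgn.read_game(f)
#   for board, best in candidate_positions(game, engine):
#       print(board.fen(), best.uci())
#   engine.quit()
```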

I understand, but I'm afraid this will also bag positions where black just drifts rather than white finding a good positional move. Let me try and find an example.

BTW, GTM does set positional problems, in that the engine, Stockfish, will identify positional advantages. I played this game in GTM. At this position:
So, you can learn about positional moves via GTM. The problem is that without the comment that was in the score, you are left to figure out for yourself why Stockfish thinks the move is bad.

Do you need any programmers' help? I could help develop something if you want me to...although my actual chess abilities are more limited... :)

Thanks for the offer. I don't know how to break the problem down at this point, to support collaboration. I am fairly far along and it is probably too late to turn it into a joint project.

I understand, but I'm afraid this will also bag positions where black just drifts rather than white finding a good positional move. Let me try and find an example.
The only problem I can see is that (say) white makes good positional moves but his evaluation creeps up so slowly (because no single black move results in a big advantage for white), that the computer would never detect a puzzle (using the above algorithm). Still, there are situations (as given in my example) where such a puzzle can be detected because had my error actually been in the game score, Stockfish would have raised white's evaluation sufficiently.
Another kind of puzzle occurs to me: suppose for the traditional tactical puzzle (white to move), we pose that position as black to move. Thus the user must detect white's tactical shot and find a move to avoid it (such move would be calculated in advance by the computer in composing the puzzle). I think Heisman somewhere suggests that the main purpose for studying tactics is not so you can work them on your opponents (such opportunities are rare), but so you can recognize when they are about to be worked on you. I think this is a nice twist and might be a useful feature of the trainer.
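
One way such a puzzle could be composed mechanically (my reading of the idea): flip the side to move in the FEN and let the engine work out the defence in advance. A sketch with python-chess; positions that become illegal or are already decided after the flip are skipped, and none of this comes from GTM or Chess Tempo:

```python
import chess
import chess.engine

def flip_side_to_move(board):
    """Return the same position with the other side to move, or None if illegal."""
    parts = board.fen().split()
    parts[1] = "b" if parts[1] == "w" else "w"
    parts[3] = "-"                      # the en passant square no longer applies
    flipped = chess.Board(" ".join(parts))
    return flipped if flipped.is_valid() else None

def defensive_puzzle(board, engine, limit=chess.engine.Limit(depth=18)):
    """From a White-to-move tactical position, return (position, defence) for Black."""
    flipped = flip_side_to_move(board)
    if flipped is None or flipped.is_game_over():
        return None                     # e.g. White was already giving check
    info = engine.analyse(flipped, limit)
    pv = info.get("pv")
    return (flipped, pv[0]) if pv else None
```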

I understand, but I'm afraid this will also bag positions where black just drifts rather than white finding a good positional move. Let me try and find an example.
The only problem I can see is that (say) white makes good positional moves but his evaluation creeps up so slowly (because no single black move results in a big advantage for white), that the computer would never detect a puzzle (using the above algorithm). Still, there are situations (as given in my example) where such a puzzle can be detected because had my error actually been in the game score, Stockfish would have raised white's evaluation sufficiently.
Hence the idea to compare white's best with white's second best and make sure there is a significant difference. Actually, while 0.2 looks very small, it seems large enough to detect good positional moves nonetheless. In positions where there is nothing special to do, the eval difference between the 1st choice and, say, the 3rd or 4th choice is usually smaller than 0.2.
Another kind of puzzle occurs to me: suppose for the traditional tactical puzzle (white to move), we pose that position as black to move. Thus the user must detect white's tactical shot and find a move to avoid it (such move would be calculated in advance by the computer in composing the puzzle). I think Heisman somewhere suggests that the main purpose for studying tactics is not so you can work them on your opponents (such opportunities are rare), but so you can recognize when they are about to be worked on you. I think this is a nice twist and might be a useful feature of the trainer.
This is an excellent idea! Defensive tactics are difficult to practice too.