
Help me design a tactics computer program

fredm73

 
I am working on an app called "TacticsTrainer" (TT) and am soliciting your requests for features.  The software is
about 50% complete.  Here are the features I intend to include:
1. It is free and runs under Windows.  I cannot develop for other platforms.
2. It is based on "spaced learning", as in http://mnemosyne-proj.org/ (see 3 and 4).
3. It presents tactics you have seen before as well as new ones, based on a percentage you give (oldPuz).  During a session you will see that percentage of old puzzles, and (1.0 - oldPuz) new ones.  As the session wears on, TT keeps the balance, striving to reach the target percentage.
4. The old puzzles you see are scheduled on two factors (each given a weight you can choose): 1) how well you have done on the puzzle (goodAnswers/allTries), and 2) how recently you have seen it (less recently seen puzzles are scheduled before recent ones).  In this way you get a constant drill on the ones you did poorly on and, less frequently, the ones you did well on, but you will eventually see all the puzzles.  New puzzles (never seen before) will be presented easiest first (see 8).
5. You can use a chess engine to play out the puzzle (you against the computer, until you are convinced of the solution).
6. You can add puzzles by submitting their FEN to TT.  I wrote a program that extracts the FEN from a chessboard image on your computer, which can be used for this: http://www.chess.com/download/view/convert-chessboard-image-to-fen-and-pgn
 
7. You can extract puzzles from a game file (PGN format).  This is harder than you may think and might not be feasible, although I have some ideas.  If the "game" is clearly a puzzle (because it has a "FEN" tag pair indicating a non-standard start position), then TT's extraction is easy and fast.  There are many such "puzzle" PGN files on the web you can download.
8. TT will attempt to grade new puzzles (easy, medium, difficult) and present new puzzles easiest first.  Thus you should be able to build up your solving skill gradually.  Automatic grading by computer is also difficult, and some puzzles will probably be misgraded.
9. You can delete puzzles from the tactics file, and you can specify different tactics files as input to TT.  The files will be plain text (txt) and fairly easy to read, but you are not expected to update them by hand (see 6, 7).
10. When TT presents a puzzle, you will see the first and last dates it was shown to you, along with the attempts and successes you have achieved on that puzzle.  You will see a chess board and can click to make the first move.  TT then tells you whether you passed or failed the puzzle, after which you can use the engine to play the position out against the computer.  The interface is like that of my other program, GTM.
11. Some sort of long-term history will be kept or calculated.
12. TT will come with a few thousand puzzles, along with references to websites that have more for your own download.
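To make the scheduling in features 3 and 4 concrete, here is a minimal sketch of how the two weighted factors could combine into a single review priority.  The function name, the weight parameters, and the 30-day recency horizon are my own illustrative assumptions, not TT's actual implementation:

```python
import datetime

def review_priority(successes, attempts, last_seen, today,
                    w_score=0.6, w_recency=0.4, horizon_days=30):
    """Rank old puzzles for review: weak results and long gaps since
    the last viewing both raise the priority (higher = sooner)."""
    score = successes / attempts if attempts else 0.0     # goodAnswers/allTries
    days_ago = (today - last_seen).days
    recency = min(days_ago / horizon_days, 1.0)           # 1.0 = unseen for a month+
    # High priority = poorly answered and/or long unseen.
    return w_score * (1.0 - score) + w_recency * recency
```

A puzzle failed three times and unseen for a month would then outrank one solved three times and seen yesterday, while the weights let the user shift the balance between the two factors.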
 
If you have ideas about features for TT let me know and I will carefully consider them.  I am writing this app for my own use
and so have absolute veto power without recourse :-).  However, I will make it free for download.
 
If you have ideas on how to implement 7 or 8, I would be interested.  I am restricting the logic to use only the data found in a chess engine's principal variation (pv).  The pv is the sequence of moves the engine thinks is best (to a limited depth, say 15 ply).  All I have are these moves.  The pv may extend beyond the actual combination, i.e., there will be a bunch of more or less superfluous moves at the end, making it difficult to know where the combination stops.  To implement 7 I will have to get the pv for all the moves (to make it easy, for just the side that won the game).  Determining what is a tactic from that data is a bit problematic, although I have some ideas.  Processing a PGN file of many games could take a few hours.
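One way to attack the "where does the combination stop" problem using only the pv: treat a run of quiet moves (no capture, check, or mate marker in the SAN) as the start of the superfluous tail.  A rough sketch; the three-quiet-move threshold is an assumption, and it will misjudge combinations containing genuinely quiet key moves:

```python
def trim_pv_tail(pv_san, quiet_run=3):
    """Cut the superfluous tail off an engine pv given as SAN strings.
    Heuristic: once `quiet_run` consecutive quiet moves appear
    (no 'x', '+' or '#' in the SAN), the forcing sequence is over."""
    quiet = 0
    for i, move in enumerate(pv_san):
        if any(c in move for c in "x+#"):
            quiet = 0                            # forcing move resets the count
        else:
            quiet += 1
            if quiet == quiet_run:
                return pv_san[:i - quiet_run + 1]  # drop the quiet tail
    return pv_san
```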
 
Similarly, grading a puzzle (for feature 8) is not obvious.  E.g., I cannot tell from the length of the pv whether the puzzle is easy or hard, since the pv may continue well beyond the combination.  My thoughts are to ask: Is the first move a capture or a check (makes it easier)?  Is the percentage of checks/captures in the pv high (makes it easier)?  Does the pv end in checkmate (makes it easier)?  Etc.  Some sort of neural net or other machine-learning technique might be appropriate, but gathering graded training data is not quickly done.
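The three questions above can be folded into a simple score.  A sketch, assuming the pv is available as SAN strings; the 0.5-point bonuses and the grade cut-offs are illustrative assumptions, not calibrated values:

```python
def grade_puzzle(pv_san):
    """Rough difficulty grade from the pv alone: a forcing first move,
    a high share of checks/captures, and a pv ending in mate all
    push the puzzle toward 'easy'."""
    forcing = [m for m in pv_san if any(c in m for c in "x+#")]
    ease = 0.0
    if pv_san and any(c in pv_san[0] for c in "x+#"):
        ease += 0.5                              # forcing first move
    ease += len(forcing) / len(pv_san)           # density of checks/captures
    if pv_san and pv_san[-1].endswith("#"):
        ease += 0.5                              # pv ends in checkmate
    if ease >= 1.2:
        return "easy"
    if ease >= 0.6:
        return "medium"
    return "difficult"
```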
 
I am writing this program because, to the best of my knowledge, there is no other single app with features 3, 4, 6, 7, 8. My goal is to have a beta test version ready in a couple of months (but no promises :-)).

VLaurenT

Hello !

Although I think this is a good idea, I must say I think ChessTempo has already everything you could think of as far as tactics training goes.

May I suggest another course of action ? We badly need a 'positional trainer' rather than a tactical trainer.

Here is an idea to select a position eligible for positional training :

- use an engine with a "normal" evaluation function (Houdini, Rybka or Critter come to mind)

- scan games and look for positions where the move played meets the following criteria :

  • 1st choice of computer
  • results in eval between 0.3 and 1
  • is superior to 2nd choice by at least 0.2 and less than 0.7 (to avoid saving tactics !)

Maybe you can tweak the numbers when the 1st choice of the computer is also the 1st choice of the human player, as there is a greater chance this is a 'natural' move that the trainee should play, but you get the idea.

This kind of trainer doesn't exist at the moment :)
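For what it's worth, the filtering criteria above fit in a few lines.  A sketch, assuming evals are in pawns from the side to move; the function and parameter names are illustrative:

```python
def is_positional_candidate(best_eval, second_eval,
                            lo=0.3, hi=1.0, min_gap=0.2, max_gap=0.7):
    """True when a position qualifies for positional training:
    the engine's 1st choice lands between 0.3 and 1.0 pawns, and
    beats the 2nd choice by at least 0.2 but less than 0.7
    (a larger gap usually signals a tactic, not a positional move)."""
    gap = best_eval - second_eval
    return lo <= best_eval <= hi and min_gap <= gap < max_gap
```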

Scottrf

hicetnunc, he has already made a positional trainer ;)

VLaurenT

Really ? You mean GTM ? But it's not exactly the same idea I have in mind.

(...though GTM is top-notch, for sure !)

My idea is more along the lines of finding moves like rerouting a Knight, or penetrating on an open file, or restricting an opponent's piece ;) - things like that...

Scottrf

Ah, sorry, I see: you mean specific positions where one move is vastly superior, but not so much for tactical reasons, rather than just random positions. I get you, and that would be good if the positions are picked well.

VLaurenT

Here is a sample of the kind of positions I have in mind -

VLaurenT

Mixing positional problems with tactics problems would also be a good idea, but here you have GTM :)

fredm73
hicetnunc wrote:

Hello !

Although I think this is a good idea, I must say I think ChessTempo has already everything you could think of as far as tactics training goes.

May I suggest another course of action ? We badly need a 'positional trainer' rather than a tactical trainer.

Here is an idea to select a position eligible for positional training :

- use an engine with a "normal" evaluation function (Houdini, Rybka or Critter come to mind)

- scan games and look for positions where the move played meets the following criteria :

  • 1st choice of computer
  • results in eval between 0.3 and 1
  • is superior to 2nd choice by at least 0.2 and less than 0.7 (to avoid saving tactics !)

Maybe you can tweak the # when the 1st choice of the computer is also the 1st choice of the human player, as there is a greater chance this is a 'natural' move that the trainee should play, but you get the idea.

This kind of trainer doesn't exist at the moment

I am open to the idea of a PT, but I don't see that your criteria (or any I can think of) would be effective.  How would the user be convinced that her choice is worse than the computer's when the evaluations differ by such a small amount, especially when we cannot give the reasons for the two evaluations?  My program, GTM, allows one to play out an existing game guessing the moves and seeing how the engine's choice differs from the user's (as well as the evaluation of the move actually played in the game).  It thus gives the kind of comparison you are suggesting, but frankly I have found very little instructional value when the moves differ by such a small amount.

I have toyed with the idea of developing some sort of objective criteria such as control of the center, total squares controlled by the pieces, pieces guarded (not free), etc.  I can think of a few more that would be actually calculable to assess two positions.  The problem is: who would be convinced that my choice of criteria, scores, and weights applied to such were actually indicative of a better "positional" move?  All engines, ultimately, must make such judgments when they reach a quiescent/static position in order to pick the best move.  I could make up my own evaluator (so that I could feed back the reason for the score), but I just don't see people buying into my judgement.

The principal reason that Chess Tempo does not meet my needs is that it does not drill me on the same puzzle until I get it right.  I played around with mnemosyne, lashing it up (by hand) to CT-ART tactics.  It was cumbersome, but I found the idea of spaced repetition a better learning experience than viewing the puzzle, getting it wrong or right, and then never seeing it again.

fredm73

The more I think about hicetnunc's idea about a PT the more I like it.  How about this as a criterion: If the evaluation of the position changes by +x, and if the material is the same before and after the principal variation (pv), then we have a positional puzzle: the move is a very good one (+x), but there is no material gain or loss.  This is easy to implement as another feature of the proposed TT.  However, I am still uncertain about it because the user gets no explanation (except a raw engine score) as to why the move is a good one.  Tactics are easy to judge: the move does or does not result in material or mate, so the user is easily satisfied that her guess was right or wrong.

VLaurenT

ChessTempo now has a fully customizable SRS feature. You can select the kind of tactic you want to learn, the level of difficulty, the size of the set, the time between repetitions, etc.

Positional play is about slightly improving your position: so you should see that something has changed for the better :)  Usually, it's related to the scope of the pieces (you increase it). Here is an example from the positional test suite :

VLaurenT

I think positional improvement falls into one of these categories :

  • increasing the scope of your pieces (or restricting the scope of an opponent's piece) - e.g. centralizing a Knight, opening a diagonal for a bishop, penetrating on the 7th for a rook, etc.
  • preventing your opponent from doing one of those (that would be prophylaxis)
  • improving your pawn structure (or degrading the opponent's)
  • making a favourable trade (e.g. you trade a poor bishop for his strong Knight)
  • improving piece coordination (although this one is trickier...)
VLaurenT
fredm73 wrote:

The more I think about hicetnunc's idea about a PT the more I like it.  How about this as a criterion: If the evaluation of the position changes by +x, and if the material is the same before and after the principal variation (pv), then we have a positional puzzle: the move is a very good one (+x), but there is no material gain or loss.  This is easy to implement as another feature of the proposed TT.  However, I am still uncertain about it because the user gets no explanation (except a raw engine score) as to why the move is a good one.  Tactics are easy to judge: the move does or does not result in material or mate, so the user is easily satisfied that her guess was right or wrong.

I'm not sure how it works exactly, but doesn't this detect a positional mistake rather than a good positional move ? Well, I guess I need an example, or how does it work in the above position ? :)

fredm73

It would be easy enough to program the computer to give an explanation of why the ending position is better than the starting one for black, at least in this case: count squares attacked by pieces, count isolated and backward pawns, and might as well count pawn islands too, for both sides at start and end.  We can think of other features to include.  If the proposed program gave the new deficiencies of the loser's side and the new advantages of the winner's as a list (not just a numeric score), in addition to the computer evaluation, perhaps this would indeed be a good learning tool.  I will dwell on this; thanks for the suggestion.
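As an example of one such countable feature, pawn islands can be computed from just the set of files that still hold a pawn of one colour.  A sketch; representing files as numbers 0-7 is my own assumption about the board representation:

```python
def pawn_islands(pawn_files):
    """Count pawn islands for one side, given the files (0-7) that
    contain at least one pawn of that colour.  Pawns on adjacent
    files belong to the same island; a gap starts a new one."""
    files = sorted(set(pawn_files))
    islands = 0
    prev = None
    for f in files:
        if prev is None or f - prev > 1:   # gap of at least one empty file
            islands += 1
        prev = f
    return islands
```

Comparing this count (and similar features) before and after the pv would give the "list of new deficiencies" rather than just a number.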

fredm73

Our posts are crossing in the mail.  For positional problems I think it might work like this (suppose white delivers the blow): at move 10 for white, the evaluation is about even.  On move 11 for white, the evaluation is now +x for white.  So black made a mistake on his move 10.  Furthermore, the pv for white at move 11 does not lead to a material delta (i.e., pieces may be exchanged, but the exchanges are balanced).  So white's best move 11 is a telling blow, and the position ahead of white's 11th move is a candidate for a positional problem: make the best move and you get a positional but not a material advantage.
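The scan just described could look like the sketch below, assuming the engine evals (in pawns, from white's view) before each white move and the net material swing along each pv have already been computed separately.  The names and the +0.5-pawn jump threshold are illustrative assumptions:

```python
def positional_puzzle_indices(evals, pv_material, jump=0.5):
    """Return indices of white moves that qualify as positional puzzles.

    evals[i]       -- engine eval (pawns, white's view) before white's move i
    pv_material[i] -- net material white gains along the pv at move i
                      (captures by white minus captures by black, in pawns)

    A jump of at least +`jump` versus the previous move, with zero net
    material in the pv, marks a positional (non-tactical) blow."""
    hits = []
    for i in range(1, len(evals)):
        if evals[i] - evals[i - 1] >= jump and pv_material[i] == 0:
            hits.append(i)
    return hits
```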

VLaurenT

I understand, but I'm afraid this will also bag positions where black just drifts rather than white finding a good positional move. Let me try and find an example.

fredm73

BTW, GTM does pose positional problems, in that the engine (Stockfish) will identify positional advantages.  I played this game in GTM.  At this position:

So, you can learn about positional moves via GTM.  The problem is that, without the comment that was in the game score, you are left to yourself to figure out why Stockfish thinks the move is bad.

StormieEDC

Do you need any programmers' help? I could help develop something if you want me to...although my actual chess abilities are more limited... :)

fredm73
TootsieRoller wrote:

Do you need any programmers' help? I could help develop something if you want me to...although my actual chess abilities are more limited... :)

Thanks for the offer.  I don't know how to break the problem down at this point to support collaboration.  I am fairly far along, and it is probably too late to turn it into a joint project.

fredm73
hicetnunc wrote:

I understand, but I'm afraid this will also bag positions where black just drifts rather than white finding a good positional move. Let me try and find an example.

The only problem I can see is that (say) white makes good positional moves but his evaluation creeps up so slowly (because no single black move results in a big advantage for white) that the computer would never detect a puzzle (using the above algorithm).  Still, there are situations (as in my example) where such a puzzle can be detected, because had my error actually been in the game score, Stockfish would have raised white's evaluation sufficiently.

Another kind of puzzle occurs to me: suppose that for the traditional tactical puzzle (white to move), we pose the position as black to move.  Thus the user must detect white's tactical shot and find a move to avoid it (such a move would be calculated in advance by the computer in composing the puzzle).  I think Heisman somewhere suggests that the main purpose of studying tactics is not so you can work them on your opponents (such opportunities are rare), but so you can recognize when they are about to be worked on you.  I think this is a nice twist and might be a useful feature of the trainer.
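A sketch of how such an "avoid the shot" puzzle could be composed, assuming the engine has already evaluated the position after each legal black reply.  The function name, input shape, and the one-pawn margin are all illustrative assumptions:

```python
def defensive_solutions(move_evals, threat_eval, margin=1.0):
    """Pick the black moves that defuse white's tactical shot.

    move_evals  -- {black move in SAN: engine eval (pawns, white's view)
                    after black plays it}
    threat_eval -- eval of white's shot if black does nothing

    Any move that cuts white's advantage by at least `margin` pawns
    counts as a solution to the defensive puzzle."""
    return sorted(m for m, e in move_evals.items()
                  if threat_eval - e >= margin)
```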

VLaurenT
fredm73 wrote:
hicetnunc wrote:

I understand, but I'm afraid this will also bag positions where black just drifts rather than white finding a good positional move. Let me try and find an example.

The only problem I can see is that (say) white makes good positional moves but his evaluation creeps up so slowly (because no single black move results in a big advantage for white) that the computer would never detect a puzzle (using the above algorithm).  Still, there are situations (as in my example) where such a puzzle can be detected, because had my error actually been in the game score, Stockfish would have raised white's evaluation sufficiently.

Hence the idea to compare white's best with white's second best and make sure there is a significant difference. Actually, while 0.2 looks very small, it seems to be large enough to detect good positional moves nonetheless. In positions where there is nothing special to do, the eval difference between the 1st choice and, say, the 3rd or 4th choice is usually below 0.2.

Another kind of puzzle occurs to me: suppose that for the traditional tactical puzzle (white to move), we pose the position as black to move.  Thus the user must detect white's tactical shot and find a move to avoid it (such a move would be calculated in advance by the computer in composing the puzzle).  I think Heisman somewhere suggests that the main purpose of studying tactics is not so you can work them on your opponents (such opportunities are rare), but so you can recognize when they are about to be worked on you.  I think this is a nice twist and might be a useful feature of the trainer.

This is an excellent idea ! Defensive tactics are difficult to practice too :)