Chess will never be solved, here's why

@playerafar What was Black's previous move?
I get it. It couldn't have got there. So it's illegal anyway.
But what do they allow or count, and what do they not?
#2566
"how does black have a previous move - if its white to move?"
e6xd7+ Ke8-d8
None of this is relevant to solving chess.
#2536
"Imperfect player guesses that game is perfect"
++ I do not guess, I calculate:
ICCF WC 30: 136 games = 127 draws + 9 decisive games, D = 9 / 136 = 0.0662, E = 0.0659
Hence
Drawn games with 0 errors (?) = ideal games with optimal moves: 126
Drawn games with 2 errors (?): 1
Decisive games with 1 error (?): 9
Let D represent the rate of decisive games
Let E represent the error rate per game
D = E + E³ + E⁵ + E⁷ + ... = E / (1 - E²)
Hence
E² + E/D - 1 = 0
Hence
E = √(1 + 1/(2D)²) - 1/(2D)
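A quick numerical check of the figures above (pure arithmetic; it takes the model in this post as given, and the variable names are just illustrative):

```python
# Minimal check of the numbers quoted above, taking the post's model as given.
from math import sqrt

games, decisive = 136, 9
D = decisive / games                              # rate of decisive games: 0.0662
E = sqrt(1 + 1 / (2 * D) ** 2) - 1 / (2 * D)      # error rate per game: 0.0659

print(round(D, 4), round(E, 4))                   # 0.0662 0.0659
print(round(E * games), round(E**2 * games))      # ~9 games with 1 error, ~1 game with 2 errors
# which leaves 136 - 9 - 1 = 126 drawn games with 0 errors, as stated above
```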
@playerafar
In the latter case
What do they allow? That's a good question.
I think they exclude at minimum what's described in the last paragraph here, but documentation for a full list seems to be sparse. You may have to consult the source code for the generators.
E.g. the syzygy-tables.info site rejects this
https://syzygy-tables.info/?fen=8/8/2kn4/3pP3/8/8/8/4K3_w_-_d6_0_1
but allows this
https://syzygy-tables.info/?fen=8/8/2K5/3pP3/8/8/8/4k3_w_-_d6_0_1
or this
https://syzygy-tables.info/?fen=3R4/8/8/R2k3R/8/8/8/4K3_b_-_-_0_1
I've assumed illegal positions form a negligible proportion of tablebase positions in the graph.
The problem of diagnosing positions as illegal has not as yet been solved.
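As an aside, if you want to probe such positions yourself rather than via the website, the python-chess library can query local Syzygy files. A minimal sketch (it requires downloaded tablebase files; "path/to/syzygy" is a placeholder for wherever they live):

```python
# Minimal sketch: probe a local Syzygy tablebase with python-chess.
import chess
import chess.syzygy

board = chess.Board("3R4/8/8/R2k3R/8/8/8/4K3 b - - 0 1")  # the third position linked above

with chess.syzygy.open_tablebase("path/to/syzygy") as tablebase:
    wdl = tablebase.probe_wdl(board)   # 2 = win, 0 = draw, -2 = loss, for the side to move
    print(wdl)
```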
#2536
"Imperfect player guesses that game is perfect"
++ I do not guess, I calculate:
ICCF WC 30: 136 games = 127 draws + 9 decisive games, D = 9 / 136 = 0.0662, E = 0.0659
Hence
Drawn games with 0 errors (?) = ideal games with optimal moves: 126
Drawn games with 2 errors (?): 1
Decisive games with 1 error (?): 9
Let D represent the rate of decisive games
Let E represent the error rate per game
D = E + E³ + E⁵ + E⁷ + ... = E / (1 - E²)
Hence
E² + E/D - 1 = 0
Hence
E = √(1 + 1/(2D)²) - 1/(2D)
And what does your calculation give for these games?
You calculate from guesses. Wrong guesses at that.
@playerafar
Either of your positions in #2566 could have either player to move.
The tablebases do contain illegal positions and the number of these is unknown, but in most cases appears to be negligible, probably smaller than the number of omitted positions with castling rights.
Determining legality is an as yet unsolved problem.
This would obviously represent a potential source of error with the extrapolation in the graph I posted, but as @Elroch already pointed out, that would be the least of my problems in that respect.

It seems that Q-learning is not suited for NN, right?
Not sure what you mean, but deep Q-learning is a powerful general reinforcement learning paradigm which might be reasonably hypothesised to be near optimal. (The discrete version can be proven to be optimal in one sense).
Yes, but Q-learning needs a table to store the Q(state, action) values. For chess, though, we cannot provide this table beforehand and I think the lookup would be too slow, so a function-approximation system must be used. The problem is that function-approximation systems can lead to instability:
Even with a discount factor only slightly lower than 1, Q-function learning leads to propagation of errors and instabilities when the value function is approximated with an artificial neural network. [ . . . ]
This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy of the agent and the data distribution, and the correlations between Q and the target values. [ . . . ]
In 2014, Google DeepMind patented an application of Q-learning to deep learning, titled "deep reinforcement learning" or "deep Q-learning" that can play Atari 2600 games at expert human levels.¹
It seems that instability appears with less complex function-approximations too.²
¹ https://en.wikipedia.org/wiki/Q-learning (I know, I know, I deprecated Wikipedia once, but it is not always bad)
² www.leemon.com/papers/1995b.pdf
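For concreteness, the "table" being discussed is just a lookup from (state, action) pairs to values, updated roughly like this (a toy sketch, not tied to any particular library; for chess the number of (position, move) pairs makes such a table hopeless, which is why function approximation comes up):

```python
# Toy tabular Q-learning update (illustrative only).
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1    # learning rate, discount factor, exploration rate
Q = defaultdict(float)                    # Q[(state, action)] -> estimated value

def choose_action(state, legal_actions):
    """Epsilon-greedy action selection from the table."""
    if random.random() < epsilon:
        return random.choice(legal_actions)
    return max(legal_actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_legal_actions):
    """One Q-learning backup: move Q(s, a) towards reward + gamma * max_a' Q(s', a')."""
    best_next = max((Q[(next_state, a)] for a in next_legal_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```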
Wow. That syzygy needs a little more work.
Syzygy is really the best thing since sliced bread.
There's no requirement for it not to solve illegal positions as well as legal ones.

It seems that Q-learning is not suited for NN, right?
Not sure what you mean, but deep Q-learning is a powerful general reinforcement learning paradigm which might be reasonably hypothesised to be near optimal. (The discrete version can be proven to be optimal in one sense).
Yes, but Q-learning needs a table to store the Q(state, action) values.
That's the original, discrete Q-learning. Deep Q-learning has a neural network to generate the action values from the position (expected score is appropriate for general chess).
For chess, though, we cannot provide this table beforehand
Nor in any discrete Q-learning problem. It is the action values that get learnt.
and I think the lookup would be too slow, so a function-approximation system must be used. The problem is that function-approximation systems can lead to instability:
Even with a discount factor only slightly lower than 1, Q-function learning leads to propagation of errors and instabilities when the value function is approximated with an artificial neural network. [ . . . ]
Stability is always an issue. The discount factor gamma tends to damp errors and aid stability. Lowering the learning rate alpha also helps as it effectively smooths learning over larger samples.
This instability comes from the correlations present in the sequence of observations,
Always best to avoid correlations in the sequence of observations.
the fact that small updates to Q may significantly change the policy of the agent and the data distribution, and the correlations between Q and the target values. [ . . . ]
In 2014, Google DeepMind patented an application of Q-learning to deep learning, titled "deep reinforcement learning" or "deep Q-learning" that can play Atari 2600 games at expert human levels.¹
Deep Q-learning achieved stable, high performing results for almost all the games. Bitmap screen images were used for input. Here is the early work, expanded to far more games later: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
It seems that instability appears with less complex function-approximations too.²
¹ https://en.wikipedia.org/wiki/Q-learning (I know, I know, I deprecated Wikipedia once, but it is not always bad.) It is, generally speaking, very useful, given the list of references!
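To make the contrast with the tabular version concrete, here is a sketch of the same backup with the table replaced by a linear function approximator. The feature map phi is hypothetical; a deep Q-network would use a neural network (plus experience replay and a separate target network) in its place to tame the instabilities discussed above:

```python
# Sketch of semi-gradient Q-learning with a linear approximator standing in for the table.
# phi(state, action) is a hypothetical feature map returning a fixed-length vector.
import numpy as np

alpha, gamma = 0.01, 0.99        # learning rate and discount factor, as discussed above
dim = 64                         # arbitrary feature dimension for the sketch
w = np.zeros(dim)                # the learned parameters replace the Q table

def q_value(phi_sa):
    return w @ phi_sa            # Q(s, a) = w . phi(s, a)

def q_update(phi_sa, reward, next_phi_list):
    """next_phi_list holds phi(s', a') for every legal action a' in the next state."""
    global w
    best_next = max((q_value(p) for p in next_phi_list), default=0.0)
    td_error = reward + gamma * best_next - q_value(phi_sa)
    w += alpha * td_error * phi_sa
```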

Maybe technology will never solve chess, but it can help you to improve. Check out this blog site: chesstech.info. You might be surprised at all the help that is available via tech.

Interesting, @MARattigan. So the first mover is getting over 75% of the wins and the fraction is probably going to increase further (2 -> 4 is a big increase, 3 -> 5 a small one).
#2579
"there's something a bit wrong with the algebra"
++ No, the algebra is all right.
"this isn't intelligible"
++ I try to explain again
error (?) = move that turns a drawn position into a lost position, or a won position back into a drawn position
blunder (??) = double error = move that turns a won position into a lost position
Per generally accepted hypothesis the initial position is a draw.
Thus a drawn game must contain an even number of errors and a decisive game must contain an odd number of errors.
Let E represent the chance of finding a single error in a game, i.e. the number of games with a single error in them divided by the total number of games in a large tournament.
Let D represent the chance of a game being decisive, i.e. the number of decisive games divided by the total number of games in a large tournament.
Thus the chance of 2 errors in 1 game, i.e. the number of games with 2 errors in them divided by the total number of games in the tournament is E².
Thus the chance of 3 errors in 1 game, i.e. the number of games with 3 errors in them divided by the total number of games in the tournament is E³.
Thus D = E + E³ + E⁵ + E⁷ + ...
I previously applied that to the Zürich 1953 Candidates' Tournament.
Applying the same to the last complete ICCF WC tournament leads to the conclusion that 99% of the ICCF WC drawn games contain no error and are thus ideal games with optimal moves and are thus part of the weak solution of chess.
"giving the approx. result that D = E"
++ That is right: when the error rate is very small then the rate of decisive games becomes very small as well, or vice versa: a large tournament with a small number of decisive games must have a low error rate.
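For completeness, the closed form quoted in #2536 follows from summing the odd powers as a geometric series (valid for E < 1) and solving the resulting quadratic for E:

```latex
D = E + E^3 + E^5 + E^7 + \dots = \frac{E}{1 - E^2}
\;\Longrightarrow\; D\,(1 - E^2) = E
\;\Longrightarrow\; E^2 + \frac{E}{D} - 1 = 0
\;\Longrightarrow\; E = \sqrt{1 + \frac{1}{(2D)^2}} - \frac{1}{2D}.
```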
Interesting, @MARattigan. So the first mover is getting over 75% of the wins and the fraction is probably going to increase further (2 -> 4 is a big increase, 3 -> 5 a small one).
I'll repost it with three versions: up to 1 pawn difference, 5 pawns difference and 9 pawns difference. In each there looks to be a stationary point in the ratio between first- and second-mover wins between 6 and 7 men, so I think that leaves it a bit up in the air.
But there is certainly a strong first-move advantage in what's been tablebased (advantage in the sense of a win, rather than the kind that fades away, popular with chess players).
Max 1 pawn difference
USING UNCORRECTED SYZYGY POSITION COUNTS (Game BR)
Endgames with notional material difference of at most 1 pawn

Men beyond kings | Endgame classifications | Win %  | 1st-player win % | 2nd-player win % | Ratio 2nd/1st wins
0                |    1                    |  0.00  |  0.00            |  0.00            |  -
1                |    2                    | 67.17  | 37.71            | 29.46            | 0.78
2                |    7                    | 27.28  | 20.09            |  7.19            | 0.36
3                |   34                    | 40.70  | 30.77            |  9.93            | 0.32
4                |   55                    | 48.59  | 37.97            | 10.63            | 0.28
5                |  238                    | 58.00  | 44.39            | 13.62            | 0.31
Max 5 pawns difference
USING UNCORRECTED SYZYGY POSITION COUNTS (Game BR)
Endgames with notional material difference of at most 5 pawns

Men beyond kings | Endgame classifications | Win %  | 1st-player win % | 2nd-player win % | Ratio 2nd/1st wins
0                |    1                    |  0.00  |  0.00            |  0.00            |  -
1                |    8                    | 49.27  | 26.25            | 23.02            | 0.88
2                |   25                    | 56.64  | 32.58            | 24.05            | 0.74
3                |  104                    | 60.67  | 37.69            | 22.98            | 0.61
4                |  241                    | 63.92  | 42.57            | 21.35            | 0.50
5                |  710                    | 71.02  | 46.85            | 24.16            | 0.52
Max 9 pawns difference
USING UNCORRECTED SYZYGY POSITION COUNTS (Game BR)
Endgames with notional material difference of at most 9 pawns

Men beyond kings | Endgame classifications | Win %  | 1st-player win % | 2nd-player win % | Ratio 2nd/1st wins
0                |    1                    |  0.00  |  0.00            |  0.00            |  -
1                |   10                    | 54.84  | 27.87            | 26.97            | 0.97
2                |   43                    | 65.08  | 34.35            | 30.72            | 0.89
3                |  158                    | 71.61  | 39.19            | 32.42            | 0.83
4                |  411                    | 73.61  | 43.15            | 30.46            | 0.71
5                | 1122                    | 78.33  | 45.56            | 32.78            | 0.72
I think half-point or full-point zwangs like these might feature strongly in the figures, which might not be what you had in mind with your chess-playing hat on.

Yes, but Q-learning needs a table to store the Q(state, action) values. For chess, though, we cannot provide this table beforehand
Nor in any discrete Q-learning problem. It is the action values that get learnt.
Yeah, I meant that for chess we don't even know how big that (huge) table would have to be, so it would be inefficient to use in a program.
#2579
"there's something a bit wrong with the algebra"
++ No, the algebra is all right.
"this isn't intelligible"
++ I try to explain again
error (?) = move that turns a drawn position into a lost position, or a won position back into a drawn position
blunder (??) = double error = move that turns a won position into a lost position
Per generally accepted hypothesis the initial position is a draw.
Thus a drawn game must contain an even number of errors and a decisive game must contain an odd number of errors.
Let E represent the chance of finding a single error in a game, i.e. the number of games with a single error in them divided by the total number of games in a large tournament.
Let D represent the chance of a game being decisive, i.e. the number of decisive games divided by the total number of games in a large tournament.
Thus the chance of 2 errors in 1 game, i.e. the number of games with 2 errors in them divided by the total number of games in the tournament is E².
Thus the chance of 3 errors in 1 game, i.e. the number of games with 3 errors in them divided by the total number of games in the tournament is E³.
Thus D = E + E³ + E⁵ + E⁷ + ...
I previously applied that to the Zürich 1953 Candidates' Tournament.
Applying the same to the last complete ICCF WC tournament leads to the conclusion that 99% of the ICCF WC drawn games contain no error and are thus ideal games with optimal moves and are thus part of the weak solution of chess.
"giving the approx. result that D = E"
++ That is right: when the error rate is very small then the rate of decisive games becomes very small as well, or vice versa: a large tournament with a small number of decisive games must have a low error rate.
Again what does your calculation give for these games?
Why do you apply it to games where you can't possibly know where the errors are? I've listed the errors in these games by checking with the Syzygy tablebase.
The fact that you carry on to "calculate" that SF14 will find errors in its top four moves in only 1 in 100,000,000,000,000,000,000 positions and I've given you four already ought to tell you something.

Yes, but Q-learning needs a table to store the Q(state, action) values. For chess, though, we cannot provide this table beforehand
Nor in any discrete Q-learning problem. It is the action values that get learnt.
Yeah, I meant that for chess we don't even know how big that (huge) table would have to be, so it would be inefficient to use in a program.
Well, this is what some of the DeepMind work has done, and it is close to state of the art.
A way to think of it is that the network derives a way to positionally evaluate a general position with high level abstraction using millions of parameters.
@playerafar
The number of KPK positions according to Wilhelm/Nalimov is also 331352.
I don't think it's hard to calculate by hand.
Although I won't guarantee it would match that number.
It probably would!!
A sum of three terms for a king in corner/edge/interior, multiplied by 2 and then by 48?
Unfortunately it's not quite that simple, as one or both kings might occupy one or two of the pawn's 48 possible squares.
You wouldn't match it exactly if you ruled out all illegal positions. Both Nalimov and Syzygy allow positions like this https://syzygy-tables.info/?fen=8/8/8/8/8/5k2/4P3/4K3_b_-_-_0_1 (but not with White to move).
I looked at it - when I switched to white to move it indicated an illegal position.
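For what it's worth, the hand count can be turned into a brute-force enumeration. Here is a sketch under one plausible set of conventions (white pawn on ranks 2-7, both sides to move, kings never adjacent, side not to move not already in check); since the published 331352 depends on exactly which illegal positions the Wilhelm/Nalimov generator keeps, the total here may differ slightly:

```python
# Brute-force count of KPvK positions under assumed conventions (not necessarily
# the exact rules used by the Wilhelm/Nalimov generator).
def kings_adjacent(a, b):
    return max(abs(a % 8 - b % 8), abs(a // 8 - b // 8)) <= 1

def white_pawn_attacks(pawn, sq):
    # A white pawn attacks the two squares diagonally in front of it.
    return sq // 8 == pawn // 8 + 1 and abs(sq % 8 - pawn % 8) == 1

total = 0
for pawn in range(8, 56):                      # pawn somewhere on ranks 2-7
    for wk in range(64):
        for bk in range(64):
            if len({pawn, wk, bk}) != 3 or kings_adjacent(wk, bk):
                continue
            total += 1                         # Black to move: White can never be in check in KPvK
            if not white_pawn_attacks(pawn, bk):
                total += 1                     # White to move: Black must not already be in check
print(total)
```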