Chess will never be solved, here's why
@playerafar
How would that procedure arrive at this (legal) position for example?
Martin - I deleted both my posts!
I realized I had not stated it properly.
One can't generate all three-man positions from the two-man position (two Kings only) by adding a reverse-move capture.
You can generate all the precursor positions - that is what I should have posted.
Is there any good way in which positions could be categorized by computers - for particular projects?
1) Either a capture is possible or it is not. In the second category, none of the positions can be precursors to positions with fewer pieces.
Obvious - but should it be used?
2) Either a check is possible or it is not. Again obvious: in the second category, no checkmate is possible on the next move.
3) If there is no move that makes moving impossible for the other side - then again it obviously can't be a precursor position to stalemate.
Using those three factors - you can categorize positions as potential precursors or not.
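As a minimal sketch of that three-way categorization (a toy move model, not a real chess move generator - each "move" here is just a record of the three relevant flags, which are assumptions of the model):

```python
def precursor_flags(legal_moves):
    """Classify a position by which kinds of successor it can precede.

    legal_moves: toy move records (dicts of booleans) - an assumed model,
    not real move generation.
    """
    return {
        # 1) no capture available => cannot precede a position with fewer men
        "can_reach_fewer_men": any(m["capture"] for m in legal_moves),
        # 2) no check available => cannot precede a checkmate
        "can_reach_checkmate": any(m["check"] for m in legal_moves),
        # 3) no move leaves the opponent without moves => no stalemate next
        "can_reach_stalemate": any(m["stalemates"] for m in legal_moves),
    }

moves = [{"capture": True, "check": False, "stalemates": False}]
print(precursor_flags(moves))
```

A real implementation would derive the three flags from an actual move generator; the filter logic stays the same.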
All final positions and all positions with fewer than 32 pieces have to have precursor positions?
Not quite.
Final positions of checkmate or stalemate yes.
But the other? Gets messy.
That's one of the reasons I had to delete.
Here is an example of human analysis:
https://www.chessgames.com/perl/chessgame?gid=1033779
The game was adjourned after move 40 in a position with 16 men.
Shortly before the game was resumed, the second of Bronstein, B. S. Vainstein, showed the exact 5-men final position on a pocket set to the Soviet Ambassador in Hungary, saying:
"This one will be reached if Najdorf plays the best moves."
Vainstein got a special prize for the best second.
If one single human second could in one night calculate 41 moves deep from 16 to 5 men, then it is plausible that a cloud engine can in 5 years calculate from a 26-men tabiya to a 7-men table base.
That is also why the claim by Sveshnikov is credible: he too was a world top analyst.
He analysed in the pre-engine era as well as later with engines and table bases.
In the pre-engine era games were adjourned after 40 moves and 5 hours of play, and play was then resumed the next morning after the players and their seconds had analysed the position the whole night. They had neither engines nor table bases, and only about 10 hours to analyse. Nevertheless they more often than not arrived at the truth about the position.
In reality you no more know that they arrived at the truth than you could beat SF/Syzygy in the position I invited you to try in #770, and neither did they.
They arrived at conclusions that would very likely work in the context (i.e. practical play at that level). They were usually very well qualified for that.
How did they do that? First they established who had the advantage, i.e. which side had to play for a win e.g. white and which side for a draw e.g. black.
In theoretical terms establishing who has the advantage means determining if either side has a forced mate (possibly in a trillion moves). Also, if the players are to play perfectly, both sides should be playing for the same result.
The terms you are using are only relevant in practical play. You can never make a move that wins for instance if the position is a theoretical draw, but you can, in practical play, make a move that is likely to increase the chances of your opponent blundering into a loss. Which moves might do that in practical play depends on the strength of your opponent.
Practical play is irrelevant to the problem unless you believe (as turned out to be the case in checkers) that practical players are close to perfection. But chess is not checkers.
Then they looked at the position and its traits. Then they started analysis i.e. they played a game against themselves from the position. When the outcome was as desired for one side, then they started with takebacks for the other side, starting at the end.
Same way anybody analyses their games.
The procedure wouldn't have worked for the toddlers in the example I gave in #770.
It is this procedure that I propose to emulate with cloud engines and table bases to likewise find the truth about a 26-men tabiya.
You intend to produce a weak solution of competition rules chess.
For this the Wikipedia definition of "weak solution" is adequate, viz:
"Provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game. That is, produce at least one complete ideal game (all moves start to end) with proof that each move is optimal for the player making it." (My highlighting.)
As @btickler pointed out, asking two toddlers doesn't count as a proof.
The tablebases I have no problem with (assuming they were complete; not quite the case yet) - it's the bit before.
If you call a cloud engine with 10^9 nodes / second running for 60 hours / move a toddler, then your desktop at 10^6 nodes / second running for a few seconds is not even an embryo.
Not at all.
I actually gave them 2 hours for the first 40 moves and two hours for the rest (which of course they didn't need). I would expect (though I'll leave it to you to try) that allowing them 60 hours a move would produce an almost identical game.
It's not a shortage of time that you see in the example, it's a problem with Stockfish 12's algorithm. If it's asked to mate with the knights in any mate of depth greater than about 35 it can be provoked into taking your pawn in short order. I think the algorithm prunes all the winning lines quite quickly.
That's why I chose SF12 in preference to SF8, SF11 or SF14, which I also have. The others can't mate either, but they'll all draw under the 50-move rule. With the time settings I used, I didn't want to wait that long.
"If one single human second could in one night calculate 41 moves deep from 16 to 5 men, then it is plausible that a cloud engine can in 5 years calculate from a 26-men tabiya to a 7-men table base."
///////////////
Depends on the position.
Also - grandmasters can skip steps that computers couldn't.
Also - humans could make mistakes on occasions that computers wouldn't - see the second point.
Apparently - you want a project whereby computers would do what strong players do.
But that isn't really 'solved'.
It's pushing a kind of procedure for computers.
I like your remark about 'takebacks' though.
So I thought of a qualification on it.
Investigating positions with one more piece - by generating them with reverse captures. We then get closely related positions.
Closely. Not air-tight related. But closely.
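At the level of material classes (the way tablebase generation is usually organized), the "one more piece via a reverse capture" idea can be sketched like this - a toy: sides are strings of piece letters, and all questions of legality are ignored:

```python
def parent_material_classes(white, black, pieces=("Q", "R", "B", "N", "P")):
    """Material classes with one extra man that could reduce to
    (white, black) by a single capture.

    Toy sketch: 'white' and 'black' are strings of piece letters including
    the king; which side captured and all legality details are ignored.
    """
    parents = set()
    for p in pieces:
        parents.add((white + p, black))  # white had one more man
        parents.add((white, black + p))  # black had one more man
    return parents

# From the bare-kings class there are 10 one-more-man material classes:
print(len(parent_material_classes("K", "K")))
```

This shows the "closely but not air-tight related" point: the classes are easy to enumerate, but not every position in a parent class actually has a legal capture into the child class.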
#784
I am only concerned about a weak solution of chess and without the 50-moves rule.
I agree:
"Produce at least one complete ideal game (all moves start to end) with proof that each move is optimal for the player making it."
To speed up I start from a 26-men tabiya and I stop when the 7-men table base is hit.
The tentative ideal game is generated by letting the cloud engine of 10^9 nodes / second play against itself for 60 hours / move until it reaches the table base.
That is 1.2 million times more than your 10^6 nodes / second desktop at 3 minutes / move.
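The stated ratio checks out, taking the post's own figures as assumptions (10^9 nodes/s for 60 hours per move, versus 10^6 nodes/s for 3 minutes per move):

```python
# Nodes searched per move by the hypothetical cloud engine vs the desktop.
cloud_nodes_per_move = 10**9 * 60 * 3600  # 10^9 nodes/s for 60 hours
desktop_nodes_per_move = 10**6 * 3 * 60   # 10^6 nodes/s for 3 minutes
ratio = cloud_nodes_per_move // desktop_nodes_per_move
print(ratio)  # 1200000, i.e. 1.2 million
```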
Proof for the black moves resulting in a draw is unnecessary: if the black moves do lead to a table base draw, then they are good enough and the table base retroactively validates them all.
Proof for the white moves comes by granting takebacks to white for his last move, his 2nd last move and so on to ascertain that the alternatives produce nothing better than a draw either.
"In theoretical terms establishing who has the advantage means determining if either side has a forced mate"
++ No, not at all. Determining who has the advantage is finding out who seeks a win and who seeks a draw. In the initial position white is a tempo up, so white has an advantage. White tries to win and black tries to draw. In the adjourned position Najdorf - Bronstein black had a positional advantage and more active pieces, hence black tried to win and white tried to draw. Whether an advantage is enough to win is the outcome of the analysis. Who has the advantage is established at the start of the analysis.
I"ve noticed that analysis boards often show 0.00 in the evaluation.
Usually that means one or both sides can force a draw or its already a completely dead draw with no win possibe even by deliberate blundering.
As opposed to 'dead even'. But in theory that's possible too.
I think the engines then maybe always try to assign a slight edge to one side, though.
The only time the engines seem to assign a 100% starkly unambiguous result in advance is when it's 'mate in' whatever number of moves.
As opposed to an 'edge'. Which is what they usually do.
Sometimes they assign a big edge. Like +50. Or -50.
I've never seen +100 or higher. Can't recall +80 or higher.
But: I've seen Stockfish change its mind about positions if its let run for some minutes.
Even going from plus to minus - or vice versa. Sometimes substantially.
#784
I am only concerned about a weak solution of chess and without the 50-moves rule.
That changes the basis substantially. You said earlier you wanted to include it.
main changes are
(i) The number of legal positions is back to Tromp's (2.6–2.9)x10^44 - no need to multiply by 100. The number that need to be taken into account can be approximately halved owing to left-right symmetry.
(ii) Mates under basic rules which could not be won under the 50-move rule become wins instead of draws. Forced mates in more than 5898.5+50 moves are no longer automatically excluded.
I agree:
"Produce at least one complete ideal game (all moves start to end) with proof that each move is optimal for the player making it."
To speed up I start from a 26-men tabiya and I stop when the 7-men table base is hit.
The tentative ideal game is generated by letting the cloud engine of 10^9 nodes / second play against itself for 60 hours / move until it reaches the table base.
That is 1.2 million times more than your 10^6 nodes / second desktop at 3 minutes / move.
Yes, but that's a drop in the ocean when you consider a mate of depth 549 or say Haworth's predicted 1200+ moves for 8 men or, if you take the liberty of extending the prediction to 26 men, 2,601,500,000+ moves.
I wouldn't even expect it to sort SF12 out on the toddler's problem, because I think that might only be sorted if it searches to a depth where it actually hits mate - 51.5 moves.
The problem is the exponential growth of positions to be evaluated with the number of moves. A 1000-fold increase in speed wouldn't produce a very large increase in the search depth.
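As a rough illustration of that point (assuming a constant branching factor, here taken as 35 legal moves per position, a common rule-of-thumb figure), a 1000-fold speedup buys fewer than two extra plies of full-width search:

```python
import math

branching = 35  # assumed average number of legal moves per position
speedup = 1000  # hypothetical hardware speedup

# With cost ~ branching**depth, extra depth from a speedup s is log_b(s).
extra_plies = math.log(speedup) / math.log(branching)
print(round(extra_plies, 2))  # 1.94
```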
Proof for the black moves resulting in a draw is unnecessary: if the black moves do lead to a table base draw, then they are good enough and the table base retroactively validates them all.
How is the distinction between colours arrived at?
Proof for the white moves comes by granting takebacks to white for his last move, his 2nd last move and so on to ascertain that the alternatives produce nothing better than a draw either.
As in the example I gave in #770?
"In theoretical terms establishing who has the advantage means determining if either side has a forced mate"
++ No, not at all. Determining who has the advantage is finding out who seeks a win and who seeks a draw.
In theoretical terms both players must seek the same result if they are both to play perfectly. For a weak solution, one player's moves are arbitrary.
In the initial position white is a tempo up, so white has an advantage.
You can't mix up practical play with a solution of chess. If White has a theoretical advantage that means he has a forced mate. You have no grounds for assuming that White has a forced mate from the starting position.
White tries to win and black tries to draw.
If it were true that White has a forced mate in the starting position, then in any weak solution White tries to win (but not necessarily in a sensible way - any old way will do). Black plays in a totally arbitrary fashion.
In the adjourned position Najdorf - Bronstein black had a positional advantage and more active pieces, hence black tried to win and white tried to draw. Whether an advantage is enough to win is the outcome of the analysis. Who has the advantage is established at the start of the analysis.
Who, if anyone, had a forced mate from the position would be determined in absolute terms at the outset. Any analysis that occurred would attempt to assess the probable outcome of practical play.
Someone capable of reliably making an evaluation in absolute terms might well need an ELO rating millions of times that of anyone involved. You don't know. There's no way to correlate when the number of men on the board gets beyond 7.
I have come round to the viewpoint that, because of the far less unidirectional nature of chess compared to checkers (where most moves early on are irreversible, so there is an order on classes of positions which greatly constrains the paths through the positions), there is not necessarily anything like a square-root saving in chess in the size of the position space. I.e. it may be that the ratio of the logs of the size of the set of legal positions (Tromp's number) to that of the set of all positions dealt with by a strategy may be much less than 2 and maybe even nearer to 1.
The only thing (as far as I can see) that might recover some reduction is if bad moves by the defender allow a lot of reduction by the other player of the positions that can be reached. As a simple example, capturing a piece helps to reduce the possible futures. The problem with this hope is that a class of positions needs to be avoided not in one branch but in the whole tree. Worse still, if the result is a draw, this means we would have to have a vain hope that although white is merely an equal competitor in guiding the game to a result, it would need to be able to win a parallel game with black to only allow a tiny fraction of the positions to be reached ("square root reduction" is a REALLY tiny fraction, 1e-23 or so).
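For scale, under the (over-optimistic) assumption that a strategy only needs about the square root of Tromp's ~2.6x10^44 legal positions, the surviving fraction comes out in the quoted 1e-23 range:

```python
import math

legal_positions = 2.6e44  # Tromp's estimate (lower end), an assumed input
strategy_positions = math.sqrt(legal_positions)  # ~1.6e22
fraction = strategy_positions / legal_positions  # ~6.2e-23
print(f"{fraction:.1e}")
```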
Bottom line: I am in no way convinced that it would be practical to solve chess in a thousand years with current cloud computing, never mind five years.
Interesting indeed.
That consideration is what I have been referring to as "yet to be discussed" in my posts. (The nitty gritty.)
Is it possible to post details?
Well it starts by creating the partial order that P1 >= P2 if it is possible to reach P2 from P1. This sorts all positions into equivalence classes which are ordered by the induced order. Note that these classes include a mix of positions with each player to play in almost all cases.
The parallel game that white needs to play is that for every class of positions it wants to ensure the subset that is actually reached in the strategy is a tiny fraction (overoptimistically 1e-23 on average) of the total. This needs to be achieved by some combination of (1) avoiding reaching a lot of the classes entirely and (2) not reaching all the positions in the classes that are reached.
I don't claim the reasoning that needs to follow this is more than a confident guess.
[Note that the partial order (and hence the decomposition into classes) is a refinement of simpler ones:
- total number of pieces and pawns (this is non-increasing for both colours independently - a direct product of orders)
- whether the number of pawns allows similar material by promotion (again for both sides)
- pawns moving up the board (quantified perhaps by the maximum total number of future pawn moves for each side - another two independent orders).
- pawns needing to change file - each such requires a capture which is quite a strong constraint
- minor irreversibility like castling rights and e.p.
- messy special cases where everything above is consistent but there is no way to get from a position to another]
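The first and third of those simpler orders can be computed directly from the piece-placement field of a FEN string; a minimal stdlib-only sketch (the function name and return format are my own, for illustration):

```python
def monotone_signature(board_fen):
    """(white men, black men, max future white pawn moves, same for black).

    None of these four numbers can ever increase as a game proceeds, so
    together they induce a (coarse) partial order on positions.
    board_fen: the piece-placement field of a FEN, ranks 8 down to 1.
    """
    white = black = w_pawn_moves = b_pawn_moves = 0
    for i, row in enumerate(board_fen.split("/")):
        rank = 8 - i
        for c in row:
            if c.isalpha():
                if c.isupper():
                    white += 1
                    if c == "P":
                        w_pawn_moves += 8 - rank  # pushes left to promote
                else:
                    black += 1
                    if c == "p":
                        b_pawn_moves += rank - 1
    return (white, black, w_pawn_moves, b_pawn_moves)

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
after_e4 = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR"
print(monotone_signature(start))     # (16, 16, 48, 48)
print(monotone_signature(after_e4))  # (16, 16, 46, 48)
```

After 1.e4 the white pawn-advancement budget drops from 48 to 46 (the e-pawn skipped a rank), illustrating how these invariants strictly order many position classes.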
Bottom line: I am in no way convinced that it would be practical to solve chess in a thousand years with current cloud computing, never mind five years.
Me neither, although I was guessing regarding the number of positions; but so is everyone. My figure was very far in excess of 1000 years and it seemed quite reasonable. I think one person has made a very faulty judgement re 5 years and perhaps tygxc has taken that person at his word.