#2100
I tried to reproduce your perceived error on Stockfish 14 NNUE, but could not replicate your findings: the engine goes straight for the tablebase-correct win 1 Rh8+ Ke7 2 Rh7+ Kf6 3 Rf7+ Kg6 4 Rd7 Rb1+ 5 Ka5 Kf6 etc. I guess there is something wrong with your version of Stockfish 14.
Chess will never be solved, here's why
"At infinite time the error rate is 0. At zero time the error rate is infinite."
A lot of mathematical functions have 'infinities' at either end.
Doesn't prove anything in between.
Doesn't validate 'extrapolations' from the infinities.
#2129
"Doesn't prove anything in between."
at time 0 / move: infinite error / game
at time 1 s / move: 11.8% error / game
at time 1 min / move: 2.1% error / game
at time infinite / move: 0 error / game
That is enough to interpolate everything in between.
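For concreteness, here is a minimal sketch of the fit being claimed, assuming the error = a / time^b form used later in the thread and time measured in seconds per move; the two finite points fix the two parameters, nothing more:

```python
import math

# The two finite data points quoted above (time per move in seconds,
# decisive-game fraction read as errors per game):
t1, e1 = 1.0, 0.118     # 1 s/move   -> 11.8% decisive
t2, e2 = 60.0, 0.021    # 1 min/move ->  2.1% decisive

# Solve error = a / time**b through both points (2 equations, 2 unknowns).
b = math.log(e1 / e2) / math.log(t2 / t1)   # ~0.42
a = e1 * t1 ** b                            # ~0.118 with t in seconds

# Extrapolation to 60 h/move, the time control under discussion:
t3 = 60 * 3600.0
print(f"a = {a:.3f}, b = {b:.3f}")
print(f"predicted errors/game at 60 h/move: {a / t3 ** b:.1e}")  # ~7e-4
```

Of course, two points determine any two-parameter curve: the fit fixes a and b but cannot test the functional form, which is exactly the skeptics' point below.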
Anything is 'enough' if you want it to be.
The earth can even be flat if somebody wants it to be.
Many people still believe that.
Even on chess.com there's at least one believer of that.
And another who insists that 'viruses can't spread diseases'.
#2133
"Many people still believe that."
On this thread there are even people who believe that:
chess is a forced win for white in 3 trillion moves starting with 1 e4 e5 2 Ba6
1 a4 is better than 1 e4 or 1 d4
positions with 7 white rooks, 3 black rooks, 3 black bishops, and 5 black knights are common
solving chess requires floating point operations
engines play weaker when they use more time
strongly solving is the same as weakly solving
Prof. van den Herik cannot even define the subject he wrote a paper about
GM Sveshnikov knew nothing about chess analysis
2 data points are not enough to estimate 2 parameters
nodes do not include evaluation
there are twice as many positions as diagrams
the 50-move rule is invoked in most chess games
there is a huge difference between 3-fold, 2-fold, or 5-fold repetition
#2140
"How can you say the error rate is that function of time?"
At infinite time the error rate is 0. At zero time the error rate is infinite.
Hypothesis 1.
So the simplest monotone function that satisfies those 2 asymptotic boundary conditions is
error = a / time^b
Hypothesis 2. Occam's razor is not a proof.
In plain English:
time * 60 = error / 5.6
"And those two equations are not coherent." ++ What do you mean?
Wrong calculation. Are you kidding? If b = 60 and a = 5.6, then
error = 5.6/time⁶⁰ and error/5.6 = 1/time⁶⁰ = time⁻⁶⁰ ≠ time*60
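For what it is worth, the two statements can be reconciled without b = 60 or a = 5.6: the 60 and the 5.6 are the ratios in the two data points, not the parameters. Under the assumed power law,

```latex
\frac{\text{error}(t)}{\text{error}(60\,t)}
  = \frac{a/t^{b}}{a/(60t)^{b}}
  = 60^{b}
  = \frac{11.8\%}{2.1\%} \approx 5.6
\quad\Longrightarrow\quad
b = \frac{\ln 5.6}{\ln 60} \approx 0.42 .
```

So "time * 60 = error / 5.6" is shorthand for 60^b ≈ 5.6, which determines b ≈ 0.42 (and then a from either data point).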
"the games may contain any odd number of errors, but then you make your calculations assuming there is only one"
++ P(5 errors) << P(3 errors) << P(1 error)
Hypothesis 3.
"if the game was a win for White or for Black, following your reasoning we should think that at least one error occurred in all the drawn games"
++ Yes, that is right: if chess were a win for white or even for black, then all the drawn games would contain an odd number of errors, at least 1. That would lead to the odd outcome that more time = more errors.
Hypothesis 4.
Counter-hypothesis: not to mention pathology in game trees, even if the evaluation was strongly biased, more time would lead to more draws, because the engine plays against an equally flawed evaluator, itself. With less time the result has simply greater variance. Calculations make it possible for the evaluator to avoid those lines that lead to lower expected results, therefore stabilizing the evaluation. The average outcome is the same: 0.5 and cannot be anything but that, with self-play/self-analysis and limited search.
Fact: this phenomenon (a more stable evaluation with increasing depth) has been observed in basically every engine. Following your reasoning, we could infer that the game is a draw even using a 1700-rated engine.
"if errors by both sides do compensate each other, that parameter "a" can be any number."
We are talking about 11.8% decisive games @ 1 s/move and 2.1% decisive games @ 1 min/move. That justifies neglecting the occurrence of 3, 5, 7, 9... errors in 1 game, even more so at 60 h/move.
Hypothesis 5.
Estimating the parameters a & b comes from the 2nd step: extrapolating from 1 s/move and 1 min/move to 60 h/move. That is independent of neglecting 3, or 5, or 7 errors and only assuming 1 error, as there are only 2.1% decisive games @ 1 min/move.
I did not assume a linear dependence.
On the contrary, I assumed a power-law dependence (a straight line on a log-log scale):
at 1 s / move: 1 error / 679 positions
I have to suppose only one error per game, to obtain that error rate per position. Can you be more explicit?
The 4th step is then to infer from that that the top 4 engine moves only produce 1 error in 10^20 positions and thus suffice as only 10^17 positions need consideration.
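The arithmetic behind that step appears to be pure compounding: take 1 miss in 10^5 positions for the top move, and treat each additional candidate's miss as independent with the same rate,

```latex
P(\text{best move not among top } k) = \left(10^{-5}\right)^{k},
\qquad
P(\text{not among top } 4) = 10^{-20}.
```

The independence of those misses is doing all the work, and it is an additional unstated hypothesis: four candidates ranked by the same evaluation function will hardly fail independently.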
A magician of numbers like you, then, can answer a question I have asked you four times and is still unanswered. I ask it here for the fifth time, in a different form.
Given a game tree of 10¹⁷ nodes, with this structure:
1) Every black node (representing a position after Black's move) has 4 child nodes (white nodes)
2) Every white node (representing a position after White's move) has 1 child node (black node)
a) What's the depth of the game tree?
b) If 60 hours are needed to produce the 4 white nodes and their 4 child black nodes (one for each white node), how many hours are needed to produce the entire tree?
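Taking the question literally (a single root, no transpositions merged, strictly serial 60-hour expansions), the two answers can be computed directly; a back-of-the-envelope sketch:

```python
# Tree per the question: every black node has 4 white children, every
# white node has 1 black child; one 60-hour "expansion" of a black node
# produces its 4 white children plus their 4 black children (8 nodes).
TARGET = 1e17            # total nodes in the tree, per the question
HOURS_PER_EXPANSION = 60.0

black, total, plies, expansions = 1, 1, 0, 0
while total < TARGET:
    expansions += black          # one 60 h expansion per black node
    new = 4 * black              # 4 white children...
    total += 2 * new             # ...plus their 4 black children
    black = new
    plies += 2

hours = expansions * HOURS_PER_EXPANSION
print(f"depth: ~{plies} plies (~{plies // 2} full moves)")
print(f"time:  {hours:.1e} hours = {hours / 8766:.1e} years")
```

Under these literal assumptions the tree is roughly 28 full moves (56 plies) deep and takes on the order of 10^18 hours, i.e. about 10^14 years, to produce serially; any smaller answer has to come from parallelism or transposition merging, which would need to be argued separately.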
Since everybody seems to agree or at least not contest that chess cannot be strongly solved in a feasible way as things stand now ...
then there could be many discussions as to what computers might otherwise achieve within chess.
Have computers changed, or are they changing, chess or chess development as a result of computer-solving of anything?
Tactics puzzles and their solutions can be computer-checked.
One can computer-check one's game.
One can 'work out' against pre-selected positions - or play computers.
(could make a good 'warmup' just before a real tournament game against 'the living')
But do those change anything about chess 'instruction'?
Or about chess theory?
The biggest such change might be that players now have a much faster route to improvement.
Because computers have compiled many tactics puzzles.
Isolating key concepts. Much more efficient than games.
But in that role - the computers are simply presenting - not instructing.
But the analysis (Stockfish feature) button can tell players why and how some moves don't work in puzzles.
#2135
"At infinite time the error rate is 0. At zero time the error rate is infinite.
Hypothesis 1."
++ That is obvious.
With unlimited time you can strongly solve chess by visiting all positions so there is 0 error.
If you have 0 time, then you can get nothing right at all, so infinite error.
"error = a / time^b
Hypothesis 2. Occam's razor is not a proof."
++ What else would you fit to it?
Teacher: "if 12 oranges cost $3, then how much do 24 oranges cost?"
You: "I cannot tell, there might be a discount for more oranges, some oranges may have more mass and the sale may be per kg instead of per number, maybe there are no more than 12 oranges available, no I cannot answer the question"
"If b = 60 and a = 5.6"
++ I did not say b = 60 and a = 5.6. You misunderstood.
"P(5 errors) << P(3 errors) << P(1 error)
Hypothesis 3."
++ That is no hypothesis, that is basic arithmetic.
"Hypothesis 4. Not to mention pathology in game trees, even if the evaluation was strongly biased, more time would lead to more draws, because the engine plays against an equally flawed evaluator, itself."
++ It plays against itself at both time controls. The draw rate goes up with more time.
"Counter-hypothesis: with less time the result has simply greater variance. Calculations make it possible for the evaluator to avoid those lines that lead to lower expected results, therefore stabilizing the evaluation. The average outcome is the same: 0.5 and cannot be anything but that, with self-play/self-analysis and limited search."
++ No, there are clear trends:
more time = more draws
white wins > black wins
draws > decisive games
Even more: even with stalemate counted as a win, the decisiveness does not go up.
Moreover look at figure 4d: for the King's Gambit the white / black trend is reversed, so you cannot attribute the outcome to variance.
"this phenomenon (a more stable evaluation with increasing depth) has been observed basically in any engine. Following your reasoning, we could infer that the game is a draw even using a 1700 elo-rated engine."
++ It is not only more stable, it approaches all draws. That is why at TCEC they had to impose unbalanced openings to avoid all draws. There the engines are different, so they do not play against themselves.
https://tcec-chess.com/
"We are talking about 11.8% decisive games @ 1 s/move and 2.1% decisive games @ 1 min/move. That justifies to neglect the occurence of 3, 5, 7, 9... errors in 1 game, even more so at 60 h/move.
Hypothesis 5."
++ That is not a hypothesis, that is basic arithmetic: you can neglect a smaller number relative to a larger number.
"I have to suppose only one error per game, to obtain this error rate per position. Can you be more explicit?"
++ A priori a game can contain 0, 1, 2, 3, 4, 5... errors. Under the generally accepted hypothesis that chess is a draw, an even number of errors leads to a draw and an odd number of errors leads to a decisive game. We observe more draws than decisive games and even more so with more time / move. Thus there are more games with an even number of errors than with an odd number of errors and even more so with more time.
That leads to
games with 0 errors > games with 1 error > games with 2 errors > games with 3 errors...
in particular games with 1 error >> games with 3 errors
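Both sides of this exchange can be made quantitative with one simple (and strong) assumption: errors per game are independent, i.e. Poisson-distributed with mean λ. Then, under the draw hypothesis, a decisive game is one with an odd error count, and λ can be read off the observed decisive fractions; a sketch:

```python
import math

# Poisson model: P(odd number of errors) = (1 - exp(-2*lam)) / 2.
# Invert from the observed decisive fractions to get lam, then compare
# the probabilities of exactly 1 and exactly 3 errors per game.
for label, decisive in [("1 s/move", 0.118), ("1 min/move", 0.021)]:
    lam = -math.log(1 - 2 * decisive) / 2
    p1 = lam * math.exp(-lam)            # P(exactly 1 error)
    p3 = lam ** 3 * math.exp(-lam) / 6   # P(exactly 3 errors)
    print(f"{label}: lam = {lam:.4f}, P(3)/P(1) = {p3 / p1:.1e}")
```

Under this model P(3 errors) really is negligible next to P(1 error) at these decisive rates (ratios around 3e-3 and 8e-5), but the ">>" comes from the independence assumption, not from arithmetic alone, which is exactly the objection raised below.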
"Given a game tree of 10¹⁷ nodes"
++ The tree does not even have 10^17 nodes. See the paper on solving checkers: there were as many nodes in the tree as positions to consider per node. Many leaves do not grow into branches.
"1) Every black node (representing a position after Black's move) has 4 child nodes (white nodes)
2) Every white node (representing a position after White's move) has 1 child node (black node)"
++ That is right.
"a) What's the depth of the game tree?"
++ I guess between 30 and 50.
For data on that look at the game lengths of near perfect ICCF drawn games.
https://www.iccf.com/event?id=85042
"b) If 60 hours are needed to produce the 4 white nodes and their 4 child black nodes (one for each white node), how many hours are needed to produce the entire tree?"
++ about 5 years
Do not forget about transpositions, which make up a large part of chess.
For more information on that see the paper on Losing Chess: they used a transposition table.
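As a concrete illustration of how much transpositions matter near the root: a toy sketch comparing move paths against distinct positions a few plies into the game. It assumes the python-chess package; tree_vs_dag and normalize are hypothetical helpers written for this post, not anything from the Losing Chess paper:

```python
import chess
from collections import Counter

def normalize(board: chess.Board) -> str:
    # Full FEN with the move counters zeroed out, so positions reached
    # by different move orders (transpositions) compare equal.
    parts = board.fen().split()
    parts[4], parts[5] = "0", "1"
    return " ".join(parts)

def tree_vs_dag(depth: int):
    """Move sequences (tree nodes) vs. distinct positions (DAG nodes,
    transpositions merged) at a given ply depth from the start."""
    paths = Counter({normalize(chess.Board()): 1})
    for _ in range(depth):
        nxt = Counter()
        for fen, count in paths.items():
            board = chess.Board(fen)
            for move in board.legal_moves:
                board.push(move)
                nxt[normalize(board)] += count
                board.pop()
        paths = nxt
    # sum of counts = tree size at this ply; number of keys = DAG size
    return sum(paths.values()), len(paths)

tree, dag = tree_vs_dag(4)
print(f"ply 4: {tree} move paths vs {dag} distinct positions")
```

Already at ply 4 the 197281 four-ply move sequences collapse into a substantially smaller set of distinct positions; exploiting that collapse is what a transposition table does.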
I wonder if ANYONE here is capable of discussing the topic area and not completely meaningless mumbo-jumbo, repeated for the nth time?
#2133
"Many people still believe that."
On this thread there are even people who believe that:
chess is a forced win for white in 3 trillion moves starting with 1 e4 e5 2 Ba6
1 a4 is better than 1 e4 or 1 d4
positions with 7 white rooks, 3 black rooks, 3 black bishops, and 5 black knights are common
solving chess requires floating point operations
engines play weaker when they use more time
strongly solving is the same as weakly solving
Prof. van den Herik cannot even define the subject he wrote a paper about
GM Sveshnikov knew nothing about chess analysis
2 data points are not enough to estimate 2 parameters
nodes do not include evaluation
there are twice as many positions as diagrams
the 50-move rule is invoked in most chess games
there is a huge difference between 3-fold, 2-fold, or 5-fold repetition
I've marked the ones that are the most obviously false in red. I'm sure other people would dispute more of these as well. Most of these are your own straw man arguments.
One thing is abundantly clear. You lack the objectivity to ever be trusted with the scientific method, and have zero hope of ever being part of an actual solution for chess. You are driving everything from the 5-year conclusion tossed out as an offhand comment by a deceased GM, and your hypotheses are all contorted to fit this reality you have decided upon in advance.
"At infinite time the error rate is 0. At zero time the error rate is infinite.
Hypothesis 1."
++ That is obvious.
To you and that does not prove anything.
"error = a / time^b
Hypothesis 2. Occam's razor is not a proof."
++ What else would you fit to it?
Teacher: "if 12 oranges cost $3, then how much do 24 oranges cost?"
Teacher: "if a car costs $30000 and can safely reach 125mph, how much does a car that can safely reach 300mph cost?" You: "$60000, obviously".
"If b = 60 and a = 5.6"
++ I did not say b = 60 and a = 5.6. You misunderstood.
Evasive. How does
error = a / time^b
become
time * 60 = error / 5.6?
"P(5 errors) << P(3 errors) << P(1 error)
Hypothesis 3."
++ That is no hypothesis, that is basic arithmetic.
Lie 1. "<<" means much smaller than, so only if you suppose the error rate very small and the errors statistically independent, your "basic arithmetic" holds true.
"Hypothesis 4. Not to mention pathology in game trees, even if the evaluation was strongly biased, more time would lead to more draws, because the engine plays against an equally flawed evaluator, itself."
++ It plays against itself at both time controls. The draw rate goes up with more time.
Oh, thanks, I didn't understand that. Is that supposed to be an objection?
"Counter-hypothesis: with less time the result has simply greater variance. Calculations make it possible for the evaluator to avoid those lines that lead to lower expected results, therefore stabilizing the evaluation. The average outcome is the same: 0.5 and cannot be anything but that, with self-play/self-analysis and limited search."
++ No, there are clear trends:
more time = more draws
white wins > black wins
draws > decisive games
"No" what? I use the same trends: 1) more time more draws, because the evaluation function is more stable with time; 2) White wins more than Black, but we cannot infer it is a win for White. In fact you say it is generally accepted the game value is a draw; 3) more draws than decisive games, because the game is too complex for an engine to consistently find a way and win against itself.
Even more: even with stalemate counted as a win, the decisiveness does not go up.
Moreover look at figure 4d: for the King's Gambit the white / black trend is reversed, so you cannot attribute the outcome to variance.
Nonsense.
"this phenomenon (a more stable evaluation with increasing depth) has been observed basically in any engine. Following your reasoning, we could infer that the game is a draw even using a 1700 elo-rated engine."
++ It is not only more stable, it approaches all draws. That is why at TCEC they had to imposed unbalanced openings to avoid all draws. There the engines are different, so they do not play against themselves.
As always: do you not understand, or do you pretend not to? It does not matter whether an engine plays against itself or not. What matters is that an engine plays against an equally strong engine. In that case, as I said, the game might simply be too complex for either of the two to defeat the other. If an engine plays against itself or analyzes, it cannot be stronger than itself.
"We are talking about 11.8% decisive games @ 1 s/move and 2.1% decisive games @ 1 min/move. That justifies to neglect the occurence of 3, 5, 7, 9... errors in 1 game, even more so at 60 h/move.
Hypothesis 5."
++ That is not a hypothesis, that is basic arithmetic: you can neglect a smaller number relative to a larger number.
Lie 2. You cannot say that 3 + 1 ≈ 3 just because 1 is smaller than 3. You have not proven that you didn't make a hypothesis. You are just repeating your hypotheses, denying they are hypotheses, because you cannot prove they aren't.
"I have to suppose only one error per game, to obtain this error rate per position. Can you be more explicit?"
++ A priori a game can contain 0, 1, 2, 3, 4, 5... errors. Under the generally accepted hypothesis that chess is a draw, an even number of errors leads to a draw and an odd number of errors leads to a decisive game. We observe more draws than decisive games and even more so with more time / move. Thus there are more games with an even number of errors than with an odd number of errors and even more so with more time.
That leads to
games with 0 errors > games with 1 error > games with 2 errors > games with 3 errors...
in particular games with 1 error >> games with 3 errors
Lie 3. 10 > 9 > 8 does not mean 10 >> 8. This also proves that you supposed only 1 error per game in your calculations, so:
Estimating the parameters a & b comes from the 2nd step: extrapolating from 1 s/move and 1 min/move to 60 h/move. That is independent of neglecting 3, or 5, or 7 errors and only assuming 1 error, as there are only 2.1% decisive games @ 1 min/move.
is lie 4.
"Given a game tree of 10¹⁷ nodes"
++ The tree does not even have 10^17 nodes. See the paper on solving checkers: there were as many nodes in the tree as positions to consider per node. Many leaves do not grow into branches.
Don't you understand, do you pretend to not understand, or do you blatantly lie? You have already admitted that in checkers 10¹⁴ nodes have been searched, out of a search space of 5*10²⁰ nodes:
Earlier you tried to use a paper to prove that only the square root of the search space has been checked, and there is no proof. Now you say "the other paper"... Which one? Page, line, or paragraph? Are you saying that indeed 10^14 nodes have been searched?
You:
#1828
https://www.researchgate.net/publication/231216842_Checkers_Is_Solved
page 4 right column, §1-3, yes this paper says 10^14
Now you say that in chess it is enough to search less than 10¹⁷ nodes? That's not what you said earlier:
"You stated that you expect to search only 10¹⁷ positions out of 10³⁷ and in order to do that, you want to search only 4 candidates for White and one for Black at any move."
++ Yes, that is correct.
"how do you determine those 4 top candidates and be sure the optimal move is among them?"
++ I determine the 4 top candidates with the Stockfish evaluation function or a simplified version of it. I reckon the optimal move is among them by extrapolation: 1 error in 10^5 positions for the top 1 move, 1 error in 10^10 positions for the top 2 moves, 1 error in 10^15 positions for the top 3 moves, 1 error in 10^20 positions for the top 4 moves. That should do as I plan to consider only 10^17 positions.
Don't play with words, @tygxc, and don't just answer "I don't play with words". Try harder. Search space, proof tree, and nodes actually searched are not the same thing. The game tree I asked about is made of nodes actually searched.
"a) What's the depth of the game tree?"
++ I guess between 30 and 50.
Evasive. "I guess"? I didn't ask for a guess. You can use some approximation, but that is far too much vague. That answer is paramount for your theory.
"b) If 60 hours are needed to produce the 4 white nodes and their 4 child black nodes (one for each white node), how many hours are needed to produce the entire tree?"
++ about 5 years
Wrong.
Do not forget about transpositions, which make up a large part of chess.
For more information on that see the paper on Losing Chess: they used a transposition table.
And in fact they searched 10¹⁷ nodes. This is not the search space, which is much bigger, and not the proof size, which is much smaller. Read the paper on Losing Chess more carefully. BTW, any common engine uses a transposition table.
You are just repeating your hypotheses, denying they are hypotheses, because you cannot prove they aren't.
The central technique, yes. It happens over and over again in many threads, and he is refuted in every one of them, simply moving on to the next when one dies out from people being tired of repeating themselves.
I can believe it. The only problem is that some of those claims are not mere opinions, they are actually disinformation. That's not enough for reporting, I'm afraid, but for sure they are offensive... to our intelligence.
#2100
I tried to reproduce your perceived error on Stockfish 14 NNUE, but could not replicate your findings: the engine goes straight for the tablebase-correct win 1 Rh8+ Ke7 2 Rh7+ Kf6 3 Rf7+ Kg6 4 Rd7 Rb1+ 5 Ka5 Kf6 etc. I guess there is something wrong with your version of Stockfish 14.
That's probably because you don't know your arse from your elbow.
You almost certainly gave it a FEN with ply count 0 instead of 100, because you have a total lack of comprehension of how the rules affect either SF14 or the game results and you don't take any notice of what people post for you.
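For anyone wanting to check the mechanism: the fifth FEN field is the halfmove clock that drives the 50-move rule, and engines score the same diagram very differently when that clock is near 100. A sketch with the python-chess package; the KQ-vs-K FEN below is only an illustration (the actual #2100 position was never posted), and "stockfish" stands for whatever your local binary path is:

```python
import chess
import chess.engine

# Illustrative KQ-vs-K win, NOT the #2100 position (its FEN was never
# posted). The interesting part is the fifth FEN field: the halfmove
# clock. There is no mate in 1 here, so with the clock at 99 every
# winning attempt trips the 50-move rule first.
BASE = "7k/8/5K2/8/8/8/8/1Q6 w - -"

engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # your SF path
for clock in (0, 99):
    board = chess.Board(f"{BASE} {clock} 1")
    info = engine.analyse(board, chess.engine.Limit(time=1.0))
    print(f"halfmove clock {clock}: {info['score']}")  # mate vs. 0.00
engine.quit()
```

With the clock at 0 the engine reports a forced mate; with it at 99 the same diagram scores as a dead draw, which is presumably the discrepancy behind #2100.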
From earlier:
"One thing is abundantly clear. You lack the objectivity to ever be trusted with the scientific method, and have zero hope of ever being part of an actual solution for chess
. You are driving everything from the 5 year conclusion tossed out as an offhand comment by a deceased GM, and your hypotheses are all contorted to fit this reality you have decided upon in advance."
Apparently.
And he's investing a tremendous effort to so contort.
But one of those points can be qualified.
This one:
"You lack the objectivity to ever be trusted with the scientific method"
Could be qualified this way:
Whatever substantial degree of objectivity he possesses - if he does - he's not applying it here, to any degree, regarding these chess-solving subjects.
By putting it that way - it more applies to his posts than he himself.
How does he get away with it?
Because he avoids personally attacking.
It's similar with the people pushing flat-earth and 'viruses don't spread diseases' nonsense on this website ... they push their silliness but are careful not to violate the TOS (terms of service of chess.com) -
so - they get away with it.
And by the way - there's now a new red Exclam report button within every post.
I tried it out just now. In another forum.
When you hit that button - it gives you a popup menu to qualify your report.
So I ticked the 'spam' option on the posts that keep turning up about 'playing for cash on another website' from the same person over and over again.
Yes, I was careful just now not to also spam by not repeating the name of the spammed website!
#2143
Of course I gave the position as it is, with ply count 0 to start with. There is no point in discussing a position with circumstances close to the 50-move rule that do not occur in practice.
I still challenge you to show me one grandmaster game or ICCF game where the 50-move rule was invoked before the 7-men tablebase was hit.
I bet there is none, but I cannot prove the non existence of such a game.
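That challenge is at least mechanically checkable against any PGN collection. A sketch with the python-chess package (games.pgn is a placeholder path); it flags games where a 50-move-rule claim became available (halfmove clock at 100) while more than 7 men were still on the board:

```python
import chess
import chess.pgn

# Flag games where the 50-move counter reached 100 plies before the
# position entered 7-men tablebase territory. A hit only means a draw
# claim was *available* there, a proxy for the rule being invoked.
with open("games.pgn") as handle:                  # placeholder path
    while (game := chess.pgn.read_game(handle)) is not None:
        board = game.board()
        for move in game.mainline_moves():
            board.push(move)
            if board.halfmove_clock >= 100 and len(board.piece_map()) > 7:
                print("hit:", game.headers.get("White", "?"), "-",
                      game.headers.get("Black", "?"),
                      game.headers.get("Event", "?"))
                break
```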
#2125
Game theory is a branch of mathematics.
Here again is a reference to the classical paper on solving games:
https://reader.elsevier.com/reader/sd/pii/S0004370201001527?token=285C2283E58C900FB0F801D77A2E7F5BE5B1F4E3238544BC6BADBAB2FA8A956CD32DD00A4C0C577FED2A13E98A821265&originRegion=eu-west-1&originCreation=20220311103154