Chess will never be solved, here's why

alexandermatrone-eleleth

I really need a coach

playerafar
alexandermatrone-eleleth wrote:

I really need a coach

This is the best Coach in the world:

https://www.chess.com/puzzles/learning 
You can do the puzzles unrated.
You can set the rating range of the puzzles you want to play against -
without being rated (better that way) ...
Plus after each puzzle done -
there's a black bubble button that takes you to a discussion forum for each and every one of the 50,000 tactics puzzles !

They're all about 'Solving' chess.  And in a very big way !  
'Chess will never be solved ?'
It's Solved there Constantly !  And in a big way.

MARattigan
tygxc wrote:

#2403
You find my personal alternative definition not reasonable but you give no reasons.

Your personal alternative definition is, "To weakly solve chess is to prove that black can draw against all reasonable white moves." My objection is that it requires you to provide no algorithm for White to even draw. Nobody else would take that to be a weak solution.
You find the definitions in Wikipedia not reasonable.

Yes, I gave a full explanation earlier. It would be rather tedious to post it again here, so if you didn't read it or didn't understand it, try again.
You find the definition by van den Herik slipshod.

Yes; if you think a weak solution means finding an infallible way for one side to lose, I think you may struggle to get funding for your supercomputers.
I find your personal alternative definition not reasonable for several reasons.
"my definition is generally accepted (also by van den Herik"
++ Please show the letter where van den Herik accepts your definition.

As you probably guessed, I haven't been in correspondence with Prof. van den Herik - I just assumed that he would be in line with the rest of the game theory world.

How does that make my definition not reasonable? 
This discussion about definitions is pointless.

Does that mean it doesn't matter to you if you don't know what you're trying to do?
Accept what is written by van den Herik.

No.
Propose something else written in a reputable source.

Unless someone can find a valid reason to reject what I wrote, I am a reputable source. 

 

playerafar

Lol !
I liked that post by Martin !  

But I think maybe I've figured out @tygxc ?  Maybe.  Maybe not.
Maybe:  it's not about Sveshnikov - nor 'weak solving' (too weak) - nor 'knows more about mathematics' - nor Cloud computers.
He just wants to talk about solving chess.  It's innocent. 
The other stuff is window dressing.

haiaku
MARattigan wrote:

weakly solved means that for the initial position either a timely strategy has been determined for one player that achieves a win against any opposition, or a timely strategy has been determined for both players that avoids a loss against any opposition. 

Ok, I think we can agree on that, but this definition (and other equivalent ones) doesn't say anything about suboptimal strategies. In the definition you gave, the optimality is implied in that "against any opposition". That is, White can try something other than 1. Qa7, in that position you posted above, but he has some chances only against a suboptimal strategy by Black; he cannot avoid loss against any opposition (and among those oppositions there is the optimal strategy for Black). You say "that avoids", not "that makes any effort to avoid" (e.g. through clever moves which can induce the opponent to make a mistake). In fact, your definition says something about the players when the game value is a draw, and about the player with a won position, but nothing about the player with the lost one. I'd say that 1. Qa7 is part of the strategy for Black (Black's strategy must work against any opponent's move).
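To make "against any opposition" concrete, here is a minimal sketch in Python (my own toy example; the tiny game tree and all the names in it are made up, it is not chess): a strategy is just a map from the solver's positions to moves, and checking it as a weak solution means testing it against every opponent reply, the suboptimal ones included.

GAME = {                        # position -> {move: next position}
    "root": {"a": "p1", "b": "p2"},
    "p1":   {"x": "win", "y": "draw"},
    "p2":   {"x": "win", "y": "win"},
}
TERMINAL = {"win": 1, "draw": 0, "loss": -1}    # value for the solving side
STRATEGY = {"p1": "x", "p2": "x"}               # the solver's chosen move at each of its nodes

def worst_case(pos, solver_to_move):
    """Value the solver is guaranteed when the opponent always picks its worst reply for us."""
    if pos in TERMINAL:
        return TERMINAL[pos]
    if solver_to_move:
        return worst_case(GAME[pos][STRATEGY[pos]], False)
    return min(worst_case(nxt, True) for nxt in GAME[pos].values())

# True: this strategy wins against every opposition, optimal or not
print(worst_case("root", solver_to_move=False) == 1)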

haiaku

Sorry everyone, I confused White and Black in the previous post. I have corrected it.

MARattigan
@haiaku

I agree my definition says nothing about suboptimal strategies.  I don't think it's generally regarded as necessary for a weak solution to say anything about suboptimal strategies. 

Obviously they're useful in practical play, but the broad categories of solution in game theory don't address practical play. 

Weak solutions don't need to provide a strategy for a losing player. The strategy defaults to, "do whatever you want".

 

haiaku
MARattigan wrote:

Weak solutions don't need to provide a strategy for a losing player. The strategy defaults to, "do whatever you want".

Indeed. So

tygxc wrote:

weakly solved means that for the initial position a strategy has been determined to achieve the game-theoretic value against any opposition [ . . .] - van den Herik

holds true for your position, if we say that 1. Qa7 is one of the possible moves the losing side can play, and it has been determined what to play against it (and the other possible moves) in order to reach the game value, which is a win for Black. The losing side can be included just as a limit case: any strategy is "optimally" losing, so who cares.

playerafar

The losing side can look for forced play to draw by the 50-move rule.
Or perpetual check.  Or draw by deterrent.
Or there may be a way to force a third repetition different from perpetual.

By the way - (or maybe Not by the way) -
regarding 'pure' repetition of position that is not perpetual check ...
I'm thinking most players never ever encounter it in their games.
Never !
It's even rarer than N+B versus lone King. 
Which also might Never be encountered.  

playerafar

And - does 'anybody' ever encounter the 50 move rule in their games ?
Sometimes.  It's rare.
Like it could be that very thing - N+B versus lone King and whoever forgot how to do it and went over the 50.
I would imagine many if not most players under 2000 would forget how to do it - assuming they ever knew.  And go over.  And maybe even get hit with 'pure' repetition of position.
All these things 'coming together'.

MARattigan
playerafar wrote:

...
By the way - (or maybe Not by the way) -
regarding 'pure' repetition of position that is not perpetual check ...
I'm thinking most players never ever encounter it in their games.
Never !
Its even rarer than N+B versus lone King. 
Which also might Never be encountered. 

... 

I have encountered it several dozen times practising KNNKP against Wilhelm/Nalimov. You need only choose the wrong direction to force the king at some point. If you force it back again and Wilhelm sees the chance of a triple repetition, he's in like Flynn.

What is quite likely is that players never notice it when the repetitions are separated by more than about ten moves, at least if they're not playing electronically.

playerafar

Lol !

playerafar

Does chess.com 'call' the 50 moves in games ?
Maybe a player has to claim it.
I would think it's called in the 'workouts' though.
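For what it's worth, here is a small sketch using the python-chess library (my own example; it says nothing about how chess.com actually implements this) showing the distinction in the FIDE rules: the threefold-repetition and 50-move draws normally have to be claimed, while the fivefold-repetition and 75-move draws are automatic.

import chess

board = chess.Board()
for san in ["Nf3", "Nf6", "Ng1", "Ng8"] * 2:        # shuffle the knights out and back twice
    board.push_san(san)

print(board.can_claim_threefold_repetition())       # True: start position occurred 3 times, a player may claim
print(board.is_fivefold_repetition())                # False: not yet an automatic draw
print(board.can_claim_fifty_moves())                 # False: only 8 reversible half-moves so far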

MARattigan
tygxc wrote:

#2140
"How can you say the error rate is that function of time?"
At infinite time the error rate is 0. At zero time the error rate is infinite.

I'm OK with your first sentence. If SF thinks for infinite time it never makes a move, so, a fortiori, never makes an error.

Been struggling with that last sentence.

How can you get an error rate greater than 1?

You do subsequently give the first data point as (0,1) so presumably you must be making the assumption 1 ≅ ∞.

That would correspond with SF14, given zero think time, making an error with a probability of 1.

Even so, I'm still struggling. I've been trying to work out the probability that it would make an error in, say, this position (as either side)

but I still can't get it to 1.

Perhaps you could tell me where I'm going wrong. You know more about mathematics than me.

 

tygxc

#2411

"In the first two cases you have two searches, in the latter just one."
++ Yes, that is correct.
However the probability of a blunder (??) = double error in the ICCF world championship finals is very, very low to start with. Once they get a won position, they play it out to a win. They do not get tired, they do not get into time trouble, they have 5 days per move, they can use engines.
Try to find one single game among the hundreds of ICCF WC games where a blunder (??) might have occurred that turned a win into a loss. It is already hard to find wins at all.

More generally in a won position the set of legal moves has 3 subsets:
1) good moves that keep the win,
2) errors (?) that return to a draw, and
3) blunders = double errors (??) that turn into a loss.
The first set is never empty by definition of a won position. The 2nd and 3rd sets may be empty.
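Schematically, that three-way split looks like the sketch below, assuming a hypothetical exact oracle value(position) returning +1/0/-1 (win/draw/loss) for the side to move; outside the 7-man tablebases no such oracle exists, so this only illustrates the definition.

def classify_moves(position, legal_moves, value, make_move):
    """Split the legal moves of a won position into the three subsets above.

    value(pos) is a hypothetical exact oracle (+1 win, 0 draw, -1 loss for the
    side to move); make_move(pos, m) returns the resulting position. Purely
    illustrative - neither function exists for chess outside the tablebases.
    """
    good, errors, blunders = [], [], []
    for m in legal_moves:
        v = -value(make_move(position, m))   # flip sign: the opponent moves next
        if v == 1:
            good.append(m)        # keeps the win
        elif v == 0:
            errors.append(m)      # error (?): the win becomes a draw
        else:
            blunders.append(m)    # blunder (??): the win turns into a loss
    return good, errors, blunders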

"a car accident that causes a damage which costs d dollars"
++ No, that is not a good analogy.
Maybe there are better analogies in quantum mechanics, like the probabilities to absorb 1 or 2 particles.
I also like the analogy of a straight line always intersecting a circle at two points. The tangent only intersects it at 1 point, which however mathematically counts as 2 points: a double root. If the straight line lies outside of the circle, then the 2 points are imaginary (complex numbers).

DJET125

woa

haiaku

I'll start from the end:

tygxc wrote:

"a car accident that causes a damage which costs d dollars"
++ No, this is no good analogy.
Maybe there are better analogies in quantum mechanics, like the probabilities to absorb 1 or 2 particles. I also like the analogy of a straight line always intersecting a circle at two points. The tangent only intersects it at 1 point, which however mathematically counts as 2 points: a double root. If the straight line lies outside of the circle, then the 2 points are imaginary (complex numbers).

You say that people are either unable or unwilling to understand you, but as usual I could say the same about you. I have the feeling that you try to impress the occasional reader of this thread using concepts like cloud computing, heuristics, quantum mechanics, complex numbers, etc., as if they were something special, in the hope of having them think: "Oh, this is high-level stuff, this @tygxc must be a genius or something...". For sure you do not impress any of the regulars here with such or even higher-level things, so try to stick to the point; you still have to prove that, in general, if an event causes an effect e₁ with probability P(e₁), and another statistically independent event causes an effect e₂ with probability P(e₂), and the sum of the two effects is e₁+e₂, then the probability of an event causing the effect e₁+e₂ is P(e₁+e₂) = P(e₁)·P(e₂). Do you want to use discrete effects? Ok, but the formula will not prove itself automagically because of that.
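To see why the product rule is not automatic, here is a toy Monte Carlo in Python (my own illustration with made-up numbers, nothing taken from the ICCF data). Both models give a game the same per-move chance of a first error, but in the second one errors cluster; the share of games with two or more errors differs between the models, and in neither case does it equal the square of the single-error share.

import random

def simulate(p_first, p_after_first, n_games=100_000, moves=60):
    """Fraction of games with >=1 and >=2 errors under a simple per-move error model."""
    at_least_one = at_least_two = 0
    for _ in range(n_games):
        errors, p = 0, p_first
        for _ in range(moves):
            if random.random() < p:
                errors += 1
                p = p_after_first          # after a first error the chance may change
        at_least_one += errors >= 1
        at_least_two += errors >= 2
    return at_least_one / n_games, at_least_two / n_games

random.seed(1)
for label, p_after in [("independent", 0.002), ("clustered  ", 0.02)]:
    p1, p2 = simulate(0.002, p_after)
    print(f"{label}  P(>=1)={p1:.3f}  P(>=2)={p2:.4f}  P(>=1)^2={p1*p1:.4f}")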

tygxc wrote:

"In the first two cases you have two searches, in the latter just one."
++ Yes, that is correct. However the probability of a blunder (??) = double error in the ICCF world championship finals is very, very low to start with. Once they get a won position, they play it out to a win. They do not get tired, they do not get into time trouble, they have 5 days per move, they can use engines.
Try to find one single game among the hundreds of ICCF WC games where a blunder (??) might have occurred that turned a win into a loss. It is already hard to find wins at all.

Again, you don't understand, or... We have not strongly solved chess, so the only positions we know are wins or draws are those which can be calculated out to checkmate or to the endgame tablebases; thus it is obvious that if they get a won position they play it optimally, because we know that position is won by analysis and tablebases. That doesn't mean they play all the other positions with few errors. Therefore, you cannot start with the assumption that the probability of a blunder is "very, very" low.

The draw rate does not mean anything. You say check the ICCF games... If you look only at the games played by the top finishers (let's say the top 10) in the ICCF WC finals (thus excluding the games they played against those who finished below 10th), you can see that at least from edition 17, which started in 2002, the draw rate was already well above 80%. You could infer the error rate from those results, then, and what would you get? That engines twenty years ago were as close to perfection as engines today...

Stop jumping to conclusions: you are too creative and selective in choosing hypotheses. Maybe that helps in practical play, but that's another story.

haiaku
haiaku wrote:

The draw rate does not mean anything. You say check the ICCF games... If you look only at the games played by the top finishers (let's say the top 5)

Well, let's do top 10. The size of the sample cannot be too small, but the sample cannot include all the players. I think that twenty years ago the differences in hardware and software between participants were bigger. In the past, the speed of computation was based on the speed of the cores, but today we reach higher speeds more through parallelization. There's a limit to the increase in speed we can get this way, though, especially for chess. So today there is less difference in performance. If we couple this with the increased search depth, which for sure produces more stable, but not necessarily more correct, evaluations, I think the year-by-year increasing rate of draws in ICCF games can be explained without the hypothesis that today's evaluations are closer to perfection.

tygxc

#2429

"you try to impress the occasional reader of this thread using concepts like cloud computing, heuristics, quantum mechanics, complex numbers, etc."
++ No, not at all. See the reference on cloud engines: they do reach 10^9 nodes/s. See the paper on solving Losing Chess: it does mention heuristics. Quantum mechanics is based on probability calculation and on complex numbers. I have seen a quantum electrodynamical calculation on the probability of absorption of 1, 2, 3... photons using Feynman diagrams. It is in the booklet "QED" by Feynman. I do not have it on my shelf, so I cannot tell you page and line.

Even in your car-crash analogy: every insurance company will tell you they settle more minor-damage claims than total losses.

Let us do some simple high school math.
Let D represent the rate of decisive games.
Let E represent the error rate per game.
Assuming a game with k errors occurs with probability E^k, a decisive game is one with an odd number of errors, so
D = E + E³ + E⁵ + E⁷ + ... = E / (1 - E²)
Hence
E² + E/D - 1 = 0
Hence
E = sqrt(1 + 1/(2D)²) - 1/(2D)
Let us now apply this
ICCF WC32:           D = 17/125 = 0.14,  E = 0.13,  E² = 0.018,   E³ = 0.0024
ICCF WC31:           D = 14/133 = 0.11,  E = 0.10,  E² = 0.011,   E³ = 0.0011
ICCF WC30:           D = 9/136  = 0.07,  E = 0.07,  E² = 0.0043,  E³ = 0.00029
Yekaterinburg 2021:  D = 25/56  = 0.45,  E = 0.38,  E² = 0.15,    E³ = 0.056
Zürich 1953:         D = 90/210 = 0.43,  E = 0.37,  E² = 0.14,    E³ = 0.051
So the data show that 99% of ICCF WC draws are ideal games with optimal moves, and are thus part of the weak solution of chess.
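For anyone who wants to check the arithmetic, here is a short Python sketch that reproduces the figures above from the decisive-game counts; the underlying model D = E / (1 - E²) is the assumption stated above, not an established result.

from math import sqrt

events = {                      # event: (decisive games, total games)
    "ICCF WC32": (17, 125),
    "ICCF WC31": (14, 133),
    "ICCF WC30": (9, 136),
    "Yekaterinburg 2021": (25, 56),
    "Zürich 1953": (90, 210),
}

for name, (decisive, total) in events.items():
    d = decisive / total                              # rate of decisive games
    e = sqrt(1 + 1 / (2 * d) ** 2) - 1 / (2 * d)      # positive root of E² + E/D - 1 = 0
    print(f"{name:20s} D={d:.2f}  E={e:.2f}  E²={e*e:.2g}  E³={e**3:.2g}")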

"We have not strongly solved chess, so the only positions we know are wins or draws, are those which can be calculated to the checkmate or to the endgame tablebase"
++ I also consider chess ultra-weakly solved, with the game-theoretic value of the initial position being a draw. Any other value would contradict the observed data.
Also positions with a forced 3-fold repetition are known draws, e.g. perpetual checks.
This commonly happens in ICCF WC games, more often than tablebase draws.
Some other endgames with more than 7 men are also known to be draws, e.g. most endings with opposite-colored bishops, rook endings with 4 vs. 3 pawns or fewer on the same wing, etc.

"thus it is obvious that if they get a won position they play it optimally, because we know that position is won by analysis and tablebases"
++ Once they reach the table base draw, they stop playing and just claim the draw.

"that doesn't mean they play all the other positions with few errors"
++ No, but the low error rate follows from the low rate of decisive games.
In an ICCF WC the error rate E = 0.10: 1 error in 10 games.

"you cannot start with the assumption that the probability of a blunder is "very, very" low"
++ No, I do not start with that assumption, I derive it from the data.
In the ICCF WC games E² = 0.01: 2 errors in 1 of 100 games.

"Stop jumping to conclusions"
++ That is how science works: deduction and induction. E.g. the celestial orbits were not deduced from the laws of motion and gravity: it was the other way around. Kepler derived his laws of planetary motion from Tycho Brahe's astronomical observations. Newton derived his laws of motion and gravity from Kepler's laws. He invented the calculus he needed for that.

DiogenesDue

You're doing it backwards.  You can't use an error rate derived from imperfect play and imperfect evaluations.  You need to weakly solve chess before you can claim a valid error rate.  You *can* see how often engine play matches a tablebase if you turn off their tablebase access, because tablebases do represent perfect play, but that still would not tell you a valid error rate for a middle game or opening at all.
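A rough sketch of that tablebase check, using the python-chess library (the engine path, the Syzygy directory and the sample position are placeholders of mine): probe the tablebase value before and after the engine's move, with the engine's own tablebase access left switched off, and flag any move that loses value.

import chess
import chess.engine
import chess.syzygy

ENGINE_PATH = "/usr/local/bin/stockfish"   # placeholder: any UCI engine
SYZYGY_DIR = "/path/to/syzygy"             # placeholder: directory with Syzygy tables

board = chess.Board("8/8/8/4k3/8/8/4K3/4Q3 w - - 0 1")   # KQ vs K, just an example

tb = chess.syzygy.open_tablebase(SYZYGY_DIR)
engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
# The engine's SyzygyPath option is deliberately left unset,
# so it cannot consult the tables itself.

wdl_before = tb.probe_wdl(board)           # win/draw/loss from the side to move's view
move = engine.play(board, chess.engine.Limit(time=0.1)).move
board.push(move)
wdl_after = -tb.probe_wdl(board)           # negate: the opponent is now to move
print(move, "keeps the tablebase value" if wdl_after >= wdl_before else "loses value (an error)")

engine.quit()
tb.close()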

All those math formulas are useless when you plug in assumptions and bad logic.

Here's where you go off track:

"So the data show, that 99% of ICCF WC draws are ideal games with optimal moves that thus are part of the weak solution of chess."

What you mean to say is that 99% of ICCF WC draws have no errors *detectable by the engines evaluating the games*.  These engines are improving weekly, monthly, yearly, etc. and ergo cannot be used as a basis for determining "best play"...only "currently best understood play until next week rolls around...".  These engines *cannot determine ideal games or even optimal moves in most cases* until/unless they are forcing mate via brute force, or they utilize a tablebase.