Chess will never be solved, here's why

Avatar of DiogenesDue
Thee_Ghostess_Lola wrote:

u just need opti back so u can pick on s/o. and ty for re: that crypto thingy from so long ago. wow lol ! why doncha go obsess on s/o else...plz ?

It's not that long ago, and the obsession was yours, which is why I remember it easily. You're acting like I never refuted your assertions while also handling Optimissed's. I'm an equal opportunity crackpot-buster.

Avatar of Thee_Ghostess_Lola

lol !! are u thru dopey ?

Avatar of Thee_Ghostess_Lola

burst !

Avatar of DiogenesDue
Luke-Jaywalker wrote:

he loves winning arguments inside his head.

its what gives him the buzz he needs to start every day as an adventure

At least I actually engage in discussions...the two of you just take potshots from the sidelines offering nothing, knowing you will get yourselves in trouble in a heads-up debate. It's been a decade and nothing much has changed.

You'll always be the serial sockpuppeteer stirring up drama for fun, and Lola will always be playing the kooky-but-cute role looking for male validation from the wrong source.

It's almost poignant and nostalgic...but not quite. Sorry for the blunt assessment of reality.

Avatar of Thee_Ghostess_Lola

blah blah blah...quite overanalyzing milk toast

Avatar of AyusGiri

Hello

Avatar of DiogenesDue
Thee_Ghostess_Lola wrote:

blah blah blah...quite overanalyzing milk toast

That would be "quit" and "milquetoast". The latter term comes from a fictional character from the Sunday comics. If by "overanalyzing" you mean that my observations/points are more accurate...well, that tracks with this exchange right here.

You two both like to take potshots, but don't seem to like anything coming back your way.

Avatar of Thee_Ghostess_Lola

That would be "quit" and "milquetoast". The latter term comes from a fictional character from the Sunday comics. If by "overanalyzing" you mean that my observations/points are more accurate...well, that tracks with this exchange right here.

ok fine ur the smartest person in the world...now woodja *hut up already ? lol !

* > s

Avatar of DiogenesDue
Thee_Ghostess_Lola wrote:

That would be "quit" and "milquetoast". The latter term comes from a fictional character from the Sunday comics. If by "overanalyzing" you mean that my observations/points are more accurate...well, that tracks with this exchange right here.

ok fine ur the smartest person in the world...now woodja *hut up already ? lol !

* > s

I do not claim to be smarter than everybody around me. You're thinking of someone else. Have a good weekend.

Avatar of Elroch
DiogenesDue wrote:
tygxc wrote:

"an assumption that cannot be proven about underpromotions not being relevant"
++ Underpromotions to rook or bishop only make sense to avoid a draw by stalemate.
That could make sense for one side, but not for both sides.
In the perfect games we have, some promotions to a 3rd and/or 4th queen occur, but promotions to a 3rd knight, bishop, or rook do not occur. Limiting underpromotion to pieces previously captured would therefore leave all of these games unchanged.

"This previous step takes a 0.001% bite"
++ 1 in 10,000 is 0.01%. Tromp conjectured it to be only 1 in 10^6 i.e. 0.0001%.

"Not proven to be possible for chess as of yet, but true for checkers" ++ Alpha-Beta search is a universal method to prune search trees and does not depend on the game.

"The two 10^17 numbers represent entirely different things"
++ Yes, one is the number of positions to be considered to weakly solve Chess and the other is the number of positions actually considered during the ongoing ICCF WC Finals.
That they turn out the same shows that the effort is commensurate with the required effort.

Alpha Beta pruning *when optimal* can effectively reduce the game tree complexity to its square root. Checkers has an average branching factor of 10, chess has an average branching factor of 35. In order to achieve optimal pruning, the branching moves must be evaluated in the best possible order. This means that the best moves are to be considered first.

Slight problem: you have no way of ordering the average 35 moves perfectly (and, as you might imagine, ordering 35 in a more complex game correctly would be a very tall order compared to ordering 10 in a much simpler game), only via a vague approximation based on human and engine-derived valuations. So, I will maintain that it is not at all proven to be possible to achieve the 17 orders of magnitude downward leap you are positing.

The only value of ordering the moves to a solution of chess is selecting the sole move to be played in a strategy. It leaves the roughly 35 moves for the other side to play. 35^13.

By chance, 35^11 ≈ 10^17 (near as damn it). Which only has value to emphasise the absurdity of tygxc's claims.

My best guess is that you need about 10^30 positions for a weak solution of chess. The rule set does not matter much, since both strategies have a side playing optimally (so you don't get silly examples where the dumb route that has been taken to a position lowers the value of the position). Optimal players can avoid double repetitions when winning and are happy with draws the rest of the time.
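To make the pruning argument concrete, here is a minimal textbook alpha-beta sketch in Python over an explicit toy tree (everything in it is invented for illustration; it is not engine code). The order of the children stands in for move ordering: search the best child first everywhere and the visited tree shrinks towards the square root of the full tree; order them badly and almost nothing gets cut off.

```python
import math

# Textbook alpha-beta over an explicit toy tree (nested lists of leaf scores).
# Child order plays the role of move ordering: good ordering produces early
# cutoffs, bad ordering prunes almost nothing.

def alpha_beta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):        # leaf: a static evaluation
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:                    # child order = "move ordering"
        value = alpha_beta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:                 # cutoff: remaining children skipped
            break
    return best

# Depth 2, branching factor 3: maximizer to move, then minimizer.
tree = [[3, 5, 6], [2, 9, 1], [0, 4, 7]]
print(alpha_beta(tree))                   # -> 3; several leaves are never visited
```

With perfect ordering the number of leaves visited grows roughly like b^(d/2) instead of b^d, which is where the square-root figure comes from; producing that ordering for an average of 35 legal moves per position, without already knowing the answer, is exactly the part in dispute.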

Avatar of playerafar
DiogenesDue wrote:
tygxc wrote:

"an assumption that cannot be proven about underpromotions not being relevant"
++ Underpromotions to rook or bishop only make sense to avoid a draw by stalemate.
That could make sense for one side, but not for both sides.
In the perfect games we have, some promotions to a 3rd and/or 4th queen occur, but promotions to a 3rd knight, bishop, or rook do not occur. Limiting underpromotion to pieces previously captured would therefore leave all of these games unchanged.

"This previous step takes a 0.001% bite"
++ 1 in 10,000 is 0.01%. Tromp conjectured it to be only 1 in 10^6 i.e. 0.0001%.

"Not proven to be possible for chess as of yet, but true for checkers" ++ Alpha-Beta search is a universal method to prune search trees and does not depend on the game.

"The two 10^17 numbers represent entirely different things"
++ Yes, one is the number of positions to be considered to weakly solve Chess and the other is the number of positions actually considered during the ongoing ICCF WC Finals.
That they turn out the same shows that the effort is commensurate with the required effort.

Alpha Beta pruning *when optimal* can effectively reduce the game tree complexity to its square root. Checkers has an average branching factor of 10, chess has an average branching factor of 35. In order to achieve optimal pruning, the branching moves must be evaluated in the best possible order. This means that the best moves are to be considered first.

Slight problem: you have no way of ordering the average 35 moves perfectly (and, as you might imagine, ordering 35 in a more complex game correctly would be a very tall order compared to ordering 10 in a much simpler game), only via a vague approximation based on human and engine-derived valuations. So, I will maintain that it is not at all proven to be possible to achieve the 17 orders of magnitude downward leap you are positing.

Game tree - versus number of positions 'tree'.
Whether going from the back (endgame tablebases) with 'positions'
or going from the front (the opening position with 32 pieces) with 'games' as opposed to positions - either way the numbers get prohibitive kind of quickly.
Prohibitively large.
-------------------------
I don't know if it's been done in the forum yet - 
but with enough 'cheating approximation' - chess would not only be very quickly 'weakly' solved ... but there could be a new method to do that both weakly and weekly.
Maybe that's been done many times ...
Limit each side to just two optimal options each - at every point.
How many games would there be? How many possible positions?
tygxc would have the data perchance.
-----------------------------------------------
Then try three options each.
If tygxc could refrain from using 'nodes per second' and make his arguments with 'computer operations per second' instead ... then he might really be able to present something 'good'.
Like where is the cutoff where the number of options that actually produce further variations to be considered - becomes prohibitive?
Five each?
-----------------------------
Note that chess software has to 'prune' the variations it will consider.
How many moves can it 'brute force' ahead? (all legal options considered and their resulting variations)
And even there - evaluations will not be perfect.
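Putting rough numbers on the "two options each" idea is easy as long as we count lines rather than distinct positions (transpositions would merge some of them). A throwaway sketch in Python, pure arithmetic, using an 80-ply (40-move) game as the yardstick; nothing here claims the k "optimal" candidates could actually be identified:

```python
# Number of distinct lines when each side is limited to k candidate moves at
# every turn, searched to a fixed number of plies.  Pure arithmetic: distinct
# positions would be fewer because of transpositions.

def lines(k, plies):
    return k ** plies

for k in (2, 3, 5, 35):                  # 35 ~ average number of legal moves
    n = lines(k, 80)                     # 80 plies = a 40-move game
    print(f"{k:>2} options per ply: about 10^{len(str(n)) - 1} lines")
```

Two options each already gives about 10^24 lines over 40 moves; three options each lands on 3^80, about 10^38, the same figure tygxc quotes further down the thread.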

Avatar of playerafar

Note that chess software has to 'prune' the variations it will consider.
Game software. As opposed to 'project' software.
Stockfish or whatever.
It has to prune it. Anyway.
Programmers probably have to face that constantly.
'hey we want a strong engine but we're not trying to 'solve' chess. Not even 'weakly'.'

Avatar of MEGACHE3SE
playerafar wrote:

Note that chess software has to 'prune' the variations it will consider.
Game software. As opposed to 'project' software.
Stockfish or whatever.
It has to prune it. Anyway.
Programmers probably have to face that constantly.
'hey we want a strong engine but we're not trying to 'solve' chess. Not even 'weakly'.'

yeah tygxc's 10^17 argument assumes that it's already pre-pruned LOL. even though such a pruning would already be enough to weakly solve.

Avatar of Thee_Ghostess_Lola

howbout we try to soft solve (using a game derivative) by halving the board (quartering it ??). & then taking that quant into eulers # (for the decay rate toward outright checkmate)...u know for a little better number ? it beats the probabilistic avenue right ?

Avatar of playerafar
Thee_Ghostess_Lola wrote:

howbout we try to soft solve (using a game derivative) by halving the board (quartering it ??). & then taking that quant into eulers # (for the decay rate toward outright checkmate)...u know for a little better number ? it beats the probabilistic approach right ?

You could try chess on a 4 x 4 board but then the pawns start right up against each other.
6x6 would be better but in that case - what pieces are not there to start?
Each side would need to 'lose' two pieces.
--------------------------------------
Get rid of the a-rooks and the b-knights and the pawns in front of them.
Then try and solve.
The endgame tablebases are going to have about the same problems.
So will the 'game tree' from the opening position.
The numbers will still get too big too fast. In either case.

Avatar of playerafar
Luke-Jaywalker wrote:

he has actually put some serious thought into that one Lola.

totally unexpected

Many things would be unexpected to LJ.
Is LJ going to put 'thought' into anything but his trolling?
Would that be 'unexpected' when it happens?
Why can't LJ compete with O when it comes to trolling?
Because LJ just isn't as fragile and delicate and insecure as O is ...

Avatar of Anika

My god, so many comments

Avatar of tygxc

@12142

"chess has a branching factor from 31" ++ Some positions can have 31 legal choices, but average only 3 that do not transpose: 3^80 = 10^38.

"until you can perfectly evaluate a single position" ++ 1 e4 e5 2 Ba6? loses for white, without needing any game tree. The perfect evaluation of 1 a4 cannot be better than that of 1 e4, without needing any game tree.

"Whether AB or BA are actually identical in result is not something you can determine"
++ Of course move order matters, but when the same position results, it is the same position and needs to be considered only once.
Chess is full of transpositions: the branches come back together in the same node.

"you are not actually working with 10^44" ++ I work with 10^38: nobody promotes to a 3rd rook, bishop, or knight in a real game, let alone in an optimal game.

"you double, triple, and possibly quadruple count the elimination of positions"
++ No. The reduction from 10^44 to 10^38 is by eliminating underpromotions to pieces not previously captured, i.e. to a 3rd rook, bishop, or knight. The reduction from 10^38 to 10^34 (or 10^32 per Tromp) is to eliminate positions that make no sense, i.e. cannot result from optimal play by both sides. The reduction from 10^34 (or 10^32) to 10^17 is by assuming perfect Alpha-Beta search for the weak solution.

"Your argument tries to apply it's largest reduction twice" ++ No.

Avatar of ThePersonAboveYou

Saying we have a machine to solve chess would be a useless hypothetical. Also, how long can a chess game go until someone gets an advantage, and how long can the opponent prolong not getting checkmated? How many calculations would it need for that to be "solved"? Because chess games do have a max possible move count with the 50-move rule. I would like some insight, and hopefully I'm not asking useless questions.
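On the maximum-length question: the 50-move rule (assuming a draw is claimed as soon as it becomes available) does cap the game, because a pawn move or a capture must occur within every stretch of 50 moves and there is only a finite supply of both. A deliberately loose back-of-envelope ceiling in Python, not the exact maximum, which is somewhat lower once you account for pawns having to capture their way past each other:

```python
# Loose upper bound on game length under the 50-move rule, assuming the draw
# is claimed the moment the rule allows it.  Every stretch of 50 consecutive
# moves must contain a pawn move or a capture, and those eventually run out.

pawn_moves = 16 * 6        # each pawn can advance at most 6 ranks
captures   = 30            # at most 30 captures (both kings survive)

irreversible_events = pawn_moves + captures   # generous: pawn captures counted twice
max_full_moves = (irreversible_events + 1) * 50
print(max_full_moves)                         # 6350 -- a ceiling, not the tight bound
```

The point is only that the count is finite, which is what makes "solved in principle" meaningful at all.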

Avatar of MARattigan
Elroch wrote:
MARattigan wrote:
Elroch wrote:
MEGACHE3SE wrote:

"As Schaeffer wrote: 'Even if an error has crept into the calculations, it likely
does not change the final result.'

sorry, math proofs dont work like that. thats an appeal to authority fallacy

Mathematicians do sometimes make such comments about very difficult proofs but, as you say, this comment is not about what a proof is. It is about human fallibility and uncertainty about what is true if an error has compromised the work! Schaeffer would definitely agree that IF an error were found, some of the analysis would need to be redone for the conclusion to stand. He (and everyone else) would expect the same conclusion in the end.

It's a little different in the case of the solution of checkers, since it is more about the correctness of the code than the correctness of a proof designed for humans. I recall hearing that when the 4 color theorem was proved (it was actually achieved while I was at school), there was distrust from some graph theorists because it was not practical for a human to check the computer's working! Of course, what was really needed was for someone to check that the program was correct according to the mathematics - i.e. that it checked examples in a valid way and that it checked all the necessary examples. The execution of the program could be taken as reliable.

There is a printed proof available - I haven't read it. It's rather long but probably not impossible. I think it's about the same size as a fourth volume added to Whitehead & Russell's PM, and I suspect in not dissimilar style.

"The execution of the program could be taken as reliable."

Really?

Yes, really. When you do deterministic operations with a computer, you can be confident of the result. This makes it possible to spend several months running the Lucas-Lehmer algorithm to find a new prime, for example. Or to calculate 105 trillion digits of pi. This was probably a much bigger computation than solving checkers (and surely more pointless, beyond breaking the record!). To do it, you need an extremely high reliability of the elementary calculations.

They say they used 100 million gigabytes (c. 1e17 bytes) of data while the entire result would only require 4.36e13 bytes, assuming incompressibility. It may be that the algorithm uses a lot more storage to run efficiently in time, for reasons that would require knowledge of the algorithm.

Of course, when elementary calculations are not close enough to 100% reliable, you can always increase the reliability by doing error checking. The very simplest (and quite expensive) way is to do the calculation twice in parallel and repeat anything that does not agree. This roughly squares the error rate, a vast improvement.

I've spent probably a quarter of my working life debugging system problems. In that time I've come across two instances of machine bugs (one in microcode and one probably ditto) plus two (essentially the same) that were either the machine or right in the heart of error recovery routines in the operating system. (That's not including my own PCs going poo.) A lot more problems with bugs in language code, e.g. statistics code that runs out of precision and gives duff results without telling you. There have been well-publicised potential logical problems with computer chips. I've spent another quarter of my working life installing and applying fixes to mainframe operating systems and components; the fixes released must have run to hundreds of thousands.

I came to the conclusion that any program of a moderate size usually does three completely distinct things: firstly what the users think it does, secondly what the documentation says it does, and thirdly what it actually does, the exception being that the second is sometimes absent.

But if you're handling large amounts of data over a significant period, those problems pale into insignificance compared with human fumble problems: operators mounting the wrong tapes, failing to spot error messages, running things in the wrong order, etc.

So I would say there are two chances of Schaeffer's calculations having run perfectly, given the length of time it took, the number of systems involved and the volume of data processed.

Of course, running the same approach on separate systems with different developers and doing a full check of the results would greatly increase the confidence (I believe that was done with some of the tablebases).

The problem is you can't. Not only is most of Schaeffer's result missing because there was no room to save it, but even if it all ran without error and you could run exactly the same code without error again, you wouldn't get the same solution to check.
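On the verification point (running the computation twice, or on independent systems, and redoing anything that disagrees), the error-rate arithmetic Elroch mentions is easy to sketch. Everything below is hypothetical: p is whatever per-step silent-failure probability you care to assume, and the "step" is a stand-in, not anything from Schaeffer's code:

```python
import random

# Sketch of why duplicate-and-compare helps: if one run of a step silently
# fails with probability p, an undetected error needs two independent runs to
# fail *and* agree, which is roughly p*p at worst.  Hypothetical numbers only.

def unreliable_step(x, p, rng):
    """Pretend computation that silently returns garbage with probability p."""
    return x + 1 if rng.random() > p else rng.randrange(10**6)

def checked_step(x, p, rng, max_retries=10):
    """Run the step twice; accept a result only when both copies agree."""
    for _ in range(max_retries):
        a, b = unreliable_step(x, p, rng), unreliable_step(x, p, rng)
        if a == b:
            return a
    raise RuntimeError("persistent disagreement")

rng = random.Random(0)
p, trials = 1e-3, 100_000
bad_single  = sum(unreliable_step(0, p, rng) != 1 for _ in range(trials))
bad_checked = sum(checked_step(0, p, rng) != 1 for _ in range(trials))
print(f"single run errors:  {bad_single} / {trials}")   # on the order of p
print(f"checked run errors: {bad_checked} / {trials}")  # on the order of p*p
```

It does nothing, of course, for the "wrong tape mounted" class of problem described above, which is the harder half of the objection.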