Chess will never be solved, here's why

Thee_Ghostess_Lola

howbout we try to soft solve (using a game derivative) by halving the board (quartering it ??). & then taking that quant into eulers # (for the decay rate toward outright checkmate)...u know for a little better number ? it beats the probabilistic avenue right ?

playerafar
Thee_Ghostess_Lola wrote:

howbout we try to soft solve (using a game derivative) by halving the board (quartering it ??). & then taking that quant into eulers # (for the decay rate toward outright checkmate)...u know for a little better number ? it beats the probabilistic approach right ?

You could try chess on a 4 x 4 board but then the pawns start right up against each other.
6x6 would be better but in that case - what pieces are not there to start?
Each side would need to 'lose' two pieces.
--------------------------------------
Get rid of the a-rooks and the b-knights and the pawns in front of them.
Then try and solve.
The endgame tablebases are going to have about the same problems.
So will the 'game tree' from the opening position.
The numbers will still get too big too fast. In either case.

playerafar
Luke-Jaywalker wrote:

he has actually put some serious thought into that one Lola.

totally unexpected

Many things would be unexpected to LJ.
Is LJ going to put 'thought' into anything but his trolling?
Would that be 'unexpected' when it happens?
Why can't LJ compete with O when it comes to trolling?
Because LJ just isn't as fragile and delicate and insecure as O is ...

Anika

My god, so many comments

tygxc

@12142

"chess has a branching factor from 31" ++ Some positions can have 31 legal choices, but on average only 3 do not transpose: 3^80 = 10^38.

"until you can perfectly evaluate a single position" ++ 1 e4 e5 2 Ba6? loses for white, without needing any game tree. The perfect evaluation of 1 a4 cannot be better than that of 1 e4, without needing any game tree.

"Whether AB or BA are actually identical in result is not something you can determine"
++ Of course move order can matter, but when the same position results, it is the same position and needs to be considered only once.
Chess is full of transpositions: the branches come back together in the same node.

"you are not actually working with 10^44" ++ I work with 10^38: nobody promotes to a 3rd rook, bishop, or knight in a real game, let alone in an optimal game.

"you double, triple, and possibly quadruple count the elimination of positions"
++ No. The reduction from 10^44 to 10^38 is by eliminating underpromotions to pieces not previously captured, i.e. to a 3rd rook, bishop, or knight. The reduction from 10^38 to 10^34 (or 10^32 per Tromp) is to eliminate positions that make no sense, i.e. cannot result from optimal play by both sides. The reduction from 10^34 (or 10^32) to 10^17 is by assuming perfect Alpha-Beta search for the weak solution.

"Your argument tries to apply its largest reduction twice" ++ No.
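The arithmetic in this chain of reductions is at least easy to sanity-check, separately from whether the assumptions hold. A quick sketch in Python (the inputs, 3 non-transposing moves over 80 plies and the 10^44/10^38/10^34 counts, are the post's claims, not established facts):

```python
import math

# The post's claim: ~3 non-transposing choices per ply over ~80 plies.
branching = 3 ** 80
assert 10 ** 38 < branching < 10 ** 39     # so 3^80 is on the order of 10^38
print(f"3^80 ~ 10^{math.log10(branching):.1f}")

# The claimed alpha-beta reduction: square root of the "sensible" count.
sensible = 10 ** 34
assert math.isqrt(sensible) == 10 ** 17    # sqrt(10^34) = 10^17 exactly
```

Note that the square-root figure is the best case of alpha-beta search and presumes perfect move ordering; the arithmetic checking out says nothing about that assumption.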

ThePersonAboveYou

Saying we have a machine to solve chess would be a useless hypothetical. Also, how long can a chess game go until someone gets an advantage, and how long can the opponent prolong not getting checkmated? How many calculations would it need for that to be "solved"? Chess games do have a max possible move count with the 50 move rule. I'd like some insight, and hopefully I'm not asking useless questions.

MARattigan
Elroch wrote:
MARattigan wrote:
Elroch wrote:
MEGACHE3SE wrote:

"As Schaeffer wrote: 'Even if an error has crept into the calculations, it likely
does not change the final result.'

sorry, math proofs don't work like that. that's an appeal to authority fallacy

Mathematicians do sometimes make such comments about very difficult proofs but, as you say, this comment is not about what a proof is. It is about human fallibility, and uncertainty about what is true if an error has compromised the work! Schaeffer would certainly agree that IF an error were found, some of the analysis would need to be redone for the conclusion to stand. He (and everyone else) would expect the same conclusion in the end.

It's a little different in the case of the solution of checkers, since it is more about the correctness of the code than the correctness of a proof designed for humans. I recall hearing that when the 4 color theorem was proved (it was actually achieved while I was at school), there was distrust from some graph theorists because it was not practical for a human to check the computer's working! Of course, what was really needed was for someone to check that the program was correct according to the mathematics - i.e. that it checked examples in a valid way and that it checked all the necessary examples. The execution of the program could be taken as reliable.

There is a printed proof available - I haven't read it. It's rather long but probably not impossible. I think it is about the size of a fourth volume added to Whitehead & Russell's PM, and I suspect in not dissimilar style.

"The execution of the program could be taken as reliable."

Really?

Yes, really. When you do deterministic operations with a computer, you can be confident of the result. This makes it possible to spend several months running the Lucas-Lehmer algorithm to find a new prime, for example. Or to calculate 105 trillion digits of pi. This was probably a much bigger computation than solving checkers (and surely more pointless, beyond breaking the record!). To do it, you need extremely high reliability of the elementary calculations.
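The Lucas-Lehmer test mentioned here is a good example of a fully deterministic computation: the same input always yields the same verdict, which is what makes months-long runs meaningful. A minimal sketch (real Mersenne-prime searches use FFT-based multiplication, not this naive loop):

```python
# Lucas-Lehmer test: M_p = 2^p - 1 is prime iff s_(p-2) == 0, where
# s_0 = 4 and s_(k+1) = s_k^2 - 2 (mod M_p). Valid for odd prime p.
def lucas_lehmer(p: int) -> bool:
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2^13 - 1 = 8191 is prime; 2^11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(13), lucas_lehmer(11))  # True False
```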

They say they used 100 million gigabytes (c. 1e17 bytes) of data while the entire result would only require 4.36e13 bytes, assuming incompressibility. It may be that the algorithm uses a lot more storage to run efficiently in time, for reasons that would require knowledge of the algorithm.

Of course, when elementary calculations are not close enough to 100% reliable, you can always increase the reliability by doing error checking. The very simplest (and quite expensive) way is to do the calculation twice in parallel and repeat anything that does not agree. This roughly squares the error rate, a vast improvement.
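The "roughly squares the error rate" claim follows from independence, and a toy simulation shows it directly. This assumes the two runs fail independently and that any disagreement is caught, which is the idealized model, not a guarantee about real hardware:

```python
import random

random.seed(0)
eps = 0.01            # exaggerated per-step error rate, for a visible effect
trials = 100_000

# One run errs with probability eps.
single = sum(random.random() < eps for _ in range(trials))
# With two independent runs compared, an error slips through only when
# both err (modeled here as both erring, i.e. probability eps^2).
paired = sum(random.random() < eps and random.random() < eps
             for _ in range(trials))

print(single / trials, paired / trials)   # ~0.01 vs ~0.0001
```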

I've spent probably a quarter of my working life debugging system problems. In that time I've come across two instances of machine bugs (one in microcode and one probably ditto), plus two (essentially the same) that were either the machine or right in the heart of error recovery routines in the operating system. (That's not including my own PCs going poo.) A lot more problems with bugs in language code, e.g. statistics code that runs out of precision and gives duff results without telling you. There have been well-publicised potential logical problems with computer chips. I've spent another quarter of my working life installing and applying fixes to mainframe operating systems and components; the fixes released must have run to hundreds of thousands.

I came to the conclusion that any program of a moderate size usually does three completely distinct things; firstly what the users think it does, secondly what the documentation says it does and thirdly what it does, the exceptions being that the second is sometimes absent.

But those problems pale into insignificance compared with human fumble problems if you're handling large amounts of data over a significant period. Operators mounting the wrong tapes, failing to spot error messages, running things in the wrong order, etc.

So I would say there are two chances of Schaeffer's calculations having run perfectly, given the length of time it took, the number of systems involved and the volume of data processed.

Of course, if you ran the same approach on separate systems with different developers and did a full check of the results, it would greatly increase confidence (I believe that was done with some of the tablebases).

The problem is you can't. Not only is most of Schaeffer's result missing because there was no room to save it, but even if it all ran without error, and you could run exactly the same code without error again, you wouldn't get the same solution to check.

tygxc

@12172

"how long can a chess game go until someone gets an advantage and how long can the opponent prolong not getting checkmated?"
++ The ongoing ICCF World Championship Finals now has 109 draws out of 109 games, and they end in draws after 39 moves on average. https://www.iccf.com/event?id=100104

"How many calculations would it need for that to be "solved"?"
++ Weakly solving chess needs to consider 10^17 positions = Sqrt (10^37*10 / 10,000).
The 17 ICCF World Championship finalists looked at 1.9*10^17 positions = 90*10^6 positions/s/server * 2 servers/finalist * 17 finalists * 3600 s/h * 24 h/d * 365.25 d/a * 2 a
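For what it's worth, the arithmetic in those two figures can be checked mechanically (the inputs, 90 million positions/s, 2 servers per finalist, and so on, are the post's claims, not established facts):

```python
import math

# sqrt(10^37 * 10 / 10,000) = sqrt(10^34) = 10^17, the claimed node budget.
assert math.isqrt(10 ** 37 * 10 // 10 ** 4) == 10 ** 17

# 90e6 nodes/s * 2 servers * 17 finalists * seconds in 2 years.
looked_at = 90e6 * 2 * 17 * 3600 * 24 * 365.25 * 2
print(f"{looked_at:.2e}")   # about 1.9e17, matching the figure above
```

Whether those inputs mean anything for solving chess is, of course, the point under dispute in this thread.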

"chess games do have a max possible move count with the 50 move rule"
++ The 50-move rule plays no role at all.
Games end in draws after 39 moves on average, long before the 50-move rule can trigger.

DiogenesDue
tygxc wrote:

"chess has a branching factor from 31" ++ Some positions can have 31 legal choices, but on average only 3 do not transpose: 3^80 = 10^38

Not "some": the average middlegame position. Some have as high as 40. Your '3' number is one I have seen from only one source: you... and you are distinctly unreliable with numbers.

"until you can perfectly evaluate a single position" ++ 1 e4 e5 2 Ba6? loses for white, without needing any game tree. The perfect evaluation of 1 a4 cannot be better than that of 1 e4, without needing any game tree.

This logic has never worked, sorry. You might as well extend your mistaken premise and say that 1 e3 is not winning because 1 e4 is superior, or 2 Be2 is not winning because 2 Bc4 and Bb5 are better.

"Whether AB or BA are actually identical in result is not something you can determine"
++ Of course move order can matter, but when the same position results, it is the same position and needs to be considered only once.
Chess is full of transpositions: the branches come back together in the same node.

"you are not actually working with 10^44" ++ I work with 10^38: nobody promotes to a 3rd rook, bishop, or knight in a real game, let alone in an optimal game.

"you double, triple, and possibly quadruple count the elimination of positions"
++ No. The reduction from 10^44 to 10^38 is by eliminating underpromotions to pieces not previously captured, i.e. to a 3rd rook, bishop, or knight. The reduction from 10^38 to 10^34 (or 10^32 per Tromp) is to eliminate positions that make no sense, i.e. cannot result from optimal play by both sides. The reduction from 10^34 (or 10^32) to 10^17 is by assuming perfect Alpha-Beta search for the weak solution.

Once again... you cannot achieve perfect Alpha-Beta search. Your Tromp number I will not believe coming from you; you have misrepresented him before.

"Your argument tries to apply its largest reduction twice" ++ No.
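For readers who haven't met the term: alpha-beta search prunes branches that provably cannot change the minimax value, and only in the best case, with perfect move ordering, does it visit roughly the square root of the nodes a full search would. A toy sketch of the pruning (the nested-list tree here is a hypothetical stand-in for real game positions):

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning over a toy tree:
    ints are leaf values, lists are internal nodes."""
    if isinstance(node, int):
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        val = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:       # remaining siblings cannot matter: prune
            break
    return best

# Maximizer to move; the leaf 2 in the last subtree is never examined.
print(alphabeta([[3, 5], [6, 9], [1, 2]]))  # 6
```

The catch the posts above are arguing over: the square-root saving presupposes the best move is searched first at every node, which would require already knowing what the search is supposed to discover.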

MEGACHE3SE

tygxc, why aren't you addressing the fact that we've proven you wrong again and again?

do you get some sort of sick kick from spreading misinformation?

MEGACHE3SE

"The reduction from 10^34 (or 10^32) to 10^17 is by assuming perfect Alpha-Beta search for the weak solution"

but you don't list the computational effort required for that lmfao

MEGACHE3SE

I used to be confused as to how somebody like tygxc could be so confidently incorrect for so long without being strictly anti-science, but that's when I saw the Terrence Howard podcast.

tygxc is just Terrence Howard. He cites a ton of fancy terms, but he doesn't understand any of the principles, nor does he understand basic logic.

Elroch
MEGACHE3SE wrote:

"The reduction from 10^34 (or 10^32) to 10^17 is by assuming perfect Alpha-Beta search for the weak solution"

but you don't list the computational effort required for that lmfao

It's junk. It relies on unambiguously failing to do what is necessary, while guessing that the failure doesn't matter.

Elroch
MEGACHE3SE wrote:

I used to be confused as to how somebody like tygxc could be so confidently incorrect for so long without being strictly anti-science, but that's when I saw the Terrence Howard podcast.

tygxc is just Terrence Howard. He cites a ton of fancy terms, but he doesn't understand any of the principles, nor does he understand basic logic.

I had never heard of Terrence Howard, but now I see he is one of the people who have realised that for every truth there is an alternative truth for which you can find a market. Admittedly that market consists of idiots, but why should that matter?

playerafar
MEGACHE3SE wrote:

tygxc, why aren't you addressing the fact that we've proven you wrong again and again?

do you get some sort of sick kick from spreading misinformation?

tygxc's 'strongest' position so far ... as in 'least weak' is something that he just mentioned but has been saying for years.
Like this:
"1 e4 e5 2 Ba6? loses for white, without needing any game tree."
Now I would say that's reasonable.
White simply 'drops his bishop' for compensation that is far too little.
(black then has doubled isolated a-pawns)
So I would say that the computer not bothering to analyze further from there is reasonable 'approximate solving'. A term unfortunately coined for this is 'weakly solving'.
After bxa6 there ... black is not 'weak'. He's strong. A bishop up.
--------------------------
but tygxc is using the logic of that one argument to build illogic.
For the computer to keep dismissing 'further game tree' on any line because it's materially up isn't legitimate, because 'compensation' varies and it isn't easy to define what counts as 'enough compensation' or 'more than enough compensation' ...
but tygxc would then likely use 'computer evaluation' ... like having the computer 'dismiss' a line if its evaluation shows more than a two-point advantage for one side.
But he doesn't get that that's a circular argument: it fails to take into account that the computer is fallible, hasn't 'solved' chess in the first place, and can't do so.
He's arguing that the computer could validate itself and is right because it says so.
Does tygxc believe in his own false argument or is he being deceitful?
Has tygxc fallen into a pitfall because in tactics puzzles - the puzzle doesn't continue beyond the solution moves? Has tygxc 'fallen' because players Resign in lost positions?
-------------------------
Experience and observation over decades tells me that such things aren't A or B.
Whoever - if invested enough - stops caring much whether they think they believe their own nonsense or not.
So they're not even aware either way.
Is there another specific instance of this in tygxc's spiels?
Yes.
It becomes more apparent when he asserts 'We don't Care about the number of operations per second that supercomputers can do - we only Care about Nodes per second!'
That is basically revealing about tygxc's internal illogic.
He doesn't care because he doesn't know.
He doesn't know because he doesn't want to know.
---------------------------------
You'll find this in many crass denial of logic or denial of science or denial of reality situations around the world.
Do flat-earthers know they're pushing nonsense or they think they aren't?
They stopped caring long before - as to which - and they stopped being internally aware of same. And don't want to be.
Result - nobody can talk to them - about them.
And the result is projection. Which always involves falsehood by the person so doing.
'No! Its You! And MSM! Main Stream Media. The world is Flat!'

playerafar

When somebody lies - but they internally choose to believe their own lie and then maintain in that way - are they still then lying?
The short answer is yes.
Is anybody exempt from this?
If you tell yourself that something is true that isn't - are you lying?
Some would say 'if whoever it was had been given false data to work with, then that could be an instance of honestly telling a falsehood'.
That's the medium answer.
---------------------------------
Could a computer be assigned to simply analyze all lines of chess and stop analyzing any lines where the advantage for one side has reached +5 ?
Could chess be 'approximately solved' that way?
Or to have a chance - would the 'stopping advantage' have to be as little as 0.5 ?
That would be ridiculous.
Stockfish could 'solve' chess right now if the stopping advantage only had to be 0.1.
It would 'solve' chess in seconds. Already has.

BigChessplayer665

The issue is tygxc's claim makes it seem that chess is likely a draw, but it doesn't prove anything, unfortunately for him.

MEGACHE3SE
Elroch wrote:
MEGACHE3SE wrote:

I used to be confused as to how somebody like tygxc could be so confidently incorrect for so long without being strictly anti-science, but that's when I saw the Terrence Howard podcast.

tygxc is just Terrence Howard. He cites a ton of fancy terms, but he doesn't understand any of the principles, nor does he understand basic logic.

I had never heard of Terrence Howard, but now I see he is one of the people who have realised that for every truth there is an alternative truth for which you can find a market. Admittedly that market consists of idiots, but why should that matter?

I just noticed that tygxc's style of commentary is very reminiscent of Terrence Howard's.

Elroch

Right. I have not delved far enough to learn how he presents his absurd nonsense.

BigChessplayer665
Elroch wrote:

Right. I have not delved far enough to learn how he presents his absurd nonsense.

He's forgetting that a high probability doesn't mean something is true; he's assuming it does.