Chess will never be solved, here's why

Avatar of tygxc

#266
No, not at all.
There are 2 different things: 1) assessing the feasibility of solving chess and 2) solving chess itself.

1) To assess the feasibility we must estimate the number of legal, sensible, and relevant positions so as to estimate the resources required, i.e. CPU time on cloud engines. So the estimate should exclude illegal positions, non-sensible positions like random samples from the Tromp count, and non-relevant positions which are legal and sensible, but which will become unreachable in the course of the calculation, e.g. all positions with a white pawn on e2 after 1 e4.
2) To actually solve chess the cloud engines calculate from a humanly prepared 26-man tabiya, e.g. C67, towards the 7-man tablebase.

Avatar of MARattigan
 tygxc wrote:

#251
Solving chess = proving that black has at least 1 move that draws against every sensible white move.

No it isn't. Solving chess is, among other things, determining if that is the case.

#254
A position with castling rights is different from a position without castling rights. A position with an en passant possibility is different from a position without an en passant possibility. It is counted differently as such in the 3.8125 legal chess positions. See also
https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation 

One assumes you meant to say 3.8521*10^37 or 3.8521*10^39 depending on whether you intend to solve it under basic or competition rules (on which point you seem to be ambivalent). But you really need to stop referring to those figures as representing legal positions. We've already established they don't.

The best estimate for legal positions would be a little under 10⁴⁶ under basic rules or a little under 10⁴⁸ if the 50 move rule is added to the basic rules. (Over a quarter of an American billion times larger in either case.)
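To put a number on that gap, a quick back-of-envelope check in Python, using only the figures quoted above (the 10⁴⁸ vs 3.8521×10³⁹ pair gives the same ratio):

```python
# Ratio of the ~1e46 legal-position estimate to the 3.8521e37 figure cited.
ratio = 1e46 / 3.8521e37
print(f"{ratio:.2e}")   # ~2.60e+08, i.e. over a quarter of a billion
```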

 

Avatar of playerafar
btickler wrote:

The premise that removing positions that "don't make sense" will reduce the problem by 7 orders of magnitude does not hold.

The very calculations needed to check those criteria on each position would themselves require a huge amount of processing power at that scale... or were you planning on having the human assistants check the criteria manually?

[Note that *even if you somehow predetermined every position that is not sensible* to avoid those calculations, just skipping the ones that are on the list (3.1622776e+44 if you accept the faulty 10^37 number) would again be a huge amount of time spent]

I essentially agree with all of this post.
'Don't make sense' is far too arbitrary and subjective.
But there are other methods of 'cutdown'.
And people actually use these cutdowns in their studies of chess ...
For example - King and rook against King.
It's simply a book win unless the rook falls immediately, or it's already stalemate, or the side with the rook blunders.
Once players have it 'down' they go on to other things.
The number of possible K+R versus K positions is in the hundreds of thousands - but it's essentially two for practical purposes. A win is possible or it isn't.
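For a concrete sense of what an equivalence 'cutdown' buys, here is a minimal Python sketch (mine, not from the thread; it assumes the standard legality conventions: kings never adjacent, and the side not on move never in check). It enumerates every K+R vs K placement, then collapses positions that are images of each other under the 8 symmetries of the board, which are value-preserving in pawnless positions:

```python
from itertools import permutations

def sq(f, r):                       # file 0-7, rank 0-7 -> square 0-63
    return r * 8 + f

def coords(s):
    return s % 8, s // 8

def kings_adjacent(a, b):
    (fa, ra), (fb, rb) = coords(a), coords(b)
    return max(abs(fa - fb), abs(ra - rb)) <= 1

def rook_attacks(rook, target, blocker):
    """True if the rook attacks `target`, with `blocker` able to interpose."""
    (fr, rr), (ft, rt) = coords(rook), coords(target)
    if rook == target or (fr != ft and rr != rt):
        return False
    if fr == ft:
        lo, hi = sorted((rr, rt))
        path = [sq(fr, r) for r in range(lo + 1, hi)]
    else:
        lo, hi = sorted((fr, ft))
        path = [sq(f, rr) for f in range(lo + 1, hi)]
    return blocker not in path

# The 8 symmetries of the board (valid here: no pawns, no castling).
FLIPS = [lambda f, r: (f, r),         lambda f, r: (7 - f, r),
         lambda f, r: (f, 7 - r),     lambda f, r: (7 - f, 7 - r),
         lambda f, r: (r, f),         lambda f, r: (7 - r, f),
         lambda f, r: (r, 7 - f),     lambda f, r: (7 - r, 7 - f)]

def canonical(wk, wr, bk, stm):
    """Smallest image of the position under the 8 board symmetries."""
    return min(tuple(sq(*t(*coords(s))) for s in (wk, wr, bk)) + (stm,)
               for t in FLIPS)

raw, classes = 0, set()
for wk, wr, bk in permutations(range(64), 3):
    if kings_adjacent(wk, bk):
        continue
    for stm in "wb":
        # With White to move, Black (not on move) may not stand in check.
        if stm == "w" and rook_attacks(wr, bk, blocker=wk):
            continue
        raw += 1
        classes.add(canonical(wk, wr, bk, stm))

print(raw, len(classes))   # a few hundred thousand raw, ~8x fewer classes
```

Whole-board symmetry alone buys roughly a factor of 8; the much bigger "essentially two outcomes" collapse described above needs actual game-theoretic knowledge, which is exactly what a tablebase stores.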

Avatar of DiogenesDue
tygxc wrote:

#266
No, not at all.
There are 2 different things: 1) assessing the feasibility of solving chess and 2) solving chess itself.

1) To assess the feasibility we must estimate the number of legal, sensible, and relevant positions so as to estimate the resources required, i.e. CPU time on cloud engines. So the estimate should exclude illegal positions, non-sensible positions like random samples from the Tromp count, and non-relevant positions which are legal and sensible, but which will become unreachable in the course of the calculation, e.g. all positions with a white pawn on e2 after 1 e4.
2) To actually solve chess the cloud engines calculate from a humanly prepared 26-man tabiya, e.g. C67, towards the 7-man tablebase.

Ahh yes, I forgot, you still think that current engines can determine perfect play and ergo make the evaluations by simply extending their horizons out to reach tablebase endings.

Sorry, you can't bootstrap yourself to perfect evaluations using imperfect engines, and the only way to ensure perfect engine play is to solve chess *first*.

Avatar of playerafar

Still too arbitrary about 'don't make sense'.

Much better:  cut down the positions where there's obvious equivalency.
I'm also suggesting that's part of how players learn and improve at chess.
Many positions are in fact 'solved'.  Or clear enough.

Avatar of Optimissed

A sensible position is one where an observable blunder hasn't been made. That will reduce the number of positions to be assessed by a factor of a high power of ten. An algorithm is necessary to do that. The design of such an algorithm isn't straightforward and first, it is necessary to depict chess mathematically. We are nowhere near that. Therefore, we do not have the resources to accomplish this "solving of chess" until maths is more integrated into software, which may be a while.

Avatar of playerafar

One thing for chess students to look out for is 'false equivalency'.
An example is with the b- and g-pawns.
The method of winning with the king on the sixth rank (in K+P versus K) is more constrained with those pawns than it is with the four more central pawns. It's 'easier' to mess up!

Avatar of playerafar

Regarding 'upper bounds' on the maximum number of setups of 32 or fewer chess pieces - that's well within reach. Even without computers. And has been done.
And any programmer worth his salt could probably use computers to push those upper bounds considerably lower.
At issue - how much?
Then - what about cutting down according to equivalencies ?
Could also be done.  But there's a massive number of them.
Can a computer isolate all equivalencies ?
I would say no. But still - I suspect that a practical upper bound on the number of essentially different chess positions -
could be found in the twenties-range of powers of ten.
Or has been found.
But that's still way too big for 'solving'. Daunting.
It's comparable to the Avogadro number in chemistry/physics ... gigantic.
Unmanageable.
Instead of maybe comparable to an upper bound on the number of atoms in the observable universe?
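As an illustration of how such a bound is reached 'even without computers' (though a computer makes the arithmetic painless), here is one classic back-of-envelope count, a sketch rather than anyone's definitive figure: the number of ways to place the full 32-man starting set on 64 squares, ignoring legality, captures and promotions.

```python
from math import factorial

def multinomial(total, counts):
    """total! / (c1! * c2! * ... * ck! * (total - sum(counts))!)"""
    out = factorial(total)
    for c in counts:
        out //= factorial(c)
    return out // factorial(total - sum(counts))

# Per side: 8 pawns, 2 rooks, 2 knights, 2 bishops, 1 queen, 1 king.
pieces = [8, 2, 2, 2, 1, 1] * 2
print(f"{multinomial(64, pieces):.3e}")   # ~4.6e+42 placements of all 32 men
```

Real upper-bound work must also account for positions with fewer men (captures) and for promoted pieces, which moves the figure around by a few powers of ten; the point is just that the counting itself is elementary.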

Avatar of Elroch
tygxc wrote:

#255
Do not push the agnosticism too far.
Yes 1 e4 e5 2 Ba6 might win for white, there is no proof of the contrary.
Yes 1 e4 f5 might win for black, there is no proof of the contrary.
However we know that is not so.

You've just contradicted yourself.

You are confusing "know" and "are very confident of".

I am very confident that if I toss a fair coin a thousand times, it will not come up heads every time. But I do not know that (if I did know that, I could also exclude each of the other possible sequences, one by one, and finally conclude no sequence is possible).

You don't have reason to be as confident about the status of 1. e4 f5 until there is a demonstrated forced result for white. If you disagree, pick a larger number of coin tosses until you agree.
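Elroch's coin-toss scale, in numbers (a trivial sketch; the exponents are exact up to rounding):

```python
import math

# P(all heads in n fair tosses) = 2**-n = 10**-(n * log10(2))
for n in (100, 1000, 10_000):
    exponent = n * math.log10(2)
    print(f"n = {n:>6}: P(all heads) = 10^-{exponent:.0f}")
```

At n = 1000 the probability is around 10^-301, vastly smaller than one over any count of chess positions discussed in this thread, which is the force of the "pick a larger number of coin tosses" challenge.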

 

Avatar of playerafar

1 e4 f5 is the From gambit I believe. It's considered unsound.
Whereas the King's Gambit is considered 'sound' by GMs, if I've got that right. But many GMs would prefer the black side of it?
I'm getting that impression too.
But that preference could depend in turn on who the opponent is ...

Avatar of Kowarenai

It might or it won't; either way, we don't know if we will even be alive by that point.

Avatar of tygxc

#270
"Solving chess is, among other things, determining if that is the case."
No: proving the Riemann Hypothesis, Goldbach Conjecture, Fermat's Last Theorem, Four Color Theorem... is/was not determining if that is the case: it is proving what has long been known.
Also checkers was known in practice to be a draw long before the proof.
We know that 1 e4 f5 wins for white, we know 1 e4 e5 2 Ba6 wins for black.
A full game tree until checkmate in all variations would prove it, but we know without proof.
Likewise we know chess is a draw based on massive evidence, but without a proof yet.
Solving chess = proving that black has at least 1 move that draws against all reasonable white moves.

Avatar of playerafar

It would seem logical to have a computer 'dismiss' positions arising from a crazy move.
Like 1 e4 e5 2 Ba6. Crazy.
But dismissing the From gambit ... that's different.
There are massive pitfalls in such an approach.
Far too many 'Morphy positions' where crazy-looking moves and the positions they create are good!

But even dismissing positions where somebody just hung their rook - pitfalls again.
1 e4 e5 2 Ba6. Leave it in. Yes - it qualifies as 'solved' - but leave it in anyway.
An idea. Not some kind of arrogance.
The number of possible setups is always going to be an Avogadro kind of number ... hey, I'll look it up. 10 to the 26th with a coefficient in front of it? Coming up.
Awww - the Avogadro number is 'only' about 6 x 10 to the 23rd ... Shucks!

Avatar of playerafar

Next idea:  comparing some processing speeds of computers ...
with typical upper bounds for the number of possible positions. 

Try 10 GHz. Call that 10 billion operations per second.
(A binary 'gig' is slightly more than a billion, but round to 10 billion.)
That's 10 to the tenth ops per second.

Say by some stretch we could get the number of possible chess setups down to 10 to a 'small' 27th power ...
then it would take our 10-billion-ops-per-second computer 10 to the 17th seconds to do one op on each position.
(Which isn't enough to analyze a position to any degree at all. It could take many millions of ops, if not billions or trillions, on each position ...)
But just one op per position would take 10 to the 17th seconds.
There are about 31 million seconds in a year.
That's about 3 x 10 to the 7th seconds.
So it would take our 10 GHz computer well over 10 to the 9th years to do just one op on each position.
Billions of years.
Does that link up with the forum topic?

"Chess will never be solved, here's why"

Avatar of tygxc

#283
No, no, present cloud engines reach 10^9 nodes per second.
I have calculated above how 3 cloud engines can solve chess in 5 years.

Avatar of playerafar

"No, no, present could engines reach 10^9 nodes per second."

I just finished posting to the effect that a common desktop might do 10 to the 10th ops per second.
Faster than your 10 to the ninth.
And it's still going to take a billion years. For one op per position.
More like many trillions of years.
1000 ops on any position would probably not be anything like enough to even categorize a position ...
A thousand operations to come up with something about a chess position?
Hardly.
This is beginning to look like 'case closed'?
Well - everyone decides for himself/herself, is how I like to look at it ...

By what factor can an array of supercomputers outdo a strong desktop computer?
That could be googled, I guess.

Avatar of tygxc

#285
10^9 nodes/second = 10^9 chess positions per second on a cloud engine

a common desktop is at 10^6 chess positions per second
I calculated it above: 5 years on 3 cloud engines.

Please see my calculation above.
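A quick sanity check on that budget, multiplying out the numbers exactly as stated (3 engines, 10^9 nodes per second, 5 years):

```python
SECONDS_PER_YEAR = 3.156e7

nodes = 3 * 1e9 * 5 * SECONDS_PER_YEAR
print(f"{nodes:.1e} nodes")   # ~4.7e+17 -- so the plan only works if
                              # pruning cuts the search to ~1e17 positions
```

Whether that pruning is legitimate is exactly what the rest of the thread disputes.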

Avatar of playerafar

"10^9 nodes/second = 10^9 chess positions per second on a cloud engine"

No.  You're not going to get anything on a position with a byte or bit per position.
If you want to define what you mean by 'nodes' there - you could say so here and now and define it for everyone reading.
Your choice.

Avatar of tygxc

#287
node = chess position
https://www.chessprogramming.org/Nodes_per_Second 
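For readers who want the linked definition made concrete: engines count one 'node' per position visited during search, and the standard way to measure raw node throughput is a perft count. A minimal sketch, assuming the python-chess library is installed (pip install python-chess):

```python
import chess

def perft(board: chess.Board, depth: int) -> int:
    """Count the leaf nodes of the legal-move tree to the given depth."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)
        total += perft(board, depth - 1)
        board.pop()
    return total

board = chess.Board()                 # standard starting position
print(perft(board, 4))                # 197281 positions at depth 4
```

Note that a node counted this way is just a visited position; it says nothing about how much evaluation work is done per node, which is the point of contention in the posts that follow.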

 

Avatar of playerafar


"Summit can deliver 200 petaFLOPS at peak. This is equivalent to 200 quadrillion floating-point operations per second."
I got that from here: 
https://www.rankred.com/fastest-supercomputers-in-the-world/

200 quadrillion is 2 x 10 to the 17th power.
So Summit is about 20 million times as fast as a good desktop computer.
But with one op per position - that's still going to take about 160 years to go through 10 to the 27th positions.
With a thousand ops per position (not enough) - that's going to take about 160,000 years.
With a million ops per position (not enough for so many positions) -
that's going to take about 160 million years ...

Even if you eat shark cartilage and watch Jimmy Fallon ...  
you're not going to make it ...
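The same back-of-envelope arithmetic with Summit's throughput, confirming the figures above:

```python
SECONDS_PER_YEAR = 3.156e7
SUMMIT_OPS = 2e17                       # 200 petaFLOPS at peak

for ops_per_position in (1, 1_000, 1_000_000):
    years = 10**27 * ops_per_position / SUMMIT_OPS / SECONDS_PER_YEAR
    print(f"{ops_per_position:>9,} ops/position: {years:.1e} years")
```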