Chess will never be solved, here's why

playerafar

And nobody seems to be interested in how the initial 'two Kings only' tablebase should be set up in order to accommodate the various features of chess.
Like pawns on 48 squares only - to provide for pawn direction - two types of bishops, each moving on 32 squares only -
(as opposed to the two colors of the bishops themselves) - at least 32 squares always empty -
all obvious illegals pre-excluded, like Kings adjacent - triple check always illegal - both Kings in check always illegal - setting up the tablebase to accommodate not only the en passant factor but castling rights too! - then the necessity to provide for perpetual check as a draw, but not for incidental repetitions, which lack the 'uniqueness' feature that the availability of perpetual check has (not that subtle) - and the non-necessity to provide for the 50 move rule -

playerafar

The number of ways to legally arrange two Kings on the chessboard will always be the same no matter how you group it into categories ...  and it'll always be a draw -
but if you're going to add pieces - then that total number needs to be 'grouped'.
So that there's grouping and progress - instead of groping with things like 'nodes'.  
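As a concrete check of that two-Kings count, here is a minimal brute-force sketch (nothing assumed beyond the rule that the Kings may not stand on the same or adjacent squares) that counts the legal ordered placements:

```python
def legal_king_pairs() -> int:
    """Count ordered placements of two kings on an 8x8 board where the
    kings are neither on the same square nor on adjacent squares."""
    squares = [(r, c) for r in range(8) for c in range(8)]
    count = 0
    for wr, wc in squares:
        for br, bc in squares:
            # Chebyshev distance >= 2 rules out same-square and adjacency
            if max(abs(wr - br), abs(wc - bc)) >= 2:
                count += 1
    return count
```

This yields 3612 ordered placements, the usual figure for the two-Kings-only case, whichever way one then chooses to group them.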

People study things like N+B versus lone King.  A classic endgame.
But it might never come up even once in a person's games over their whole life.
That endgame is an intellectual exercise.  But so is chess.
This business of how chess might be 'solved' someday is an intellectual exercise too.  

Serepheon

Will this forum ever be solved

btw #1901

Also, Page 96

playerafar
EheuMyKing wrote:

chess will never be solved

Not if global warming and nuclear proliferation and worldwide pollution continue on their present course ...
the human race just won't survive long enough.
But maybe chess has been solved on another planet somewhere, out in another Big Bang!
But it's probably not the same kind of chess ...
Maybe you can castle out of check ...

addy_x

Nice

MARattigan
Elroch wrote:

Competition rules chess is a less interesting game to a game theorist I feel. The 50 move rule is a very ugly practicality.

Yes and no.

The idea of an n-move rule adds some very interesting theoretical problems. Guy Haworth proposed DTR ("depth to rule") tablebases, which give the smallest n-move rule under which a position can still be won.

The pc=0 position below can't be won within the 50 move rule, but what is the smallest n-move rule that would be sufficient? Nobody knows - none of the current tablebases tell you. For that you need DTR tablebases.

[diagram: the pc = 0 position]

On the other hand, the practicality is that players at the current level stand no chance of playing the competition rules game perfectly. (Though they stand no chance of playing the basic rules game perfectly either, so they don't notice.)
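The n-move-rule bookkeeping under discussion is just the familiar halfmove clock generalised from n = 50; a minimal sketch of that reading of the rule:

```python
def update_clock(halfmove_clock: int, is_pawn_move: bool, is_capture: bool) -> int:
    """The clock resets to zero on any pawn move or capture,
    and increments by one halfmove otherwise."""
    return 0 if (is_pawn_move or is_capture) else halfmove_clock + 1

def is_n_move_draw(halfmove_clock: int, n: int = 50) -> bool:
    """Under an n-move rule, a draw is available once n full moves
    (2*n halfmoves) pass with no pawn move and no capture."""
    return halfmove_clock >= 2 * n
```

A DTR tablebase would then report, per position, the smallest n for which the win still goes through under this counter.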

Kavita_39

Great read

MARattigan
playerafar wrote:


...
Apparently the three tablebases Lomonosov and Szygy and Nalimov all actually did 'handle' en passant and factor it in. ...

A good point.

I posted some crap about that earlier in the thread in response to one of @Elroch's posts for which my apologies (brain malfunction on my part).

The positions with an e.p. opportunity are omitted from the tablebases merely to save space. These positions can be evaluated on a suck-it-and-see basis.
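That suck-it-and-see evaluation is just a one-ply search: every legal move from an e.p. position leads to a position with no e.p. right, which therefore is in the tablebase. A sketch, where `probe_after` is a hypothetical callback (not any real tablebase API) returning the score of the position after a move from the opponent's point of view:

```python
def eval_ep_position(legal_moves, probe_after):
    """One-ply evaluation of a position with an e.p. right: the side to
    move takes the best negated successor score, since every successor
    (e.p. rights gone) can be probed directly in the tablebase."""
    return max(-probe_after(move) for move in legal_moves)
```

Castling rights, by contrast, can persist through a move, which is why they cannot be handled this way and would need their own tables.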

This is different from castling because the e.p. opportunity immediately disappears whatever is played but the castling opportunity can persist. That means that a whole new set of tablebases is required for all the classifications that include a rook.

This would probably add about 12000 new tablebases to the 3000 or so existing tablebases for 3-7 men. So the builders knock off when they've got the tablebases complete without castling. For the purposes of an engine playing in TCEC this is approximately good enough, because it's not expecting to reach 7 men with any castling rights intact.

It does mean that the tablebases don't weakly solve any n man chess in either basic rules chess or competition rules chess. 

@tygxc would no doubt have to spend the first couple of minutes on his supercomputer building the missing tablebases.

 

DiogenesDue
tygxc wrote:

#1870
"Go ahead and dig up some conversion rate for boolean and/or integer operations if you like...I don't have any need of doing more homework."
If you really know something about computers, then you need no homework for that.
In terms of time to execute:

boolean operation < integer addition < integer multiplication < floating point multiplication < floating point division

Funny, I don't see a conversion rate there at all.  Just the most basic fundamentals any C++ student taking a class teaching them about what's under the covers would spew out.  So, yes, apparently you do need to do homework on that.  Homework you seem to be incapable of.

At its core a computer consists of boolean gates only. Integer addition is done by several boolean operations. Integer multiplication is done by several integer additions. floating point multiplication is done by several integer multiplications, floating point division is done by several floating point multiplications.

Congratulations.  Did you get 10/10 on the pop quiz?  I have designed logic circuits and worked on AND/OR/XOR gates directly, soldering the transistors by hand.  I'm willing to bet I understand boolean operations just a little bit better than you do.  FLOPS is still the measure used, because listing a processor's speed in boolean operations can be deceiving and does not tell the whole story.  Take a simple processor with one or a few registers: it can do simple sequential operations very quickly and at a superficial level might look faster, but it bogs down as soon as it has to juggle more than 2-3 numbers in a single procedure, and ends up moving values on and off the stack like crazy.

Typical problems like forecasting the weather with Navier-Stokes equations require many floating point operations. That is why FLOPS is the right metric for those common problems.

Solving chess requires not a single floating point operation.
It needs only boolean operations and a few operations on small integers.

Then I guess you really need to go dig up the specs on that cloud engine's backend server array and come up with a real conversion rate rather than a vague "boolean < floating point", don't you? 

If you cannot find one, might I suggest you dig up the exact instruction set used by the array's processors, write an assembly language benchmark that you can ensure yourself is as "boolean" as feasibly possible (you do know that you can't really process anything significant purely with bitwise operators, right?), buy some small sliver of time, and run it yourself.  Of course, that's if they even let you run assembly language and do not limit you to Javascript, Python, and Ruby on Rails or something.

Until you do, though...FLOPS is the standard and correct measure.

That is why nodes/s is the right metric to assess the feasibility of solving chess.

Nodes are a very poor measurement unit for this discussion, for the reasons I already pointed out.  Your Stockfish nodes being apples to oranges compared to tablebase entries foremost among them.

You also seem oblivious to the fact that traversing the positions themselves and storing both interim and permanent findings will take a large chunk of time relative to the boolean operations you seem fixated on.  

10^9 nodes/s equals 1 nanosecond per position to consider
It does not mean that 1 core considers 1 position in 1 nanosecond.
It means that N cores consider N positions in N nanoseconds.

The way you spout the obvious as if it is deep and revelatory really speaks to the level of knowledge you are working at.
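As an aside, the quoted claim that integer addition reduces to boolean operations is easy to make concrete; a minimal sketch (ripple-carry style, non-negative integers only):

```python
def add_bool(a: int, b: int) -> int:
    """Integer addition built from boolean operations only:
    XOR produces the carry-less sum, AND plus a shift produces the
    carries; repeat until no carries remain."""
    while b:
        carry = (a & b) << 1  # bit positions where both operands are 1
        a ^= b                # sum ignoring carries
        b = carry
    return a
```

Which, of course, says nothing about how long the whole pipeline takes per node once memory traffic is included, which is the actual point of contention here.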

 

DiogenesDue
Optimissed wrote:

Around 1979 to 81 I worked for a fluid handling engineering applications company in London and we used various types of logic circuitry. One was transistorised and early chips but the others were hydraulic and pneumatic logic control circuits, as well as mechanical. I was making some prototype devices for hydraulic circuitry. The pneumatics were bought in and we had a Cambridge graduate designing the electronics. A complete pain in the neck because they kept changing designs on me when something was nearly made. At other times, they asked me to design something, I would do it in an hour, they wouldn't believe it could be so simple and wasted days trying to make it operate more positively before reverting to my design and taking the credit. But the money was ok.

The circuits I designed were for military communications systems, but in a similar fashion, I was not a trained engineer.  I was a maintenance technician at the time, but when they wanted the system to do something different it was not designed to do, I went into the schematics, re-purposed some circuits, and had to find some unused spare circuits on a few boards that I could wire into the system (specifically some NAND gates in this case) to make it work the way they wanted it to.  It gave us a leg up on other bases using the same system.

dogs4evr

..................cool

TJ-Henry
Interesting thread!
playerafar
btickler wrote:
...

Another good post by @btickler.
If @tygxc is a computer student - then hopefully he is not just 'cramming' information without trying to understand it.
The concept that the FLOPS speed of the computer would be irrelevant?  That seems so Ridiculous to me!
And so is the argument 'we don't care' in relation to it.

We don't care about the pollen count in Hong Kong either -
and does anybody 'care' about the forum topic ?? 
but 'don't care' isn't an argument.
Saying the FLOPs speed 'doesn't matter' would be like ...
"Your car can only go 80 kmh"
and then whoever replying "We don't Care ! We're going by lampposts per second!"  

And there was his concession that 'nodes'/positions per second are not 'solves' - just 'considers' - whatever that's supposed to mean.
Was it @Elroch who liked the post about 'we don't care' about the FLOPs speed of the computer?  Maybe he 'liked it' as a joke?
tygxc's reticence about booleans per second - his pushing of 'reasonableness of moves' - and other things he's posting make me wonder if he believes his own posts.  Or how much?
(maybe he's wondering - 'hey I've got a lot of people actually Not Sure - ha ha ha')

And did he read post #22 that @MARattigan posted specially for him ??
(yes I know we don't Care!  No need to accuse me of accusing somebody else of Caring in these contexts)

tygxc

#1907
"You also seem oblivious to the fact that traversing the positions themselves and storing both interim and permanent findings will take a large chunk of time relative to the boolean operations you seem fixated on.  "
You are fixated on FLOPS, while solving chess needs not a single floating-point operation.
Well, Mr. maintenance technician, that is exactly the reason why nodes/s is the right metric and not boolean operation/second.
Question: How much time does it take to travel 120 km?
Me: On that road you can drive 60 km/h so 120 km / 60 km/h = 2 hours
You: No you must know how many revolutions per minute your engine does.

DiogenesDue
tygxc wrote:

#1907
"You also seem oblivious to the fact that traversing the positions themselves and storing both interim and permanent findings will take a large chunk of time relative to the boolean operations you seem fixated on.  "
You are fixated on FLOPS, while solving chess needs not a single floating-point operation.
Well, Mr. maintenance technician, that is exactly the reason why nodes/s is the right metric and not boolean operation/second.
Question: How much time does it take to travel 120 km?
Me: On that road you can drive 60 km/h so 120 km / 60 km/h = 2 hours
You: No you must know how many revolutions per minute your engine does.

Bad analogy.  Here's a better one:

Question:  how long does it take light to travel between our galaxy and Andromeda?

You:  10^17 furlongs.  

Me:  First, furlongs are a measure of distance, not time.  Second, light years would be the proper measure of distance if distance were needed.  Ergo, third, years would currently be the correct measurement unit of time for this question.  Now, did you want to know the time for light to travel from galactic center to galactic center, or the nearest points on the spiral arms?  And if the latter, would that be nearest points at the time of departure and subsequent arrival, or do you want an answer that takes rotation into account?  Because as it stands the question is imprecise and cannot be answered without assumptions or approximations...

As for impugning my job at 22, well...it's not a strong move, but I guess you feel you need to say something.  Hey, I sold raffle tickets in grade school for fundraisers...maybe you can make that even more disparaging..."Ok, raffle ticket boy..."

In the meantime, all you've effectively done is point out how I knew more about boolean operations at 22 in my first military assignment than you do today (at least as far as you've been able to demonstrate thus far).

tygxc

#1896 “keep debunking already-refuted and partially-conceded claims“

Refuted claims keep coming up. A persistent example:
#1837 “10^44.5 positions to be evaluated (Tromp's number, best estimate currently available) “

Refutation:

Tromp counted exactly
8726713169886222032347729969256422370854716254 possible chess positions.
https://github.com/tromp/ChessPositionRanking 

Then he randomly sampled 10,000 of these and found 538 of them to be legal.
Thus he arrived at his number of legal positions:
8726713169886222032347729969256422370854716254 * 538 / 10000 = 4.5 * 10^44
All of his 538 legal positions look similar to this:

[diagram omitted]

This position is legal: here is the proof game:

[proof game omitted]

This game has an accuracy near 0: almost all moves are blunders.
An ideal game with optimal moves would have an accuracy near 100%.
None of the 538 positions found legal can result from an ideal game with optimal moves.
Thus 4.5*10^44 is much too large an estimate to assess the feasibility of solving chess.
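For reference, the sampling arithmetic in the post can be reproduced directly, along with a rough binomial error bar on the sampled proportion (the exact product comes out near 4.7 × 10^44 before rounding):

```python
import math

# Tromp's exhaustively ranked count of "possible" positions (from his repository)
TOTAL = 8726713169886222032347729969256422370854716254
SAMPLE, LEGAL = 10_000, 538

p = LEGAL / SAMPLE                              # sampled legal fraction
estimate = TOTAL * LEGAL // SAMPLE              # point estimate of legal positions
rel_err = math.sqrt(p * (1 - p) / SAMPLE) / p   # ~4% relative standard error
```

Note that the sampling only estimates how many positions are legal; whether a legal position is reachable by sensible play is a separate question from whether it must be counted in a solution.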

 

DemonStalker_UA

Hello. Do you know people who want to help Ukrainians but don't know how to do it? I mean specific, targeted assistance.
I don't know English well, so I use a translator.

DiogenesDue
DemonStalker_UA wrote:

Hello. Do you know people who want to help Ukrainians but don't know how to do it? I mean specific, targeted assistance.
I don't know English well, so I use a translator.

You seem to know English well enough to mask your intentions...

DemonStalker_UA

what do you mean

 

MARattigan
tygxc wrote:

...

 

Ace refutation! Congratulations!

You could give even more examples. Here's another position from Tromp's 4.5 * 10^44.

[diagram omitted]

Here's the proof game ("the" of course, not "a")

[proof game omitted]

I think this game has an accuracy near 0 and almost all moves are blunders, so we can obviously forget about that position too. 

In fact we can probably forget about all Tromp's positions and just call it a draw.