Chess will never be solved, here's why

Avatar of Kavita_39

Great read

Avatar of MARattigan
playerafar wrote:


...
Apparently the three tablebases - Lomonosov, Syzygy and Nalimov - all actually did 'handle' en passant and factor it in. ...

A good point.

I posted some crap about that earlier in the thread in response to one of @Elroch's posts, for which my apologies (brain malfunction on my part).

The positions with an e.p. opportunity are omitted from the tablebases merely to save space. These positions can be evaluated on a suck-it-and-see basis.
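For concreteness, here is a minimal sketch of that suck-it-and-see evaluation in Python, using the python-chess library for move generation. probe_wdl_no_ep() is a hypothetical stand-in for a real lookup of a position without e.p. rights (python-chess's chess.syzygy probing would play that role in practice); the recursion works because an e.p. opportunity is gone after one ply:

```python
import chess

def probe_wdl_no_ep(board: chess.Board) -> int:
    """HYPOTHETICAL stand-in: tablebase lookup of a position with no e.p.
    rights. Returns +1 win / 0 draw / -1 loss for the side to move."""
    raise NotImplementedError

def probe_wdl(board: chess.Board) -> int:
    """Evaluate a position even if it carries an e.p. opportunity.
    Basic rules chess: the 50-move rule is ignored here."""
    if board.ep_square is None:
        return probe_wdl_no_ep(board)            # stored in the tablebase as-is
    if not any(board.legal_moves):
        return -1 if board.is_check() else 0     # checkmate or stalemate
    # Suck it and see: try every legal move. The e.p. right disappears after
    # one ply, so each child is probe-able (or at worst one double push deeper).
    best = -1
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -probe_wdl(board))      # negamax over win/draw/loss
        board.pop()
    return best
```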

This is different from castling, because the e.p. opportunity immediately disappears whatever is played, but a castling opportunity can persist. That means a whole new set of tablebases would be required for all the classifications that include a rook.

This would probably add about 12000 new tablebases to the 3000 or so existing tablebases for 3-7 men. So the builders knock off when they've got the tablebases complete without castling. For the purposes of an engine playing in TCEC this is approximately good enough, because it's not expecting to reach 7 men with any castling rights intact.

It does mean that the tablebases don't weakly solve any n-man chess, in either basic rules chess or competition rules chess.

@tygxc would no doubt have to spend the first couple of minutes on his supercomputer building the missing tablebases.

 

Avatar of DiogenesDue
tygxc wrote:

#1870
"Go ahead and dig up some conversion rate for boolean and/or integer operations if you like...I don't have any need of doing more homework."
If you really know something about computers, then you need no homework for that.
In terms of time to execute:

boolean operation < integer addition < integer multiplication < floating point multiplication < floating point division

Funny, I don't see a conversion rate there at all.  Just the most basic fundamentals that any C++ student taking a class on what's under the covers would spew out.  So, yes, apparently you do need to do homework on that.  Homework you seem to be incapable of.

At its core a computer consists of boolean gates only. Integer addition is done by several boolean operations. Integer multiplication is done by several integer additions. Floating point multiplication is done by several integer multiplications, and floating point division is done by several floating point multiplications.
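That reduction is easy enough to demonstrate. A throwaway sketch (an 8-bit ripple-carry adder, the textbook construction being alluded to, not anyone's actual solver code) that adds two integers using nothing but boolean operations on single bits:

```python
def add_via_boolean(a: int, b: int, bits: int = 8) -> int:
    """Add two unsigned integers using only AND, OR and XOR on single bits."""
    result, carry = 0, 0
    for i in range(bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry                            # sum bit: two XOR gates
        carry = (x & y) | (x & carry) | (y & carry)  # carry-out: majority gate
        result |= s << i
    return result

assert add_via_boolean(57, 68) == 125
```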

Congratulations.  Did you get 10/10 on the pop quiz?  I have designed logic circuits and worked on AND/OR/XOR gates directly, soldering the transistors by hand.  I'm willing to bet I understand boolean operations just a little bit better than you do.  FLOPS is still the measure used, because listing a processor's speed in boolean operations can be deceiving and does not tell the whole story.  Take a simple processor with one or a few registers: it can do sequential simple operations very quickly, and at a superficial level might look faster, but it bogs down as soon as it has to juggle more than 2-3 numbers in a single procedure and ends up moving values on and off the stack like crazy.

Typical problems like forecasting the weather with the Navier-Stokes equations require many floating point operations. That is why FLOPS is the right metric for those common problems.

Solving chess requires not a single floating point operation.
It needs only boolean operations and a few operations on small integers.

Then I guess you really need to go dig up the specs on that cloud engine's backend server array and come up with a real conversion rate rather than a vague "boolean < floating point", don't you? 

If you cannot find one, might I suggest you dig up the exact instruction set used by the array's processors, write an assembly language benchmark that you can ensure yourself is as "boolean" as feasibly possible (you do know that you can't really process anything significant purely with bitwise operators, right?), buy some small sliver of time, and run it yourself.  Of course, that's if they even let you run assembly language and do not limit you to Javascript, Python, and Ruby on Rails or something.

Until you do, though...FLOPS is the standard and correct measure.
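For what a sketch of such a benchmark even looks like (in Python rather than assembly, so CPython's interpreter overhead swamps the actual gate delays; this shows the shape of the measurement, not a usable conversion rate):

```python
import timeit

setup = "a, b, x, y = 12345, 67890, 1.2345, 6.789"
ops = [
    ("boolean AND",      "a & b"),   # bitwise, the closest pure-boolean op here
    ("integer add",      "a + b"),
    ("integer multiply", "a * b"),
    ("float multiply",   "x * y"),
    ("float divide",     "x / y"),
]
for name, stmt in ops:
    secs = timeit.timeit(stmt, setup=setup, number=10_000_000)
    print(f"{name:18s} {secs:.3f} s per 10^7 ops")
```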

That is why nodes/s is the right metric to assess the feasibility of solving chess.

Nodes are a very poor measurement unit for this discussion, for the reasons I already pointed out, your Stockfish nodes being apples to oranges compared to tablebase entries foremost among them.

You also seem oblivious to the fact that traversing the positions themselves and storing both interim and permanent findings will take a large chunk of time relative to the boolean operations you seem fixated on.  

10^9 nodes/s equals 1 nanosecond per position to consider
It does not mean that 1 core considers 1 position in 1 nanosecond.
It means that N cores consider N positions in N nanoseconds.

The way you spout the obvious as if it is deep and revelatory really speaks to the level of knowledge you are working at.
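Spelling out the arithmetic in that quote: 10^9 nodes/s is an aggregate throughput across cores, not a 1 ns per-position latency. A toy illustration, with a made-up core count and per-core speed:

```python
cores = 1_000             # made up for illustration
latency_ns = 1_000        # made up: each core takes 1 microsecond per position
throughput = cores / latency_ns * 1e9    # aggregate positions per second
print(f"{throughput:.0e} positions/s")   # 1e+09 -- same headline number,
                                         # 1000x slower per position than "1 ns"
```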

 

Avatar of Optimissed
btickler wrote:
[quotes #1907, btickler's reply to tygxc, in full - see above]

Around 1979 to '81 I worked for a fluid handling engineering applications company in London, and we used various types of logic circuitry. One type was transistorised, with early chips, but the others were hydraulic and pneumatic logic control circuits, as well as mechanical ones. I was making some prototype devices for hydraulic circuitry. The pneumatics were bought in, and we had a Cambridge graduate designing the electronics. A complete pain in the neck, because they kept changing designs on me when something was nearly made. At other times, they asked me to design something; I would do it in an hour, they wouldn't believe it could be so simple, and they wasted days trying to make it operate more positively before reverting to my design and taking the credit. But the money was ok.

Avatar of DiogenesDue
Optimissed wrote:

Around 1979 to '81 I worked for a fluid handling engineering applications company in London, and we used various types of logic circuitry. One type was transistorised, with early chips, but the others were hydraulic and pneumatic logic control circuits, as well as mechanical ones. I was making some prototype devices for hydraulic circuitry. The pneumatics were bought in, and we had a Cambridge graduate designing the electronics. A complete pain in the neck, because they kept changing designs on me when something was nearly made. At other times, they asked me to design something; I would do it in an hour, they wouldn't believe it could be so simple, and they wasted days trying to make it operate more positively before reverting to my design and taking the credit. But the money was ok.

The circuits I designed were for military communications systems, but in a similar fashion, I was not a trained engineer.  I was a maintenance technician at the time, but when they wanted the system to do something it was not designed to do, I went into the schematics, re-purposed some circuits, and had to find some unused spare circuits on a few boards that I could wire into the system (specifically some NAND gates in this case) to make it work the way they wanted it to.  It gave us a leg up on other bases using the same system.

Avatar of dogs4evr

..................cool

Avatar of TJ-Henry
Interesting thread!
Avatar of playerafar
btickler wrote:
[quotes #1907, btickler's reply to tygxc, in full - see above]

Another good post by @btickler.
If @tygxc is a computer student, then hopefully he is not just 'cramming' information without trying to understand it.
The concept that the FLOPs speed of the computer would be irrelevant?? That seems so Ridiculous to me!
And so is the argument 'we don't care' in relation to it.

We don't care about the pollen count in Hong Kong either -
and does anybody 'care' about the forum topic?? -
but 'don't care' isn't an argument.
Saying the FLOPs speed 'doesn't matter' would be like ...
"Your car can only go 80 km/h"
and then whoever it is replying "We don't Care! We're going by lampposts per second!"

And there was his concession that 'nodes'/positions per second are not 'solves' - just 'considers', whatever that's supposed to mean.
Was it @Elroch who liked the post about 'we don't care' about the FLOPs speed of the computer?? Maybe he 'liked it' as a Joke??
tygxc's reticence about booleans per second - his pushing of 'reasonableness of moves' - and other things he's posting make me wonder if he believes his own posts. Or how much?
(maybe he's wondering - 'hey, I've got a lot of people actually Not Sure - ha ha ha')

And did he read post #22 that @MARattigan posted specially for him??
(yes I know we don't Care! No need to accuse me of accusing somebody else of Caring in these contexts)

Avatar of tygxc

#1907
"You also seem oblivious to the fact that traversing the positions themselves and storing both interim and permanent findings will take a large chunk of time relative to the boolean operations you seem fixated on.  "
You are fixated on FLOPs, while solving chess needs not a single floating point operation.
Well, Mr. maintenance technician, that is exactly the reason why nodes/s is the right metric and not boolean operations/second.
Question: How much time does it take to travel 120 km?
Me: On that road you can drive 60 km/h so 120 km / 60 km/h = 2 hours
You: No you must know how many revolutions per minute your engine does.

Avatar of DiogenesDue
tygxc wrote:

#1907
"You also seem oblivious to the fact that traversing the positions themselves and storing both interim and permanent findings will take a large chunk of time relative to the boolean operations you seem fixated on.  "
You are fixated on FLOPs, while solving chess needs not a single floating point operation.
Well, Mr. maintenance technician, that is exactly the reason why nodes/s is the right metric and not boolean operations/second.
Question: How much time does it take to travel 120 km?
Me: On that road you can drive 60 km/h so 120 km / 60 km/h = 2 hours
You: No you must know how many revolutions per minute your engine does.

Bad analogy.  Here's a better one:

Question:  how long does it take light to travel between our galaxy and Andromeda?

You:  10^17 furlongs.  

Me:  First, furlongs are a measure of distance, not time.  Second, light years would be the proper measure of distance if distance were needed.  Ergo, third, years would currently be the correct measurement unit of time for this question.  Now, did you want to know the time for light to travel from galactic center to galactic center, or the nearest points on the spiral arms?  And if the latter, would that be nearest points at the time of departure and subsequent arrival, or do you want an answer that takes rotation into account?  Because as it stands the question is imprecise and cannot be answered without assumptions or approximations...

As for impugning my job at 22, well...it's not a strong move, but I guess you feel you need to say something.  Hey, I sold raffle tickets in grade school for fundraisers...maybe you can make that even more disparaging..."Ok, raffle ticket boy..."

In the meantime, all you've effectively done is point out how I knew more about boolean operations at 22 in my first military assignment than you do today (at least as far as you've been able to demonstrate thus far).

Avatar of tygxc

#1896 “keep debunking already-refuted and partially-conceded claims”

Refuted claims keep coming up. A persistent example:
#1837 “10^44.5 positions to be evaluated (Tromp's number, best estimate currently available)”

Refutation:

Tromp counted exactly
8726713169886222032347729969256422370854716254 possible chess positions.
https://github.com/tromp/ChessPositionRanking 

Then he randomly sampled 10,000 of these and found 538 of them legal.
Thus he arrived at his number of legal positions:
8726713169886222032347729969256422370854716254 * 538 / 10000 = 4.5 * 10^44
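The sampling arithmetic is a standard Monte Carlo estimate. Redoing it with the quoted figures (note they land nearer 4.7*10^44 than the 4.5*10^44 cited, though the order of magnitude is the point):

```python
from math import sqrt

total   = 8726713169886222032347729969256422370854716254  # Tromp's exact count
samples = 10_000
legal   = 538

p  = legal / samples                  # estimated fraction of legal positions
se = sqrt(p * (1 - p) / samples)      # standard error of that proportion
print(f"estimate: {p * total:.2e} legal positions")  # ~4.7e+44
print(f"+/- 1 SE: {se * total:.1e}")                 # ~2.0e+43
```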
All of his 538 legal positions look similar to this one:

[diagram omitted]

This position is legal: here is the proof game:

[proof game omitted]

This game has an accuracy near 0: almost all moves are blunders.
An ideal game with optimal moves would have an accuracy near 100%.
None of the 538 positions found legal can result from an ideal game with optimal moves.
Thus 4.5*10^44 is much too large an estimate to assess the feasibility of solving chess.

 

Avatar of DemonStalker_UA

Hello. Do you know people who want to help Ukrainians but don't know how to do it? I mean specific, targeted assistance.
I don't know English well, so I use a translator.

Avatar of DiogenesDue
DemonStalker_UA wrote:

Hello. Do you know people who want to help Ukrainians but don't know how to do it? I mean specific, targeted assistance.
I don't know English well, so I use a translator.

You seem to know English well enough to mask your intentions...

Avatar of DemonStalker_UA

What do you mean?

 

Avatar of MARattigan
tygxc wrote:

[quotes tygxc's refutation post above in full]

Ace refutation! Congratulations!

You could give even more examples. Here's another position from Tromp's 4.5 * 10^44.

[diagram omitted]

Here's the proof game ("the" of course, not "a"):

[proof game omitted]

I think this game has an accuracy near 0 and almost all moves are blunders, so we can obviously forget about that position too. 

In fact we can probably forget about all Tromp's positions and just call it a draw. 

Avatar of Kowarenai

lol

Avatar of tygxc

#1919
The proof game I gave is the shortest. Your trolling 'proof game' is not the shortest.
Tromp sampled 10,000 positions, of which 538 were legal and your position is not one of those.
The 538 legal Tromp positions only result from games full of blunders.
Yes, we can forget about all 538 legal Tromp positions and thus also about his count 4.5*10^44.
The Gourion count 10^37 is a better estimate and it is still too high.

Avatar of DiogenesDue
tygxc wrote:

#1919
The proof game I gave is the shortest. Your trolling 'proof game' is not the shortest.
Tromp sampled 10,000 positions, of which 538 were legal and your position is not one of those.
The 538 legal Tromp positions only result from games full of blunders.
Yes, we can forget about all 538 legal Tromp positions and thus also about his count 4.5*10^44.
The Gourion count 10^37 is a better estimate and it is still too high.

You still have to traverse these positions or otherwise evaluate them (which takes the same basic timeframes at best, or much, much longer at worst)...which is patently obvious.  The only reason that we have the 10^44.5 number to begin with is that we discarded the notion of evaluating all possible games (which is what you are proposing here, ironically).

So, sure, you could verify your 10^37 number of positions by evaluating 10^120 games and throwing out the positions arrived at via blunder-filled games...you and MARattigan have already eliminated 2 games this way!  Let us know when you're done with your first ECO code.

You never seem to understand that if you move part of the evaluation out of the process to some pre-process or parallel process, you aren't actually saving any time/effort.  Your 10^9 nodes + evaluating all positions for "reasonableness" beforehand takes as long or longer than just traversing the 10^44.5 positions.
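For the record, the timescale behind that objection is straightforward arithmetic at the thread's own 10^9 positions/s figure:

```python
SECONDS_PER_YEAR = 3.156e7
RATE = 1e9                                   # positions per second, as quoted

for label, positions in [("Tromp 10^44.5", 10 ** 44.5),
                         ("Gourion 10^37", 1e37)]:
    years = positions / RATE / SECONDS_PER_YEAR
    print(f"{label}: {years:.1e} years")     # ~1.0e+28 and ~3.2e+20 years
# For scale, the universe is roughly 1.4e+10 years old.
```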

Avatar of tygxc

#1922
"You still have to traverse these positions or otherwise evaluate them"
++ No, these positions never turn up in the course of the proof, they need neither traversing nor evaluation.

"The only reason that we have the 10^44.5 number to begin with is that we discarded the notion of evaluating all possible games"
++ The vast majority of the 10^44 legal positions are nonsense, like the one example of the 538 that I provided; they do not happen while weakly solving chess.

"if you move part of the evaluation out of the process to some pre-process or parallel process, you aren't actually saving any time/effort"
++ The main evaluation is hitting the tablebase with deep calculation and looking up the exact evaluation draw/win/loss.

"Your 10^9 nodes + evaluating all positions for "reasonableness" beforehand takes as long or longer than just traversing the 10^44.5 positions."
++ No, the vast majority of the 10^44 positions never turns up at all. There is no evaluation for reasonableness. There is only a rough ranking of tier 1, tier 2, tier 3, tier 4 moves to loosely guide the search. That is included in the nodes/s.
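One plausible reading of that tier-1-to-4 ranking, written as a move-ordering key for a search. This is a guess at the scheme, not a statement of it; the tier assignments below are made up, and python-chess is used for move generation:

```python
import chess

def tier(board: chess.Board, move: chess.Move) -> int:
    """Rough move tiers to loosely guide search order (assignments are guesses)."""
    if board.gives_check(move):
        return 1                      # checks first
    if board.is_capture(move) or move.promotion is not None:
        return 2                      # then captures and promotions
    if board.is_castling(move):
        return 3                      # then castling
    return 4                          # quiet moves last

board = chess.Board()
ordered = sorted(board.legal_moves, key=lambda m: tier(board, m))
print([board.san(m) for m in ordered[:5]])
```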

Avatar of DiogenesDue
tygxc wrote:

[quotes tygxc's reply above in full]

Lol.  I guess I should have said "your 10^9 nodes + the extra evaluation required to actually make these node results viable...".  You still want to bridge the gap to the concrete tablebase evaluations with Stockfish's mud and reeds...