Chess will never be solved, here's why

tygxc

#3307

"And previously A0 and Lc0 defeated SF"
++ It was a crippled version of Stockfish. TCEC pits a good version against a good version.

"sometimes the first approach prevailed, sometimes the other."
++ No, with short time control thick nodes are better. For solving chess the aim is to hit the 7-men endgame table base and thus thin nodes are better for that purpose.

"Different moves may be equally effective"
++ You give an erroneous explanation: that humans play more like engines and thus it is like autoplay. No, humans over the board and in ICCF have different strengths and different weaknesses, and they play differently. Human vs. human is never like engine autoplay. In ICCF even less so: they use different engines and the human decides. In Carlsen - Nepomniachtchi, Nepomniachtchi made 4 clear errors; Carlsen pounced on each and won 4 times. I can give examples from ICCF: won games where one side clearly erred and the other side pounced on it.

"Your thinking is circular"
++ Yes, in a way it is, but it is usual in many sciences. Assume something a priori.
Calculate using that assumption. Verify a posteriori that the assumption is valid. If it is, the calculation stands.

"because the classical model is validated by experiments, for v << c"
++ I deduce the error distribution from the plausible assumption of statistical independence. Using that assumption I calculate the error distribution. Then I verify that the error rate is low enough to validate statistical independence. That validates both the assumption and the distribution.
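The assume/calculate/verify loop described here can be sketched numerically. This is a minimal illustration only; the 60-move game length and 1% per-move error rate are hypothetical numbers chosen to show the shape of the argument, not figures from the thread. If errors on successive moves are independent, the error count per game is binomial, which for small rates is close to Poisson:

```python
from math import comb, exp, factorial

# Hypothetical, purely illustrative numbers: a 60-move game with an
# assumed independent per-move error probability of 1%.
n, p = 60, 0.01
lam = n * p  # expected errors per game under the independence assumption

# Under independence, errors per game follow Binomial(n, p),
# which is well approximated by Poisson(lam) when p is small.
for k in range(4):
    binomial = comb(n, k) * p**k * (1 - p) ** (n - k)
    poisson = exp(-lam) * lam**k / factorial(k)
    print(f"{k} errors: binomial={binomial:.4f}  poisson~{poisson:.4f}")
```

Verifying a posteriori would then mean checking that error counts observed in real games match this distribution; a mismatch would falsify the independence assumption.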

"if you think it is plausible that nobody else has thought of it before"
++ Yes, it is plausible. Maybe somebody else thought of it before but did not care to communicate. Maybe somebody else communicated it somewhere before but we did not read it. Maybe nobody thought of the possibility. By your logic no patent would ever be awarded: either it is not new, or it is new and thus not plausible as nobody has thought of it before.

"So Schaeffer solved checkers as an hobby (you say that!) starting the project with 200 computers and letting them run for 20 years" ++ His main effort was generating his 10-men endgame table base. Later he had a reduced number of engines.

"he (or some other hobbyst) do not find the resources to start a project to solve chess in 20 years using 300 computers." ++ Schaeffer was not into chess. Maybe somebody does it with a cluster of 300 desktops of 10^7 nodes/s. It is a major hindrance to coordinate 300 desktops, but it is doable. The main difference between Chess and Checkers or Losing Chess is that Chess requires more knowledge. Also Losing Chess made use of knowledge. As the 7-men endgame table base is already available, weakly solving chess is a chess analysis problem.

playerafar
sachin884 wrote:
Optimissed wrote:
Elroch wrote:
tygxc wrote:

#3298

 The 50-moves rule is never invoked in positions > 7 men

Another guess.

You seem to have kind of an addiction to guessing. It is a chess player's habit by contrast with a chess solver's habit.

In contrast to, in contrast to by contrast with. Apart from that, this is ridiculous, wouldn't you agree? I mean, even I don't talk such crapola. At least most of the time.

here we go again with another trash forum

"even I"
What happened there?
That looks like a blunder.   Introspection ??   Whaat??  happy
Major concession?
Kind of like one of tygxc's rare concessions ...
he made a major one at one point - during those days when his repetitions of invalid were not quite as nauseous ...
I and btickler both caught it.  Just can't remember what it was.

haiaku
tygxc wrote:

"sometimes the first approach prevailed, sometimes the other."
++ No, with short time control thick nodes are better. For solving chess the aim is to hit the 7-men endgame table base and thus thin nodes are better for that purpose.

Not necessarily, because "thin" nodes mean less selectivity, and thus the engine spends more time on less promising lines.

"Different moves may be equally effective"
++ You give an erroneous explanation: the humans play more like engines and thus it is like autoplay. No, humans over the board and ICCF have different strengths, different weaknesses and play differently.

As I said, the fact that they play differently does not mean they are not equally biased.

"Your thinking is circular "
++ Yes, in a way it is, but it is usual in many sciences.

No.

I deduce the error distribution from the plausible assumption of statistical independence. Using that assumption I calculate the error distribution. Then I verify that the error rate is low enough to validate statistical independence. That validates both the assumption and the distribution.

No, because your verification process is just like the one in the example of the room.

"if you think it is plausible that nobody else has thought of it before"
++ Yes, it is plausible. Maybe somebody else thought of it before but did not care to communicate. Maybe somebody else communicated it somewhere before but we did not read it. Maybe nobody thought of the possibility.

Maybe.

By your logic no patent would ever be awarded: either it is not new, or it is new and thus not plausible as nobody has thought of it before.

They are usually a little more difficult to conceive than your deductions. As you say, most people think that the game-theoretic value is a draw and it is common to assume statistical independence to simplify calculations. Yet nobody did it...

"he (or some other hobbyst) do not find the resources to start a project to solve chess in 20 years using 300 computers." ++ Schaeffer was not into chess. Maybe somebody does it with a cluster of 300 desktops of 10^7 nodes/s. It is a major hindrance to coordinate 300 desktops, but it is doable. The main difference between Chess and Checkers or Losing Chess is that Chess requires more knowledge. Also Losing Chess made use of knowledge. As the 7-men endgame table base is already available, weakly solving chess is a chess analysis problem.

But no one started such a project...

playerafar
Optimissed wrote:

he made a major one at one point - during those days when his repetitions of invalid were not quite as nauseous ...>>

Perhaps they are the same repetitions. If you can't change your angle of attack after about two years, it means that you haven't understood the necessity of doing so. I don't mean "you-you" btw. I think only people who are new to it will read it.

Sure they're the same repetitions.  Just not as nauseous.
But the point is the concession he made.
Perhaps a unique moment.
His sole moment of direct admission after several months and 3300 posts.

Elroch
tygxc wrote:

#3301
That is not a guess. Look through ICCF games, look at any data base.

Suppose something happens 1 in 10^24 times and you examine a sample of 1 million. What are you going to learn?

How is it you are still unaware of such basic statistical issues?

The answer is a refusal to learn.
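A quick back-of-envelope calculation makes Elroch's point concrete; the only numbers used are the 1-in-10^24 rate and the million-position sample already quoted:

```python
from math import expm1, log1p

p = 1e-24   # rate of the rare event (1 in 10^24)
n = 10**6   # sample size (one million)

# The naive 1 - (1 - p)**n underflows in double precision
# (1 - 1e-24 rounds to exactly 1.0), so use log1p/expm1 instead.
p_at_least_one = -expm1(n * log1p(-p))
print(p_at_least_one)  # about 1e-18: the sample almost surely shows nothing
```

In other words, a million-position sample tells you essentially nothing about an event at that rate.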

Solving chess is chess analysis.

No, solving chess is like proving a solution of a chess problem is correct. (The problem being to achieve the optimal result from the opening position (or against it)).

If the 'chess solver' invents nonexistent obstacles, then he will get nowhere.

By contrast you are ignoring most of the problem and still getting nowhere. Surely much more efficient.

 

playerafar


In other words - he is proving nothing.
Efficient point there again - in Elroch's post.  

MARattigan
tygxc wrote:

#3301
That is not a guess. Look through ICCF games, look at any data base.
Solving chess is chess analysis.
If the 'chess solver' invents nonexistent obstacles, then he will get nowhere.

Here you go

You can stop saying it now. It never had any relevance to the topic anyway.

tygxc

#3314

"Not necessarily, because "thin" nodes mean less selectivity, and thus the engine spend more time on less promising lines." ++ Yes, but weakly solving chess needs to hit the 7-men endgame table base to retrieve the exact evaluation draw / win / loss. Thin nodes allow more table base hits, as the TCEC graphs of table base hits show.

"As I said, the fact that they play differently does not mean they are not equally biased."
++ No, say two equally strong versions of Stockfish play against each other.
One version has settings to let it play like Tal: it will sacrifice material if it sees no clear refutation. The other version has settings to play like Petrosian: it will never sacrifice unless it sees a clear win, and it will allow and accept all sacrifices unless it sees a clear loss.
Now in game 1 the Tal engine sacrifices, the Petrosian engine accepts, they play it out and the Tal engine wins. So the Petrosian engine made a mistake allowing the sacrifice and lost as a consequence.
Now in game 2 the same scenario: the Tal engine sacrifices, the Petrosian engine accepts, they play it out and the Petrosian engine wins. So the Tal engine made a mistake making the incorrect sacrifice and lost as a consequence.
Two equally strong engines, totally uncorrelated errors.

++ Yes, in a way it is, but it is usual in many sciences.
"No." ++ Yes. No car designer uses relativistic quantum mechanics. No plumber uses the Navier-Stokes equations, no optician or electrician uses the Maxwell equations. They all use approximations for their problems: Newtonian mechanics, the Bernoulli equation, geometrical optics, Kirchhoff's laws... Sometimes they know in advance that their approximation is valid for their problem, sometimes they do not know in advance. Then they assume, calculate, and then verify the validity of the approximation. Like in the problem of the electron velocity. For high V it needs relativistic mechanics, for low V it needs quantum mechanics. For 1000 Volt it is clear Newtonian mechanics is OK. For 1 MV or for 1 µV it is not clear in advance.

"Yet nobody did it" ++ That is like saying to Einstein: smart people before you like Newton could have thought of that and nobody did it, so your theory must be wrong.

"But no one started such a project."
++ The major obstacle is money. 5 years for 3 cloud engines, or 3000 common 10^6 nodes/s desktops, or 300 of your super 10^7 desktops, or a quantum computer is not peanuts.
Finding good assistants is not easy either: active grandmasters will demand a fee, some retired grandmasters or ICCF grandmasters might do it for free, as a hobby.
I believe that Carlsen, Nepo, Caruana, and Karjakin and their teams of grandmasters and cloud engines already have solved a few ECO codes.

tygxc

#3318
That is a bad example.
1) It is an artificial construct, not a real game between humans or engines or ICCF.
2) It is a clear draw; in a real game between humans or engines or ICCF they would agree on a draw and not play 50 useless moves.

tygxc

#3316

"Suppose something happens 1 in 10^24 times and you examine a sample of 1 million."
++ Tromp counted 8726713169886222032347729969256422370854716254 possible position encodings, sampled 10^6 of them, found 56011 legal, and thus estimated that there are on the order of 10^44 legal positions.
As there are only 10^17 legal, sensible, reachable, relevant positions, something that happens 1 in 10^24 times does not happen.
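The scaling behind Tromp's figure is straightforward. The numbers below are the ones quoted above from his ChessPositionRanking project; the ~0.4% sampling error is my own addition, from the usual binomial standard error:

```python
from math import sqrt

# Numbers quoted above: total position encodings in Tromp's ranking,
# uniform sample size, and legal positions found in the sample.
encodings = 8726713169886222032347729969256422370854716254
sample = 10**6
legal_hits = 56011

# Scale the legal fraction up to the full encoding space.
estimate = encodings * legal_hits // sample
# Relative standard error of a binomial proportion with k hits ~ 1/sqrt(k).
rel_err = 1 / sqrt(legal_hits)

print(f"~{estimate:.2e} legal positions (+/- {rel_err:.1%})")
```

With these inputs the point estimate comes out near 5 x 10^44, i.e. order 10^44 as stated in the post.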

"The problem being to achieve the optimal result from the opening position (or against it))."
++ Yes, white tries to win, black tries to draw. Black succeeds, white fails, so white tries something else. When all reasonable white attempts are exhausted, then chess is weakly solved.

haiaku
tygxc wrote:

"Not necessarily, because "thin" nodes mean less selectivity, and thus the engine spend more time on less promising lines." ++ Yes, but weakly solving chess needs to hit the 7-men endgame table base to retrieve the exact evaluation draw / win / loss. Thin nodes allow more table base hits, as the TCEC graphs of table base hits show.

You overgeneralize.

"As I said, the fact that they play differently does not mean they are not equally biased."
++ No, say two equally strong versions of Stockfish play against each other.
One version has settings to let it play like Tal, it will sacrifice material if it sees no clear refutation. One version has settings to play like Petrosian, it will never sacrifice unless it sees a clear win and it will allow and accept all sacrifices unless it sees a clear loss.
Now in game 1 the Tal engine sacrifices, the Petrosian engine accepts, they play it out and the Tal engine wins. So the Petrosian engine made a mistake allowing the sacrifice and lost as a consequence.
Now in game 2 the same scenario: the Tal engine sacrifices, the Petrosian engine accepts, they play it out and the Petrosian engine wins. So the Tal engine made a mistake making the incorrect sacrifice and lost as a consequence.
Two equally strong engines, totally uncorrelated errors.

As if the players moved without considering what the opponent might do. A really strong attacker knows how to defend too, because they must foresee what the opponent can do to prevent or stop their attack; vice versa, a very strong defender knows how to attack, to foresee what the opponent can do to break their defense:

 "It is to Petrosian's advantage that his opponents never know when he is suddenly going to play like Mikhail Tal" - Boris Spassky.

Each player takes on the role of the opponent as he analyzes variations, and if they are equally strong their final evaluation is similar, as shown by their ACL (average centipawn loss) and regardless of their style (two moves can have a similar expected outcome, and the choice depends on psychological factors, too). On the other hand, if their evaluations were significantly different, we would see what you describe in your example: in every game either one or the other would make a mistake, fifty-fifty, because they would be equally strong but incapable of understanding and predicting the opponent's moves, and so we would see a lot of decisive games. However, of 49 official games between Petrosian and Tal, Petrosian won 5 and Tal 5. Same strength ⇒ similar evaluation ⇒ similar bias. Their systematic errors (not simple oversights) would be uncorrelated if their evaluations were uncorrelated. That is obviously not the case. They share similar knowledge of the game.

++ Yes, in a way it is, but it is usual in many sciences.
"No." ++ Yes.

No, for the reason, expressed in a previous post, that you carefully ignore.

"Yet nobody did it" ++ That is like saying to Einstein: smart people before you like Newton could have thought of that and nobody did it, so your theory must be wrong.

Except that everybody, as you say, thinks that the game value is a draw, and assuming statistical independence is common practice when feasible, while Einstein's postulates and calculations (especially in GR) aren't exactly obvious.

"But no one started such a project."
++ The major obstacle is money. 5 years for 3 cloud engines, or 3000 common 10^6 nodes/s desktops or 300 of your super 10^7 desktops, or a quantum computer is no peanuts.

Super!? 😦

PRAGNAYSRIVIZ

HEY I GOT 3 MILLION DOLLERS WHO WANTS IT

tygxc

#3322

"Like players moved without considering what the opponent may do."
++ Of course they do consider it, but as they cannot calculate through to the end, they have to decide on the basis of imperfect information, and they do that differently.
“If Tal sacrifices a piece, take it. If Petrosian sacrifices a piece, don’t take it.” - Botvinnik

"A really strong attacker knows how to defend too, because they must foresee what the opponent can do to prevent or stop their attack; vice versa a very strong defender knows how to attack, to foresee what the opponent can do to break their defense"
++ All of that is right, but eventually the player has to decide: to sacrifice or not to sacrifice.

"Each player takes on the role of the opponent, as he analyzes variations, and if they are equally strong their final evaluation is similar" ++ No, that is not true. Evaluations vary considerably. Strengths and weaknesses vary considerably. Even two identical Stockfish engines on identical hardware play totally different with different settings.

"in every game either one or the other would make a mistake"
++ No, that is not true. In classical world championships, ICCF correspondence, TCEC many games are without any mistake and thus drawn.

"they would be equally strong but uncapable to understand and predict the opponent's moves" ++ Two grandmasters at a tournament had finished their game and were looking at another grandmaster game in progress. They bet if they could predict the next move. They both failed miserably.

"we would see a lot of decisive games" ++ We would see a lot of decisive games if they make a lot of errors. That is why we see a lot of decisive games in blitz and rapid and not in classical.

"of 49 official games between Petrosian and Tal, Petrosian won 5 and Tal 5. Same strength"
++ Yes, "similar evaluation" ++ No. "similar bias" ++ No.

"They share similar knowledge of the game." ++ Yes, but they have a different approach, i.e. a different evaluation. If they had the same evaluation and bias then they would not have 10 decisive games out of 59. For comparison Kasparov - Karpov: 119 draws, 28 wins, 21 losses.

tygxc

#3325
It is an exact figure, calculated with a Haskell program.
https://github.com/tromp/ChessPositionRanking 

#3326
Game theoretically it does not matter if black wins or draws, it is a failure for white trying to win, so white has to try something else. That is why 1 e4 e5 2 Ba6? needs no attention.

 

haiaku
tygxc wrote:

"Each player takes on the role of the opponent, as he analyzes variations, and if they are equally strong their final evaluation is similar" ++ No, that is not true. Evaluations vary considerably. Strengths and weaknesses vary considerably. Even two identical Stockfish engines on identical hardware play totally different with different settings [ . . . ]

Maybe I did not make myself clear and we do not understand each other on the terminology. Let's say that according to the best engine BE ("best" meaning the engine with the highest rating), in a position P the best move A gives an evaluation in centipawns of 71 and the second best move B of 70. They give basically the same number (the expected score or WDL could be used as well, but that is not relevant to the point), which is what I mean by "evaluation". The two moves may be very different stylistically, though.

Now, it is well known that there is a clear correlation between a player's rating and their ACL: the higher the rating, the lower the ACL. Does that mean a highly rated player plays in the same style as BE? Not necessarily, but if a very strong player gives a move m a value v, BE will generally assign to m a value not very different from v (ACL close to 0). Now let's suppose that v is 71, but in fact the game-theoretic value of the move is a loss. In a game against Player₂, Player₁ makes that move and obviously does not recognize it as an error; how on earth can an equally strong Player₂ always recognize it as an error, when even BE assigns to m the same wrong value v (I am not talking about simple oversights, but about systematic errors due to incomplete knowledge of the game)? If Player₂ does not recognize the error, how does he know how to exploit it? He will likely fail to make the correct move in response, thus making another error which compensates for the opponent's.
As you can see, I do not use any major assumption to deduce the correlation between systematic errors. And what is your answer to the objection, raised in a previous post, that your reasoning is circular?

MARattigan
tygxc wrote:

#3318
That is a bad example.
1) It is an artificial construct, not a real game between humans or engines or ICCF.

So far as can be understood, you plan to use Stockfish to play from artificially constructed positions. How does that make it a bad example? It's Stockfish playing from an artificially constructed position.

The bad examples, so far as any of your suggestions about how you might go about solving "chess" are concerned, are precisely the real examples between engines or ICCF, which are played under different rules from the chess variant you apparently intend to solve. (The game I posted isn't even a game under TCEC rules - it ended on White's move 36 under the TCEC draw rule.)

At any rate you can now desist from posting  "the 50 move rule is never invoked with more than 7 men" ad nauseam. You have a counterexample.

2) It is a clear draw; in a real game between humans or engines or ICCF they would agree on a draw and not play 50 useless moves.

Your statement has never previously included any caveats about how you would assess a position. Why do you start now?

You regard the starting position as a clear draw. Does that mean you regard ICCF games as all sequences of useless moves?  

 

MARattigan
tygxc wrote:

...

2) Weakly solved means that for the initial position a strategy has been determined to achieve the game-theoretic value against any opposition. Hence black tries to achieve the draw and white tries to oppose against the draw.

...

There seems to be little correlation between the two sentences.

Even if you assume that the game-theoretic value is a draw and that the meaning in your first sentence is that a strategy is provided for both players, it's difficult to see why the strategies for the two players should have different aims. Shouldn't both aim for a draw in that case?

Moreover your description of Black's strategy appears to be in direct conflict with FIDE's

Art.1.4 The objective of each player is to place the opponent’s king ‘under attack’ in such a way that the opponent has no legal move.

and you assert that 1.e4 e5 2.Ba6 is a loss for White, which would obviously oppose the draw after 1.e4 e5, yet you also say you don't plan to consider that continuation.

Could you elucidate the connection between the sentences, please?

playerafar


Of Course the " 10^17 " figure is 'Drivel'  !!
He has selected it to correspond to 'five years' with his 'nodes per second' computers.
Which is also Drivel.
One year would appear to push too hard.

Ten years also more difficult for 'selling'.  More 'daunting'.
It is even more ridiculous than the 'Y2K' movement that predicted world disaster because of a future change in the dating system.
A fringe lunacy !
But unlike flat earth craziness - this crazy 'five years' notion attempts to 'infiltrate' science rather than deny it.
It's a slight deviation from the usual pseudoscience and pseudomath.
Which even have 'mainstreams' of their own !

Phony computer projects !
There's probably a whole 'genus' of such species ?
There's another term ... phylum.
It's probably an easy google about phony computer projects !!

pcwildman

Which begs the next question, is Chess infinite or finite?

playerafar
pcwildman wrote:

Which begs the next question, is Chess infinite or finite?

I don't 'beg'.
This is a Statement ...
the number of possible chess positions is Gigantic.  Daunting.
But it is finite.  The upper bounds on it are well known.
The first upper bound is 13 multiplied by itself 63 times.
13 to the 64th power.
Every square has 13 possible states: six kinds of piece times two colors, plus the empty square.  13.  Sixty-four squares.
So now - right away - you know the number of possible positions is Finite.
That number can be further cut down because there are at most two Kings, at least 32 squares must always be empty, and there are other maximums on piece counts.
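The crude bound just described is a one-liner to check with exact integer arithmetic; the 13-state-per-square encoding is the one given above:

```python
# Each of the 64 squares holds one of 13 states:
# 6 piece types x 2 colours, plus the empty square.
upper_bound = 13**64
print(upper_bound)           # a 72-digit number
print(len(str(upper_bound))) # i.e. roughly 2 x 10^71
```

Gigantic, but finite; the tighter counts (Tromp's ~10^44 legal positions, tygxc's disputed 10^17) are refinements of this kind of bound.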

But regarding the number of possible games - without a repetition rule the number of possible games would be infinite ...
and without a 50-move rule also ...  Game length would be infinite too ...
unless you rule that the game ends when one player's lifespan has ended.
The winner wins by 'default' happy

'Solving' using numbers of games - is therefore unrealistic.
Positions are used.
But the 'game' element ... sequences of moves ...  is still in there.
Can't be avoided in 'solving'.  Or not with today's computers.
Modern tablebase projects try to 'backtrack' from two Kings only.
But that task becomes Daunting with just 8 pieces on the board ...
So Daunting that even with just 7 pieces they couldn't factor in castling rights.