Chess will never be solved, here's why

tygxc

#2329
"About 20 people have provided various arguments."
1 Sveshnikov > 20 weak players

tygxc

#3237
"ask the moderators to just close the thread"
++ We cannot handle the truth, so let us forbid talking about it.
Let Galileo swear that the Earth does not revolve around the Sun.

XOXOXOexpert

There is a solution to Chess. Here's why:

Chess has limiting rules that make its largest possible number of moves finite, and therefore the game solvable. They are:

1. Time constraint

2. 3 move repetition

and 3. 50 move rule

playerafar
XOXOXOexpert wrote:

There is a solution to Chess. Here's why:

Chess has limiting rules that make its largest possible number of moves finite, and therefore the game solvable. They are:

1. Time constraint

2. 3 move repetition

and 3. 50 move rule

'Time constraint' is apparently never used in the tablebase projects and almost never comes up in discussions about solving chess.
The other two things are used - but so far in these discussions it's my humble opinion that the 3-fold and 50-move rules have been presented insistently and technically rather than with perspective.
There have been arguments about terminology rather than attempts to present those factors generically.
Regarding any project to 'solve' chess ... it should be recognized there are multiple projects within the bigger project.
Most of the posts are reactions to, or defenses of, an invalid '5 years to solve' claim based on 'nodes' and Sveshnikov, and on arbitrarily cutting the number of positions to be solved to less than a quintillionth of its value.
Having said that though - this being a chess site means that the marketing is directed towards clients who 'have time'.  happy

XOXOXOexpert

ty

haiaku

@tygxc, how much money would the project require?

tygxc

#3243
Renting 3 cloud engines of 10^9 nodes/s each (i.e. modern computers) for 5 years, plus 3 (ICCF) grandmasters as good assistants, would cost a few million $.
The most feasible is probably to run a pre-project for just 1 ECO code.
Checkers and Losing Chess have been solved by hobbyists using desktop computers.
Chess requires more.
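
As a rough check on those figures, here is a back-of-envelope node count - a minimal sketch using only the numbers quoted above (3 engines, 10^9 nodes/s, 5 years); whether a budget of this order is anywhere near sufficient is exactly what the thread disputes.

```python
# Node budget implied by the figures above: 3 cloud engines,
# 10^9 nodes/s each, running continuously for 5 years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 s

engines = 3
nodes_per_second = 10**9
years = 5

total_nodes = engines * nodes_per_second * years * SECONDS_PER_YEAR
print(f"total nodes searched ~ {total_nodes:.1e}")   # ~4.7e17
```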

haiaku

A little vague; how many millions? Is Schaeffer a hobbyist to you?

tygxc

#3245
The rental fee for the engines is a matter of negotiation.
IBM or Google might offer a discount in exchange for publicity.
Likewise the compensation for the good assistants. Some might do it for free out of interest, some might want to receive a decent pay so as to earn their cost of living.
I estimate about $3 million would do.

chessisNOTez884

This topic is going off topic for sure. As I said, this is the conclusion: it does not matter whether it gets solved or not. All that matters is that it's a great BOARD AND MIND AND BRAIN game. BY ME ON CHESS

haiaku

@tygxc, is Schaeffer a hobbyist to you? How much for just one ECO code?

tygxc

#3248
"Schaeffer is an hobbyst"
He is a professional computer scientist, but the 16 years of solving checkers were not his job, but rather some personal side-project, i.e. hobby.
The people that solved Losing Chess were professional mathematicians, but in their paper they describe themselves as hobbyists, so solving Losing Chess was a hobby to them.

"How much for just one ECO code?"
++ I estimate $100,000 for 1 ECO code. I suspect Carlsen, Nepo, Caruana, Karjakin and their teams of grandmasters and cloud engines already have solved a few ECO codes (Petrov, Berlin, Marshall, Sveshnikov) during their months of match preparation, but they obviously keep that to themselves. That is also why Sveshnikov published his analysis of B33 in 1988: as he was diagnosed with 3rd stage cancer he realised his ambitions as a professional player were over and so he could just as well publish his findings.

haiaku
tygxc wrote:

the 16 years of solving checkers were not his [Schaeffer's] job, but rather some personal side-project, i.e. hobby.

If you say so... Anyway thank you for your answer about the cost.

About your idea of a project: I insisted so much on the game-theoretic value of the game because it is crucial for your theory. As I said, the percentage of drawn games is not sufficiently strong evidence to assume that the game-theoretic value is a draw. More importantly, it is not sufficient to give a reliable estimate of the error rate per move. To do that, you start from the assumption that errors are statistically independent...

Now, let's say that an engine is playing in autoplay, it is White's turn at move n of the game, and the engine analyzes the position P, reaching depth d; then it plays a move M which is a mistake and turns a draw into a loss for White. After that, the engine takes Black and analyzes the new position P₁ at depth d (on average). It already analyzed the line starting from P₁ in the previous turn, but now one more ply has been played; nonetheless, with some approximation we can say that reaching depth d at plycount 2n-1 gives the line the same evaluation as depth d+1 at plycount 2(n-1), and it is well known that the difference between an evaluation at depth d and one at depth d+1 is on average smaller and smaller the larger d is. That means that very likely an engine does not recognize at plycount 2n-1 an error made at plycount 2(n-1).

Most of the time these mistakes can be exploited only by playing a very precise move, so the engine will likely play another wrong move, one that does not exploit the error, while the evaluation is still wrong. Even if the engine is lucky and plays the right move at plycount 2n-1, it would face the same problem at plycount 2n+1, 2n+3... So if an engine makes a mistake in autoplay, it will very likely soon make another mistake with the other colour that rebalances the game. That's why, even if engines are becoming more accurate and the game value is a draw, it is still not possible to say whether they make 0, 2, 4, 6 or more errors in general: the errors are not statistically independent.
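
That argument can be made concrete with a toy simulation (purely hypothetical numbers, not a model of any real engine): if a blunder is only punished when the opponent finds the one precise refutation, and is otherwise handed straight back, games can contain several errors each and still end drawn at ICCF-like rates, so the draw rate alone cannot distinguish zero errors from many paired ones.

```python
import random

def simulate(games=100_000, plies=80, p_error=0.05, p_exploit=0.02, seed=0):
    """Toy model, not real chess: the game value is a draw; a side blunders
    with probability p_error per ply, and the blunder is converted into a win
    only if the opponent finds the single precise refutation (p_exploit);
    otherwise the advantage is handed straight back -- a paired error."""
    rng = random.Random(seed)
    decisive = errors = 0
    for _ in range(games):
        hanging = False                      # an unexploited blunder on the board?
        for _ in range(plies):
            if hanging:
                if rng.random() < p_exploit:
                    decisive += 1            # precise refutation found, game decided
                    break
                hanging = False              # compensating error by the other side
                errors += 1
            elif rng.random() < p_error:
                hanging = True               # new blunder
                errors += 1
    return decisive / games, errors / games

print(simulate(p_error=0.0))   # (0.0, 0.0): no errors, 100% draws
print(simulate(p_error=0.05))  # roughly 7 errors per game, yet over 90% still drawn
```

The parameters are arbitrary; the only point is that a high draw rate is compatible both with perfect play and with many mutually cancelling errors.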

You used simple maths to do your calculations, yet you think we cannot understand it. Did Tromp make such calculations to estimate the error rate per move? If not, do you think he too is not capable enough to conceive those calculations and arrive at your very conclusion?

Elroch

I predict that @tygxc will fail to be convinced. wink

Hilariously, he thinks Sveshnikov solved B33 in the game-theoretic sense a decade before computers reached the strength of the best (puny) humans! (Note carefully that what they need to do is converge on the standard of play of a 32-piece tablebase. They are presently woefully short of reaching the level of a much smaller tablebase.)

I am now going to solve C67.

It's a draw, by example. Ta da!!

[irony]

tygxc

#3251

"As I said the percentage of drawn games is not a sufficiently strong evidence to assume that the game-theoretic value is a draw."
I mentioned 5 kinds of evidence:
1) General consensus of expert opinions in this century
2) AlphaZero autoplay even with stalemate = draw and more draws with more time
3) ICCF WC, even with 7-men tablebase wins of > 50 moves without capture or pawn move
4) TCEC even with imposed openings intended to be slightly unbalanced
5) human classical world championship matches prepared by teams of grandmasters & engines
Maybe 1 of the 5 is not sufficient proof, but all 5 taken together are.

"you start from the assumption that errors are statistically independent"
++ Like when I hang a piece and you fail to notice it, so you do not take it. Yes, errors in AlphaZero autoplay could come in pairs; in ICCF, TCEC, and human WC they are independent.

"You used simple maths to do your calculations, yet you think we cannot understand it."
++ It is only high school math, yet some do not even understand simpler proofs.

"it is still not possible to say whether they make 0, 2, 4, 6 or more errors"
++ For the ICCF results it is the only way to explain these. It does not even need the assumption that chess is a draw: that follows as the only way to explain the data.

"Did Tromp make such calculations to estimate the error rate per move?"
++ Tromp estimated the number of legal positions by random sampling.

tygxc

#3253
"he thinks Sveshnikov solved B33 in the game theoretic sense a decade before computers"
++ That is what Sveshnikov himself said. It is his variation.

"They are presently woefully short of reaching the level of a much smaller tablebase"
++ Humans get tired, get nervous in time trouble, get disheartened by previous losses.
Even ICCF grandmasters fall ill and then blunder from their sickbed.
Otherwise 99% of ICCF WC draws are ideal games with no errors, i.e. perfect play.
Human classical WC match games are close to perfect:
whenever a clear error is made it is in an otherwise still drawn position.
All 4 games Nepo lost to Carlsen were by blunders in drawn positions.

dannyhume
Chess is a closed mathematical system with precise technical rules … It will eventually be solved. I would imagine the methodology would be something different than calculating how much time more powerful and faster engines can push through the seemingly infinite possibilities. Maybe something akin to AlphaZero figuring out how to beat the strongest current chess engine. Or how chess engines became much stronger, not based on brute mathematical speed, but rather by factoring in positional principles and tweaking them. Or how Einstein discovered Relativity.
haiaku
tygxc wrote:

"As I said the percentage of drawn games is not a sufficiently strong evidence to assume that the game-theoretic value is a draw."
I mentioned 5 kinds of evidence:
1) General consensus of expert opinions in this century [ . . . ]

The general consensus is that the game is not ultra-weakly solved; in fact, no one but you says that the game is ultra-weakly solved. This is the third time I have repeated that, and your "objection" so far is: "I do".

"you start from the assumption that errors are statistically independent"
++ Like I hang a piece and you fail to notice it and so you do not take it. Yes, errors in AlphaZero autoplay could come in pairs: in ICCF, TCEC, human WC they are independent.

As for games between humans or between different engines, we have to understand that all of them in fact use some sort of "evaluation function" that does not encompass all possible situations, and therefore they are biased: they use rules of thumb that statistically give the best outcome. Same strength, similar biases.

To give a very simple example, let's say that we are playing a videogame, and in a particular type of situation we can only play two moves, A and B; the outcome can only be 1 or 0 and we want to maximize it. A gives 1 80% of the time and B gives 1 20% of the time. Which is the best strategy? Without other information it is: always play A, of course. Any other strategy would be "suboptimal", but two "optimal" players will both fail to treat properly the 20% of cases where the best move is B. Something like that, but with many more options, happens in chess too, so players of the same strength evaluate things in a very similar way, and therefore it is impossible that the errors made by one of them are completely uncorrelated with the errors made by the opponent, especially in the case of engines, which are not affected by random disturbances like fatigue, emotions, etc.
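
A tiny numerical version of that example (only the 80/20 split comes from the paragraph above; everything else is illustrative): two players sharing the same "always play A" policy fail on exactly the same positions, which is very different from what statistically independent 20% error rates would produce.

```python
import random

rng = random.Random(1)
N = 100_000
# In 20% of situations only move B holds the result, but the statistically
# "optimal" policy (as in the example above) is to always play A.
b_position = [rng.random() < 0.2 for _ in range(N)]

# Two players with the same biased policy err on exactly the same positions.
both_err_shared = sum(b_position) / N                              # ~20%

# If instead their 20% error rates were statistically independent:
err_1 = [rng.random() < 0.2 for _ in range(N)]
err_2 = [rng.random() < 0.2 for _ in range(N)]
both_err_indep = sum(a and b for a, b in zip(err_1, err_2)) / N    # ~4%

print(f"shared bias:  both err together on {both_err_shared:.1%} of positions")
print(f"independent:  both err together on {both_err_indep:.1%} of positions")
```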

"it is still not possible to say whether they make 0, 2, 4, 6 or more errors"
++ For the ICCF results it is the only way to explain these. It does not even need the assumption that chess is a draw: that follows as the only way to explain the data.

I deduced the very same data you mention from premises not based on those data; see above. This is the third time I repeat that, but you just state your hypothesis with no explanation at all: "most games end in a draw and most experts think it's a draw", then jump to the conclusion "therefore the game value is a draw", and the explanation is "because it's the only way to explain that", which is begging the question. Like: "The Apple iPhone is the best smartphone on the planet because no one makes a better smartphone than Apple does".

"Did Tromp make such calculations to estimate the error rate per move?"
++ Tromp estimated the number of legal positions by random sampling.

Is that an objection? Why do you ignore the core point? If he did not, do you think he too is not capable enough to conceive those calculations and arrive at your very conclusion?

dannyhume
Chess falls under combinatorial game theory in mathematics. The possibilities are practically infinite with respect to our current capabilities, but a limit exists given the current rule set, even if the possibilities currently exceed the number of atoms in existence.
dannyhume
Chess has perfect information, which lends itself to more predictive and theoretically quantifiable models (compare to poker or a pro-con list, which have elements of chance and subjectivity, respectively).

The current rules do not allow a game to go on forever, so my thought is that chess can, in theory, be solved because of these limits, though we can debate about when this will be achievable in the course of humanity.
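
A crude illustration of that finiteness - a sketch, not a tight bound, and assuming the 50-move draw is claimed whenever available (the automatic 75-move rule gives a similar bound, scaled accordingly): the 50-move counter only resets on a pawn move or a capture, and those events are limited in number, so the length of a game is bounded.

```python
# Generous overestimate of the longest possible game under the 50-move rule.
max_pawn_moves = 16 * 6    # each pawn can advance at most 6 times before promoting
max_captures   = 30        # 32 pieces minus the two kings, which never leave the board
max_resets     = max_pawn_moves + max_captures   # deliberately double-counts pawn captures

max_game_length = (max_resets + 1) * 50          # at most 50 full moves between resets
print(max_game_length)                           # 6350 moves -- enormous, but finite
```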

Though maybe the discussion is not so much whether chess is solvable, but whether humanity will be able to figure it out in time even if it is (I have hardly read any of the 3000+ comments in this thread).