Forget the title .... it isn't brilliant, but it may be more intelligent, and it's applicable to this thread, which has gone completely off track. So many solutions here have required adding more facets or procedures to the task of solving chess. The same could be said of the Big Bang Theory: when it was found that the BBT wasn't working, the standard procedure was to add bits in an ad hoc fashion to make it work .... instead of taking something away! Regarding solving chess, one obvious thing to get rid of is the flawed fixation on solving positions and not games .... definitely not intelligent, because fixations are NEVER intelligent.
https://www.scientificamerican.com/article/our-brain-typically-overlooks-this-brilliant-problem-solving-strategy/
Chess will never be solved, here's why
Whoever said DeepMind can be generalized is right on! AlphaFold, in my old field of protein folding, using hypercomputing, is an amazing example. When comparing Go to chess, folks often mix up position combinatorics with state spaces, and we can argue complexity either via big-O computing (polynomial, exponential, P/NP-hard, etc.) or via simple game-tree complexity, i.e. Shannon's estimate log10(35^80) ≈ 123, where 35 is the number of candidate moves per position (now thought to be closer to 31) and 80 is the game length in plies (i.e. 40 moves). Remember, we've found forced mates in 500+ ignoring the 50-move rule, and in saying solving chess won't happen, Kasparov defined it as finding a forced mate after move 2 -- or, in programming terms, a WLD determination, always a forced draw like tic-tac-toe. This can't be done currently with an endgame-database-like strategy due to the impossible computing power required at 10^123, forcing an algorithmic solution, which would be higher-dimensional than currently feasible, because chess takes on stochastic characteristics, even though it is deterministic, with an edge-of-chaos variable when we switch from Bayesian tree searches to algorithms.
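For concreteness, that Shannon-style figure is just branching factor raised to game length. A quick sanity check in Python (the 35/31 branching factors and 80-ply game length are the usual rough assumptions, not exact values):

```python
import math

# Shannon-style game-tree complexity: b^d, where b is the average number of
# candidate moves per position and d is the game length in plies.
# b = 35 and d = 80 (40 moves per side) are the classic assumptions;
# more recent estimates put b closer to 31.
def game_tree_exponent(branching: float, plies: int) -> float:
    """Return log10 of the estimated game-tree size b^d."""
    return plies * math.log10(branching)

print(f"Shannon estimate (b=35, d=80): 10^{game_tree_exponent(35, 80):.1f}")   # ~10^123
print(f"With b=31 instead:             10^{game_tree_exponent(31, 80):.1f}")   # ~10^119

# Note this is a different quantity from the state space (the number of legal
# positions), which for chess is roughly 10^44.
```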
@7407
"we've found forced mate in 500+"
++ Yes, but not reachable from the initial position by optimal play by both sides.
"always a forced draw like tic tac toe" ++ Yes, like Checkers and Nine Men's Morris.
"impossible computing power at 10^123" ++ There are only 10^44 legal positions, of which only 10^17 are relevant to weakly solve Chess.
[snip]
only 10^17 [positions] are relevant to weakly solve Chess.
Pah.
I can get it down to 10^0 using an equally valid approach!
1. The initial position is materially balanced with no obvious advantage for one side.
2. Therefore, it is proven to be a draw, using your own alternative logic system (ahem)
I also want $1000000 each for solving the Riemann and Twin Prime conjectures by confident guessing.
@7409
"it is proven to be a draw" ++ Of course the initial position is a draw.
The question is: how? How to draw against 1 e4? How to draw against 1 d4?
Your reasoning says that if a position is "obviously" (to a weak chess player, even though not a strong one like Leelachess) drawn then you can ignore the details.
For example, you ignore the majority of legal responses in positions presented to the opponent by a strategy. In some cases, you ignore ALL of them!
@7411
"to a weak chess player" ++ No, to the strongest chess players that exist: ICCF grandmaster with engine and 50 days per 10 moves.
"you ignore the majority of legal responses"
++ If the best moves cannot win for white, then the worst moves cannot win for white either.
They are examples of weak chess players - i.e. those that would be beaten (occasionally or often) by a perfect chess player (to be certain, one that distributed its play across all tablebase-optimal moves in some way).
A weak chess player (as defined above) is sometimes simply wrong. Relying on such a player by saying "probably it isn't wrong here so we can rely on it 100%" should be obviously foolish.
Note also that no engine expresses certainty about the result in uncertain positions. AI engines explicitly express uncertainty about a position. E.g. Leelachess gives positive probabilities of win or loss in difficult tablebase draws, and this will obviously be true for positions beyond any tablebase too. Even with traditional engines, an evaluation of, say, 0.2 means Stockfish is not certain about the draw: it sees a positive expectation for White, which implies at least some wins. It would be very foolish to infer a certain result from an evaluation that DEFINITELY has uncertainty, such as this. Sometimes there is a very narrow route to a forced win, making a draw SEEM likely.
Amusingly, although you might say Stockfish is only certain when it says "mate in NN", observation shows that even this is uncertain. It gives that evaluation based on imperfect analysis which is merely "convincing". Sometimes it turns out to be wrong, and it revises the evaluation to not a forced mate after all!
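To illustrate the point about evaluations (this is not any engine's actual model; the scaling constant below is an arbitrary assumption), a common way to read a pawn-unit evaluation is as an expected score via a logistic curve, which makes it plain that +0.2 is a probabilistic statement and not a proof of a draw:

```python
def expected_score(eval_pawns: float, scale: float = 4.0) -> float:
    """Map a pawn-unit evaluation to an expected score in [0, 1] using a
    logistic curve. The scale constant is an illustrative assumption, not
    the model any real engine uses."""
    return 1.0 / (1.0 + 10 ** (-eval_pawns / scale))

# +0.2 is only a slightly-better-than-even expectation for White: a mixture
# of wins, draws and losses, not a certificate that the position is drawn.
print(round(expected_score(0.2), 3))   # ~0.529
print(round(expected_score(0.0), 3))   # 0.5
print(round(expected_score(3.0), 3))   # ~0.849 -- strong winning chances, still not a proof
```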
@7413
"Relying on such a player"
++ Weakly solving chess does not rely on the engine, but on the 7-men endgame table base.
"no engines express certainty about the result in uncertain positions"
++ The 7-men endgame table base does.
The point is to calculate until the 7-men endgame table base.
That is also how Checkers has been weakly solved.
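In sketch form, "calculate until the 7-men tablebase" amounts to a proof of the following shape. The helpers are hypothetical placeholders (stubbed out here), not a real move generator or tablebase probe; only the structure of the recursion matters:

```python
# Hypothetical placeholders -- a real move generator and tablebase probe would
# have to be supplied; these stubs only keep the sketch well-formed.
def legal_moves(position):
    raise NotImplementedError("placeholder for a real move generator")

def is_tablebase_position(position):
    raise NotImplementedError("placeholder for 7-men detection")

def tablebase_draw_or_better_for_black(position):
    raise NotImplementedError("placeholder for a real tablebase probe")

def black_holds_draw(position, black_to_move):
    """True if Black provably holds at least a draw from this position,
    with every line terminating in the tablebase."""
    if is_tablebase_position(position):
        return tablebase_draw_or_better_for_black(position)
    successors = legal_moves(position)
    if black_to_move:
        # Black needs only one adequate reply ...
        return any(black_holds_draw(p, False) for p in successors)
    # ... but every legal White move must be answered.
    return all(black_holds_draw(p, True) for p in successors)
```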
The usual confused nonsense. The problem is not with positions in the (tiny) tablebase. The problem is with the grossly inadequate analysis getting there.
I have explained why. You have explained it is beyond your ability to comprehend.
Also, that is NOT how checkers was solved. Fudging it as badly as you suggest would not have required much computation, and would not pass peer review as being of significance. If you disagree, get your waffling peer reviewed.
"... forcing an algorithmic solution, which would be higher-dimensional than currently feasible ..."
Yes, finally someone I can agree with .... an algorithmic solution is the only possibility. IMO what is necessary is to isolate state-changing dynamics and to attempt to define them mathematically. I see no alternative path. I define "state" in this context as "forced win or forced draw etc".
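To make "state" concrete: for a game small enough to enumerate, the forced-win/draw/loss label of every position follows from backward induction. A runnable toy example for tic-tac-toe (mentioned earlier in the thread); chess is of course far beyond this kind of enumeration, which is the whole problem:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board: str) -> str:
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return ""

@lru_cache(maxsize=None)
def value(board: str, to_move: str) -> int:
    """+1 = forced win for the side to move, 0 = forced draw, -1 = forced loss."""
    if winner(board):                  # the previous player completed a line
        return -1
    if "." not in board:               # full board, no line: draw
        return 0
    other = "o" if to_move == "x" else "x"
    best = -1
    for i, sq in enumerate(board):
        if sq == ".":
            child = board[:i] + to_move + board[i + 1:]
            best = max(best, -value(child, other))
            if best == 1:
                break
    return best

print(value("." * 9, "x"))   # 0 -- tic-tac-toe is a forced draw from the start
```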
@7409
"it is proven to be a draw" ++ Of course the initial position is a draw.
The question is: how? How to draw against 1 e4? How to draw against 1 d4?
That's incorrect. If you know it's a draw from move one, the question "how" is irrelevant since there would be many drawing methods. Just another example of you thinking incoherently .... no big deal.
@7415
"that is NOT how checkers was solved"
++ That IS how Checkers has been weakly solved.
See Figure 2
https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/checkers_is_solved.pdf
Of course, I know you only answer people who are more on your level than I am.
You can take that how you want to.
@7411
"to a weak chess player" ++ No, to the strongest chess players that exist: ICCF grandmaster with engine and 50 days per 10 moves.
"you ignore the majority of legal responses"
++ If the best moves cannot win for white, then the worst moves cannot win for white either.
And if your big red telephone is giving you bum information about which are the best and worst moves they probably won't win for White even against Stockfish.
Try it.
(I say even against Stockfish, because Stockfish can also fail to win simple mates in 16 with a king and rook against my king, as already posted.)
We know SF isn't much good: at least most of us do. ...
And we also know it's better than just about anything else that has ever played chess.
So not much point in asking questions like, "Is chess a theoretical draw?", and expecting a sensible answer from chess players. They're all not much good.
@7415
"that is NOT how checkers was solved"
++ That IS how Checkers has been weakly solved.
See Figure 2
https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/checkers_is_solved.pdf
You don't understand that article. Note that the valid solution required searching more than about the 2/3 power of the total number of legal checkers positions. That is a clue as to how badly wrong you are.
The key point every peer-reviewed researcher would agree on is that a strategy has to address ALL legal responses. It is blatantly obvious to those of us who do have a clue that it is ARBITRARY, SUBJECTIVE, and INADEQUATE to say "this position looks bad, let's ignore it".
It's not even consistently wrong. It's wrong in a different way for every engine you might use to guide doing it wrong.
@7423
"a strategy has to address ALL legal responses"
++ No. If ways for black to draw against the best moves are found,
then it is trivial to find a draw or even a win against the worst moves.
That is the best-first heuristic, as described in peer-reviewed literature.
Once one way is established to draw against 1 e4 and 1 d4, it is trivial to do the same for 1 a4.
Once one way is established to draw against 1 Nf3, it is trivial to do the same for 1 Na3.
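For what it's worth, the standard mechanism that comes closest to this kind of reuse is a transposition table: a position reached via 1 a4 that transposes into already-analysed territory is not analysed again. A minimal sketch (canonical_key and legal_moves are hypothetical placeholders, and the tablebase termination from the earlier sketch is omitted); it shows the mechanism, not that covering every legal White move thereby becomes easy:

```python
proven = {}   # canonical position key -> True if Black holds a draw there

def holds_draw(position, black_to_move):
    key = canonical_key(position, black_to_move)
    if key in proven:
        return proven[key]            # transposition: reuse earlier work
    if black_to_move:
        result = any(holds_draw(p, False) for p in legal_moves(position))
    else:
        result = all(holds_draw(p, True) for p in legal_moves(position))
    proven[key] = result
    return result
```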
2 things.
First, Go is much more complex than chess: it took 20 years after Deep Blue to reach the same level with AlphaGo.
Second, a massive number of permutations doesn't necessarily mean that something can't be solved. Checkers had 10^20 positions and was still solved. Of course, chess is much, much more complex, but the big number alone shouldn't dissuade us.
Your premise assumes that the efforts put into beating the world champs for Chess and Go were the same. This is not the case. Solving Chess was much better PR for IBM than solving Go would have been, so a lot more resources were brought to bear.
In terms of actually solving, IIRC Go has more positions...but evaluating Go positions should take less CPU horsepower than evaluating Chess positions.
Evaluating Go positions takes WAY more CPU than chess positions.
I recommend you look at the Smithsonian article or the one by Scientific American.
In terms of raw computing power,
I vaguely recall such articles being published in the pre-internet age. There was a certain amount of talk about them.