#3125
"Checkers was not weakly solved by Marion Tinsley"
++ Yes, that is right. Checkers and Losing Chess were weakly solved by mathematicians and computer scientists. Most of their work was to establish an endgame table base for their game. That is a task for computer scientists. For chess 7-men endgame table bases have been generated by computer scientists, not by grandmasters.
The second part of solving Checkers or solving Losing Chess requires more game knowledge. Chess has discretionary captures and requires even more game knowledge to set up openings and to terminate a search when the outcome is not in doubt like in most but not all opposite colored bishop endings.
That is why weakly solving chess with Stockfish - as the chess equivalent of Chinook, which Schaeffer used for Checkers - should be done by (ICCF) (grand)masters. Somebody like GM Sveshnikov was most qualified for such a task.
Chess will never be solved, here's why
#3277
"You claim that only 10^44 legal positions need consideration"
++ That is not my claim: Tromp proved there are 10^44 legal positions. So that is what is needed for strongly solving chess, i.e. a 32-men table base. It is not feasible with present technology. Only weakly solving chess is feasible with present technology. That needs far fewer legal, sensible, reachable, and relevant positions: 10^17.
"Your assumption that somehow, positions can be assessed by one engine-glance"
++ That is not my assumption: it is the contrary of what I say all the time. I say positions can only be evaluated by deep brute-force calculation until the 7-men endgame table base or a 3-fold repetition is reached. You are the one with the daft evaluation algorithm.
"just let the engine glance at the starting position and you've got it"
++ I say all the time that no evaluation algorithm can assess a position, the only way is deep calculation towards the 7-men endgame table base.
"In reality, games hundreds of moves long have to be checked."
++ The longest ICCF draw was a perpetual check after 102 moves. The average is 39 moves and the standard deviation 14 moves. So 60 moves is exceptionally long.
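A rough check of how far 60 moves sits from those figures, assuming game lengths are approximately normally distributed (an assumption of this sketch, not a claim made in the thread):

```python
# How unusual is a 60-move game if ICCF games average 39 moves
# with a standard deviation of 14 moves? Assumes an approximately
# normal length distribution (an assumption made for illustration).
from statistics import NormalDist

mean, sd = 39.0, 14.0
z = (60 - mean) / sd                # z-score of a 60-move game
pct = NormalDist(mean, sd).cdf(60)  # fraction of games shorter than 60 moves

print(f"z = {z:.2f}")       # 1.50 standard deviations above the mean
print(f"cdf = {pct:.3f}")   # about 0.933
```

Under that assumption a 60-move game sits about 1.5 standard deviations above the mean, i.e. around the 93rd percentile: long, though how "exceptional" that is depends on the true shape of the length distribution.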
#3281
So you would let a 1381 rated player decide which lines to analyse and which endgames not to analyse further.
#3284
Weakly solving chess is the ultimate chess analysis: white tries to win, black tries to draw. Whenever white fails to win, white tries something else.
When white has exhausted all reasonable attempts to win, chess is weakly solved.
Somebody like Sveshnikov is most qualified: experienced in analysis without engines, experienced in analysis with engines, author of opening analysis books, grandmaster, world champion 65+.

"In reality, games hundreds of moves long have to be checked."
++ The longest ICCF draw was a perpetual check after 102 moves. The average is 39 moves and the standard deviation 14 moves. So 60 moves is exceptionally long.
60 moves is not "exceptionally long" for one of my blitz games.
357 move game - Leela versus Stockfish 2020
#3275
Your response is utterly wrong for 3 reasons.
1) The length of 60 moves implies it is representative. An average ICCF game lasts 39 moves and that is relevant as 99% of ICCF WC draws are ideal games with optimal moves from both sides, i.e. perfect play and thus relevant to weakly solving chess.
The length of 60 moves is simply the figure in the statement which you are claiming is utterly wrong. Nothing more, nothing less.
ICCF games, which generally terminate in agreed draws, are not played under FIDE rules and, most likely, include 0% ideal games. Relevant to what? Probably not relevant to solving chess in any sense. Certainly not relevant to any of the tablebase solutions completed so far, which can be said to weakly or strongly solve cut-down versions of chess under cut-down rules.
Whether they are relevant to your proposal it's impossible to say because you are incapable of either settling on a set of rules for the game you say you intend to solve or settling on a fixed method for doing it.
2) Weakly solved means that for the initial position a strategy has been determined to achieve the game-theoretic value against any opposition. Hence black tries to achieve the draw and white tries to oppose against the draw.
Obvious non sequitur. And obviously no reason to claim @Optimissed's statement utterly wrong. Quite irrelevant to that.
You have many times implied that you don't even intend to provide a solution under the definition you have just given, but you still say you can solve chess. In what sense are you using the word "solve" when you use it in any context other than quoting that definition?
3) The same position reached by a different move order is the same position and has the same evaluation.
Since you obviously couldn't understand it the first time, I'll restate it. @Optimissed's statement is about numbers of games. If two games reach the same final position by different sequences of moves they're still different games, irrespective of whether the moves are permutations of each other or whether you have a sensible meaning of the term "position".
With your meaning of "position" under competition rules, games that reach the same position don't necessarily have the same evaluation; for that you need a sensible definition of "position". Different move orders will result in different intermediate positions in any sense, and the final evaluation is not necessarily the same. All mates must avoid repeating any position that has already occurred twice in the game, which may render mate impossible in some games but not in others that reach the same position in your sense.
There are enough chess positions; it is wasteful to assess the same position multiple times.
But maybe necessary if you have a lame understanding of the term "position". Depends on exactly what you propose to do, which you steadfastly refuse to disclose.
The 50-moves rule plays no role, as it is never invoked in positions >7 men in ICCF, TCEC, human world championships etc. Chess has already been strongly solved for positions of 7 men or less.
No it hasn't. It depends on what you mean by "solve" and "chess". Even with "chess" redefined as a zero-sum game, there is no current strong solution of a version of chess that includes both the 50 move and triple repetition rules, nor any planned. That would apply for any meaning of "strongly solved" that has been generally accepted and would certainly apply to any of the versions you have suggested. It may well not be feasible beyond 3 men.
The 10^72 is ridiculous: even for strongly solving chess i.e. a 32-men table base only 10^44 legal positions need consideration.
You are confused not only by the difference between diagrams and positions but also between games and positions, and further between positions in a game of chess under rules that make the game zero sum and either do or do not include a fifty move rule and/or a triple repetition rule.
Your "ridiculous" figure of 10^72 presumably is supposed to refer to @Optimissed's figure for the number of 60 move games viz. 4^120 (c. 1.77 x 10^72) and you both (mis)quote Tromp's figure for the number of legal positions, not games in basic rules chess as well as failing to understand (a) that a tablebase solution needs consider only a smallish multiple of the number of winning legal positions in basic rules chess and (b) that current tablebase solutions are not strong solutions of the positions covered if the rules contain a triple repetition rule.
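The 4^120 figure (4 plausible moves per ply over a 120-ply, i.e. 60-move, game, as modelled in the thread) is easy to verify arithmetically:

```python
# Verify the quoted figure: 4 candidate moves per ply over a
# 60-move (120-ply) game gives 4**120 possible move sequences.
import math

games = 4 ** 120
print(f"{games:.3e}")     # ≈ 1.767e+72
print(math.log10(games))  # ≈ 72.25
```

So c. 1.77 x 10^72 is the correct value of 4^120; whether 4 candidate moves per ply is the right model is a separate question.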
For weakly solving chess 10^17 legal, sensible, reachable, and relevant positions need consideration.
Then tell us how. I assume you continue to refuse to post pseudocode for your proposed method so you can continue to post nonsensical figures indefinitely without any possibility of a definite refutation.

"3) Different move orders leading to the same position are different games irrespective of the final position, so again this point serves only to show its own irrelevance"
Not only different games - they can spin off into other games and positions too.
Defining 'solving' chess well - would include defining the enormity of the task.
But without illegitimate 'cutdowns'.
In other words - the whole task is set out objectively.
With all its sub-tasks and sub-projects.
No pretenses.
Analogy - somebody at Lockheed-Martin wants to cut corners.
Boss: "Hey skip 90% of those project proposals. Now !"
Engineers/mathematicians/scientists: "OK Boss !"
Then it comes time for the first test flight of the fighter jet.
Government officials there. Competing executives from Grumman and other companies ...
Does the engine even start up ?
Does the plane get off the ground?
Let's say by some miracle it did ...
On the very first bank - the plane simply slides down into the ground.
"Hey it was a 'weak solve' "
I would want neither tygxc nor Sveshnikov in charge of developing the aircraft.
No 'skipping'. No cutting corners.
By the way - the death of the pilot would lead to indictments against 'the Boss'.

If poorly justified stubbornness was the key to solving chess, @tygxc would have done it by now.
Here here ! And Hear Hear !
Jolly Good post and Good Show !
Really. Most efficient !
@Optimissed re #3292
I intended to say @tygxc, not you, misquoted Tromp (quite apart from comparing your figure intended to be games generated with Tromp's figure of legal basic rules positions).
Tromp actually estimates the number of basic rules legal chess positions as (4.82 +- 0.03) * 10^44 not 10^44. @tygxc routinely throws away the majority of whatever figure he arrives at for positions of various sorts, this being a minor example. The problem is it's infectious; the same figure of 10^44 is appearing in other users' posts.
If @tygxc intends to solve a version of chess that includes the 50 move and triple repetition rules then the number of basic rules legal chess positions would be a minuscule fraction of the nodes in his game tree and Tromp's figure would probably have no relevance to his proposed solution.
If he really has a proposed solution.
#3295
4.82*10^44 is the same as 10^44 for all practical purposes.
Up / down symmetry and left / right symmetry after loss of castling rights reduce that by a factor of 4, so that leaves 1.21*10^44. For all practical purposes it is 10^44 legal positions. That would be the number of positions needed for strongly solving chess, i.e. a 32-men table base. Time and storage are prohibitive with present technology.
For weakly solving chess that figure has no relevance, as the vast majority of the positions Tromp found legal cannot occur in a reasonable game with >50% accuracy and thus certainly not in an ideal game with optimal moves i.e. perfect play. Those positions have multiple underpromotions to pieces not previously captured from both sides. Each such underpromotion represents a blunder worth a piece. Moreover underpromotions from both sides are not consistent: game-theoretically it makes no sense that both sides avoid stalemate.
A better figure is the 10^37 as calculated by Gourion.
Even the vast majority of these positions are not sensible: they cannot occur in a game of > 50% accuracy. Even after adding some positions with 3 or 4 queens, there remain only about 10^32 legal and sensible positions.
The majority of these positions cannot be reached during the solving process. E.g. after 1 e4 all positions with a pawn on e2 are unreachable. That leaves about 10^20 legal, sensible positions reachable during the solving process.
The majority of reachable positions are not relevant. If 1 e4 e5 draws, then it is not relevant if 1 e4 c5 draws as well or not. If good moves cannot win, then bad moves cannot win either. That is why 1 a4 or 1 e4 e5 2 Ba6 are not relevant. That leaves 10^17 legal, sensible, reachable, and relevant positions.
3 cloud engines of 10^9 nodes/s can thus weakly solve chess in 5 years.
An alternative would be a cluster of 3,000 desktops of 10^6 nodes/s.
Another alternative would be Stockfish translated from C++ to Python and running on a quantum computer rented from D-Wave, Google, or IBM.
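The arithmetic behind the 5-year figure can be checked directly (taking the thread's numbers at face value; whether engine nodes correspond one-to-one to distinct relevant positions is exactly the disputed point):

```python
# Check the claimed time budget: can 3 engines at 1e9 nodes/s
# visit 1e17 positions within 5 years?
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ≈ 3.156e7 s
nodes_per_second = 3 * 1e9             # 3 cloud engines at 1e9 nodes/s each
positions = 1e17                       # claimed search-space size

years = positions / nodes_per_second / SECONDS_PER_YEAR
print(f"{years:.2f} years")            # ≈ 1.06 years of pure search
```

At full throughput the raw node count takes about a year, so the quoted 5 years leaves roughly a factor of 5 margin; the contested step is the reduction to 10^17 positions, not this division.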

[snip] 10^44 legal positions. [snip]
You need to try to mimic intelligent people, by taking on board relevant facts from others. In view of this you should have said FOR BASIC RULES CHESS. This is not chess as it is played.
For weakly solving chess that figure has no relevance, as the vast majority of the positions Tromp found legal cannot occur in a reasonable game with >50% accuracy
Firstly, this is a guess.
Secondly it is a guess based on an undefined concept - "accuracy". If you want to have any credibility, state what you mean by "accuracy" in a way that would make sense to knowledgeable people.
#3298
"you should have said FOR BASIC RULES CHESS. This is not chess as it is played."
++ As said many times: the 3-fold repetition rule is essential and is invoked in 16% of ICCF WC draws. It can be simplified to a 2-fold repetition rule for the purpose of solving chess. The 50-moves rule is never invoked in positions > 7 men and thus can be considered unwritten for the purpose of solving chess. As said many times: a position (FEN) is a diagram with the side to move, castling and en passant flags. To verify if two positions are the same for the 3-fold repetition rule, it is checked whether the new FEN has previously occurred twice. See Laws of Chess 9.2.2.
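The FEN-based repetition check described above can be sketched as follows (a minimal illustration, not the poster's actual tooling; it keys on the first four FEN fields and ignores the move counters, and a full implementation would also have to check whether the en passant capture is actually possible, per Article 9.2.2):

```python
# Detect threefold repetition by counting FEN keys.
from collections import Counter

def repetition_key(fen: str) -> str:
    # Positions are "the same" for repetition purposes when piece
    # placement, side to move, castling rights and en passant square
    # match: the first four FEN fields. Halfmove/fullmove counters
    # are ignored.
    return " ".join(fen.split()[:4])

def has_threefold(fens) -> bool:
    # Return True as soon as any position occurs for the third time.
    seen = Counter()
    for fen in fens:
        key = repetition_key(fen)
        seen[key] += 1
        if seen[key] >= 3:
            return True
    return False

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
# The same position reached three times (counters differ, fields 1-4 match):
print(has_threefold([start, start.replace("0 1", "0 5"),
                     start.replace("0 1", "0 9")]))  # True
```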
"Firstly, this is a guess. Secondly it is a guess based on an undefined concept - accuracy."
++ No, it is neither a guess nor an undefined concept. None of the 56011 sampled positions Tromp found legal can occur in a reasonable game because of the multiple underpromotions to pieces not previously captured they contain, even on both sides. Each underpromotion represents a loss worth a piece. That is clear at first glance at the sampled positions, but to make it objective, load the PGN of the proof game that proves legality into the analysis tool on this site and read the accuracy figure. The accuracy figure does not prove that all moves are optimal, but if the accuracy figure is < 50%, then that proves that the moves are not optimal.

#3298
The 50-moves rule is never invoked in positions > 7 men
Another guess.
You seem to have kind of an addiction to guessing. It is a chess player's habit by contrast with a chess solver's habit.
#3301
That is not a guess. Look through ICCF games, look at any data base.
Solving chess is chess analysis.
If the 'chess solver' invents nonexistent obstacles, then he will get nowhere.

"The general consensus is that the game is not ultra-weakly solved"
++ The general consensus is that chess is a draw, but it is not yet formally proven.
I presented 5 kinds of evidence.
Adding the word "formally" does not change things: it's unproven. Period. You propose to use the conjecture to reduce the search space to 10¹⁷ nodes. The evidence is not extraordinary enough to support such an extraordinary claim. But don't make me repeat that. It's only the first problem.
"As for games between humans or different engines, we have to understand that all of them use in fact some sort of "evaluation function", that does not encompass all the possible situations"
++ The key is not the evaluation function, but the calculation depth. [ . . . ] When a human or engine loses, it is not because of a worse evaluation function, but because of too shallow calculation. The side that has not looked deep enough loses.
That's not true, and it is proven: Lc0 at 1 node already plays at master level and can beat many engines rated less than 2100; back in 1995 Fritz 3 running on a Pentium at 90 MHz won the computer championship ahead of Star Socrates and Deep Blue, the latter using 14 processors capable of calculating 5 million positions per second.
"Same strength, similar biases."
++ No, that is not true. Human or engine players of the same strength have different biases. Some (Petrosian) will never sacrifice until completely clear, some (Tal) will always sacrifice if they see some chance. Some will always trade, some will avoid trades. LC0 has an elaborate evaluation function (thick nodes), Stockfish has a simpler evaluation function but calculates deeper (thin nodes).
Different moves may be equally effective, but it is well known that the stronger a player, the lower (on average) their ACL (average centipawn loss) to the best available engine. Therefore a game between two equally strong players resembles more and more a game autoplayed by the best engine, the stronger they are. Hence their systematic errors cannot be uncorrelated.
As for "thick" and "thin" nodes, the evaluation of a position is for the major part based on experience. This experience can be acquired both from past complete (but always imperfect) games played from similar positions, or from incomplete games played "on the fly", from the current position, in the player's mind (through calculations). The effects are comparable: the greater the experience, the "better" (more effective) the evaluation, but also more biased.
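The ACL measure mentioned above is straightforward to compute given engine evaluations before and after each move (a minimal sketch; real analysis tools also cap individual losses and handle mate scores specially):

```python
# Average centipawn loss (ACL): how far a player's moves fall
# short of the engine's preferred move, averaged over the game.
def average_centipawn_loss(best_evals, played_evals):
    # best_evals[i]: eval (centipawns, from the mover's view) after the
    # engine's best move at position i; played_evals[i]: eval after the
    # move actually played. Loss is clamped at zero.
    losses = [max(0, b - p) for b, p in zip(best_evals, played_evals)]
    return sum(losses) / len(losses)

# A player who matches the engine twice and drops 30 cp once:
print(average_centipawn_loss([20, 50, 10], [20, 20, 10]))  # 10.0
```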
"Without other informations, it is: play always A, of course."
++ I stress the importance of incorporating chess knowledge into the brute force method.
The information available is the knowledge which makes the player choose A in that example. New information might alter the expected outcome of A.
"I deduced the very same data you mention from premises not based on those data"
++ No, you did not: you said you cannot tell. 127 draws, 6 white wins, 3 black wins, what is a plausible distribution of games with 0, 1, 2, 3... errors? I say: 126, 9, 1. You say?
Data are factual information, like the increasing percentage of draws in ICCF games. Your error rate cannot be directly measured, so it cannot be considered a datum. You are as usual very careless about these distinctions, if not blatantly unscrupulous. Plus, I did not say I cannot tell which is the error rate; I said it's impossible to tell, because the errors are not statistically independent. What's your estimate, if errors are not statistically independent?
"do you think he too is not capable enough to conceive those calculations"
++ Of course Tromp being a mathematician is capable enough to conceive my calculations, but that was not his subject: he was interested in the number of legal positions.
Please, @tygxc: decades have passed talking about the accuracy of engines, and no one has found 5 minutes to do your calculations and derive your error rate per move?
3 cloud engines of 10^9 nodes/s can thus weakly solve chess in 5 years.
An alternative would be a cluster of 3,000 desktops of 10^6 nodes/s.
SF on modern desktops already reaches 10⁷ nodes/second, so why you insist so much on cloud engines is beyond me (or maybe not). But that's the least of the problems, anyway.
#3303
"in 1995 Fritz 3 running on a Pentium at 90 MHz won the computer championship ahead of Star Socrates and Deep Blue, the latter using 14 processors"
++ 1995 is 27 years ago. Stockfish defeated LC0.
"Different moves may be equally effective"
++ Different human or engine players have different strengths and weaknesses.
An oversight by one does not correlate to an oversight by the other.
"The information available is the knowledge, which makes the player choose A in that example."
++ The expected outcome of hanging a piece is a loss.
However sacrifices can be correct and yield an expected outcome of a win.
"I said it's impossible to tell" ++ I say it is possible to tell: 126, 9, 1.
"because they are not statistically independent"
++ I say they are statistically independent based on the available games. Assume they are statistically independent, which is plausible as they are two different entities: ICCF grandmasters with engines. Now calculate the result: 126, 9, 1. Now note there are only 9 single errors, which are independent as they occur alone, and one double error, independent or not. Thus the initial assumption is valid in this case.
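The "126, 9, 1" triple is the kind of distribution a Poisson model of errors per game produces. The sketch below shows that calculation under the independence assumption that is precisely what is in dispute here; the rate λ is fitted so that the ratio of 1-error to 0-error games matches 9/126 (an illustration of the argument, not a validation of it):

```python
# Poisson model of errors per game across 136 ICCF WC games
# (127 draws, 9 decisive). Assumes errors are independent -
# exactly the assumption under dispute in the thread.
import math

games = 136
lam = 9 / 126  # fitted so E[1-error games] / E[0-error games] = 9/126

def expected(k):
    # Expected number of games containing exactly k errors.
    return games * math.exp(-lam) * lam ** k / math.factorial(k)

for k in range(3):
    print(k, round(expected(k), 1))  # 0 126.6 / 1 9.0 / 2 0.3
```

Note that the fitted model actually yields about 126.6, 9.0 and 0.3 games with 0, 1 and 2 errors, so the quoted "1" double-error game is a generous rounding; and the fit says nothing about whether independence holds.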
"no one has found 5 minutes to do your calculations and derive your error rate per move?"
++ So it must be wrong because nobody else has thought of it before?
"SF on modern desktops already reaches 10⁷ nodes/second"
++ Then the alternative would be a cluster of 300 modern desktops.

#3298
The 50-moves rule is never invoked in positions > 7 men
Another guess.
You seem to have kind of an addiction to guessing. It is a chess player's habit by contrast with a chess solver's habit.
"In contrast to", in contrast to "by contrast with". Apart from that, this is ridiculous, wouldn't you agree? I mean, even I don't talk such crapola. At least most of the time.
here we go again with another trash forum

"in 1995 Fritz 3 running on a Pentium at 90 MHz won the computer championship ahead of Star Socrates and Deep Blue, the latter using 14 processors"
++ 1995 is 27 years ago. Stockfish defeated LC0.
And previously A0 and Lc0 defeated SF. In history there have been several examples of "thicker" vs. "thinner" nodes; sometimes the first approach prevailed, sometimes the other.
"Different moves may be equally effective"
++ Different human or engine players have different strengths and weaknesses.
An oversight by one does not correlate to an oversight by the other.
I have given a detailed explanation. You don't even object and just claim otherwise.
"The information available is the knowledge, which makes the player choose A in that example."
++ The expected outcome of hanging a piece is a loss.
However sacrifices can be correct and yield an expected outcome of a win.
That's not an objection either and does not add anything to what I said.
"I said it's impossible to tell" ++ I say it is possible to tell: 126, 9, 1.
"because they are not statistically independent"
++ I say they are statistically independent based on the available games. Assume they are statistically independent, which is plausible as they are two different entities: ICCF grandmasters with engines. Now calculate the result: 126, 9, 1. Now note there are only 9 single errors, which are independent as they occur alone, and one double error, independent or not. Thus the initial assumption is valid in this case.
Your thinking is circular, I do not know how to tell you. The example you made before:
Example: What is the velocity v of an electron with charge e and mass m accelerated by a voltage V?
Solution: start from the hypothesis v << c, where c is the velocity of light. Thus use Newtonian mechanics: eV = mv² / 2 and calculate v = sqrt(2eV / m). Now check: if v << c then the hypothesis was true and the calculation valid, else change to relativistic mechanics.
is different, because the classical model is validated by experiments, for v << c, independently of the initial assumption (in fact it was thought correct for any v, before Einstein). Your reasoning is closed in itself: the result confirms the assumption with no external source of validation. Call it coherent, but fallacious. Take another example: "I assume that people never meet each other in the room R. Therefore, no more than one person can be in R at any given time. Since no more than one person can be in R at any given time, people never meet each other in R".
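The electron example can be made concrete (illustrative values, with V = 100 V chosen for the example; the point is that the consistency check v << c is backed by experiments external to the assumption):

```python
# Classical electron velocity after acceleration through V volts,
# with a self-consistency check of the non-relativistic assumption.
import math

e = 1.602e-19  # electron charge, C
m = 9.109e-31  # electron mass, kg
c = 2.998e8    # speed of light, m/s
V = 100.0      # accelerating voltage, V (illustrative value)

v = math.sqrt(2 * e * V / m)  # classical result from eV = m v^2 / 2
print(f"v = {v:.3e} m/s, v/c = {v / c:.3f}")
# v/c ≈ 0.02, so the classical assumption is self-consistent here -
# and, unlike the ICCF error-count argument, it is also validated
# by experiment, not only by itself.
```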
"no one has found 5 minutes to do your calculations and derive your error rate per move?"
++ So it must be wrong because nobody else has thought of it before?
I am not saying that. Since you are so fond of probabilities, I am asking you whether you think it is plausible that, assuming your calculations are correct, nobody else has thought of them before.
"SF on modern desktops already reaches 10⁷ nodes/second"
++ Then the alternative would be a cluster of 300 modern desktops.
So Schaeffer solved checkers as a hobby (you say that!), starting the project with 200 computers and letting them run for 20 years, but he (or some other hobbyist) cannot find the resources to start a project to solve chess in 20 years using 300 computers. Do you think that's plausible?