Chess will never be solved, here's why

playerafar

In computer self play games - many of the errors would be repeated by both sides.
Or - just inferior moves. 
More likely positionally inferior than tactically inferior.
Or - just moves the computer is in error assigning as 'superior' where there are in fact many comparable moves.  
Computers detecting their own errors? 
Basing the error percentage on the computers' own reports of same?

Regarding 'facts and figures' people disagreeing with posts can use the same facts and figures already presented in the posts they're disagreeing with.
There's no 'imperative obligation' to obtain one's own 'peer-reviewed' facts and figures.   
Idea:  present one's own logical arguments instead of parroting stuff from the net.   

tygxc

#2120
"You mean figure 2, page 7?" 
Yes, figure 2, my bad.

"you are implicitly assuming that the game-theoretic value is a draw"
++ Yes, I wrote that above: I assume the generally accepted hypothesis that chess is a draw to be true. From that it follows that any decisive game contains an odd number of errors, at least 1. The number of decisive games gives the error rate in absolute terms. We do not know which move was the error, but we know it is there. The fact that the draw rate goes up with more time even supports the hypothesis.
To the mathematical nitpickers: I fit
error = a / time^b
and estimate 2 parameters a and b from 2 data points, that is mathematically sufficient but not redundant. In plain English:
time * 60 = error / 5.6
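
For readers who want to check that two-point fit, here is a minimal Python sketch. It assumes that the decisive-game rates quoted later in the thread (11.8% at 1 s/move, 2.1% at 1 min/move) stand in for errors per game; the exponent that comes out matches the shorthand above if it is read as "multiplying the time by 60 divides the error rate by about 5.6".

import math

# Two data points, taken from figures quoted later in this thread
# (assumption: decisive-game fraction ~ errors per game).
t1, e1 = 1.0, 0.118        # 1 s/move, 11.8% decisive
t2, e2 = 60.0, 0.021       # 60 s/move, 2.1% decisive

# Fit error = a / time^b: two equations, two unknowns.
b = math.log(e1 / e2) / math.log(t2 / t1)   # ~0.42
a = e1 * t1 ** b                            # ~0.118

print(60 ** b)                              # ~5.6: time x60 divides the error by ~5.6
print(a / (60 * 60 * 60.0) ** b)            # extrapolated to 60 h/move: ~7e-4 errors/game

Whether the decisive-game fraction really equals the error rate, and whether a two-point power-law fit can be extrapolated over three orders of magnitude, is exactly what is disputed in the rest of the thread.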

"if the game was a win for White or for Black, following your reasoning we should think that at least one error occurred in all the drawn games"
++ Yes, that is right: if chess were a win for white or even for black, then all the drawn games would contain an odd number of errors, at least 1. That would lead to the odd outcome that more time = more errors.

"Errors might have occurred in all games, independently from the result."
++ That is right, under the hypothesis that chess is a draw a drawn game might contain an even number of errors, at least 2. However, at 1 error per 10^5 positions at 60 h/move, the probability of 2 errors would be 1 in 10^10, far smaller than 1 in 10^5.

#2115
"if errors are not statistically independent, how can you say that P(2nd error | 1st error) ~= P(1st error)?"
++ There is hardly any statistical dependence. If the 1st error allows a checkmate in 1, then that ends the game and there can be no 2nd error, but that is rare. For all practical purposes
P(2 errors) = P(1 error)^2.
I wrote the exact formula with conditional probability
P(error 1 & error 2) = P(error 2|error 1) * P (error 1)
only to avoid mathematical hair splitting.


tygxc

#2121

"In computer self play games - many of the errors would be repeated by both sides."
++ There are not that many errors, and the error rate goes down with more time:
time * 60 = errors / 5.6

"Or - just inferior moves. "
++ There are no 'inferior moves': a move is either an error or not. It either changes the game state (draw / win / loss) or it does not.

"More likely positionally inferior than tactically inferior."
++ Positionally inferior = tactically inferior at greater depth.

"the computer is in error assigning as 'superior' where there are in fact many comparable moves.  "
++ There are no superior moves: all moves that do not change the game state (draw / win / loss) are objectively equally good.

"Computers detecing their own errors? 
Basing the error percentage on the computers' own reports of same?"
Under the generally accepted hypothesis that chess is a draw each decisive game must contain an odd number of errors at least one. So from the number of decisive games we can infer the error rate.

"Regarding 'facts and figures' people disagreeing with posts can use the same facts and figures already presented in the posts they're disagreeing with."
++ Yes, I would welcome it if people backed up their claims with facts and figures. Now it is more like 'you cannot conclude what you conclude from those facts and figures', after which they jump to their own conclusions without any backing.

"There's no 'imperative obligation' to obtain one's own 'peer-reviewed' facts and figures."
++ That is correct: there is no obligation, and neither is there any ownership of the facts and figures I present. Of course people are free to base their arguments on the facts and figures I present, or on the facts and figures they can bring from other reputable sources.

"present one's own logical arguments instead of parroting stuff from the net."
++ I do present my own logical arguments and back these up by facts and figures from reputable sources. If I were to use facts and figures without quoting the sources, then people would question those facts and figures. There are even those here who demand page and line for each source I quote.

haiaku
tygxc wrote:

"you are implicitly assuming that the game-theoretic value is a draw"
++ Yes, I wrote that above: I assume the generally accepted hypothesis that chess is a draw to be true. From that it follows that any decisive game contains an odd number of errors, at least 1. The number of decisive games gives the error rate in absolute terms. We do not know which move was the error, but we know it is there. The fact that the draw rate goes up with more time even supports the hypothesis.
To the mathematical nitpickers: I fit
error = a / time^b
and estimate 2 parameters a and b from 2 data points, that is mathematically sufficient but not redundant. In plain English:
time * 60 = error / 5.6

How can you say the error rate is that function of time? And those two equations are not coherent. Besides, you admit that the games may contain any odd number of errors, but then you make your calculations assuming there is only one, not at least one. In other words, if errors by both sides do compensate each other, that parameter "a" can be any number.

goodwitch13

In any math problem you are solving for something... chess is a series of moves... not unlike algebra or fractions. You raise a value, you remove a value, in hopes of solving for X or calculating the next 1/2 move to count for the whole move you intend to make next. Chess isn't solved by who wins and loses. Chess is solved by each piece being a number that is part of infinite equations, each one having its own solution. E.g., opening Knight g1 to f3 puts me in position to take the pawn at e5. If I am solving for how to take the pawn with the Knight, I know in 2 moves I am able to do this. As a problem it could be written as such: (g1+f3)(f3-e5)=x(pawn) or as (2+2)(2-2)... (+4)(-4)=0(pawn). Another example of this is the opening move (QPD2+QPD3)=-1qp: by moving forward one you lost the space behind you for the pawn. Followed by QBC1-KPE5=4(pawn) or (-5+1)=X... (-4)=X: take away 5 spaces, gaining a new position of 1. By adding 4 to both sides, X solves for 4 spaces to move to take the pawn. Yes, chess can be solved. It just depends on how you wish to play: as a strategist or a mathematician.

tygxc

#2140
"How can you say the error rate is that function of time?"
At infinite time the error rate is 0. At zero time the error rate is infinite.
So the simplest monotone function that satisfies those 2 asymptotic boundary conditions is

error = a / time^b

or, if you prefer

log (error) = log(a) - b * log(time) 

"And those two equations are not coherent." ++ What do you mean?

"the games may contain any odd number of errors, but then you make your calculations assuming there is only one"
++ P(5 errors) << P(3 errors) << P(1 error) 

"if errors by both sides do compensate each other, that parameter "a" can be any number."
We are talking about 11.8% decisive games @ 1 s/move and 2.1% decisive games @ 1 min/move. That justifies neglecting the occurrence of 3, 5, 7, 9... errors in 1 game, even more so at 60 h/move.
It would be different if you took the results of 2 beginners: that would lead to many more decisive games because of more errors per game.
All this relates to the 1st step: inferring the error rate from the autoplay game results.
Estimating the parameters a & b comes from the 2nd step: extrapolating from 1 s/move and 1 min/move to 60 h/move. That is independent from neglecting 3, or 5, or 7 errors and only assuming 1 error as there are only 2.1% decisive games @ 1 min/move.
The 3rd step is converting error/game to error per position, assuming games of 40 moves i.e. 80 positions.
The 4th step is then to infer from that that the top 4 engine moves only produce 1 error in 10^20 positions and thus suffice as only 10^17 positions need consideration.
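
A sketch of steps 3 and 4 as just described, taking the extrapolated error rate at 60 h/move at face value and assuming the top-4 candidate moves fail independently; both assumptions are contested later in the thread.

# Step 3: errors per game -> errors per position, assuming 40-move games (80 positions).
errors_per_game = 7e-4                      # extrapolated value at 60 h/move (see fit above)
errors_per_position = errors_per_game / 80  # ~1e-5

# Step 4: if each of the top-k moves independently errs at that rate, the chance
# that all 4 candidates are wrong in the same position is the product.
p_all_top4_wrong = errors_per_position ** 4         # ~1e-20

# Against the claimed 10^17 positions to consider:
print(errors_per_position, p_all_top4_wrong, p_all_top4_wrong * 1e17)   # last value ~6e-4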

tygxc

#2125
Game theory is a branch of mathematics.
Here is again a reference to the classical paper on solving games
https://reader.elsevier.com/reader/sd/pii/S0004370201001527?token=285C2283E58C900FB0F801D77A2E7F5BE5B1F4E3238544BC6BADBAB2FA8A956CD32DD00A4C0C577FED2A13E98A821265&originRegion=eu-west-1&originCreation=20220311103154 

tygxc

#2100
I tried to reproduce your perceived error on Stockfish 14 NNUE, but could not replicate your findings: the engine goes straight for the table base correct win 1 Rh8+ Ke7 2 Rh7+ Kf6 3 Rf7+ Kg6 4 Rd7 Rb1+ 5 Ka5 Kf6 etc. I guess there is something wrong with your version of Stockfish 14.

playerafar

"At infinite time the error rate is 0. At zero time the error rate is infinite."
A lot of math things have 'infinities' at either end.
Doesn't prove anything in between. 
Doesn't validate 'extrapolations' from the infinities.

FIDEshutoutKarjakin
🕊🇺🇦
llama51

By the way, this is fun.

Can just watch 60 seconds from where I've started it.


tygxc

#2129
"Doesn't prove anything in between."
at time 0 / move: infinite error / game
at time 1 s / move: 11.8% error / game
at time 1 min / move: 2.1% error / game
at infinite time / move: 0 errors / game
That is enough to interpolate everything in between.

playerafar

Anything is 'enough' if you want it to be.
The earth can even be flat if whoever wants that. 
Many people still believe that.
Even on chess.com there's at least one believer of that.  
And another who insists that 'viruses can't spread diseases'.

tygxc

#2133
"Many people still believe that."
On this thread there are even people who believe that:
chess is a forced win for white in 3 trillion moves starting with 1 e4 e5 2 Ba6
1 a4 is better than 1 e4 or 1 d4
positions with 7 white rooks, 3 black rooks, 3 black bishops, and 5 black knights are common
solving chess requires floating point operations
engines play weaker when they use more time
strongly solving is the same as weakly solving
Prof. van den Herik cannot even define the subject he wrote a paper about
GM Sveshnikov knew nothing about chess analysis
2 data points are not enough to estimate 2 parameters
nodes do not include evaluation
there are twice as many positions as diagrams
the 50 moves rule is invoked in most chess games
there is a huge difference between 3-fold, 2-fold, or 5-fold repetition

haiaku
tygxc wrote:

#2140
"How can you say the error rate is that function of time?"
At infinite time the error rate is 0. At zero time the error rate is infinite.

Hypothesis 1.

tygxc wrote:

So the simplest monotone function that satisfies those 2 asymptotic boundary conditions is

error = a / time^b

Hypothesis 2. Occam's razor is not a proof.

tygxc wrote:

In plain English:
time * 60 = error / 5.6

"And those two equations are not coherent." ++ What do you mean?

Wrong calculation. Are you kidding? If b = 60 and a = 5.6, then

error = 5.6 / time⁶⁰  and  error / 5.6 = 1 / time⁶⁰ = time⁻⁶⁰ ≠ time * 60

tygxc wrote:

"the games may contain any odd number of errors, but then you make your calculations assuming there is only one"
++ P(5 errors) << P(3 errors) << P(1 error) 

Hypothesis 3.

tygxc wrote:

"if the game was a win for White or for Black, following your reasoning we should think that at least one error occurred in all the drawn games"
++ Yes, that is right: if chess were a win for white or even for black, then all the drawn games would contain an odd number of errors, at least 1. That would lead to the odd outcome that more time = more errors.

Hypothesis 4.

Counter-hypothesis: not to mention pathology in game trees, even if the evaluation was strongly biased, more time would lead to more draws, because the engine plays against an equally flawed evaluator, itself. With less time the result has simply greater variance. Calculations make it possible for the evaluator to avoid those lines that lead to lower expected results, therefore stabilizing the evaluation. The average outcome is the same: 0.5 and cannot be anything but that, with self-play/self-analysis and limited search.

Fact: this phenomenon (a more stable evaluation with increasing depth) has been observed basically in any engine. Following your reasoning, we could infer that the game is a draw even using a 1700 elo-rated engine.

tygxc wrote:

"if errors by both sides do compensate each other, that parameter "a" can be any number."
We are talking about 11.8% decisive games @ 1 s/move and 2.1% decisive games @ 1 min/move. That justifies neglecting the occurrence of 3, 5, 7, 9... errors in 1 game, even more so at 60 h/move.

Hypothesis 5.

tygxc wrote:

Estimating the parameters a & b comes from the 2nd step: extrapolating from 1 s/move and 1 min/move to 60 h/move. That is independent from neglecting 3, or 5, or 7 errors and only assuming 1 error as there are only 2.1% decisive games @ 1 min/move.

I did not assume a linear dependence.
On the contrary I assumed logarithmic dependence:
at 1 s / move: 1 error / 679 positions

I have to suppose only one error per game, to obtain that error rate per position. Can you be more explicit?

tygxc wrote:

The 4th step is then to infer from that that the top 4 engine moves only produce 1 error in 10^20 positions and thus suffice as only 10^17 positions need consideration.

A magician of numbers like you, then, can answer a question I have asked you four times and which is still unanswered. I ask it here for the fifth time, in a different form.

Given a game tree of 10¹⁷ nodes, with this structure:

1) Every black node (representing a position after Black's move) has 4 child nodes (white nodes)

2) Every white node (representing a position after White's move) has 1 child node (black node)

a) What's the depth of the game tree?

b) If 60 hours are needed to produce the 4 white nodes and their 4 child black nodes (one for each white node), how many hours are needed to produce the entire tree?
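
For concreteness, here is the arithmetic the question implies when taken literally (no transpositions, 4 candidate moves for White and 1 reply for Black at every turn). This is only the arithmetic of the question as posed, not a claim about the real proof tree.

import math

total_nodes = 1e17

# Each full move multiplies the number of lines by 4 (4 white children, 1 black child)
# and contributes two nodes per line (one white, one black), so the node count is
# roughly sum_{k=1..d} 2 * 4**k ~= (8/3) * 4**d for a depth of d full moves.
d = math.log(total_nodes * 3 / 8, 4)
print(d)                       # ~27.5 full moves, i.e. roughly 55 plies

# (b) If 60 hours yield 8 nodes (4 white nodes plus their 4 black children),
# producing all 10^17 nodes at that rate takes:
hours = 60 * total_nodes / 8
print(hours / (24 * 365))      # ~8.6e13 years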

playerafar

Since everybody seems to agree or at least not contest that chess cannot be strongly solved in a feasible way as things stand now ...
then there could be many discussions as to what computers might otherwise achieve instead.  Within chess.  
Have computers changed, or are they changing, chess or chess development as a result of computer-solving anything?
Tactics puzzles and their solutions can be computer-checked.
One can computer-check one's game.
One can 'work out' against pre-selected positions - or play computers.
(could make a good 'warmup' just before a real tournament game against 'the living')

But do those change anything about chess 'instruction'?
Or about chess theory?
The biggest such change might be that players now have a much faster route to improvement.
Because computers have compiled many tactics puzzles. 
Isolating key concepts.  Much more efficient than games.
But in that role - the computers are simply presenting - not instructing.
But the analysis (Stockfish feature) button can tell players why and how some moves don't work in puzzles.

tygxc

#2135

"At infinite time the error rate is 0. At zero time the error rate is infinite.
Hypothesis 1."
++ That is obvious.
With unlimited time you can strongly solve chess by visiting all positions so there is 0 error.
If you have 0 time, then you can get nothing right at all, so infinite error.

"error = a / time^b
Hypothesis 2. Occam's razor is not a proof."
++ What else would you fit to it?
Teacher: "if 12 oranges cost $3, then how much do 24 oranges cost?"
You: "I cannot tell, there might be a discount for more oranges, some oranges may have more mass and the sale may be per kg instead of per number, maybe there are no more than 12 oranges available, no I cannot answer the question"

"If b = 60 and a = 5.6"
++ I did not say b = 60 and a = 5.6. You misunderstood.

"P(5 errors) << P(3 errors) << P(1 error) 
Hypothesis 3."
++ That is no hypothesis, that is basic arithmetic.

"Hypothesis 4. Not to mention pathology in game trees, even if the evaluation was strongly biased, more time would lead to more draws, because the engine plays against an equally flawed evaluator, itself."
++ It plays against itself at both time controls. The draw rate goes up with more time.

"Counter-hypothesis: with less time the result has simply greater variance. Calculations make it possible for the evaluator to avoid those lines that lead to lower expected results, therefore stabilizing the evaluation. The average outcome is the same: 0.5 and cannot be anything but that, with self-play/self-analysis and limited search."
++ No, there are clear trends:
more time = more draws
white wins > black wins
draws > decisive games
Even more: even with stalemate = win the decisiveness does not go up.
Moreover look at figure 4d: for King's Gambit the white / black trend is reversed, so you cannot attribute the outcome to variance

"this phenomenon (a more stable evaluation with increasing depth) has been observed basically in any engine. Following your reasoning, we could infer that the game is a draw even using a 1700 elo-rated engine."
++ It is not only more stable, it approaches all draws. That is why at TCEC they had to impose unbalanced openings to avoid all draws. There the engines are different, so they do not play against themselves.
https://tcec-chess.com/ 

"We are talking about 11.8% decisive games @ 1 s/move and 2.1% decisive games @ 1 min/move. That justifies to neglect the occurence of 3, 5, 7, 9... errors in 1 game, even more so at 60 h/move.
Hypothesis 5."
++ That is not a hypothesis, that is basic arithmetic: you can neglect a smaller number compared to a larger number.

"I have to suppose only one error per game, to obtain this error rate per position. Can you be more explicit?"
++ A priori a game can contain 0, 1, 2, 3, 4, 5... errors. Under the generally accepted hypothesis that chess is a draw, an even number of errors leads to a draw and an odd number of errors leads to a decisive game. We observe more draws than decisive games and even more so with more time / move. Thus there are more games with an even number of errors than with an odd number of errors and even more so with more time.
That leads to
games with 0 errors > games with 1 error > games with 2 errors > games with 3 errors...
in particular games with 1 error >> games with 3 errors
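
One way to make this parity argument quantitative is to model errors per game as Poisson-distributed and mutually independent; both are modelling assumptions, not facts established in the thread. Under the draw hypothesis a game is decisive exactly when the error count is odd, which pins down the mean error rate from the decisive-game fraction:

import math

# For a Poisson(lam) error count, P(odd number of errors) = (1 - exp(-2*lam)) / 2.
def lam_from_decisive(p_decisive):
    return -0.5 * math.log(1 - 2 * p_decisive)

print(lam_from_decisive(0.118))   # ~0.13 errors/game at 1 s/move
print(lam_from_decisive(0.021))   # ~0.021 errors/game at 1 min/move

At such small rates the decisive-game fraction and the mean number of errors per game are nearly the same number, which is the sense in which games with 3 or more errors are being neglected.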

"Given a game tree of 10¹⁷ nodes"
++ The tree does not even have 10^17 nodes. See the paper on solving checkers: there were as many nodes in the tree as positions to consider per node. Many leaves do not grow to branches.

"1) Every black node (representing a position after Black's move) has 4 child nodes (white nodes)

2) Every white node (representing a position after White's move) has 1 child node (black node)"
++ That is right.

"a) What's the depth of the game tree?"
++ I guess between 30 and 50.
For data on that look at the game lengths of near perfect ICCF drawn games.
https://www.iccf.com/event?id=85042 

"b) If 60 hours are needed to produce the 4 white nodes and their 4 child black nodes (one for each white node), how many hours are needed to produce the entire tree?"
++ about 5 years
Do not forget about transpositions, which make up a large part of chess.
For more information on that see the paper on Losing Chess: they used a transposition table.

DiogenesDue
tygxc wrote:

#2133
"Many people still believe that."
On this thread there are even people who believe that:
chess is a forced win for white in 3 trillion moves starting with 1 e4 e5 2 Ba6
1 a4 is better than 1 e4 or 1 d4
positions with 7 white rooks, 3 black rooks, 3 black bishops, and 5 black knights are common
solving chess requires floating point operations
engines play weaker when they use more time
strongly solving is the same as weakly solving
Prof. van den Herik cannot even define the subject he wrote a paper about
GM Sveshnikov knew nothing about chess analysis
2 data points are not enough to estimate 2 parameters
nodes do not include evaluation
there are twice as many positions as diagrams
the 50 moves rule is invoked in most chess games
there is a huge difference between 3-fold, 2-fold, or 5-fold repetition

I've marked the ones that are the most obviously false in red.  I'm sure other people would dispute more of these as well.  Most of these are your own straw man arguments.

One thing is abundantly clear.  You lack the objectivity to ever be trusted with the scientific method, and have zero hope of ever being part of an actual solution for chess.  You are driving everything from the 5 year conclusion tossed out as an offhand comment by a deceased GM, and your hypotheses are all contorted to fit this reality you have decided upon in advance.

haiaku
tygxc wrote:

"At infinite time the error rate is 0. At zero time the error rate is infinite.
Hypothesis 1."
++ That is obvious.

Obvious to you, and that does not prove anything.

tygxc wrote:

"error = a / time^b
Hypothesis 2. Occam's razor is not a proof."
++ What else would you fit to it?
Teacher: "if 12 oranges cost $3, then how much do 24 oranges cost?"

Teacher: "if a car costs $30000 and can safely reach 125mph, how much does a car that can  safely reach 300mph cost?" You: "$60000, obviously".

tygxc wrote:

"If b = 60 and a = 5.6"
++ I did not say b = 60 and a = 5.6. You misunderstood.

Evasive. How

error = a / time^b

becomes

time * 60 = error / 5.6?

tygxc wrote:

"P(5 errors) << P(3 errors) << P(1 error) 
Hypothesis 3."
++ That is no hypothesis, that is basic arithmetic.

Lie 1. "<<" means "much smaller than", so your "basic arithmetic" holds true only if you suppose the error rate is very small and the errors are statistically independent.

tygxc wrote:

"Hypothesis 4. Not to mention pathology in game trees, even if the evaluation was strongly biased, more time would lead to more draws, because the engine plays against an equally flawed evaluator, itself."
++ It plays against itself at both time controls. The draw rate goes up with more time.

Oh, thanks, I didn't understand that. Is that supposed to be an objection?

tygxc wrote:

"Counter-hypothesis: with less time the result has simply greater variance. Calculations make it possible for the evaluator to avoid those lines that lead to lower expected results, therefore stabilizing the evaluation. The average outcome is the same: 0.5 and cannot be anything but that, with self-play/self-analysis and limited search."
++ No, there are clear trends:
more time = more draws
white wins > black wins
draws > decisive games

"No" what? I use the same trends: 1) more time more draws, because the evaluation function is more stable with time; 2) White wins more than Black, but we cannot infer it is a win for White. In fact you say it is generally accepted the game value is a draw; 3) more draws than decisive games, because the game is too complex for an engine to consistently find a way and win against itself.

tygxc wrote:

Even more: even with stalemate = win the decisiveness does not go up.
Moreover look at figure 4d: for King's Gambit the white / black trend is reversed, so you cannot attribute the outcome to variance.

Nonsense.

tygxc wrote:

"this phenomenon (a more stable evaluation with increasing depth) has been observed basically in any engine. Following your reasoning, we could infer that the game is a draw even using a 1700 elo-rated engine."
++ It is not only more stable, it approaches all draws. That is why at TCEC they had to impose unbalanced openings to avoid all draws. There the engines are different, so they do not play against themselves.

As always, do you not understand, or do you pretend not to? It does not matter whether an engine plays against itself; what matters is whether it plays against an equally strong engine. In that case, as I said, the game might simply be too complex for either of the two to defeat the other. If an engine plays against itself or analyses, it cannot be stronger than itself.

tygxc wrote:

"We are talking about 11.8% decisive games @ 1 s/move and 2.1% decisive games @ 1 min/move. That justifies to neglect the occurence of 3, 5, 7, 9... errors in 1 game, even more so at 60 h/move.
Hypothesis 5."
++ That is not a hypothesis, that is basic arithmetic: you can neglect a smaller number compared to a larger number.

Lie 2. You cannot say that 3+1 ≈ 3 because 1 is smaller than 3. You have not proven you didn't make a hypothesis. You are just repeating your hypotheses, denying they are hypotheses, because you cannot prove they aren't.

tygxc wrote:

"I have to suppose only one error per game, to obtain this error rate per position. Can you be more explicit?"
++ A priori a game can contain 0, 1, 2, 3, 4, 5... errors. Under the generally accepted hypothesis that chess is a draw, an even number of errors leads to a draw and an odd number of errors leads to a decisive game. We observe more draws than decisive games and even more so with more time / move. Thus there are more games with an even number of errors than with an odd number of errors and even more so with more time.
That leads to
games with 0 errors > games with 1 error > games with 2 errors > games with 3 errors...
in particular games with 1 error >> games with 3 errors

Lie 3. 10 > 9 > 8 does not mean 10 >> 8. This also proves that you supposed only 1 error per game in your calculations, so:

tygxc wrote:

Estimating the parameters a & b comes from the 2nd step: extrapolating from 1 s/move and 1 min/move to 60 h/move. That is independent from neglecting 3, or 5, or 7 errors and only assuming 1 error as there are only 2.1% decisive games @ 1 min/move.

is lie 4.

tygxc wrote:

"Given a game tree of 10¹⁷ nodes"
++ The tree does not even have 10^17 nodes. See the paper on solving checkers: there were as many nodes in the tree as positions to consider per node. Many leaves do not grow to branches.

Don't you understand, do you pretend to not understand, or do you blatantly lie? You have already admitted that in checkers 10¹⁴ nodes have been searched, out of a search space of 5*10²⁰ nodes:

haiaku wrote:

Earlier you tried to use a paper to prove that only the square root of the search space has been checked, and there is no proof. Now you say "the other paper"... Which one? Page, line, or paragraph? Are you saying that indeed 10^14 nodes have been searched?

You:

tygxc wrote:

#1828
https://www.researchgate.net/publication/231216842_Checkers_Is_Solved
page 4 right column, §1-3, yes this paper says 10^14

Now you say that in chess it is enough to search less than 10¹⁷ nodes? That's not what you said earlier:

tygxc wrote:

"You stated that you expect to search only 10¹⁷ positions out of 10³⁷ and in order to do that, you want to search only 4 candidates for White and one for Black at any move."
++ Yes, that is correct.

"how do you determine those 4 top candidates and be sure the optimal move is among them?"
++ I determine the 4 top candidates with the Stockfish evaluation function or a simplified version of it. I reckon the optimal move is among them by extrapolation: 1 error in 10^5 positions for the top 1 move, 1 error in 10^10 positions for the top 2 moves, 1 error in 10^15 positions for the top 3 moves, 1 error in 10^20 positions for the top 4 moves. That should do as I plan to consider only 10^17 positions.

Don't play with words, @tygxc, and don't just answer "I don't play with words". Try harder. Search space, proof tree and nodes actually searched are not the same thing. The game tree I asked about is made of nodes actually searched.

tygxc wrote:

"a) What's the depth of the game tree?"
++ I guess between 30 and 50.

Evasive. "I guess"? I didn't ask for a guess. You can use some approximation, but that is far too much vague. That answer is paramount for your theory.

tygxc wrote:

"b) If 60 hours are needed to produce the 4 white nodes and their 4 child black nodes (one for each white node), how many hours are needed to produce the entire tree?"
++ about 5 years

Wrong.

tygxc wrote:

Do not forget about transpositions, which make up a large part of chess.
For more information on that see the paper on Losing Chess: they used a transposition table.

And in fact they searched 10¹⁷ nodes. That is not the search space, which is much bigger, and not the proof size, which is much smaller. Read the paper on Losing Chess more carefully. BTW, any common engine uses a transposition table.

DiogenesDue
haiaku wrote:

You are just repeating your hypotheses, denying they are hypotheses, because you cannot prove they aren't.

The central technique, yes.  It happens over and over again in many threads, and he is refuted in every one of them, simply moving on to the next when one dies out from people being tired of repeating themselves.