His meaning of "relevant" is blatantly wrong, because the (relatively) tiny number of positions he quotes corresponds to a mere 57 binary choices.

which corresponds to a mere 57 binary choices
lol, when you put it like that it certainly seems absurd.

That's a nice sanity check.
No matter how eloquently you argue for the 10^17 reduction, after realizing this you have to go back and figure out how it's wrong.
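(For anyone who wants to verify that sanity check: the "57 binary choices" figure is just the base-2 logarithm of 10^17. A quick check in Python, using nothing beyond the standard library:)

import math

positions = 10**17
bits = math.log2(positions)          # how many yes/no decisions 10^17 outcomes amounts to
print(f"log2(10^17) = {bits:.1f}")   # ~56.5, i.e. about 57 binary choices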

This seems like a chat for smart people. I'm not one of them, so I'm going to leave.
Sounds smart.

As you point out, @llama36, only the choices of one side contribute to the number.
Empirically the defender not only has a lot of legal moves, but very often several reasonable ones (4 is one conservative estimate of the typical number of non-blunder moves in positions from actual play).
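(A back-of-the-envelope sketch of why the defender's choices matter: if the refuting side has on the order of 4 non-blunder replies at each of its turns, the number of defences a solution must answer grows exponentially in the number of those turns. The figure of 40 defender turns below is purely an illustrative assumption.)

# Rough count of defender lines a purported solution must answer.
# Assumptions (illustrative only): ~4 reasonable defender replies per turn,
# ~40 defender turns per game.
reasonable_replies = 4
defender_turns = 40
lines_to_answer = reasonable_replies ** defender_turns
print(f"{lines_to_answer:.2e}")      # ~1.2e+24, far more than 10^17
print(lines_to_answer > 10**17)      # True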
@6636
"Are you saying you can square root the positions because we're assuming we can ignore all non-optimal play by black?" ++ We can even discard optimal play by black. Suppose both 1 e4 e5 and 1 e4 c5 draw. To weakly solve chess it is possible to look only at 1 e4 e5 and discard 1 e4 c5.
"How do you discard non-optimal moves without analyzing them?"
++ By the end result. If all lines end in a drawn 7-man endgame tablebase position or a prior 3-fold repetition, then that retroactively validates all black moves as optimal.
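(To make the terms of this disagreement concrete, here is a minimal AND-OR sketch of what a weak solution requires, on a made-up toy tree rather than real chess positions: at the strategy side's turn one sufficient move may be kept and the alternatives discarded, as in the 1 e4 e5 example above, but at the opponent's turn every reply must be met.)

# At the strategy side's turn it is enough to find ONE move that secures the
# target result (here: at least a draw); at the opponent's turn EVERY move
# must be answered.
def secures_draw(position, strategy_side_to_move, result_of, moves_of):
    children = moves_of(position)
    if not children:                                  # terminal position
        return result_of(position) >= 0               # 0 = draw for the strategy side
    if strategy_side_to_move:
        return any(secures_draw(p, False, result_of, moves_of) for p in children)
    return all(secures_draw(p, True, result_of, moves_of) for p in children)

# Toy tree (invented labels): the opponent moves first and has two tries.
toy_moves = {
    "start": ["A", "B"],                 # opponent to move: both A and B must be handled
    "A": ["A-draw", "A-loss"],           # strategy side to move: one drawing reply is enough
    "B": ["B-draw"],
    "A-draw": [], "A-loss": [], "B-draw": [],
}
toy_results = {"A-draw": 0, "A-loss": -1, "B-draw": 0}
print(secures_draw("start", False, toy_results.get, toy_moves.get))   # True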

You should stop being completely tedious because you're becoming Kinda Spongey.
I believe there is an old aphorism concerning the color of pots and kettles that might apply here.
@6635
"What weak chess players call "relevant" has no place in weakly solving chess, as indicated by it having no place in the academic literature."
++ Check Schaeffer's solution of checkers: only 10^14 positions relevant out of the 5*10^20 legal ones.
Only 19 of the 300 openings relevant.
@6640
"No matter how eloquently you argue for the 10^17 reduction, after realizing this you have to go back and figure out how it's wrong."
++ 10^17 is not wrong. Chess is just not as deep and wide as some people seem to think.
However, 10^17 is still a huge number.
3 powerful computers working 24/7 for 5 years is huge.
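(A quick arithmetic check on what those figures imply. The 5-year and 3-computer numbers are from the post above; the per-machine rate is just the throughput they would require for 10^17 positions.)

positions = 10**17
computers = 3
seconds = 5 * 365.25 * 24 * 3600                                 # ~1.58e8 seconds in 5 years
rate_per_machine = positions / (computers * seconds)
print(f"{rate_per_machine:.2e} positions/second per machine")    # ~2.1e+8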

You should stop being completely tedious because you're becoming Kinda Spongey.
I believe there is an old aphorism concerning the color of pots and kettles that might apply here.
I'm afraid I don't think it applies & there's no need for that kind of pedantry. Save it for someone who can learn from it, perhaps.
The fact that you immediately replied with more of the same might indicate that it does apply. I certainly don't believe that some posters are capable of learning much.

Just pointing out that someone who insists on misinterpreting another poster's use of the term "car", goes off on long sidetracks on that point, and then complains that someone else noticing said mistake is reacting inappropriately, is in fact acting inappropriately.
It's likely most people here don't care for or pay much attention to those who assume the mantle of arbiter of acceptable posting.

Let's say there are two chess-playing super computers that have equal processing power. Neither has an advantage over the other in CPU speed or in any other parameter by which computers can be compared. They are also running the identical chess engine. What could possibly be the explanation for one computer defeating the other? Is that even possible?
There are plenty of examples of Leela vs Stockfish, AlphaZero vs Komodo, Fritz vs Leela, etc... But I have never seen Stockfish vs Stockfish on identical computing platforms.

As has been pointed out in the past, engines that are "identical" in software and hardware configuration parameters still run on different hardware under different software instances, and variances will occur.
If you have ever overclocked a CPU, then you will know that CPUs that are supposed to be identical...are not. They are built to fall within tolerances. Software running on an OS is subject to variances in resource sharing and interrupts, etc. The programming *is* the same for identical releases, for the record.
Much of the bad information on this thread comes from people who seem to know diddly and squat about computers, Tygxc included.
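(A toy illustration of the timing point above, not a real engine: the same deterministic search, given the same wall-clock budget twice, can stop at a different depth because of OS scheduling jitter, and the move it reports can then differ. The evaluation numbers here are invented.)

import time

def score(move, depth):
    # Invented evaluation: move "B" only reveals its strength at depth >= 6.
    table = {"A": 10, "B": 5 if depth < 6 else 50, "C": 1}
    for _ in range(50_000 * depth):   # burn CPU so deeper levels cost real time
        pass
    return table[move]

def pick_move(time_budget_s):
    deadline = time.monotonic() + time_budget_s
    best, depth = None, 1
    while time.monotonic() < deadline:     # iterative deepening under a clock
        best = max("ABC", key=lambda m: score(m, depth))
        depth += 1
    return best, depth - 1

# Two "identical" runs with the same budget may stop at different depths,
# and therefore return different moves, purely because of timing variance.
print(pick_move(0.05))
print(pick_move(0.05))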

IM Levy Rozman did an interesting experiment that comes close to what I was talking about above (Stockfish vs Stockfish), but he forced a particular opening sequence before allowing them to play. He thinks that if he hadn't done that, the computers would simply play a Ruy Lopez to a draw every time. But would they? I would find it much more interesting if the engines were allowed to calculate from move #1. And even if most games resulted in a draw, what could possibly be the explanation for any game *not* ending in a draw?
https://www.youtube.com/watch?v=Vq-iWlbqX-0

'Cause every hand's a loser, and every hand's a winner, and the best that you can hope for is to die in yer sleep...
@6559
"What could possibly be the explanation for one computer defeating the other?"
++ One making a mistake. It happens, though rarely.
That is why TCEC imposes slightly unbalanced openings to avoid all draws.
Assume a computer calculates 20 ply deep and plays against itself.
Assume there is a tactic that lies 21 ply deep.
So the computer misses it and plays its move. After that move the tactic lies only 20 ply deep, and now it finds it.
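(The same horizon effect in miniature, on an invented line rather than a real position: a fixed-depth search scores a move by what it can see, so a decisive blow one ply past the horizon is invisible until the game advances a ply and brings it inside the search depth.)

# An invented forced line: quiet evaluations until a tactic lands at ply 21.
line_eval = [0.2] * 21 + [-9.0]                     # index = plies ahead of the start

def eval_at_depth(plies_played, depth):
    horizon = min(plies_played + depth, len(line_eval) - 1)
    return line_eval[horizon]                       # a depth-limited search sees only this far

print(eval_at_depth(0, 20))    #  0.2 -> the 21-ply tactic is past the horizon; the move looks fine
print(eval_at_depth(1, 20))    # -9.0 -> one ply later the same tactic is inside the horizon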
"I have never seen Stockfish vs Stockfish on identical computing platforms."
++ There is AlphaZero vs. AlphaZero.
See Figure 2 https://arxiv.org/pdf/2009.04374.pdf
At 1 s/move: 88.2% draws
At 1 min/move: 97.9% draws