"As Schaeffer wrote: 'Even if an error has crept into the calculations, it likely
does not change the final result.'
Sorry, math proofs don't work like that. That's an appeal to authority fallacy.
Mathematicians do sometimes make such comments about very difficult proofs but, as you say, this comment is not about what a proof is. It is about human fallibility, and uncertainty about whether an error, if present, has compromised the work. Schaeffer would definitely agree that if an error were found, some of the analysis would need to be redone for the conclusion to stand. He (and everyone else) would expect the same conclusion in the end.
It's a little different in the case of the solution of checkers, since it is more about the correctness of the code than the correctness of a proof designed for humans. I recall hearing that when the 4 color theorem was proved (it was actually achieved while I was at school), there was distrust from some graph theorists because it was not practical for a human to check the computer's working! Of course, what was really needed was for someone to check that the program was correct according to the mathematics - i.e. that it checked examples in a valid way and that it checked all the necessary examples. The execution of the program could be taken as reliable.
There is a printed proof available - I haven't read it. It's rather long but probably not impossible: about the size of a fourth volume added to Whitehead & Russell's Principia Mathematica, and I suspect in not dissimilar style.
"The execution of the program could be taken as reliable."
Really?
Yes, really. When you do deterministic operations with a computer, you can be confident of the result. This makes it possible to spend several months running the Lucas-Lehmer algorithm to find a new prime, for example. Or to calculate 105 trillion digits of pi. This was probably a much bigger computation than solving checkers (and surely more pointless, beyond breaking the record!). To do it, you need extremely high reliability in the elementary calculations.
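For reference, the Lucas-Lehmer test mentioned above is remarkably simple to state. This is a minimal sketch; real record searches (e.g. GIMPS) use heavily optimised FFT-based multiplication plus built-in error checking, not naive big-integer arithmetic like this:

```python
# Lucas-Lehmer primality test for Mersenne numbers M_p = 2^p - 1.
# A minimal sketch, not the optimised code used in record searches.

def lucas_lehmer(p: int) -> bool:
    """Return True if M_p = 2**p - 1 is prime (p must be an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2^3-1, 2^5-1, 2^7-1 and 2^13-1 are prime; 2^11-1 = 23 * 89 is not.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```

The point about reliability is that a record-sized run iterates that squaring loop tens of millions of times on numbers millions of digits long, and a single flipped bit anywhere invalidates the final residue.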
They say they used 100 million gigabytes (c. 1e17 bytes) of data while the entire result would only require 4.36e13 bytes, assuming incompressibility. It may be that the algorithm uses a lot more storage to run efficiently in time, for reasons that would require knowledge of the algorithm.
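The 4.36e13 figure can be sanity-checked: assuming the digits are incompressible (random-looking), each decimal digit carries log2(10) bits of information, so:

```python
import math

# Sanity check on the quoted sizes for the 105-trillion-digit pi record.
# Assumption: the digits are incompressible, so an optimal encoding
# needs log2(10)/8 bytes per decimal digit.

digits = 105e12
bytes_needed = digits * math.log2(10) / 8
print(f"{bytes_needed:.2e} bytes")                 # ~4.36e13 bytes
print(f"workspace ratio: {1e17 / bytes_needed:.0f}x")  # roughly 2300x
```

So the working storage was over two thousand times the size of the answer, which supports the guess that the algorithm trades space for time.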
Of course, when elementary calculations are not close enough to 100% reliable, you can always increase the reliability by doing error checking. The very simplest (and quite expensive) way is to do the calculation twice in parallel and repeat anything that does not agree. This roughly squares the error rate, a vast improvement.
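The duplicate-and-compare idea can be sketched in a few lines. `flaky_add` here is a hypothetical unreliable primitive invented for illustration (with probability `err` it returns an answer that is off by one); the real setting would be hardware arithmetic with a tiny per-operation error rate:

```python
import random

# A minimal sketch of duplicate-and-compare error checking.
# `flaky_add` is a hypothetical unreliable primitive, standing in for
# hardware arithmetic with a small per-operation error rate `err`.

def flaky_add(a, b, err=1e-3):
    return a + b + (1 if random.random() < err else 0)

def checked_add(a, b, err=1e-3):
    # Run the operation twice; retry anything that does not agree.
    while True:
        r1, r2 = flaky_add(a, b, err), flaky_add(a, b, err)
        if r1 == r2:
            return r1

# An undetected error now needs both copies to fail in the same step,
# so a per-op error rate of about 1e-3 falls to about 1e-6: the rate
# is roughly squared, as described above.
```

In this toy model both copies failing always produce the same wrong value, so the residual error rate is exactly err squared; with real hardware, independent errors rarely agree, making the check even stronger.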
I've spent probably a quarter of my working life debugging system problems. In that time I've come across two instances of machine bugs (one in microcode and one probably ditto) plus two (essentially the same) that were either the machine or right in the heart of error-recovery routines in the operating system. (That's not including my own PCs going poo.) There were a lot more problems with bugs in language code, e.g. statistics code that runs out of precision and gives duff results without telling you. There have been well-publicised potential logical problems with computer chips. I've spent another quarter of my working life installing and applying fixes to mainframe operating systems and components; the fixes released must have run to hundreds of thousands.
I came to the conclusion that any program of moderate size usually does three completely distinct things: firstly what the users think it does, secondly what the documentation says it does, and thirdly what it actually does - the exception being that the second is sometimes absent.
But those problems pale into insignificance if you're handling large amounts of data over a significant period compared with human fumble problems. Operators mounting the wrong tapes, failing to spot error messages, running things in the wrong order etc.
So I would say there are two chances - slim and none - of Schaeffer's calculations having run perfectly, given the length of time it took, the number of systems involved and the volume of data processed.
Of course, if you ran the same approach on separate systems with different developers and ran a full check of the results, it would greatly increase confidence (I believe that was done with some of the tablebases).
The problem is you can't. Not only is most of Schaeffer's result missing because there was no room to save it, but even if it all ran without error, and you could run exactly the same code without error again, you wouldn't get the same solution to check.
Saying we have a machine to solve chess would be a useless hypothetical. Also, how long can a chess game go until someone gets an advantage, and how long can the opponent prolong not getting checkmated? How many calculations would it need for chess to be "solved"? Chess games do have a maximum possible move count because of the 50-move rule. I would like some insight, and hopefully I'm not asking useless questions.
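On the maximum game length: a crude upper bound follows from the 50-move rule, under which a draw can be claimed after 50 full moves with no pawn move or capture. This back-of-envelope sketch deliberately over-counts (it assumes every pawn advance and every capture is a separate resetting event):

```python
# Crude upper bound on chess game length under the 50-move rule.
# Assumptions (deliberately generous): each of the 16 pawns makes all
# 6 of its possible advances, all 30 capturable pieces are captured,
# and each such irreversible event is followed by 50 quiet full moves.

pawn_advances = 16 * 6   # 96 pawn moves at most
captures = 30            # 30 pieces can be captured at most
resets = pawn_advances + captures

upper_bound_full_moves = resets * 50
print(upper_bound_full_moves)  # 6300 full moves
```

More careful counting (a pawn capture is both events at once, promotions, and so on) brings the bound down to roughly 5,900 full moves, i.e. around 12,000 half-moves, but even the crude version shows the game length is finite and the game tree, while astronomically large, is bounded.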