@12195
"I need to know these 4 ways of how to draw a game."
Against 1 e4:
- 1 e4 e6
- 1 e4 c5
- 1 e4 e5 2 Nf3 Nf6
- 1 e4 e5 2 Nf3 Nc6
Against 1 d4:
- 1 d4 d5 2 c4 c6
- 1 d4 d5 2 c4 dxc4
- 1 d4 d5 2 c4 e6
- 1 d4 Nf6 2 c4 e6
- 1 d4 Nf6 2 c4 g6
The redundancy makes it fail-safe.
Even if a double error were found in one line, there are 3-4 backup lines of defense to draw.
I don't know why my opponents bother playing on after those. They must not be aware that chess has been solved.
"As Schaeffer wrote: 'Even if an error has crept into the calculations, it likely does not change the final result.'"
Sorry, math proofs don't work like that. That's an appeal to authority fallacy.
Mathematicians do sometimes make such comments about very difficult proofs but, as you say, this comment is not about what a proof is. It is about human fallibility and uncertain belief about whether an error has compromised the work. Schaeffer would certainly agree that IF an error were found, some of the analysis would need to be redone for the conclusion to stand. He (and everyone else) would expect the same conclusion in the end.
It's a little different in the case of the solution of checkers, since it is more about the correctness of code than the correctness of a proof designed for humans. I recall that when the four colour theorem was proved (while I was at school, as it happens), some graph theorists distrusted it because it was not practical for a human to check the computer's working. Of course, what was really needed was for someone to check that the program was correct according to the mathematics - i.e. that it checked cases in a valid way and that it checked all the necessary cases. The execution of the program could then be taken as reliable.
There is a printed proof available - I haven't read it. It's rather long but probably not impossible to get through. I'd guess it's about the size of a fourth volume added to Whitehead & Russell's Principia Mathematica, and I suspect not dissimilar in style.
"The execution of the program could be taken as reliable."
Really?
Yes, really. When you do deterministic operations with a computer, you can be confident of the result. This makes it possible to spend several months running the Lucas-Lehmer test to find a new prime, for example. Or to calculate 105 trillion digits of pi. This was probably a much bigger computation than solving checkers (and surely more pointless, beyond breaking the record!). To do it, you need extremely high reliability in the elementary calculations.
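The Lucas-Lehmer test itself is simple enough to sketch in a few lines of Python - in a real Mersenne-prime search, all the effort goes into squaring numbers that are millions of digits long, not into the logic:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, the Mersenne number 2**p - 1
    is prime iff s(p-2) == 0 (mod 2**p - 1), where s(0) = 4 and
    s(k+1) = s(k)**2 - 2."""
    m = (1 << p) - 1          # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # in production this modular squaring is the bottleneck
    return s == 0

# 2**11 - 1 = 2047 = 23 * 89 is the classic composite case.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```

A months-long run is just this loop with p in the tens of millions, which is exactly why the reliability of each elementary squaring matters so much.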
They say they used 100 million gigabytes (c. 1e17 bytes) of storage while the final result only requires about 4.36e13 bytes, assuming incompressibility. Presumably the algorithm needs far more working storage in order to run efficiently in time, for reasons that would require knowledge of the algorithm.
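The arithmetic behind those figures is easy to check: a decimal digit carries log2(10) ≈ 3.32 bits of information, so 105 trillion digits need about 4.36e13 bytes even with perfect packing. (The 105-trillion-digit count and the ~1e17-byte figure are the reported numbers; the rest is just unit conversion.)

```python
import math

digits = 105_000_000_000_000        # 105 trillion decimal digits (reported)
bits_per_digit = math.log2(10)      # information content of one decimal digit
result_bytes = digits * bits_per_digit / 8

print(f"{result_bytes:.2e}")        # 4.36e+13
print(f"working/result ratio = {1e17 / result_bytes:.0f}")
```

The ratio works out to a few thousand, consistent with the algorithm trading a great deal of scratch space for speed.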
Of course, when elementary calculations are not close enough to 100% reliable, you can always increase the reliability by error checking. The very simplest (and quite expensive) way is to run the calculation twice in parallel and repeat any step where the two copies disagree. This roughly squares the error rate, a vast improvement.