I predict that @tygxc will fail to be convinced.
Hilariously, he thinks Sveshnikov solved B33 in the game-theoretic sense a decade before computers reached the strength of the best (puny) humans! (Note carefully that what they need to do is converge on the standard of play of a 32-piece tablebase. They are presently woefully short of reaching the level of a much smaller tablebase.)
I am now going to solve C67.
It's a draw, by example. Ta da!!
[irony]
the 16 years of solving checkers were not his [Schaeffer's] job, but rather some personal side-project, i.e. hobby.
If you say so... Anyway thank you for your answer about the cost.
About your project idea: I insisted so much on the game-theoretic value of the game because it is crucial for your theory. As I said, the percentage of drawn games is not sufficiently strong evidence to assume that the game-theoretic value is a draw. More importantly, it is not sufficient to give a reliable estimate of the error rate per move, because that estimate starts from the assumption that errors are statistically independent.

Now, suppose an engine is playing in autoplay: it is White's turn at move n of the game, and the engine analyzes the position P, reaching depth d; then it plays a move M which is a mistake, turning a draw into a loss for White. After that, the engine takes Black and analyzes the new position P₁ at depth d (on average). It already analyzed the line starting from P₁ on the previous turn, but now one more ply has been played; nonetheless, to a good approximation, reaching depth d at plycount 2n-1 gives the line the same evaluation as depth d+1 at plycount 2(n-1), and it is well known that the difference between an evaluation at depth d and one at depth d+1 becomes smaller and smaller, on average, the larger d is. That means an engine very likely does not recognize at plycount 2n-1 an error made at plycount 2(n-1).

Most of the time such mistakes can be exploited only by playing a very precise move, so while its evaluation is still wrong the engine will likely play another wrong move, one that fails to exploit the error. Even if the engine is lucky and plays the right move at plycount 2n-1, it faces the same problem at plycount 2n+1, 2n+3, and so on. So if an engine makes a mistake in autoplay, it will very likely soon make another mistake with the other colour that rebalances the game. That is why, even if engines are becoming more accurate and the game value is a draw, it is still not possible to say whether they make 0, 2, 4, 6 or more errors per game: in general, the errors are not statistically independent.
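To see the point numerically, here is a toy Monte Carlo sketch (my own construction, not anything from the thread) of that rebalancing effect. The model is hypothetical: a blunder turns a drawn position into a lost one with probability p_err per ply, and the opponent exploits it with probability p_exploit per ply, otherwise soon errs back and the game rebalances. All parameter values are made up for illustration.

```python
import random

def simulate(games, plies, p_err, p_exploit, seed=1):
    """Return (draw rate, average errors per game) for the toy model.

    p_err:     chance per ply that the side to move blunders a draw away
    p_exploit: chance per ply that a won position is actually converted;
               a low value models correlated (mutually cancelling) errors
    """
    rng = random.Random(seed)
    draws = 0
    total_errors = 0
    for _ in range(games):
        winning = False   # does one side currently hold a won position?
        errors = 0
        for _ in range(plies):
            if not winning:
                if rng.random() < p_err:       # blunder: draw -> loss
                    winning = True
                    errors += 1
            else:
                if rng.random() >= p_exploit:  # unexploited: loss -> draw
                    winning = False
                    errors += 1
        draws += (not winning)
        total_errors += errors
    return draws / games, total_errors / games

# Correlated errors: blunders are rarely exploited, so they come in
# compensating pairs -- many errors per game, yet almost all draws.
d_corr, e_corr = simulate(10_000, 80, p_err=0.05, p_exploit=0.10)

# Independent case: the first blunder is reliably converted to a win.
d_ind, e_ind = simulate(10_000, 80, p_err=0.05, p_exploit=1.00)

print(f"correlated:  draw rate {d_corr:.2f}, errors/game {e_corr:.1f}")
print(f"independent: draw rate {d_ind:.2f}, errors/game {e_ind:.1f}")
```

Both runs use the same per-ply blunder rate, yet the correlated run produces a high draw rate with several errors per game, while the independent run produces mostly decisive games with about one error each. So a high draw percentage alone cannot distinguish "near-zero errors" from "many paired errors", which is exactly the objection above.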
You used simple maths to do your calculations, yet you think we cannot understand them. Did Tromp make such calculations to estimate the error rate per move? If not, do you think he too is incapable of conceiving those calculations and arriving at your very conclusion?