It's well-known that chess is not only a game, not only
art, but also science. I myself became fascinated by chess not because you could beat your granddad with it, or because you could play beautiful attacking games, but because you could
look things up afterwards.
By Arne Moll

The fact that you could actually find out what 'theory', an objective authority, had to say about some of the individual choices you made during the game was, for me, perhaps the most fascinating aspect of the game. Recently, I had an e-mail conversation with Dave Munger, a psychology writer who blogs at
Scienceblogs and is the president of
Research Blogging, a site which promotes serious blogs on
peer-reviewed science articles. He asked me if there were any scientific standards for
chess writing or blogging. Suddenly, I realized there were none. Or not really, anyway. Of course, chess is a well-known field of research in various subjects, for instance psychology (pattern recognition, etc.), mathematics and artificial intelligence, and there have been countless peer-reviewed publications on chess in these research areas. But what about chess as we chess fans know it? What about Nimzowitsch and his theories... in fact, what about the Sicilian Najdorf?
Well, we have
Chess Informant and the
New in Chess Yearbook series, which definitely use certain standards (like standardized symbols). Was this what Munger meant? Surely, there's more to chess writing than using certain symbols? For example, are the annotations in these volumes actually consistently
checked by the editors? I'm sure
someone checks all variations, but do standards exist for doing this? Perhaps the Informant editors check the variations by hand, or with
Rybka, and the New in Chess editors check them with
Fritz? Who is right when the engines give different evaluations?
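At least the mechanical part of such a check could be automated. Here is a minimal sketch of replaying a printed line and asking an engine for a verdict, assuming the python-chess library and a locally installed Stockfish binary; both are purely illustrative choices, not tools any Informant or New in Chess editor is known to use.

```python
# Minimal sketch: verify that a published variation is legal and get an
# engine verdict on its final position. Assumes the python-chess library
# (pip install chess) and a Stockfish binary on the PATH; both choices
# are illustrative, not what any editor actually uses.
import chess
import chess.engine

# A well-known Najdorf move order, written as a list of SAN moves.
variation = ["e4", "c5", "Nf3", "d6", "d4", "cxd4",
             "Nxd4", "Nf6", "Nc3", "a6"]

board = chess.Board()
for san in variation:
    board.push_san(san)  # raises an error if the printed move is illegal

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    info = engine.analyse(board, chess.engine.Limit(depth=20))
    # Report the evaluation from White's point of view, plus the depth,
    # so the check itself is reproducible.
    print(f"Eval after the line: {info['score'].white()} at depth 20")
```

Of course, this only settles legality and produces one engine's number; the harder question of which engine's number to trust remains.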
And then, of course, there's the obvious fact that chess writing is not only about moves, but also about
concepts and
theories. What can we say about the scientific basis for Nimzowitsch's
Mein System? Can a chess author 'prove his point' by simply referring to Nimzowitsch, like a mathematician may refer to Euclid or Gödel? Surely, he has to
demonstrate, in analysis with concrete variations, what he means? Come to think of it, it's not clear how we should evaluate 'authorities' in chess in the first place. Is it enough to consider Jonathan Rowson an authority simply because he is a grandmaster who himself works in the field of science? What about Max Euwe's strategy concepts? Or, if we consider chess
history writing, is
Edward Winter the final authority when it comes to the truth?

Still, I wrote to Munger, it should be possible to introduce scientific standards in chess writing. Some kind of symbolic notation would probably be essential. Ending each variation with a clear evaluation (rather than the vague 'needs to be checked') would be another useful step. I would also add specifying the engine (and the computer configuration) that checked the variations, and mentioning where the engine and the human disagree in their evaluation. Referring to other publications would also help to put things in perspective. Maybe you can think of more standards which could be useful for chess publications? A sketch of what such a standardized record might look like is given below.
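Purely as an illustration, the standards proposed above could be captured in a small data structure. The format and field names below are my own invention, not anything a publisher actually uses:

```python
# Hypothetical record for a 'checked variation', capturing the standards
# proposed above. The format and field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CheckedVariation:
    moves: str            # the line itself, in standard algebraic notation
    final_eval: str       # a definite Informant-style symbol, never 'needs to be checked'
    engine: str           # engine name and version used to verify the line
    hardware: str         # computer configuration, since it affects the result
    depth: int            # search depth at which the evaluation was recorded
    human_eval: str       # the annotator's own verdict
    disagrees: bool       # True where engine and human evaluations differ
    references: list[str] = field(default_factory=list)  # earlier publications on the line

najdorf_check = CheckedVariation(
    moves="1.e4 c5 2.Nf3 d6 3.d4 cxd4 4.Nxd4 Nf6 5.Nc3 a6",
    final_eval="+/=",
    engine="Rybka 3",     # one of the engines mentioned above
    hardware="(machine configuration goes here)",
    depth=20,
    human_eval="=",
    disagrees=True,       # engine and annotator differ, so flag it
)
```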
A final question is whether we actually want such rigid standards. Who would benefit from them? Clearly, they would make life hard for those authors who write chess books only to make as much money as possible, instead of trying to find out any 'truth' in chess. (And there are a lot of them!) It would become more difficult for an author to claim that the Grand Prix Attack leads to a forced win for White, or that there is a watertight defence against 1.d4. And people could refer to such publications on the Research Blogging site, which would be nice. But would buyers care? Would
you? Would scientific chess books get more attention in the press, or less? Finally, do we want authors to make money from scientific publications?
Shouldn't science - including chess science - be free for all?