Why do the problems frequently seem much easier than the rating applied to them? "That was too easy to be rated that high!!"
Look at the comments after solving a problem, and complaints about the ratings of the problems are amazingly frequent. From a historical perspective, it is also amusing that when complainers quote a problem's rating, you can watch that same problem's rating vary by hundreds of points over a period of several years. (And it doesn't vary because people complain.)
So how did we get to where we are today, and what do the Tactics Trainer problems' ratings really mean?
Good question, and here is my notion (based purely on my background in math, computer science, operations research and chess) of ground truth.
Day 1. The overlords of Chess.com spoke, "So let it be said, so let it be written: There shall be a Tactics Trainer, because it will be really cool, we'll all have fun with it, and it can help us warm up before an OTB game or speed chess online."
Day 2. The programmers programmed.
Day 3. The programmers were given numerous chess problems and rated ALL of them (every single one) at, let's assume, 2000 points. This sidesteps a major problem, to wit: how can one compare tactics problems to each other, both qualitatively and quantitatively? The answer: no need, don't bother, we'll get the members at Chess.com to do it for us. The programmers realized that if enough chessophiles try the problems, a quantitative comparison between problems emerges on its own. That is, if a problem is very easy, then a larger percentage of players will get the answer correct, and vice versa. Easy problems get more correct answers, so those problems have their ratings reduced. Hard problems get more wrong answers, so those ratings increase.
Day 4. There was a beta period during which bugs were worked out and it was determined that the maximum rating change for any given problem should be plus or minus (about) 16 points, with the actual change scaled by solving speed and, on longer problems, by the number of moves correctly entered. (As we all know to our dismay, Tactics Trainer cannot account for our fumblings with the mouse, or the coffee we just spilled in our lap, which lead us to enter incorrect moves or to enter them slowly due to distractions.)
Day 5. Gradually more and more people tried Tactics Trainer, and the problems lost or gained rating points (as did the participants) in a bubble sorting process that separates both the problems and the players in a very understandable way (statistically speaking).
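The separation Day 5 describes can be shown with a toy simulation, entirely my own invention: every problem starts at the posted rating of 2000, but has a hidden "true" difficulty that determines how often random players actually solve it, and the posted rating drifts toward the truth as attempts accumulate.

```python
import random

def expected_score(a, b):
    """Standard Elo expected score for rating a against rating b."""
    return 1 / (1 + 10 ** ((b - a) / 400))

def simulate(true_ratings, n_attempts=20000, k=16, seed=1):
    """Posted ratings start at 2000 and converge toward true difficulty.

    Players of random strength (1500-2500) attempt random problems;
    each solve succeeds with the Elo probability implied by the
    problem's TRUE rating, but the POSTED rating is what gets updated.
    """
    rng = random.Random(seed)
    posted = [2000.0] * len(true_ratings)
    for _ in range(n_attempts):
        i = rng.randrange(len(posted))
        player = rng.uniform(1500, 2500)
        solved = rng.random() < expected_score(player, true_ratings[i])
        delta = k * (solved - expected_score(player, posted[i]))
        posted[i] -= delta  # problem moves opposite to the player
    return posted
```

Run it with hidden difficulties of 1600, 2000, and 2400, and the three posted ratings fan out from their common 2000 starting point into the correct order, which is the "bubble sorting" effect in miniature.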
Day 6. Now the Tactics Trainer puzzles range from a low end of (I'm guessing here) 800 to over 3400, the high score on the list, and much higher than the ratings of several GMs, IMs, etc., showing that the results one can achieve with Tactics Trainer (by whatever means) do not correspond one-for-one with playing ability.
Day 7. I realized I hadn't added to my Tactics Trainer blogs in a while. Two reasons. First, I hadn't done any tactical training in half a year. Second, I didn't really have anything useful to say until now...though I hope that is inextricably related to the first reason.
And with that, I shall rest on my writing laurels for the day, and go back to the Tactics Trainer, as I am very depressed not to have broken the 2500 barrier yet.