Yes, your number is probably lower than mine because you included a large number of people with ratings above 2000. The difference seems to shrink at higher levels.
Another factor is how 'active' the player is. I looked specifically at people who are playing on chess.com almost every day and playing in USCF tournaments nearly every week. Obviously, the less frequently someone plays, the more likely the difference will be distorted. In particular, people who haven't recently played in a lot of USCF tournaments will show a smaller difference if they have been playing lots of blitz, because they have been improving and that improvement isn't yet reflected in their USCF rating.
I don't understand people who talk about "apples and oranges". I think they simply lack the ability to reason abstractly. It's easy to see blatant differences between two objects, but it's harder to understand the relationships that do exist between them. People seem to routinely exaggerate the difference between blitz and standard. A difference does exist, but a relationship exists regardless.
It is also important to note that this relationship holds only for people who play both USCF standard and chess.com blitz. If someone has never played standard, or never played blitz, they will obviously have some initial difficulty adjusting to the different time controls, but that difficulty is reduced with experience.
It's surprising how adamant people are about the idea that chess.com ratings are inflated, rather than deflated. Inflation may have been the case years ago, but it is certainly no longer true.
Furthermore, since this seems to be a constant source of confusion for people who lack reading comprehension skills: WE ARE NOT TALKING ABOUT 'ONLINE' CHESS. Really, chess.com should change the name of its 'postal' chess, since all the variants here are technically 'online'.
-Adam Rinkleff
There appear to be two kinds of people at Chess.com: those who care about their ratings and wonder how well they correspond to “real” ratings like USCF ratings, and those who dismiss the importance of ratings and question how they could possibly mean anything at all. Another point of contention has been whether Chess.com ratings are inflated or "deflated" relative to USCF ratings. Fully aware of the risks of entering into a debate that has become heated at times, I humbly offer my own insights below, along with selected summaries from people who have considered this question before.
A couple of years ago, for example, DrawMaster examined data from more than 100 Chess.com members, rated 1500 to 1699, who also reported USCF ratings on their profiles. His main finding was that these members had Chess.com blitz ratings that were, on average, 73 points lower than their reported USCF ratings (1592 vs. 1665). He also provided a graph showing a linear relationship between USCF ratings and Chess.com ratings. He concluded tentatively that online blitz ratings "generally, if only slightly" underestimate OTB playing strength.
AdamRinkleff also argued that Chess.com blitz ratings were generally lower than USCF ratings, but he stated that the discrepancy was quite a bit larger. Based on his own observations of about 20 people who maintain active ratings in both systems, AdamRinkleff suggested that Chess.com ratings are consistently 200-300 points lower than USCF ratings. Unfortunately, this claim rested on a small sample and lacked supporting statistical analysis. This led to substantial debate on the forum with no real consensus. Many posters claimed that the two sets of ratings were like “apples and oranges” and could not be meaningfully compared.
In an attempt to test AdamRinkleff's hypothesis, ShindouHikaru posted data from 54 Chess.com members who had USCF ratings that had been verified by official records. Unlike DrawMaster's data, this sample was well above average in skill level, with many of the players taken from a list of titled players on the website. ShindouHikaru concluded that although many players had lower Chess.com ratings than USCF ratings, there was no formula he could detect for explaining the relationship.
I got interested in the question at this point and decided to do some additional statistical analyses. I took ShindouHikaru's data, which he had kindly posted on the forum, double-checked the USCF ratings, and adjusted them for changes in the intervening months. I also added data to the sample by using my friends, friends of friends, etc. I only included members if they had active, nonprovisional ratings based on many games. I examined blitz ratings, online (correspondence) ratings, tactics ratings, and USCF regular ratings.
Obviously, this is a nonrepresentative convenience sample, and I would have preferred a large, random sample of chess players from both systems. Nevertheless, the data are still useful and allow us to answer some questions about how the variables relate to each other. In the end, I was able to gather data from 80 people, which I figured would be large enough to detect statistically significant effects. If any Chess.com staff are reading this and curious about these findings, I could do a much more extensive analysis with access to more data...
According to my results, the average Chess.com blitz rating for this sample was 1817 and the average USCF rating was 1945. This supports previous statements that Chess.com ratings tend to be lower than USCF ratings. The difference was 128, which is higher than DrawMaster’s estimate, but lower than AdamRinkleff's. All the variables in the analysis were highly correlated with each other, suggesting that they are all aspects of the same underlying entity of chess skill. USCF ratings correlated r = .83 with tactics ratings, r = .79 with online ratings, and r = .93 with blitz ratings, all p < .001.
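For readers who want to try this kind of analysis on their own friends list, here is a minimal Python sketch of the correlation step. The rating pairs in it are made-up placeholders (not the actual sample), and it assumes SciPy is installed.

```python
# Minimal sketch of the correlation step; the rating pairs below are
# made-up placeholders, not the data from the sample described above.
from scipy.stats import pearsonr

# Hypothetical paired ratings for the same players
uscf_regular = [1650, 1720, 1890, 2010, 2150, 1980, 1830, 2200]
blitz        = [1540, 1610, 1760, 1900, 2040, 1850, 1700, 2090]

r, p = pearsonr(uscf_regular, blitz)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```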
Blitz ratings are strongly related to USCF ratings in these data. Honestly, I was shocked at the magnitude of the correlation. Rather than apples and oranges, the situation is more like apples and apples of the same variety, but from a different tree. Then again, the fact that the sample consisted of active, relatively high level players who were serious enough about their Chess.com ratings to put their names on their profiles may have influenced the results. I would expect the correlation to go down with a larger, more casual sample of players.
Because the ratings are so closely related, it makes sense to use regression to calculate an equation to predict one from the other. This produced the following formula: USCF estimate = (Chess.com rating * .93) + 283. Or, if you don’t like math, you could do almost as well with the simpler formula where you just add the difference: USCF estimate = Chess.com rating + 128. Remember these are just estimates based on generalizations. Your mileage may vary.
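If you would rather let the computer do the arithmetic, here is a small Python sketch of the two formulas above. The function names and the 1600 example rating are my own illustration, not anything taken from the analysis itself.

```python
# Two ways to estimate a USCF regular rating from a Chess.com blitz rating,
# using the coefficients quoted in the text.

def uscf_from_blitz_regression(blitz_rating: float) -> float:
    """Regression-based estimate: USCF ~ (blitz * 0.93) + 283."""
    return blitz_rating * 0.93 + 283

def uscf_from_blitz_offset(blitz_rating: float) -> float:
    """Simpler estimate: add the average 128-point difference."""
    return blitz_rating + 128

if __name__ == "__main__":
    blitz = 1600  # example blitz rating
    print(round(uscf_from_blitz_regression(blitz)))  # 1771
    print(round(uscf_from_blitz_offset(blitz)))      # 1728
```

Note that the two formulas agree at a blitz rating of about 2214; below that point the regression estimate is slightly more generous, and above it the simple offset is.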
As a final note, I was perusing the discussion on this issue in the forums and found a post by Pegrin from several years ago. Using Google, he found 59 Chess.com profiles that contained USCF ratings. With the caveat that people’s self-reported ratings might not be accurate, he computed a Pearson correlation of r = .67 and a regression equation of USCF estimate = (Chess.com rating * .74) + 280.5. My data suggest that this would underestimate the USCF rating for serious chess players, but I’ll leave it to the reader to decide which formula works best for them.
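To make that difference concrete with a hypothetical example (not a player from either sample): for a Chess.com blitz rating of 1800, my regression formula gives (1800 * .93) + 283 = 1957 and the simple offset gives 1928, while Pegrin's formula gives (1800 * .74) + 280.5 = 1612.5, more than 300 points lower.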
So what's my conclusion, for those of you who just skipped to the bottom? According to all the available data, Chess.com ratings and USCF ratings are substantially and meaningfully related to each other. They are clearly the same type of fruit. Also, people generally have lower Chess.com ratings than their published USCF ratings. Finally, the pattern in the data is clear enough that you can get a quick and reasonably accurate estimate of your USCF rating from your Chess.com blitz rating. You can try one of the formulas above, or if you are math-phobic, just draw a regression line over the scatterplot with your finger and see where your USCF rating should be.