Brilliant!
FIDE vs Chess.com ratings explained

StMichealD, Chess Mentor ratings mean almost nothing for a number of reasons (e.g. the lesson ratings are only rough estimates, and features like hints and critical squares distort them); you don't need to do any analysis to know this.
btickler You raised some good points. I quite agree with your first one (players inflating their ratings). But at least sometimes it's possible to detect those players: those reporting nice round numbers such as 1800 are probably not to be trusted, and I also remember a couple of players who reported their best FIDE performance rating and mentioned as much in their profile descriptions.
That's one of the reasons I pointed out that what I provided is only a rough estimate, and I do recognise that these are different rating systems that aren't directly comparable.
But considering these issues, and that the question comes up so often, this kind of analysis is the best one can manage, and it does tell us a few things (for instance, it's quite evident that online ratings run significantly higher than live ratings). Just keep in mind that these are estimates and nothing more.

Cool... I just want people to be clear that this data has built-in issues and can't really be used to back up any hard conclusions. I agree this is the best that could be done with the data available.
It's not unlike survey results where people opt in or receive an incentive to participate: the way people come to be in the sample skews the results. Self-reported FIDE ratings have too many built-in motivations for not being truthful, and that can't really be mitigated.
There is another way to do this analysis that avoids the practical problems:
look at the rating-update formulas and deduce how much an interval of, say, 100 FIDE points corresponds to on chess.com. Someone did a similar correlation study between the Swedish national rating (LASK) and the official FIDE Elo rating.
The correlation they found was LASK = 1.12*FIDE - 255, and the explanation was that the update formulas "stretched" the scale. In LASK a player would be expected to score 62.5% against an opponent 100 points weaker, but in FIDE Elo the expectation would be 64%. That makes a difference of 100 FIDE points equivalent to about 109 LASK points.
Maybe a more theoretical analysis, or a computer simulation, could avoid some of the real-world problems with reported ratings?
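For what it's worth, here is a minimal sketch of that back-of-the-envelope calculation, assuming both systems use a logistic expected-score curve and differ only in the scale constant (the 400 in FIDE's formula). The LASK scale constant isn't given in the post, so it is solved from the quoted 62.5% figure:

```python
import math

def expected_score(diff, scale):
    """Logistic expected score for a player rated `diff` points above the opponent."""
    return 1.0 / (1.0 + 10.0 ** (-diff / scale))

# FIDE uses a scale constant of 400: a 100-point edge gives ~64%.
fide_e = expected_score(100, 400)

# Solve for the LASK scale constant that yields 62.5% at a 100-point edge
# (assumption: LASK also uses a logistic curve, just with a different scale).
lask_scale = -100 / math.log10(1 / 0.625 - 1)

# LASK point difference that matches FIDE's 64% expectation.
lask_diff = lask_scale * math.log10(fide_e / (1 - fide_e))

print(f"E_FIDE(100) = {fide_e:.3f}")
print(f"LASK scale ~ {lask_scale:.0f}")
print(f"100 FIDE points ~ {lask_diff:.0f} LASK points")
```

Run as-is this gives roughly 113 LASK points per 100 FIDE points, in the same ballpark as the quoted 109 and the 1.12 slope of the LASK/FIDE regression.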

Everyone is having a great time here and then btickler comes in...
(just kidding :P) Thanks for the interesting post, Gagaringambit!

@OP ... The mean ratings could be accompanied by either (i) standard errors or (ii) 95% confidence intervals to give an idea of how precise they are.
A sample size of 121 is more than enough for this kind of analysis, but the players should be qualified, e.g. "Have played 20 or more games in each type of play", and then randomly selected after such qualification.
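For instance, a minimal sketch of how to attach a standard error and a 95% confidence interval to a mean rating; the numbers below are made up and would be replaced by the real sample:

```python
import numpy as np
from scipy import stats

# Hypothetical self-reported FIDE ratings, standing in for the real sample
# of qualified players (20+ games in each type of play).
ratings = np.array([1850, 1920, 1780, 2010, 1660, 1895, 1740, 1980])

mean = ratings.mean()
sem = stats.sem(ratings)  # standard error of the mean
lo, hi = stats.t.interval(0.95, len(ratings) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.0f}, SE = {sem:.1f}, 95% CI = ({lo:.0f}, {hi:.0f})")
```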

When I was 15 and competing at local club level in Norway back in 1977, I was very proud when I beat a 1600-rated player. I was unrated. I am new here, and a bit rusty, scoring 1300-something. I guess I would lose if I met a FIDE 1300 player now.
Maybe the chess.com rating is close to FIDE above 1700?
I am talking about our standard ratings. Our 1300 players make lots of mistakes and are not always schooled in openings and so on. I guess FIDE 1300 players have solid openings, know endgames better, and play more cleanly. I am bad in openings and endgames, make a lot of unfocused mistakes, and lose a lot with black. I guess a FIDE 1300 would hold a draw with black.

Nice analysis! I hadn't seen this before; bookmarked the link. As a statistics person I do recognise some problems with it, but it's a nice starting point. I did something similar myself with USCF ratings, though not at this level of depth. The regression findings I find quite surprising, considering I've known and played people with, for example, a verified 2195 FIDE and 2700 tactics but only 1850 live standard (and not because he doesn't play much). These numbers, and what I've seen, would suggest a negative intercept and a slope greater than 1, not the other way around. Also, the average rating here for titled players is much lower than their actual FIDE rating, especially for standard. Most important, though: the people who actually report their USCF and FIDE ratings tend to be proud of them, and people whose USCF and FIDE ratings are much lower than their chess.com ratings are less likely to report.
If I had SPSS (I've only used SAS and R) I would look into this and maybe add a squared term to the analysis, which may yield better results.
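For anyone without SPSS, here is a rough sketch of what adding a squared term looks like in Python with statsmodels; the rating pairs below are invented, and you'd load the real sample instead:

```python
import numpy as np
import statsmodels.api as sm

# Made-up (chess.com, FIDE) rating pairs; in practice, load the
# ChesscomRatings.sav sample instead.
chesscom = np.array([1200.0, 1400, 1600, 1800, 2000, 2200])
fide = np.array([1150.0, 1340, 1480, 1700, 1950, 2180])

# Linear term plus a squared term to allow curvature in the fit.
X = sm.add_constant(np.column_stack([chesscom, chesscom ** 2]))
fit = sm.OLS(fide, X).fit()
print(fit.params)  # intercept, linear coefficient, quadratic coefficient
```

A significant quadratic coefficient would indicate the FIDE/chess.com relationship bends rather than following a straight line.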
Could you link the data file you used for this?

Thanks for your comments, here's the link to the spss data file:
https://dl.dropboxusercontent.com/u/31982038/ChesscomRatings.sav

Interesting analyses - well done! The puzzle is the low correlation of bullet vs online & standard, but NOT against FIDE, which I'm not sure is fully explained by the time-control argument (given that blitz corresponds well with both online and standard). Many seem to believe they can predict their own "FIDE rating" from the analyses, but that is not the case: correlations describe patterns of correspondence in samples. Related to that, a comment said one shouldn't conclude from small samples; but these analyses are fairly robust, with p-values reported as significant at the 0.001 level.

I agree that 1500 blitz is a very strong rating: a lot stronger than 1500 online, and stronger than 1500 FIDE.
Online, I feel the numbers are big compared to the difference in strength. Do you think that 1100 blitz is the same strength as 1300 online, and 1400 blitz like 1700 online?
The problem is not just the tiny sample; it is that FIDE and Chess.com use entirely different rating systems, have wildly differing pools of rated players, and apply entirely different conditions to rating games.
There is simply NO way to draw any correlation between these completely different systems.
Chess.com explains its rating system in the FAQ, I believe, while FIDE's is on their site and discussed all over the internet. The USCF, BCF, and Canada all have their own different systems as well.
There is NO way to draw ANY correlation? So you wouldn't be surprised if a 2500-rated FIDE player were rated 800 on chess.com? That would have to be the case if there were actually NO correlation whatsoever. Yes, they are different systems with different pools, but they are not COMPLETELY different. Two major things remain the same: they both involve playing chess, and stronger players usually have higher ratings. Not EVERYTHING about chess.com and FIDE is different, because YOU ARE STILL PLAYING CHESS. Blitz and standard time OTB are very different; would you contend that there is no correlation between the highest-rated blitz players and the highest-rated standard players? Is it just a crazy coincidence that you see the same players at top blitz tournaments as at top standard tournaments?
I don't know, I'm 1600 blitz (10-15 minute) here and wouldn't consider myself any higher than 1200 otb.

In further response to Estragon: A correlation is simply a measure of statistical relatedness between two measures. Here's what was reported at the top of the thread:
ELO Correlations

              Bullet  Blitz  Standard  Online  Tactics
FIDE rating    0.563  0.738     0.573   0.675    0.640
Bullet             -  0.786     0.167   0.205    0.671
Blitz                     -     0.647   0.531    0.737
Standard                            -   0.541    0.521
Online                                      -    0.534

(Pearson correlations, cases excluded pairwise; almost all correlations are significant at the 0.001 level.)
When a statistical association is established, as in the table above, the two measures co-vary: when one is high, the other tends to be high, and vice versa. Simple as that. The correlation coefficient describes the tightness of the relationship, and the p-value the statistical robustness of that relationship. With two exceptions, the coefficients in the table fall in the range 0.5-0.8, which are strong correlations. And the p-values are small: the patterns are highly robust.
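To make this concrete, here is a tiny sketch of how any one cell of that table is computed; the rating pairs are invented purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

# Invented blitz/FIDE rating pairs, just to show how one cell of the
# correlation table is produced from pairwise-complete cases.
blitz = np.array([1100, 1350, 1500, 1700, 1900, 2100, 2300])
fide = np.array([1250, 1300, 1480, 1620, 1810, 2040, 2220])

r, p = pearsonr(blitz, fide)
print(f"Pearson r = {r:.3f}, p = {p:.5f}")
```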
So on the simple question of whether ELO ratings and chess.com ratings are related, there's nothing to discuss. They are, as convincingly demonstrated in the initial analysis. To various degrees, for ratings in the various modes of play.
Why that is so is an entirely different matter; then rating algorithms etc. become part of the equation. Those interested can surely delve deeper into the logic of the relationships. But maybe rather play chess?
By the way, contrary to what btickler said, I also think there is a correlation among all these ratings.
I did not say there is no correlation at all between rankings. I said they were not directly comparable. If you are going to disagree with me, make sure it's actually me and not some straw man of your perceptions...