Really? wow
Question: Is it true that Game Review ELO is partially based on current ELO?

If you are talking about the 'report card' where the coach says "Here's how I think you and your opponent played this game for someone of your skill level."
then not only does it seem dependent on your Elo, it also seems to be very much based on your opponent's Elo.
Elsewhere (which I can't find) I compared two of my blitz games when I was rated 783, both shown as played at approx 83% accuracy and a similar number of moves.
I won one game against a 750 rated player and was assessed around 1100 and lost the other game against a 2400 rated player and was assessed at 2510.
I've mentioned this in a few posts now, because while it is a cute feature, I'm really struggling to come up with how this can be in any way meaningful, especially when trying to analyse (review) to improve.
I haven't yet had a response from anyone suggesting how this numerical assessment is supposed to be useful or even worthwhile.

I suppose the use case for users is minimal if it is dependent on current rating. If it were an objective score of the moves, it would be useful for seeing whether your games are getting stronger on average (yes, I know your rating would also improve, but if you are a quick learner the Elo gain lags behind your actual strength gain).
See below a game between Hikaru and a cheater account that was closed. The reported ELO of the cheater is nearly 4000! This could be a more useful method than mere accuracy % for finding and reporting cheaters in your games if implemented well. https://www.chess.com/analysis/game/live/67800230705?tab=review

Playing at 80% against a 1000 and playing at 80% against a 2000 isn't the same, I mean, right?
Isn't it? It might be harder to do against a higher-rated opponent, but any objective measurement of accuracy would be some measure of picking the best move in whatever circumstances are on the board. Surely? (I could very well be wrong here!)
However, the phrase "Here's how I think you and your opponent played this game for someone of your skill level." suggests it isn't meant to be an objective assessment of your play alone.
As I say above, I don't quite understand what it is measuring, or even what it is supposed to be measuring.
At the moment it only features in the beta version (I believe). I would strongly suggest including some further explanation of this measurement before it is released into the standard version.

No! They might ban you for cheating, but you won't become a GM; there's a formal procedure for becoming a GM.

Playing at 80% against a 1000 and playing at 80% against a 2000 isn't the same, I mean, right?
Isn't it? It might be harder to do against a higher-rated opponent, but any objective measurement of accuracy would be some measure of picking the best move in whatever circumstances are on the board. Surely? (I could very well be wrong here!)
A higher accuracy % is more difficult to achieve at higher levels because the best moves become less obvious. It's easy to find the best move when your opponent outright hangs a piece.

I read somewhere that one's current rating is a factor in the estimated Elo Game Review generates for them. Is this true? If I, a 650, play exactly the same moves as a GM by coincidence, will their estimated Elo be higher?
Yas

It's harder to play the best moves against a higher-rated player because they'll play in such a way that demands greater precision from you.
Or consider the opposite perspective. It is far easier for me to get 90% against a 500-rated player than against a 2000-rated player. Therefore the latter game should yield a higher estimated Elo.
I suspect that the Game Review performance rating is based on your opponent's Elo, not your own.
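If the number really is opponent-anchored, it would behave like a classic "linear" performance rating. Here is a rough Python sketch of that standard approximation; to be clear, chess.com has not published its Game Review formula, so this is only an illustration of the opponent-anchored idea, using the two games mentioned earlier in the thread (a win vs a 750 and a loss vs a 2400):

```python
def linear_performance_rating(opponent_ratings, scores):
    """Rough 'linear' performance rating: average opponent rating
    plus 400 * (wins minus losses) / games played.
    A common approximation, not chess.com's unpublished formula."""
    n = len(opponent_ratings)
    avg_opp = sum(opponent_ratings) / n
    total = sum(scores)  # score per game: 1 win, 0.5 draw, 0 loss
    return avg_opp + 400 * (2 * total - n) / n

print(linear_performance_rating([750], [1]))   # 1150.0, close to the ~1100 the review gave
print(linear_performance_rating([2400], [0]))  # 2000.0, well below the 2510 the review gave
```

The second result suggests opponent anchoring alone can't fully explain the reported numbers, since a loss to a 2400 was assessed at 2510, above a plain performance rating.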

I've noticed this too: playing my usual way against much higher-rated (+1200 Elo) opponents inflates my estimated rating.
I'd call this a "damped estimator": it applies an adjustment to your Elo but doesn't independently estimate it from scratch.
One test of the algorithm: take all the games by, say, 1000-Elo players and plot their win % against the estimated Elo of how their opponents played. It should follow the standard Elo expected-score (logistic) relationship for each player, perhaps with a slope change (10x per 280 Elo instead of 10x per 400 Elo) to account for the fact that we're eliminating the variability of one of the two players.
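For reference, the curve that test would check against is the standard Elo expected-score function. A minimal Python sketch (the 280-Elo slope is the hypothesis above, not an established constant):

```python
def expected_score(player_elo: float, opponent_elo: float,
                   slope: float = 400.0) -> float:
    """Standard Elo expected score in (0, 1).

    slope=400 is the classic Elo constant; slope=280 is the steeper
    variant suggested above for the reduced-variance case.
    """
    return 1.0 / (1.0 + 10.0 ** ((opponent_elo - player_elo) / slope))

print(round(expected_score(1000, 1000), 3))  # 0.5 against an equal-rated opponent
print(round(expected_score(1000, 600), 3))   # heavy favourite vs a 600
print(round(expected_score(1000, 600, slope=280), 3))  # even heavier with the steeper slope
```

If the plotted win % systematically deviates from this curve, the "estimated Elo" isn't behaving like an Elo at all.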

If you are talking about the 'report card' where the coach says "Here's how I think you and your opponent played this game for someone of your skill level."
then not only does it seem dependent on your Elo, it also seems to be very much based on your opponent's Elo.
Elsewhere (which I can't find) I compared two of my blitz games when I was rated 783, both shown as played at approx 83% accuracy and a similar number of moves.
I won one game against a 750 rated player and was assessed around 1100 and lost the other game against a 2400 rated player and was assessed at 2510.
I've mentioned this in a few posts now, because while it is a cute feature, I'm really struggling to come up with how this can be in any way meaningful, especially when trying to analyse (review) to improve.
I haven't yet had a response from anyone suggesting how this numerical assessment is supposed to be useful or even worthwhile.
Psychologically there definitely could be benefits: making you think you played really well and making you want to keep playing. If you know the Dunning-Kruger effect, it sort of gets you past that dip and makes people want to keep training.

I haven't yet had a response from anyone suggesting how this numerical assessment is supposed to be useful or even worthwhile.
Psychologically there definitely could be benefits: making you think you played really well and making you want to keep playing. If you know the Dunning-Kruger effect, it sort of gets you past that dip and makes people want to keep training.
Well, yes, I think that is a very good point. And, rather like the puzzle ratings, the Elo given in these game reports usually seems quite elevated.
E.g. most of the blitz games I've played against people around my current rating of 850 tend to come out with a game rating of about 1100 for the winner and about 800 for the loser.
Not always, though. There is at least one example I've found of a game with a pretty poor standard of play, which I won with 66.6% accuracy. Both players were rated over 800, but the report gives me 775 and my opponent (58% accuracy) just 520.
Perhaps I need to re-review a bigger sample of games with this 'report card' system which is relatively new to me, and I may start to see a clearer picture of what it is telling me (if anything).
I still maintain the feature needs additional explanation.

The answer to your question is: yes.
I'm pretty sure the estimated Elo is your rating +/- some set range of values: play with high accuracy and you'll bounce up against the top of that range, above your rating. I'm basing this on my own observations in the few super-high-accuracy games I've had since the feature was released.

I looked at it when they initially came up with it, and found this out:
it is a range of +/- some amount around the player's Elo;
if you do not insert any Elo into the review, it will just use your own Elo as the reference.
I played my first 3-day game at a 400 Elo rating: although I had 99.5% accuracy, my estimated Elo was around 900, I think.
I get that it's easy to have good accuracy against bad players, but the estimate doesn't care how well the opponent plays, just how high their Elo is. I can tell because I also won against someone who played at around 85% accuracy, and our Elo estimates were still trash.
I bet you could play a 300-move Stockfish draw at 400 Elo and it would give both players 800 Elo estimates. 🤣
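The behaviour described here, a raw estimate clamped to a band around the player's own rating, can be sketched in a few lines of Python. The band width of 400 is purely a guess to match the anecdotes above; the real width and mechanism are unpublished:

```python
def clamped_estimate(raw_estimate: float, reference_elo: float,
                     band: float = 400.0) -> float:
    """Hypothetical 'damped' estimator: whatever the engine's raw
    estimate of the play is, the reported number is clamped to
    reference_elo +/- band. Band width is an assumption, not known."""
    low, high = reference_elo - band, reference_elo + band
    return max(low, min(high, raw_estimate))

# A 400-rated player producing engine-perfect (say raw 3200) play
# would still only be reported at 800, matching the joke above:
print(clamped_estimate(3200, 400))  # 800.0
```

Under this model the estimate carries no information beyond "near the top of your band" or "near the bottom", which is exactly the complaint in this thread.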

@eadwig Yeah, it's extremely dumb. I don't know why they do it like this; I guess the algorithm isn't good enough to rate the game without a reference (the player's Elo)? If that's not the case, the developers are just really, really stupid.

Yeah, it makes no sense that they use your current Elo as a benchmark for the estimate. It's not really "estimating" anything then.
You can confirm this by putting the raw PGN (without Elos) into the analysis board versus using Game Review on the specific game. It will often yield different rating estimates, despite being the exact same game.
I assume they do this so the number seems "accurate", as it will always be close to your rating, but that completely defeats the purpose of a true estimate based on the game. You're better off copy/pasting the PGN without Elos into the analysis, because then you're getting an unbiased result. I've found it seems more accurate this way.
The highest estimated Elo I've got is 1650, with 98% accuracy. Even when I get ~90% on a 50+ move game, the estimated Elo feels like it can't be more than about twice your own Elo, or there's some soft cap. I'm not playing against GMs who punish inaccuracies, yet titled players hover around 90% accuracy or even less; you can even see Magnus's games below 90%.
On the other hand, I can get a very low accuracy and estimate in a game just because both players missed some tactics, or didn't want to trade queens or pieces, while the engine considers not trading queens a big blunder because it would give up a 0.5 advantage, which is a minimal edge unless you're high Elo.