I think part of the issue is how the review analysis comes up with "estimated Elo" values. It factors in the Elo information of the players involved, so both yours and the bot's in this case.
Here's what I suspect you'd find if you tried the following:
1) Take one of your bot games and review it, and note the Elo estimates for you and the bot.
2) Download the PGN for that game, open it in a text editor, remove the tag with the bot's Elo, and upload that version. Review that (a script for this step is sketched below).
3) Do that again, but this time leave the bot's Elo in and remove yours.
I bet the estimated Elo for you and the bot will be different in each of those reviews, despite it being the same game.
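If editing the PGN by hand is fiddly, here's a minimal sketch of the tag-stripping step in Python, assuming the python-chess library and that the bot played Black; the filenames are placeholders of my own, not anything Chess.com prescribes:

```python
# Minimal sketch of step 2: strip the bot's Elo tag from a PGN.
# Assumes python-chess (pip install chess) and that the bot played Black;
# use "WhiteElo" instead if the bot had the white pieces.
import chess.pgn

with open("mygame.pgn") as f:
    game = chess.pgn.read_game(f)

# Remove the bot's rating tag. For step 3, pop the tag for your own side
# instead and leave the bot's in place.
game.headers.pop("BlackElo", None)

with open("mygame_stripped.pgn", "w") as f:
    print(game, file=f)  # python-chess renders the full PGN on print

```
Then upload mygame_stripped.pgn and run the review on it as usual.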
I'm curious, though, whether the difference between the estimated Elos stays fairly constant (i.e. in the experiment above, is the winner always rated about +200 over the loser?). If so, then the more useful information to take from the review estimates is how much stronger you (or your opponent) played, while taking the actual Elo values with a grain of salt.
When this feature was originally released, it would provide estimated Elo values even for games that had no information about the players' Elos. It was scoring my games as over 2000! Even I knew that wasn't accurate, but it was fun to see.
It doesn't rate games based on your Elo vs. the game. It rates them based on the overall hit rating of the game itself. If you took my Elo away and Magnus Carlsen's Elo away and we played a game, I'd still rate around 1000 and he'd still rate around 2800, because it doesn't use the average games I play or he plays as a baseline; it rates the hit rating of the game overall and compares it to the entire community of players.
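To make the disagreement concrete: the question is whether the estimate is a function of the game's accuracy alone, or whether it also shifts with the Elo tags in the PGN. Here's a toy sketch of the two hypotheses, with entirely made-up numbers and hypothetical mappings; neither function is Chess.com's actual formula:

```python
# Toy contrast of the two hypotheses; all numbers and formulas are invented.

def estimate_accuracy_only(accuracy):
    # Hypothesis A (this post): the estimate depends only on the game itself.
    # Hypothetical linear mapping: 100% accuracy maps to 2800.
    return 400 + 24 * accuracy

def estimate_with_tags(accuracy, tagged_elo):
    # Hypothesis B (the post above): the estimate is anchored to the Elo tag
    # in the PGN and adjusted by performance.
    return 0.5 * tagged_elo + 0.5 * estimate_accuracy_only(accuracy)

# Under A, stripping the tags changes nothing; under B, the estimate moves.
print(estimate_accuracy_only(85))       # 2440, with or without tags
print(estimate_with_tags(85, 1000))     # 1720.0, shifts when the tag changes
```

The experiment above distinguishes the two: if A is right, the estimates are identical across all three reviews.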
Really? Do as I suggested above and post the results. If I'm wrong and you're right, then the estimated Elo for your games will be the same whether or not you analyse them with the Elo tags included.
Since I already know the answer to this (having done it myself), I know you're wrong and that you're posting what you think rather than what you know. Might explain your chess too, come to think of it.
Can't you do that yourself? It seems like a lot of work for me to go and calculate your conclusion for you; if you want it, you can do it with my games. Just click on my name and you'll see them.
I've bolded the bit that provided the information you then asked about. I have done it. I don't expect you to just believe me, since people can say anything, which is why I suggested you do it yourself as well. I got the information about how the Elo estimates are calculated from Chess.com, in a thread that got into discussing them when the site stopped providing estimates for games without player Elo tags.
I have no reason to think you're lying; it just struck me as strange that bots with Elos can be so far off the Elo they're supposedly programmed for.
In most chess programs the programmer assigns an Elo rating to each bot he creates, because there are ranked games where we win or lose more or fewer Elo points depending on the result of the match, the bot's Elo, and ours.
Here the programmer didn't implement ranked games, but he built the setup for it, in case he one day decides to add that game mode (ranked games).
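For reference, the rating adjustment such a ranked mode would typically use is the standard Elo update; a minimal sketch, not the site's actual code, with example ratings and K-factor of my choosing:

```python
# Standard Elo update: a player gains or loses points based on the result
# and on how their rating compares to the opponent's.
def elo_update(rating, opponent_rating, score, k=20):
    """score: 1.0 for a win, 0.5 for a draw, 0.0 for a loss."""
    expected = 1 / (1 + 10 ** ((opponent_rating - rating) / 400))
    return rating + k * (score - expected)

# A 1000-rated player beating a 2200-rated bot gains nearly the full K:
print(round(elo_update(1000, 2200, 1.0)))  # ~1020
```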
There are levels:
1) newbie: Martin to Karim
2) intermediate: Emir to Mateo
3) advanced: Antonio to Manuel
4) master: Noam to Wei
Many video games offer 1) easy, 2) normal, 3) hard, 4) very hard, so he's doing much the same thing, just naming it differently, and for each bot he created he also assigns an Elo.
The real problem after that is that it tries to compare anything with anything. For example, it wants to compare the bot Ahmed (2200) playing at 1 second per move with the human Jean-Kevin (2200) playing at 1 hour 30 minutes for 40 moves, then 30 minutes.