Why do bots have ratings?

For various reasons it's hard to give bots an accurate rating. Chess.com doesn't really try to make them accurate, and even if it tried, it would be hard.
What's the point of bots?
The point is that chess can be intimidating for a new player. You might be going up against people who have been playing for years, and there are thousands of hours of videos, books, and other material to wade through. Bots offer an easy way for beginners to play a few games and try things out.
Yeah, but if you're making a bot with a rating, at least have the decency to make it play in that rating range. For a 1350 I'll accept 1250-1450, but 1600? Really?

It is not easy to make bots play badly in a natural way, so it is impossible to restrict them to a narrow rating range. That would also make no sense, as a human player rated 1350 will sometimes play a game at a 1600 level too (though probably rarely).
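To illustrate why per-game strength swings so much, here is one generic way a bot can be weakened: play the engine's best move most of the time and occasionally inject a weaker one. This is a hypothetical sketch, not chess.com's actual implementation; `pick_move`, the move names, and the blunder rate are all made up for illustration.

```python
import random

def pick_move(best_move, weaker_moves, blunder_rate, rng):
    """Return best_move, or a random weaker move with probability blunder_rate."""
    # Illustrative only: one common way to dial an engine down to a
    # target rating (not necessarily what chess.com does).
    if weaker_moves and rng.random() < blunder_rate:
        return rng.choice(weaker_moves)
    return best_move

rng = random.Random(0)
# Count injected mistakes over many simulated 40-move games. The count
# is binomial, so it varies a lot from game to game, which is exactly
# the "1600 one game, 550 the next" effect complained about above.
mistakes_per_game = [
    sum(pick_move("best", ["oops"], 0.15, rng) == "oops" for _ in range(40))
    for _ in range(1000)
]
print(min(mistakes_per_game), max(mistakes_per_game))  # wide spread game to game
```

With a 15% blunder rate the average is six mistakes per 40-move game, but individual games range from nearly flawless to a dozen blunders, so the bot's apparent rating swings far above and below its label.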

The bots have a rating that reflects their skill level.
Apparently they don't, if a 1350-rated bot plays a 1600-level game.

You're missing the point. The whole thing is that this ISN'T a human. They CAN play at 1350. And instead it's playing at 1600? This isn't emulating how some humans play; this is a robot playing well outside the range it was supposedly designed for.

I think part of the issue is how the review analysis comes up with "estimated Elo" values. It factors in the Elo information of the players involved, so both yours and the bot's in this case.
I'd suggest you try the following.
Take one of your bot games, review it, and note the Elo estimates for you and the bot.
Now, download the PGN for that game, open it in a text editor, and remove the tag with the bot's Elo, and upload that version. Review that.
Do that again, but this time leave the bot's Elo in and remove yours.
I bet the estimated Elo for you and the bot will be different in each of those, despite it being the same game.
I'm curious, though, whether the difference in the estimated Elos remains fairly constant (i.e., in the above, is the winner always rated +200 over the loser?). If so, then the more useful information to take from the review estimates is how much stronger you (or your opponent) played, while taking the actual Elo values with a grain of salt.
When this feature was originally released, it would provide estimated Elo values for games that had no information about the player's Elo. It was scoring my games as over 2000! Even I knew that wasn't accurate, but it was fun to see.
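The tag-removal step described above can be sketched in a few lines, assuming a standard PGN file. The function name and the example game text are made up, but `WhiteElo` and `BlackElo` are the standard PGN header tags:

```python
import re

def strip_elo_tags(pgn_text, tags=("WhiteElo", "BlackElo")):
    """Remove the given header tag pairs from a PGN string."""
    # PGN headers look like: [WhiteElo "1350"]  (one per line).
    # Match only the tag line itself, leaving the blank line that
    # separates headers from the movetext intact.
    pattern = re.compile(r'^\[(?:%s) "[^"]*"\][ \t]*\n' % "|".join(tags), re.MULTILINE)
    return pattern.sub("", pgn_text)

game = '[Event "Live Chess"]\n[WhiteElo "1350"]\n[BlackElo "1412"]\n\n1. e4 e5 *\n'
print(strip_elo_tags(game))                      # drops both Elo tags
print(strip_elo_tags(game, tags=("BlackElo",)))  # keeps White's, drops Black's
```

Running the review on the stripped file versus the original is the quickest way to see how much the tags influence the estimate.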

It doesn't rate games based on your Elo versus the game. It rates them based on the overall hit rating of the game itself. If you took my Elo away and Magnus Carlsen's Elo away and we played a game, I'd still rate around 1000 and he'd still rate around 2800, because it doesn't use the average games we play as a baseline; it rates the hit rating of the game overall and compares it to the entire community of players.

You don't understand. A bot plays way better than 1350. The only way to get it closer to 1350 is to force it to make some weird mistakes, and it is not possible to make it play precisely around 1350 in every game.
Your explanation makes the most sense; it must be hard to make bots play badly. The example in this thread is even an understatement. I've played so many 1700 bots that were way too strong for 1700 and, according to the analysis, often played around 2400-2900. I assume this is partly because the stronger the bot, the likelier its upward deviation: a 1700 bot is more likely to play around 3000 (close to its real natural strength) than a 1350 bot is. It must have something to do with what you said, that it's hard for a bot to play badly. I've also noticed that against easier bots I get "brilliant" moves more often and more easily, whereas against stronger bots I barely get any. Though that makes sense, since it's much harder to find a brilliant move against a strong bot.

Then they shouldn't have a rating at all. The mistakes they make are supposedly calculated to land closest to the rated outcome. If they deviate 250 points in one direction and 1000 in the other, there's no point in saying "it's 1350," because it isn't: it's a really good robot in game one, a really bad one in game two, and somewhere in the middle in game three. The rating is moot.
Bro!
In a game I played against ChatGPT, it captured my king on the first move with its queen!

Give us another method of classifying the rating of bots that can easily be converted to Elo and back.
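For what it's worth, Elo already converts cleanly to and from one alternative measure: expected score (win probability, roughly). The standard textbook formula and its inverse, nothing specific to chess.com's bots:

```python
import math

def expected_score(rating_a, rating_b):
    """Expected score for A (1 = win, 0.5 = draw) against B, per the Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def rating_gap(score):
    """Invert expected_score: the rating edge implied by an expected score."""
    return -400.0 * math.log10(1.0 / score - 1.0)

print(round(expected_score(1600, 1350), 3))  # a 250-point edge scores about 0.81
print(round(rating_gap(0.5)))                # an even score implies a 0-point gap
```

So a label like "scores ~0.6 against a 1350 player" would carry the same information as a rating, just framed as an outcome rather than a number on a scale.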

I would if I were the one who programmed them. But since I'm not, that's not my job. Whoever is doing it, though, is apparently not doing a very good job of it.
Bots have a rating to quantify their skill level. If they didn't have a rating, there would be no way to show what skill level they are.

But apparently it doesn't show what rating they are. If they say they're one rating and then play at two other levels nowhere near it, it's not actually saying anything, is it?

Really? Do as I suggested above and post the results. If I'm wrong and you're right, then the estimated Elo for your games will be the same whether or not you analyse them with the Elo tags included.
Since I already know the answer to this (having done it myself), I know you're wrong, and you're posting what you think rather than what you know. Might explain your chess too, come to think of it.

Can't you do that? It seems like a lot of work for me just to confirm your conclusion. If you want it, you can do it with my games: just click on my name and you'll see them.

I bolded the bit that provided the information you then asked for. I have done it. I don't expect you to just believe me, since people can say anything, which is why I suggested you do it yourself as well. I got the information about how the Elo estimates are calculated from Chess.com, in a thread that got into discussing them when the site stopped providing estimates for games without player Elo tags.
Let's say I play a bot that's rated 1350. One game I'll play at a 1400 level and another at an 800 level. Take a guess which game I usually win? The 800 one. Why? Because in the first game the "1350" bot played, according to the analysis, like a 1600, and in the next one it played like a 550. Wtf is the point of giving bots a rating when they play nowhere near it?