
Also, the Elo model has been designed with the idea of playing many different people - the level of confidence probably decreases at the extremes...

And if you go through that calculation, you'll find that the confidence interval here would be heavily skewed toward the GM no matter what sample size one uses, precisely because the probability of any one trial, let alone more than one, being a non-win for the higher rated player is so small.

Precisely. And people cannot argue about human factors such as blunders and fatigue, etc., while talking about ratings, because the rating formula does not incorporate such variables. However, people still tend to.

Beck15 - I don't disagree with your point.

I am simply reporting what the mathematical model upon which the Elo rating system is constructed says the expected score is.

But as you note, that isn't the whole story. There is also the question of how much confidence one can have that in a run of 10,000 games the 1300 player will score any wins or draws at all. That is a different question than what the expected outcome is.

And if you go through that calculation, you'll find that the confidence interval here would be heavily skewed toward the GM no matter what sample size one uses, precisely because the probability of any one trial, let alone more than one, being a non-win for the higher rated player is so small.
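For concreteness, the Elo expected score comes from a logistic curve on a 400-point scale. Here is a quick sketch in Python of both the expected score for the 1300-vs-2700 matchup discussed in this thread and the chance of going 10,000 games without a single non-loss. The per-game independence assumption, and treating the expected score as a per-game "at least a draw" probability, are simplifications for illustration:

```python
def expected_score(r_player, r_opponent):
    """Elo expected score: 1 / (1 + 10^((opponent - player) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((r_opponent - r_player) / 400.0))

e = expected_score(1300, 2700)   # about 3.16e-4 per game

# Treating each game as an independent trial in which the 1300 scores
# at least half a point with probability e (a simplification: the
# expected score actually mixes wins and draws), the chance of the GM
# winning all 10,000 games is (1 - e)^10000.
p_all_losses = (1.0 - e) ** 10_000
print(e, p_all_losses)           # roughly 0.0003 and 0.04
```

Under these (strong) assumptions, the model predicts the 1300 picks up at least a half point somewhere in 10,000 games with probability around 96%, which is exactly why the confidence question is different from the expected-score question.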

This is interesting in theoretical terms, but realistically, any 1300-level player who plays 10,000 games will end up learning from them and, in theory, figure out via process of elimination how to draw a certain line. Of course, if you had two computers of 1300 and 2600 strength playing, the 2600 computer should win 100% of the time. Human error muddles this whole concept, though.

hicetnunc: That's an interesting take, but most people file that under its own separate definition.

I think you would, then, need not only chess skills, but also mind-reading skills :)

Precisely. And people cannot argue about human factors such as blunders and fatigue, etc., while talking about ratings, because the rating formula does not incorporate such variables. However, people still tend to.

Actually, rating often does incorporate such things. Rating doesn't care why you lose, it simply cares that you lose (or draw, or win). If you lose because you were tired, that's a loss, and it's going to count towards your rating if you're playing in a rated tournament. In that case, being tired caused a decrease in rating.

Of course, this only applies when a variable creates a result that wouldn't have otherwise happened. An example of when you probably wouldn't see the effect of fatigue in rating change would be if you played someone 800 points lower rated; most likely, you would win no matter what happened.

Nonetheless, in competitions between players of closer strength, such factors could very well be what the result depends on; thus, rating would reflect such factors then.

Guys, there is a probability, greater than zero, that a random chess move generator (or a person making random moves) will win. That player should be spending that luck on the lottery rather than on chess, because the probability of me winning against a 2700 with random moves is less than the probability of winning the lottery jackpot 16 times in a row - enough wins to make you a billionaire.

Before everyone jumps on this: there is a concept known as statistical zero, which is 10^-50. The odds of winning a chess game against a 2700 with random moves are below that threshold, so a 1300's probability of beating a 2700 is effectively 0.
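To give a feel for what a probability below 10^-50 means in practice, here is a back-of-the-envelope sketch in Python. The games-per-second figure and the timescale are made-up illustrative assumptions, not data from this thread:

```python
p_win = 1e-50   # hypothetical per-game win probability, at "statistical zero"

# Suppose a machine could play a billion games per second for roughly
# the age of the universe (~4.3e17 seconds) -- both figures are purely
# illustrative assumptions.
games = 1e9 * 4.3e17          # about 4.3e26 games

# For tiny p, the expected number of wins p * n is an excellent proxy
# for the probability of at least one win, since 1 - (1-p)^n ~= p * n.
expected_wins = p_win * games
print(expected_wins)          # about 4.3e-24: still essentially zero
```

Even on that absurd timescale, the expected number of wins is around 10^-24, which is the practical content of calling such a probability zero.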

About a month ago (and I wish I had written the move order down), I played an OTB game against a friend of mine that began as a Petroff, then quickly transposed into a Four Knights Italian Game. Play went about 14 moves with neither side able to claim an advantage. However, once my opponent played his 15th move, I decided to play the "quiet" move Kh8. My friend then played 3 moves (Ne1, f4, and Nf3), and on the Nf3 move, I played Kg8. Though it took a bit to reach this position, this was the ending on the board:

Neither one of us thought anything of those 2 king moves, but it got me my first win against a former 2100+ rated player who still plays regularly, though not in tournaments anymore. I, on the other hand, have never been rated nor played in any tournaments. The point is, aside from being a means to jockey everyone behind Anand and Carlsen, ratings are not a surefire indicator of a person's playing level, as anyone can lose to anyone at any time.

It seems to me that this concept of statistical zero is pretty arbitrary. In any case, 10^-50 is not the same number as "0." You can probably treat the two exactly the same and not run into any problems, but that doesn't mean they are the same number.

Actually, rating often does incorporate such things. Rating doesn't care why you lose, it simply cares that you lose (or draw, or win). If you lose because you were tired, that's a loss, and it's going to count towards your rating if you're playing in a rated tournament. In that case, being tired caused a decrease in rating.

Of course, this only applies when a variable creates a result that wouldn't have otherwise happened. An example of when you probably wouldn't see the effect of fatigue in rating change would be if you played someone 800 points lower rated; most likely, you would win no matter what happened.

Nonetheless, in competitions between players of closer strength, such factors could very well be what the result depends on; thus, rating would reflect such factors then.

Yes, rating does not care why your result was what it was. While the result itself might have everything to do with the mood, temperament, mental acuity, luck, yada yada of the player at any given time, the rating itself does not. And for a strong, established player, the rating will more or less stabilize around a certain value. The formula for rating has no variable where you 'plug in' the likelihood of a player suffering from fatigue, their tendency to blunder, or their chance of being lucky. The rating is what it is: an estimate of a player's chess strength, updated when the result of a game comes in.

That is why in one of my earlier posts, I suggested that two computers play against each other (Belle, rated 2350, and Houdini, rated 3350).

That may be so, Vengeance. But on the forum of a chess website we have to stroke the egos of chess intellectuals, and that means the only acceptable opinion is that the high-rated chess player can never lose, because of his innate superiority. NEVER LOSE!

...

Neither one of us thought anything of those 2 king moves, but it got me my first win against a former 2100+ rated player who still plays regularly, though not in tournaments anymore. I, on the other hand, have never been rated nor played in any tournaments. The point is, aside from being a means to jockey everyone behind Anand and Carlsen, ratings are not a surefire indicator of a person's playing level, as anyone can lose to anyone at any time.

Against a 2700, the 2100 will score less than 3%.

Just for the record, I think 'perfect play' requires even more than knowing the result of any given position with best play. I would assume the perfect player not only to know that, but also to be able to assess the odds of his opponent choosing this or that move in any given position. This way, the 'perfect player' would be able to create maximum problems for any given opponent and reduce his drawing chances to the minimum.

Hehe. I've recently made some funny predictions in some of my blitz games. When I make a threat and my opponent pauses longer than usual, sometimes I get this feeling and I think "he's trying to talk himself into this move, even though it's a bad move"

Pretty standard stuff - we've all talked ourselves into a bad move - but I've never gotten the feeling that it's happening to my opponent right at the moment it seems to be actually happening (often they end up playing it). :)

That may be so, Vengeance. But on the forum of a chess website we have to stroke the egos of chess intellectuals, and that means the only acceptable opinion is that the high-rated chess player can never lose, because of his innate superiority. NEVER LOSE!

Nonsense. Either you haven't read the posts or you are incapable of understanding them.

In rating systems like Elo or USCF, the maximum deviation is a 400-point difference. At that range, there is no practical possibility of the lower rated player winning except by a complete fluke - which does happen on very rare occasions. But when that happens it is a statistical anomaly: it cannot be predicted, but the possibility is allowed for by requiring that any decisive game result in at least a 1-point rating change.

The question of this thread is whether a super-GM of 2700 could lose to a 1300 player. If we assume accurate ratings, this gap is more than THREE TIMES the standard deviation. These are not players of different levels; they are players from different worlds.

The rating system cannot exclude the possibility that the 2700 might drop dead of a heart attack in the middle of the game and lose on time, but that's about the only way it is ever going to happen.
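The 400-point cap mentioned above changes the arithmetic quite a bit. Here is a sketch in Python comparing the raw logistic expectation with a capped one, assuming the cap works by truncating the rating difference to 400 points before applying the formula (an assumption about how such rules are implemented, not a claim about any specific federation):

```python
def expected_score(diff):
    """Raw Elo expectation for a rating difference `diff` (player minus opponent)."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def expected_score_capped(diff, cap=400):
    """Same, but with the difference truncated to +/- cap first,
    modelling a 400-point rule (an assumption for illustration)."""
    diff = max(-cap, min(cap, diff))
    return expected_score(diff)

print(expected_score(-1400))         # ~0.0003: the raw model's view of 1300 vs 2700
print(expected_score_capped(-1400))  # ~0.0909: what a capped system treats it as
```

The raw model gives the 1300 a few chances in ten thousand; a capped system scores the pairing as if the gap were only 400 points, about one chance in eleven. That gap between the two numbers is one reason extreme pairings sit outside what the rating machinery is designed to measure.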

Just for the record, I think 'perfect play' requires even more than knowing the result of any given position with best play. I would assume the perfect player not only to know that, but also to be able to assess the odds of his opponent choosing this or that move in any given position. This way, the 'perfect player' would be able to create maximum problems for any given opponent and reduce his drawing chances to the minimum.

Perfect play is defined under the assumption that your opponent will make the best moves.

But you can define it differently. For example: a perfect player can not only read but also control the mind of his opponents. You look into his eyes (or screen) and you suddenly start blundering.

My definition isn't something alien to what many OTB players do. When selecting a move, besides its 'objective' value (as much as you can assess it), you also take into account how to pose problems to your opponent.

From an objective point of view, many different moves may lead to the same result with 'best play'. If you assume your opponent can make mistakes (if he can't, the 'perfect player' might as well agree to a draw immediately), then it makes sense to take into account how to provoke those mistakes, or at least increase their likelihood. I can't imagine any definition of 'perfect play' not including this dimension.

Not making mistakes is great. Provoking your opponent into making some is even better.

I don't consider making your opponent unable to play by unethical means to be part of the equation.

Persuading opponents to go wrong is a considerable skill in poker. Seemingly the two games have at least something in common.

Slim Chance!

thief1, that is not perfect play.

Read the comments. It is a distance-to-conversion position, not distance-to-mate, and the last several moves by Black were suboptimal.

Definition of perfect play:

For the side with the advantage - convert the advantage as soon as possible (in the fewest moves).

For the side with the disadvantage - hold out as long as possible before capitulating (in the most moves).

I don't agree with your definition for the side with the disadvantage, beck. Sometimes the computer will play moves, especially in an endgame, that maximize the number of moves to checkmate, even though other moves would be harder for the opponent to work out a win against.
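The two halves of that definition (the winning side minimizes the distance to the end, the losing side maximizes it) can be made concrete on a toy game. A sketch in Python using a simple Nim-like game rather than chess; the game and all names here are illustrative, not from this thread:

```python
from functools import lru_cache

# Toy game: players alternately take 1 or 2 stones from a pile;
# whoever takes the last stone wins.

@lru_cache(maxsize=None)
def solve(stones):
    """Return (score, distance) for the side to move under the
    definition above: score is +1 for a win with perfect play, -1 for
    a loss; distance is the number of plies until the game ends.
    The winner minimizes distance; the loser maximizes it."""
    if stones == 0:
        return (-1, 0)   # previous player took the last stone; we lose now
    best = None
    for take in (1, 2):
        if take > stones:
            continue
        child_score, child_dist = solve(stones - take)
        score, dist = -child_score, child_dist + 1
        if best is None or score > best[0]:
            best = (score, dist)
        elif score == best[0]:
            # Same outcome: winning side prefers the SHORTER line,
            # losing side prefers the LONGER one.
            if (score > 0 and dist < best[1]) or (score < 0 and dist > best[1]):
                best = (score, dist)
    return best

print(solve(4))   # (1, 3): win, and the winner forces it in 3 plies
print(solve(6))   # (-1, 4): loss, but the loser drags it out 4 plies
```

This also shows the flip side of the critique above: the losing side here maximizes distance only, with no notion of which defense is hardest for a fallible opponent to crack.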