Average Difference

WVRadar

Played my first game on chess.com and clicked on the Quick Analysis button. What exactly does the Average Difference column mean? Mine was 0.84 while my opponent's was 0.72.

smarty715

Good question.  Curious myself.  Can't seem to find the answer here in the forum.  Hopefully someone will reply with the answer...

efinitelynotverygood

It shows how much your moves vary from the computer's moves, i.e. how bad your moves were, on average, during the game.

In your case, 0.84 and 0.72 are how much advantage you and your opponent lost per move, on average, during the game (a lower average difference means better-quality moves).
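
To make the arithmetic concrete, here is a minimal sketch in Python with made-up per-move numbers (chess.com hasn't published its exact formula, so take this as the general idea rather than the real implementation):

    # Hypothetical per-move losses (in pawns) for one player: how much worse the engine
    # rates the position after the played move compared with after its own best move.
    losses_per_move = [0.0, 0.1, 0.0, 2.5, 0.3, 0.0, 0.9, 0.2]

    # The "average difference" is simply the mean of those per-move losses.
    average_difference = sum(losses_per_move) / len(losses_per_move)
    print(round(average_difference, 2))  # 0.5 pawns lost per move, on average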

It is also called average centipawn loss. From http://chess.wikia.com/wiki/Centipawn:

The centipawn is the unit of measure used in chess as a measure of advantage. A centipawn is equal to 1/100 of a pawn; therefore 100 centipawns = 1 pawn. These values play no formal role in the game but are useful to players, and essential in computer chess, in order to evaluate positions.

The pieces usually have an integer value in pawns, but using the centipawn allows strategic features of the position, worth less than a single pawn, to be evaluated without requiring fractions.

Standard Valuation
The following is the most common assignment of point values:
  • The queen is worth 900.
  • Each rook is worth 500.
  • Each knight is worth 300.
  • Each bishop is worth 300.
  • Each pawn is, obviously, worth 100 centipawns.

The value of the king is undefined as it cannot be captured, let alone traded, during the course of the game. Some early computer chess programs gave the king an arbitrarily large value (such as 100,000,000 centipawns) to indicate that the inevitable loss of the king due to checkmate trumps all other considerations. In the endgame, when there is little danger of checkmate, the fighting value of the king is about four pawns. The king is good at attacking and defending nearby pieces and pawns. It is better at defending such pieces than the knight is, and it is better at attacking them than the bishop is.

smarty715

Thanks so much for the explanation and the link!  

itlynch

Thanks efinitelynotverygood!

Herpa-Derp

So, even if you are a superior player to the computer, you will still have an Average Difference value, since the computer may judge a move that is part of your plan or strategy as losing centipawn value (according to its programmed formulas). That is to say, the rating of the computer doing the analysis must be known for the Average Difference value to have full meaning (especially if you are a high-rated player, which I am not). This makes sense to me, anyway, but I didn't know what the Avg. Diff was until I came to this page, so maybe efinitelynotverygood can enlighten us some more...

Busted_Bicycle

Stockfish, the chess engine chess.com uses for analysis, is the best chess player in the world - so there are no superior players to the computer.  Everyone has an average difference value because no human player finds the engine's preferred move every single time.

Andrew-Dufresne

If you are Bobby Fischer and make every correct move, is your average difference 0 (is an average difference of 0 possible)? Also, is lowering your average difference from game to game a good goal for players, or is the number too flexible and unpredictable, where, say, one really bad move in one game increases the avg. difference by as much as three wrong moves in another? Thanks in advance.

Xifong

Andrew-Dufresne

I'm no expert, but this is what I have gathered.

Bobby Fischer would not make every correct move; that's basically impossible for any human player. But if he did play all the engine moves then it would by definition give a 0 centipawn loss average.

Looking at centipawn loss is a fun way to gauge your standard of play, but like you said, each game is different, and even something like playing against a stronger player could make your loss worse. It would be kind of interesting to track game-to-game changes in a player's centipawn loss, though I couldn't see it being a reliable progress metric.

If you made a material or positional blunder, your loss would also balloon. Looking at the last game I played, I played the engine's "correct" moves around 60% of the time, but because I failed to take advantage of an opponent's positional blunder on two occasions, I was considered to have made a mistake. Hence, my average centipawn loss was closer to 0.55.

mrhjornevik

@xifong @andrew  A centipawn evaluation describes a position, not a move. The centipawn difference, however, is the difference between the evaluations of two positions, i.e. a move.

 

In chess, White wins more often than Black, so White actually starts with a small centipawn advantage, i.e. a slightly better position. Other than that, an equal position would give a 0 centipawn advantage to either side. If one side loses a pawn, that side gets a -100 centipawn disadvantage, while the other gets a +100 advantage.

 

Not only material translates into centipawns; a better position also gives extra centipawns.
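
As a tiny worked example of that idea (the numbers are invented for illustration; this is just the general bookkeeping, not any engine's actual code):

    # Hypothetical engine evaluations, in centipawns from White's point of view,
    # of the position after the engine's best move and after the move actually played.
    eval_after_best_move = 35     # best move keeps White at +0.35
    eval_after_played_move = -65  # played move leaves White at -0.65

    # The loss for this single move is the drop between the two evaluations.
    centipawn_loss = eval_after_best_move - eval_after_played_move
    print(centipawn_loss)  # 100 centipawns, i.e. about a pawn's worth of advantage given away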

 

Sipulipekka
festusmaximus wrote:

i played a game where i destroyed my opponent, but had a significantly higher avg diff, how common is this?

mine was 2.44 as opposed to his 1.03

Did your opponent make a big blunder at some point? That could be the case if on average your position was worse than his.

Luitpoldt

I notice the same thing when I play the chess computer on this site at level 5.  The computer's average difference is almost always 0.4 points better than mine when I nonetheless win.  So my moves are worse, but still, somehow they add up to victory.  I suspect the problem is that the computer is doing a snapshot evaluation of a succession of positions, but is not paying attention to the overall strategy which is motivating that succession.

Abyssional

I'm not sure about Chess.com's Avg diff. I analyzed this same game on both lichess and chess.com. Chess.com said I had an av diff of 1.00, whereas lichess said I had 3 cpl. I'm not sure how it works. https://lichess.org/k3VU3w0O/black#0

Luitpoldt

My average difference is also highly unstable, ranging from 0.50 to 2.4, suggesting that it is not a very accurate indication of a player's characteristic strength, but just of individual game performance.

Humphrey-Bogart

Can Avg. Diff be bigger than 1?

Luitpoldt

Oh yes it can be greater than one, unfortunately, as I can attest from my own experience.  But something I find quite often which seems strange is that my average difference in a game can be much higher than the computer's, even though I still beat the computer.  So I am consistently making worse moves which somehow add up to a victory.  Does the machine, in assigning average difference ratings, not notice how apparently sub-par moves are adding up to a deeper plan in terms of which they are more sensible?

Luitpoldt

What is the approximate correspondence between a player's average difference score and a player's rating?

Abyssional

There is no reliable correspondence, because average difference varies greatly from game to game, and it depends on the opponent. Two players of approximately equal strength will likely both have a high average difference, whether it's two 1200s or two GMs.

Luitpoldt
The definition of the chess-db.com play quality index is inspired by the work of Matej Guid and Ivan Bratko on analyzing quality of play in chess games. For a comprehensive overview and technical discussion on this topic one can read their paper Using Heuristic-Search Based Engines for Estimating Human Skill at Chess, 2011.

In short, the method compares the moves that were played in a chess game with the ones a strong chess engine would play. The average difference between the evaluations of the moves played and the moves suggested by the engine is used as an indicator of how well the game was played. A more detailed, yet still brief, description of the method follows (a rough code sketch of these rules appears after the list):
  • The analysis of each game starts at move 12 (Thus opening moves that might just come out of theory are not evaluated).
  • The chess engine evaluates the best moves (according to the computer) and the moves played by the player.
  • All engine evaluations are obtained at the same depth of search.
  • The score is then the average difference between evaluations of the best moves and the moves played.
  • If the player's mistake (as seen by the engine) at a particular move is greater than 3.00, the score for that move becomes 300 "centipawns" (to avoid unreasonably high penalties for gross mistakes).
  • Moves where both the move played and the move suggested by the computer had an evaluation outside the interval [-2.00, 2.00], are discarded. (In clearly won positions players are tempted to play a simple safe move instead of a stronger, but risky one. Such "inferior" moves are, from a practical viewpoint, perfectly justified. Similarly, in lost positions players sometimes deliberately play an objectively worse move.)
  • Once the average difference d (in "centipawns", i.e. 1/100-th of a pawn advantage) is computed, we calculate the player quality index q as q = 100 - d. A negative number q would be rounded to zero. Thus, the final player quality index in the analyzed game will be in the range of 0 to 100, where higher number indicates better play.
  • The overall quality index for a given chess player is the average quality of play from all his analysed games in the last 2 years.
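
Read literally, those rules could be sketched roughly like this in Python (the function name, data layout, and variable names are mine, and chess-db.com's real implementation may differ in its details):

    def quality_index(moves):
        """Rough sketch of the play quality index described above.

        `moves` is a list of (move_number, best_eval, played_eval) tuples, with
        evaluations in pawns from the player's point of view, both obtained from
        the engine at the same search depth.
        """
        diffs = []
        for move_number, best_eval, played_eval in moves:
            if move_number < 12:
                continue  # opening moves that may be pure theory are skipped
            if abs(best_eval) > 2.00 and abs(played_eval) > 2.00:
                continue  # clearly won/lost positions are discarded
            loss_cp = (best_eval - played_eval) * 100  # difference in centipawns
            diffs.append(min(loss_cp, 300))            # gross mistakes capped at 300
        if not diffs:
            return None                                # nothing left to score
        d = sum(diffs) / len(diffs)                    # average difference d
        return max(0, 100 - d)                         # quality index q = 100 - d, floored at 0
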
The scores obtained only measure the average differences between the players' choices of move and the computer's choice. Several studies have shown that these scores, although relative to the chess engine used, have a good chance of producing sensible rankings of the players.
This article will be continuously extended with further examples that demonstrate the method and the results it produces for notable chess players and tournaments, as well as further technical details regarding the calculations we carried out.


Correlation between ELO and Play Quality Index

 


As our calculations progress (as of January 2016, we have fully analyzed over 100,000 chess games), we can already make some observations. One is that the play quality index of the players correlates with the players' ELO ratings, even if as few as a handful of games per player are taken into account. While an exact statistical analysis still has to be done, the practical result we expect from this finding is that we can estimate player strength and performance in tournaments much faster (and more accurately) than ELO formulas would when only a few games of a player are available. The graph below shows the quality of play index, scaled so that it can be easily correlated with the ELO rating graph. The QoP index graph correlates very closely with ELO rating for the range of players that have at least 10 games analyzed. At the tail (for players with an ELO rating below 2300) the correlation still exists, but the variance is higher, basically because far fewer players and games have yet been analyzed in those ranges.