What is so excellent about this move?

basketstorm

Is this some very smart bishop sacrifice, or has the review tab just gotten worse?

basketstorm
The full game is below. The analysis shows a lot of "excellent" moves. A move that loses a rook? Excellent. Losing a queen? Excellent. Or maybe I don't understand chess well enough?
 
insane

Black is already dead lost, so a blunder will only make the eval bar go up by like 0.1.

basketstorm

I mean, that bishop move certainly is not excellent; this looks like a mistake by the review feature.

justbefair

The descriptive evaluations are computer generated, based on the results of running the expected points model on the position.

As was stated earlier, the position was already judged to be completely won for white, so throwing another bishop on the fire didn't really change things.

https://support.chess.com/en/articles/8572705-how-are-moves-classified-what-is-a-blunder-or-brilliant-etc

That's what "excellent" means in these situations.

I expect that AI will be able to make the evaluations more "human".

justbefair

Expected points model

Expected Points uses data science to determine a player's winning chances based on their rating and the engine evaluation, where 1.00 is always winning, 0.50 is even, and 0.00 is always losing.

 

At 1.00, you have a 100% chance of winning, and at 0.00, you have a 0% chance of winning. After you make a move, we evaluate how your expected points (your likely game outcome) have changed and classify the move accordingly.

 

The table below shows the expected points cutoffs for various move classifications. If the expected points lost by a move is between a set of upper and lower limits, then the corresponding classification is used:

Classification   Lower Limit   Upper Limit
Best             0.00          0.00
Excellent        0.00          0.02
Good             0.02          0.05
Inaccuracy       0.05          0.10
Mistake          0.10          0.20
Blunder          0.20          1.00
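To make the table concrete, here is a minimal sketch of how the cutoffs could be applied in code. The function name and structure are my own illustration, not chess.com's actual implementation:

```python
# Illustrative sketch: map the expected points a move loses (0.00-1.00)
# to a classification, using the cutoff table above.

def classify_move(expected_points_lost: float) -> str:
    """Return the classification whose upper limit first covers the loss."""
    cutoffs = [
        (0.00, "Best"),
        (0.02, "Excellent"),
        (0.05, "Good"),
        (0.10, "Inaccuracy"),
        (0.20, "Mistake"),
        (1.00, "Blunder"),
    ]
    for upper_limit, label in cutoffs:
        if expected_points_lost <= upper_limit:
            return label
    return "Blunder"
```

This makes the thread's situation easy to see: a bishop sacrifice in a dead-won position might cost only 0.01 expected points, so `classify_move(0.01)` returns "Excellent" even though the move hangs a piece.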

basketstorm

That model sucks tbh. A bad move does not become better just because a lower-rated player made it. And chess.com ratings have absolutely no correlation with engine evaluations or the ratings of engines. Besides, I think I got this analysis from the Analysis tab using a PGN file without any Elo fields.

justbefair
basketstorm wrote:
  That model sucks tbh. A bad move does not become better just because a lower-rated player made it. And chess.com ratings have absolutely no correlation with engine evaluations or the ratings of engines. Besides, I think I got this analysis from the Analysis tab using a PGN file without any Elo fields.

I don't agree that the model "sucks."

I do agree that a bad move does not become better just because a low-rated player plays it. However, I see the point that losing a piece is not necessarily as fatal for a 500-rated player as it would be for a 2000-rated player.

The model isn't saying that losing a piece is good for a 500. It's just pointing out that losing a single piece is often not a fatal mistake for a 500.

basketstorm

In the context of the position on the board, there were better moves available. So if THAT move is "excellent", what should a better move be called? Again, that analysis had no information about the players' ratings.

justbefair

Remarks like "good," "bad," or "blunder" are left over from the "human descriptive," pre-engine period of chess analysis. Usually, they came surrounded by loads of qualifying adjectives and analysis. However, one man's blunder was often different from another's.

Computer evaluations have always needed to be simpler. Things had to be reduced to fit a formula. Human evaluations and early computer evaluations often used simple material gain or loss as a major part of valuing moves. Beginners and programmers decided that a blunder was anything that changed the material balance by more than 2 points. Moves that caused little change in the material balance were deemed excellent or good.

Material change was often a good indicator. However, there were problems. For example, a sacrifice leading to a later gain or a checkmate had to be valued differently.

Newer programmers came up with a statistical measure of winning chances called the expected points model, which uses data science to calculate a player's probability of winning from their rating and the engine's evaluation of the position.

Simply put, any move other than the best move will make you somewhat more likely to lose.
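As a rough sketch of the idea: one common way to turn an engine evaluation into winning chances is a logistic curve. The scaling constant below is an illustrative assumption, not chess.com's actual formula, which also factors in player rating:

```python
import math

# Hedged sketch: convert an engine evaluation in centipawns (from White's
# point of view) into an expected-points value between 0.00 and 1.00.
# The constant k = 0.004 is an assumption chosen for illustration only.

def expected_points(centipawns: float, k: float = 0.004) -> float:
    """Logistic mapping: 0.50 when equal, approaching 1.00 as eval grows."""
    return 1.0 / (1.0 + math.exp(-k * centipawns))

# Near the top of the curve it flattens out, so in a dead-won position
# giving back a bishop barely moves the expected points:
loss = expected_points(2000) - expected_points(1200)
```

Here `loss` comes out under 0.02, which is why a piece-losing move in a completely won position can still land in the "Excellent" band: the curve has already saturated, and any move other than the best one only nudges it slightly.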

https://support.chess.com/en/articles/8572705-how-are-moves-classified-what-is-a-blunder-or-brilliant-etc

In this model, a blunder is not always a blunder. A blunder is a move that materially alters winning chances.

The old human and early-computer evaluations were clumsily mapped onto the new expected points model. This often produced situations like the ones you mentioned.

Moves that were not the "Best" were deemed "Excellent" in the old model. Therefore, in the mapping, the second to best category was also deemed "Excellent". Those not qualifying for "Excellent" were deemed "Good".

Is this appropriate in all cases? Far from it.

Is it better than the old model? I think so.