Fixing New Analysis

flashlight002

@jonnie303 @jas0501 This isn't the first time someone has reported a weird engine output, only for the problem to be gone once the game link is supplied and the game is re-analysed. My suggestion to everyone who sees a problem: please take a screenshot the moment it happens. Then one has proof of it.

flashlight002

@jas0501 In relation to your post #444: the drop-down setting you are showing, "engine time limit", only operates when you are doing self-analysis with the engine move by move, in other words when you have ticked BOTH "self analysis" and "show lines". The full game analysis triggered when one finishes a game and clicks the analysis icon is an automated scan run to a depth of d=20 on the servers. A user has NO control over this in any way, so your full game analysis is always to d=20.

However, if you want to do self-analysis on any move using "show lines" (basically Stockfish's MultiPV function), that is when the "engine time limit" setting kicks in. You can prove it for yourself by watching the d indicator: it will surpass d=20 in most cases if the engine time limit is set high enough, say one minute and up. Note that the d=20 full game analysis is done on the servers, but the "show lines" MultiPV function runs off your machine's processor cores and hardware.

All engine feedback functions (i.e. the parts that show icons, arrows, and the move-quality designations such as "best move" or "good move" with their associated colours) are always done to d=20, even when one clicks any of one's saved self-analysis moves or variations after a full game scan is complete (saved self-analysis naturally applies only to daily games).
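For anyone who wants to reproduce the two modes locally, here is a minimal sketch using the python-chess library with a local Stockfish binary (the binary path is an assumption; adjust it for your machine):

import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")
board = chess.Board()  # set up whatever position you are studying

# MultiPV analysis under a time limit, analogous to "show lines" with a
# generous "engine time limit": the engine goes as deep as time allows.
infos = engine.analyse(board, chess.engine.Limit(time=60), multipv=3)
for info in infos:
    print(info["depth"], info["score"], info["pv"][:5])

# A fixed-depth search, analogous to the server-side d=20 full game scan.
info = engine.analyse(board, chess.engine.Limit(depth=20))
print(info["depth"], info["score"])

engine.quit()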

flashlight002

@jas0501 The unlimited mode in "engine time limit" essentially means that Stockfish will continue analysing nodes on its decision-tree branches to whatever depth it feels it needs in order to make a decision; it is no longer curtailed to stop at a point in time. It will stop naturally at some point; it isn't going to keep at it to a depth of 200. Thanks to the alpha-beta pruning employed with the minimax algorithm, it will already have isolated the most promising moves and discarded improbable ones. These "good" moves are then analysed deeper and deeper until a cut-off depth where the algorithm sees no need to search further to establish the feasibility and evaluation scores of possible moves. In other words, it could find the position simple, stop at d=20 and give its decision, or find the position complex and need to go to d=35. But it isn't told to stop analysing deeper and deeper once a time limit is reached. As such, the horizon effect is minimised.
How does one know when it has finished analysing? On this system, just watch and wait until no more moves change in the "show lines" MultiPV section; there isn't any indicator that says "Stockfish has completed its analysis".
If there are any mathematicians who want to explain further, please do. But this is my understanding from reading up on engines and their associated maths.
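As for knowing when it has settled: if you run an engine yourself, one crude way is to watch the stream of search info and stop once the preferred move stops changing. Below is a minimal sketch using the python-chess library with a local Stockfish binary; the binary path and the "stable for 20 updates" rule are my own assumptions, not anything chess.com uses.

import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")
board = chess.Board()

# Run an open-ended analysis and stop once the engine's preferred move
# has survived 20 consecutive info updates, a stand-in for "no more
# moves change in the MultiPV list".
stable, last_move = 0, None
with engine.analysis(board) as analysis:
    for info in analysis:
        pv = info.get("pv")
        if not pv:
            continue
        if pv[0] == last_move:
            stable += 1
        else:
            stable, last_move = 1, pv[0]
        if stable >= 20:
            break  # leaving the with-block stops the search

print("Settled on:", last_move)
engine.quit()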

fschmitz422

Just imagine someone took this seriously:

Instead of an easy mate he'd then face (following the engine's suggestion) ...

... which is a -M34 if I let the client-side engine run for some time. (The server-side engine won't find any forced mate then.)

As helpful as ever.

https://www.chess.com/analysis/game/live/3967095009


chrka

@flashlight002 Due to the weather being extra lovely this week (a welcome change from the previous one), I'm still on a kind of mini-vacation, and feeling pleasantly drowsy after a nice meal, but hopefully I still make some sense =)

So, an engine looks at the moves in a given position and tries them out to see what happens. It keeps doing that with the positions that result from those moves. Now, in general, there isn't enough time to look at moves going all the way to the end of the game. (It has to look at every position from every move, and then every position from every move from those positions, etc. It explodes pretty quickly.) Instead, it only goes so many moves deep, and then tries to estimate how good the position is (by some heuristic, e.g. counting the difference in point values between the pieces of each side).

When choosing a move, it takes the move that looks best for the player whose turn it would be. (That is, when I look at the moves I can make, I know that for whatever move I choose, you will pick the reply that you think is best for you: the minimax algorithm.)
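In code, the idea looks roughly like this. A toy sketch: legal_moves(), apply() and evaluate() are hypothetical helpers, with evaluate() returning a score from the point of view of the side to move (e.g. a simple material count).

# Depth-limited minimax in its "negamax" form: each side maximises its
# own score, so the opponent's best reply is our worst case.
def minimax(pos, depth):
    moves = legal_moves(pos)      # hypothetical helper
    if depth == 0 or not moves:
        return evaluate(pos)      # hypothetical heuristic, e.g. material
    best = float("-inf")
    for move in moves:
        score = -minimax(apply(pos, move), depth - 1)  # flip perspective
        best = max(best, score)
    return best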

But it's generally better to go deeper (this doesn't necessarily mean a better move will be chosen, since heuristics are not perfect and the engine can even get fooled as it goes a step deeper, but usually it will be). So a simple way of going deeper and deeper is to first search to some fixed depth, and then, if there is time, restart the search but let it go deeper this time, and so on: iterative deepening. (This might sound stupid: why redo all the previous analysis just to go a little bit deeper? But it turns out that the number of extra positions you look at grows fast enough that the previously looked-at positions are insignificant by comparison.)
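As a sketch building on the minimax above (best_move_at_depth() is a hypothetical wrapper that runs the search for every root move and returns the best one; a real engine would also abort cleanly mid-search when time runs out):

import time

# Iterative deepening: repeat the whole search one ply deeper each pass
# until the time budget runs out, keeping the deepest completed result.
def iterative_deepening(pos, budget_seconds):
    deadline = time.monotonic() + budget_seconds
    best, depth = None, 1
    while time.monotonic() < deadline:
        best = best_move_at_depth(pos, depth)  # hypothetical full search
        depth += 1
    return best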

With regard to pruning, there are several kinds. Alpha-beta pruning is just a way of using information from moves you've already looked at during the search to skip looking at some other moves. ("OK, I know this move won't be played, because that other move would be better in that case anyway.") It won't change the result from regular minimax, but it will speed it up. It does depend on looking at the more promising moves first; e.g. it might be an idea to look at a capture before a regular move.
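The same toy minimax from above with alpha-beta cut-offs added; alpha and beta are the scores each side can already guarantee, and the initial call would pass minus and plus infinity.

# Alpha-beta pruning on top of the negamax sketch: skip moves that
# provably cannot change the choice at the root.
def alphabeta(pos, depth, alpha, beta):
    moves = legal_moves(pos)      # ordering captures first prunes more
    if depth == 0 or not moves:
        return evaluate(pos)
    for move in moves:
        score = -alphabeta(apply(pos, move), depth - 1, -beta, -alpha)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # the opponent would avoid this line anyway: prune
    return alpha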

There are other kinds of pruning, where you don't look at moves you're fairly sure will be bad. E.g. it's probably not a good idea to make a move that loses your queen for no material gain, though skipping such moves does run the risk of missing a brilliant sac.

You also will probably want to introduce some randomness — at least for a game-playing engine — otherwise someone might be able to exploit your predictability.  (Arguably, it could also make it explore moves better that way.)

And you can choose to go deeper in certain positions, and do all kinds of fancy tricks.

Monte Carlo-based methods work a little differently, but there are many similarities. Often you take a look at the moves from a position and, if you haven't looked at them before, play out a random game (just keep playing random moves until the game ends, or something like that). This gives you some winning probability for each of the moves; once you have that, you pick one of them, look at the moves available after it, and repeat. The trick is in how you pick the move to look at: you start at the first move and walk downwards, making a random choice that takes two things into consideration: 1) how much information you have about that move (how many times you have played out a game from it), and 2) how good it seems (based on the winning probability, for example). As you look at moves deeper and deeper down, the information propagates up; after you have looked at 1. e4 e5 2. d4, you will have some information about all three moves up the chain. After an infinite number of roll-outs (the random games), this algorithm selects the same moves as minimax (that is, the optimal moves).

When it's time to actually select the move to be played, it picks the move with the best winning probability. (The reason you also want to look at moves you haven't examined much while searching is to make sure you explore enough possibilities, while still exploiting the moves that have performed best; a bit of a trade-off there.)
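A toy sketch of one such iteration in Python (expand() and random_game() are hypothetical helpers: expand() attaches a child Node per legal move, and random_game() plays random moves to the end and returns 1, 0.5 or 0 from the point of view of the side to move):

import math

# Each node remembers how often it was tried (visits) and how well it
# did (wins); UCT balances exploiting good moves against exploring
# rarely tried ones.
class Node:
    def __init__(self, pos, parent=None):
        self.pos, self.parent = pos, parent
        self.children, self.wins, self.visits = [], 0.0, 0

def uct(child, parent, c=1.4):
    if child.visits == 0:
        return float("inf")  # always try unvisited moves first
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts_iteration(root):
    node = root
    while node.children:                   # 1) select down the tree
        node = max(node.children, key=lambda ch: uct(ch, node))
    expand(node)                           # 2) hypothetical: add children
    leaf = node.children[0] if node.children else node
    result = random_game(leaf.pos)         # 3) roll out a random game
    while leaf:                            # 4) propagate the result up
        leaf.visits += 1
        leaf.wins += result
        result = 1 - result                # flip perspective each ply
        leaf = leaf.parent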

The brilliant thing about MC methods is that they don't require any heuristic. (AlphaZero does use a neural net to guide the search, which one could maybe think of as something like a self-taught heuristic. 🙂 That's kind of the secret sauce: just using MC methods with chess hasn't worked that great in the past, and just using a neural net for evaluating positions hasn't worked either, but taken together: ⚡!)

flashlight002

@fschmitz422 It seems the engine feedback system is not handling suggestions for endgame tactics very well. Would you agree? There have been cases of erroneous "missed win" labels etc., and now this as an extra example. Then there is also the video posted earlier showing questionable endgame move suggestions. Am I correct in reading a trend in these errors? If I am, the chess.com devs responsible for engine development and maintenance need to figure out where the problems lie and how to fix them. Yes?

flashlight002

@chrka Thanks for your insights into engine move-evaluation logic. I know that Stockfish uses aggressive alpha-beta pruning and late move reductions as part of its programming, thus allowing greater search depth on the more promising moves. I know one can also employ endgame tablebases to run with the engine. That would certainly improve the engine's IQ (for want of a better word) once it reaches an endgame with seven pieces or fewer on the board.
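For what it's worth, probing Syzygy tablebases locally is straightforward with the python-chess library once the table files are downloaded; a minimal sketch, assuming the files live in /path/to/syzygy:

import chess
import chess.syzygy

# Open a directory of downloaded Syzygy files; the standard sets cover
# positions with up to 7 pieces on the board.
with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
    board = chess.Board("4k3/8/8/8/8/8/8/4KQ2 w - - 0 1")  # KQ vs K
    print(tablebase.probe_wdl(board))  # win/draw/loss for the side to move
    print(tablebase.probe_dtz(board))  # distance to a zeroing move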

FiddlerCrabSeason

(original post deleted by mgt3)

...thanks PawnstormPossie for answering this question for me...

- M

FiddlerCrabSeason
PawnstormPossie wrote:

"self analysis" wasn't selected, it uses depth=20 as it does for the report...


Got it. Thanks for that.

-M

flashlight002

@PawnstormPossie @mgt3 Opening moves are analysed to d=35; it looks like the first 4 moves. Then there is a step down to 30 or 25, depending on complexity, before it all moves to analysis at d=20. My whole premise is that certain moves or positions will require more than d=20 to beat the horizon effect and arrive at a more accurate answer. So I feel that capping the analysis of every move at d=20 could be causing trouble in certain instances and impacting accuracy. Hence my idea to give users the option to choose a deeper scan if they are willing to wait that little bit longer.

erik
flashlight002 wrote:

So I feel that capping the analysis of every move at d=20 could be causing trouble in certain instances and impacting accuracy. Hence my idea to give users the option to choose a deeper scan if they are willing to wait that little bit longer.

This is in the works right now!

flashlight002

@erik That is brilliant!! Music to my ears. 🙂 Thank you for this update. Much appreciated! By the way, and this has nothing to do with this forum post, but I thought you should know: from Friday afternoon (South African time, so morning in the USA) all hell broke loose trying to use the site on the latest version of Chrome for Android. Elements of pages not loading or only partially loading, squares and lines behind headers and certain graphics, avatars and opponent names missing, entire sections of the analysis page not loading or rendering with lines through them or only bits of the elements showing, etc. I took lots of screenshots and have sent a full report to customer support. My LTE 4G connection is lightning fast. The site doesn't show these problems on Firefox for Android or the built-in Samsung browser; I use Chrome for Android because it is so much faster at loading pages. I really hope the dev guys can get to the bottom of this and fix it. I haven't seen anything like this before on any other website. Here is an example from the analysis page (every time I open the page it is never exactly the same: the affected areas are sometimes missing entirely, sometimes show as a ghost, sometimes have lines through them, etc.):


flashlight002

Below is a screenshot of an analysis of mine. The main topic I would like to bring to @erik's attention is this: one's saved self-analysis moves from during the game are brought into the analysis game list. These saved ideas may be brilliant or they may be terrible; often they are experimental. Now let's say one does a retry and gets it right. Just under the green "Perfect" heading is a suggested continuation variation to d=20. If one clicks the last move in this list, the entire variation gets inserted into the analysis game list. Great! One can go through the moves and investigate them. HOWEVER, these engine moves look no different in font style from my saved self-analysis moves! The same applies if one inserts a variation worked out in the "show lines" MultiPV section: those "correct" engine suggestions are in the same font style too. So everything looks the same!

MY SUGGESTIONS TO IMPROVE THIS DRAMATICALLY:

IDEA 1) Make all engine variations and moves added to the analysis game list a different font style, such as italics, to differentiate them from self-analysis moves.

IDEA 2) Give the background behind engine variations and moves a different colour instead of the standard grey. This would actually look quite neat, and one could find engine evaluations at a glance. Of the two ideas I like this one best. It is simple to implement too!!

Right now it is rather a big mess, with engine moves and self-analysis moves all looking the same. It is VERY hard to remember, when one comes back to the analysis, which lines are self-analysis and which are engine evaluation, because everything looks the same!


jas0501

Nice idea. 

With regard to sharing the PGN, that could be an issue. Currently the shared PGN inserted into the Analysis window shades the background of lines enclosed in parentheses. It may not be possible to support your suggestions given the PGN standard specification.

flashlight002

@jas0501 Thanks 🙂

Mmmm, I didn't consider the whole sharing issue. Portable Game Notation is a plain-text format, a VERY basic file architecture. I would be happy if, say, the shading option were chosen, even though the shading would be lost when a PGN is shared; if one thinks about it, all the colour coding is lost too. I guess if one were sending the PGN to a friend who knew nothing about the game, one would annotate the engine moves beforehand with something like "Engine suggested variation" to differentiate them from self-analysis moves. I would probably also clean up my own self-analysis moves and clear out the nonsense or speculative stuff.

Maybe another idea to differentiate engine evaluations from self-evaluations could be an automated annotation inserted before each engine evaluation, e.g. (Engine Evaluation 12.e6 d4 13...).
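That sort of label is easy to express in standard PGN, since comments in curly braces survive any plain-text transfer. A quick sketch with the python-chess library (the moves and the evaluation text are just an illustration):

import chess
import chess.pgn

game = chess.pgn.Game()
node = game.add_variation(chess.Move.from_uci("e2e4"))   # mainline: 1. e4
node = node.add_variation(chess.Move.from_uci("e7e5"))   # mainline: 1... e5

# Insert an alternative reply as a variation, labelled so that it stays
# recognisable as engine output even in the exported plain-text PGN.
alt = node.parent.add_variation(chess.Move.from_uci("c7c5"))
alt.comment = "Engine Evaluation: +0.3 at d=20"

print(game)  # the { Engine Evaluation ... } comment travels with the file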

9thBlunder

Chess.com should hire flashlight. It's the only way I'll renew my membership :)

flashlight002

Thanks @9thBlunder 🙂 🙂

flashlight002

Pay to work? Yeah, right. I don't come cheap... quality like this costs 🙂 ...but I have a special running right now on my services and my totally awesome creative and problem-solving skills! LOL.

Mmmm, I never thought of joining the beta club. Good point!

FiddlerCrabSeason

Hey all -

I was revisiting the bizarre results originally posted in this thread a while back (post #327). The results I received today were equally bizarre. Differently bizarre, but equally bizarre:

https://www.chess.com/analysis/game/daily/230408288?tab=analysis


Again, the actual move order = (thematic game starting with 1. e4 e6) 2. d4 d5 3. Nc3 Nf6 4. Bg5 Be7 5. e5 Nfd7 6. h4 Nc6 7. Nf3 h6 8. Bxe7 Qxe7 9. Qd2 a6 10. O-O-O Nb6 11. Rh3 Bd7 12. Rg3 f6 13. Re1 O-O-O 14. exf6 gxf6 15. Bd3 Rdf8 16. Rg6 h5 17. Kb1 Kb8 18. a3 Rfg8 19. Rxg8+ Rxg8 20. g3 1/2-1/2

- M

flashlight002

Hey @mgt3, when I follow your link to the game it now inserts all the moves in the right order! See the screenshot below. Your first game analysis clearly disregarded the opening pair of moves, putting move 2 in move 1's place, and then things got jumbled up further. One can see moves from the game; they just are not in the correct order, and the starting point 1. e4 e6 was ignored. I experienced a somewhat similar bug with the new analysis board when it launched a while back: it disregarded a position where Black was to move and put Black's move where White's move would go, causing a jumbled-up game as well.

Now, the fact that I get the right result when I redo the analysis means nothing, as we have seen this happen before: a problem is experienced, but when we rerun the game through the analysis board it has corrected itself. The real issue is that it gets it wrong initially. That's the bug. Luckily you have the screenshots and the move-order history to back up your report. I guess you will have to report this to membersupport@chess.com so the dev responsible for running the engine gets to see it. It is clearly a bug in the analysis of themed-opening games: specifically, in how it records the move order the first time it runs an analysis.