
What are your thoughts on these chess engines?


  • 10 months ago · Quote · #21

    EscherehcsE

    I'm sure all of the engines mentioned by the OP have positional aspects to their evaluation functions. The user's problem is that this evaluation process is typically a black box. The various evaluation criteria (material count and various positional adjustments) go into the black box, and a final evaluation number gets spit out and presented to the user: "This line is +1.24." OK, fine, but that number doesn't tell the user whether the line is tactical, positional, or some combination of the two. It takes a strong, knowledgeable player to understand what the line is doing.

  • 10 months ago · Quote · #22

    watcha

    Did you mean this one?

  • 10 months ago · Quote · #23

    watcha

    Or this one?

  • 10 months ago · Quote · #24

    2mooroo

    Use Stockfish because it is free and open-source software, unlike the proprietary Houdini, and the two are constantly back and forth in strength.

    "None of the engines mentioned above play positional/strategic chess, and in the event that any of these engines do suggest or play a positional move, it is simply sheer luck; it played or suggested the move solely because it had the highest value, with no idea that the move had any positional merits?"

    I think you misunderstand.  These engines are programmed to evaluate a position based on specific criteria.  For instance, the engines DO "understand" the positional idea of making pawn breaks to open the board up when holding the bishop pair.  But you have to keep in mind that this "understanding" is not the same as a GM's.  You cannot learn very well from an engine because it doesn't comprehend what exactly made the move a wise choice; all it knows is that, according to the numbers produced by searching through hundreds of thousands of lines, the move is best.

    A GM and an engine could both recommend the same move to you but only the GM can stop and say "This move is good because white does not have enough compensation for the doubled pawns and black will have excellent endgame prospects."  You could spend hours reading engine analysis and never reach this conclusion.  You would be left guessing at what exactly made the move so strong.  The reason for this is simple: unless you are a robot the method a computer uses to reach a move is totally useless to you.  If you want to become good at chess you have to master the pieces, how they interact, and patterns.  An engine can show you when a move is bad.  That's it.  You can't even use it to decide if a move is good or not because your analysis might be flawed yet the move is powerful for a completely different reason.
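    To make the "numbers, not understanding" point concrete, here is a toy sketch of a static evaluation function. This is purely illustrative; the terms and weights are invented and do not come from any real engine:

```python
# Toy static evaluation: material balance plus one positional bonus.
# Real engines combine hundreds of weighted terms like this.

PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate(white_pieces, black_pieces,
             white_has_bishop_pair, black_has_bishop_pair):
    """Return a score in centipawns from White's point of view."""
    score = sum(PIECE_VALUES[p] for p in white_pieces)
    score -= sum(PIECE_VALUES[p] for p in black_pieces)
    if white_has_bishop_pair:
        score += 50  # invented bishop-pair bonus
    if black_has_bishop_pair:
        score -= 50
    return score

# White is up a pawn and holds the bishop pair:
print(evaluate(["P", "P", "B", "B"], ["P", "B", "N"], True, False))  # 160
```

    Note how the material term and the positional bonus collapse into a single number ("+1.60"); nothing in the output records which factors produced it, which is exactly the black-box problem described above.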

  • 10 months ago · Quote · #25

    btickler

    watcha wrote:

    Did you mean this one?

     

    No, the Turk plays the Ruy Lopez Deferred Exchange variation poorly...mine not only plays all variations of the Ruy Lopez, it actually fries liver when it plays the Fried Liver Attack...

  • 10 months ago · Quote · #26

    Somebodysson

    interesting and instructive thread

  • 10 months ago · Quote · #27

    watcha

    Maybe it would be a good idea to modify the UCI standard so that the components of the evaluation function could be reported by engines and displayed by chess interfaces, instead of a single position value.
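    For what that might look like: the real UCI protocol only defines a single "score cp <n>" field in its "info" lines, but an extended line with component fields could be parsed the same way. The component names below ("material", "mobility", "kingsafety") are invented for illustration and are not part of the standard:

```python
# Hypothetical extension of a UCI "info" line that reports evaluation
# components alongside the total. Only "score cp" exists in real UCI;
# the other field names here are made up.

def parse_components(info_line):
    """Collect every '<name> cp <value>' pair from a UCI-style line."""
    tokens = info_line.split()
    comps = {}
    for i, tok in enumerate(tokens):
        if i + 2 < len(tokens) and tokens[i + 1] == "cp":
            comps[tok] = int(tokens[i + 2])
    return comps

line = "info depth 18 score cp 124 material cp 100 mobility cp 15 kingsafety cp 9"
print(parse_components(line))
# {'score': 124, 'material': 100, 'mobility': 15, 'kingsafety': 9}
```

    A GUI receiving such lines could then display the breakdown instead of just the total, which is essentially watcha's suggestion.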

  • 10 months ago · Quote · #28

    EscherehcsE

    The only concrete thing I can say about the choice of engine for manual analysis is that you want the engine to have multi-PV (multiple principal variations) capability. I'm not familiar with all of those engines, and I don't know if they are all multi-PV capable. If an engine doesn't have that, then I would cross it off the list.
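    In UCI terms, multi-PV is switched on with "setoption name MultiPV value N", after which the engine tags each candidate line with "multipv <rank>" in its output. A small sketch of collecting those ranked lines from raw engine output (the sample lines are invented, but the field names are standard UCI):

```python
# Collect the latest score and principal variation for each MultiPV rank
# from raw UCI "info" lines. In real use these lines stream from the
# engine process after "setoption name MultiPV value 3".

def best_lines(lines):
    ranked = {}
    for line in lines:
        tokens = line.split()
        if "multipv" in tokens and "pv" in tokens:
            rank = int(tokens[tokens.index("multipv") + 1])
            score = int(tokens[tokens.index("cp") + 1])
            pv = tokens[tokens.index("pv") + 1:]
            ranked[rank] = (score, pv)  # later depths overwrite earlier ones
    return ranked

output = [
    "info depth 12 multipv 1 score cp 35 pv e2e4 e7e5",
    "info depth 12 multipv 2 score cp 28 pv d2d4 d7d5",
    "info depth 12 multipv 3 score cp 20 pv g1f3 g8f6",
]
for rank, (score, pv) in sorted(best_lines(output).items()):
    print(rank, score, " ".join(pv))
```

    Seeing the second- and third-best lines side by side with their scores is what makes multi-PV so useful for manual analysis.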

  • 10 months ago · Quote · #29

    watcha

    All major engines have multi-PV. I have not tested all the engines in the Top 20, but I'm pretty sure they all have it.

  • 10 months ago · Quote · #30

    MervynS

    Since the vast majority of games below master level (and probably a very large number at master level and higher) are lost because of tactics or short-term calculation, I'm sure any engine rated 2500-2600 and up will point out the rather huge number of embarrassing moves we make during a game.

    I myself use Arena and Rybka 2.3.2a; I'll check out that Stockfish next.

    I'd say at this point the GUI's interface, stability, features, and how smoothly it runs are more important than which of the top 40 chess engines we use.

  • 10 months ago · Quote · #31

    EscherehcsE

    watcha wrote:

    Maybe it would be a good idea to modify the UCI standard so that the components of the evaluation function could be reported by engines and displayed by chess interfaces, instead of a single position value.

    The old Bringer GUI (ver 1.9) separated the static score into a material number and a positional number. :)

  • 10 months ago · Quote · #32

    Xilmi

    mldavis617 wrote:

    I don't think you can say that, exactly.  The algorithms are programmed by humans, and different engines have different strengths and emphases.  As a general rule I think most engines rate positions by material balance, but a line that looks good 10 or more moves ahead is likely to have a lot of strategic or positional merit.  Remember that these engines are rated above 3000 and Magnus is "only" 2800+.

    The problem is that the engines cannot tell you why a move is good except to show you the resulting lines and results of those lines.  They are not teachers, they are analyzers.

    The source code of Stockfish is free to look at, and for that engine I can say that the value of a position consists of far more than simple material balance.

    There are all sorts of evaluation bonuses and penalties for positional factors like open files, doubled pawns, doubled rooks, and rooks on the 7th or 8th rank.

    I think if one wanted, one could make an engine write an "explanation" for a move that breaks down all the components leading to the evaluation result, and it would almost read like a good annotation.
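    As a minimal sketch of that idea: if an engine exposed its evaluation components, a GUI could map them to canned prose. The component names, thresholds, and phrasings below are all invented for illustration; Stockfish does not expose its terms this way over UCI:

```python
# Turn a (hypothetical) evaluation-component breakdown into a rough
# text annotation. Component names and wording are invented.

def annotate(components):
    """Map component scores (centipawns, White's view) to comments."""
    notes = []
    if components.get("doubled_pawns", 0) < 0:
        notes.append("the doubled pawns are a long-term weakness")
    if components.get("rook_on_7th", 0) > 0:
        notes.append("the rook on the 7th rank is a strong asset")
    if components.get("open_file", 0) > 0:
        notes.append("control of the open file gives active play")
    return "; ".join(notes) if notes else "no notable positional factors"

print(annotate({"doubled_pawns": -20, "rook_on_7th": 35}))
# the doubled pawns are a long-term weakness; the rook on the 7th rank is a strong asset
```

    The hard part, of course, is that real evaluations mix hundreds of interacting terms, so the canned sentences quickly become repetitive, which is exactly the complaint about Fritz's auto-annotation in the next post.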

  • 10 months ago · Quote · #33

    mldavis617

    Fritz 13 attempts to do that, but the comments are repetitive and nothing like what a coach would say.  I'm sure there are other factors than simple material gain involved.  What was instructive to me (and I'm not a programmer) was listening to commentators like Nigel Short during the Candidates matches, when he would dismiss the computer's (Houdini 3?) evaluations at certain times, only to have the computer re-adjust its scoring within a move or two to conform to Short's evaluations.

