
How well do chess engines play chess?

StockfishDevelopment

Stockfish 6: The Strongest Chess Engine in The World!

Which just Happens to be FREE!

Check out the Only 24/7 Testing Site below...

http://spcc.beepworld.de

Stockfish 6 Rating = 3276 Elo

The Nearest Rival (after another (10) Stockfish Versions :) is Komodo 9.2+ = 3251

Stockfish will be the first Chess Engine to Rate at (3500) Elo :)

 
Blackbirdx61

I think the problem here is pointed out in your first reply. If I play a GM, the GM will of course understand the position better than I do. If that same GM, say Judith, seconds for a computer in a centaur match, she will inherently understand the position better than the computer as well, because the computer, for all its awesome calculating power, has no understanding whatever of what it is doing. It applies an algorithm to a situation, that produces a numbered score, and it picks the line with the highest score; but that score has no meaning to the computer.

So given the current state of the art, Judith can see positional subtleties that the computer cannot and promote a second-line move in the computer's assessment because of her superior judgment; but I have no doubt that in 10 or 20 years the algorithms will have improved to the point that no human's judgment will be able to compete with the computer's assessment.

I remember when Belle was the first engine to make Master. I still have the Feb 198n Chess Life announcing Cray Blitz winning the world computer championship, back when everyone thought the beasties could never compete with a real GM, and when the way to beat a computer was to get it to the endgame because their endgame play was awful.

No, soon enough they will objectively beat us across the board; but they will never understand what they are doing. They will never have that aha moment a 1500 woodpusher (more or less like myself) has on seeing a winning combination. It will remain meaningless numbers in a buffer, and computers will remain perfect idiot savants.

StockfishDevelopment
Blackbirdx61 wrote:

I think the problem here is pointed out in your first reply. If I play a GM, the GM will of course understand the position better than I do. If that same GM, say Judith, seconds for a computer in a centaur match, she will inherently understand the position better than the computer as well, because the computer, for all its awesome calculating power, has no understanding whatever of what it is doing. It applies an algorithm to a situation, that produces a numbered score, and it picks the line with the highest score; but that score has no meaning to the computer.

So given the current state of the art, Judith can see positional subtleties that the computer cannot and promote a second-line move in the computer's assessment because of her superior judgment; but I have no doubt that in 10 or 20 years the algorithms will have improved to the point that no human's judgment will be able to compete with the computer's assessment.

I remember when Belle was the first engine to make Master. I still have the Feb 198n Chess Life announcing Cray Blitz winning the world computer championship, back when everyone thought the beasties could never compete with a real GM, and when the way to beat a computer was to get it to the endgame because their endgame play was awful.

No, soon enough they will objectively beat us across the board; but they will never understand what they are doing. They will never have that aha moment a 1500 woodpusher (more or less like myself) has on seeing a winning combination. It will remain meaningless numbers in a buffer, and computers will remain perfect idiot savants.

Long Winded beyond belief! :(

LoekBergman

@StockfishDevelopment: what is in a name? :-) I believe your conviction that Stockfish is the best chess engine.

@Blackbirdx61: thanks for your interesting contribution.

You describe the source of my question very well. Comparing the strength of chess engines and humans using rating - which is roughly the chance of winning - might not be the best method. Chess engines win, as they have the bigger calculating capacity. But is the chance of winning the best way to compare how chess engines and humans understand chess? It works between humans, and it works between chess engines. But what is the best way to compare chess engines with humans?

Nakamura said at the Sinquefield Cup, after his game against Wesley So, that So should not trust the evaluations of computers blindly, as they sometimes do not understand a position very well. That shows that there is a difference in the way chess engines and humans understand chess positions, and that their different ways of understanding might be compared with each other in some way other than winning chances. And then I might understand how well chess engines play chess.

Blackbirdx61

Thanks LB. : )

@Stockfish, sometimes it takes a few words to lay out an idea. I was trying to lay out the difference between a human's judgment and a computer's assessment. In this instance I see these two seeming synonyms as very different things. But I did find it a bit amusing that you chose to post every letter of my 'Long Winded' post. BB.

StockfishDevelopment
Blackbirdx61 wrote:

Thanks LB. : )

@Stockfish, sometimes it takes a few words to lay out an idea. I was trying to lay out the difference between a human's judgment and a computer's assessment. In this instance I see these two seeming synonyms as very different things. But I did find it a bit amusing that you chose to post every letter of my 'Long Winded' post. BB.


It's always best to show all the Content regardless, even if it takes up an entire encyclopedia :)

Hawksteinman

Chess computers are naturally materialistic.
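That materialism is literal in classical engines: the core of a hand-written evaluation function is a weighted sum of the material on the board, to which positional terms are then added. A minimal Python sketch, assuming the conventional 1/3/3/5/9 pawn-unit scale (the function name and list-of-letters input are invented for illustration; real engines like Stockfish use far richer representations and hundreds of terms):

```python
# Conventional piece values in pawn units; the king is not counted.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_eval(white_pieces, black_pieces):
    """Material balance from White's point of view.

    Each argument is a list of piece letters, e.g. ["Q", "R", "P"].
    The result is just a number: positive favours White, negative
    favours Black, and it carries no meaning beyond "bigger is better".
    """
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black

# Queen + pawn vs. two rooks: 10 - 10 = 0, "equal" by raw material,
# even though a human would call such positions highly unbalanced.
print(material_eval(["Q", "P"], ["R", "R"]))  # 0
```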

Earth64

People who don't know how to analyze scientifically argue stupidly.

uri65
Blackbirdx61 wrote:

No, soon enough they will objectively beat us across the board; but they will never understand what they are doing. They will never have that aha moment a 1500 woodpusher (more or less like myself) has on seeing a winning combination. It will remain meaningless numbers in a buffer, and computers will remain perfect idiot savants.

What does it mean "to understand what they are doing"? How do you know that computers don't understand but humans do?

SmyslovFan

Here's a short answer: Computers play about 3300 strength now while the best humans play ~2800 strength. The best computer would have an expected score of ~94.7% against Magnus Carlsen. Carlsen would probably get about one draw out of ten games against the best engine in standard time controls.

http://www.computerchess.org.uk/ccrl/4040/
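For reference, the ~94.7% figure follows from the standard Elo expected-score formula. A quick check in Python, using the 3300 and 2800 ratings quoted above (the function name is mine):

```python
def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model:
    E = 1 / (1 + 10 ** ((Rb - Ra) / 400))."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 500-point gap (3300 vs. 2800) gives about a 94.7% expected score,
# matching the figure quoted above.
print(round(expected_score(3300, 2800), 3))  # 0.947
```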

Computers aren't perfect. They still lose an occasional game. This can be seen in the various ICCF tournaments where engines are allowed. Also, matches between the same program will produce many decisive results. Computers aren't perfect. 

One GM recently said that computers don't always find the best move, but they never blunder. 

KM378
SmyslovFan wrote:

Here's a short answer: Computers play about 3300 strength now while the best humans play ~2800 strength. The best computer would have an expected score of ~94.7% against Magnus Carlsen. Carlsen would probably get about one draw out of ten games against the best engine in standard time controls.

http://www.computerchess.org.uk/ccrl/4040/

Computers aren't perfect. They still lose an occasional game. This can be seen in the various ICCF tournaments where engines are allowed. Also, matches between the same program will produce many decisive results. Computers aren't perfect. 

One GM recently said that computers don't always find the best move, but they never blunder. 

That seems to be true!

kkl10

Computers (or artificial intelligence) lack intuition, the ability to perform holistic evaluation of a position, the ability to strategize according to a "theme" or symbolic expression, and some other things intrinsic to human cognition (and metacognition).

In short, humans still have the edge over computers (AI) when it comes to heuristics. That's where I'd imagine that human intervention can significantly improve machine performance in chess - "educated heuristics".

Heuristic techniques in computer science are still very rough and archaic compared to humans' innate ability. Heuristics is a rapidly developing discipline in many fields, and it may get a boost from artificial neural networks, but I'm just speculating...

SmyslovFan

No need to speculate.

ICCF allows engine use. Most players who use engines there and second-guess the engines do poorly compared to those who are just engine jockeys. It seems that the people who do best there are +1800 strength and trust the engines most of the time. Reading some of the comments of these people, their main task is simply steering the engines to consider different alternatives more deeply.

Nakamura recently lost a centaur vs computer match because he kept pushing too hard against the engine's recommendations.

mcris

@kkl10: Why would chess engines need (more) heuristics when they already have over 3000 rating and beat any human on this planet?

LoekBergman

@Earth64: please explain why this statement should not be interpreted as a self-fulfilling prophecy.

@Smyslovfan: There is no doubt chess engines perform better than humans. I might be more ambitious than the GM who said that chess engines never blunder. They never blunder at the human level. Some years ago Mato Jelic complained about constantly being crushed by Rybka. Then he ran an experiment and let Rybka play against Hiarcs. He was completely shocked when he saw that Hiarcs made Rybka look stupid. It shows that the two chess engines make different kinds of evaluations. Each chess engine seems to understand chess in a different way. It might be interesting to make a formal language which can show those kinds of differences in the way chess is understood by different engines and humans.

@mcris: to be the best chess engine. That is a unique selling point.

kkl10
mcris wrote:

@kkl10: Why would chess engines need (more) heuristics when they already have over 3000 rating and beat any human on this planet?

Having better concrete performance than humans doesn't automatically mean that there's no room for improvement. The fact that, as the OP said earlier, human intervention can improve machine performance in Centaur Chess suggests as much. Since the methods are different and have their own merits (heuristics vs. brute force), it makes perfect sense to mix them for improved performance. Maybe not necessarily absolute performance, but rather context-specific performance.

On a basic level, the question of whether better heuristics are pertinent or necessary for machines comes down to how much they may help eliminate redundancy and leverage resources. That's a call for developers to make, not me. But it's not just about that. On a more advanced level, it's also about developing the ability to strategize, or to "think" long-term according to a particular motive, concept, goal, symbolic expression, whatever. Strategizing is, in a way, a heuristic process. Humans can strategize, but machines can't, or their ability to do so is still very rudimentary.

I think this could improve machine performance at least in context-specific situations where contextless brute force is not the best method. It could help machines "see" chess a bit more like humans do. Moreover, a "heuristic machine" could enhance our ability to ever solve chess. But that's not within current technological means, I think. Human-like heuristics would probably require the ability to learn, which is why artificial neural networks have crossed my mind.

SmyslovFan

Loek, there's a difference between "mistakes" and "blunders". You will be hard pressed to find a position where an engine drops a piece. There are a few endgames where the computer misevaluates the position, but those still require masterful play to prove the error.

The GM's point was that humans can find the best move in some positions where engines sometimes fail to find the best move. But the engine is constantly playing moves that are good enough, and will eventually wear down the human's resistance.

In ICCF, people using multiple engines will find the occasional mistake that one engine will make. It's enough of a difference to win a tournament, but not necessarily a single game. It's hard to call such mistakes "blunders" if it requires an engine to find the mistake.

Engines still have difficulty calculating pawn storms and recognising fortresses. They also have difficulty properly evaluating some endgames. But the areas where they have difficulty are dwindling. 

As I said, computers aren't perfect. But they're far better than the best human. 

KM378
SmyslovFan wrote:

Loek, there's a difference between "mistakes" and "blunders". You will be hard pressed to find a position where an engine drops a piece. There are a few endgames where the computer misevaluates the position, but those still require masterful play to prove the error.

The GM's point was that humans can find the best move in some positions where engines sometimes fail to find the best move. But the engine is constantly playing moves that are good enough, and will eventually wear down the human's resistance.

In ICCF, people using multiple engines will find the occasional mistake that one engine will make. It's enough of a difference to win a tournament, but not necessarily a single game. It's hard to call such mistakes "blunders" if it requires an engine to find the mistake.

Engines still have difficulty calculating pawn storms and recognising fortresses. They also have difficulty properly evaluating some endgames. But the areas where they have difficulty are dwindling. 

As I said, computers aren't perfect. But they're far better than the best human. 

Well said!

kkl10
SmyslovFan wrote:

Loek, there's a difference between "mistakes" and "blunders". You will be hard pressed to find a position where an engine drops a piece. There are a few endgames where the computer misevaluates the position, but those still require masterful play to prove the error.

Engines still have difficulty calculating pawn storms and recognising fortresses. They also have difficulty properly evaluating some endgames. But the areas where they have difficulty are dwindling.

This too is where engines could benefit from better heuristics, I think: the pattern-recognition abilities that allow humans to understand positional play, which engines lack.

xman720

In the arena of "computer vs human play style" I see the humans as being the flawed version.

A human is just using brute force and generic pattern memorization to try and guess a move based on what he's seen before. His brain searches millions of similar-looking positions and memories and tries to recall the consequences based on what happened before.

A computer systematically checks every move and evaluates the following position.

Which style of play is more elegant?

One is random and erratic while the other is logical and precise.

I believe it is humans who are crude and clumsy in the way we choose our moves.

"Learning from experience" is something that people like to boast about as a strong part of being human, but I think that learning from experience and pattern recognition are inherently clumsy ways to learn skills.