
Does a chess engine "understand" chess?

mujakapasa

Hey folks

What would you say: does a chess engine "understand" chess? (Let's assume there are different levels of understanding. Is there a level at which you would say a chess engine really understands chess?)

Some of you may have read or heard about ELIZA, SHRDLU, or Schank and Abelson's programs, and about the "Chinese Room" thought experiment from John R. Searle, in which he argues against the thesis of strong AI, which says that an appropriately programmed computer really is something like a mind, in the sense that computers, given the right programs, can literally be said to understand and have other cognitive states. (The context of Searle's writing is the question of what psychological and philosophical significance we should attach to efforts at computer simulation of human cognitive capacities.)

In short, he argues that computers and programs are just calculating and processing information, and that no computer is able to develop semantics out of syntax. He would say a human "understands" chess, but a chess computer / chess engine doesn't understand chess, since it is only calculating, etc. But what is the difference between the play of a chess computer / chess engine and the play of a human chess player?

I understand the arguments concerning natural language understanding, intentionality, etc.
But in the case of chess engines I would definitely say that a chess engine understands chess. It understands the rules, pins, winning material, forks, hanging pieces, etc. The way an engine plays may be different from the way a human plays, but isn't its way of playing somehow exactly the same as the human player's: calculating positions, trying out some variations and tactics, etc.?
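
To make concrete what I mean by "calculating positions", here is a rough toy sketch, not any real engine's code, just my own illustration, assuming the python-chess library:

```python
# Toy sketch of "calculating": try moves, look a few plies ahead,
# and score the resulting positions by counting material.
import chess  # pip install python-chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def evaluate(board: chess.Board) -> int:
    """Crude material count from White's point of view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score

def minimax(board: chess.Board, depth: int) -> int:
    """Assume both sides pick the move this crude evaluation likes best."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)                      # try a variation...
        scores.append(minimax(board, depth - 1))
        board.pop()                           # ...and take it back
    return max(scores) if board.turn == chess.WHITE else min(scores)

print(minimax(chess.Board(), 2))  # 0: material stays level two plies in
```

Real engines add enormous refinements (alpha-beta pruning, neural evaluations), but the skeleton of "try variations, evaluate, pick the best" is the same.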

Maybe I just don't understand enough about how chess engines work, but in the case of chess engines I would say they understand chess on a specific level.

(Let's assume the level of a future Stockfish 50: will it understand chess?)

willitrhyme

Without an ego and everything that comes along with it, like pride, fear, a sense of beauty, etc., no machine ever truly "plays" any game; they're just the soulless performers of a mental task given to them.

Rocky64

That's a very interesting question, and I agree with Searle (one of my favourite philosophers) who's skeptical of "strong" artificial intelligence. Chess engines don't really understand chess, because the term "understand" implies a subjectivity that computers lack. A computer running a chess program is merely following a set of objective rules, which have no meaning to an inanimate machine. A human being subjectively wants to win a game but a program doesn't want anything - it simply does things objectively, and though we may project that Stockfish wants to beat you, that's just an illusion.   

mujakapasa
Rocky64 wrote:

That's a very interesting question, and I agree with Searle (one of my favourite philosophers) who's skeptical of "strong" artificial intelligence. Chess engines don't really understand chess, because the term "understand" implies a subjectivity that computers lack. A computer running a chess program is merely following a set of objective rules, which have no meaning to an inanimate machine. A human being subjectively wants to win a game but a program doesn't want anything - it simply does things objectively, and though we may project that Stockfish wants to beat you, that's just an illusion.   


I agree with you, but I question whether a chess engine really doesn't want to win. Of course it doesn't want to win in the sense a human being does. There are no thoughts like, "When I win against Kasparov, then I'm really strong. I want to be really strong, so therefore I have to win against Kasparov." I agree that for an engine the opponent, the rating, and the nice feeling after a win don't matter. But the goal of a chess engine is to achieve checkmate, or, when the situation is hopeless, a draw. If the engine didn't want to win, then all the calculations, variations, and tactics would be meaningless. At least that's what I'm assuming.

That's why I tried to argue in terms of different levels of understanding. A chess engine doesn't understand the history of chess, or the feeling of playing chess, or what it means to improve one's own rating by 1000 points, etc. But it somehow understands (even if it's just calculating) that it doesn't make sense to sacrifice a queen just to get a pawn (of course there are situations where such a sacrifice is possible, but I mean in general).

When I add 1 to 2, I get 3. When a computer adds 1 to 2, it also gets 3. Why can we say that a human being understands that 1 + 2 equals 3, but that a computer is just manipulating and processing information, so it doesn't understand that 1 + 2 equals 3?
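
And at the machine level, even that addition is nothing but shuffling bit patterns. A toy Python sketch, just for illustration, of what an adder circuit effectively does:

```python
# Addition as pure symbol manipulation: only XOR, AND and shifts,
# roughly what an adder circuit does. Nothing here "knows" it is adding.
def add(a: int, b: int) -> int:
    """Add two non-negative integers without ever using +."""
    while b:
        carry = a & b    # bit positions that overflow
        a = a ^ b        # sum of the bits, ignoring carries
        b = carry << 1   # carries move one position to the left
    return a

print(add(1, 2))  # prints 3
```

Whether producing the 3 this way counts as understanding that 1 + 2 = 3 is exactly the question.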

mujakapasa
willitrhyme wrote:

Without an ego and everything that comes along with it, like pride, fear, a sense of beauty, etc., no machine ever truly "plays" any game; they're just the soulless performers of a mental task given to them.

Nice argument. Haha, after reading it, I just thought that there are human beings who are exactly like what you wrote about machines.

mujakapasa
mujakapasa wrote:
Rocky64 wrote:

That's a very interesting question, and I agree with Searle (one of my favourite philosophers) who's skeptical of "strong" artificial intelligence. Chess engines don't really understand chess, because the term "understand" implies a subjectivity that computers lack. A computer running a chess program is merely following a set of objective rules, which have no meaning to an inanimate machine. A human being subjectively wants to win a game but a program doesn't want anything - it simply does things objectively, and though we may project that Stockfish wants to beat you, that's just an illusion.   


I agree with you, but I question whether a chess engine really doesn't want to win. Of course it doesn't want to win in the sense a human being does. There are no thoughts like, "When I win against Kasparov, then I'm really strong. I want to be really strong, so therefore I have to win against Kasparov." I agree that for an engine the opponent, the rating, and the nice feeling after a win don't matter. But the goal of a chess engine is to achieve checkmate, or, when the situation is hopeless, a draw. If the engine didn't want to win, then all the calculations, variations, and tactics would be meaningless. At least that's what I'm assuming.

That's why I tried to argue in terms of different levels of understanding. A chess engine doesn't understand the history of chess, or the feeling of playing chess, or what it means to improve one's own rating by 1000 points, etc. But it somehow understands (even if it's just calculating) that it doesn't make sense to sacrifice a queen just to get a pawn (of course there are situations where such a sacrifice is possible, but I mean in general).

When I add 1 to 2, I get 3. When a computer adds 1 to 2, it also gets 3. Why can we say that a human being understands that 1 + 2 equals 3, but that a computer is just manipulating and processing information, so it doesn't understand that 1 + 2 equals 3?


I guess in the end it's all about language, sentences, and mental concepts. It's also difficult to say what it means to understand something. I mean, does Magnus Carlsen understand chess the same way Garry Kasparov does? Or better, or differently? Tricky thoughts.

DiogenesDue

I actually think that chess engines do understand chess...it is the imprecise and vague definition of "understanding" that is the problem, as other people have also mentioned.  If a lab rat runs mazes, does the rat "understand" mazes?  Well, it has memorized its current maze, and X number of past mazes that it still remembers, and it will also understand that the mazes periodically change, and it will have some rudimentary notion for itself about how it traverses new mazes when that happens.  Is that "understanding"? 

I would say yes.  But that's a mammal.  What about your car?  A modern car tells you when your tire pressure is low.  It has a sensor (perceptions), and it has a CPU (an interpretive brain).  Does your car "understand" low tire pressure?  I would say yes.  Does it understand tires?  No.  But it isn't designed to and doesn't need to.

Chess engines are programmed to follow the rules of chess; they receive input, play games, store results and evaluations, and even impart results back after it's all over.  Does an engine understand chess?  I would say yes.

Will an engine ever be able to explain to you in human terminology why it chose a certain move over another move and have a conversation about a game it played?  Not anytime soon.  But imparting knowledge in a human-friendly format is *not* required to meet the basic definition of understanding.

If someone adds a need for self-awareness and/or the ability to teach a human being exactly how to evaluate and play the same moves the engine does, then I am not sure that will ever come to pass.  By the time the technology would allow for engines to communicate that well, their "understanding" of chess will be so far beyond where it is now that it would be pointless to try to communicate it, in the same way that Einstein trying to teach a toddler relativity would be a waste of time.  What good would it do if an engine told Carlsen 10 years from now, "in this position, the light-squared bishop is the key...to understand why, if you calculate these very particular 5 branches out to 60 ply deep, you will see that XYZ vs. ABC is the deciding factor..."?  No human will ever be able to do that.

mujakapasa
PawnstormPossie wrote:

An engine, computer, etc...understand nothing. No brain for understanding.

Why is this so difficult to understand?

Because it is not as easy as you write. So your explanation is: just because they have no brain, they do not understand? So every single artificial intelligence, even the crazy self-learning neural-network systems that learn by themselves, none of them understand, because they have no brain?

I mean, the question is not about consciousness or feelings or the soul; it's about levels of understanding. Then I could also say: if we can create something like a brain, then it understands? (That is exactly what neural networks try to simulate.) And by that logic, a dog and a horse, a pig and an elephant would also understand chess, because, you know, they have a brain?

That's why it's so difficult, and why I'm collecting different views and opinions.

mujakapasa
btickler wrote:

I actually think that chess engines do understand chess...it is the imprecise and vague definition of "understanding" that is the problem, as other people have also mentioned.  If a lab rat runs mazes, does the rat "understand" mazes?  Well, it has memorized its current maze, and X number of past mazes that it still remembers, and it will also understand that the mazes periodically change, and it will have some rudimentary notion for itself about how it traverses new mazes when that happens.  Is that "understanding"? 

I would say yes.  But that's a mammal.  What about your car?  A modern car tells you when your tire pressure is low.  It has a sensor (perceptions), and it has a CPU (an interpretive brain).  Does your car "understand" low tire pressure?  I would say yes.  Does it understand tires?  No.  But it isn't designed to and doesn't need to.

Chess engines are programmed to follow the rules of chess; they receive input, play games, store results and evaluations, and even impart results back after it's all over.  Does an engine understand chess?  I would say yes.

Will an engine ever be able to explain to you in human terminology why it chose a certain move over another move and have a conversation about a game it played?  Not anytime soon.  But imparting knowledge in a human-friendly format is *not* required to meet the basic definition of understanding.

If someone adds a need for self-awareness and/or the ability to teach a human being exactly how to evaluate and play the same moves the engine does, then I am not sure that will ever come to pass.  By the time the technology would allow for engines to communicate that well, their "understanding" of chess will be so far beyond where it is now that it would be pointless to try to communicate it, in the same way that Einstein trying to teach a toddler relativity would be a waste of time.  What good would it do if an engine told Carlsen 10 years from now, "in this position, the light-squared bishop is the key...to understand why, if you calculate these very particular 5 branches out to 60 ply deep, you will see that XYZ vs. ABC is the deciding factor..."?  No human will ever be able to do that.


I love your example at the end. Thanks for your arguments and examples; that's pretty much exactly what I'm thinking. But as you said, I guess in the end it will all depend on a definition of understanding.

drmrboss

The strength of an engine has two components:

1. Search

2. Evaluation

Many people think engines are superior because engines can search a million times faster than a human.

If we handicap an engine's search to be human-like, allowing only 10 positions searched per move, then we can compare the evaluation (or knowledge) of humans vs. engines.

If Stockfish searches only 10 positions per move, its Elo is approximately 1200 (equivalent to the knowledge of an average chess player).

If Leela searches only 10 positions per move, its Elo is approximately 2300. (That means Leela's core knowledge is equivalent to FM level.)

Don't believe it? You can play it live at another site (I can't mention the link, but just ask the Leela Discord about the "mini human bot", where the bot searches only 6 positions per move). It is just a small Leela net running on a $30 Raspberry Pi chip, but it beat me hard, 6-0, and I am 2100 in blitz. There is already a Leela net bigger and stronger than AlphaZero; if that net were actually running, it might be at IM-level knowledge, I guess.
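
If you want to try the handicap yourself, here is a minimal sketch; it assumes the python-chess library and a Stockfish binary on your PATH:

```python
# Minimal sketch: cap the engine's search at ~10 nodes per move, so its
# raw evaluation ("knowledge") does the work instead of brute-force search.
# Assumes `pip install python-chess` and a local "stockfish" binary.
import chess
import chess.engine

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    result = engine.play(board, chess.engine.Limit(nodes=10))
    print("Move chosen on ~10 nodes:", result.move)
```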


mujakapasa
PawnstormPossie wrote:

Re: the car

The car's computer reacts as programmed if a sensor's reading meets specific criteria. If the sensor is faulty, the car can "understand" only based on the reading.

Ok, if you insist engines and machine learning "understand" then you misunderstand. They do what they're programmed to do based on the understanding/reasoning of the people that programmed them.

If you think pigs, cows, etc, "understand" because they have a brain then you might look into psychology a bit.


I don't think that. But when you argue that to understand something you need a brain, then this is an example of where it leads: they all have a brain, so they understand. I studied psychology and philosophy, but I definitely have to read all the theories and assumptions again, because with your "explanations" everything seems to be so easy that it could be that I'm just too stupid to understand.

DiogenesDue

Human beings also do what they are programmed to do, based on the understanding/reasoning of the people that programmed them (i.e., everyone they interact with).

The difference is pretty simple, to my way of thinking...take a can opener.  A hand-turned can opener does not understand how to open a can.  An electric can opener that you place the whole can into and then press a button, while ensuring the correct placement of the can, does not understand how to open a can either.

But an electric can opener that notices the can with a sensor and locks it into place, then rotates it and opens the can itself, then pulls the lid off with a magnet...does "understand" how to open a can.

The distinction is very black and white, really.  The "understanding" can opener:

- Perceives the can on its own (sensor/external input)

- "Knows" what to do with it (knowledge/rules/definitions)

- Takes action and produces and/or reports results based on its understanding (output/communication)

That's all that it takes to meet *and demonstrate* a rudimentary definition of "understanding"..."be knowledgeably aware of the character or nature of".
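
In code, that rudimentary definition is nothing more than a perceive/know/act loop. A hypothetical sketch, with every name invented purely for illustration:

```python
# Hypothetical "understanding" can opener: perceive -> know -> act/report.
class CanSensor:
    def detects_can(self) -> bool:
        return True  # stub standing in for real hardware

class Opener:
    def lock_and_rotate(self) -> None:
        print("can locked and rotating")   # built-in "knowledge" of cans

    def lift_lid(self) -> None:
        print("lid pulled off by magnet")

def run(sensor: CanSensor, opener: Opener) -> str:
    if not sensor.detects_can():           # perceives the can on its own
        return "no can detected"
    opener.lock_and_rotate()               # "knows" what to do with it
    opener.lift_lid()                      # takes action...
    return "can opened"                    # ...and reports results

print(run(CanSensor(), Opener()))
```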

You are also trying to get other definitions in there..."perceive the intended meaning of" and "perceive the significance, explanation, or cause of" and "interpret or view (something) in a particular way".

The problem with the latter dictionary definitions is that they are all loaded: you can add an implied "as a human being would" to each of them.


mujakapasa
PawnstormPossie wrote:

If you studied psychology, why are you...never mind. Did you pass the course?

That doesn't have anything to do with the topic, but sure I did. I read about Fodor, Searle, Churchland, Dennett, Chalmers, Turing, Putnam, Nagel, and Armstrong. Maybe you can solve all the problems, if it's so easy. It would be very nice if you could bring in your knowledge and support all the efforts of cognitive psychologists, computer scientists, biologists, and neuroscientists.

Flickas

This is an interesting question because it has nothing to do with computer "consciousness" or "self-awareness." The question is whether a computer, using either mass variation crunching like the early successful programs, pattern-recognition software, or pruning software like Leela, "understands" chess in a manner equivalent to a human playing chess at a competent level.

I would tend to believe the answer is yes. I suspect human intuition in chess, the understanding in humans most difficult to fathom, is based in line pruning and pattern recognition. However, this is not at all clear. It is not a computer's approach to chess we do not understand; we do not understand how humans can play chess at such a high level when it is manifest that the human brain cannot consciously follow the tree of variations in a complex middlegame position in any depth.

Flickas
Rocky64 wrote:

That's a very interesting question, and I agree with Searle (one of my favourite philosophers) who's skeptical of "strong" artificial intelligence. Chess engines don't really understand chess, because the term "understand" implies a subjectivity that computers lack. A computer running a chess program is merely following a set of objective rules, which have no meaning to an inanimate machine. A human being subjectively wants to win a game but a program doesn't want anything - it simply does things objectively, and though we may project that Stockfish wants to beat you, that's just an illusion.   

This only means that computers don't understand why we play chess. Big deal. My wife doesn't understand why we play chess either, and she is quite intelligent.

mujakapasa

Flickas wrote:

This is an interesting question because it has nothing to do with computer "consciousness" or "self-awareness." The question is whether a computer, using either mass variation crunching like the early successful programs, pattern-recognition software, or pruning software like Leela, "understands" chess in a manner equivalent to a human playing chess at a competent level.

I would tend to believe the answer is yes. I suspect human intuition in chess, the understanding in humans most difficult to fathom, is based in line pruning and pattern recognition. However, this is not at all clear. It is not a computer's approach to chess we do not understand; we do not understand how humans can play chess at such a high level when it is manifest that the human brain cannot consciously follow the tree of variations in a complex middlegame position in any depth.

I totally agree with you.

mujakapasa

PawnstormPossie wrote:

LoL, I'm out

People love their robots because they're so understanding.

I guess you didn't even read my first post; you just read the thread title and answered it. The question is about chess engines and what the difference in understanding is between them and humans while playing chess. I don't believe in general that robots understand or have feelings or a soul or consciousness, etc. Don't mix your stuff in here.

DiogenesDue
PawnstormPossie wrote:

I read...

Perhaps it is because you don't understand how the engines work on a basic level. They don't think; they calculate. They simply don't understand the reasoning; they merely compute values using formulas, tables, etc.

If you insist they understand, fine. We're all entitled to our opinions. No need to argue over that.

Is there a way you could reword what you're asking and have the same meaning?

That's your impasse right there.  Your definition calls for the engine to understand not only chess, but also the "reasoning", i.e. human reasoning (because by definition, there's no other kind), behind chess and the moves it makes.  Others' definitions of "understanding" do not require this caveat.  The word itself is ambiguous and will not settle the argument.  This is the problem of all language, which is inherently imprecise.

Given today's world and the quest for AI, as well as the possibility of intelligent life elsewhere in the universe, the definition of "understanding" will have to be modified if such things are discovered, though.  Meanwhile, the earth is still the center of the solar system...

mujakapasa

PawnstormPossie wrote:

I read...

Perhaps it is because you don't understand how the engines work on a basic level. They don't think; they calculate. They simply don't understand the reasoning; they merely compute values using formulas, tables, etc.

If you insist they understand, fine. We're all entitled to our opinions. No need to argue over that.

Is there a way you could reword what you're asking and have the same meaning?

Let me try to reword what I mean. I understand that chess engines only calculate. They evaluate given positions and try to find the best move in each of them. They do not think or believe or love or whatever. In my definition of understanding, to understand something does not require having a soul or consciousness. But you do have to be able to distinguish good moves from bad ones and to use tactics and strategies.

What's the difference in the following example? You play against a chess engine, and then you play against a human being. Assume that you don't know which is which. Can you say which one is the engine and which one is the human being? Intuitively most would say the human being understands chess but the chess engine doesn't. Why is that so? What does the human being do differently? I mean, he also calculates and evaluates, so what's the difference? That's what I am talking about. In no way am I saying that machines or robots or even chess engines can feel or think the way humans do. But in my opinion you can say that a chess engine understands how to play chess on some level, even if it's just that you call the calculating and evaluating a sort of understanding.

Flickas

I think the discussion has strayed too far into a general discussion of what "understanding" means and has not focused sufficiently on the nature of chess (a closed system with a finite number of variables) and on whether humans actually understand chess. I would suggest that a traditional appeal of chess is that humans don't understand it, and furthermore that the computer age has in a sense hurt part of the appeal of chess, since the bots have removed so many mysteries.

In a theoretical online Turing test of chess, which I undertake every time I play on Chess.com, I feel that you can tell a player is using a computer PRECISELY because he seems to understand the game wonderfully and makes unexpected but brilliant moves, annihilating my game. Which is when I jab that report button screaming CHEATER CHEATER CHEATER, and my wife sez, Dear, shouldn't you do something else?