Does a chess engine "understand" chess?

DiogenesDue
RedGirlZ wrote:

The simplest way to word this is that an engine like Stockfish isn't autonomous. Until AIs are autonomous, or a chess engine is produced that is autonomous, an engine will never understand chess. Understanding isn't something that exists until something has autonomy. The engine has been programmed to perform a task. In chess, that task includes analysing where to move these "pieces", which are all represented by a numerical value, in order to achieve a result they are programmed to achieve, which for us is getting the best possible result in a chess game. Keep in mind that for an engine this isn't "the best possible result" they are searching for; they are just completing a task DESCRIBED by humans as the best possible result.

We're not trying to determine whether chess engines are free or deserve civil rights.  Autonomy in and of itself is not relevant to the concept of "understanding" something.  An indentured servant has no autonomy, yet they can still figure out how a shovel works.

DiogenesDue
RedGirlZ wrote:
 

We're not trying to determine whether chess engines are free or deserve civil rights.  Autonomy in and of itself is not relevant to the concept of "understanding" something.  An indentured servant has no autonomy, yet they can still figure out how a shovel works.

I wasn't talking about rights, Einstein.

...and yet, you used a word that is defined as the right or condition of self-government, Sherlock.  Why would someone/something that gains autonomy suddenly become more or less capable of understanding something?  You are looking for a different/better word.  Autonomy and the visible results thereof may be an indirect indicator of many things...self-awareness, intelligence, imagination, planning, etc., but it actually proves or causes none of those things.

Synonyms of "autonomy":

accord, choice, free will, self-determination, volition, will, freedom, independence, liberty, self-governance, sovereignty

...note the lack of "understanding", "knowledge", or "independent thought" in the list of synonyms.

You can teach a blind man to play chess in his head, having never seen or touched actual chess pieces, purely through description.  Does this mean he would not be capable of understanding the game either?  Your argument is typical of a person brought up watching too many movies and absorbing too many stereotypes about the failings of machines/computers/robots vs. human beings (movies which are made to prey upon human fears of losing primacy in their world...).

A numerical value is no better or worse a way to represent the logical construct that chess represents than any other descriptor.
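To make the "numerical value" point concrete, here is a minimal sketch of the kind of material count a classical evaluation starts from. This is an illustration, not any real engine's code; the centipawn values are the conventional ones, but the position encoding is an assumption made up for the example:

```python
# Conventional centipawn values; the king is 0 because it can never be traded.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

def material_eval(position: dict) -> int:
    """Score a position in centipawns from White's point of view.

    `position` maps squares like "e4" to piece letters;
    uppercase is White, lowercase is Black.
    """
    score = 0
    for square, piece in position.items():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# White is up a knight for nothing: the engine's "understanding" is just +320.
position = {"e1": "K", "e8": "k", "d4": "N", "a2": "P", "a7": "p"}
print(material_eval(position))  # 320
```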

 

DiogenesDue
RedGirlZ wrote:

Maybe autonomous isn't the best word, but you understand what I mean. I'm not talking about AI rights, I'm talking about the fact that an engine doesn't have a brain the same way a human does. Subjectivity is something that exists in humans, not something that exists within robots, in the traditional sense.

I hear you, but I don't think it matters, because human understanding is not the only form of understanding possible.  If the argument is that engines will never be human, or even "think" exactly like humans (in the context of chess or otherwise)...well then, yeah, of course.  

Rocky64
mujakapasa wrote:

I agree with you, but I question whether a chess engine doesn't want to win. Of course it doesn't want to win in the sense a human being does. There are no thoughts like, "When I win against Kasparov, then I'm really strong. I want to be really strong, so therefore I have to win against Kasparov." I agree that for an engine the opponent, the rating, the nice feeling after a win don't matter. But the goal of a chess engine is to achieve a checkmate or, when the situation is hopeless, a draw. If the engine doesn't want to win, then all the calculations, variations and tactics would be meaningless. At least that's what I'm assuming.

That's why I tried to argue from the aspect of different levels of understanding. A chess engine doesn't understand the history of chess, or the feeling while playing chess, or what it means to improve one's own rating by 1000 points, etc. But it somehow understands (even if it's just calculating) that it doesn't make sense to sacrifice a queen just to get a pawn (of course there could be situations where such a sacrifice could work, but I mean in general).

When I add a 1 to a 2, then I get a 3. When a computer adds a 1 to a 2, then it also gets a 3. Why can we say that human beings understand that 1 + 2 equals 3, but a computer is just manipulating and processing information, so it doesn't understand that 1 + 2 equals 3?

To want something, like to understand something, requires a subjective self that a computer program lacks. Stockfish no more wants to win than a rock wants to fall to the ground. When Stockfish checkmates you according to its programming, it's you who interprets the move as a win, i.e. only you understand the meaning of the move, not Stockfish.

Look at the definitions of understand on dictionary.com. "Perceive the meaning", "grasp the idea", and "interpret" all imply a subjective or mental self capable of doing these intangible things. You won't find "calculate" as a synonym of "understand" anywhere. To understand "1+1=2" means more than the ability to calculate 1+1=2, which is why it makes no sense to call your pocket calculator a mathematics genius. 

Vandros57
btickler wrote:

Given today's world and the quest for AI, as well as the possibilities of intelligent life elsewhere in the universe, the definition of "understanding" will have to be modified if such are discovered, though.  Meanwhile, the earth is still the center of the solar system ...

 

Yeah, it's amazing: tactical power based on calculating with numbers is in some cases equal or even superior to strategic thinking...

mujakapasa
Rocky64 wrote:
mujakapasa wrote:

I agree with you, but I question whether a chess engine doesn't want to win. Of course it doesn't want to win in the sense a human being does. There are no thoughts like, "When I win against Kasparov, then I'm really strong. I want to be really strong, so therefore I have to win against Kasparov." I agree that for an engine the opponent, the rating, the nice feeling after a win don't matter. But the goal of a chess engine is to achieve a checkmate or, when the situation is hopeless, a draw. If the engine doesn't want to win, then all the calculations, variations and tactics would be meaningless. At least that's what I'm assuming.

That's why I tried to argue from the aspect of different levels of understanding. A chess engine doesn't understand the history of chess, or the feeling while playing chess, or what it means to improve one's own rating by 1000 points, etc. But it somehow understands (even if it's just calculating) that it doesn't make sense to sacrifice a queen just to get a pawn (of course there could be situations where such a sacrifice could work, but I mean in general).

When I add a 1 to a 2, then I get a 3. When a computer adds a 1 to a 2, then it also gets a 3. Why can we say that human beings understand that 1 + 2 equals 3, but a computer is just manipulating and processing information, so it doesn't understand that 1 + 2 equals 3?

To want something, like to understand something, requires a subjective self that a computer program lacks. Stockfish no more wants to win than a rock wants to fall to the ground. When Stockfish checkmates you according to its programming, it's you who interprets the move as a win, i.e. only you understand the meaning of the move, not Stockfish.

Look at the definitions of understand on dictionary.com. "Perceive the meaning", "grasp the idea", and "interpret" all imply a subjective or mental self capable of doing these intangible things. You won't find "calculate" as a synonym of "understand" anywhere. To understand "1+1=2" means more than the ability to calculate 1+1=2, which is why it makes no sense to call your pocket calculator a mathematics genius. 

I see what you mean with the reference to intentionality, and I agree with that.

And I also agree with the requirements for "understanding" you stated, although they aren't complete.
In other words: to understand, you also need a learning process, and you need to be able to distinguish whether a declarative statement is true or not. And what I see is that a chess engine fulfills these two requirements (but only in playing chess; of course you can't speak with a chess engine).

I think there are different levels of understanding. If you fulfill every requirement, then you are a human being. When you only fulfill 2 of 10 requirements, then you understand, but only on a low level.

To calculate, evaluate, etc. also has to do with learning. When a chess engine has 20 games in its database, it makes mistakes. But if it has 20,000,000 games in its database, it won't make those same mistakes. Of course you could say that it's always the human being who says the chess engine understands or learned or whatever.

But when you say that how the chess engine can learn was all just developed by humans, then I can say: OK, but how you can learn was also just developed by nature/evolution/biology, so that argument would be too easy.

But maybe it's just a persistent intuition of mine that a chess engine on some level understands how to play chess.

LSW33

The Oxford Dictionary gives the following definitions of the verb "to understand":
a) to perceive the intended meaning of
b) to interpret or view (something) in a particular way.

By these definitions it seems clear to me that a computer does not "understand" chess. It does not know the purpose of what it is doing. It is not checkmating the king in order to checkmate the king. It doesn't even realize that it is playing a game, because it doesn't understand the concepts of fun or competition. All it is doing is blindly following rules that have been given to it by a human.

To claim that a computer understands chess is to claim that a ventriloquist's dummy understands what it is saying.


DiogenesDue
LSW33 wrote:

The Oxford Dictionary gives the following definitions of the verb "to understand":
a) to perceive the intended meaning of
b) to interpret or view (something) in a particular way.

By these definitions it seems clear to me that a computer does not "understand" chess. It does not know the purpose of what it is doing. It is not checkmating the king in order to checkmate the king. It doesn't even realize that it is playing a game, because it doesn't understand the concepts of fun or competition. All it is doing is blindly following rules that have been given to it by a human.

To claim that a computer understands chess is to claim that a ventriloquist's dummy understands what it is saying.

Bad analogy.  There is no human being directly performing any action an engine makes.

Again, self-awareness and a "step outside yourself" understanding of "why" an engine plays chess is not germane to the idea of a chess engine understanding chess itself.  That's consciousness, not understanding.  There's no requirement of full consciousness in the concept of understanding, unless you fall into the "but engines don't have an immortal soul" or the "human beings have been set apart from the animals" category of thinking.  

Let me ask all the naysayers a question...if human beings killed themselves off with an extinction level event (likely in the next 500 years), and eons later a new species evolved intelligence, but that intelligence stayed around the level of a Roomba, would your position be that this new species does not "understand" anything, simply because it doesn't think like a human being? 

What if the new species could communicate with a kind of sign language with a vocabulary of 500+ words?  Oh wait, a gorilla can already do that, but they don't "understand" anything either, they are just using a form of mimicry, right? 

What if the new species became so intelligent that they recreated human beings from DNA trapped in amber and put them all on an island, then taught them to speak English.  Then let's say those human beings started talking amongst themselves, saying that their creators did not truly "understand" the world because they could not have a passable conversation in English and could not explain or demonstrate a human flavor of self-awareness.

Would those people be ignorant/backwards for defining a universal concept like understanding to require human perceptions and judgments?  

"Humans use strategy and planning"

Humans calculate and evaluate with a set of valuations which are based on the sum of their life experiences and knowledge (programming).  The *only* real difference is that the human calculations are more complex, but also much fuzzier and imprecise, because we are biological machines with soft squishy chemical CPUs not designed correctly to play chess.  If anything, our self awareness and ability to question why we play chess is detrimental to our ability to perfectly play it and to ever understand it fully, because it's just a distraction.

DiogenesDue
PawnstormPossie wrote:

"Humans calculate and evaluate with a set of valuations which are based on the sum of their life experiences and knowledge (programming).  The *only* real difference is that the human calculations are more complex, but also much fuzzier and imprecise, because we are biological machines with soft squishy chemical CPUs not designed correctly to play chess.  If anything, our self awareness and ability to question why we play chess is detrimental to our ability to perfectly play it and to ever understand it fully, because it's just a distraction."

The human calculations during a game are more complex? Try explaining that one. No one can calculate as much as a chess engine in the same amount of time.

Are you still arguing that engines "understand" chess (or anything)? They don't understand what they do, they just do it...following only the coded instructions when specified to do so.

Are you the type that argues all water is wet and chess is a sport?

I didn't say better, I said more complex.  A human's chess calculation process is full of garbage, like "I'm hungry", "somebody moved in my peripheral vision", "somebody's tapping their foot", "I suck at chess, why did I make that last move?", "I hope he plays Nf6", "that's the TD that ruled against me last time", "is this all that there is?", "I need to remember my nephew's birthday" etc. ad nauseam.  Those flawed and faulty "calculation" errors are indeed so exceedingly complex that, as is being argued here, an engine could not possibly understand them. 

In just the last case, for example, an engine would have to understand the human race, human relationships, the concept of children and birth, the concept of time, the concept of a calendar to break down time into units and track it, the concept of human memory and, more importantly, why humans forget things they want to remember, etc.

The coded "instructions" are coded understanding.  Not really much different than a child's synapses firing and storing what they learn in neuron groupings.

An engine works with the rules of chess, and rules about how to evaluate chess and play it.  It does not matter if it "understands" the human trappings surrounding the logical construct of a chess game.  It doesn't need to understand the concept of a game, or that other games exist, etc.

Human beings don't understand or perceive more than 2% of what goes on around them, even the stuff that their senses can process and interpret.  Are you aware of the Brownian motions of all the particles inside your body?  Do you actually understand yourself if you do not?  The bar being set is human. 

If there were some alien race that came along that saw things at some quantum level, they would consider human beings completely unintelligent beings that react to some basic stimuli but are incapable of understanding themselves or the universe around them.

In the alien's version of this thread they are saying:

"Yes, these chemical sacks do recoil from particle excitation that is too close to them, but they do not understand why or have any awareness of the universe around them, they simply react and do."

I find it funny that human beings, who know so incredibly little, would think they really understand things at all, or say things like "a machine could never reach a human level of understanding" *as if a human level of understanding would be the best goal*. 

It's like an ant belittling a dust mite.

mujakapasa
PawnstormPossie wrote:
mujakapasa wrote:

 

PawnstormPossie wrote:

 

I read...

Perhaps it is because you don't understand how the engines work on a basic level. They don't think, they calculate. They simply don't understand the reasoning, they merely compute values using formulas, tables, etc.

If you insist they understand, fine. We're all entitled to our opinions. No need to argue over that.

Is there a way you could reword what you're asking and have the same meaning?

 

Let me try to reword what I mean. I understand that chess engines only calculate. They evaluate given positions and are calculating and trying to find the best move in the given position. They do not think or believe or love or whatever. In my definition of understanding, to understand something doesn't mean to have a soul or consciousness. But you have to be able to distinguish good/bad moves and use tactics and strategies. What's the difference in the following example? You play against a chess engine and then you play against a human being. Assume that you don't know if you are playing against an engine or a human being. Can you say which one is the engine and which one is the human being? Intuitively most would say a human being understands chess but a chess engine doesn't understand it. Why is that so? What does the human being do differently? I mean, he also calculates and evaluates, so what's the difference? That's what I am talking about. In no way am I saying machines or robots or even chess engines can feel or think in the way humans do. But in my opinion you can say that a chess engine understands how to play chess on some level. Even if it's just that you call the calculating and evaluating some sort of understanding.

 

Can I say which is human or computer? Probably not with good certainty. Unless some magical combination that I realize later was employed by my patzer foe.

What do humans do that the engines don't?

Humans use strategy and planning.

On my level (which is low), I think the engines are much stronger vs. humans due to the sheer calculation capabilities. No matter what I do, it will out-calculate me.

On much higher levels, I think the engines are weaker strategically.

Take any opening you understand and play it vs. an engine. The engine will leave theoretical lines in favor of something that gives an initially higher value based on some calculating. The engine may not be able to calculate everything (depending on time), and after a few moves the evaluation changes for the worse. A GM will likely understand why a particular move is inferior to others and devise a flexible plan to address it accordingly.

The engines can "evaluate" positions in an instant based on more than a thousand pieces of information. Humans, not so much.

So, you can say the engines understand how to play chess, but they don't understand why they should/shouldn't  play particular moves over others.

Makes total sense, thanks.
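PawnstormPossie's point about the evaluation "changing for the worse" a few moves later is essentially the horizon effect of a depth-limited search. Here is a runnable toy sketch of that idea: a hand-built game tree, not a chess engine, and all the scores are invented for the illustration:

```python
# Each node is (static_score, {move_name: child_node}), where the score
# is from the perspective of the side to move at that node.
TREE = (
    0, {
        # Grabbing the pawn looks like +100, but one ply past the
        # horizon the opponent wins the queen.
        "grab pawn": (-100, {"opponent wins queen": (-800, {})}),
        "quiet move": (0, {"opponent replies": (0, {})}),
    },
)

def negamax(node, depth):
    """Best score for the side to move, looking `depth` plies ahead."""
    score, children = node
    if depth == 0 or not children:
        return score  # the horizon: anything deeper is invisible
    return max(-negamax(child, depth - 1) for child in children.values())

def best_move(node, depth):
    _, children = node
    return max(children, key=lambda m: -negamax(children[m], depth - 1))

print(best_move(TREE, 1))  # grab pawn  -- looks best at a shallow horizon
print(best_move(TREE, 2))  # quiet move -- the deeper search sees the refutation
```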

mujakapasa

"Humans calculate and evaluate with a set of valuations which are based on the sum of their life experiences and knowledge (programming).  The *only* real difference is that the human calculations are more complex, but also much fuzzier and imprecise, because we are biological machines with soft squishy chemical CPUs not designed correctly to play chess.  If anything, our self awareness and ability to question why we play chess is detrimental to our ability to perfectly play it and to ever understand it fully, because it's just a distraction."

I agree with and understand what you are saying about understanding, although I don't think at all that human beings are biological machines. See, there is the big problem of the relationship between mental states and brain states, which is still "unsolved" to this day. You can't just say everything is just in the brain and physical. Of course you could, as many physicalists and materialists try to do, but they can't reduce intentionality, subjectivity or consciousness to the physical, because there is an explanatory gap, the so-called hard problem of consciousness. What does this mean? It means you can explain that when three people punch a wall, the pain centre will fire the same way. But what you can't explain is the qualitative aspect, the subjective feeling of the pain. You can't just say the three feel the same and have the same experience while being in the state of pain. And in no way can you explain how and why the heck these qualitative aspects appear. You won't find them in the brain. A human brain simply isn't a "computer" at all. This analogy is too simple, and with it you will run into big trouble explaining things properly^^

I would just argue that for some forms of understanding you won't need intentionality. But it seems that when we say that a chess engine understands chess on some level, then this is only a human belief. I learned through the arguments in this discussion that a chess engine is just doing its stuff, which makes sense. But while doing it, the engine can't say "I make move X to prepare for moves Y and Z, but when my opponent plays move A, then I have to calculate everything again." A human being understands this long-term; a chess engine won't. At least as far as what I learned here^^

Mi_Amigo
mujakapasa wrote:

"Humans calculate and evaluate with a set of valuations which are based on the sum of their life experiences and knowledge (programming).  The *only* real difference is that the human calculations are more complex, but also much fuzzier and imprecise, because we are biological machines with soft squishy chemical CPUs not designed correctly to play chess.  If anything, our self awareness and ability to question why we play chess is detrimental to our ability to perfectly play it and to ever understand it fully, because it's just a distraction."

I agree with and understand what you are saying about understanding, although I don't think at all that human beings are biological machines. See, there is the big problem of the relationship between mental states and brain states, which is still "unsolved" to this day. You can't just say everything is just in the brain and physical. Of course you could, as many physicalists and materialists try to do, but they can't reduce intentionality, subjectivity or consciousness to the physical, because there is an explanatory gap, the so-called hard problem of consciousness. What does this mean? It means you can explain that when three people punch a wall, the pain centre will fire the same way. But what you can't explain is the qualitative aspect, the subjective feeling of the pain. You can't just say the three feel the same and have the same experience while being in the state of pain. And in no way can you explain how and why the heck these qualitative aspects appear. You won't find them in the brain. A human brain simply isn't a "computer" at all. This analogy is too simple, and with it you will run into big trouble explaining things properly^^

I would just argue that for some forms of understanding you won't need intentionality. But it seems that when we say that a chess engine understands chess on some level, then this is only a human belief. I learned through the arguments in this discussion that a chess engine is just doing its stuff, which makes sense. But while doing it, the engine can't say "I make move X to prepare for moves Y and Z, but when my opponent plays move A, then I have to calculate everything again." A human being understands this long-term; a chess engine won't. At least as far as what I learned here^^

With all that knowledge of humans, I can see why there's a + sign next to your name (doctor).

Prometheus_Fuschs
willitrhyme wrote:

Without an ego and everything that comes along with it, like pride, fear, a sense of beauty, etc., no machine ever truly "plays" any game, as they're just the soulless performers of a mental task given to them.

You are assuming a priori that you need to have emotions or be a human to understand.

DiogenesDue
mujakapasa wrote:

"Humans calculate and evaluate with a set of valuations which are based on the sum of their life experiences and knowledge (programming).  The *only* real difference is that the human calculations are more complex, but also much fuzzier and imprecise, because we are biological machines with soft squishy chemical CPUs not designed correctly to play chess.  If anything, our self awareness and ability to question why we play chess is detrimental to our ability to perfectly play it and to ever understand it fully, because it's just a distraction."

I agree with and understand what you are saying about understanding, although I don't think at all that human beings are biological machines. See, there is the big problem of the relationship between mental states and brain states, which is still "unsolved" to this day. You can't just say everything is just in the brain and physical. Of course you could, as many physicalists and materialists try to do, but they can't reduce intentionality, subjectivity or consciousness to the physical, because there is an explanatory gap, the so-called hard problem of consciousness. What does this mean? It means you can explain that when three people punch a wall, the pain centre will fire the same way. But what you can't explain is the qualitative aspect, the subjective feeling of the pain. You can't just say the three feel the same and have the same experience while being in the state of pain. And in no way can you explain how and why the heck these qualitative aspects appear. You won't find them in the brain. A human brain simply isn't a "computer" at all. This analogy is too simple, and with it you will run into big trouble explaining things properly^^

I would just argue that for some forms of understanding you won't need intentionality. But it seems that when we say that a chess engine understands chess on some level, then this is only a human belief. I learned through the arguments in this discussion that a chess engine is just doing its stuff, which makes sense. But while doing it, the engine can't say "I make move X to prepare for moves Y and Z, but when my opponent plays move A, then I have to calculate everything again." A human being understands this long-term; a chess engine won't. At least as far as what I learned here^^

I would say that the gap between the brain as science now understands it and the notion of something beyond the physical that you described is more clearly an example of human belief vis-à-vis religion.

The only reason there's a gap in understanding there is that we lack the tools, or even the capacity if we had said tools, to understand and hold in our heads at one time the myriad factors that explain why 3 people experience pain differently when they punch a wall: DNA markers, physical development issues due to environment/diet, emotional trauma and subconscious fears based on various events, etc.

If you could create 2 absolutely identical human beings in 2 separate pocket universes, then have them experience the exact same lives in every way...they would feel pain the same way when they punch that wall.  There's no evidence for, or any way to prove, any metaphysical characteristics that differentiate them.

Your last paragraph is mistaken.  Traditional engines will not learn or remember unless they are programmed with new valuations, but the new engines (Leela, AlphaZero) will modify their own valuations, "remembering" without recalculation.  That's why the new engines play as well as, if not better than, the traditional engines even though they evaluate far fewer positions while deciding their next move.  They also play better because they built those valuations from the ground up, without being programmed with the inherent human bias in the initial valuations that traditional engines carry.
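To make "modifying their own valuations" concrete, here is a toy sketch of one learning step: after a finished game, the evaluation weights are nudged toward the actual outcome, so the knowledge ends up stored in the weights rather than being recomputed every time. This is a bare-bones linear version of the idea; real Leela/AlphaZero use deep networks and far more machinery, and the features and learning rate below are invented for the example:

```python
def update_weights(weights, features, outcome, lr=0.01):
    """One self-play learning step.

    weights  -- current evaluation weights, one per feature
    features -- feature values of a position from a finished game
    outcome  -- final result from that side's view (+1 win, 0 draw, -1 loss)
    """
    prediction = sum(w * f for w, f in zip(weights, features))
    error = outcome - prediction
    # Move each weight a little in the direction that reduces the error.
    return [w + lr * error * f for w, f in zip(weights, features)]

weights = [0.0, 0.0]  # e.g. material balance, mobility (illustrative features)
weights = update_weights(weights, features=[1.0, 0.5], outcome=+1)
print(weights)  # [0.01, 0.005] -- shifted toward predicting the win
```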

Mustafa_512

Without reading all these details: this is just a philosophical question.

Simply, understanding is a special behavior of creatures who have a "realization" feature, and there is no way in which machines can gain this behavior, because, simply... it is a machine.

A machine is neither intelligent nor stupid; actually, it has no intelligence rating.

Let me give you an example: Deep Blue vs. Kasparov in 1997. Deep Blue made a move that prompted Kasparov to accuse the Deep Blue "team" of cheating.

How did Kasparov conclude that Deep Blue cheated? Because machines think in a mathematical way; they have no creativity, they just follow thousands of lines of code... maybe a programming error can cause one of the rarest events and produce a brilliant move (it is very rare, but as long as the probability is not 0, it can happen). And this move was far away from the way machines are programmed, which made Kasparov accuse the Deep Blue team of cheating, because, simply, this move is related to creativity, which is far beyond classical mathematical moves.

Machines are machines; they are just programming code that does nothing other than translating that code into 1's and 0's and performing only the + (plus) operation... No creativity, no intelligence, no understanding, no realization... all these features are for living creatures only.

I, Robot is just a movie; it will never come true.

DiogenesDue
mustafa_diaa wrote:

Without reading all these details: this is just a philosophical question.

Simply, understanding is a special behavior of creatures who have a "realization" feature, and there is no way in which machines can gain this behavior, because, simply... it is a machine.

A machine is neither intelligent nor stupid; actually, it has no intelligence rating.

Let me give you an example: Deep Blue vs. Kasparov in 1997. Deep Blue made a move that prompted Kasparov to accuse the Deep Blue "team" of cheating.

How did Kasparov conclude that Deep Blue cheated? Because machines think in a mathematical way; they have no creativity, they just follow thousands of lines of code... maybe a programming error can cause one of the rarest events and produce a brilliant move (it is very rare, but as long as the probability is not 0, it can happen). And this move was far away from the way machines are programmed, which made Kasparov accuse the Deep Blue team of cheating, because, simply, this move is related to creativity, which is far beyond classical mathematical moves.

Machines are machines; they are just programming code that does nothing other than translating that code into 1's and 0's and performing only the + (plus) operation... No creativity, no intelligence, no understanding, no realization... all these features are for living creatures only.

I, Robot is just a movie; it will never come true.

This would be a better example if, you know, Deep Blue's team had actually cheated and had the result overturned by evidence.  But they didn't.

The point being argued is that the "realization feature" you mentioned is *not* actually the key defining characteristic of "understanding"...that's just a human affectation (to think so).  It *is* perhaps the key characteristic in defining human understanding: that the discrete knowledge is fit into an overall greater perspective.

"iRobot is just a movie, it will never be true"

Well, you are right about that...if/when AI has its breakthrough (no time soon), the intelligence that emerges will be nothing like a human (that's also a human affectation, that an AI will strive to be human-ish, but better...it will completely bypass human as irrelevant).

najdorf96

Indeed. Sentience, to me, is what truly separates human chess understanding and an engine's perceived chess understanding. It is also the main driver of why many of us feel they don't have an understanding of chess, in our definition. Because we view chess as a game; entertainment, sport, life. We are self-aware of this fact, whereas they are not. Of course not, because their level of "understanding" of chess is what we have instilled in them. We can quantify this by comparative tests among the various engines out right now, in their terms, much like IQ tests: which one or more has a more adequate "understanding" in certain positions than the others, which could be "human-like". In which case, their programmers would work diligently for X amount of time to level up their "understanding" of, say, positional themes: raising their horizon, increasing their heuristics more and more to simulate, at least to the nearest fraction, a human player's understanding of positional themes. We can say that, in that event, it's getting closer to our level of understanding. But being self-aware? I don't think so.

As we are growing technologically, we are slowly building things to take over cognitive repetitive chores or jobs...I ask you, how are chess engines any different in helping you analyze lines and evaluate than a calculator in working out advanced calculus, or powerful computers figuring out weather patterns or predicting storm systems? It plays chess very, very well vs. humans. Do they understand chess by our definition? No. Could we possibly quantify their perceived level of understanding in relative terms to our current state of chess engine development? I believe yes. Could we possibly quantify their perceived level of understanding in terms of said potential in future tech advancement relative to our current level of chess understanding? Maybe when they become self-aware. Sentient. Peace guys😎

Caesar49bc

Computers only understand strings of 1's and 0's. Everything else is an interpretation of data they obtain.

Artificial Intelligence is just a phrase. A made-up phrase that programmers and the public understand, but it's really just a phrase for a computer doing something "human-like" (or animal-like, for pet robots).

But in the end, they're just programs and machines with software to interpret new data they can get from sensors or human input or whatever else constitutes data they can process.

Don't get me wrong, AlphaZero and Leela Chess Zero are both phenomenal programs that were made in order to be able to take the basic rules of chess and selectively remember what constitutes good moves or bad moves, with a healthy amount of coding dedicated to processing positional considerations and to being able to apply long-term consequences to moves.

On the flip side, no human plays millions of games to become a super GM or the World Chess Champion, so it's more of an exercise in how to distill millions of self-played engine games into a data set that is usable for future games.

Still, that is the trend for future chess engines. It will be interesting to see if at some point in the future all the top engines are based on that type of programming. In any event, for the near future, single-PC adaptive-learning chess engines will almost certainly have to start with some sort of base knowledge of decent chess, or else people would have to dedicate their machine to weeks of learning chess from scratch.
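The "distill millions of self-played games into a data set" step is simple to sketch in outline. The toy below is runnable but deliberately content-free: the game is a random stand-in and every name in it is invented for the illustration; only the data flow (play, label each position with the final result, collect the pairs) reflects how self-play training data is assembled:

```python
import random

def play_one_game():
    """Stand-in for a self-played game: returns the positions visited
    and the final result (+1 win, 0 draw, -1 loss)."""
    positions = [f"pos{i}" for i in range(random.randint(3, 10))]
    result = random.choice([+1, 0, -1])
    return positions, result

def self_play_dataset(num_games):
    """Label every position of every game with that game's result."""
    dataset = []
    for _ in range(num_games):
        positions, result = play_one_game()
        dataset.extend((pos, result) for pos in positions)
    return dataset

data = self_play_dataset(1000)
print(len(data), data[0])  # thousands of labeled (position, result) examples
```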

tntsd

We humans understand things differently, I guess. Even the top AI experts agree that the human brain is more complex than any artificial neural network.

DiogenesDue
Caesar49bc wrote:

Computers only understand strings of 1's and 0's. Everything else is an interpretation of data they obtain.

Artificial Intelligence is just a phrase. A made-up phrase that programmers and the public understand, but it's really just a phrase for a computer doing something "human-like" (or animal-like, for pet robots).

But in the end, they're just programs and machines with software to interpret new data they can get from sensors or human input or whatever else constitutes data they can process.

Don't get me wrong, AlphaZero and Leela Chess Zero are both phenomenal programs that were made in order to be able to take the basic rules of chess and selectively remember what constitutes good moves or bad moves, with a healthy amount of coding dedicated to processing positional considerations and to being able to apply long-term consequences to moves.

On the flip side, no human plays millions of games to become a super GM or the World Chess Champion, so it's more of an exercise in how to distill millions of self-played engine games into a data set that is usable for future games.

Still, that is the trend for future chess engines. It will be interesting to see if at some point in the future all the top engines are based on that type of programming. In any event, for the near future, single-PC adaptive-learning chess engines will almost certainly have to start with some sort of base knowledge of decent chess, or else people would have to dedicate their machine to weeks of learning chess from scratch.

Artificial intelligence refers to software/hardware actually modifying its own parameters and code based on its "environment".  Chess engines that use machine learning are not truly "Artificial Intelligence" by the definition of Strong AI, but they do "learn" and adapt their play on their own, without requiring new programming.

Saying that artificial intelligence is equivalent to 'a computer doing something "human like"' is like saying a robotic zombie arm used as a prop on Walking Dead is an example of AI.

Also, computers don't actually "understand" strings of 1s and 0s any more or less than they understand any data they interpret.  Does a 6502 chip "understand" what is happening when you load a byte value into a register or the accumulator and process the next instruction?  Not by any human standard.  Machine learning engines do "understand" chess, though...in the sense that they understand their environment (the rules of chess, including all limitations and win/lose/draw conditions), can play chess, can evaluate their success, and play chess better next time without any further coding or loading of external data or valuations.

So, you can decide whether computers do or do not understand chess in this context by the definition of "understand" you choose, but if the computer does understand 1s and 0s, then it also "understands" any higher-level functions built on the base processor's instruction set by extension.  If you think computers *don't* understand chess...then they also don't understand 1s and 0s.
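For readers who haven't met a 6502: "loading a byte into the accumulator" really is this mechanical. Below is a tiny runnable sketch of the fetch-decode-execute step for one real instruction (LDA immediate, opcode 0xA9); it is an illustration of the idea, not a real emulator:

```python
memory = [0xA9, 0x2A]   # a two-byte program: LDA #$2A
pc = 0                  # program counter
a = 0                   # accumulator register

opcode = memory[pc]     # fetch the next instruction byte
pc += 1
if opcode == 0xA9:      # decode: LDA immediate
    a = memory[pc]      # execute: copy the operand byte into the accumulator
    pc += 1

print(hex(a))  # 0x2a -- the byte was "processed" with no notion of meaning
```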