
Stockfish dethroned

  • #141
    Optimissed wrote:

    <<But as I said, there is no objective answer to the question "what is AI?". There are only answers to questions like "How do specialists use the term AI?". Another question with an objective answer would be "How does Optimissed use the term AI?", but you will forgive me if I don't think it is as important.>>

    I hope you'll excuse me for being sceptical, but until you and the "experts" understand the concept of intelligence and how it works in animals and humans, what is this artificial version of intelligence supposed to be? An automatic vacuum cleaner, maybe? A driverless car?

    I suppose I could ask you to define A.I. and to state how your experts define it, but I don't think I'd get an answer out of you. Probably a Google link or something, but nothing that indicates you can think for yourself. If you won't define it, then what do you actually mean by it, in descriptive terms? If you won't tell us what you mean by it, then what are we supposed to make of what you post? Perhaps you don't know what a conflict of interests is. No problem; you can continue to think as you do. Please, however, don't force your weak understanding on others; especially on me!

     

  • #142

    This is an interesting discussion, but @Optimissed's inability to quote the post to which the reply is directed, and/or to post legibly thereafter, rather spoils it.

  • #143
    Optimissed wrote:
    I suppose I could ask you to define A.I. 

     

    Consider it done.

    Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.

    Examples that come from DeepMind include agents that play video games by observing the screen and learning how to achieve certain objectives by experimenting with the controls, as well as its successes in the classic board games.

    There are of course two phases of operation of the AI in these examples. The first is the self-learning phase, where the agent interacts with, explores and learns to understand the entire environment defined by the rules of chess (i.e. an environment which is a chessboard in which two players act in opposition). The objective of this phase of operation of the AlphaZero AI is "become a good chess player".

    In the second phase, the environment consists of the specific game states that result from the AI's moves and the opponent's moves, and the objective is "get a good result in the game".
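
    For anyone who wants to see that observe-act-goal loop in code, here is a minimal Python sketch. None of it is AlphaZero's actual code: the environment, the agent, the reward scheme and every name in it are toy inventions, purely to illustrate the agent/environment pattern and the two phases described above.

    import random

    class CoinFlipEnv:
        """A toy stand-in for 'the rules of the game': the agent observes,
        acts, and gets a reward telling it how well the goal was met."""

        def reset(self):
            # A biased coin, so there is genuinely something to learn.
            self.hidden = "heads" if random.random() < 0.8 else "tails"
            return "a coin has been flipped"          # the observation

        def step(self, action):
            return 1.0 if action == self.hidden else 0.0   # the reward

    class GuessingAgent:
        """A trivially simple agent: it tracks how well each action has
        worked and prefers the better one, exploring now and then."""

        def __init__(self):
            self.value = {"heads": 0.0, "tails": 0.0}

        def act(self, observation):
            if random.random() < 0.1:                      # occasional exploration
                return random.choice(["heads", "tails"])
            return max(self.value, key=self.value.get)     # otherwise exploit

        def learn(self, action, reward):
            # Nudge the stored estimate toward the reward just received.
            self.value[action] += 0.1 * (reward - self.value[action])

    # Phase 1 ("become a good player"): learn by interacting with the environment.
    env, agent = CoinFlipEnv(), GuessingAgent()
    for episode in range(1000):
        observation = env.reset()
        action = agent.act(observation)
        reward = env.step(action)
        agent.learn(action, reward)

    # Phase 2 ("get a good result"): act on what was learned, no further updates.
    print("learned values:", agent.value, "-> now plays", agent.act(env.reset()))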

  • #144

    Intelligence means understanding (from the Latin intellego, "to understand").

    What can an inanimate thing, a computer, understand?

  • #145
    After Adams-Carlsen at the London werewolves, there is not much point looking at GM games anymore, but AlphaZero, using far more processing power versus Fishstock under the circumstances, was like a prime Fischer versus a drunk Nigel Short.
  • #146

    <<Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.>>

    Thanks for the clarification, Elroch. I appreciate it. So that means that the study of A.I. is the study of agents etc.

    Intelligence is, loosely, interacting with the environment to achieve goals. However, it begs the question of where the volition to do this comes from. Sure, we are motivated to interact in a way that promotes our health, longevity, reproduction prospects, wealth, maybe power, etc. To some extent, we don't have much choice, since not to do that would probably mean we wouldn't be here, but I'm not one of those people who believe we can't make choices. I hold that making choices or decisions is the result of a brain structure which isolates decision-making from the environment. I call it the Zener Effect, for want of something better. Others may call it something different. I'm sure you know what a Zener diode does. Anyway, I'd be interested to know where the computer scientists believe the chess engine gets its motivation from. From itself, from God, or from a sequence of interrelated instructions called a program(me)?

    What I'm getting at is that so far, I don't see anything to distinguish this chess programme from a driverless car. At least, nothing qualitative. All it is, is a more algorithmic and less brute force programme which has a fully interactive database of positions.

  • #147
    Optimissed wrote:

    <<Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.>>

    Thanks for the clarification, Elroch. I appreciate it. So that means that the study of A.I. is the study of agents etc.

    Intelligence is, loosely, interacting with the environment to achieve goals. However, it begs the question of where the volition to do this comes from. Sure, we are motivated to interact in a way that promotes our health, longevity, reproduction prospects, wealth, maybe power, etc. To some extent, we don't have much choice, since not to do that would probably mean we wouldn't be here, but I'm not one of those people who believe we can't make choices. I hold that making choices or decisions is the result of a brain structure which isolates decision-making from the environment. I call it the Zener Effect, for want of something better. Others may call it something different. I'm sure you know what a Zener diode does. Anyway, I'd be interested to know where the computer scientists believe the chess engine gets its motivation from. From itself, from God, or from a sequence of interrelated instructions called a program(me)?

    What I'm getting at is that so far, I don't see anything to distinguish this chess programme from a driverless car. At least, nothing qualitative. All it is, is a more algorithmic and less brute force programme which has a fully interactive database of positions.

    I think you are in thermal runaway.

    Sorry, inside joke about Zener diodes...

  • #148

    It's worth pointing out that AlphaZero does not have a database of positions. It has a convoluted way of calculating a number when presented with a position (and a list of other numbers for the legal moves). The origin of this convoluted function is its reaction to experience of about 28 billion positions and what they led to.
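
    To make "a convoluted way of calculating a number" concrete, here is a toy Python/NumPy sketch. The encoding (12 planes of 8x8), the layer sizes and the random weights are all illustrative assumptions rather than AlphaZero's real architecture; the point is only that the "function" is a pile of arithmetic on learned weights, with no stored positions anywhere.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy encoding: 12 planes of 8x8 (one plane per piece type and colour).
    N_INPUT, N_HIDDEN, N_MOVES = 12 * 8 * 8, 64, 20

    # The whole "function" is just these weights. In AlphaZero they are tuned by
    # self-play on billions of positions; here they are random, which is enough
    # to show that evaluation is computation, not database lookup.
    W_hidden = rng.standard_normal((N_HIDDEN, N_INPUT)) * 0.1
    W_value = rng.standard_normal(N_HIDDEN) * 0.1
    W_policy = rng.standard_normal((N_MOVES, N_HIDDEN)) * 0.1

    def evaluate(position_planes):
        """Map an encoded position to (value, a score per candidate move)."""
        x = position_planes.reshape(-1)              # flatten the 12x8x8 planes
        h = np.tanh(W_hidden @ x)                    # intermediate features
        value = float(np.tanh(W_value @ h))          # one number: how good is this?
        logits = W_policy @ h
        move_scores = np.exp(logits - logits.max())
        move_scores /= move_scores.sum()             # one number per move
        return value, move_scores

    # A made-up "position": random occupancy planes, purely for illustration.
    position = rng.integers(0, 2, size=(12, 8, 8)).astype(float)
    value, move_scores = evaluate(position)
    print(f"value {value:+.3f}, preferred move index {int(move_scores.argmax())}")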

  • #149
    Elroch wrote:
    Optimissed wrote:
    I suppose I could ask you to define A.I. 

     

    Consider it done.

    Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.

    ...

    That's actually a pretty poor, not very useful definition.  AI is not the study of something.... We may study AI, but it is not in and of itself the study of...

     

    Computerworld, quoting John McCarthy, who coined the term in 1956, states:

    Simply put, artificial intelligence is a sub-field of computer science. Its goal is to enable the development of computers that are able to do things normally done by people -- in particular, things associated with people acting intelligently.

     

    The article then goes into great depth to discuss the different types of AI that are currently being studied. 

     

    The full article on AI is well worth a read. It is written for the lay person but has not dumbed down the concept or obscured it in verbose jungles that, when untangled, mean nothing.

    Here's a segment of the article. (For some reason, I'm having difficulty copying and pasting the link here.)

    Strong AI, weak AI and everything in between

    It turns out that people have very different goals with regard to building AI systems, and they tend to fall into three camps, based on how close the machines they are building line up with how people work.

    For some, the goal is to build systems that think exactly the same way that people do. Others just want to get the job done and don’t care if the computation has anything to do with human thought. And some are in-between, using human reasoning as a model that can inform and inspire but not as the final target for imitation.

    The work aimed at genuinely simulating human reasoning tends to be called “strong AI,” in that any result can be used to not only build systems that think but also to explain how humans think as well. However, we have yet to see a real model of strong AI or systems that are actual simulations of human cognition, as this is a very difficult problem to solve. When that time comes, the researchers involved will certainly pop some champagne, toast the future and call it a day.

    The work in the second camp, aimed at just getting systems to work, is usually called “weak AI” in that while we might be able to build systems that can behave like humans, the results will tell us nothing about how humans think. One of the prime examples of this is IBM’s Deep Blue, a system that was a master chess player, but certainly did not play in the same way that humans do.

    Somewhere in the middle of strong and weak AI is a third camp (the “in-between”): systems that are informed or inspired by human reasoning. This tends to be where most of the more powerful work is happening today. These systems use human reasoning as a guide, but they are not driven by the goal to perfectly model it.

     

  • #150

    The article can be found by searching Google:

    "computer world: artificial Intelligence definition"https://www.computerworld.com/article/2906336/.../what-is-artificial-intelligence.html

     

     

  • #151
    Elroch wrote:

    It's worth pointing out that AlphaZero does not have a database of positions. It has a convoluted way of calculating a number when presented with a position (and a list of other numbers for the legal moves). The origin of this convoluted function is its reaction to experience of about 28 billion positions and what they led to.

    Then it is all about memory rather than AI.

  • #152
    Lyudmil_Tsvetkov wrote:
    Elroch wrote:

    It's worth pointing out that AlphaZero does not have a database of positions. It has a convoluted way of calculating a number when presented with a position (and a list of other numbers for the legal moves). The origin of this convoluted function is its reaction to experience of about 28 billion positions and what they led to.

    Then it is all about memory rather than AI.

    Ermm... no. I believe he's saying that AlphaZero's calculations are much like, say, a human being working through tactics trainer positions. AlphaZero will not traverse thousands of games each time to find one that matches... it has developed an "eye" for positions and will apply its uniquely developed valuations of each factor in the position without the need for database lookups of previous game moves.

    In that sense, AlphaZero will be like a unique human player.  It played itself for millions of games to "figure out" chess.  If it had played Stockfish for millions of games instead, its understanding of chess would be uniquely different, and if it only played humans worldwide online or something, its understanding of chess would be unique in a different way.

    It is the right way to go, leaving behind all human history and the faulty assumptions we have concerning best play, which are also built into our engines.  It's just too bad they jumped the gun on announcing dominance over the best engine under dubious circumstances.
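
    A tiny Python sketch of that contrast, with made-up names (opening_book, learned_evaluate) and an arbitrary toy score standing in for the trained network: a database engine can only answer for positions it has stored, while a learned evaluator scores whatever position it is handed, on the spot.

    opening_book = {"start position": "e2e4"}        # lookup: covers only stored keys

    def learned_evaluate(position, move):
        # Stand-in for a trained network: an arbitrary toy score.
        return (hash((position, move)) % 1000) / 1000.0

    def best_move(position, legal_moves):
        # No stored games are consulted: every legal move is scored as needed.
        return max(legal_moves, key=lambda m: learned_evaluate(position, m))

    print(opening_book.get("some novel position", "book has no answer"))
    print(best_move("some novel position", ["e2e4", "d2d4", "c2c4"]))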

  • #153

    Intuitively, it is a good idea to imagine it like this. It has a big neural network which receives the current board position. Some of the nodes encapsulate ideas like how much material there is, who controls territory, what is attacking what, what skewers what, and so on through all sorts of configurations, as well as positional factors of a zillion different types.

    These abstract concepts appear in the network at deeper levels as it trains because they happen to be useful for working out what is a good move. You can think of it as a bit like evolution (only it is a bit more deliberate in the way it adjusts the parameters to try to make the evaluation better).

    As an analogy from another area, neural networks are used to classify photos. The first level of the network may identify simple things like whether adjacent pixels are similar. A higher one may identify edges. A still higher one may identify face shaped areas, another something that might be an eye. A higher node still may provide a probability that a face is your face, out of a list of faces in photographs on which it trained.

    AlphaZero has to do the same for chess. In some ways this is easier, in others harder, but the tool in both cases is a large, deep neural network made of very similar dumb units that learn to be useful.
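
    If it helps, here is what such a stack of similar, dumb units looks like in code. This uses PyTorch purely for illustration (the thread does not say what AlphaZero is written in), the 12-plane 8x8 board encoding is an assumed simplification, and the network is untrained, so its output means nothing; the point is the layering, where each layer can only build on the features of the layer below it.

    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Conv2d(12, 32, kernel_size=3, padding=1), nn.ReLU(),  # local piece patterns
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # combinations of those
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),  # wider-board structure
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 1),                                # one evaluation number
        nn.Tanh(),
    )

    board = torch.randn(1, 12, 8, 8)   # a random stand-in for an encoded position
    print(net(board))                  # meaningless until the weights are trained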
