Stockfish dethroned

Avatar of Optimissed
Elroch wrote:
Optimissed wrote:

This is not A.I. by the way. People tend to use that term without knowing what it means. So far, A.I. is a pipe dream. This thing just constructed a database of positions.

Most people who are expert on AI would disagree with the first three sentences, but you could argue this is a matter of semantics. The way this AI learnt chess is currently a hot topic to those who study how the brain works, which is a good indication of why it deserves the term.

The last sentence is simply wrong and could hardly be further from the truth. AlphaZero only stores positions to the minimum degree necessary to play chess: it doesn't even have an opening book (or any tablebase). All of the functionality is incorporated in the parameters of the neural networks, which encapsulate the concepts it learns about chess and how they relate to each other. Very, very different to positions (which are like what it observes and "imagines" as it analyses lines that look appealing to its networks, somewhere about halfway between the way a conventional computer does this and a human does it).>>>

Playing you at your own game of semantics, as you did with "impossible", you're misusing the word "expert". It means more than "practitioner" and the "most people" you're citing are interested parties .... which means they have a personal and probably financial interest in the hypothetical idea of A.I. or artificial intelligence. Yet we can only define intelligence as a comparison with human or biological intelligence rather than with machines constructed to simulate it. Turing's definition of A.I. was quite wrong, for instance. He would have been correct if he'd been discussing S.I. or simulated intelligence but artificial intelligence won't happen until the actual mechanisms of the brain are understood. They aren't at the moment: nowhere close, in fact. So all we have is a database that looks as if it's calculating independently of human input, where in fact the simulation is achieved by comparison, which is exactly the sort of thing that machines can do. If a database of positions is constructed, then it's necessary to identify each position that occurs and follow the tree towards the most favourable outcome. That's putting it very simply but of course, the Devil is in the detail.

So you're right in that the processes are more algorithmic and less brute force than previously, but wrong in that this is A.I.

 

Avatar of mcris

Someone (sammy_boi) is demonstrating his "manners" together with breaking the site's TOS. But no worries, he will not have his account closed...

Avatar of Elroch
Optimissed wrote:
Elroch wrote:
Optimissed wrote:

This is not A.I. by the way. People tend to use that term without knowing what it means. So far, A.I. is a pipe dream. This thing just constructed a database of positions.

Most people who are expert on AI would disagree with the first three sentences, but you could argue this is a matter of semantics. The way this AI learnt chess is currently a hot topic to those who study how the brain works, which is a good indication of why it deserves the term.

The last sentence is simply wrong and could hardly be further from the truth. AlphaZero only stores positions to the minimum degree necessary to play chess: it doesn't even have an opening book (or any tablebase). All of the functionality is incorporated in the parameters of the neural networks, which encapsulate the concepts it learns about chess and how they relate to each other. Very, very different to positions (which are like what it observes and "imagines" as it analyses lines that look appealing to its networks, somewhere about halfway between the way a conventional computer does this and a human does it).>>>

Playing you at your own game of semantics, as you did with "impossible", you're misusing the word "expert". It means more than "practitioner" and the "most people" you're citing are interested parties .... which means they have a personal and probably financial interest in the hypothetical idea of A.I. or artificial intelligence. Yet we can only define intelligence as a comparison with human or biological intelligence rather than with machines constructed to simulate it. Turing's definition of A.I. was quite wrong, for instance. He would have been correct if he'd been discussing S.I. or simulated intelligence but artificial intelligence won't happen until the actual mechanisms of the brain are understood. They aren't at the moment: nowhere close, in fact. So all we have is a database that looks as if it's calculating independently of human input, where in fact the simulation is achieved by comparison, which is exactly the sort of thing that machines can do. If a database of positions is constructed, then it's necessary to identify each position that occurs and follow the tree towards the most favourable outcome. That's putting it very simply but of course, the Devil is in the detail.

So you're right in that the processes are more algorithmic and less brute force than previously, but wrong in that this is A.I.

 

I will continue to use the definitions of those who work in the field of AI. It is normal practice to accept the definitions of specialists. There is subjectivity in this, but what matters is that people use the same language.

There is a good reason for using a weaker definition: it acknowledges that intelligence is built in many steps, not a single instantaneous step.

But as I said, there is no objective answer to the question "what is AI?". There are only answers to questions like "How do specialists use the term AI?". Another question with an objective answer would be "How does Optimissed use the term AI?", but you will forgive me if I don't think it is as important.

Avatar of Elroch
hairhorn wrote:

Spoken like someone who's never seen the videos of crows making tools.

Good example.

[Also let's try to keep the posts all as polite as this one. Disagreement does not require pejoratives].

Avatar of Winnie_Pooh
sammy_boi wrote:
btickler wrote:

 5000 is impossible right now while the best engines are 3400.

Actually even with a K factor of 1, the summation for rating gain (as the gap between two players goes from zero to infinity) is divergent, which is to say if an engine that never loses played infinite games, its rating would be infinity.

So it's not impossible, but sure, it's really unlikely we'll be able to make an engine good enough to reach 5000 any time soon (but it wouldn't require 4600-level opposition to get there, it would just need to be that much stronger itself).

 

Agree - this is the key problem with increasing a rating indefinitely. Even the strongest engine can't boost its rating forever when there are no opponents with high enough ratings to beat.

I wonder: even if an engine won every game, wouldn't the ratings of its opponents also drop continuously (because they lose every game), so that the rating increase approaches an upper limit asymptotically - just like an equilibrium? It may be possible to calculate that limit (if it exists), but unfortunately I lack the maths to do it - maybe AlphaZero could puzzle out the answer ...

Avatar of Elroch

Firstly, bear in mind that the more important concept is the true underlying rating of a chess player: this is related to its expected statistical performance against a pool of rated players. This rating is estimated in a certain way by the Elo system, which forms a sort of average of a large number of results (plus some arbitrary results that explain the initial rating and which can be thought of as a prior assumption). The reason for the initial rating is to avoid excessively confident ratings when the sample is small, but better would be to use a full Bayesian estimate with a quantifiable uncertainty.

Anyhow, you seem to have misunderstood. It is not that a player "can't boost its rating forever when there are not sufficient opponents with high enough rating to beat." Rather, its rating merely rises very slowly as it wins games. There are issues of practicality: huge numbers of games are needed when the rating difference is very large, but this is just a practical problem.
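
[The "rises very slowly" point can be made concrete with a small simulation of the standard Elo update. This is an illustrative sketch, not anyone's actual rating implementation; the K-factor of 10 and the fixed 2800-rated opposition are assumptions chosen for the example.]

```python
# Sketch: how slowly an Elo rating climbs when a player wins every game
# against a fixed pool. Assumes the standard Elo update with K = 10 and
# opponents all rated 2800 (illustrative numbers only).

def expected_score(rating: float, opponent: float) -> float:
    """Standard Elo expected score of `rating` against `opponent`."""
    return 1.0 / (1.0 + 10 ** ((opponent - rating) / 400))

def simulate_wins(start: float, opponent: float, k: float, games: int) -> float:
    """Rating after winning `games` consecutive games against `opponent`."""
    rating = start
    for _ in range(games):
        # The winner gains K * (1 - expected score); this gain shrinks as
        # the gap grows, but never reaches zero, so the rating keeps rising.
        rating += k * (1.0 - expected_score(rating, opponent))
    return rating

r1 = simulate_wins(3400, 2800, 10, 1_000)
r2 = simulate_wins(3400, 2800, 10, 100_000)
print(round(r1), round(r2))
```

[Running this shows the rating still climbing after 100,000 straight wins, but at a glacial pace: the per-game gain decays exponentially with the rating gap, so the rating grows roughly logarithmically in the number of games - unbounded, as sammy_boi said, but very slow.]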

Avatar of Winnie_Pooh
[COMMENT DELETED]
Avatar of Winnie_Pooh
Winnie_Pooh wrote:

I understand that the bigger the difference between the "super engine" and the rest of the pool gets, the slower the progress in rating becomes. Therefore I wonder if there is a mathematical limit even as the number of games played goes to infinity.

Or maybe the total number of rating points summed over all members of a pool can't increase beyond a certain point, because every time someone gains rating points someone else has to lose points as well.
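
[The second guess can be checked directly: with the standard update and a single shared K-factor, Elo is exactly zero-sum, so the pool's total rating is conserved (mixed K-factors, or players entering and leaving the pool, break this). A minimal check, with K = 10 assumed for illustration:]

```python
# Minimal check that standard Elo updates are zero-sum when both players
# share the same K-factor. K = 10 and the ratings are illustrative.

def expected_score(ra: float, rb: float) -> float:
    """Expected score of player A (rating ra) against player B (rating rb)."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400))

def update(ra: float, rb: float, score_a: float, k: float = 10.0):
    """Rating changes (delta_a, delta_b) for one game; score_a is A's result."""
    ea = expected_score(ra, rb)
    eb = 1.0 - ea                        # expected scores sum to 1
    delta_a = k * (score_a - ea)
    delta_b = k * ((1.0 - score_a) - eb)
    return delta_a, delta_b

da, db = update(3400, 2800, 1.0)         # the higher-rated player wins
print(da, db)                            # A's gain mirrors B's loss
```

[Note that a conserved total does not by itself cap the top rating: the winner can keep draining points from the rest of the pool indefinitely, which is why the question of a limit is about the rate of gain, not the pool total.]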

 

Avatar of Optimissed

<<But as I said, there is no objective answer to the question "what is AI?". There are only answers to questions like "How do specialists use the term AI?". Another question with an objective answer would be "How does Optimissed use the term AI?", but you will forgive me if I don't think it is as important.>>

I hope you'll excuse me for being sceptical but until you and the "experts" understand the concept of intelligence and how it works in animals and humans, then what is this artificial version of intelligence supposed to be? An automatic vacuum cleaner, maybe? A driverless car?

I suppose I could ask you to define A.I. and to state how your experts define it but I don't think I'd get an answer out of you. Probably a Google link or something but nothing that indicates you can think for yourself. Perhaps you don't know what a conflict of interests is. No problem; you can continue to think as you do. Please, however, don't force your weak understanding on others; especially on me!

Avatar of Optimissed
Optimissed wrote:

<<But as I said, there is no objective answer to the question "what is AI?". There are only answers to questions like "How do specialists use the term AI?". Another question with an objective answer would be "How does Optimissed use the term AI?", but you will forgive me if I don't think it is as important.>>

I hope you'll excuse me for being sceptical but until you and the "experts" understand the concept of intelligence and how it works in animals and humans, then what is this artificial version of intelligence supposed to be? An automatic vacuum cleaner, maybe? A driverless car?

I suppose I could ask you to define A.I. and to state how your experts define it but I don't think I'd get an answer out of you. Probably a Google link or something but nothing that indicates you can think for yourself. If you won't define it then what do you actually mean by it, in descriptive terms?? If you won't tell us what you mean by it, then what are we supposed to make of what you post? Perhaps you don't know what a conflict of interests is. No problem; you can continue to think as you do. Please, however, don't force your weak understanding on others; especially on me!

 

Avatar of Toire

This is an interesting discussion, but @Optimissed's inability to quote the post to which the reply is directed and/or post legibly thereafter, rather spoils it.

Avatar of Elroch
Optimissed wrote:
I suppose I could ask you to define A.I. 

 

Consider it done.

Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.

Examples that come from DeepMind include agents that play video games by observing the screen and learning how to achieve certain objectives by experimenting with the controls, but also the classic games successes.

There are of course two phases of operation of the AI in these examples. The first is the self-learning phase, where the agent interacts with, explores and learns to understand the entire environment defined by the rules of chess (i.e. an environment which is a chessboard in which two players act in opposition). The objective of this phase of operation of the AlphaZero AI is "become a good chess player".

In the second phase, the environment consists of the specific game states that result from the AI's moves and the opponent's moves, and the objective is "get a good result in the game".
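
[The agent/environment framing in this definition is often drawn as a simple interaction loop. The sketch below uses a toy guessing environment; the class names and the environment itself are invented for illustration and have nothing to do with DeepMind's actual code.]

```python
import random

# Generic agent-environment interaction loop, as in the definition above:
# the agent observes, acts, and receives feedback toward a goal.
# The environment and agent here are toy stand-ins, not AlphaZero.

random.seed(42)  # fixed seed so the run is reproducible

class GuessEnv:
    """Toy environment: the agent's goal is to guess a hidden number."""
    def __init__(self):
        self.target = random.randint(0, 9)

    def step(self, action: int):
        reward = 1.0 if action == self.target else 0.0
        done = action == self.target
        return reward, done

class RandomAgent:
    """Toy agent: acts, then gets a chance to learn from the reward."""
    def act(self) -> int:
        return random.randint(0, 9)

    def learn(self, action: int, reward: float) -> None:
        pass  # a real agent would update its policy here

env, agent = GuessEnv(), RandomAgent()
done = False
while not done:
    action = agent.act()
    reward, done = env.step(action)
    agent.learn(action, reward)  # observe -> act -> learn, repeated
```

[AlphaZero's two phases fit the same loop: in training the "environment" is self-play under the rules of chess and `learn` updates the network parameters; in play the environment is the specific game in progress.]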

Avatar of Lyudmil_Tsvetkov

Intelligence means understanding (from the Latin intellego, "I understand").

What can an inanimate thing, a computer, understand?

Avatar of Sergeant-Peppers
After Adams-Carlsen at the London werewolves, there is not much point looking at GM games anymore, but AlphaZero, using far more processing power, versus Fishstock under the circumstances was like a prime Fischer versus a drunk Nigel Short.
Avatar of Optimissed

<<Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.>>

Thanks for the clarification, Elroch. I appreciate it. So that means that the study of A.I. is the study of agents etc.

Intelligence is, loosely, interacting with the environment to achieve goals. However, it raises the question of where the volition to do this comes from. Sure, we are motivated to interact in a way that promotes our health, longevity, reproduction prospects, wealth, maybe power etc. To some extent, we don't have much choice, since not to do that would probably mean we wouldn't be here, but I'm not one of those people who believe we can't make choices. I hold that making choices or decisions is the result of a brain structure which isolates decision-making from the environment. I call it the Zener Effect, for want of something better. Others may call it something different. I'm sure you know what a Zener diode does. Anyway, I'd be interested as to where the computer scientists believe that the chess engine gets its motivation from. From itself, from God or from a sequence of interrelated instructions called a program(me)?

What I'm getting at is that so far, I don't see anything to distinguish this chess programme from a driverless car. At least, nothing qualitative. All it is, is a more algorithmic and less brute force programme which has a fully interactive database of positions.

Avatar of DiogenesDue
Optimissed wrote:

<<Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.>>

Thanks for the clarification, Elroch. I appreciate it. So that means that the study of A.I. is the study of agents etc.

Intelligence is, loosely, interacting with the environment to achieve goals. However, it raises the question of where the volition to do this comes from. Sure, we are motivated to interact in a way that promotes our health, longevity, reproduction prospects, wealth, maybe power etc. To some extent, we don't have much choice, since not to do that would probably mean we wouldn't be here, but I'm not one of those people who believe we can't make choices. I hold that making choices or decisions is the result of a brain structure which isolates decision-making from the environment. I call it the Zener Effect, for want of something better. Others may call it something different. I'm sure you know what a Zener diode does. Anyway, I'd be interested as to where the computer scientists believe that the chess engine gets its motivation from. From itself, from God or from a sequence of interrelated instructions called a program(me)?

What I'm getting at is that so far, I don't see anything to distinguish this chess programme from a driverless car. At least, nothing qualitative. All it is, is a more algorithmic and less brute force programme which has a fully interactive database of positions.

I think you are in thermal runaway.

Sorry, inside Zener diode joke...

Avatar of Elroch

It's worth pointing out that AlphaZero does not have a database of positions. It has a convoluted way of calculating a number when presented with a position (and a list of other numbers for the legal moves). The origin of this convoluted function is its reaction to experience of about 28 billion positions and what they led to.
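
[One way to see the distinction is to contrast a lookup table with a parametric evaluation function. The toy "network" below is just a linear function of two made-up features - nothing like AlphaZero's real deep networks - but it shows the key property: only parameters are stored, and any position, seen before or not, can be scored.]

```python
# Toy contrast between a position database and a parametric evaluator.
# The feature names and weights are invented for illustration.

# A database can only answer for positions it has stored:
database = {"startpos": 0.0}

# A parametric evaluator stores weights, not positions. Here the
# (hypothetical) features of a position are (material balance, mobility).
weights = (0.8, 0.1)   # in a real engine these would be learned

def evaluate(features):
    """Score a position from its features; nothing is looked up."""
    return sum(w * f for w, f in zip(weights, features))

novel = (2.0, 5.0)          # a position the evaluator has never seen
print(evaluate(novel))      # computed from the weights, not retrieved
print("novel" in database)  # the database simply has no answer for it
```

[Training adjusts the weights in response to experience; afterwards the positions themselves are gone, and only the parameters remain - which is why "database of positions" misdescribes what AlphaZero stores.]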

Avatar of SmyslovFan
Elroch wrote:
Optimissed wrote:
I suppose I could ask you to define A.I. 

 

Consider it done.

Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goals.

...

That's actually a pretty poor, not very useful definition.  AI is not the study of something.... We may study AI, but it is not in and of itself the study of...

 

Computer World, quoting John McCarthy, who coined the term in 1956, states,

Simply put, artificial intelligence is a sub-field of computer science. Its goal is to enable the development of computers that are able to do things normally done by people -- in particular, things associated with people acting intelligently.

 

The article then goes into great depth to discuss the different types of AI that are currently being studied. 

 

The full article on AI is well worth a read. It is written for the lay person but has not dumbed down the concept or obscured it in verbose jungles that, when untangled, mean nothing.

Here's a segment of the article. (For some reason, I'm having difficulty copying and pasting the link here.)

Strong AI, weak AI and everything in between

It turns out that people have very different goals with regard to building AI systems, and they tend to fall into three camps, based on how closely the machines they are building line up with how people work.

For some, the goal is to build systems that think exactly the same way that people do. Others just want to get the job done and don’t care if the computation has anything to do with human thought. And some are in-between, using human reasoning as a model that can inform and inspire but not as the final target for imitation.

The work aimed at genuinely simulating human reasoning tends to be called “strong AI,” in that any result can be used to not only build systems that think but also to explain how humans think as well. However, we have yet to see a real model of strong AI or systems that are actual simulations of human cognition, as this is a very difficult problem to solve. When that time comes, the researchers involved will certainly pop some champagne, toast the future and call it a day.

The work in the second camp, aimed at just getting systems to work, is usually called “weak AI” in that while we might be able to build systems that can behave like humans, the results will tell us nothing about how humans think. One of the prime examples of this is IBM’s Deep Blue, a system that was a master chess player, but certainly did not play in the same way that humans do.

Somewhere in the middle of strong and weak AI is a third camp (the “in-between”): systems that are informed or inspired by human reasoning. This tends to be where most of the more powerful work is happening today. These systems use human reasoning as a guide, but they are not driven by the goal to perfectly model it.

 

Avatar of SmyslovFan

The article can be found by searching Google for:

"computer world: artificial intelligence definition"
https://www.computerworld.com/article/2906336/.../what-is-artificial-intelligence.html

 

 

Avatar of Lyudmil_Tsvetkov
Elroch wrote:

It's worth pointing out that AlphaZero does not have a database of positions. It has a convoluted way of calculating a number when presented with a position (and a list of other numbers for the legal moves). The origin of this convoluted function is its reaction to experience of about 28 billion positions and what they led to.

Then it is all about memory rather than AI.