Stockfish dethroned

Avatar of DiogenesDue
Lyudmil_Tsvetkov wrote:
Elroch wrote:

It's worth pointing out that AlphaZero does not have a database of positions. It has a convoluted way of calculating a number when presented with a position (and a list of other numbers for the legal moves). The origin of this convoluted function is its reaction to experience of about 28 billion positions and what they led to.

Then it is all about memory rather than AI.

Ermm...no.  I believe he's saying that AlphaZero's convoluted calculations are pretty much similar to, say, tactics trainer positions for a human being.  AlphaZero will not traverse thousands of games each time to find one that matches...it has developed an "eye" for positions and will apply its uniquely developed valuations of each factor in the position without the need for database lookups of previous game moves.

In that sense, AlphaZero will be like a unique human player.  It played itself for millions of games to "figure out" chess.  If it had played Stockfish for millions of games instead, its understanding of chess would be uniquely different, and if it only played humans worldwide online or something, its understanding of chess would be unique in a different way.

It is the right way to go, leaving behind all human history and the faulty assumptions we have concerning best play, which are also built into our engines.  It's just too bad they jumped the gun on announcing dominance over the best engine under dubious circumstances.

Avatar of Elroch

Intuitively, it is a good idea to imagine it like this. It has a big neural network which receives the current board position. Some of the nodes encapsulate ideas like how much material there is, who controls territory, what is attacking what, what skewers what, and so on through all sorts of configurations. Also positional factors of a zillion different types.

These abstract concepts appear in the network at deeper levels as it trains because they happen to be useful for working out what is a good move. You can think of it as a bit like evolution (only it is a bit more deliberate in the way it adjusts the parameters to try to make the evaluation better).

As an analogy from another area, neural networks are used to classify photos. The first level of the network may identify simple things like whether adjacent pixels are similar. A higher one may identify edges. A still higher one may identify face shaped areas, another something that might be an eye. A higher node still may provide a probability that a face is your face, out of a list of faces in photographs on which it trained.

AlphaZero has to do the same for chess. In some ways this is easier, in others harder, but the tool in both cases is a large, deep neural network made of very similar dumb units that learn to be useful.
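To make that layered picture a little more concrete, here is a deliberately tiny sketch in PyTorch of a network that takes a board encoding and produces both an evaluation and a move distribution. It is not DeepMind's architecture (theirs is far deeper and uses residual blocks); every name and size below is an illustrative assumption.

# Illustrative only: a toy value/policy network in the spirit of the
# description above. AlphaZero's real network is much larger; the layer
# sizes and the move-encoding size here are assumptions.
import torch
import torch.nn as nn

class TinyBoardNet(nn.Module):
    def __init__(self, planes=12, moves=4672):
        super().__init__()
        # Early layers pick up local patterns (contacts, attacks, pawn shapes)...
        self.conv1 = nn.Conv2d(planes, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        # ...a later layer combines them into more abstract features.
        self.fc = nn.Linear(64 * 8 * 8, 256)
        self.value_head = nn.Linear(256, 1)       # "who stands better?"
        self.policy_head = nn.Linear(256, moves)  # "which moves look promising?"

    def forward(self, board):                     # board: (N, planes, 8, 8)
        x = torch.relu(self.conv1(board))
        x = torch.relu(self.conv2(x))
        x = torch.relu(self.fc(x.flatten(1)))
        value = torch.tanh(self.value_head(x))              # in [-1, 1]
        policy = torch.softmax(self.policy_head(x), dim=1)  # move probabilities
        return value, policy

Training is then a matter of nudging all of those weights so that the value and policy outputs agree better with the results of self-play, which is the "a bit like evolution" adjustment described above.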

Avatar of Optimissed
btickler wrote:
 
.

<<<I think you are in thermal runaway.

Sorry, inside Zener diode joke...>>>

Sorry, I'd wired it in the wrong way round. Things are better now.

 

Avatar of Optimissed

<<<Ermm...no.  I believe he's saying that AlphaZero's convoluted calculations are pretty much similar to, say, tactics trainer positions for a human being.  AlphaZero will not traverse thousands of games each time to find one that matches...it has developed an "eye" for positions and will apply its uniquely developed valuations of each factor in the position without the need for database lookups of previous game moves.

In that sense, AlphaZero will be like a unique human player.>>>

Most people here may not be old enough to remember the development of chess programs and how there was a "discussion" between more and less algorithmic programs. Simplistic calculation won over complex calculations, and the experiments I did back in the 80s also bore this out .... computers achieved the same goals much faster when there were more calculations to perform, which were kept simple, than when there were sometimes far fewer calculations that were more complex. I could never write in machine code, but I suppose that would provide the clue as to why this was.

Nowadays, data storage devices are much bigger and speeds are much faster. Moreover, chess programs are written by programming experts rather than by amateurs who are primarily chess players. Anyway, we have a reversion to a more algorithmic style of programming. The idea that this is suddenly A.I. is ridiculous. A.I. has not been developed as yet. It looks like A.I. because it wasn't provided with an opening book, but what we have is simulated intelligence.

It's the job of these people to convince others that great breakthroughs are being made because that's how they earn their money. So it's a simulation. Quite successful, it would appear.

Avatar of Optimissed

Incidentally, we already knew that brute force programming was a dead end. Chess engines had become completely boring even by the time that Fritz was developed. Fritz used to beat me by repeating moves and pouncing when I deviated from the best move to try to win. It was dull as ditchwater. OK, they've come a long way since then. There was one called Rebel I used to like to play against because it didn't bore me to death but it's noteworthy that Rebel wasn't among the very strongest programs.

Avatar of Optimissed

I know I should have. I wonder if it would have repeated three times though. It was a mark of how dull I found Fritz that I deviated after the first repetition and this happened at least two or three times at a slow rate of play.

Smyslov Fan wrote "The work aimed at genuinely simulating human reasoning tends to be called “strong AI,” in that any result can be used to not only build systems that think but also to explain how humans think as well. However, we have yet to see a real model of strong AI or systems that are actual simulations of human cognition, as this is a very difficult problem to solve. When that time comes, the researchers involved will certainly pop some champagne, toast the future and call it a day."

I think this highlights the genuine weakness in their thought .... the idea that we could learn how the human brain works by first building a strong A.I. program. In fact it must be the other way round .... first we have to understand the human mind.

Avatar of Elroch
Optimissed wrote:

Incidentally, we already knew that brute force programming was a dead end. Chess engines had become completely boring even by the time that Fritz was developed. Fritz used to beat me by repeating moves and pouncing when I deviated from the best move to try to win.

No! You should have taken the draw!

It was dull as ditchwater. OK, they've come a long way since then. There was one called Rebel I used to like to play against because it didn't bore me to death but it's noteworthy that Rebel wasn't among the very strongest programs.

Who remembers Crafty? I used to catch it out at blitz sometimes.

Avatar of hairhorn

Now that industry-standard chess engines are superhuman, it's more fun to try writing your own, slightly crappy chess playing agent. There are several that come in at less than 1K (like the slightly flawed 487 byte Boot Chess).
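In that spirit, here is a minimal sketch of just such a slightly crappy agent: a greedy one-ply material-grabber. It assumes the python-chess library is installed (pip install chess); everything apart from the library calls is made up for illustration.

# A deliberately weak chess agent: play the move that wins the most
# material one ply ahead and ignore everything else.
import random
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board, colour):
    # Simple material count from `colour`'s point of view.
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == colour else -value
    return score

def crappy_move(board):
    # Greedy one-ply search: maximise material after our own move.
    side = board.turn
    best_move, best_score = None, -1000
    moves = list(board.legal_moves)
    random.shuffle(moves)                 # break ties unpredictably
    for move in moves:
        board.push(move)
        score = material(board, side)
        board.pop()
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == "__main__":
    board = chess.Board()
    while not board.is_game_over():
        board.push(crappy_move(board))
    print(board.result())

It hangs pieces constantly, which is rather the point: getting something to play legal, vaguely purposeful chess is the fun part.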

Avatar of Optimissed

Oh dear me, I'm tired. I answered your post before your post. FAIL.

Avatar of Elroch
hairhorn wrote:

Now that industry-standard chess engines are superhuman, it's more fun to try writing your own, slightly crappy chess playing agent. There are several that come in at less than 1K (like the slightly flawed 487 byte Boot Chess).

Wonderful! It is amazing that ZX81 chess has finally been surpassed.

Avatar of Elroch
SmyslovFan wrote:
Elroch wrote:
Optimissed wrote:
I suppose I could ask you to define A.I. 

 

Consider it done.

Artificial intelligence is defined by computer scientists as the study of agents that observe an environment and interact with it in order to achieve some sort of goal.

...

That's actually a pretty poor, not very useful definition.  AI is not the study of something.... We may study AI, but it is not in and of itself the study of...

So what you are saying is that the computer scientists who have done all the work on AI have the definition wrong? You should tell them, so they can fix it!

There is the same _dual_ meaning in almost every subject: mathematics (my original subject) is _both_ what is studied and the studying. Chemistry is both the behaviour of chemicals and studying that behaviour. It is rare for someone to demand that a term only be used for what is being studied and not for the study of it as well - there would be an immediate need for lots of new words!

Note that "studying" can include doing research and development, as it does in all the other subjects. 

Computerworld, quoting John McCarthy, who coined the term in 1956, states,

Simply put, artificial intelligence is a sub-field of computer science. Its goal is to enable the development of computers that are able to do things normally done by people -- in particular, things associated with people acting intelligently.

 

The article then goes into great depth to discuss the different types of AI that are currently being studied. 

 

The full article on AI is well worth a read. It is written for the lay person but has not dumbed down the concept or obscured it in verbose jungles that, when untangled, mean nothing.

Here's a segment of the article. (For some reason, I'm having difficulty copying and pasting the link here.)

Strong AI, weak AI and everything in between

It turns out that people have very different goals with regard to building AI systems, and they tend to fall into three camps, based on how close the machines they are building line up with how people work.

For some, the goal is to build systems that think exactly the same way that people do. Others just want to get the job done and don’t care if the computation has anything to do with human thought. And some are in-between, using human reasoning as a model that can inform and inspire but not as the final target for imitation.

The work aimed at genuinely simulating human reasoning tends to be called “strong AI,” in that any result can be used to not only build systems that think but also to explain how humans think as well. However, we have yet to see a real model of strong AI or systems that are actual simulations of human cognition, as this is a very difficult problem to solve. When that time comes, the researchers involved will certainly pop some champagne, toast the future and call it a day.

The work in the second camp, aimed at just getting systems to work, is usually called “weak AI” in that while we might be able to build systems that can behave like humans, the results will tell us nothing about how humans think. One of the prime examples of this is IBM’s Deep Blue, a system that was a master chess player, but certainly did not play in the same way that humans do.

Somewhere in the middle of strong and weak AI is a third camp (the “in-between”): systems that are informed or inspired by human reasoning. This tends to be where most of the more powerful work is happening today. These systems use human reasoning as a guide, but they are not driven by the goal to perfectly model it.

I'd agree with all of that and use the same terminology, but would point out that Deep Blue is a lot weaker AI than AlphaZero. This is not because it doesn't play chess so well, but because it has no intrinsic capability to learn. The wonderful thing about AlphaZero is how it became a good chess player, not that it is one (although it also happens to be true that there is no way for humans to design a large neural network without it learning from experience). In addition, AlphaZero is general enough to be adapted to a wide range of games merely by switching the very simple code that represents the state and the legal moves (which can be less than a kilobyte for chess, in principle!)
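As a rough picture of what "switching the game-specific code" could look like, here is a sketch of the kind of small interface the self-play and search machinery might sit on top of. The names are my own assumptions, not DeepMind's code.

# Illustrative only: the learning and search machinery can stay the same
# while only this small game-specific interface changes.
from abc import ABC, abstractmethod

class Game(ABC):
    @abstractmethod
    def initial_state(self):
        """Return the starting position."""

    @abstractmethod
    def legal_moves(self, state):
        """Return the moves available in this state."""

    @abstractmethod
    def next_state(self, state, move):
        """Return the position reached after playing the move."""

    @abstractmethod
    def result(self, state):
        """Return None if the game is unfinished, else +1, 0 or -1."""

A chess implementation of such an interface really can be tiny; Go or shogi would swap out only this class, leaving the network and the self-play loop untouched.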

 

 

Avatar of kenardi

All Alpha Zero did is play like a super human.  It first studied its opponent, found a weakness, and exploited it!  The top chess engines have not changed for years; they are still Stockfish, Houdini, and Komodo. Until Alpha Zero is in a match against more than one engine, nothing has been proven yet.

Avatar of Elroch
kenardi wrote:

All Alpha Zero did is play like a super human.  It first studied its opponent, found a weakness, and exploited it!

That might be the appearance, but note that AlphaZero did not get its skill from playing Stockfish. Rather it demonstrated its skill by beating it.

 

Avatar of kenardi
[COMMENT DELETED]
Avatar of kenardi
Elroch wrote:
kenardi wrote:

All Alpha Zero did is play like a super human.  It first studied its opponent, found a weakness, and exploited it!

That might be the appearance, but note that AlphaZero did not get its skill from playing Stockfish. Rather it demonstrated its skill by beating it.

AI thinks like a computer, and a computer engine thinks like a computer, but an engine has a weakness since its parameters are set by humans. AI, on the other hand, sets its own from learning and building a tree from what it expects from its opponent. Hence, Alpha Zero studies how to beat Stockfish before beating it.

Avatar of kenardi
kenardi wrote:
Elroch wrote:
kenardi wrote:

All Alpha Zero did is play like a super human.  It first studied its opponent, found a weakness, and exploited it!

That might be the appearance, but note that AlphaZero did not get its skill from playing Stockfish. Rather it demonstrated its skill by beating it.

AI thinks like a computer, and a computer engine thinks like a computer. However, an engine has a weakness since its parameters are set by humans; AI (Alpha Zero), on the other hand, sets its own from learning and building a tree from what it expects from its opponent. Hence, Alpha Zero studies how to beat Stockfish (a human-programmed engine) before beating it. Computers are very good at building databases (a tree of information), and this is all that was done. This has limitations. Both have a weakness. Until both of these weaknesses are exploited in some fashion, we cannot claim a winner!

 

Avatar of SmyslovFan

Elroch, I'm saying that the definition of AI as the "study" of something is all wrong. I quoted computer scientists. Please, cite your source for your definition. 

I defer to your expertise in computers in general. I don't know enough about computers to know for certain whether the chessbase article and your representation of the event are at odds with each other. The differences are quite obvious, but perhaps there is some way to reconcile what they are saying compared to what you are saying. After all, like myself, the chessbase article focuses on the event from a chess player's perspective and interviews experts in chess. But they also include quotes from a member of the Stockfish team who also describes Stockfish as being crippled, which is the opposite of what you had been saying before the chessbase article was published.

Avatar of Elroch
SmyslovFan wrote:

Elroch, I'm saying that the definition of AI as the "study" of something is all wrong.

And I pointed out that in almost all subjects (including computer science), the same term is used for both what is being studied and the study itself, and that everyone is aware of this.

I quoted computer scientists. Please, cite your source for your definition. 

I defer to your expertise in computers in general. I don't know enough about computers to know for certain whether the chessbase article and your representation of the event are at odds with each other. The differences are quite obvious, but perhaps there is some way to reconcile what they are saying compared to what you are saying. After all, like myself, the chessbase article focuses on the event from a chess player's perspective and interviews experts in chess. But they also include quotes from a member of the Stockfish team who also describes Stockfish as being crippled, which is the opposite of what you had been saying before the chessbase article was published.

Try to be specific. You have written two posts referring to purported differences without identifying any of them. I pointed out one possible source of misunderstanding.

It is not so surprising that a member of the Stockfish team would root for his guy (I mean computer), but "crippled" is a subjective term and 130 points of performance Elo (and no losses for AlphaZero when it could play what it liked) has to be explained. IMO this cannot be done by appealing to things that don't have a big effect once you already have 1 minute on 32 cores. As I pointed out, opening books and clever allocation of time to different moves have less effect the more time you start with. In addition, an opening book is a crutch for an engine, just saving it from calculating deep enough to do without it, as Stockfish does.
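For anyone who wants to check figures of that kind, the usual way to turn a match score into a performance-Elo gap is the standard logistic Elo formula. A quick sketch (the two-thirds score is only an example, not the official tally):

# Elo gap implied by a fractional match score (wins plus half the draws,
# divided by the number of games), using the standard logistic model.
import math

def elo_difference(score):
    return 400 * math.log10(score / (1 - score))

print(round(elo_difference(2 / 3)))   # scoring two thirds of the points ~ a 120-point gap

The gap grows quickly as the score climbs towards 100%, which is why an unbeaten result against an engine of Stockfish's strength is hard to wave away.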

Avatar of SmyslovFan

I'm still waiting for you to cite the source, or sources, since you now say "computer scientists" use that definition.

 

For the record, while I am not a computer scientist, I have studied AI and taken courses in Psychology pertaining to AI. I am a literate lay person, but I recognize my limitations.

Avatar of sammy_boi
Elroch wrote:

In addition, an opening book is a crutch for an engine, just saving it from calculating deep enough to do without it, as Stockfish does.

Yes, but also I think it should be mentioned traditional engines like SF are not designed to handle the opening well. The programmers make adjustments mostly aimed at middlegame play, assuming there will be an opening book of some sort (if not one they prefer).

This is why I'd like to see AlphaZero play the strongest traditional engine, with a good opening book as well as EGTB. That would be much more like a mankind vs alien intelligence match... although I assume the AZ team has no interest in what chess fans like.

I also read that they haven't released all the games, and haven't released any code. If this is supposed to be a leap in AI, why aren't the full methods and results available to researchers in that field? That, combined with the curious SF choices, makes it seem a bit fishy.