Machine Learning in chess

I don't know anything about neural networks, but I think whoever says they created self-learning engines is sugar-coating the truth. How does that even work? It's still using a chess engine. That's what an engine already does: it sorts through many bad moves, many times, in a short span of time, and routes back to the initial move repeatedly to determine whether it's the best one. There's no such thing as self-learning currently; the people who say so just want to believe it's true. No offense. It's still using an algorithm, but what's happening in the case of AlphaZero is that the moves have been delayed to give the effect of learning. Essentially they found a way to rewrite whatever algorithm a normal engine uses to a greater depth than Stockfish, or whatever the best engine was. Its normal algorithm is simply delayed over the course of many games and displayed between it and the opponent.
That isn't at all what something like A0 is. The only things given to the program in advance were the rules for how the pieces move and what defines the end condition.
No opening theory. No hand-written algorithm for choosing moves. It literally learned what worked for it and what didn't. It looks at positions, assigns each a statistical likelihood of winning based on past experience, and chooses the move that gives the position with the highest likelihood of winning. It also looks at far fewer future positions than classic methods do.
Leela, as far as I'm aware, used the exact same process.
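The selection rule described above (score each resulting position by estimated win likelihood, take the best) can be sketched in a few lines. Everything here is a stand-in of my own invention: `win_probability` fakes the trained value network with a fixed pseudo-random value, and the move list and board update are placeholders, since the point is the selection rule, not a real engine.

```python
import random

def win_probability(position):
    """Stand-in for a trained value network: maps a position to an
    estimated chance of winning. A real network (A0/Leela) learns this
    mapping from self-play; here it is just a fixed pseudo-random value."""
    return random.Random(position).random()

def legal_moves(position):
    """Stand-in move generator; a real engine derives this from the rules."""
    return ["e4", "d4", "Nf3", "c4"]

def make_move(position, move):
    """Stand-in for updating the board."""
    return position + " " + move

def choose_move(position):
    # Score each resulting position by its estimated win likelihood and
    # take the best one -- no hand-written search heuristics involved.
    return max(legal_moves(position),
               key=lambda m: win_probability(make_move(position, m)))

print(choose_move("startpos"))
```

In the real systems this greedy step is wrapped in a Monte Carlo tree search, which is why far fewer positions need to be examined than in classic alpha-beta search.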

That's like a delayed normal engine then; it's not really impressive to me, or novel. That means they altered a normal chess engine to play itself. I'm sure there's a word for it, but all it does is pick one move, then pick one for the opponent, and, depending on the depth, assess the best move after a certain number of moves; it retroactively alters its first move in the sequence. The first move is probably the easiest for it. After that, as the position rules out certain moves, it can pick one move and ideally change it. So for AlphaZero they altered an engine to "play itself", which is what an engine does anyway. There's no real difference, lol, unless there's a consciousness trapped in there, in which case I've got no idea.
#1
"requiring to train trillions of training data before an engine improves in strength"
AlphaZero reached top grandmaster strength by playing 700,000 games against itself.
Also, humans like Fischer and Carlsen practiced that way.

That's like a delayed normal engine then; it's not really impressive to me, or novel. That means they altered a normal chess engine to play itself. I'm sure there's a word for it, but all it does is pick one move, then pick one for the opponent, and, depending on the depth, assess the best move after a certain number of moves; it retroactively alters its first move in the sequence. The first move is probably the easiest for it. After that, as the position rules out certain moves, it can pick one move and ideally change it. So for AlphaZero they altered an engine to "play itself", which is what an engine does anyway. There's no real difference, lol, unless there's a consciousness trapped in there, in which case I've got no idea.
They did not alter an engine. The only things the program knew were how the pieces moved and what constituted a completed game. It learned how to play by using the base rules to figure out what actually worked and what did not. It literally learned by trial and error.
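The trial-and-error learning described here can be shown on a toy game. This sketch uses Nim (take 1 to 3 stones per turn; whoever takes the last stone wins) instead of chess to keep it tiny: the program is given only those rules, and every move value is learned purely from the outcomes of its own self-play games. All names and parameters are my own invention for illustration.

```python
import random

# Nim: 10 stones, take 1-3 per turn, whoever takes the last stone wins.
# The program is told only the rules; move values are learned from the
# outcomes of self-play games.

Q = {}  # (stones_left, move) -> (wins, plays) for the player who moved

def value(s, m):
    w, n = Q.get((s, m), (0, 0))
    return 0.5 if n == 0 else w / n    # unknown moves start neutral

def choose(s, explore):
    moves = [m for m in (1, 2, 3) if m <= s]
    if explore and random.random() < 0.2:
        return random.choice(moves)    # occasionally try something new
    return max(moves, key=lambda m: value(s, m))

def play_game():
    s, history, player = 10, [], 0
    while s > 0:
        m = choose(s, explore=True)
        history.append((player, s, m))
        s -= m
        player ^= 1
    winner = history[-1][0]            # whoever took the last stone
    for p, st, mv in history:          # credit/blame every move played
        w, n = Q.get((st, mv), (0, 0))
        Q[(st, mv)] = (w + (p == winner), n + 1)

random.seed(0)
for _ in range(20000):
    play_game()

# Optimal Nim play leaves a multiple of 4 stones, so from 10 take 2.
print("learned move from 10 stones:", choose(10, explore=False))
```

After enough self-play the statistics alone make the optimal move stand out, even though no strategy was ever programmed in: that is the same trial-and-error mechanism, in miniature.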

#1
"requiring to train trillions of training data before an engine improves in strength"
AlphaZero reached top grandmaster strength by playing 700,000 games against itself.
Also, humans like Fischer and Carlsen practiced that way.
Fischer and Carlsen also studied other players' games and used other learning resources.
That is correct: AlphaZero literally learned the game of chess and its theory on its own, without external input or position-evaluation advice from a human. Contrast this with a traditional chess engine (Deep Blue), where a GM was hired to programmatically etch deep positional patterns into the evaluation to help the computer decide. Traditional chess engines have been told by human advisers about the importance of the central squares d4, d5, e4, e5, so the engine gives weight to those squares. Traditional chess engines have also been advised to value the knight over the bishop in closed positions.
NOW, none of the above human advice was given to AlphaZero. AlphaZero had to learn for itself the importance of the central squares, how to evaluate open and closed positions, etc. And, indeed, it learned them. It even unraveled new concepts that no human had conceived. Think about this very carefully, because its implications are mind-blowing. Although, at present, I don't think AlphaZero can tackle the Penrose position challenge.
Anyway, my original question is just about the simple stuff. I already stated that the ML in chess engines is very deep and way over my head. I just want the simpler things, like PyTorch, TensorFlow, SVMs, etc., and example applications to small aspects of chess, not necessarily a whole engine, as long as they use these well-known ML libraries. I repeat: the ML algorithms in chess engines are niche, deep, and very chess-specific. I'm more interested in the popular ML frameworks and how they can be applied to any aspect of chess.
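A minimal example of the kind of small, non-engine ML task being asked about: predicting the game result from a couple of hand-made features with plain logistic regression. The data here is synthetic, generated on the spot with material deliberately made more predictive than mobility; in practice you would extract such features from a PGN database. It uses only NumPy, so the same idea transfers directly to scikit-learn, PyTorch, or TensorFlow.

```python
import numpy as np

# Predict the winner from two hand-made features using plain logistic
# regression. The data is synthetic: material is built to matter more
# than mobility, and the model should recover that.

rng = np.random.default_rng(0)
n = 2000
material = rng.normal(0, 3, n)    # pawn-equivalent material difference
mobility = rng.normal(0, 5, n)    # legal-move-count difference
noise = rng.normal(0, 1, n)
y = (0.9 * material + 0.1 * mobility + noise > 0).astype(float)  # 1 = win

X = np.column_stack([material, mobility, np.ones(n)])  # bias column
w = np.zeros(3)
for _ in range(2000):             # plain gradient descent on log loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

p = 1 / (1 + np.exp(-X @ w))
accuracy = ((p > 0.5) == (y == 1)).mean()
print("weights:", w.round(2), "accuracy:", round(accuracy, 2))
```

The learned weight on material should come out clearly larger than the weight on mobility, mirroring the planted relationship: that recovery of feature importance from data is the whole point of the exercise.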
#1
"requiring to train trillions of training data before an engine improves in strength"
AlphaZero reached top grandmaster strength by playing 700,000 games against itself.
Also, humans like Fischer and Carlsen practiced that way.
Fischer and Carlsen also studied other players' games and used other learning resources.
I don't think it is within a human's life span to play even 100,000 games. But despite that, the best of us can draw against the strongest chess engines today, which were trained on 700,000 games. That's because humans truly learn. Tell me of a chess engine that needs to play only 1,000 games from ground zero and reaches an Elo of around 2000.

That's like a delayed normal engine then; it's not really impressive to me, or novel. That means they altered a normal chess engine to play itself. I'm sure there's a word for it, but all it does is pick one move, then pick one for the opponent, and, depending on the depth, assess the best move after a certain number of moves; it retroactively alters its first move in the sequence. The first move is probably the easiest for it. After that, as the position rules out certain moves, it can pick one move and ideally change it. So for AlphaZero they altered an engine to "play itself", which is what an engine does anyway. There's no real difference, lol, unless there's a consciousness trapped in there, in which case I've got no idea.
It only seems the same to people who know little about computers and see them as black boxes.
Traditional engines evaluate positions based on valuations baked into them and then tweaked endlessly by the developers.
Machine learning engines were given no valuations at all. They learned the game through pure trial and error.
They are completely different animals, software-wise. Stockfish NNUE is a hybrid of the two distinctly different approaches.
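The "valuations baked in by developers" approach can be sketched concretely: a hand-tuned piece-value table, a static evaluation function, and a minimax search. The piece values are the textbook ones, but the toy game tree (standing in for real move generation) is invented for illustration.

```python
# Hand-tuned knowledge + search: the two ingredients of a traditional
# engine. The toy game tree below is invented for illustration.

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # set by developers

def evaluate(white, black):
    # Hand-crafted static evaluation: pure material count, in pawns.
    # Real engines add tuned terms for king safety, center control, etc.
    return (sum(PIECE_VALUE[p] for p in white)
            - sum(PIECE_VALUE[p] for p in black))

def minimax(node, maximizing=True):
    # A node is either a (white_pieces, black_pieces) leaf or a list of
    # child nodes; White maximizes the score, Black minimizes it.
    if isinstance(node, tuple):
        return evaluate(*node)
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two-ply toy tree, White to move:
#  - line A: grab a pawn, but Black wins a knight back (best reply: -2)
#  - line B: quiet move, Black can at best keep material level (0)
tree = [
    [("QRP", "QRN"), ("QR", "QR")],     # line A: Black answers with -2
    [("QRN", "QRN"), ("QRNP", "QRN")],  # line B: Black answers with 0
]
print(minimax(tree))  # -> 0 (White prefers the quiet line)
```

Every number the search compares comes from the humans who wrote `PIECE_VALUE` and `evaluate`; in the machine learning approach that table simply does not exist, which is the software-level difference being described.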

#1
"requiring to train trillions of training data before an engine improves in strength"
AlphaZero reached top grandmaster strength by playing 700,000 games against itself.
Also, humans like Fischer and Carlsen practiced that way.
Fischer and Carlsen also studied other players' games and used other learning resources.
I don't think it is within a human's life span to play even 100,000 games. But despite that, the best of us can draw against the strongest chess engines today, which were trained on 700,000 games. That's because humans truly learn. Tell me of a chess engine that needs to play only 1,000 games from ground zero and reaches an Elo of around 2000.
How many games depends on the time control and a player's free time. OTB it would be hard to get that many games, but online there are likely quite a few people who have easily played that many or more.
The deep learning algorithms aren't inherently different from humans when it comes to learning; they can just play a lot more games, a lot faster, with perfect memories.
There aren't many people who could just learn the rules and play games against themselves to reach 2000 strength. Without outside influences, such as books or stronger opponents to learn from, they'd be lucky to break 1000 after 1,000 games against themselves.
Yes, AlphaZero is indeed vastly different from traditional chess engines. As soon as AlphaZero can correctly evaluate the Penrose position, Skynet is on the horizon.
If AlphaZero could correctly evaluate the Penrose position without any external human input, it would mean it is able to compute certain truths that are very hard to compute by numbers alone. To correctly evaluate the Penrose position, it needs to see beyond the numbers and anticipate what its opponent will do. Only then can it correctly conclude that the position is a draw.
I mean, you could look at some source code for the use cases (I don't know whether the engines are open source, but I'm sure there are projects on the official websites of the libraries).

Yes, AlphaZero is indeed vastly different from traditional chess engines. As soon as AlphaZero can correctly evaluate the Penrose position, Skynet is on the horizon.
If AlphaZero could correctly evaluate the Penrose position without any external human input, it would mean it is able to compute certain truths that are very hard to compute by numbers alone. To correctly evaluate the Penrose position, it needs to see beyond the numbers and anticipate what its opponent will do. Only then can it correctly conclude that the position is a draw.
AlphaZero has been succeeded by MuZero, which was given no information about how to play, learning the game from scratch. Since the whole project is about creating general-purpose methods to solve any problem, my guess is that it might be able to solve the position, whatever it is (I'll have to look it up), if it happened to run across a similar position in its learning process.
But chess is just a domain to test the process of a particular approach to machine learning, and isn't something the project inherently cares about.

That is correct: AlphaZero literally learned the game of chess and its theory on its own, without external input or position-evaluation advice from a human. Contrast this with a traditional chess engine (Deep Blue), where a GM was hired to programmatically etch deep positional patterns into the evaluation to help the computer decide. Traditional chess engines have been told by human advisers about the importance of the central squares d4, d5, e4, e5, so the engine gives weight to those squares. Traditional chess engines have also been advised to value the knight over the bishop in closed positions.
NOW, none of the above human advice was given to AlphaZero. AlphaZero had to learn for itself the importance of the central squares, how to evaluate open and closed positions, etc. And, indeed, it learned them. It even unraveled new concepts that no human had conceived. Think about this very carefully, because its implications are mind-blowing. Although, at present, I don't think AlphaZero can tackle the Penrose position challenge.
Anyway, my original question is just about the simple stuff. I already stated that the ML in chess engines is very deep and way over my head. I just want the simpler things, like PyTorch, TensorFlow, SVMs, etc., and example applications to small aspects of chess, not necessarily a whole engine, as long as they use these well-known ML libraries. I repeat: the ML algorithms in chess engines are niche, deep, and very chess-specific. I'm more interested in the popular ML frameworks and how they can be applied to any aspect of chess.
You've made contradictory statements on what you want.
You want ML (machine learning) and not DL (deep learning) and yet you want to use deep learning tools.
Do you want to build a simple deep learning project that uses very few games... or what?
I know it can be googled, but I also posted it here in case some chess enthusiasts in the same boat with ML/DL can share additional info. Tutorials starting out in ML/DL usually use the iris flower or Titanic CSV files to demonstrate the point. I'm looking for other use cases that involve chess, or an aspect of it, to pump up my interest in learning ML/DL.
While I'm on the topic, I watched a Vapnik talk on VC theory, and he clearly illuminated the big picture of what is going on in AI. His comments fully resonated with my initial observation: why do we need 500,000 games to train an engine? According to him, a truly learning engine needs only a tenth of the training data.
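For the chess analogue of the iris or Titanic starter datasets mentioned above, the usual first step is turning a position into a fixed-length numeric vector that any generic ML library can consume. One common encoding (used in more elaborate form by AlphaZero and LC0) is one plane of 64 squares per piece type and color; the sketch below builds that 768-dimensional vector from the board field of a FEN string.

```python
import numpy as np

# Encode a position as 12 one-hot planes of 64 squares (6 piece types x
# 2 colors), built from the board field of a FEN string. The resulting
# 768-dimensional vector can feed any generic classifier or network.

PIECES = "PNBRQKpnbrqk"  # plane order: White pieces, then Black pieces

def fen_board_to_vector(fen_board):
    planes = np.zeros((12, 64))
    square = 0
    for ch in fen_board:
        if ch == "/":
            continue                  # rank separator
        elif ch.isdigit():
            square += int(ch)         # run of empty squares
        else:
            planes[PIECES.index(ch), square] = 1
            square += 1
    return planes.reshape(-1)

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
v = fen_board_to_vector(start)
print(v.shape, int(v.sum()))  # -> (768,) 32  (32 pieces on the board)
```

Pair these vectors with a label per position (game result, best move, a motif tag) and you have exactly the kind of tabular dataset the beginner tutorials assume, just sourced from chess.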

While I'm on the topic, I watched a Vapnik talk on VC theory, and he clearly illuminated the big picture of what is going on in AI. His comments fully resonated with my initial observation: why do we need 500,000 games to train an engine? According to him, a truly learning engine needs only a tenth of the training data.
It's all about patterns and learning what works and what doesn't. For a self-learning process like AlphaZero, it had to play a lot of games to generate sufficient positions, and in many of those games it was learning what didn't work more than what did. It might be possible to train a program on fewer positions if you could get enough quality games covering enough patterns. But that is doing something slightly different than what AlphaZero was trying to do.

While I'm on the topic, I watched a Vapnik talk on VC theory, and he clearly illuminated the big picture of what is going on in AI. His comments fully resonated with my initial observation: why do we need 500,000 games to train an engine? According to him, a truly learning engine needs only a tenth of the training data.
It's all about patterns and learning what works and what doesn't. For a self-learning process like AlphaZero, it had to play a lot of games to generate sufficient positions, and in many of those games it was learning what didn't work more than what did. It might be possible to train a program on fewer positions if you could get enough quality games covering enough patterns. But that is doing something slightly different than what AlphaZero was trying to do.
Exactly...
What are you trying to accomplish?
Train an engine...for what?
To learn... something: a few patterns, tactics, maneuvering of pieces?
To "truly learn", wouldn't you need to reference only your own games? Otherwise, I could argue you only need 1/100th, or 100x. Who determined the minimum and maximum needed to "learn", and how was that established?

AlphaZero played those training games in 4 hours, so I'm not sure why playing fewer games would be a goal here, since it's not like they used up a lot of resources. It only took 4 hours for AlphaZero to learn to play chess better than all of human history combined (though A0 still has not played an official match, to be fair).
Of course, as anyone would expect, the results are better the more games played. So saying it should take fewer games to learn chess in general is obviously true, but saying that learning chess at a 3600 level should take 5,000 games is pretty silly, AI or no. That would allow only a handful of attempts at every opening variation. You could play all 5,000 games in the Ruy Lopez and probably not exhaust the reasonable candidate moves.
I want to dip my toe into machine learning, and I want to do it in chess. True, we already have the open-source `LC0` and `Stockfish NNUE`, but I think the level of `ML` used there is very specialized, deep, and niche. I don't need exceptional chess strength; instead, I'm looking for best use-case examples using popular ML methods/libraries such as:
It could be in small chunks, as long as it fits and shows the ML concept clearly. Also, I smell something wrong with neural networks requiring trillions of training samples before an engine improves in strength. Isn't there something that learns like a human: plays a couple of games and improves that way? It could be a sparring partner that grows with the player.
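The "sparring partner that grows with the player" idea can be sketched with a plain Elo update: after each game the partner re-estimates the player's rating and matches its own strength to that estimate. The win/loss sequence and K-factor below are made up for illustration; the update formula itself is the standard Elo one.

```python
# A sparring partner that tracks the player's strength via the standard
# Elo update. The game results and K-factor are made up for illustration.

def expected_score(r_player, r_partner):
    # Standard Elo expectation: 0.5 when ratings are equal.
    return 1 / (1 + 10 ** ((r_partner - r_player) / 400))

def update(r_player, r_partner, score, k=32):
    """score: 1 = player won, 0.5 = draw, 0 = player lost."""
    return r_player + k * (score - expected_score(r_player, r_partner))

player, partner = 1200.0, 1200.0
for score in [1, 1, 0.5, 1, 0, 1, 1]:   # an invented run of results
    player = update(player, partner, score)
    partner = player                     # partner matches the new estimate
print(round(player))  # -> 1264
```

A practical partner would then map that rating to an engine setting (search depth, node limit, or a strength-limited UCI option), so it stays just challenging enough as the player improves.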