Objectively Speaking, Is Magnus a Patzer Compared to Stockfish and AlphaZero?

Yeah, CUDA cores are slow compared to CPU cores. My overclocked quad-core i5 is faster for NNs than my GPU (bought to power 3 monitors), despite having (if I recall correctly) 84 times fewer individual cores.

Is not 'The Secret of Chess' a nice example for a neural network?
https://en.chessbase.com/post/the-secret-of-chess
Instead of using 50-100 terms to evaluate chess positions, you use 10 times more.
I don't get your point. The book (based on the review) seems to be about a handcrafted (possibly linear) evaluation function, rather than a non-linear one learnt from data by a machine learning algorithm.
Regarding representing a position, it's easy to do this very compactly. What is difficult is defining the derived quantities that result in the world's best chess player. Simple methods are not going to be enough, even if very useful in their own right.
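For anyone curious how compact "very compactly" can be: a standard scheme is bitboards, one 64-bit integer per piece type and colour, twelve integers for the whole placement. A minimal sketch (the helper names are my own, not taken from any particular engine):

```python
# Bitboards: one 64-bit integer per (colour, piece) pair.
# Bit i is set when that piece stands on square i (a1 = 0 ... h8 = 63).

PIECES = ["P", "N", "B", "R", "Q", "K", "p", "n", "b", "r", "q", "k"]

def empty_board():
    return {piece: 0 for piece in PIECES}

def set_square(boards, piece, square):
    boards[piece] |= 1 << square

def occupied(boards):
    # Union of all twelve piece sets.
    mask = 0
    for bb in boards.values():
        mask |= bb
    return mask

# The full piece placement of the starting position:
start = empty_board()
for sq in range(8, 16):                      # white pawns on rank 2
    set_square(start, "P", sq)
for sq in range(48, 56):                     # black pawns on rank 7
    set_square(start, "p", sq)
back_rank = ["R", "N", "B", "Q", "K", "B", "N", "R"]
for file, piece in enumerate(back_rank):
    set_square(start, piece, file)           # white pieces, rank 1
    set_square(start, piece.lower(), 56 + file)  # black pieces, rank 8

# 12 integers x 64 bits = 96 bytes for the whole placement.
print(bin(occupied(start)).count("1"))  # 32 pieces on the board
```

The representation is tiny; as the post says, the hard part is the derived quantities computed on top of it, not the storage.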
Yeah, but Alpha with their NN on a single core will play around 2400-2800 (more likely the first estimate; my initial suggestion of a larger Elo might have been wrong), SF with their 'NN' (the eval framework) on a single core plays at 3200, and, if all the parameters of 'The Secret of Chess' are incorporated into a new engine and properly tuned, it should play at around 4500 or so.
Draw your own conclusions about how much each NN is worth.
Your claim of 4500 Elo for your not-yet-implemented handcrafted engine shows just how far detached from reality you are. It is as ludicrous as a chess player claiming he will personally achieve Elo 3800.
DeepMind is owned by Google. They bought the company, so it's not a matter of them calling on Google for resources.
Sure. But before they were bought by Google, they didn't have the resources. According to Demis Hassabis, this was the reason for DeepMind to become part of Google.
@Lyudmil_Tsvetkov:
The Elo depends on time, not only on the number of cores. By the way, it is hard to get a single-core CPU nowadays. In any case, because of the heavy need for matrix operations you would use a GPU to run an NN, and a GTX 1080, for example, has 2560 CUDA cores. The Titan V has 5120 CUDA cores and 640 Tensor cores. But these are only numbers; they say very little without further information.
SF has no NN and cannot use any NN. Simply throwing numbers around leads nowhere and is no foundation for discussion.
By the way, I would still like an answer to my question about where you got the information on the prices of TPUs.
The info is publicly available, Google disclosed it itself. Check Talkchess for precise sources.
Of course it is an NN; any evaluation system that has more than 1000 terms is an NN, and, if you take into account PSQTs with 64 squares for many terms, the NN of a somewhat more advanced engine might reach some 10,000 parameters (SF has around 5,000, I think).
With my system of terms and all the PSQTs included, it could get to over 50,000.
Are not all those terms neurons?
It is not about the number of neurons, but their efficiency.
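The disagreement here can be made concrete. A handcrafted evaluation, however many terms it has, is a weighted sum of features, i.e. linear in those features; a neural network, however small, puts a non-linearity between its layers. A toy illustration (all feature values and weights invented for the example):

```python
import math

# A handcrafted evaluation: a plain weighted sum of feature values.
# No matter how many terms there are, it stays linear in the features.
def linear_eval(features, weights):
    return sum(f * w for f, w in zip(features, weights))

# A minimal neural network: the same inputs, but with a non-linear
# activation (tanh) between the layers.
def tiny_nn(features, w_hidden, w_out):
    hidden = [math.tanh(sum(f * w for f, w in zip(features, row)))
              for row in w_hidden]
    return sum(h * w for h, w in zip(hidden, w_out))

features = [1.0, -2.0]                      # e.g. material, king safety (made up)
print(linear_eval(features, [0.5, 0.1]))    # roughly 0.3
```

Doubling every feature exactly doubles `linear_eval`'s output, but not `tiny_nn`'s; that non-linearity, not the raw term count, is what makes something a neural network.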
Not ludicrous at all, I know what I am talking about.
Computer chess history consistently shows that better engines always had improved evaluation, involving a larger number of parameters.
SF 8, for example, has 2 times more parameters than, say, a 3100 Elo engine such as Spark, and 5 times more than a very weak engine. So, the stronger you get, the more terms you add.
Is that not what neural networks are all about? The engine just goes on adding new evaluation terms.
SF 8 has many more terms than SF 3, for example, which is close to the starting point of the Fishtest framework.
Following this trend, you could safely conclude that a future strongest engine will have even more terms. A 5000 Elo engine 50 years from now will have incorporated and successfully tuned at least 2 to 3 times as many parameters as SF currently has.

A perfect chess playing machine won't break 3600. It's not that you don't understand computers, it's that you don't understand chess, or how the Elo system works.
Chess is a draw, and by a wide margin. Even AlphaZero playing a handicapped Stockfish didn't reach a 3600 performance level.

Why would anyone think Stockfish encapsulates a worse positional evaluation than one based on the book of one chess amateur who has not produced a chess engine?
It is insanely egotistic for someone to believe they can leap 1000 Elo points past the combined achievements of everyone who has contributed to chess engines (like an amateur athlete claiming he will break 8 seconds for 100 meters). Google managed a 130-point leap only by using the power of innovative AI running on very powerful hardware. Its end product was the result of experience of 2.9 billion chess positions in the context of increasingly high-quality games. We don't know how many parameters the network has, but it is surely millions - a lot more than would fit in a book.

To be fair, you could look at the results of Karpov - Kasparov, see mostly draws and conclude 3000 could not be exceeded. This would be wrong by a large margin. We do not yet have a clear estimate of the Elo of perfect play. We know empirically that it gets much harder to improve by similar amounts.
Alpha Zero is sooo WEAK: does not fianchetto its king side bishop with Bg2, often plays 1.d4, etc.
So, soo weak.
Millions of CR*P parameters that barely reach the 2500 level on a single core.
But don't you know most of the advanced chess parameters in Stockfish are mine?
@Lyudmil_Tsvetkov:
Too bad that Talkchess is your only source and 'search in Talkchess' is your only argument.
According to the Chess Programming Wiki, the engine RomiChess has no NN. And no, the number of terms has nothing to do with an NN. You can create a simple NN with two input neurons and one output neuron, and it is still an NN.
> The engine just goes on adding new evaluation terms.
It does not; the number of weights is fixed. Only their values are modified during training. And no one knows which criteria, relations, and knowledge are expressed by the weights – or lead to those weights.
You cannot add criteria. Supervised learning is possible, as was done in AlphaGo Lee.
> Computer chess history consistently shows that better
> engines always had improved evaluation, involving also
> a larger number of parameters.
More parameters for more specific situations make for finer evaluation. That makes sense, but it does not mean that more parameters are better. A well-balanced and tested evaluation function can be stronger than a bad one with more parameters. The quality of the evaluation result counts, not the complexity of the evaluation function.
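That minimal two-input, one-output network really is trainable, and training leaves the weight count untouched, only the values change. A sketch, with a toy task and learning rate of my own choosing:

```python
import math

# A single sigmoid neuron: two inputs, one output -- still an NN.
# Training never adds or removes weights; it only changes their values.
def predict(w, x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))

# Toy task: output 1 when x0 > x1, else 0 (linearly separable).
data = [([0.0, 1.0], 0.0), ([1.0, 0.0], 1.0),
        ([0.2, 0.9], 0.0), ([0.9, 0.1], 1.0)]

w = [0.0, 0.0, 0.0]          # two input weights plus a bias: count is fixed
for _ in range(2000):        # gradient descent on squared error
    for x, target in data:
        y = predict(w, x)
        grad = (y - target) * y * (1.0 - y)   # d(error)/d(pre-activation)
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        w[2] -= 0.5 * grad

print(len(w))  # still 3 weights after training
print(predict(w, [1.0, 0.0]), predict(w, [0.0, 1.0]))  # high vs low
```

Contrast this with hand-adding evaluation terms: there, the parameter set itself grows; here, the architecture is frozen and only the values move.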
A perfect chess playing machine won't break 3600. It's not that you don't understand computers, it's that you don't understand chess, or how the Elo system works.
Chess is a draw, and by a wide margin. Even AlphaZero playing a handicapped Stockfish didn't reach a 3600 performance level.
Alpha Zero is sooo WEAK: does not fianchetto its king side bishop with Bg2, often plays 1.d4, etc.
So, soo weak.
Trolling can be funny, but simply repeating is boring.

Only weak chess players with limited experience of chess engine development would say AlphaZero is weak. This is an example of the common truth that the information in what people say is often not what they assert.

Firstly you have exactly zero information about AlphaZero running on a single core. On a similar subject, how well do you think a 747 would fly on a 500 cc 2-stroke engine?
Regarding your question, no, I have no reason to believe Stockfish has any contribution from your writings. But I do know it lost a lot to AlphaZero by getting the evaluation of positional factors wrong.

I am sure somebody somewhere has said that 3000 was a fixed limit, but I don't remember that. I do remember some people saying humans would never break 3000 in classical chess, and that is a reasonable guess.
Kenneth Regan, an IM and a professional statistician, has estimated the highest possible rating to be slightly below 3600. Others have generally agreed that 3600 does look like the upper limit from a theoretical perspective.
Again, this isn't about computers, it's about chess itself and the way Elo is calculated.

I have seen not even a half-convincing argument that perfect chess has an Elo of 3600 - please do provide a link. We have just seen the first of a new breed leap 130 points in one step, and at a mere 80,000 nodes per second; this is not the last word.

Ask, and it shall be given...
"A simple linear fit then yields the rule to produce the Elo rating for any (s, c), which we call an “Intrinsic Performance Rating” (IPR) when the (s, c) are obtained by analyzing the games of a particular event and player(s).
IPR = 3571 − 15413 · AEe. (6)
This expresses, incidentally, that at least from the vantage of RYBKA 3 run to reported depth 13, perfect play has a rating under 3600. This is reasonable when one considers that if a 2800 player such as Vladimir Kramnik is able to draw one game in fifty, the opponent can never have a higher rating than that."

Notice that I gave the complete information in context. I am sure someone will come along and try to parse the first part. The key is "if a 2800 player... is able to draw one game in fifty, the opponent can never have a higher rating than [3600]." ~Regan.
That is how a chess player and statistician deals with the issue. Computer people see constant improvement and think it is infinite. It isn't.
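Regan's "one draw in fifty" remark is just the standard logistic Elo expected-score formula worked backwards; a quick check of the arithmetic:

```python
import math

# Elo expected score for a player rated r against an opponent rated r_opp.
def expected_score(r, r_opp):
    return 1.0 / (1.0 + 10.0 ** ((r_opp - r) / 400.0))

# If a 2800 player scores one draw in fifty games (49 losses),
# his score rate is 0.5 / 50 = 0.01. The rating gap at which the
# model predicts exactly that score rate:
gap = 400.0 * math.log10(1.0 / 0.01 - 1.0)
ceiling = 2800 + gap

print(round(gap))      # 798
print(round(ceiling))  # 3598 -- just under Regan's 3600
```

So as long as a 2800-rated player can salvage the occasional draw, the Elo model itself caps the opponent near 3600, regardless of how "perfect" that opponent plays.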