At least some people understand. I guess if chess is your whole world it's hard to imagine that they would do this and move on.
Like Morphy and Fischer, AlphaZero is done.

There is some truth in that, but the problem is that opening books are less and less effective even for very powerful engines, never mind AlphaZero. All they can really do is avoid some disasters and defer the real battle. The very best an opening book can ever do is use previous games as samples for analysis (which relies on trusting that those games are adequately accurate). For example, this can go badly wrong if you are playing the French against AlphaZero, even if it might be OK against a GM!
I disagree. Opening books can be invaluable because they are based on years of experience that the computer doesn't have to reproduce itself, burning plies, CPU cycles and game time.
Think of it as nearly infinite plies of human analysis codified into opening moves. Again, in the relatively few games that AlphaZero won (the vast majority were draws), it was because Stockfish effectively blocked itself in with its opening.
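To make that concrete: a book is conceptually just a lookup table the engine probes before it ever starts searching. A toy sketch of my own (the structure is made up; real formats such as Polyglot are binary and keyed on position hashes):

```python
import random

# Toy opening book: position (FEN string) -> weighted candidate moves.
BOOK = {
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1": [
        ("e2e4", 45), ("d2d4", 40), ("c2c4", 10), ("g1f3", 5),
    ],
}

def probe_book(fen):
    """Return a book move for this position, or None to fall back to search."""
    entries = BOOK.get(fen)
    if entries is None:
        return None  # out of book: now the engine must spend plies and CPU cycles
    moves, weights = zip(*entries)
    return random.choices(moves, weights=weights, k=1)[0]
```

Every position the book answers is search time the engine never has to spend.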
Have you seen the games? SF didn't have any problem in the opening but went down in the middlegame; how many times must it be said? In addition, those "years of experience" are hilarious for an engine.
Yes, I've seen the winning games. They are positional nightmares for Stockfish... but all are predicated on bad openings. The middlegames, where AlphaZero manhandled Stockfish in those wins, were only available as wins because Stockfish, IMHO, didn't have its opening book.
Having said that, it's still a remarkable feat to beat Stockfish with a neural network that was trained in a trivial amount of time.
As a chess enthusiast, I would very much like to see a match where both sides felt their programs were tuned and played on the best available hardware. I don't think that's too much to ask...
[And let's not forget... the vast majority of the games played were draws. By vast, I mean more than you would play in an entire GM tournament.]
There were an additional 1,000 games, I believe, starting from known openings (100 games each in 10 openings), in which Stockfish scored some wins, so obviously its lack of a book was a slight weakness, though AlphaZero had only what it had learned in training and no opening book either.

It was NOT a match. It was a test of whether the AlphaGo method could be generalized so that it could learn to play a game with no game-specific knowledge. And that has been proven. Whether the result is a bit stronger than SF or a bit weaker is of no consequence for answering the actual research question.
Not the case. The AI is already "generalized", in that it was not developed specifically to play Go or any other single game or even genre of games. If you mean that they wanted to see if AlphaZero could teach itself GM/engine level chess without a shred of "assistance" as given to AlphaGo along the way by the team, then fine. Please try to be more accurate.
As for the "test" assertion...preliminary and/or private test results are generally not plastered all over the news.
I know you replied to a later reply by @petrip, but the initial statement was that the test by the DeepMind group was to see if the AlphaGo process could be generalized -- which is what AlphaZero was. I would have to double-check, but AlphaGo was specifically coded only for Go and not generalized.

Ermm...
You are talking about software developed to "learn" for itself. It was not written from the ground up to play Go. Imagine that you have a baby, and you want it to learn Go. It lives in a white cube with nothing in it. To teach the baby Go, you have two choices: you can simply put a Go board in the room, teach the baby (once it learns language
...) how to place pieces on the board, and provide an opponent who responds to the baby's moves and tells it when the game is over and what the result was, but nothing else. Or, you can put a Go expert in the room with the baby and have it not only play the baby but also provide feedback and strategy advice.
Either way, creating the baby itself, one that can learn, is the hard part. Teaching it Go is just an add-on module that defines its "universe" and the laws that govern it. They did not create a baby who can only play Go.

I would have to research AlphaGo a bit more, since I mainly just read a few articles, but my understanding is that it was only written with the ability to play Go and had specific coding that was only useful for games of Go. The DeepMind team had to modify what they did to make it more general and learn from the ground up with only the rule set.
It's really immaterial whether they reused code to make the generalized version, since AlphaGo wasn't initially designed as a generalized game-playing algorithm. They even tested the process by having AlphaZero learn the same way with Go as well.
It may just be semantics, since code reuse in pretty much any coding endeavor is standard, but there is a difference in at least part of the methodologies used between the two programs.

Does the machine learn anything about time management? Or is that aspect completely ignored, with something else telling it when to make a decision?

It might be immaterial to the process of playing the best game of Go or chess, but it is actually the whole point of the DeepMind project: to create an AI (a bad term here, but whatever, everyone is using it) that can learn how to do whatever they set in front of it. In that sense, copying/reusing and then customizing code for the AI itself would completely defeat the purpose. Adding a "ruleset and outcomes" module is all they should need/want to do, ideally.
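To put that in code terms: the learner only ever talks to an abstract "rules of the universe" interface, and chess or Go is just one plug-in behind it. A minimal sketch of my own, not DeepMind's actual API:

```python
from abc import ABC, abstractmethod

class Game(ABC):
    """The 'add-on module': rules and outcomes only, no strategy knowledge."""

    @abstractmethod
    def initial_state(self): ...

    @abstractmethod
    def legal_moves(self, state): ...

    @abstractmethod
    def apply(self, state, move): ...

    @abstractmethod
    def result(self, state):
        """None while the game is still running, else e.g. +1 / 0 / -1."""

# The learner is written once, against Game, never against chess or Go:
def self_play_episode(game, choose_move):
    state = game.initial_state()
    while game.result(state) is None:
        state = game.apply(state, choose_move(game, state))
    return game.result(state)
```

Swapping chess for Go then means swapping the Game subclass, not the learner.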

At least some people understand. I guess if chess is your whole world it's hard to imagine that they would do this and move on.
But Fischer and Morphy won't return to chess, for obvious reasons. Google, on the other hand, could find it profitable to release an AlphaZero version to the market.

I would have to research AlphaGo a bit more, since I mainly just read a few articles, but my understanding is that it was only written with the ability to play Go and had specific coding that was only useful for games of Go. The DeepMind team had to modify what they did to make it more general and learn from the ground up with only the rule set.
It's really immaterial whether they reused code to make the generalized version, since AlphaGo wasn't initially designed as a generalized game-playing algorithm. They even tested the process by having AlphaZero learn the same way with Go as well.
It may just be semantics, since code reuse in pretty much any coding endeavor is standard, but there is a difference in at least part of the methodologies used between the two programs.
The approach was very different: AlphaGo learnt to predict top professional Go players' moves and used this as a basis for its own understanding of positions. The same approach could have been applied to chess, but I suspect it would have had more difficulty. They may even have tried it and not got great results: such things are not always published! By contrast, AlphaZero learnt to play three games from the rules and a great deal of self-play.
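Roughly, the two training signals differ like this; a schematic sketch of my own, not DeepMind's code, with move distributions represented as simple move-to-probability dicts:

```python
import math

def cross_entropy(predicted, target):
    """H(target, predicted); both are dicts mapping move -> probability."""
    return -sum(p * math.log(predicted.get(move, 0.0) + 1e-12)
                for move, p in target.items() if p > 0)

# AlphaGo-style signal: imitate the move a strong human actually played
# (a one-hot target taken from a database of professional games).
def alphago_policy_loss(predicted_policy, expert_move):
    return cross_entropy(predicted_policy, {expert_move: 1.0})

# AlphaZero-style signal: no human games anywhere. The policy target is the
# search's own MCTS visit distribution and the value target is the final
# self-play result (+1 / 0 / -1).
def alphazero_loss(predicted_policy, predicted_value, visit_dist, outcome):
    policy_term = cross_entropy(predicted_policy, visit_dist)
    value_term = (predicted_value - outcome) ** 2
    return policy_term + value_term
```

The first loss needs a corpus of expert games to exist; the second needs only the rules and a great deal of compute.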

At least some people understand. I guess if chess is your whole world it's hard to imagine that they would do this and move on.
100% correct. People who don't have their ego wrapped up in Stockfish understand. DM came, saw, conquered and now has moved on to more important things.

Does the machine learn anything about time management? Or is that aspect completely ignored and something else tells it when to make a decision.
I suspect the reason for the fixed time per move in their test was that they didn't bother to teach A0 any time management.
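For what it's worth, conventional engines handle time management with fairly simple heuristics layered on top of the search. Something in this spirit (a toy illustration of my own; the constants are invented):

```python
def allocate_move_time(remaining_ms, increment_ms, moves_to_go=None):
    """Toy allocator: spend a slice of the clock, banking most of the increment.

    Real engines use fancier versions, scaling by game phase, score swings
    and how unstable the search looks.
    """
    horizon = moves_to_go if moves_to_go else 30    # assume ~30 moves remain
    budget = remaining_ms / horizon + increment_ms * 0.8
    return min(budget, remaining_ms * 0.5)          # never burn half the clock

print(allocate_move_time(remaining_ms=60_000, increment_ms=0))  # -> 2000.0
```

In the DeepMind setup the whole question was reportedly sidestepped by giving both sides a fixed one minute per move.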

The fact is there are no new games. There will be no new games. As my OP said.
There will be no new games (publicized until such time as private testing shows that AlphaZero can definitively dominate Stockfish in a public match with realistic settings)...

Read this and admit I was correct. https://www.chess.com/article/view/whats-inside-alphazeros-brain

At least some people understand. I guess if chess is your whole world it's hard to imagine that they would do this and move on.
100% correct. People who don't have their ego wrapped up in Stockfish understand. DM came, saw, conquered and now has moved on to more important things.
It appears that DeepMind thinks they can use their AI to diagnose eye disease better than doctors.
https://news.sky.com/story/google-says-its-ai-can-diagnose-some-eye-diseases-11237882
Let's hope their proof for this is more rigorously tested than their claims about their chess software.

Read this and admit I was correct. https://www.chess.com/article/view/whats-inside-alphazeros-brain
Best quote from that interesting article: "A race between a person and a horse would not be made “fair” by only permitting the use of two legs."
(The estimate of the size of the neural network was also interesting, but rather speculative - there is quite a lot of variation in architecture with the same computational demand).
Stockfish fans should not be whining that AlphaZero was not crippled adequately to play against Stockfish. All they need to do is exhibit an engine configuration that can score more than 64% against the Stockfish configuration used in the match and they will be able to claim they have the strongest chess player on the planet. No limits on the configuration.
Good luck, boys!
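For perspective, here is what that 64% threshold means in Elo terms, taking the reported 28 wins and 72 draws from the 100 unrestricted games at face value:

```python
import math

wins, draws, losses = 28, 72, 0           # reported main-match result
games = wins + draws + losses
score = (wins + 0.5 * draws) / games      # 0.64

# Standard logistic Elo model: expected score = 1 / (1 + 10**(-diff/400))
elo_diff = -400 * math.log10(1 / score - 1)
print(f"score={score:.2f}, implied Elo gap = {elo_diff:.0f}")
# -> score=0.64, implied Elo gap = 100
```

So scoring more than 64% amounts to demonstrating a roughly 100-Elo edge over that particular Stockfish configuration.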

Thanks, "bro".
I'm glad you know so much about Google.
I thought I knew a thing or two myself, having been at the Mt. View campus several times, knowing their director of Network Ops for decades, having a girlfriend who worked there, friends who work/worked there, etc. I have a half dozen Google employee gel pens sitting in the cup next to me on my desk...
But clearly, you know much more than I do...what is your inside track? Are you a software engineer at Google? Are you on the DeepMind dev team? That's amazing! How do you know they are going to replace their CM? That is some deep info...crap...are you managing the dev team? That would explain everything...must be hard directing the team remotely from Kentucky, though. Damn. You must have mad skillz.
What I know that you don't seem to understand is how the world works. Google is a business.... businesses exist to make money.... AlphaZero is a product.... The chess community will pay for this product.... thus it will be monetized. You're on here like an idiot listing your "sources". Well then, enlighten us all. What info do you possess that leads you to conclude it's going to be shut down? I've read nothing that states that, and in all other endeavors Google has always pushed software through several generations of advancement. This time it's one and done? Nah. You're just a herb.
Not totally true. It's more about power than anything. At best, though, they could generate a larger influx of income by directing their resources to other projects, such as matter accelerators. A computer or AI would have infinitely more comprehension of anything and everything compared to a human. That is unfathomable; it's mystifying how no one seems concerned with such a power.

Read this and admit I was correct. https://www.chess.com/article/view/whats-inside-alphazeros-brain
Ermm, there is nothing in this article that makes your OP any more or less correct. It's just a more in-depth article that talks about things we already know from the various threads (here and elsewhere) we've seen. I could have written this article myself.
There's no denying that AlphaZero's machine learning is powerful...I said myself years ago that the "next step" in engines would be to stop using human valuations and bootstrap their own valuations via engine-to-engine play. But...none of that matters, because this controversy is not about AlphaZero, it's about the DeepMind team jumping the gun with dubious claims and questionable methodology.
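That "bootstrap their own valuations" idea can be sketched in a few lines; a minimal temporal-difference-style toy of my own, not AlphaZero's actual method:

```python
def update_values(value_table, game_positions, outcome, lr=0.01):
    """Nudge each visited position's value toward the game's final result.

    value_table: dict position -> float in [-1, 1] from the side to move.
    outcome: +1 / 0 / -1 result of one engine-vs-engine game, from the
    first player's point of view. No human evaluation terms (pawn = 1,
    bishop-pair bonus, ...) anywhere: the engine's own games are the only
    source of truth.
    """
    for ply, pos in enumerate(game_positions):
        target = outcome if ply % 2 == 0 else -outcome  # flip for side to move
        old = value_table.get(pos, 0.0)
        value_table[pos] = old + lr * (target - old)
    return value_table
```

Run over millions of self-play games, the table (or, in practice, a network that generalizes across positions) converges toward the engine's own notion of what is good.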
I believe it is correct that their research is now moving into areas beyond games, which have been a big focus up to now. This is not surprising as the branch of artificial intelligence that they have applied is extremely broad in its potential for application.