Lots of things I'd like to see. 1) How long does AZ keep getting better? Is that a function of the training hardware, or is there a theoretical upper limit (or at least a severe levelling off, where letting it "learn" for an extra week and examine an extra 5 trillion positions makes it 0.00001% better)? If there is a theoretical upper limit, does there exist an architecture that can beat THAT?
2) If SF's side complains that it would have done better with its opening book and its own time management, then let it use them! See what kind of game follows. Heck, let Stockfish "learn" for 4 hours before the match too (if SF is capable of storing the results of that kind of preparation).
3) One piece of information that I have not seen is the architecture of AlphaZero's neural network. It is surely large and deep, but it would be interesting to learn the details.
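For what it's worth, the AlphaGo Zero paper (which AlphaZero reportedly reuses the general design from) describes a deep residual convolutional tower feeding two separate heads: a policy head (one logit per candidate move) and a value head (a single evaluation in the range -1 to +1). Here's a toy numpy sketch of that two-headed shape, with made-up sizes and simplified per-square channel mixing standing in for the real convolutions, just to illustrate the structure:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class TinyTwoHeadedNet:
    """Toy two-headed residual tower. Sizes are illustrative, not the real ones."""
    def __init__(self, channels=8, blocks=2, board=8, moves=64, seed=0):
        rng = np.random.default_rng(seed)
        # Each "block" is a per-square channel-mixing matrix -- a stand-in
        # for the 3x3 convolutions in the published design.
        self.block_w = [rng.normal(0, 0.1, (channels, channels))
                        for _ in range(blocks)]
        flat = channels * board * board
        self.w_policy = rng.normal(0, 0.1, (flat, moves))  # policy head weights
        self.w_value = rng.normal(0, 0.1, (flat, 1))       # value head weights

    def forward(self, planes):
        x = planes  # shape (channels, board, board): input feature planes
        for w in self.block_w:
            y = relu(np.einsum('oc,chw->ohw', w, x))      # channel mix + nonlinearity
            x = relu(x + np.einsum('oc,chw->ohw', w, y))  # residual (skip) connection
        feats = x.reshape(-1)
        policy_logits = feats @ self.w_policy        # one logit per candidate move
        value = np.tanh(feats @ self.w_value)[0]     # scalar evaluation in (-1, 1)
        return policy_logits, value

net = TinyTwoHeadedNet()
planes = np.zeros((8, 8, 8))
planes[0, 4, 4] = 1.0  # dummy input: one piece on one feature plane
logits, value = net.forward(planes)
```

The interesting part is the shared tower: both heads read the same learned features, so the search gets move priors and a position evaluation from a single forward pass. The real network is of course vastly larger and uses batch-normalized convolutions rather than this toy mixing.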