Using data from humans and engines doesn't settle anything, because neither humans nor engines play perfect chess. You can't say "chess cannot be solved" and at the same time claim that humans have solved it.
We can safely say humans and computers have not solved chess.
However, chess as actually played can be viewed as stochastic (except where perfect play is accessible, e.g. tablebase positions): moves are made with random errors attached to them.
When two players both make random errors of similar size, the statistical results will very likely tend toward the true value of the game rather than some other value. If the true value of chess were a win for white, we would see the variance of results gradually shrinking as results converge towards 100% for white. What we actually see is the variance of results gradually shrinking as results converge on a slight plus score for white, not on 100%.
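To make that statistical intuition concrete, here is a deliberately crude Monte Carlo toy, not real chess: the blunder probabilities, the fixed `edge` standing in for white's practical first-move advantage, and the rule that a lone blunder loses the game are all invented for illustration. As the error rate falls, draws dominate, the per-game variance shrinks, and white's average score settles on a small plus rather than drifting toward 100%.

```python
import random

def simulate_game(error_rate, edge=0.04):
    """Toy model, not real chess: the 'true' value is assumed to be a draw.
    Each side independently blunders; black's blunder probability is nudged
    up by a small constant 'edge', standing in for white's practical
    first-move advantage. A lone blunder loses; otherwise the game is drawn."""
    white_blunders = random.random() < error_rate
    black_blunders = random.random() < error_rate + edge
    if white_blunders and not black_blunders:
        return 0.0   # white loses
    if black_blunders and not white_blunders:
        return 1.0   # white wins
    return 0.5       # draw: the assumed true value

def summarize(error_rate, games=200_000):
    results = [simulate_game(error_rate) for _ in range(games)]
    mean = sum(results) / games
    draws = results.count(0.5) / games
    return mean, draws

for rate in (0.40, 0.20, 0.10, 0.05):
    mean, draws = summarize(rate)
    print(f"error rate {rate:.2f}: white scores {mean:.3f}, draw rate {draws:.2f}")
```

In this toy the draw rate climbs as errors get rarer, while white's average score stays at a small constant plus, which is also the pattern described below for real play.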
White's plus score can be viewed as an artifact of imperfect play. With real chess engines, rating increases steadily with computing power. Because of the practical advantage of the first move, black seems to need a bit more computing power (equivalently, a small rating edge) to be equal.
I do not believe there has been any reduction in white's advantage with increasing quality of play, either by GMs or by computers. This makes sense: if white's practical advantage corresponds to, say, a 54% score, which corresponds to an Elo difference of about 28 points, that gap will remain until an engine is so strong that running it faster gains nothing. We are a long way from that.
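For reference, the 54% to 28-point correspondence follows from the standard logistic Elo expected-score formula; a quick sketch (the function names are mine):

```python
import math

def elo_gap_from_score(expected_score):
    """Rating difference implied by an expected score, using the standard
    logistic Elo formula E = 1 / (1 + 10**(-D/400))."""
    return 400 * math.log10(expected_score / (1 - expected_score))

def score_from_elo_gap(diff):
    """Expected score for the higher-rated side at a rating gap of 'diff'."""
    return 1 / (1 + 10 ** (-diff / 400))

print(f"{elo_gap_from_score(0.54):.0f} Elo")      # ~28 points for a 54% score
print(f"{score_from_elo_gap(28):.3f} expected")   # ~0.540
```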
There has been some progress towards 3D chip design, but thermal issues are going to be important. CPUs run hot even without reducing the surface area available for cooling, e.g. 1.4 billion transistors in 130 mm^2 on mine, generating 77 W at design speed, and a fair bit more when I overclock it.
So very low-power designs are needed for deeply 3D processors.
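As a rough back-of-the-envelope on why: the chip quoted above already pushes about 0.6 W/mm^2 (roughly 59 W/cm^2) through its cooled surface, and stacking layers multiplies that heat flux while the cooled area stays essentially the same. A hypothetical sketch of the arithmetic (the layer counts and the assumption that every layer burns the same power are illustrative only):

```python
# Power-density arithmetic for the chip quoted above: 77 W in 130 mm^2.
power_w = 77.0
area_mm2 = 130.0

density = power_w / area_mm2          # W per mm^2 of cooled surface
print(f"planar: {density:.2f} W/mm^2 ({density * 100:.0f} W/cm^2)")

for layers in (2, 4, 8):
    # Same footprint, same per-layer power: the heat flux through the one
    # cooled surface scales linearly with the number of stacked layers.
    print(f"{layers} layers: {density * layers:.2f} W/mm^2")
```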