I can evaluate a billion chess positions a second with just a pocket calculator, and I can evaluate 3 tic-tac-toe positions a second with the same pocket calculator; ergo, tic-tac-toe would require more power to solve
Chess will never be solved, here's why

did you miss the "difficulty of evaluating board positions and moves"?
Your logic is failing here.
Read the quote:
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves"
The word "and" here denotes a combination. There are games with more difficult to evaluate singular board positions (like Chess...). There are games with larger search spaces. There is no (popular/well known) game that has both to such a degree. An NBA team is #1 ranked. Do you assume that that this must ergo mean that they have the #1 offense *and* the #1 defense (as you are doing with this line of reasoning for Go)? Because they might be ranked #3 in one and #7 in the other, and still be ranked #1 overall. It's not a tough concept.

did you miss the "difficulty of evaluating board positions and moves"?
Your logic is failing here.
Read the quote:
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves"
The word "and" here denotes a combination. There are games with more difficult to evaluate singular board positions (like Chess...). There are games with larger search spaces. There is no (popular/well known) game that has both to such a degree. An NBA team is #1 ranked. Do you assume that that this must ergo mean that they have the #1 offense *and* the #1 defense (as you are doing with this line of reasoning for Go)? Because they might be ranked #3 in one and #7 in the other, and still be ranked #1 overall. It's not a tough concept.
What's funny is that I literally predicted you would say that.
Where you fail is that the "difficulty in evaluating board positions/moves" is mathematically equivalent to the overall difficulty.

How hard is it to see that the sheer number of moves is what makes evaluating positions hard, among other things?

It's also funny how you still refuse to concede your original claim: that Go as a whole is less complex than chess.

How hard is it to see that the sheer number of moves is what makes evaluating positions hard, among other things?
Lol. You added "among other things" by choice, which confirms the point I just made. Why do I need to keep arguing when you tacitly admit you are incorrect all by yourself?
Good night. I'll eviscerate any new stuff that emerges later.

It's also funny how you still refuse to concede your original claim: that Go as a whole is less complex than chess.
Link it. No such claim was made.

2 things.
First, Go is much more complex than chess: it took 20 years after Deep Blue for AlphaGo to reach the same level.
Second, a massive number of permutations doesn't necessarily mean that something can't be solved. Checkers had ~10^20 positions and was still solved. Of course, chess is much, much more complex, but the big number alone shouldn't dissuade us.
Your premise assumes that the efforts put into beating the world champions at Chess and Go were the same. This is not the case. Beating the Chess world champion was much better PR for IBM than beating the Go champion would have been, so a lot more resources were brought to bear.
In terms of actually solving, IIRC Go has more positions...but evaluating Go positions should take less CPU horsepower than evaluating Chess positions.
BTW, DeepMind cost $400 million, while Deep Blue cost $100 million, although inflation might put Deep Blue over.

2 things.
First, Go is much more complex than chess: it took 20 years after Deep Blue for AlphaGo to reach the same level.
Second, a massive number of permutations doesn't necessarily mean that something can't be solved. Checkers had ~10^20 positions and was still solved. Of course, chess is much, much more complex, but the big number alone shouldn't dissuade us.
Your premise assumes that the efforts put into beating the world champions at Chess and Go were the same. This is not the case. Beating the Chess world champion was much better PR for IBM than beating the Go champion would have been, so a lot more resources were brought to bear.
In terms of actually solving, IIRC Go has more positions...but evaluating Go positions should take less CPU horsepower than evaluating Chess positions.
BTW, DeepMind cost $400 million, while Deep Blue cost $100 million, although inflation might put Deep Blue over.
So where's the claim that Chess is more complex than Go in that post? I see a claim that your "20 years later" argument is misleading (true). I see a claim that Chess was a more prestigious game for IBM's PR purposes (true). I see the claim that we have discussed at length (which was also true).
I don't see any other claims.
P.S. Inflation does not put Deep Blue over, but you are still comparing apples and oranges. Google's hardware is reusable, and the AI software they built for machine learning can be applied to any number of games. The tests they ran with AlphaZero vs. Stockfish were behind closed doors with the DeepMind team controlling both engines and their configurations. The results were not official or verified. They spent some weeks on it, made a misleading press release, and moved on. IBM, on the other hand, made specialized processor boards and hardware specifically for playing chess, hired GMs, paid for an official competition with a prize fund, etc.

Whoever said DeepMind's approach can be generalized is right on! AlphaFold, in my old field of protein folding, using high-performance computing, is an amazing example. When comparing Go to chess, folks often mix up position combinatorics with state spaces. We can then argue complexity via computational complexity classes (polynomial, exponential, P/NP-hard, etc.) or via simple game-tree complexity, i.e. Shannon's estimate log10(35^80) ≈ 123, where 35 (now thought to be closer to 31) is the average number of candidate moves and 80 is the game length in plies (i.e. 40 moves per side). Remember, we've found forced mates in 500+ moves ignoring the 50-move rule, and in saying that solving chess won't happen, Kasparov defined it as finding a forced mate after move 2, or, in programming terms, a win/loss/draw (WLD) proof that it is always a forced draw, like tic-tac-toe. This can't currently be done with an endgame-database-like strategy, since 10^123 is an impossible amount of computing power, which forces an algorithmic solution. That would be higher-dimensional than currently feasible, because chess, though deterministic, takes on stochastic, edge-of-chaos characteristics when we switch from Bayesian tree searches to algorithms.
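The Shannon estimate mentioned above is simple arithmetic and easy to reproduce (35 candidate moves per ply, 80 plies):

```python
import math

branching = 35  # Shannon's estimated average number of candidate moves
plies = 80      # 40 moves by each side

# The game tree has roughly branching^plies leaves; take log10 for the exponent
exponent = plies * math.log10(branching)
print(int(exponent))  # 123, i.e. a game tree on the order of 10^123
```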
@7407
"we've found forced mate in 500+"
++ Yes, but not reachable from the initial position by optimal play by both sides.
"always a forced draw like tic tac toe" ++ Yes, like Checkers and Nine Men's Morris.
"impossible computing power at 10^123" ++ There are only 10^44 legal positions, of which only 10^17 are relevant to weakly solve Chess.

[snip]
only 10^17 [positions] are relevant to weakly solve Chess.
Pah.
I can get it down to 10^0 using an equally valid approach!
1. The initial position is materially balanced with no obvious advantage for one side.
2. Therefore, it is proven to be a draw, using your own alternative logic system (ahem)
I also want $1000000 each for solving the Riemann and Twin Prime conjectures by confident guessing.
@7409
"it is proven to be a draw" ++ Of course the initial position is a draw.
The question is: how? How to draw against 1 e4? How to draw against 1 d4?

Your reasoning says that if a position is "obviously" drawn (to a weak chess player, even a strong one like Leelachess) then you can ignore the details.
For example, you ignore the majority of legal responses in positions presented to the opponent by a strategy. In some cases, you ignore ALL of them!
@7411
"to a weak chess player" ++ No, to the strongest chess players that exist: ICCF grandmaster with engine and 50 days per 10 moves.
"you ignore the majority of legal responses"
++ If the best moves cannot win for white, then the worst moves cannot win for white either.

They are examples of weak chess players - i.e. those that would be beaten (occasionally or often) by a perfect chess player (to be precise, one that distributed its play across all tablebase-optimal moves in some way).
A weak chess player (as defined above) is sometimes simply wrong. Relying on such a player by saying "probably it isn't wrong here so we can rely on it 100%" should be obviously foolish.
Note also that no engines express certainty about the result in uncertain positions. AI engines explicitly express uncertainty about a position: e.g. Leelachess gives positive probabilities of win or loss in difficult tablebase draws. This will obviously be true for positions beyond any tablebase.
Even with traditional engines, an evaluation of, say, 0.2 means Stockfish is not certain about the draw. It sees a positive expectation for white, which implies at least some wins. It would be very foolish to infer a certain result from an evaluation that DEFINITELY carries uncertainty, such as this. Sometimes there is a very narrow route to a forced win, making a draw merely SEEM likely.
Amusingly, although you might say Stockfish is only certain when it says "mate in NN", observation shows that even this is uncertain. It gives that evaluation based on imperfect analysis that is merely "convincing". Sometimes it turns out to be wrong and it revises the evaluation to no longer a forced mate!
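The point that a "+0.20" evaluation implies a nonzero win expectation can be made concrete. Centipawn evaluations are commonly mapped to an expected score with a logistic curve; the constant below follows the conversion lichess publishes, but treat it as illustrative, not as anything Stockfish reports internally:

```python
import math

def expected_score(centipawns: float) -> float:
    # Logistic mapping from a centipawn evaluation to an approximate
    # expected score for white. The constant is lichess's published
    # conversion; it is illustrative, not engine-internal.
    return 0.5 + 0.5 * (2 / (1 + math.exp(-0.00368208 * centipawns)) - 1)

print(round(expected_score(20), 3))  # a "+0.20" eval: slightly above 0.5
print(round(expected_score(0), 3))   # a "0.00" eval: exactly 0.5
```

A score strictly above 0.5 is only possible if some continuations win, which is exactly why an 0.2 evaluation cannot be read as a certain draw.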
@7413
"Relying on such a player"
++ Weakly solving chess does not rely on the engine, but on the 7-men endgame table base.
"no engines express certainty about the result in uncertain positions"
++ The 7-men endgame table base does.
The point is to calculate until the 7-men endgame table base.
That is also how Checkers has been weakly solved.
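To make the "calculate until known values are reached" idea concrete, here is a toy forward search that solves tic-tac-toe outright (the game is small enough that no endgame database is needed). It is a sketch of the shape of the argument only, not the actual Checkers proof:

```python
# Minimal sketch: forward minimax search that "solves" tic-tac-toe,
# illustrating search from the opening until positions of known value
# (here, terminal positions) are reached.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game-theoretic value from X's point of view: 1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i+1:], nxt)
            for i, sq in enumerate(board) if sq == '.']
    return max(vals) if player == 'X' else min(vals)

print(value('.' * 9, 'X'))  # 0: tic-tac-toe is a forced draw
```

Note the asymmetry hidden in `max`/`min`: for the proving side, one good move per position suffices, but every opponent reply must be covered. That is the structure of a weak solution.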

The usual confused nonsense. The problem is not with positions in the (tiny) tablebase. The problem is with the grossly inadequate analysis getting there.
I have explained why. You have explained it is beyond your ability to comprehend.
Also, that is NOT how checkers was solved. Fudging it as badly as you suggest would not have required much computation, and would not pass peer review as being of significance. If you disagree, get your waffling peer reviewed.
@7415
"that is NOT how checkers was solved"
++ That IS how Checkers has been weakly solved.
See Figure 2
https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/checkers_is_solved.pdf