Regarding 'upper bounds' on the maximum number of setups of 32 or fewer chess pieces - that's well within reach. Even without computers. And it has been done.
And any programmer worth his salt could probably use computers to push those upper bounds down considerably.
At issue - how much?
Then - what about cutting down according to equivalencies?
Could also be done. But there's a massive number of them.
Can a computer isolate all equivalencies?
I would say no. But still - I suspect that a practical upper bound on the number of essentially different chess positions
could be found somewhere in the twenties range of powers of ten.
Or has been found.
But that's still way too big for 'solving'. Daunting.
It's comparable to the Avogadro number in chemistry/physics ... gigantic.
Unmanageable.
Instead of, say, comparable to an upper bound on the number of atoms in the observable universe?
Chess will never be solved, here's why


#255
Do not push the agnosticism too far.
Yes 1 e4 e5 2 Ba6 might win for white, there is no proof of the contrary.
Yes 1 e4 f5 might win for black, there is no proof of the contrary.
However we know that is not so.
You've just contradicted yourself.
You are confusing "know" and "are very confident of".
I am very confident that if I toss a fair coin a thousand times, it will not come up heads every time. But I do not know that (if I did know that, I could also exclude each of the other possible sequences, one by one, and finally conclude no sequence is possible).
You don't have reason to be as confident about the status of 1. e4 f5 until there is a demonstrated forced result for white. If you disagree, pick a larger number of coin tosses until you agree.
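For scale, that coin-toss probability can be written out - a quick sketch in Python, nothing beyond the standard library (1000 is just the number from the example above):

from math import log10

p = 0.5 ** 1000           # probability of 1000 heads in a row
print(p)                  # about 9.33e-302
print(1000 * log10(0.5))  # exponent: about -301, i.e. p is roughly 10^-301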

1 e4 f5 is the From Gambit, I believe. It's considered unsound.
Whereas the King's Gambit is considered 'sound' by GMs, if I've got that right. But many GMs would prefer the black side of it?
I'm getting that impression too.
But that preference could depend in turn on who the opponent is ...
#270
"Solving chess is, among other things, determining if that is the case."
No: proving the Riemann Hypothesis, Goldbach Conjecture, Fermat's Last Theorem, Four Color Theorem... is/was not determining if that is the case: it is proving what has long been known.
Also checkers was known in practice to be a draw long before the proof.
We know that 1 e4 f5 wins for white, we know 1 e4 e5 2 Ba6 wins for black.
A full game tree until checkmate in all variations would prove it, but we know without proof.
Likewise we know chess is a draw based on massive evidence, but without a proof yet.
Solving chess = proof black has at least 1 move that draws against all reasonable white moves.
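'A full game tree until checkmate in all variations' is just minimax carried down to terminal positions. Here is what that looks like on a toy game (the tiny tree is invented for illustration; the size of real chess trees is exactly what is at issue in this thread):

# Minimax to terminal positions: what a 'full game tree proof' means.
# Leaves carry the game result from White's point of view
# (+1 win, 0 draw, -1 loss); inner nodes are lists of moves.

def solve(node, white_to_move):
    if isinstance(node, int):          # terminal position: known result
        return node
    results = [solve(child, not white_to_move) for child in node]
    return max(results) if white_to_move else min(results)

toy_tree = [[0, -1], [1, 0]]           # two White moves, two Black replies each
print(solve(toy_tree, True))           # 0: the toy game is a draw with best play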

It would seem logical to have a computer 'dismiss' positions arising from a crazy move.
Like 1 e4 e5 2 Ba6. Crazy.
But dismissing the From Gambit ... that's different.
There are massive pitfalls in such an approach.
Far too many 'Morphy positions' where crazy-looking moves and the positions they create are good!
But even dismissing positions where somebody just hung their rook - pitfalls again.
1 e4 e5 2 Ba6 - leave it in. Yes, it qualifies as 'solved' - but leave it in anyway.
An idea. Not some kind of arrogance.
The number of possible setups is always going to be an Avogadro kind of number ... hey, I'll look it up. 10 to the 26th with a coefficient in front of it? Coming up.
Awww - Avogadro's number is 'only' about 6 x 10 to the 23rd ... shucks!

Next idea: comparing some processing speeds of computers ...
with typical upper bounds for the number of possible positions.
Try 10 GHz. That's 10 billion cycles per second.
A 'gig' is a billion - so call it 10 billion operations per second, to be generous.
That's 10 to the 10th ops per second.
Say by some stretch we could get the number of possible chess setups down to 10 to a 'small' 27th power ...
then it would take our 10-billion-ops/second computer 10 to the 17th seconds to do one op on each position.
(Which isn't enough to analyze it to any degree at all. It could take many millions of ops, if not billions or trillions, on each position ...)
But just one op per position would take 10 to the 17th seconds.
There are about 31 million seconds in a year.
That's about 3 x 10 to the 7th seconds.
So it would take our GHz-range desktop computer about 3 x 10 to the 9th years to do just one op on each position.
Three billion years, give or take.
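Here's that arithmetic as a little Python sketch, for anyone who wants to check it (the 10^27 positions and 10^10 ops/second are the assumptions from above, not measured figures):

# Back-of-the-envelope: one op per position on a fast desktop.
positions = 10**27            # assumed 'small' upper bound from above
ops_per_sec = 10**10          # assumed 10 billion ops/second desktop
sec_per_year = 3.15 * 10**7   # about 31 million seconds in a year

seconds = positions / ops_per_sec     # 1e17 seconds
years = seconds / sec_per_year        # about 3.2e9 - three billion years
print(f"{years:.2e} years")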
Does that link up with the forum topic?
"Chess will never be solved, here's why"
#283
No, no, present cloud engines reach 10^9 nodes per second.
I have calculated above how 3 cloud engines can solve chess in 5 years.

"No, no, present could engines reach 10^9 nodes per second."
I just finished posting to the effect that a common desktop might do 10 to the 10th ops per second.
Faster than your 10 to the 9th.
And it's still going to take billions of years. For one op per position.
More like many trillions of years once you do real work on each position.
1000 ops on any position would probably not be anything like enough to even categorize it ....
A thousand ops to come up with something about a chess position?
Hardly.
This is beginning to look like 'case closed'?
Well - everyone decides for himself/herself is how I like to look at it ...
By what factor can an array of supercomputers outdo a strong desktop computer?
That could be googled, I guess.
#285
10^9 nodes/second = 10^9 chess positions per second on a cloud engine
a common desktop is at 10^6 chess positions per second
I calculated it above: 5 years on 3 cloud engines.
Please see my calculation above.

"10^9 nodes/second = 10^9 chess positions per second on a cloud engine"
No. You're not going to get anything on a position with a byte or bit per position.
If you want to define what you mean by 'nodes' there - you could define it here and now for everyone reading.
Your choice.

"Summit can deliver 200 petaFLOPS at peak. This is equivalent to 200 quadrillion floating-point operations per second."
I got that from here:
https://www.rankred.com/fastest-supercomputers-in-the-world/
200 quadrillion is 2 x 10 to the 17th power.
So Summit is about 20 million times as fast as a good desktop computer.
But at one op per position, that's still going to take about 160 years to go through 10 to the 27th positions.
With a thousand ops per position (not enough) - that's going to be about 160,000 years.
With a million ops per position (still not enough for so many positions) -
that's going to be about 160 million years ...
Even if you eat shark cartilage and watch Jimmy Fallon ...
you're not going to make it ...
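Same arithmetic for Summit, sketched out (the 2 x 10^17 figure is from the quote above; one op, a thousand ops, and a million ops per position are just illustrative workloads):

positions = 10**27               # assumed position count from the posts above
summit_ops = 2 * 10**17          # Summit peak: 200 petaFLOPS, per the quote
sec_per_year = 3.15 * 10**7

for ops_each in (1, 10**3, 10**6):    # one op, a 'kilo', a 'meg' of work
    years = positions * ops_each / summit_ops / sec_per_year
    print(f"{ops_each:>8} ops/position: {years:.2e} years")
# prints roughly 160 years, 160,000 years, and 160 million years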

#266
No, not at all.
There are 2 different things: 1) assessing the feasibility of solving chess and 2) solving chess itself.
1) To assess the feasibility we must estimate the number of legal, sensible, and relevant positions, so as to estimate the resources required, i.e. CPU time on cloud engines. So the estimate should exclude illegal positions; non-sensible positions, like random samples from the Tromp count; and non-relevant positions, which are legal and sensible but which will become unreachable in the course of the calculation, e.g. all positions with a white pawn on e2 after 1 e4.
2) To actually solve chess, the cloud engines calculate from a humanly prepared 26-men tabiya, e.g. C67, towards the 7-men tablebases.
Wait, what?
Are you saying that solving that 26-men tabiya will solve all of chess?
All of us are assuming it is from the starting position, hence the discussion about 1 e4 e5 2 Ba6 and 1 e4 f5 (not From's Gambit - that is 1 f4 e5).
FLOP = floating-point operation, which is not needed for chess.
A scientific paper shows there are 3.81*10^37 legal chess positions.
A scientific paper shows checkers was solved using the square root of the number of legal positions.
Hence for chess it is plausible that the square root of the number of legal positions is needed as well, to account for the fact that each pawn move and each capture renders huge numbers of positions irrelevant.
A cloud engine reaches 10^9 nodes per second.
There are 500 ECO opening codes, A00 to E99.
Only about 50 of these are needed to solve chess.
Hence chess can be solved using 3 cloud engines in 5 years.
The roadblock is money to pay for the cloud engines.
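For the record, here is that arithmetic written out as a sketch - the square-root heuristic, the 50-of-500 ECO cut, and 10^9 nodes/second per engine are all assumptions from the posts above, not established facts:

import math

legal = 3.81e37                  # the paper's figure the poster cites
space = math.sqrt(legal)         # square-root heuristic from checkers: ~6.2e18
space *= 50 / 500                # keep only ~50 of the 500 ECO codes: ~6.2e17

nps = 3 * 10**9                  # 3 cloud engines at 10^9 nodes/second each
years = space / nps / 3.15e7
print(f"{years:.1f} years")      # about 6.5 - in the ballpark of the claimed 5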
#291
Human assistants with ChessBase should prepare 26-men tabiyas to speed up the process.
For example, C67 is a good 26-men starting tabiya for the cloud engine.

Too many implausible jumps there.
One can quote a 'scientific paper' or 'peer review' and claim 'it must be'.
Computers depend on doing ops per second.
Processor speed.
You're saying there's a way around that?
You could do this with zero processing speed?
Zero processing speed means zero results.
Posts 283 and 285 and 289 are put in plain math terms.
Perhaps you can come up with a 'magical refutation'.
Quoting 'papers' doesn't cut it.
If you're going to be 'right' - suggestion - you've got to be able to express the math on your own.
It's called 'logical evidence'.
But - you haven't 'got' to do anything at all!
It's only a suggestion.
Meant friendly !
For now - Summit supercomputer (which is a big roomful of supercomputers, apparently - picture on the website) ...
well, it appears it would take Summit at least 160 million years as a lower bound. That's with a million ops of processing per position.
When I thought about this decades ago, I came up with a different idea: if the supercomputer could relate positions together ... in other words, assign conclusions to a position based on certainties about the positions one ply away.
And then piggyback on that. One conclusion leading to another - and cycle through positions that way.
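That 'piggyback' idea is essentially retrograde analysis - the way endgame tablebases are built. A toy sketch of the labeling step, on a made-up game graph (the graph and outcomes here are invented for illustration; a real solver would enumerate actual chess positions and pick the best outcome by side to move):

# Toy 'piggyback' labeling: propagate known results one ply backwards.
# 'children' maps a position id to the positions reachable in one move;
# positions with no children are terminal and come pre-labeled.

children = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
label = {"d": "draw"}            # terminal position with a known outcome

changed = True
while changed:                   # cycle until no new conclusions appear
    changed = False
    for pos, moves in children.items():
        if pos in label:
            continue
        results = [label.get(m) for m in moves]
        if all(r is not None for r in results):
            label[pos] = results[0]   # all outcomes are 'draw' in this toy
            changed = True

print(label)   # every position inherits a conclusion from one ply away

The point is that each pass settles positions purely from conclusions already settled one ply deeper, with no fresh search.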
One thing for chess students to look out for is 'false equivalency'.
An example is with the b- and g-pawns.
The method of winning with the king on the sixth rank (in K+P versus K) is more constrained with those pawns than it is with the four more central pawns. It's 'easier' to mess up!