And they take themselves SO seriously. Dio correct as usual? Dio has never been correct about anything except when he projects his own ego on playerafar.
Is Chess Something We Can Solve?
This should be moved to Off Topic, I think. Long, endless arguments probably should not be in the community section.
There's an aspect of this that hasn't been discussed much.
And that is - what if there is a breakthrough in programming and algorithms for this project - instead of just computing speed?
But could major software improvements make that much difference?
If the computers were a billion times faster and the software was a billion times more efficient - (which are both gigantic 'ifs')
then that would be an improvement factor of ten to the eighteenth power.
So if there are 10 to the 44th chess positions to be solved ...
well, you can't just subtract the 18 from the 44 - the exponent difference isn't a time until you attach a time per position.
If there was an 'average time' to solve each position with current software
such as one second - then you'd still end up needing 10 to the 26th seconds to solve the whole thing with the stronger hardware and software.
There are about 31 million seconds in a year.
So that still leaves you with over ten to the 18th years!
A million trillion years!
Hahahaahahaha.
-------------------------
But even that might be understating it, because there's no 'average time' to solve each position.
Positions in the endgame tablebases get harder and harder to solve as more pieces get added.
There's no 'one second' average.
In fact - there's no 'average' at all. (Although perhaps tygxc will try to pretend there is an average and it's called 'nodes per second'?)
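A minimal sketch of the arithmetic above, in Python (the 10^44 position count, the one-second-per-position cost, and the 10^18 combined speedup are all the post's assumptions, not established figures):

```python
# Back-of-the-envelope check of the scenario described above.
# Assumptions (from the post, not established facts):
#   - 10^44 positions to solve
#   - currently ~1 second of work per position
#   - a 10^9 hardware speedup times a 10^9 software speedup
positions = 10**44
seconds_per_position = 1.0
speedup = 10**9 * 10**9                     # combined factor: 10^18

total_seconds = positions * seconds_per_position / speedup   # ~10^26 s
SECONDS_PER_YEAR = 31_557_600               # about 31 million, as the post says
total_years = total_seconds / SECONDS_PER_YEAR

print(f"{total_years:.1e} years")           # on the order of 10^18 years
```

This reproduces the post's 'million trillion years' (10^18) figure.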
There is not that much scope, because in something like the construction of a tablebase or the solution of checkers, very little time is spent on each position. It's not like analysing a move in chess, it's more like adding one node to the analysis tree. On a powerful cloud server, Stockfish can manage 300,000,000 nodes per second.
Interestingly, the construction of the Syzygy tablebase without a supercomputer was slightly faster (in clock time) than the construction of the Lomonosov tablebase with one (5 months versus 6 months), suggesting some good coding! (Both are full 7-piece tablebases.) By my calculations, it managed 32 million positions per second. It probably used less powerful hardware than the most powerful implementation of Stockfish (which uses over 100 cores).
This is not many clock cycles for a quite complex operation of expanding a node (forwards or backwards).
So it's really all about those big numbers, maybe 10^30 positions. That number of nodes at the speed of the fastest Stockfish (300 Mnodes/s) is about 100 trillion years. And the 32 piece tablebase over 10,000 trillion trillion years. Gulp.
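As a sketch, the two headline times follow directly from the rates quoted in this post (3x10^8 nodes/s for the fastest Stockfish, 10^30 positions for a weak solution, 4.6x10^44 positions for the full 32-piece tablebase - all figures from the post itself):

```python
# Convert the post's position counts and rates into wall-clock years.
SECONDS_PER_YEAR = 31_557_600

def years_at(positions: float, positions_per_second: float) -> float:
    """Years needed to process `positions` at a fixed rate."""
    return positions / positions_per_second / SECONDS_PER_YEAR

weak_solve_years = years_at(1e30, 3e8)     # ~10^14 years: "100 trillion"
tablebase_years = years_at(4.6e44, 3e8)    # ~5e28 years: ">10,000 trillion trillion"
print(f"{weak_solve_years:.1e} {tablebase_years:.1e}")
```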
@360
"On a powerful cloud server, Stockfish can manage 300,000,000 nodes per second."
++ The 17 ICCF finalists each use two servers of 90,000,000 nodes per second each.
"maybe 10^30 positions" ++ No way. I stand by 10^17 positions to weakly solve Chess.
Even if the alpha-beta pruning of chess were as primitive as the one Schaeffer used to weakly solve Checkers and even without discounting obviously irrelevant positions,
then still it would lead to (10^38)^0.67 = 2*10^25 positions.
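The exponent arithmetic in that last line can be reproduced directly (a sketch; the 0.67 exponent is the post's own figure, apparently derived from Schaeffer's checkers search, and its applicability to chess is the poster's assumption):

```python
# Reproducing the quoted scaling claim: positions searched ~ (total)^0.67.
chess_positions = 10.0**38     # the post's count of relevant positions
exponent = 0.67                # assumed pruning effectiveness (the post's figure)

searched = chess_positions**exponent
print(f"{searched:.1e}")       # ~2.9e25; the post rounds this to 2*10^25
```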
And they take themselves SO seriously. Dio correct as usual? Dio has never been correct about anything except when he projects his own ego on playerafar.
We covered this earlier. You're no longer capable of reliable testimony on what is correct or not, so why even comment?
There is not that much scope, because in something like the construction of a tablebase or the solution of checkers, very little time is spent on each position. It's not like analysing a move in chess, it's more like adding one node to the analysis tree. On a powerful cloud server, Stockfish can manage 300,000,000 nodes per second.
Interestingly, the construction of the Syzergy tablebase without a supercomputer was slightly faster (in clock time) than the construction of Lomosov tablebase with one (5 months versus 6 months), suggesting some good coding! (Both are full 7-piece tablebases). By my calculations, it managed 32 million positions per second. It probably used less powerful hardware than the most powerful implementation of Stockfish (which uses over 100 cores).
This is not many clock cycles for a quite complex operation of expanding a node (forwards or backwards).
So it's really all about those big numbers, maybe 10^30 positions. That number of nodes at the speed of the fastest Stockfish (300 Mnodes/s) is about 100 trillion years. And the 32 piece tablebase over 10,000 trillion trillion years. Gulp.
@Elroch - we seem to agree on the kinds of daunting numbers involved.
Our disagreements usually have to do with terminology.
Both 'weakly solved' and 'nodes' play into tygxc's illogic and invalid claims.
Elroch, have you considered the idea that adding a piece to a tablebase that has already solved 12 pieces increases the difficulty more than adding a piece to a tablebase that has solved 8 pieces?
And - what is the actual closest thing to an 'average' in all of this?
If you consider the number of years it took a tablebase to get from six pieces solved to seven pieces solved - and then divide that time by the number of possible 7 piece positions ... then you might have a kind of average.
I vaguely recall doing something like that in the other forum. Years ago.
It's not really an average though.
But still - some figures are there.
And when you extrapolate with a 500x multiplier for the number of positions each time another piece is added - just that alone is daunting.
The result is many trillions of years.
--------------------------------------------
But there's this matter of the difficulty also increasing ... because of more pieces already on the board.
That might even be calculated by comparing the jump from five to six pieces and from six to seven - while allowing for the 500 multiplier.
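A sketch of that extrapolation (the roughly half-year build time for a 7-piece tablebase and the 500x multiplier per added piece are the post's rough assumptions; a later reply in the thread argues the real ratio is nearer 90 and falling):

```python
# Extrapolating tablebase build time with a constant per-piece multiplier.
# Assumptions (the post's, deliberately rough):
#   - the 7-piece tablebase took about half a year to build
#   - each added piece multiplies the work by about 500
def extrapolated_years(base_years: float, multiplier: float, extra_pieces: int) -> float:
    return base_years * multiplier**extra_pieces

years = extrapolated_years(0.5, 500.0, 25)   # 7 pieces -> 32 pieces
print(f"{years:.1e} years")                  # ~1.5e67: far beyond trillions
```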
So instead of using the word 'average' a term 'difficulty factor' could be coined.
While also avoiding the term 'nodes'.
Nodes may as well be Noodles. Verbal spaghetti.
Elroch remember how important it is to explain Very clearly.
(reminds me of Elroch quickly refuting O's feeble attempt to attack the term 'perfect information' - but then O is generally feeble anyway)
And they take themselves SO seriously. Dio correct as usual? Dio has never been correct about anything except when he projects his own ego on playerafar.
We covered this earlier. You're no longer capable of reliable testimony on what is correct or not, so why even comment?
Counterpoint... you're not capable of reliable testimony on what is correct or not, so why even comment?
Dio is always capable. O constantly tells falsehoods. (EB possibly not aware of that.)
So Dio's question is reasonable.
Dynamic of this forum:
tygxc constantly gets a lot of attention - often more than O gets. And O, being foolishly jealous of tygxc over that, struggles terribly to try to attract attention and divert focus from tygxc.
Which means tygxc wins over O. Constantly. O loses. Constantly. For that and other reasons. O losing also means the forum wins. Over and over again.
And it means that the real discussion of the forum topic goes on around O - without him. While with or without tygxc.
Hey, how about you actually read the context of posts instead of making false assumptions?
The context of this is that Opt misread a basic question, then we corrected his misunderstanding, and he's calling us stupid for not agreeing with him and claiming that Elroch is conducting a mass manipulation.
You also still haven't corrected your post after my warning about tygxc, in which you still haven't addressed many of my factual rebuttals to both tygxc's claims and your select defenses.
Optimissed, you claimed that the question implied that a certain player would win every time. Where? What sentence said that?
@360
"On a powerful cloud server, Stockfish can manage 300,000,000 nodes per second."
++ The 17 ICCF finalists each use two servers of each 90,000,000 nodes per second.
"maybe 10^30 positions" ++ No way. I stand by 10^17 positions to weakly solve Chess.
Even if the alpha-beta pruning of chess were as primitive as the one Schaeffer used to weakly solve Checkers and even without discounting obviously irrelevant positions,
then still it would lead to (10^38)^0.67 = 2*10^25 positions.
you do realize that the pruning of chess is less effective than that of checkers due to chess's complexity?
also, improved chess engines mean almost nothing when the basis for their improvements is almost entirely on increased computing power.
Yes.
Because the engines of tomorrow will be better than the engines of today.
tygxc may not realize that engines of today are rated and that those ratings include wins over engines of yesteryear?
In other words - engines of today reveal weakness/inaccuracy of engines of earlier.
Engines of today playing each other reveals little about 'solving' chess.
Nor does engines' 'pruning' of their analysis.
--------------------------
On this website, members have often debated what would happen if the greats of chess were brought forward in their prime years - with time machines, and given some time to adjust to the computer age and newly compiled chess information. How would they do against the likes of Magnus Carlsen and other top players of today?
The bottom line is - that's just not known.
----------------------------------
But with engines - you don't need time machines.
The old engines would continue to be 'in their primes'.
They don't have heart attacks and strokes.
So they could be compared by being played off with modern computers.
But there's not much world interest.
Why not?
Because most chess players around the world would rather play each other than watch computers play each other.
Apparently there's not going to be a big computer project to prove to tygxc that computer games of yesterday and today are not 'optimal play' - so no one will arrange a playoff of old and new engines to refute his claims.
It would be almost like proving there's no teapot orbiting Jupiter.
The money isn't there for that.
Plus, the engines rely on increased computing power to improve themselves, whereas tygxc is trying to claim that they are more efficient per unit than a checkers engine.
The context of this is that Opt misread a basic question, then we corrected his misunderstanding, and he's calling us stupid for not agreeing with him and claiming that Elroch is conducting a mass manipulation.
...
@Optimissed didn't misread the question, he failed to understand it.
I knew he wouldn't solve it but I was a bit surprised to find he couldn't even understand it.
@Elroch - we seem to agree on the kinds of daunting numbers involved.
Our disagreements usually have to do with terminology.
Both 'weakly solved' and 'nodes' play into tygxc's illogic and invalid claims.
No, they don't. It is appropriate for the rest of us to use the terms correctly. A node is a position that is analysed in a program, regardless of how much it is analysed. It's called a node, because in chess engines, tablebases and engine analysis, a key concept is the graph (mathematical sense - would you like to discourage that too?) of positions, and in graph theory, points in a graph are called nodes.
Of course what is done with the nodes is entirely different in engine analysis from weakly solving a game or building an endgame tablebase. In the latter, the edges (also a graph theory term) for all legal moves are in the graph. In weakly solving a game, the forward edges are all present only for positions reached by the opponent of a strategy, and there can be just one edge for the positions reached by the proponent.

Elroch, have you considered the idea that adding a piece to a tablebase that has already solved 12 pieces increases the difficulty more than adding a piece to a tablebase that has solved 8 pieces?
I have considered it, but it seems you have it the wrong way round. The size of tablebases goes up rapidly with the number of pieces, but the ratio of the sizes goes down. E.g., the 6-piece tablebase has 145 times more positions than the 5-piece one, but the 8-piece tablebase has "only" 90 times more positions than the 7-piece one! Intuitively, you have more repeated pieces and less room on the board for the later pieces.
And when you extrapolate for a 500 multiplier figure of number of positions each time another piece is added - just that alone is daunting.
The result is many trillions of years.
--------------------------------------------
But there's this matter of the difficulty also increasing ... because of more pieces already on the board.
That might even be calculated by comparing the jump from five to six pieces and from six to seven - while allowing for the 500 multiplier.
It's not 500. It's down to 90 for 7 pieces to 8 pieces.
To get from 8 pieces to 32 pieces is another 24 steps of size, and the ratio of the sizes is 4.6e44 to 38,176,306,877,748,245. So on a geometric scale, each of those 24 steps increases the size by a geometric average of (4.6e44 / 38,176,306,877,748,245)^(1/24), which is less than 15 (compare to the ratio of 90 from 7 pieces to 8 pieces).
If the ratios do fall monotonically (I think so), the ratio of the sizes of the 32 piece and 31 piece table base will be the smallest.
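The geometric-average step can be verified with the position counts quoted above (4.6e44 for 32 pieces, 38,176,306,877,748,245 for 8 pieces, 24 steps between them):

```python
# Geometric average growth ratio per added piece from 8 to 32 pieces,
# using the position counts quoted in the post above.
positions_32 = 4.6e44
positions_8 = 38_176_306_877_748_245      # ~3.8e16
steps = 24                                 # pieces added: 8 -> 32

ratio = (positions_32 / positions_8) ** (1 / steps)
print(f"{ratio:.1f}")                      # ~14.8, versus ~90 for the 7->8 step
```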
So instead of using the word 'average' a term 'difficulty factor' could be coined.
While also avoiding the term 'nodes'.
Nodes may as well be Noodles. Verbal spaghetti.
Elroch remember how important it is to explain Very clearly.
(reminds me of Elroch quickly refuting O's feeble attempt to attack the term 'perfect information' - but then O is generally feeble anyway)
Again, a matter of respecting the definition of a standard term!
@Optimissed didn't misread the question, he failed to understand it.
I knew he wouldn't solve it but I was a bit surprised to find he couldn't even understand it.
Don't rule out him misunderstanding it intentionally.
O likes to glibly troll. On reflex.
So just quickly assigning his own misinterpretation by whim would be overwhelmingly attractive and efficient to him.
Increases his trolling intensity.
We saw the same thing from him with Zermelo and 'crumpet'.
And 'perfect information' being too much for him.
Kind of like his 'think vaccination weakens the immune system'
and 'if it's an opinion it can't be inaccurate'.
Quick glib trolling replies from Optimissed.
His lightweight pseudo-intellectual trolling.
You can expect more of the same from him. Year in year out.
(but I think Martin knows that)
@Elroch
" Actually I think you have it the wrong way round. "
The fact that the ratios of total positions decline as more pieces are added doesn't mean the difficulty doesn't increase more.
Where 'average' could mean the total time taken (T) to form the tablebase concerned divided by the number of possible positions (p) in the two consecutive cases respectively.
T1/p1 and T2/p2.
You might find the second term is higher even though p2 is about 500 times as big as p1. I just say 'might', not 'will'. Or if the multiplier were about 90, as you mentioned.
Regarding 'nodes' ... okay ... points on a graph.
But how are you going to tie that to 'nodes per second' verbally?
If you already have done so I missed it.
On your graph - what do you have on your y-axis and what on your x-axis?

Here's some wiki about nodes btw ... doubt if it helps though.
"In discrete mathematics, and more specifically in graph theory, a vertex (plural vertices) or node is the fundamental unit of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). In a diagram of a graph, a vertex is usually represented by a circle with a label, and an edge is represented by a line or arrow extending from one vertex to another."
---------------------------------
@Elroch you also stated:
"It's not 500. It's down to 90 for 7 pieces to 8 pieces."
But you've got any of ten types of piece to add.
You've got well over 50 squares to add any of the ten piece types to.
How do you Not get a multiplier over 500?
You could add a pawn or a knight or a bishop or a rook or a queen - but there's either of two colors available for each piece.
So that's 5x2 = ten types of piece to add.
With only seven pieces on the board - that's 64-7 = 57 square choices for that added piece of one of ten types.
So why doesn't that multiply the number of possible positions by 10 x 57 = 570?
---------------------------------------------
Perhaps Elroch's argument is that if you add a black pawn to a square when there's already one on the board - then there would already have been a position where that previous black pawn was on that square and you're putting the new pawn on that square. So you can't count that new position twice. Are you saying that kind of thing cuts the multiplier down from over 500 to 90?
Doesn't look right. Including because new piece types are also being added.
Apparently you got that from the computer info as to number of positions.
-----------------------------------------------
There's the issue of the actual math.
48 ways to put a black pawn on the board. Now how many ways to add another black pawn to each position of the first pawn? It's less than 47 because you'd get repetitions? Yes, that's intuitive. I think you'd get a sum of a series of declining terms.
But I also vaguely recall studying the math of that decades ago.
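The counting being half-remembered here is standard combinatorics: k indistinguishable pieces on n squares give C(n, k) placements, and C(48, 2) is exactly the 'sum of declining terms' 47 + 46 + ... + 1. A quick sketch:

```python
from math import comb

# Placing identical pieces: order doesn't matter, so use combinations.
squares = 48                   # squares a black pawn can stand on (ranks 2-7)

one_pawn = comb(squares, 1)    # 48
two_pawns = comb(squares, 2)   # 48*47/2 = 1128, not 48*47 = 2256

# The "sum of a series of declining terms" the post recalls:
declining_sum = sum(range(squares - 1, 0, -1))   # 47 + 46 + ... + 1
print(one_pawn, two_pawns, declining_sum)        # 48 1128 1128
```

This overcounting of identical pieces is one reason the naive 570 multiplier overstates the true ratio of positions.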
----------------------------------
Please comment further. If you're right you're right. I'm not 'O' (Optimissed) so I won't slander you if you're right.
And since I'm not O I also won't slander you if you're wrong.
Nor will I slander you if you're actually referring to something else somehow.
I don't slander anyway. That's what O does. Constantly.
And if you don't reply or don't address the points - then I won't get 'bent out of shape' and do an IPG or a chris21 and say 'See? No answer to the question,' along with the foolish 'still waiting'. We can expect that O will read this exchange and kneejerk some creative false comments about it, though.
tygxc? I'm predicting if he jumps into this one he won't be like O though - I don't think he ever has been. Good for him.
My performance earlier on the other thread showed that I would probably still score higher than anyone else around here. I'm sure you aren't going to admit that readily and I don't have your expertise in maths to fall back on but I made a very good showing. You really ought to understand that relying on people like Mega and RATMAR to back you up, which you were doing, is a negative in you. You would do better to disidentify with them. Llama is the cleverest of your bunch of supporters and he doesn't make too many mistakes but he still does make them. Dio is as hopeless a case as playerafar, intellectually.
This makes it awfully hard to explain why you routinely fall short in our exchanges, and are usually the one who gets fed up and resorts to namecalling and unilateral claims of superiority/victory. This preemptive type of capitulation is a hallmark of someone who is accustomed to losing and is vexed by it.
Yes - Dio correct as usual.
O - that's Optimissed - always loses.
And yes he's severely vexed by losing and it shows very much.
Constantly.
But it's also masochism. He loves to hate it.
----------------------------
tygxc is doing a lot better than O is.
I'm thinking tygxc is or was a lawyer. Or maybe a salesman.
But more like lawyer. Trial defense lawyer.
It fits.
But tygxc has conceded more than once that chess cannot be solved with today's technology.
--------------------------------------------
What a bunch of complete fools post here. Everyone being manipulated by Elroch. A bunch of manipulative psychopaths who are only happy controlling each other.