But this person (if it is just one) needs to account for about 1 in 5 posts in the whole discussion! Seems a bit implausible. Maybe multiple simultaneous unmutes?
Actually 1 in 5 sounds about right for this poster, on this thread.
"That's how far these games are from a proof of chess being a draw."
++ So in your opinion the 17 ICCF (grand)masters and their engines all collude to make exactly 1 error, never 0, never 2.
@7xz you want to defend this?
No. I think it was someone else who suggested they were all colluding. Probably the 'p' person.
no, check the comment he was responding to. this is exclusively tygxc. playerafar isn't even in this conversation.
When I said no, I meant I didn't want to defend tygxc's comment.
@14559
"engines perform better with humans than without - then wouldn't that, in itself, imply that engines have room for improvement?"
++ Sure. As pointed out, engines even have trouble with KNN vs. KP, which a human, Troitsky, solved before engines existed. Engines do not even recognise dead positions, as any arbiter does.
'In the middlegame, I use ChessBase to find similar positions so that I know what plans are relevant. For me, the key is planning, which computers do not do well — Petrosian-like evaluations of where pieces belong, what exchanges are needed, and what move orders are most precise within the long-term plan.' - Edwards
'the computer engines did not understand the main ideas and suggested in most middlegame positions that all candidate moves were equivalent.' - Edwards
However, human ICCF (grand)master WC Finalists + engines at 5 days/move have now reached perfection: 0 error/game.
And T and 'Washi' in the climate 'hoax' forum continue to be comparable.
That forum, by the way - like this one - recently got 'moderator intervention'.
It needed it.
Washi's ridiculous claims are as preposterous as T's.
Washi is not as 'serene' as T, but perhaps doesn't entertain a notion that he 'knows better than everybody'.
And as pointed out by another here - Washi has far more backers than T does.
highly disagree with the comparison, for the reasons u mentioned and more. the climate change debacle has actual stakes, and movements behind each side. people have investments in those positions, many personal political repercussions - for themselves and for how they see the world with each statement - and a culture behind their positions. there's literally nobody on tygxc's side. there's no debate of the topics that tygxc claims. in fact, that's been one of the bigger difficulties in getting sources against tygxc, because tygxc's "interpretations" of various definitions are so completely ludicrous that nobody would even think to clarify against them.
But we agree that T has 'fewer backers'.
As in - none. Not usually. One or two that pretend to. Minuscule 'support'.
He's had 'backers' that don't agree with him! Lol.
MEGA - the word 'comparison' has multiple interpretations.
When there's 'no comparison' there's still 'comparison'.
T's tactics are similar to many of Washi's tactics.
But not to the other climate skeptics.
'I'm here to make sure that tygxc doesn't mislead people, and I have been very successful at that.'
Correct. You have been. T continues to flail and flail. Doesn't mean he's getting anywhere except to cause others to present the real information and logic - both of which he disdains.
@14559
To make it clearer:
ICCF Finalist + engines, 5 days/move, anno 2024: 0 decisive games, 0 error/game
ICCF Finalist + engines, 2 days/move, anno 2024: decisive games, imperfect play, Dronov
ICCF Finalist + engines, 5 days/move, anno 2022: 17 decisive games, imperfect play
ICCF John Doe + engines, 5 days/move, anno 2024: 5 decisive games, imperfect play
engines without human, 5 days/move, anno 2024: decisive games, imperfect play
@14575
"Do you think 8 pieces by 2030, 9 pieces by 2050, and 10 pieces by 2100 is plausible at this rate?"
++ The rate will go up when the already commercially available quantum computers mature.
We will have 32 pieces by 2100.
The wikipedia page you linked doesn't exactly support your optimism. QC is still closer to science fiction than to being "already commercially available".
A quote from the wiki page:
a 2023 Nature spotlight article summarised current quantum computers as being "For now, [good for] absolutely nothing".
The length of time it's taking to tablebase 8 pieces (not even with castling and en passant included) indicates that 10 pieces will not be tablebased by the year 2100.
Not even 'weakly tablebased' - which 7 pieces is.
To solve 10 pieces by 2100 - a major jump in 'ops per second' is needed for the computers.
Now T might try to spam and claim ops per second 'doesn't matter' and that it's 'nodes per second', which will further reveal his crass illogic.
The term 'quantum computers' is not going to 'get around that' either.
It's going to be funny when we see the engines of 10 or 20 years' time crush the current state of the art. Simply more computing time would be sufficient to get wins.
The length of time it's taking to tablebase 8 pieces (not even with castling and en passant included) indicates that 10 pieces will not be tablebased by the year 2100.
I don't know where you got that from, but it doesn't make sense to me. It might require 10,000 times the compute and storage of the 8 piece tablebase, which Moore's law can supply (mostly by parallelism - tablebase construction is eminently suited to independent parallel computation, because you have a separate tablebase for each combination of material, connected to the others by captures and promotions).
So it is reasonable to expect one more piece each 15 years (if anyone is sufficiently motivated) until Moore's law breaks down for both storage and parallel computation (the trend for which is very similar to the law for "cost per compute").
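Spelling that arithmetic out (a rough sketch only - the ~100x positions per added piece, the ~2-year doubling period, and taking the 7-piece Syzygy tables as finished around 2018 are all round-number assumptions):

```python
import math

# Back-of-the-envelope extrapolation - all inputs are assumptions:
# ~100x more positions per added piece, cost-effective compute/storage
# doubling every ~2 years, 7-piece tables finished around 2018.
POSITIONS_RATIO_PER_PIECE = 100
DOUBLING_PERIOD_YEARS = 2
SEVEN_PIECE_YEAR = 2018

# Years for Moore's-law growth to cover one more piece's worth of work:
years_per_piece = math.log2(POSITIONS_RATIO_PER_PIECE) * DOUBLING_PERIOD_YEARS
print(f"~{years_per_piece:.0f} years per additional piece")

for pieces in range(8, 11):
    year = SEVEN_PIECE_YEAR + (pieces - 7) * years_per_piece
    print(f"{pieces}-piece tablebase around {year:.0f}")
# -> roughly 8 pieces ~2031, 9 ~2045, 10 ~2058, consistent with
#    "one more piece each 15 years" and "2060 should suffice" below.
```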
Not even 'weakly tablebased' - which 7 pieces is.
There's no such thing. A tablebase is a strong solution of every position it contains.
To solve 10 pieces by 2100 - a major jump in 'ops per second' is needed for the computers.
No. If current trends continue, 2060 should suffice. The ratio of the number of positions per step is less than 100 and falling with each step.
Now T might try to spam and claim ops per second 'doesn't matter' and that it's 'nodes per second', which will further reveal his crass illogic.
The term 'quantum computers' is not going to 'get around that' either.
Quantum computing could get round it, but no-one has even sketched out how it could be done, and there is no strong reason to believe it will be possible. The present story of quantum computing is one of finding great difficulty scaling up, for fundamental physical reasons. Quantum computers are very different to normal computers.
It's worth remembering that technologies do reach roadblocks they cannot surpass. The PC I bought over 10 years ago had an i5-3570K overclocked to 4.4 GHz. These days typical clock speeds on processors on turbo boost are similar (my laptop boosts to 4.6 GHz), with the very fastest reaching 6.2 GHz (the record liquid-nitrogen overclock is 8.2 GHz). The processors are much faster, but this is a consequence of more advanced design, partly more transistors (Intel don't provide this info any more, but indirect information suggests perhaps 10 billion transistors to the 3570K's 1.4 billion - about 100 million per square millimetre. Intel's most powerful processor is estimated to have 26 billion, with 24 cores to my 10).
Bottom line, I would expect an approximation to Moore's law to continue for parallel computing for a while, but the speed will tail off. Every new generation requires more expensive factories, with the current generation costing about $10 billion before you make the first CPU.
If it continued to 2100, our descendants could see a 12 or 13 piece tablebase.
Note: the graph referred to here [not reproduced] only goes up to 2020. These days, the fastest progress is on increasingly parallel AI chips. The record is now 4 trillion transistors on a chip (about 80 times the largest CPU in 2020, but a different type of beast).
lmfao tygxc, even people who aren't used to your pattern of BSing are calling out your misrepresentations and lies.
@14559
However, human ICCF (grand)master WC Finalists + engines at 5 days/move have now reached perfection: 0 error/game.
again, you say this, but you have no evidence of this
"That's how far these games are from a proof of chess being a draw."
++ So in your opinion the 17 ICCF (grand)masters and their engines all collude to make exactly 1 error, never 0, never 2.
@7xz you want to defend this?
No.I think it was someone else who suggested they were all colluding. Probably the 'p' person.
no, check the comment he was responding to. this is exclusively tygxc. playerafar isnt even in this conversation.
When I said no, I meant I didn't want to defend tygxc's comment.
alright
'the computer engines did not understand the main ideas and suggested in most middlegame positions that all candidate moves were equivalent.' - Edwards
Hmm... But Edwards said this about a game that ended in a draw - just as the engines apparently assured him it would ... He refused to believe it, until it was proven true. Instead, he declared that the engines didn't understand.
But perhaps the engines understood just fine, and it was the human who actually didn't comprehend?
This highlights an issue with humans steering modern engines - psychologically, we want to believe that we still know best. Because there's an existential cliff that we're in danger of tumbling off, once we admit that engines have surpassed us to the point where we're no longer needed at all ...
QC is still closer to science fiction...
75 yrs ago so was everything else. now lookit where we are. trust me, in 75 yrs all that tyggy says WILL be true. in 1950 did they even have toasters? lol!
Moore's law is a scam. Classical computer capability is starting to level off. You can't keep compressing processor components beyond a certain amount. I doubt we'll get beyond 10 piece tablebases.
Regarding the idea of a 'checkmate tablebase' as a kind of shortcut - that's kind of interesting.
Instead of trying to crunch 'all positions' ...
the computers would make it their business to generate 'checkmate positions' and from there generate 'checkmates in x moves' positions.
Easy if they're 'one ply backwards'.
But much harder if they're 'three ply backwards'. Mate in two, in other words.
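Here's a toy sketch of that 'plies backwards' bookkeeping - an abstract game graph, not chess (a real generator also needs an 'unmove' generator and a chess-legal position encoding), but the backward ply-by-ply labelling is the same idea:

```python
from collections import deque

# Toy retrograde analysis: label positions with distance-to-mate by
# stepping backwards from the checkmate positions one ply at a time.
# successors[p] = positions reachable in one move from p.
# 'mated' = positions where the side to move is checkmated (ply 0).
# WIN/LOSS verdicts are from the perspective of the side to move.

def retrograde(successors, mated):
    # Invert the move graph: preds[q] = positions with a move into q.
    preds = {p: [] for p in successors}
    for p, nxts in successors.items():
        for q in nxts:
            preds[q].append(p)

    value = {p: ('LOSS', 0) for p in mated}   # checkmated: lost in 0 plies
    unresolved = {p: len(nxts) for p, nxts in successors.items()}
    frontier = deque(mated)
    while frontier:
        q = frontier.popleft()
        verdict, ply = value[q]
        for p in preds[q]:
            if p in value:
                continue
            if verdict == 'LOSS':
                # p has a move into a lost-for-opponent position: p wins.
                value[p] = ('WIN', ply + 1)
                frontier.append(p)
            else:
                # One more of p's moves is known to reach a position won
                # by the opponent; p is lost only when ALL of its moves
                # are accounted for.
                unresolved[p] -= 1
                if unresolved[p] == 0:
                    value[p] = ('LOSS', ply + 1)
                    frontier.append(p)
    return value  # positions never labelled have no forced mate (draws)

# Tiny example graph: 'f' is the only checkmate position.
toy = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': ['f'], 'e': ['a'], 'f': []}
print(retrograde(toy, mated=['f']))
# {'f': ('LOSS', 0), 'd': ('WIN', 1), 'b': ('LOSS', 2),
#  'c': ('LOSS', 2), 'a': ('WIN', 3), 'e': ('LOSS', 4)}
```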
----------------------------------
Could that be done from the 'front' too?
Computers to tablebase all 'checkmate positions' with 32 pieces on board.
Note that 32 pieces makes promotions impossible.
There could not be any promoted pieces on board.
And with 31 pieces only one promotion possible and so on.
People seem to think it 'matters' if a promotion is of a 'captured piece' or not.
Does it matter mathematically or in the project?
Well it means that with 31 pieces on board and one promoted piece - no capture could have happened.
Because promotion means a pawn is lost from the count.
So to have one promotion of a captured piece - maximum 30 pieces on board.
Already discussed? Probably.
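For the arithmetic part only, a tiny bookkeeping sketch (illustration: 'captures = 32 minus units on board' and 'no captures means no promotions' are the safe parts; exactly how many promotions a given number of captures can unlock is messier and left alone here):

```python
START_UNITS = 32  # 16 per side at the start

def captures_so_far(units_on_board: int) -> int:
    # A promotion swaps a pawn for a piece, leaving the total unit
    # count unchanged; only captures remove units from the board.
    return START_UNITS - units_on_board

def promotion_ever_possible(units_on_board: int) -> bool:
    # With zero captures, all sixteen pawns are still on their original
    # files, blocked by their opposite numbers, so no pawn can promote.
    return captures_so_far(units_on_board) > 0

for n in (32, 31, 30):
    print(f"{n} units on board: {captures_so_far(n)} capture(s); "
          f"promotions {'possible' if promotion_ever_possible(n) else 'impossible'}")
```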
------------------------------
Point: If discussing 'gametree' - or rather analysis 'from the front' - then 32-piece tablebasing of all checkmate positions with 32 on board looks possible.
Is stalemate possible with 32 on board?
Tough. Some such probably discovered though.
Blocked pawns stopping their own pieces moving somehow.
Ironically - it's T who posted one.
Here: https://www.chess.com/forum/view/general/stalemate-with-all-32-pieces-on-the-board#comment-62422711
Does that mean he 'knows more'? No.
Also - the 'games that get there' are less significant and more inane than the actual positions.
Why? Because 'game' makes the task many quadrillions of times more daunting than 'positions', and it's already much more than daunting enough with just positions.
It's the total pieces that matter, not which ones are promoted or not. In fact, in cases of ridiculous numbers of promotions like 5 knights vs king, those could simply be omitted as being rather obvious. Tablebases including things like Queen + Rook vs king are a waste.
Do you think 8 pieces by 2030, 9 pieces by 2050, and 10 pieces by 2100 is plausible at this rate?