Chess will never be solved, here's why

DiogenesDue
tygxc wrote:

@14575

"Do you think 8 pieces by 2030, 9 pieces by 2050, and 10 pieces by 2100 is plausible at this rate?"
++ The rate will go up when the already commercially available quantum computers mature.
We will have 32 pieces by 2100.

Incredibly unlikely. Like lottery-level unlikely. A 12-15 piece tablebase might be workable.

Elroch
EndgameEnthusiast2357 wrote:
Elroch wrote:
playerafar wrote:

The length of time it's taking to tablebase 8 pieces (not even with castling and en passant included) indicates that 10 pieces will not be tablebased by the year 2100.

I don't know where you got that from, but it doesn't make sense to me. It might require 10,000 times the compute and storage of the 8 piece tablebase, and Moore's law can cover that, mostly by parallelism: tablebase construction is eminently suited to independent parallel computation, because you have a separate tablebase for each combination of material, connected to the others by captures and promotions.

So it is reasonable to expect one more piece each 15 years (if anyone is sufficiently motivated) until Moore's law breaks down for both storage and parallel computation (a trend very similar to the law for "cost per compute").
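The material-combination decomposition can be pictured concretely. Here is a hypothetical sketch (the function and the signature format are my own illustration, not taken from any actual tablebase generator): positions in one table only ever flow, via captures or promotions, into tables for other material signatures, so those dependency tables must be built first, while unrelated signatures can be built in parallel.

```python
def successors_after_capture(white, black):
    """Material signatures reachable when White captures one Black piece.

    Signatures are plain strings like 'KQ' / 'KR'. Illustrative only: a
    real generator also tracks promotions and captures of White pieces.
    """
    results = set()
    for i, piece in enumerate(black):
        if piece == 'K':
            continue  # kings are never captured
        results.add((white, black[:i] + black[i + 1:]))
    return results

# KQ vs KR: capturing the rook flows into the KQ-vs-K table, which must
# therefore be finished before the KQ-vs-KR table can be completed.
print(successors_after_capture('KQ', 'KR'))  # → {('KQ', 'K')}
```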

Not even 'weakly tablebased' - which 7 pieces is.

There's no such thing. A tablebase is a strong solution of every position it contains.

To solve 10 pieces by 2100 - a major jump in 'ops per second' is needed for the computers.

No. If current trends continue, 2060 should suffice. The ratio of the number of positions per step is less than 100 and falling with each step.
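The arithmetic behind that date is easy to check. Assuming, as the post does, at most ~100x more positions per added piece, and assuming compute/storage per dollar keeps doubling roughly every 2 years (both are assumptions, not measurements), each extra piece costs about log2(100) x 2 ≈ 13 years:

```python
import math

GROWTH_PER_PIECE = 100   # assumed ratio of positions per added piece
DOUBLING_YEARS = 2       # assumed Moore's-law doubling time

years_per_piece = math.log2(GROWTH_PER_PIECE) * DOUBLING_YEARS
print(round(years_per_piece, 1))  # → 13.3

# Extrapolating from 8-piece work in the mid-2020s; 10 pieces lands in
# the early 2050s, broadly consistent with the 2060 estimate above.
year = 2025.0
for pieces in range(9, 13):
    year += years_per_piece
    print(pieces, "pieces around", round(year))
```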

Now T might try to spam and claim ops per second 'doesn't matter' and that it's 'nodes per second', which will further reveal his crass illogic.
The term 'quantum computers' is not going to 'get around that' either.

Quantum computing could get round it, but no-one has even sketched out how it could be done, and there is no strong reason to believe it will be possible. The present story of quantum computing is one of finding great difficulty scaling up, for fundamental physical reasons. Quantum computers are very different to normal computers.

It's worth remembering that technologies do reach road blocks they cannot surpass. The PC I bought over 10 years ago had an i5-3570K overclocked to 4.4 GHz. These days typical turbo-boost clock speeds are similar (my laptop boosts to 4.6 GHz), with the very fastest reaching 6.2 GHz (the record liquid-nitrogen overclock is 8.2 GHz). Processors are much faster now, but as a consequence of more advanced design and partly of more transistors: Intel don't provide this info any more, but indirect information suggests perhaps 10 billion transistors against the 3570K's 1.4 billion, about 100 million per square millimeter. Intel's most powerful processor is estimated to have 26 billion, with 24 cores to my CPU's 10.

Bottom line, I would expect an approximation to Moore's law to continue for parallel computing for a while, but the rate of progress will tail off. Every new generation requires more expensive fabrication plants, with the current generation costing about $10 billion before you make the first CPU.

If it continued to 2100, our descendants could see a 12 or 13 piece tablebase.

Note: this graph only goes up to 2020. These days, the fastest progress is on increasingly parallel AI chips. The record is now 4 trillion transistors on a chip (about 80 times the largest CPU in 2020, but a different type of beast).

Moore's law is a scam. Classical computer capability is starting to level off. You can't keep compressing processor components beyond a certain amount. I doubt we'll get beyond 10 piece tablebases.

Moore's Law was never a scam - it was a non-profit making observation that turned out to accurately describe the technological progression for several decades.

It has however ended. It was over 10 years ago that it became difficult to make transistors either smaller or faster. We are stuck around 100 million transistors per square millimeter now.

But GPUs, first driven by the requirements of gaming and now by the requirements of AI, are still making strides forward, with core counts and parallel processing power marching on. The latest AI chip has 6 trillion transistors if I recall (thousands of cores, each like a cheap low-power processor with relatively simple functionality).

Energy is becoming a constraint. It's the sheer number of cores that makes AI machines very power hungry. The key to lowering power requirements is mainly to work with very small voltages, but there are limits without getting out the liquid nitrogen.

playerafar

@Elroch
Tablebases aren't including castling and en passant - so they're weak.
You've claimed they 'didn't bother'.
So we disagree.
Regarding 9 pieces and 10 pieces ...
difficulty increases as the number of pieces increases.
------------------------------------
By your 'logic' the jump from 31 pieces to 32 pieces would be no more difficult than the jump from 7 to 8.
I disagree.
However, we can't foresee that there won't be a major improvement in computing speed, or in the number of computers (money) applied to the projects.
As for 'Moore's Law' that appears to be a measure of what's happened but doesn't necessarily forecast what will.
Gallium arsenide and similar doesn't care about Moore's Law.
But whatever improvements in speed do happen, they just can't make a big enough dent in that 4 x 10^44 number, which looks 'compact' but is extremely daunting.

DiogenesDue

Weak and strong do not apply, so let's not muddy up that terminology any more than Tygxc et al already muck it up. Tablebases that skip castling and en passant are incomplete. It's unfortunate that this is so, but logically, when the first tablebases were being created it didn't make much sense to include those moves. I suspect once we hit about 10-12 pieces, there will be a call to go back and start parallel efforts to catch up and complete the tablebases. That will also be a huge effort in terms of calculation by the time you do reach enough pieces, obviously, so better to start early.

Gallium arsenide would suffer the same type of curve that is behind Moore's Law, which is generally applied to silicon, and gallium arsenide fabrication is way behind now (I worked for one of the only gallium arsenide companies in Silicon Valley in the early 90s, and it went under while waiting for Moore's Law to flatten out for silicon...), so it would be a step back before it's a step forward. There's no significant infrastructure there at all.

playerafar

You know I made a mistake in something I said several hours ago.
And nobody caught it.
Lol. Hahahaah.
Maybe I should delete the post before somebody does?
Or edit it?
I like to improve my posts so I'll probably edit it.
Then later I'll say what the mistake was.
----------------
It was quite a blunder.
And I've fixed it now.
If you had 32 pieces on board with no captures or promotions yet - 
and you promote a pawn - then there's still 32 pieces on board!
I was thinking 31 !
Lol. Quite a blunder.
But then again that business of 'promotions to a captured piece'.
In the example I gave - you couldn't do that because no pieces have been captured. You also couldn't do it if only pawns have been captured.
And obviously if you've got 31 pieces on board because a non-pawn was captured then after promotion to that captured piece you've still got 31 while being minus a pawn on board.
Yes. Obvious.
But what is the significance of 'promoting to a captured piece' or not doing so?

Elroch
playerafar wrote:

@Elroch
Tablebases aren't including castling and en passant - so they're weak.

Both of these are cheap inclusions.

And what is this? (see that move at the top right?)

playerafar

Elroch you're never going to convince me that tablebasing while skipping castling and en passant 'isn't weak'.
So suggest - don't try.
Except you might convince somebody else. Or perhaps several agree with you.
Yes - you might know ten times as much about statistics as everybody here combined - and I mean that.
But skipping en passant and castling is Weak.
Skipping the 50 move rule and 3 fold repetition is somewhat Weak but much less so.

IsniffGas
EndgameEnthusiast2357 wrote:
...

Moore's law is a scam. Classical computer capability is starting to level off. You can't keep compressing processor components beyond a certain amount. I doubt we'll get beyond 10 piece tablebases.

NOT U AGAIN /j

Elroch
playerafar wrote:

Elroch you're never going to convince me that tablebasing while skipping castling and en passant 'isn't weak'.
So suggest - don't try.

Wake up! I just posted a position from a tablebase with en passant being possible and told you so.

Neither of those rules is expensive to include.

Also "weak" is a bad word to use. Your usage is non-standard, the word has a standard meaning in the solving of chess, and it is neither easy to guess what you mean by it nor, IMHO, at all a good choice of word. Far better to refer explicitly to any specific deficiency of a specific tablebase, which you have not done. If you want me to do it for you: Syzygy does not deal with castling rights. Not a design decision I would have made, but there it is.

If you want a position with 7 pieces that deals with e.p. correctly, here you go:

https://syzygy-tables.info/?fen=3qkb2/8/8/8/4Pp2/8/8/3QK3_b_-_e3_0_1

MEGACHE3SE

yeah weak refers to a specific type of incompleteness, while you are just positing incompleteness itself

playerafar
MEGACHE3SE wrote:

yeah weak refers to a specific type of incompleteness, while you are just positing incompleteness itself

I prefer to use 'weak' in a common sense of the word - instead of as jargon.
Yes - scientific terminology has its place. Must be.
But - the fact that 'a tablebase' happened to include en passant (but not castling) does not contradict what I'm saying.
Which is that tablebases that don't include both en passant and castling as they try to progress from 7 pieces to 8 pieces and 9 and 10 and so on are Weak.
The fact they 'didn't bother' is no excuse.
Castling and en passant are just too basic to the game to be excluded or 'not bothered with'.
----------------------------------------
Yes - you each know more math than I have both forgotten and remembered, but you're not going to convince me that such 'skipping' tablebases are Strong.
They're not Strong. So they're weak.
We could waste a lot of energy. But that's unlikely.
----------------------------
I think you both understand the logic I'm getting at.
But I'm not the T-person nor the O-person.
Which means I don't need for such points to be my 'life's work' nor to personalize it.
You don't either. Right?
That's a question. A friendly one.
But no need to answer. Questions aren't thunder.
And the disagreement is minor - that's if it exists at all.

Elroch

Jeez. I JUST TWICE POINTED OUT THAT Syzygy DEALS WITH EN PASSANT COMPLETELY AND GAVE EXAMPLES.

You seem to be trying to merit a one letter abbreviation.

playerafar
Elroch wrote:

Jeez. I JUST TWICE POINTED OUT THAT Syzygy DEALS WITH EN PASSANT COMPLETELY AND GAVE EXAMPLES.

You seem to be trying to merit a one letter abbreviation.

And you just told me it skips castling. By omission.
Permit me to be right.
Lol!
Put on a happy face Elroch.
Life is short.
You wouldn't be bored to tears if you were right 100% of the time?
Well maybe not. That could be Interesting.
But please remember - you're not the T-person nor the O-person.
Which means you don't need to be 'right'.
Right?

playerafar

I want to repeat the joke I just told in your forum E.
An article on a search page I just brought up stated that one third of persons aged 85 have some dementia.
But on the same page another article stated that the rate of dementia for seniors doubles every five years.
Going by the two articles that would mean:
At age 90 two thirds of seniors would have some dementia.
And at age 95 133% of seniors would be demented at least somewhat.
133%. Whaatt??
Hey Kool! 133%! Far out! Gnarly! Copacabana!
prediction with 60% confidence - Elroch will 'apply' ... that's right.
Let's see if the 60 'comes in'.

IsniffGas
playerafar wrote:

I want to repeat the joke I just told in your forum E.
An article on a search page I just brought up stated that one third of persons aged 85 have some dementia.
But on the same page another article stated that the rate of dementia for seniors doubles every five years.
Going by the two articles that would mean:
At age 90 two thirds of seniors would have some dementia.
And at age 95 133% of seniors would be demented at least somewhat.
133%. Whaatt??
Hey Kool! 133%! Far out! Gnarly! Copacabana!

never speak again

playerafar

I just did.

MARattigan
Elroch wrote:

Jeez. I JUST TWICE POINTED OUT THAT Syzygy DEALS WITH EN PASSANT COMPLETELY AND GAVE EXAMPLES.

You seem to be trying to merit a one letter abbreviation.

P: Not even 'weakly tablebased' - which 7 pieces is.

E: There's no such thing. A tablebase is a strong solution of every position it contains.

I'd say P would have been correct had he meant by "weakly tablebased" providing a weak solution of every position it contains. He was responding to @tygxc who talks only of solving versions of chess that include 50M/3R positions because he thinks basic rules chess is insoluble.

P has since made it plain that he didn't mean that but intends (like a certain absent friend) to avoid jargon (meaning invent his own terms without any necessity of definition so that everyone can understand them).

E is definitely incorrect in that context. Syzygy provides a weak solution of every position it contains under competition rules. It provides a strong solution of virtually no positions it covers that are not already mate or dead under competition rules, though it does provide a strong solution of every position it contains under basic rules.

It's also a pretty safe bet that no tablebase (not even for KRvK) that provides a strong solution of every position it contains under competition rules will be produced by 2100, if only because nobody will attempt it.

MARattigan
Thee_Ghostess_Lola wrote:
Thee_Ghostess_Lola wrote:

for some side fun ?...& since Goldbach was brought up ?...solve:

x^y = y^x

where x ≠ y...nor the 2,4 pair...nor non-integers

good luck all u math olympians !

assign: y = nx

do hobuncha manipulatives & u end up here:

x = n^(1/(n-1))   [the (n-1)th root of n]

y = n^(n/(n-1))   [the (n-1)th root of n^n]

then: pick any natural # ur charming little self desires...and x^y = y^x works !

****

now...try: ∞ (lol !)

Nobody appears to have corrected this so far, so I'll give you a clue.
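For readers decoding the root notation above: read as exponents, the parametric family is x = n^(1/(n-1)), y = n·x = n^(n/(n-1)). A quick numerical check (Python, my own illustration of the post's derivation):

```python
import math

def xy_pair(n):
    """Solutions of x**y == y**x along the line y = n*x.

    Substituting y = n*x gives x**(n*x) = (n*x)**x, i.e. x**n = n*x,
    so x = n**(1/(n-1)) and y = n*x = n**(n/(n-1)).
    """
    x = n ** (1 / (n - 1))
    return x, n * x

for n in (2, 3, 5):
    x, y = xy_pair(n)
    assert math.isclose(x ** y, y ** x)  # the identity holds

print(xy_pair(2))  # → (2.0, 4.0), the one integer pair
```

Note that n = 2 gives exactly the excluded (2, 4) pair, and every other natural n gives an irrational pair; since the puzzle also excluded non-integers, the family yields no admissible answer, which may be the clue being hinted at.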

playerafar

Can people 'using the jargon' ever convince others that skipping castling or en passant or both is Ever 'strongly solving'?
There's a thing called 'friendly disagreement' - with neither side spamming and both sides having a reasonable position ...
that's in intense contrast with 'T' and 'O'.
So if 'disagreement' is 'essential' to healthy discussion - friendly disagreement Can Do. But maybe it's only going to be 'Could Do.'
I had a 'friendly disagreement' with Dio several days ago about something concerning 'luck in chess'.
It's over.
-----------------------------
Conclusion: neither side will budge a millimeter but the friendly disagreement appears to be about the semantics of 'luck' and 'in' and 'chess'.
A slight improvement would have occurred if the topic had been set up as 'chess has luck' ... 'has' instead of 'in'. But the semantics issues of what 'chess' means and what 'luck' means would still have continued.
Why mention here? Because I think that topic is relevant.
The whole forum doesn't have to just be about 'exposing and refuting T' (which constantly goes on, and a good thing too)
-------------------------------
Regarding weak and strong and weakly and strongly, I maintain that the usage of those terms is better if their common meanings are used.
I believe I'll continue to be in friendly disagreement with those who maintain that the usage should be jargon-usage.
Perhaps better would have been if whoever set up the terms had used the words 'purely' and 'imperfectly'. Much better.
Then the terms not only become better - they lead better into 'scalar and generic', instead of 'binary and technical'.

MARattigan
playerafar wrote:

Can people 'using the jargon' ever convince others that skipping castling or en passant or both is Ever 'strongly solving'?
There's a thing called 'friendly disagreement' - with neither side spamming and both sides having a reasonable position ...
...

As Elroch pointed out, tablebases don't skip e.p. They don't address positions with castling, just as they don't at present address 12 man or 32 man positions. That shouldn't be a bar to persuading people that they strongly solve the positions they do address so long as it's true.

In fact it's not true under competition rules but it is true under basic rules. And it is true that they weakly solve the positions they address under competition rules; there should be no bar to persuading people of that either.

It does require that people understand what you're trying to persuade them of. That is they need to learn the jargon. It's there to avoid people talking at cross purposes.

If you have a conversation where each side means different things by the words in use then all you are likely to finish up with is a disagreement (friendly or otherwise).

You later refer to the common meanings of the terms "weakly" and "strongly". There are no commonly agreed meanings to those terms in this context apart from what you refer to as the jargon.