Chess will never be solved, here's why

Doves-cove

my bio is legendary. lmao 🤣🤣

tygxc

To come back to the double error probability
P(double error)
= P (A errs & B misses the win)
= P (A errs) * P (B misses the win | A has erred)
< P (A errs) * P (A errs)
= P² (A errs)
= P² (single error).

P (B misses the win | A has erred) < P(A errs)
for 2 reasons:

  1. B has more information: he sees the move played by A and thus looks 1 ply deeper
  2. The error by A is more likely to result from a short thinking time of 2-5 days/move than from a long thinking time of 5-10 days/move. B is more likely to spend a normal or long thinking time of 5-10 days/move when B suspects an error by A, e.g. when the move A played is not the move B expected. B is thus more likely to spot the error by A than A is to make it.
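The algebra above is sound, but the bound rests entirely on the assumed inequality in the last step; a minimal sketch with invented numbers makes that dependence explicit:

```python
# Sketch of the claimed bound with purely illustrative (invented) numbers.
# The final inequality holds only if P(B misses | A erred) < P(A errs),
# which is precisely the assumption disputed later in the thread.
p_a = 0.01            # P(A errs) -- assumed, not measured
p_b_given_a = 0.008   # P(B misses the win | A has erred) -- assumed < p_a

p_double = p_a * p_b_given_a  # P(A errs AND B misses the win)
assert p_double < p_a ** 2    # the claimed bound P(double) < P(single)^2
```

Change `p_b_given_a` to anything at or above `p_a` and the assertion fails, which is the crux of the objections that follow.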
MARattigan
MEGACHE3SE wrote:
tygxc wrote:

"Tromp showed there were 10^41 positions with 2 or fewer promotions to pieces not previously captured" ++ Including underpromotions. The 3.28 *10^38 includes 1-2 promotions to queens not previously captured.

except for the paper where that figure comes from EXPLICITLY STATES NO PROMOTIONS OF ANY TYPE.

you either do not even READ the papers you cite or you are Lying through your teeth.

this fact has been pointed out to you repeatedly, and you are either too intellectually dishonest or too stupid to understand it.

since you masquerade as neither, you disgust me.

That's unfair.

The paper explicitly states no promotions of any type, but if you look through the content that is not what is calculated. It's the statement that is at fault.

As I mentioned before, diagrams such as this are included in the count, where there must have been promotions.

On the other hand, the paper is correct in counting diagrams (no side to move, etc.). @tygxc has arbitrarily multiplied by 10 to account for

a factor of 2 for side to move

a factor of 100 to include ply count

a factor which could be determined by reworking the figures in the paper, but hasn't been, for the addition of up to two queens

All of which would be necessary (if not sufficient) to support his argument.

But the main problem is he doesn't actually have an argument. He's consistently declined to post a flowchart or pseudocode or some exact description of the method he proposes, but he has explicitly stated that he doesn't intend to include the majority of lines in whatever he might finish up with, so regardless of how long it might take it wouldn't be a solution in any normal sense.

MEGACHE3SE
MARattigan wrote:
MEGACHE3SE wrote:
tygxc wrote:

"Tromp showed there were 10^41 positions with 2 or fewer promotions to pieces not previously captured" ++ Including underpromotions. The 3.28 *10^38 includes 1-2 promotions to queens not previously captured.

except for the paper where that figure comes from EXPLICITLY STATES NO PROMOTIONS OF ANY TYPE.

you either do not even READ the papers you cite or you are Lying through your teeth.

this fact has been pointed out to you repeatedly, and you are either too intellectually dishonest or too stupid to understand it.

since you masquerade as neither, you disgust me.

That's unfair.

The paper explicitly states no promotions of any type, but if you look through the content that is not what is calculated. It's the statement that is at fault.

As I mentioned before, diagrams such as this are included in the count, where there must have been promotions.


On the other hand, the paper is correct in counting diagrams (no side to move, etc.). @tygxc has arbitrarily multiplied by 10 to account for

a factor of 2 for side to move

a factor of 100 to include ply count

a factor which could be determined by reworking the figures in the paper, but hasn't been, for the addition of up to two queens

All of which would be necessary (if not sufficient) to support his argument.

But the main problem is he doesn't actually have an argument. He's consistently declined to post a flowchart or pseudocode or some exact description of the method he proposes, but he has explicitly stated that he doesn't intend to include the majority of lines in whatever he might finish up with, so regardless of how long it might take it wouldn't be a solution in any normal sense.

the paper is calculated as an estimate for no promotions of any type. just because a position slips through the cracks doesnt change the purpose of the calculations. changing the statement changes the calculations to be done, many of which fall outside of the previously eliminated positions.

Ur giving wayyyyy too much credit to tygxc here.

tygxc

@12031

"The paper explicitly states no promotions of any type, but if you look through the content that is not what is calculated. It's the statement that is at fault."
++ No promotions in the title is just short for no promotions to pieces not previously captured

"diagrams such as this are included in the count, where there must have been promotions"
++ Yes, also generally it is impossible to tell if a piece is original or promoted.

"On the other hand the paper is correct in stating diagrams. No side to move etc."
++ Tromp does the same and multiplies by 2 for diagrams to positions, which is correct except when a king is in check.

"@tygxc has arbitrarily multiplied by 10" ++ Not arbitrarily, but 10.9456.

"to account for a factor of 2 for side to move"
++ No, that factor 2 of diagrams to positions is undone by the factor 1/2 for diagrams to nodes, except when a position is up/down symmetrical.

"a factor which could be determined by reworking the figures in the paper for the addition of up to two queens" ++ That is exactly what I did.

A factor 3.8E41 / 1.9E40 / 4 = 4.97 for 1 extra queen
A factor 3.6E42 / 1.9E40 / 4 / 4 / 2 = 5.97 for 2 extra queens, one white, one black
Total factor 4.97 + 5.97 = 10.95
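As a side note, replaying this arithmetic with the rounded figures quoted in the post gives slightly different values (5.0, 5.92, 10.92), suggesting the 4.97 / 5.97 / 10.9456 were computed from unrounded counts; a quick check using only the numbers quoted above:

```python
# Recomputing the queen-promotion factors from the rounded figures
# quoted in the post (3.8E41, 3.6E42, 1.9E40). The post's 4.97 / 5.97
# presumably come from the unrounded counts in Tromp's data.
f1 = 3.8e41 / 1.9e40 / 4          # one extra queen
f2 = 3.6e42 / 1.9e40 / 4 / 4 / 2  # two extra queens, one per side
total = f1 + f2
print(f1, f2, total)
```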

"exact description of the method he proposes"
ICCF (grand)master + 2 servers of 90*10^6 servers, average 5 days/ply

"doesn't intend to include the majority of lines" ++ Some lines can be dismissed right away because of game knowledge and logic, like 1 e4 e5 2 Ba6? or 1 Nf3 d5 2 Ng1. The former is sure to lose, the latter might still draw, but does not even try to win and is thus logically inferior.
'Chess is a generalised trade' - Botvinnik
You can trade material, time, and position,
but giving away material, time, or position for no return can be dismissed right away.

"how long it might take"
++ ICCF WC33 Finals started 20 July 2022 and still 22 games are ongoing, 114 ended in draws.

"a solution in any normal sense" ++ It is a weak solution: it shows how to draw.
It is redundant and thus fail-safe, but not yet complete.

MEGACHE3SE

++ No promotions in the title is just short for no promotions to pieces not previously captured

except for where the calculations done in the paper disagree with you.

""exact description of the method he proposes"
ICCF (grand)master + 2 servers of 90*10^6 servers, average 5 days/ply"

thats not the calculations u make. you give only 1 node per position on the tree. a ply includes imperfect moves by definition and in programming.

" ++ Some lines can be dismissed right away because of game knowledge and logic, like 1 e4 e5 2 Ba6? or 1 Nf3 d5 2 Ng1. The former is sure to lose, the latter might still draw, but does not even try to win and is thus logically inferior."

claiming something is true isnt logic lmfao. you just assume its true based on rules of thumb. and you are repeatedly asked to actually provide a logical justification behind these claims, but you refuse to, because you have no conception of rigorous logic.

"'Chess is a generalised trade' - Botvinnik
You can trade material, time, and position,
but giving away material, time, or position for no return can be dismissed right away."

actually none of those are proven invariants to the game state, and by definition cannot be dismissed. how weak minded do you have to be to claim that you are giving away "position" (which is based on evaluation) and then claim that its logic and not evaluation???

plus the definition of a weak solution means that nothing can be dismissed.

"a solution in any normal sense" ++ It is a weak solution: it shows how to draw."

its not a proof, so it isnt a solution.

all of this has been repeated to you many times tygxc.

Kotshmot
tygxc wrote:

To come back to the double error probability
P(double error)
= P (A errs & B misses the win)
= P (A errs) * P (B misses the win | A has erred)
< P (A errs) * P (A errs)
= P² (A errs)
= P² (single error).

P (B misses the win | A has erred) < P(A errs)
for 2 reasons:

  1. B has more information: he sees the move played by A and thus looks 1 ply deeper
  2. The error by A is more likely to result from a short thinking time of 2-5 days/move than from a long thinking time of 5-10 days/move. B is more likely to spend a normal or long thinking time of 5-10 days/move when B suspects an error by A, e.g. when the move A played is not the move B expected. B is thus more likely to spot the error by A than A is to make it.

"P (A errs) * P (A errs)"

First off can you explain probability of which event is this supposed to represent?

"B has more information: he sees the move played by A and thus looks 1 ply deeper"

Yes, we discussed this. If we simplify, an error made by the engine is due to a lack of depth of x plies. What you are saying is that after the move was made, the engines of both players would suddenly change evaluation after reaching the appropriate depth.

I would argue that the range is large and on average the lack of depth is more than 1-2 ply, so most of the time it wouldn't make a difference.

"B is more likely to spend a normal or long thinking time 5-10 days/move when B suspects an error by A"

This is getting very speculative, but errors suspected initially would very rarely happen; rather, they would run into the error when the appropriate depth is reached by the engine, assuming they let it run that far.

To really comment on this specifically one would need to see some data on their time usage: do they run into time trouble, and how often do they deviate from their usual time usage? My assumption is that Grischuk-like bad time usage is rare.

In general it's hard to say whether the vast majority of errors happen outside of current engines' capabilities regardless of the think time. We certainly can't prove they don't.

MEGACHE3SE

@tygxc your waffling doesnt change the fact that you cant treat the errors as independent.

you just choose to ignore the lines where neither A nor B sees the winning line. this has been repeatedly pointed out to you.

you try to hide it under layers upon layers of fallacies but they are all just that, fallacies.

for example - ". B is more likely to spend a normal or long thinking time 5-10 days/move when B suspects an error by A"

why tf would B suspect an error from A, when A had no reason to suspect it was an error even at several days thinking time?

"e.g. when the move A played is not the move B expected." is literally just equivalent to an assumption that the errors are independent. why would B expect anything but the move that A plays, especially when at that time B has had less ply of depth.

you completely ignore the reason why A erred in the first place: The move/error was out of the depth of computation. "B is more likely to spend a normal or long thinking time 5-10 days/move when B suspects an error by A," doesnt apply because B would have no reason to suspect an error out of the depth of computation.
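This objection can be made concrete with a toy simulation (all numbers invented for illustration): if both players miss a win for the same underlying reason, namely that the refutation lies beyond reachable search depth, then the double-error probability tracks the single-error probability rather than its square:

```python
import random

random.seed(0)
N = 200_000
a_errs = doubles = 0
for _ in range(N):
    # Hidden difficulty: search depth (plies) needed to see the refutation.
    need = random.gauss(30, 5)
    depth_a = 40                  # A's effective search depth (invented)
    depth_b = 41                  # B sees A's move, so searches one ply deeper
    if need > depth_a:            # A errs
        a_errs += 1
        if need > depth_b:        # B also misses the win
            doubles += 1

p_a = a_errs / N
p_double = doubles / N
# Because both errors share a cause, P(double) stays close to P(single),
# far above the independence estimate P(single)^2.
print(p_a, p_double, p_a ** 2)
```

The extra ply helps B a little (`p_double < p_a`), but nowhere near enough to squash the double-error rate down to `p_a ** 2`.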

tygxc

@12035

"probability of which event"
P(A errs) is the probability that player A makes an error.
P (double error) is the probability that A makes an error and B misses the win.

"lack of depth is more than 1-2 ply, thus it most of the time wouldn't make a difference"
++ I say 
P (B misses the win | A has erred) < P(A errs)
but the difference may not be large.

"errors suspected initially would very rarely happen" ++ They keep a record of prior analysis. Whenever a player plays a move less expected, there is reason to think longer.

"some data on their time usage"
++ 'I never take fewer than two days and often as many as 10.' - Edwards

"do they run into time trouble"
++ Yes, it happens they use up all 50 days/10 moves and then have to reply the same day.

"My assumption is that Grischuk like bad time usage is rare." ++ It happens. If you want to outthink your opponent and his engines, then you have to spend more time than he does.

"if vast majority of errors happen out of current engines capabilities"
++ The vast majority of errors happens from human factors: dubious opening selection, hasty move, clerical error, illness... This is clear from inspection of decisive games in previous years.

VerifiedChessYarshe

What has this forum turned into? We are only here to discuss whether chess is solvable or not. I have lost track of this discussion. I wish the forum a better future.

MEGACHE3SE
VerifiedChessYarshe wrote:

What has this forum turned into? We are only here to discuss whether chess is solvable or not. I have lost track of this discussion. I wish the forum a better future.

its basically tygxc spreading misinformation while people correct him. the discussion is over, it's just that tygxc doesnt have the basic logical capacity to realize how wrong he is. this isnt my personal opinion or anything. ive literally gone to mathematicians to verify how delusional he is, and they reprimanded me for wasting their time with his stupidity.

tygxc

@12038

"We only here to discuss if chess is solvable or not."
++ For all practical purposes chess already is ultra-weakly solved and the game-theoretic value of the initial position is a draw.

Weakly solving chess is now ongoing as a by-product of the ICCF World Championship finals: 114 draws out of 114 games show how to draw.

Strongly solving chess to a 32-men table base of all 10^44 legal positions is expected by 2100 by retrograde analysis with quantum computing.

MEGACHE3SE
tygxc wrote:

@12035

"probability of which event"
P(A errs) is the probability that player A makes an error.
P (double error) is the probability that A makes an error and B missed the win.

"lack of depth is more than 1-2 ply, thus it most of the time wouldn't make a difference"
++ I say 
P (B misses the win | A has erred) < P(A errs)
but the difference may not be large.

ah yes, just assume your conclusion, even though they are providing evidence that your justification is false.

"errors suspected initially would very rarely happen" ++ They keep a record of prior analysis. Whenever a player plays a move less expected, there is reason to think longer.

this of course ignores the basic fact of why an error would occur: it's out of computational depth. tygxc just ASSUMES that errors would not be the expected move.

"some data on their time usage"
++ 'I never take fewer than two days and often as many as 10.' - Edwards

not data, just a quote.

"if vast majority of errors happen out of current engines capabilities"
++ The vast majority of errors happens from human factors: dubious opening selection, hasty move, clerical error, illness... This is clear from inspection of decisive games in previous years.

you cant prove that, as we cannot detect errors out of current engines capabilities BY DEFINITION.

MEGACHE3SE
tygxc wrote:

@12038

"We only here to discuss if chess is solvable or not."
++ For all practical purposes chess already is ultra-weakly solved and the game-theoretic value of the initial position is a draw.

by definition chess isnt ultra weakly solved (a proof of the games outcome given perfect play). A game solution is a formal proof, tygxc doesnt understand what a formal proof is, so he thinks that conventional game knowledge counts as "logic" or "proof".

Weakly solving chess is now ongoing as a by-product of the ICCF World Championship finals: 114 draws out of 114 games show how to draw.

this is a weird delusion by tygxc where he thinks the fact that the computers are drawing each other means they are perfect. Weakly solving a game is creating an algorithm (through a game tree or invariants, or both) to guarantee the best possible result against any possible opposition, and proving that said algorithm is perfect. the ICCF games, of course, have no such proof of perfection.

Strongly solving chess to a 32-men table base of all 10^44 legal positions is expected by 2100 by retrograde analysis with quantum computing.

this is based on a wild assumption that tygxc makes where he thinks that quantum computing will somehow not only grow at an exponential rate for 80 years, but also be optimized for chess. all current experts in the field disagree with these claims, but tygxc repeats it as fact for some reason.

see what i mean by lack of logical capacity?

MEGACHE3SE

tygxc cant argue against my posts, so he just downvotes them and acts like hes being victimized.

Elroch

There is no such thing as "is expected". There is such a thing as a person expecting something.

What @tygxc is saying (in a disguised way) that @tygxc expects "Strongly solving chess to a 32-men table base of all 10^44 legal positions [...] by rtetrograde [sic] analysis with quantum computing" by 2100.

This is a legitimate speculation, and I have speculated similarly myself (while noting that it might be impossible). Speculation is not a reliable forecast.

Kotshmot
tygxc wrote:

@12035

"probability of which event"
P(A errs) is the probability that player A makes an error.
P (double error) is the probability that A makes an error and B missed the win.

"lack of depth is more than 1-2 ply, thus it most of the time wouldn't make a difference"
++ I say 
P (B misses the win | A has erred) < P(A errs)
but the difference may not be large.

"errors suspected initially would very rarely happen" ++ They keep a record of prior analysis. Whenever a player plays a move less expected, there is reason to think longer.

"some data on their time usage"
++ 'I never take fewer than two days and often as many as 10.' - Edwards

"do they run into time trouble"
++ Yes, it happens they use up all 50 days/10 moves and then have to reply the same day.

"My assumption is that Grischuk like bad time usage is rare." ++ It happens. If you want to outthink your opponent and his engines, then you have to spend more time than he does.

"if vast majority of errors happen out of current engines capabilities"
++ The vast majority of errors happens from human factors: dubious opening selection, hasty move, clerical error, illness... This is clear from inspection of decisive games in previous years.

"P (B misses the win | A has erred) < P(A errs)"

I think you have something confused. This would mean that there is no tendency for error pairs at all, the opposite would be true in fact. It would mean that the initial conditions are more favourable for an error than a state where the engine at serious depth has already misevaluated the position and starting from the next turn is now supposed to follow the winning line. This shouldn't be a part of your argument and it would make no practical sense.

Also the function you made doesn't seem right as A errs * A errs is not relevant for anything.

Elroch
tygxc wrote:

++ For all practical purposes chess already is ultra-weakly solved

This is exactly as valid as the statement "For all practical purposes, uranium is cheese".

This is an embarrassing repeated blunder. I can guarantee that NO-ONE who has published a paper relating to ultra-weak solving would agree with it, nor anyone who understands what the term means or familiar with how it is used competently.

In mathematics, remarkable people are sometimes able to sketch out how a proof of a difficult result might be structured, with the proof itself taking decades of work to produce. There is nothing that could even be described as a sketch of what an ultra-weak solution of chess would look like (except for weak and strong solutions, which technically qualify as ultra-weak solutions).

tygxc

@12044

"there is no tendency for error pairs at all" ++ Indeed. Error pairs are something typical for human play with a clock. I make an incorrect sacrifice; instead of calculating, you decline the sacrifice. I allow a correct sacrifice; instead of calculating, you refrain from sacrificing.

"the initial conditions are more favourable for an error than a state where the engine at serious depth has already misevaluated the position and starting from the next turn is now supposed to follow the winning line" ++ Yes, with understanding that both human players are different, their engines and their settings are different, and the time per move is usually different.

"A errs * A errs is not relevant for anything"
P(double error) < P²(single error) is relevant.
For example 114 draws out of 114 games means that if game 115 were decisive then P(single error) = 1/115, and thus P(double error) = 1/115/115 = 0.008%

If the 114 draws were to include 1 game with a double error,
then 11 games with a single error, i.e. decisive games would be expected.
If the 114 draws were to include 2 games with a double error,
then 15 games with a single error, i.e. decisive games would be expected.
If the 114 draws were to include 3 games with a double error,
then 18 games with a single error, i.e. decisive games would be expected.
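For what it's worth, the 11 / 15 / 18 figures do follow from the model implied here: if a decisive game occurs with probability p and a double-error game with probability p², then d hidden double-error games among 114 draws implies 114·p² = d, hence p = sqrt(d/114) and 114·p expected decisive games. A sketch of that arithmetic (this only reproduces the model's numbers, not its validity):

```python
import math

# Implied model: P(decisive game) = p, P(double-error game) = p^2.
# If d double-error games hide among the 114 draws, then 114 * p^2 = d,
# so p = sqrt(d / 114), and 114 * p decisive games would be expected.
expected = {d: round(114 * math.sqrt(d / 114)) for d in (1, 2, 3)}
print(expected)  # {1: 11, 2: 15, 3: 18}
```

The whole chain still stands or falls with the disputed assumption P(double) = P(single)².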

MEGACHE3SE
tygxc wrote:

@12044

"there is no tendency for error pairs at all" ++ Indeed. Error pairs are something typical for human play with a clock. I make an incorrect sacrifice; instead of calculating, you decline the sacrifice. I allow a correct sacrifice; instead of calculating, you refrain from sacrificing.

.... thats literally exactly what would happen in an ICCF engine game too lmfao. its just that engine power outstrips human power.

"the initial conditions are more favourable for an error than a state where the engine at serious depth has already misevaluated the position and starting from the next turn is now supposed to follow the winning line" ++ Yes, with understanding that both human players are different, their engines and their settings are different, and the time per move is usually different.

thats the same for human games too lmfao, that doesnt make errors independent. or what, do they not meet your arbitrary and ever changing definition of "sufficiently strong"?

"A errs * A errs is not relevant for anything"
P(double error) < P²(single error) is relevant.
For example 114 draws out of 114 games means that if game 115 were decisive then P(single error) = 1/115, and thus P(double error) = 1/115/115 = 0.008%

making calculations off of what has been pointed out to be a faulty variable isnt logic, but delusion.

If the 114 draws were to include 1 game with a double error,
then 11 games with a single error, i.e. decisive games would be expected.
If the 114 draws were to include 2 games with a double error,
then 15 games with a single error, i.e. decisive games would be expected.
If the 114 draws were to include 3 games with a double error,
then 18 games with a single error, i.e. decisive games would be expected.

WOW, SUCH INCREDIBLE GUESSES ON YOUR END. IF ONLY THEY WERE IN THE FORM OF MATHEMATICAL PROOF CONSIDERING HOW YOU CLAIM IT AS SUCH.

ah yes, continue to dodge the core argument you respond to and instead take a side comment they made out of context tygxc. that'll be sure to convince them.