Chess will never be solved, here's why

Avatar of Optimissed
DiogenesDue wrote:
Optimissed wrote:

[Removed: Offensive] ~W

You don't seem to understand engines, or programming. You are misinterpreting what happened here, as you often do, but usually when trying to discern the motivations of human beings.

I would remind you what Wind said only yesterday. You seem to have forgotten.

Haven't read it. Should I?? I don't as a rule read back to find what I may have missed.

Let me remind you that all was peaceful and respectful here in this thread, until you lot decided on a trolling expedition. So I vamoosed, only to return and find Elroch repeatedly insulting tygxc. I reported it and nothing happened. In fact, the trolling got worse, with the profile called Mega insulting tygxc with every post.

If you are sincere about improving the general behaviour here, don't blame others for the behaviour of yourself and your friends. After reporting bad behaviour on and off for three months with nothing happening, I finally renewed my old correspondence with erik and told him what I think the situation is. I hadn't talked to him for over 8 years.

Avatar of playerafar
Wind wrote:

Hi all! Hope you're having a good time.

Please let's try to keep the thread relevant to the theme proposition without resorting to endless quote-requote rude remarks and personal attacks. It's really unpleasant trying to get involved in an interesting discussion while there are so many pointing fingers that lead nowhere and drive the topic away from a creative debate.

Appreciate your understanding, have an amazing rest of the week!

Link to that post:
https://www.chess.com/forum/view/general/chess-will-never-be-solved-heres-why?page=742#comment-104581067
An issue is that a particular person (not I) will go all out to bait other posters and draw criticism, and then go all-out on his trolling and personal attacks - feeling he can then counter-report.
And thereby deter criticism of his posts and get a double standard.
For ten years now.
Should we care? No.
But 'being passive' doesn't follow from that though.

Avatar of ardutgamersus
DiogenesDue wrote:
Optimissed wrote:
ardutgamersus wrote:
Optimissed wrote:

Really really tired and didn't want to play in this match tonight but I'd said I'd play and it was an away match. In the event, the other six of our team all lost. I played quite well according to the analysis thing here. Made two weaker moves but was never in the negative figures. The analysis learned a lot from my moves, actually. Several times it revised its best move according to the strength of some of the moves I played. Just sayin'. Anyway, after my exchange sac he had too many loose pawns lying around and I won in another 20 moves. Time control was too fast for me. All moves in 80 minutes plus 15 seconds / move.

stockfish probably re-evaluated its moves to adjust to yours, which it considered good but not quite as good as possible in some cases

I kept going out and coming back in and in some cases it still preferred my moves and in others it reverted to its own, where you have to play your move, let it think for a bit and when the score goes up, go backwards and then forwards again. So it seemed to have a bit of learning ability, which surprised me. Maybe something stuck in a cache though.

Only you could think that Stockfish is "learning" from your moves. If you let it sit, it comes up with the best move. Sometimes it takes seconds and sits tight, other times it takes minutes or hours to change its evaluation. Rest assured, none of that is predicated on your choices, at all.

it's not learning, it's updating its playstyle to yours, or better said, adapting its moves, which means it is learning. a move that could have been a blunder can now be a brilliant move, so it is learning.

Avatar of Optimissed
Kotshmot wrote:
Optimissed wrote:
Kotshmot wrote:
tygxc wrote:

@14876

"it's easy to understand why errors will come in pairs." ++ Some errors, but not all errors.

It has no relevance, whether exactly all errors come in pairs. Of course there's always a chance for a single error game, provided that the first error is punished with flawless play. Nobody can argue against that.

What is relevant:

If paired-error games are more common than single-error games (which would make sense according to evidence and logic), then in 114 ICCF draws the expected number of games with paired errors is >0.

This possibility should be accepted.

Doesn't this rely on the idea that better engines are going to be developed which will discover errors in games played at 5 days per move with GMs supervising the strong engines that are already used?

At the normal level, most errors are picked up by the weak engine they have here in analysis. I obviously mean "most" and not "all". I could post a game where the analysis here and "game review" dropped a win, whereas playing at 3 days per move, obviously with no help from engines, I played a series of moves in a difficult ending which DIDN'T drop the win, after my opponent blundered by pushing his kingside attack one move too many and was one move late getting his pieces back to defend. The engine here hadn't a clue how to play the resultant ending and would only have drawn.

But all told, the chances that a better engine will be able to outplay a current engine enough to win are naturally falling. As we go forward in time, an engine from year y will be able to get an increasing proportion of draws against an engine from year y + 5, until radically different algorithms are developed, perhaps along the lines I've suggested, or faster computers using different hardware appear. Will quantum computers be viable enough to allow real AI instead of the "apparent AI" we have at the moment?

That's another interesting conversation.

For now we are just breaking down what can or rather cannot be concluded, in terms of total errors being made, from the fact that the sample contains 0 decisive games.

What happens when future engines, perfect or not, are introduced and meet today's ICCF finalists? The options are:

1. Future engines will be able to discover errors from the sample of drawn games we are looking at today and are decisively stronger. Possible and can't be excluded.

2. Future engines will not discover any errors in the referred 114-game sample. However, in a large sample of games future engines would beat today's ICCF finalists. They would consistently find the most challenging lines, leaving fewer drawing lines available every turn, until our current finalists run out of depth. Eventually this would lead to an error and a decisive game.

3. ICCF finalists with 5 days of average time per turn are strong enough to always draw against future engines. This is what Tygxc thinks. This is unlikely if not impossible, depending on sample size. The reason is, we know chess games are much "deeper" than today's players are able to process. This factually leaves room for error, and I see every reason to believe that eventually a position would be reached where the slightest misevaluation is made, when the available options are at their fewest.

It's an interesting discussion, what is most likely the case today. These "options" can be broken down into more detailed possibilities.
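Kotshmot's expectation argument above can be put in rough numbers. A minimal sketch, assuming (purely for illustration) that each draw independently contains a paired error with some probability p; both the independence assumption and the p values are mine, not the thread's:

```python
# Hypothetical sketch: expected number of paired-error games among 114 draws,
# modelling each draw as independently containing a paired error with
# probability p (an invented, illustrative figure).
def expected_paired_error_games(n_draws: int, p: float) -> float:
    """Mean of a Binomial(n, p) count: n * p."""
    return n_draws * p

# Even a modest per-game probability gives an expectation well above zero.
print(expected_paired_error_games(114, 0.05))  # 5.7
print(expected_paired_error_games(114, 0.01))  # about 1.1
```

Under any such model, "paired-error games exist at a non-trivial rate" implies the expected count in 114 draws exceeds zero, which is the point being made.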

I find it less interesting than perhaps you do. I'll tell you why, if I may. Often, when we reach firm conclusions regarding methodology, it's the result of a very fundamental bias, regarding the dichotomy, which exists at a fundamental level in our cognitive apparatus, between the inductive and deductive approaches. However, we should know that deduction can only be carried out if a set of "givens" is coherent enough that a logical structure can be constructed from them. The "givens" don't have to be true .... just valid. A lack of truth in the premises affects the truth of the conclusion to some degree at least but it's still a valid conclusion. It just might not be right.

Somewhere along the way, mathematicians have invented axioms which are assumed correct and act as the fundamental givens when a set of premises is investigated and assessed. Unfortunately, whilst some of these axioms are basic and obviously reasonable, in the world of mathematics there's been a tendency to invent artificial axioms which supplement the basic ones and which are used to give incontrovertible finality to the pronouncements of some mathematicians, even if those pronouncements are suspect or even wrong. It's as if the mathematicians have forgotten that philosophers exist to add a counterbalance and if necessary to criticise mathematical thinking when it gets a bit too carried away with its own glory.

To put that into context, there was a discussion regarding the odds against the errors in 114 games all occurring in pairs. I know that I showed Elroch's judgement to be biassed towards accepting as true that which he wanted to accept as true, without applying to it the criticism and scepticism which he applies to arguments with which he disagrees. He hasn't responded to my refutation, so far as I know. Double standards aren't a good way to approach these discussions. Wanting to win rather than being open to likely truths isn't good, either.

In my opinion, tygxc's argument about error pairs wasn't refuted. A refutation would have depended on an understanding of error distribution in typical, real games. But that understanding doesn't exist, since it cannot be achieved until chess is solved. This discussion is about "trying to solve chess" and the refutation of tygxc's argument depended on assumptive thinking .... that chess is solved and we understand the error distribution.

So far, the best evidence we have is the 114 games. I'm not arguing that the evidence is sufficiently compelling for us to be deductively sure that chess is a draw: just that it's the best evidence we have, so far. I have pointed out that it is necessary to look at much longer games using different strategies before we can be completely sure and even then, the certainty is more of an inductive nature. But then, most of the foundations of deductive reasoning are inductive too.

Avatar of Elroch
tygxc wrote:

@14816

"114 games all drawn does mean that the odds against the errors all occurring in pairs is astronomical"
++ If all 114 games are drawn, then all 114 games contain an even number of errors: 0, 2, 4.

The most plausible error distribution is 114-0-0-0-0.

While the statement about the even number of half-point errors is provable, the next appears to be no more than a statement of how plausible it seems to you and, as such, of no significance.

If this is not so, provide the reasoning why the probability that this is true is higher than its negation.

However, a few games with a pair of errors: e.g. 112-0-2-0-0 cannot be excluded.
There can be no substantial number of games with a pair of errors, e.g. 60-0-54-0-0,
because then there would be at least 1 game with 1 unpaired error, i.e. a decisive game.

State your reasoning clearly, including your assumptions. To everyone here with any knowledge of probability theory, it seems as though you are making the beginner's error of assuming probabilities are independent without this being possible to justify.

"odds for all of the errors occuring in pairs in these 114 games is necessarily tiny"

Based on what assumptions? Only wrong ones can be guessed.

++ Assume game 115 were decisive, not due to a clerical error or illness.
Then the odds of 1 error = 1/115. Thus the odds of a pair of errors = (1/115)² = 0.008%.

Finally you make your assumptions clear enough not to need you to respond to requests for clarification. You are assuming a Poisson process. This cannot be justified.

The odds could be slightly more if there is a tendency for errors to come in pairs.

No, the odds could be any amount more if there is a tendency for errors to come in pairs.
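The disagreement here can be shown numerically. A sketch, using tygxc's assumed 1/115 single-error rate; the 0.9 conditional figure is an invented illustration of "errors coming in pairs", not a claim about real games:

```python
# Sketch of why the independence assumption matters for the (1/115)^2
# figure quoted above. All numbers besides 1/115 are illustrative
# assumptions, not measurements.
p_single = 1 / 115  # tygxc's assumed per-game single-error rate

# Independent model: the second error is no more likely than the first.
p_pair_independent = p_single * p_single  # (1/115)^2, about 0.0076%

# Correlated model: "errors come in pairs" means the compensating second
# error is likely once a first error exists (0.9 is an invented figure).
p_second_given_first = 0.9
p_pair_correlated = p_single * p_second_given_first  # about 0.78%

print(f"independent: {p_pair_independent:.6%}")
print(f"correlated:  {p_pair_correlated:.4%}")
```

With these invented numbers the correlated pair probability is over a hundred times the independent one, which is the sense in which "the odds could be any amount more".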

"is it more likely that the winning line found or missed by player 2 in this particular instance?"
++ Player 1 is more likely to make an error than player 2 is to miss the win after it.
The reason is player 2 has 1 ply more information: he knows the move played by player 1,

Correct. But we should be more precise. Player 2 can ignore all the positions that could be reached from alternative moves by player 1 but are not reached in other lines by transposition.

while player 1 was considering several candidate moves. Player 2 looks 1 ply deeper than player 1 did, even with equal hardware, software, and time per move.

Another issue is the time per move. They have 50 days for 10 moves, but are free to spend it as they see fit. If player 1 spends 2 days on his move, and player 2 spends 10 days on his reply,
then player 2 is more likely to spot the error made by player 1.

Correct as stated, but this does not quantify how much more likely he is to spot the error. Slightly simplifying, you can think of it being that a given error requires some specific number of ply to avoid. There are errors that require 1 ply, 2 ply, ... N ply, and so on. We have only a very loose upper bound on N. So if player 1 errs and analysed to depth M, we know that the error is one that requires M+1, M+2, or some larger number of ply up to the maximum possible, but we do not know which of those numbers it is.

What we need is good knowledge of the distribution of the function of possible errors defined by:

f(error) = number of ply analysis needed to avoid the error

over the set of positions that occur in the games of interest.

Suffice it to say we do not have knowledge of this. It is unfortunately not feasible to find it without having access to an oracle.

What we could examine is errors made by engines in tablebase positions. The above distribution could be calculated for such positions. But this would be a major research project, and would be invalidated to some extent by any updates to the engines being studied.
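Elroch's f(error) idea can be sketched with an invented distribution. All the numbers below are hypothetical, since, as the post notes, the real distribution is unknown without an oracle:

```python
# Sketch of the f(error) distribution idea above, with made-up values:
# f(error) = number of ply of analysis needed to avoid the error.
# If we knew the distribution of f over real positions, we could estimate
# how often a depth-M search avoids the errors it meets.
hypothetical_f_dist = {
    # required ply -> fraction of errors needing exactly that depth
    10: 0.40, 20: 0.30, 30: 0.15, 40: 0.10, 60: 0.05,
}

def p_error_avoided(search_depth_ply: int) -> float:
    """Probability a random error is avoidable at the given search depth."""
    return sum(p for need, p in hypothetical_f_dist.items()
               if need <= search_depth_ply)

print(p_error_avoided(20))  # about 0.7 under this invented distribution
print(p_error_avoided(60))  # about 1.0
```

The research project described (measuring this over tablebase positions) would amount to replacing the invented dictionary with empirical frequencies.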

Avatar of Optimissed
ardutgamersus wrote:
DiogenesDue wrote:
Optimissed wrote:
ardutgamersus wrote:
Optimissed wrote:

Really really tired and didn't want to play in this match tonight but I'd said I'd play and it was an away match. In the event, the other six of our team all lost. I played quite well according to the analysis thing here. Made two weaker moves but was never in the negative figures. The analysis learned a lot from my moves, actually. Several times it revised its best move according to the strength of some of the moves I played. Just sayin'. Anyway, after my exchange sac he had too many loose pawns lying around and I won in another 20 moves. Time control was too fast for me. All moves in 80 minutes plus 15 seconds / move.

stockfish probably re-evaluated its moves to adjust to yours, which it considered good but not quite as good as possible in some cases

I kept going out and coming back in and in some cases it still preferred my moves and in others it reverted to its own, where you have to play your move, let it think for a bit and when the score goes up, go backwards and then forwards again. So it seemed to have a bit of learning ability, which surprised me. Maybe something stuck in a cache though.

Only you could think that Stockfish is "learning" from your moves. If you let it sit, it comes up with the best move. Sometimes it takes seconds and sits tight, other times it takes minutes or hours to change its evaluation. Rest assured, none of that is predicated on your choices, at all.

it's not learning, it's updating its playstyle to yours, or better said, adapting its moves, which means it is learning. a move that could have been a blunder can now be a brilliant move, so it is learning.

You misunderstand how its analysis works. It wouldn't give my moves a higher score than its own if its own were better, so it isn't about playstyle. These engines are fast, but the reason we can beat a bot that's rated at 2000 is that we find a deeper line than it looks at, which, if it were set to look one move further, it would see. We don't win by short-term tactics but by longer-term projects where we promote a pawn.

So it isn't a big thing at all but Dio didn't bother to compliment me on good play which he certainly couldn't replicate. He just tried to find an error in my thinking and it turns out that he was the one who made the error. He always does actually.

So the engine doesn't "adopt my playing style". It simply holds my move in its cache so it can find it next time. Like I said, it isn't true AI because true AI doesn't exist yet.

Avatar of Kotshmot
Optimissed wrote:

In my opinion, tygxc's argument about error pairs wasn't refuted. A refutation would have depended on an understanding of error distribution in typical, real games. But that understanding doesn't exist, since it cannot be achieved until chess is solved. This discussion is about "trying to solve chess" and the refutation of tygxc's argument depended on assumptive thinking .... that chess is solved and we understand the error distribution.

I'll respond to this part as for the rest I'm either lacking the context or agree on the general stuff.

The hypothesis of Tygxc more or less can't be refuted, as you said, we are lacking the data. Some of it I agree with, some of it I don't as I've reasoned in my previous post.

The methods Tygxc uses such as his probability calculations and poisson distribution, he often applies wrong, which leads to some understandable false confidence. Now I wouldn't dump everything he says in the trash because of some miscalculation, because there's also evidence that supports some of the stuff he is saying.

Someone will perhaps think I'm even being too nice, but at least Tygxc keeps the conversation going, which leads to interesting stuff sometimes.

Avatar of playerafar
Kotshmot wrote:
Optimissed wrote:

In my opinion, tygxc's argument about error pairs wasn't refuted. A refutation would have depended on an understanding of error distribution in typical, real games. But that understanding doesn't exist, since it cannot be achieved until chess is solved. This discussion is about "trying to solve chess" and the refutation of tygxc's argument depended on assumptive thinking .... that chess is solved and we understand the error distribution.

I'll respond to this part as for the rest I'm either lacking the context or agree on the general stuff.

The hypothesis of Tygxc more or less can't be refuted, as you said, we are lacking the data. Some of it I agree with, some of it I don't as I've reasoned in my previous post.

The methods Tygxc uses such as his probability calculations and poisson distribution, he often applies wrong, which leads to some understandable false confidence. Now I wouldn't dump everything he says in the trash because of some miscalculation, because there's also evidence that supports stuff he is saying.

Someone will perhaps think I'm even being too nice, but at least Tygxc keeps the conversation going, which leads to interesting stuff sometimes.

tygxc's claims of 'proof' can be refuted and have been and are.
Engines of the same strength playing each other and continuing to draw each other does not prove those engines are playing perfect games.
It's as simple as that.
Refutation in two lines.
Arguing that they are playing perfect games because they keep drawing each other is a circular and invalid argument which also ignores the context of robotically programmed artificial intelligence.
Even resulting in engines not being able to recognize draws that human players can quickly see are obvious draws.
The engines have a 'horizon' they cannot see beyond.
They can only evaluate to that horizon.
---------------
Related to this:
I suggested an algorithm whereby the engines could quickly assign a win to positions with very lopsided material situations for one side that is on move.
For example - several pieces or pawns against a lone King.
Without having to 'brute force it out' every time.
Elroch indicated it's impossible for a computer to think that way.
He appears to be right.
This kind of thing is related to the 114 draws simply being an aspect of artificial intelligence playing itself.
It suggests that the errors they make are 'deep enough' that they cannot see far enough to refute such play.
-------------
Also related to this:
a top ranked engine playing itself with five days per move.
Does it draw?
Does it make errors?
When it draws itself - it doesn't follow that it made no errors.

Avatar of Elroch

It is true that engines change their evaluations and their preferred moves over time. It is true that sometimes this means they will change their preference to the first preference of a player who is, say 1500 points weaker, as @Optimissed points out. It's worth emphasising that more of the time, they do not. Nor does it mean that weaker players have better insight. It's better understood as an aspect of the random variation.

Avatar of playerafar
Elroch wrote:

It is true that engines change their evaluations and their preferred moves over time. It is true that sometimes this means they will change their preference to the first preference of a player who is, say 1500 points weaker, as @Optimissed points out. It's worth emphasising that more of the time, they do not.

There's no 'magic' in the latest engines taking five days per move to draw each other.
They could get 10,000 consecutive draws against each other and it would only prove something about their limitations.
There is absolutely no proof they are playing 'perfect' games.
With artificial intelligence like that - how different is that from each of those engines playing itself?

Avatar of playerafar

I did a search just now 'what happens when the strongest chess computer plays itself?'
Right away the very first hit was ...
"With any engine, you will have a bunch of draws but you will also have wins and losses too. The "horizon effect" (not to mention the time constraints) will prevent the computer from always playing the best move. So it will make a move and "both sides" will evaluate it the same."
Is that correct?
It implies that the engine isn't perfect.
That indicates its games against itself aren't perfect either.
I intentionally did not go to the site that google entry came from.
Reason: evaluate the entry per se.
-----------------
There's a flaw in that internet quote.
I wonder if anybody spots it.
After a move is played - its evaluation may change because 'the same side' now has a different position in front of it that has advanced by one move - or ply.
Which would mean that 'both sides' do Not 'evaluate it the same' ...
----------------------
Now try out what would happen if a very strong engine is given a very long time to move ...
with each additional move of 'look-ahead' the possibilities multiply tremendously ...
so there's a paradox.
Does giving the computer extra time 'expand its horizon'?
Apparently. But the rate of lookahead is going to fall off exponentially - if not worse than that - the further ahead the computer gets to look.
The more time - the more lookahead - the more the computer is 'reaching' its horizon.
At which point it struggles.
As seen with the tablebase projects.
The computer's 'horizon' simply can't deal with eight pieces on board and the 'lookahead' problems that causes.
-----------------------------
It should not take long to figure out that engines getting five days per move that have positions in front of them with far more than seven pieces on board - are not 'solving'. They can't.
No Einstein stuff there.
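The exponential fall-off described above can be put in rough numbers. A back-of-envelope sketch; the node rate and effective branching factor are assumed for illustration, not measured from any real engine:

```python
# Back-of-envelope sketch of the "horizon" point: a full search to depth d
# with effective branching factor b visits roughly b**d nodes, so each
# extra ply multiplies the work by b. Constants below are assumptions.
import math

def depth_reachable(node_budget: float, branching: float) -> float:
    """Depth d such that branching**d equals the node budget."""
    return math.log(node_budget) / math.log(branching)

NODES_PER_SEC = 1e8  # assumed engine speed
BRANCHING = 3        # assumed effective branching factor after pruning

one_minute = depth_reachable(NODES_PER_SEC * 60, BRANCHING)
five_days = depth_reachable(NODES_PER_SEC * 5 * 86400, BRANCHING)
print(round(one_minute, 1), round(five_days, 1))
```

Under these assumptions, multiplying the thinking time by 7,200 (one minute to five days) buys only about eight extra ply, which is why "more time" pushes the horizon out so slowly.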

Avatar of tygxc

@14899

"3. ICCF finalists with 5 days of avg time for turn are strong enough to always draw against future engines." ++ This is also what the late 3-times ICCF World Champion Dronov implied with 'It is necessary to radically reduce the time for thinking.'

"we know chess games are much deeper than todays players are able to process"
++ Chess is not as deep as people think. 10^38 positions is a huge number, but 10^38 = 3^80, i.e. 40 moves with average 3 non-transposing choices per move generate all of chess.

"This factually leaves room for error" ++ An average ICCF WC Finals game ends in a draw in 40 moves. You only have 40 moves to go wrong.

Avatar of tygxc

@14905

"What we could examine is errors made by engines in tablebase positions."
++ I proposed a relevant position:

Black to play and draw.
It is a 7-men endgame table base position. As it is a draw, it could result from the initial position with optimal play from both sides. Humans in endgame books had this wrong and thought it a white win, so it is not trivial. As it is a rook ending, it is of the most common type of endgame.

Avatar of MaetsNori
I’m on the iPad app right now, so it’s hard to quote posts, but I found myself wondering if tygxc was correct in suggesting that ICCF WC decisive games have been linearly decreasing over the years. As this sounds like a compelling trajectory …

So here’s what I found:

Decisive ICCF WC games per event (ignoring losses from a player who sadly passed away during the event):

2011: 16
2013: 20
2015: 13
2017: 9
2019: 15
2020: 17
2022: not yet completed, with 22 games ongoing

I’m not sure what we can conclude from this data …

In any case, it’s already been pointed out that ICCF WC draws can still contain mistakes. Therefore, pointing at the number of draws per event doesn’t seem to prove much, as such draws have been shown to be unreliable measures of quality chess - especially when evaluated by future engines …
Avatar of Optimissed
Elroch wrote:

It is true that engines change their evaluations and their preferred moves over time. It is true that sometimes this means they will change their preference to the first preference of a player who is, say 1500 points weaker, as @Optimissed points out. It's worth emphasising that more of the time, they do not. Nor does it mean that weaker players have better insight. It's better understood as an aspect of the random variation.

Well, the bot I was using was called Li, rated at 2000. Very strong tactically and prone to a piece sacrifice to get a lot of pressure. Apt to be a little too aggressive. Over the past couple of days I spent two hours working my way through all the Roman bots and then Li. It isn't what you would call strong and, being able to play positionally, I found it was really good practice. You have to play aggressively in a very muted way, giving precedence to keeping your pieces well-placed and your formations flexible and strong. It falls to tactical-positional play.

In the game I posted, the bot's first preferences were identical to a high proportion of my chosen moves in that game and there were about five cases of my moves being better than the bot's, which isn't surprising because I'm a stronger player. I made three sub-par moves but I was never losing, as you can see from the game score. My opponent played far better than a player rated at about 1710 FIDE and then the rather quick time control, 80 mins plus 15 seconds, and my exchange sacrifice, which was winning, got the better of him. I was way behind on time for most of the game because I was really too tired to play and had problems with my glasses. This is my second rated game after 5 years of not playing rated otb. My last game, 5 years ago, was black against Brett Lund, FIDE 2350 and an old friend. He told me it was the hardest game he had to play all year although he did win from a losing position in the end, because I got into time trouble.

Anyway this was not about random variation, unless you're implying that the bot chooses a weaker move sometimes at random. It may be possible but then it would vary back again if you ran the game through again.

Avatar of Optimissed

Just checked ... Brett's down to 2292 now. Around number 4000 in the world apparently. He's about 62 and in 11 years he'll find out what it's like to be 73.

Avatar of TumoKonnin
Optimissed wrote:
ardutgamersus wrote:
DiogenesDue wrote:
Optimissed wrote:
ardutgamersus wrote:
Optimissed wrote:

Really really tired and didn't want to play in this match tonight but I'd said I'd play and it was an away match. In the event, the other six of our team all lost. I played quite well according to the analysis thing here. Made two weaker moves but was never in the negative figures. The analysis learned a lot from my moves, actually. Several times it revised its best move according to the strength of some of the moves I played. Just sayin'. Anyway, after my exchange sac he had too many loose pawns lying around and I won in another 20 moves. Time control was too fast for me. All moves in 80 minutes plus 15 seconds / move.

stockfish probably re-evaluated its moves to adjust to yours, which it considered good but not quite as good as possible in some cases

I kept going out and coming back in and in some cases it still preferred my moves and in others it reverted to its own, where you have to play your move, let it think for a bit and when the score goes up, go backwards and then forwards again. So it seemed to have a bit of learning ability, which surprised me. Maybe something stuck in a cache though.

Only you could think that Stockfish is "learning" from your moves. If you let it sit, it comes up with the best move. Sometimes it takes seconds and sits tight, other times it takes minutes or hours to change its evaluation. Rest assured, none of that is predicated on your choices, at all.

it's not learning, it's updating its playstyle to yours, or better said, adapting its moves, which means it is learning. a move that could have been a blunder can now be a brilliant move, so it is learning.

You misunderstand how its analysis works. It wouldn't give my moves a higher score than its own if its own were better, so it isn't about playstyle. These engines are fast, but the reason we can beat a bot that's rated at 2000 is that we find a deeper line than it looks at, which, if it were set to look one move further, it would see. We don't win by short-term tactics but by longer-term projects where we promote a pawn.

So it isn't a big thing at all but Dio didn't bother to compliment me on good play which he certainly couldn't replicate. He just tried to find an error in my thinking and it turns out that he was the one who made the error. He always does actually.

So the engine doesn't "adopt my playing style". It simply holds my move in its cache so it can find it next time. Like I said, it isn't true AI because true AI doesn't exist yet.

Huh?

1. Why are you insulting him? Please be civil, like he is.

2. What do you define as “true AI”?

3. It doesn’t hold a cache of your moves, what are you saying?

4. He doesn’t have to compliment you, we’re talking about how chess will never be solved.

5. We can win against a 2000 bot by short term tactics as well, here:

Look at all these tactics!!! (This is against the 2000 stockfish, I played against it)
 
Avatar of TumoKonnin
Optimissed wrote:
Elroch wrote:

It is true that engines change their evaluations and their preferred moves over time. It is true that sometimes this means they will change their preference to the first preference of a player who is, say 1500 points weaker, as @Optimissed points out. It's worth emphasising that more of the time, they do not. Nor does it mean that weaker players have better insight. It's better understood as an aspect of the random variation.

Well, the bot I was using was called Li, rated at 2000. Very strong tactically and prone to a piece sacrifice to get a lot of pressure. Apt to be a little too aggressive. Over the past couple of days I spent two hours working my way through all the Roman bots and then Li. It isn't what you would call strong and, being able to play positionally, I found it was really good practice. You have to play aggressively in a very muted way, giving precedence to keeping your pieces well-placed and your formations flexible and strong. It falls to tactical-positional play.

In the game I posted, the bot's first preferences were identical to a high proportion of my chosen moves in that game and there were about five cases of my moves being better than the bot's, which isn't surprising because I'm a stronger player. I made three sub-par moves but I was never losing, as you can see from the game score. My opponent played far better than a player rated at about 1710 FIDE and then the rather quick time control, 80 mins plus 15 seconds, and my exchange sacrifice, which was winning, got the better of him. I was way behind on time for most of the game because I was really too tired to play and had problems with my glasses. This is my second rated game after 5 years of not playing rated otb. My last game, 5 years ago, was black against Brett Lund, FIDE 2350 and an old friend. He told me it was the hardest game he had to play all year although he did win from a losing position in the end, because I got into time trouble.

Anyway this was not about random variation, unless you're implying that the bot chooses a weaker move sometimes at random. It may be possible but then it would vary back again if you ran the game through again.

Do you have the game? How did you know you were winning at first?

Avatar of tygxc

@14914

"Decisive ICCF WC games per event"
WC33: 22 ongoing + 10 GM Dronov deceased but in otherwise drawn positions + 57 draw agreed + 37 3-fold repetition + 10 7-men endgame table base draw claimed = 136 total
WC32: 4 decisive + 13 resigned by SIM Bock for personal reasons + 119 draws = 136 total
WC31: 15 decisive games + 111 draws = 136 total
WC30: 9 decisive games + 127 draws = 136 total
WC29: 12 decisive games + 124 draws = 136 total
WC28: 20 decisive games + 116 draws = 136 total
WC27: 16 decisive games + 120 draws = 136 total
WC26: 24 decisive games + 112 draws = 136 total
WC25: 32 decisive games + 16 cancelled + 88 draws = 136 total
WC24: 35 decisive games + 101 draws = 136 total

The number of decisive games goes down every year.
In previous years there were decisive games and thus also a few draws with a pair of errors.
This year we have no decisive game, thus all 114 draws are perfect games with optimal play from both sides.

Avatar of TumoKonnin
tygxc wrote:

@14914

"Decisive ICCF WC games per event"
WC33: 22 ongoing + 10 GM Dronov deceased but in otherwise drawn positions + 57 draw agreed + 37 3-fold repetition + 10 7-men endgame table base draw claimed = 136 total
WC32: 4 decisive + 13 resigned by SIM Bock for personal reasons + 119 draws = 136 total
WC31: 15 decisive games + 111 draws = 136 total
WC30: 9 decisive games + 127 draws = 136 total
WC29: 12 decisive games + 124 draws = 136 total
WC28: 20 decisive games + 116 draws = 136 total
WC27: 16 decisive games + 120 draws = 136 total
WC26: 24 decisive games + 112 draws = 136 total
WC25: 32 decisive games + 16 cancelled + 88 draws = 136 total
WC24: 35 decisive games + 101 draws = 136 total

The number of decisive games goes down every year.
In previous years there were decisive games and thus also a few draws with a pair of errors.
This year we have no decisive game, thus all 114 draws are perfect games with optimal play from both sides.

The thing is, we don’t know if the computers ACTUALLY played perfectly, we just know they are the SAME STRENGTH, since they drew.