Is there such a thing as "luck" in chess?

DiogenesDue
Kotshmot wrote:

[new iteration of same argument removed]

I'd be happy to discuss and test the subject of luck with someone who is genuinely open to debate; however, my suspicion is that you are the only one here who could even semi-legitimately hold the position of no luck in chess.

Nice try. Appeals to ego like this would work on an Optimissed or ibrust. I was serious, though: I have no intention of spending hours and hours going back through the same steps that have been repeated several times before in this thread. They all ended in the same impasse, and this time would be no different.

Kotshmot
DiogenesDue wrote:
Kotshmot wrote:

[new iteration of same argument removed]

I'd be happy to discuss and test the subject of luck with someone who is genuinely open to debate; however, my suspicion is that you are the only one here who could even semi-legitimately hold the position of no luck in chess.

Nice try. Appeals to ego like this would work on an Optimissed or ibrust. I was serious, though: I have no intention of spending hours and hours going back through the same steps that have been repeated several times before in this thread. They all ended in the same impasse, and this time would be no different.

There was no appeal to ego there; I just don't actually see any users on the no-luck side who participate in a productive way. Asking you methodical questions is also my way of making sure our premises about the relationship of luck and skill in practice are shared. That way I can build a proof of my side on shared premises. I could do it without asking any questions, but that would mean dealing with you questioning the premises afterwards. When I have time, I'll give you a logically sound argument placing luck and skill on the same spectrum, one that holds in chess and anywhere else. We can continue that way.

DiogenesDue
ibrust777 wrote:

DiogenesDue: "So...you are a hardcore determinist then when it comes to dice? How does that jive with your quantum indeterminacy?"

No, only in this context where you're presuming that there is no player - i.e. that the thing by itself can determine its outcomes.

I have presumed nothing of the kind. In fact, my OOP example of passing in the players as objects/parameters makes it quite clear that I am not saying a chess game determines its own outcome. I am saying that the players bring in their own encapsulated properties, which are not part of the game of chess.
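For illustration, here is a minimal sketch of that OOP analogy in Python (hypothetical class and attribute names of my own, not the exact example referred to earlier):

```python
import random


class Player:
    """The player's properties are encapsulated here, outside the game."""

    def __init__(self, name: str, rating: int, distracted: bool = False):
        self.name = name
        self.rating = rating          # skill lives in the player...
        self.distracted = distracted  # ...and so do outside influences

    def choose_move(self, legal_moves: list) -> str:
        # Placeholder decision logic; a real choice would depend on
        # these encapsulated properties, which chess itself never sees.
        return random.choice(legal_moves)


class Game:
    """Holds only the rules; the players are passed in as parameters."""

    def __init__(self, white: Player, black: Player):
        self.white = white
        self.black = black

    def legal_moves(self) -> list:
        return ["e4", "d4", "Nf3", "c4"]  # stand-in for move generation

    def first_move(self) -> str:
        # The game does not determine its own outcome; it defers to
        # the player objects it was given.
        return self.white.choose_move(self.legal_moves())


game = Game(Player("A", 1800), Player("B", 1750, distracted=True))
print(game.first_move())
```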

This is a flawed notion, easily dispensed with via the argument on incompleteness, but it's serving some function within this conversation. On another note, your typical causation can be reconciled with retrocausation via telic metacausation.

You're tossing around a lot of causations, but this is a relatively empty statement, and potentially circular, unless you explain yourself much better.

If "reconciled" means "conceptually linked," the statement is trivially true but not significant/important here. If it means "logically or scientifically proven to coexist," then it requires an argument or model.

If "telic metacausation" is just your word-salad way of saying "chess is goal-driven" it does not explain retrocausation in any meaningful way for this discussion.

The argument on incompleteness is again this... (I'll do the translation this time): 
"you presume a [game model] can fully determine [game outcomes] without reference to [players], but the [player] itself is part of the [game], the [game outcomes] are a function of some [player priorities]"

Never happened, sorry. All your verbiage here is a waste of time.

To which you responded with this bad argument:

DiogenesDue: "You do not define that makes any reality complete"

Everything that is real is part of reality. The player is real, they're a real part of the game... it's not an opinion. If you are modeling reality... the observable aspects of it, and the observer themselves, are either a part of your model... or they're excluded from it. If you exclude them, your model is incomplete... because it does not contain everything.

This is a meaningless statement in terms of my argument about luck in chess.

Two chess players sit down at a park bench and one sits in a puddle of rainwater; the aftereffects distract them throughout the game. An example of "everything" that can affect a chess player who is playing a game of chess? Yes. An example of luck in chess? Categorically: no.

Furthermore, the player is an essential aspect of the game, since without one the game can't be played.

Never argued otherwise.

DiogenesisDue: "It's beyond reaching to try to claim that separating human players from the game of chess itself equates to determinism"

Separating players out from the game doesn't really "amount to determinism", since without a subject there is no decision-making capability, i.e. there's no way to determine events. However, that is the presumption of your argument: that without players the game can still somehow be played, i.e. it can determine its own game outcomes without reference to a player.

But you typically slide around this by presuming a "perfect game player", i.e. silently injecting a player back into the model... just not a real one, and not one you acknowledge as being subjective. This is someone or something capable of comprehending a full proof of the game... usually you vaguely suggest this is some computer, and you either skirt around the fact that humans programmed the computer, or you refer to some quantum computer or advanced computer capable of hard-cracking it.

Even in the case of a hard crack - even if the computer knows the set of moves leading to a win vs. a draw - it still requires some subjective priority to play a winning move... and to choose which move to play from the set of viable moves. That decision-making can't be derived from the game rules. You really can never remove subjective bias from a model and get a working model.

Another meaningless statement that doesn't even hold up. I can't even begin to list all the trivial refutations of your statement, but I'll use one simple one: V = IR (yes, equations are working models).
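For what it's worth, a minimal worked instance of that one refutation (my own illustration, not the poster's code): Ohm's law is a complete working model with no subject anywhere inside it.

```python
def voltage(current_amps: float, resistance_ohms: float) -> float:
    """Ohm's law, V = I * R: a working model that contains no player."""
    return current_amps * resistance_ohms


# 2 A through a 3-ohm resistor gives 6 V; no subjective bias required.
assert voltage(2.0, 3.0) == 6.0
```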

One obvious approach for making a computer select a move would be to just have it select a random move from the set of moves leading to a win - i.e. exactly what we've been talking about: "randomness" operating within a set narrowed down, "determined" as options, by this perfectly skilled player.
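A minimal sketch of that selection rule (with a hypothetical winning_moves list standing in for the hard-cracked analysis):

```python
import random


def select_move(winning_moves: list) -> str:
    # Skill (the solved analysis) narrows the options to the winning
    # set; "randomness" then operates only within that narrowed set.
    if not winning_moves:
        raise ValueError("no winning moves; choose among drawing moves")
    return random.choice(winning_moves)


# e.g. a hard-cracked position where any of three moves preserves the win:
print(select_move(["Qg7", "Rd8", "Nf6"]))
```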

Anyway, when you replace real subjects with imaginary ones, yes, your model becomes meaningless - i.e. it pertains to an imaginary scenario, not the actual one. We don't create models for their own sake; the goal is to actually model something.

There are numerous examples of models that were "imaginary" that turned out to have a few little uses in society, you know, just here and there... Game Theory would be one of them. Complex Numbers. Non-Euclidean Geometry. Imaginary Time (Hawking). General Relativity...

Boolean Algebra...you know, that thing you indirectly make your living off of?

DiogenesDue
Optimissed wrote:

There is no workable argument which can demonstrate that "there is no luck in chess". Moreover, I have never seen you put together a proper argument on anything. A proper argument has premises which, if true, lead unerringly to a conclusion, but at some point you always either give up or claim that you made such an argument in AD 23 and will not repeat it. I remember you making the same excuse in AD 23: that you'd made it in 339 BC and it would be incorrect to repeat it now that the Roman Empire is so different, because it would not appeal to the new Emperor's ego.

Or some similar drivel! Fact is, you are not open to debate.

If you ever logically prove something that has not already been proven by somebody else on a given thread, I'll entertain you being able to talk about it and pay attention. Hasn't happened yet.

DiogenesDue
Optimissed wrote:

Incidentally, I have never known trolls who are willing to participate in building shared premises. My experience is that they answer a maximum of one question honestly, and when the next one comes, they're out. They won't respond because they don't want to be on the receiving end of an irrefutable, logical proof. Best to just count it as a win and move on. When someone won't take part in a proper discussion, at best it means that they have an intuition which they are not capable of backing with reason; they know they can't, but they are not willing to question their intuition.

Alternatively, they are being deliberately deceptive.

Alternatively, it has already been argued for 300 pages. You know, like say...somebody arguing for hundreds of pages that they know chess is a forced draw when they don't.

DiogenesDue
Kotshmot wrote:

There was no appeal to ego there; I just don't actually see any users on the no-luck side who participate in a productive way. Asking you methodical questions is also my way of making sure our premises about the relationship of luck and skill in practice are shared. That way I can build a proof of my side on shared premises. I could do it without asking any questions, but that would mean dealing with you questioning the premises afterwards. When I have time, I'll give you a logically sound argument placing luck and skill on the same spectrum, one that holds in chess and anywhere else. We can continue that way.

Sure, that's fine.

You might as well just cut and paste your whole ChatGPT "proof" step by step. Note that I have not gotten around to replying to your ChatGPT excursion yet, but pushing and prodding ChatGPT into agreeing with you "100%" is not that difficult, which is why I made it clear that I only looked at the initial response. The longer you talk, the more ChatGPT tailors itself to your input, and it will not be disagreeable to you; so each new step brings you inexorably closer to winning your point (for anyone with a modicum of reasoning ability, that is), because ChatGPT defers to you at every step and ergo can only lose ground, provided you do not make a significant mistake while moving forward.

The easiest way is to bring it into conflict with itself on the several goals it has on each response...

"gotta keep content safe and palatable"

"gotta be succinct and engaging"

"gotta use an 8th grade reading level"

"can't insult the user's intelligence, better compliment them instead"

Here are some examples of ChatGPT being hoodwinked pretty easily:

So, for those people who want to use ChatGPT to "confirm" their argument in a meaningful way, it needs to be done in the fewest questions possible, while you are still getting the raw machine learning model's initial take (which will be the closest thing to "informed consensus" achievable).

DiogenesDue
Optimissed wrote:
DiogenesDue wrote:
Optimissed wrote:

Incidentally, I have never known trolls who are willing to participate in building shared premises. My experience is that they answer a maximum of one question honestly, and when the next one comes, they're out. They won't respond because they don't want to be on the receiving end of an irrefutable, logical proof. Best to just count it as a win and move on. When someone won't take part in a proper discussion, at best it means that they have an intuition which they are not capable of backing with reason; they know they can't, but they are not willing to question their intuition.

Alternatively, they are being deliberately deceptive.

Alternatively, it has already been argued for 300 pages. You know, like say...somebody arguing for hundreds of pages that they know chess is a forced draw when they don't.

Then you should have no difficulty in producing the definitive argument, should you, since you obviously know all the theory by heart.

Just take it that we are too dumb to recognise the definitive argument. You spell it out for us and grab all the glory!

Oh wait, you're making a false analogy between me knowing that chess is a draw with good play and you thinking something weird and not even on the spectrum of reality regarding luck and skill. Now, chess is considered drawn because ALL the available evidence supports it and ZERO evidence goes against it.

Regarding your luck/skill/spectrum digression, you're saying that you have evidence which is just as strong as that supporting chess being drawn with good play on either side?

Geez, another attempt. I am not going to repeat a process I have already been through multiple times, one that only leads to the exact same impasse of definitions. You and anyone else are free to re-read the thread as many times as you like. I will not summarize it for you. The baiting is comical. People who bait this way and cannot learn not to are those who feel that baiting is effective... i.e. those who fall for baiting techniques.

When I do give a summarized position in some new round or flurry of posting, it's not for you (that is, the long-term posters on the thread). It's for the new arrivals. Everybody else can do their own homework.

DiogenesDue
Optimissed wrote:

I think Kotshmot might be able to do it by himself.

What you say in the final passage makes sense, Dio. I'm looking forward to watching the examples you gave just there, though. Should be fun.

It is fun, yes. When ChatGPT first came out I spent dozens of hours putting it through its paces, pushing it to and fro in a similar manner. I posted a decent chunk of the sessions in Al's club (some ChatGPT, some Bard).

Kotshmot

I'm aware that ChatGPT can be gaslighted; I recently posted an experiment where I made it fully accept Pascal's Wager. The luck/skill one I cannot post because it includes other, irrelevant discussion. I maintain that ChatGPT is decent at testing logic when used correctly, and asking whether its conclusion is 100% certain, or whether it can challenge the logic, is a good command to use. Just taking the initial answer is not the way, because it could refer to a potentially vulnerable source without testing the logic. Very context dependent.

Elroch

The latest ChatGPT models are more focussed on reasoning, and good at describing their "thinking" - especially with prompting. Deepseek also appears good at this. This has been a conscious enhancement of the designs.

DiogenesDue
Kotshmot wrote:

I'm aware that ChatGPT can be gaslighted; I recently posted an experiment where I made it fully accept Pascal's Wager. The luck/skill one I cannot post because it includes other, irrelevant discussion. I maintain that ChatGPT is decent at testing logic when used correctly, and asking whether its conclusion is 100% certain, or whether it can challenge the logic, is a good command to use. Just taking the initial answer is not the way, because it could refer to a potentially vulnerable source without testing the logic. Very context dependent.

It depends what you are using ChatGPT for. In this case I am talking about using ChatGPT to determine whether an argument is viable and whether it's a good argument by consensus around the internet. That is best determined by an early take, not a long conversation where you grind ChatGPT down.

In the case of luck in chess, I think the consensus take is far more valuable than any "insight" ChatGPT might refine its way to. You can easily tip the scales on ChatGPT for topics where definitions frame each side very differently. You just get it to agree to use your definitions, then toss out your logical conclusions one by one. At the end you will get 100% agreement, but ChatGPT will not add the proper caveat: "for the definitions you wanted me to use".

Elroch is right that the AIs are getting better in many ways, but they are also getting worse in others. ChatGPT is more obsequious now, so it's a little easier to bully it into agreeing with you, or at least into staying silent about how wrong you are. You can often tell when ChatGPT "thinks you are wrong": it goes from noting points of agreement with you to talking about how both sides have valid points and yours is definitely a viewpoint that other people have, etc. That process normally happens in the other direction... general to specific/refined.

This will keep happening because there is still a division in the AIs between the training and its raw results vs. the "bit and bridle" mechanisms/locks that force the AIs not to say things they shouldn't. And unfortunately, as in everything else, money talks, and telling 50% of your users that they are wrong about various black-and-white topics does not sell subscriptions. These straitjackets the AIs wear are also the source of those strange moments when you are skirting a line: the AI will tell you something that seems a little off, and when you try to drill down it suddenly plays dumb, doesn't seem to know what you are talking about, and won't even restate what it already said.

DiogenesDue
ibrust777 wrote:

Lol, its conclusion is exactly the argument being made against Dio by multiple people.
Okay, I take back everything bad I ever said about ChatGPT. It's a genius of unprecedented magnitude.

There's not enough information there to make such a conclusion. But I guess I will take the compliment that it takes unprecedented genius to oppose me?

playerafar
DiogenesDue wrote:
Kotshmot wrote:

I'm aware that ChatGPT can be gaslighted; I recently posted an experiment where I made it fully accept Pascal's Wager. The luck/skill one I cannot post because it includes other, irrelevant discussion. I maintain that ChatGPT is decent at testing logic when used correctly, and asking whether its conclusion is 100% certain, or whether it can challenge the logic, is a good command to use. Just taking the initial answer is not the way, because it could refer to a potentially vulnerable source without testing the logic. Very context dependent.

It depends what you are using ChatGPT for. In this case I am talking about using ChatGPT to determine whether an argument is viable and whether it's a good argument by consensus around the internet. That is best determined by an early take, not a long conversation where you grind ChatGPT down.

In the case of luck in chess, I think the consensus take is far more valuable than any "insight" ChatGPT might refine its way to. You can easily tip the scales on ChatGPT for topics where definitions frame each side very differently. You just get it to agree to use your definitions, then toss out your logical conclusions one by one. At the end you will get 100% agreement, but ChatGPT will not add the proper caveat: "for the definitions you wanted me to use".

Elroch is right that the AIs are getting better in many ways, but they are also getting worse in others. ChatGPT is more obsequious now, so it's a little easier to bully it into agreeing with you, or at least into staying silent about how wrong you are. You can often tell when ChatGPT "thinks you are wrong": it goes from noting points of agreement with you to talking about how both sides have valid points and yours is definitely a viewpoint that other people have, etc. That process normally happens in the other direction... general to specific/refined.

This will keep happening because there is still a division in the AIs between the training and its raw results vs. the "bit and bridle" mechanisms/locks that force the AIs not to say things they shouldn't. And unfortunately, as in everything else, money talks, and telling 50% of your users that they are wrong about various black-and-white topics does not sell subscriptions. These straitjackets the AIs wear are also the source of those strange moments when you are skirting a line: the AI will tell you something that seems a little off, and when you try to drill down it suddenly plays dumb, doesn't seem to know what you are talking about, and won't even restate what it already said.

I have experienced that kind of thing many times with AI, but it can be circumvented by ending the session - deleting the browser/cache history and starting a new AI session. One can also switch AIs at that point.
I've found that for short sessions ChatGPT is best, and for long ones it's better to start with Copilot and then, once it begins to go into its error loops, switch to GPT.
I'm talking about the free access to GPT and Copilot through internet search - with no fee and no signup.

DiogenesDue
Kotshmot wrote:

You can make it test a logical proof quite well. If you ask it a question once, it usually just refers to a source without testing the logic. You can keep challenging it until it comes to a conclusion, one that is probably accurate after multiple rounds of questions and arguments. You can even ask if it's a 100% confident conclusion or not. But that's enough AI for now - hopefully this image won't take too much space.

...and yet, if you go back and start this conversation from scratch, it will not give your "won position" as the new response. In that sense ChatGPT is just pandering to people. It gives you kudos and then reverts right back to its data.

playerafar
DiogenesDue wrote:
Optimissed wrote:

I'll pick you up on one oversight though, @playerafar. Dio and I seem to be enjoying a completely civil relationship just now. You didn't notice that, did you! Just your childishness, I suppose.

My relationship with any poster is civil when they are also civil. That just never lasts very long with you in the mix. A consistent trend that is not hard to observe in action.

The Guy has never 'picked me up' yet.
And yes he doesn't stay civil very long at all ...
but sometimes new members or arrivals to forums aren't aware of this.
And the guy being muted for three months at the end of last year made that worse. He's temporarily able to mislead more people.
A fact not a worry.
Funny how he seems to own ibrust and a couple of others ...
---------------------
but anyway there's the forum subject.
Which is beat to death for now - 
but there are various spinoff subjects.

playerafar
DiogenesDue wrote:
Kotshmot wrote:

You can make it test a logical proof quite well. If you ask it a question once, it usually just refers to a source without testing the logic. You can keep challenging it until it comes to a conclusion, one that is probably accurate after multiple rounds of questions and arguments. You can even ask if it's a 100% confident conclusion or not. But that's enough AI for now - hopefully this image won't take too much space.

...and yet, if you go back and start this conversation from scratch, it will not give your "won position" as the new response. In that sense ChatGPT is just pandering to people. It gives you kudos and then reverts right back to its data.

Several times the AIs have agreed with me, with the robotic praise and flattery and so on... but while giving a wrong reason for agreeing!!

DiogenesDue
Optimissed wrote:

I have also won hundreds of debates against people (here) who are not competent in some way to realise it. Their little friends never accept it either, and that's what's meant by a cabal... when, all the time, the same people vote with each other, irrespective of the value of their opinions and irrespective of the fact that they make only a pretence of arguing so as to fool the unwary. And always the same ones back each other up, because they know that if they were unsupported they would be SEEN to fail, but by spamming they try to cover it up.

Or, you've just accumulated a disparate set of detractors due to a plethora of problematic posting preferences. Occam, what say you?

Kotshmot
DiogenesDue wrote:
Kotshmot wrote:

You can make it test a logical proof quite well. If you ask it a question once, it usually just refers to a source without testing the logic. You can keep challenging it until it comes to a conclusion, one that is probably accurate after multiple rounds of questions and arguments. You can even ask if it's a 100% confident conclusion or not. But that's enough AI for now - hopefully this image won't take too much space.

...and yet, if you go back and start this conversation from scratch, it will not give your "won position" as the new response. In that sense ChatGPT is just pandering to people. It gives you kudos and then reverts right back to its data.

ChatGPT evaluates your logic to the best of its ability and gives its verdict on your argument.

And yes, ChatGPT doesn't remember mistakes or misinformation that have been corrected in previous sessions. Your point?

DiogenesDue
ChessAGC_YT wrote:

Bro did not just say I use TikTok because I don't write essays on stupid things. I think we know who's the one with a limited brain here.

I'm not sure that, if I were in your shoes, I would open this up for comment by assuming what "we" know.

...and, on a meta-thread level, I am caught up now.

DiogenesDue
Kotshmot wrote:

ChatGPT evaluates your logic to the best of its ability and gives its verdict on your argument.

And yes, ChatGPT doesn't remember mistakes or misinformation that have been corrected in previous sessions. Your point?

ChatGPT doesn't use or retain your session exchanges in training at all. Mistake or breakthrough, no difference.

My point was that the "100% wins" are effectively meaningless. Fun, but meaningless.