Math, percentages

Sqod

By the way, here are two examples of how an expected value is computed by multiplying each outcome by its probability, which is basically the method I used:

http://www.quickanddirtytips.com/business-career/small-business/how-expected-value-can-help-you-make-good-decisions-part-1

http://www.quickanddirtytips.com/business-career/small-business/how-expected-value-can-help-you-make-good-decisions-part-1?page=1
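As a minimal illustration of the method (the numbers here are made up for the example, not taken from the linked article):

```python
# Expected value: weight each possible outcome by its probability and sum.
# (probability, payoff) pairs -- illustrative numbers only.
outcomes = [(0.55, 1.0), (0.45, 0.0)]

ev = sum(p * v for p, v in outcomes)
print(ev)  # 0.55
```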

 

xman720

Sorry, you're correct, it's reversed.

But you would indeed have a 47% chance of winning.

u0110001101101000
X_PLAYER_J_X wrote:

I have already solved the answer in the simplest of terms.

Stronger player 51% chance of winning

Weaker player 49% chance of winning

 

 

You cannot solve the equation because there are too many unknowns.

Furthermore, the assumptions about the player in question are of secondary importance compared to the position in question.

It's true statistics might be called a best guess sort of math, but with enough info it can be very accurate!

Because of the way the rating formula works, stats are already built into the ratings, so even if we only know the ratings we can already give a very good estimate.

It may not be true after 5 games or 10 games or on Tuesday or before Joe had breakfast etc. It's just an average expected score over an arbitrarily large number of games vs many different opponents.

Christopher_Parsons

@OP

There are two problems you face in trying to give this serious consideration. One is that you are trying to use Elo, which is an estimate and isn't intrinsic in nature. Under Elo, you tend to be punished for your mistakes only when you lose. An intrinsic system would rate your overall strength based on your performance, with less emphasis on winning or losing.

Also, you will most likely be at a loss trying to account for how such math would apply to players of similar Elo who have different styles of play. Depending on how a game unfolds, the type of positions that arise may fall entirely into the strengths of the player who wasn't supposed to win, while at the same time forcing the player who was supposed to win into their area of greatest weakness.

woton

I don't think that the calculation is possible using a simple formula.  First of all, the percentage of wins for a given rating difference is based on observation and includes wins for both white and black.  This is not like flipping a coin or rolling dice, where all of the possible outcomes are known.  You have to process data from a large number of games and see what happened.

Basically, whoever developed the correlation looked at a large amount of data and did a lot of "number crunching."  These data, and hence the correlation, can change with time.

xman720

The calculation is simple over an arbitrarily large number of games. If w is the rating gained per win and l the rating lost per loss, the loss chance is w/(w + l) and the win chance is l/(w + l).

Assumption 1: Both ELO ratings are accurate.

Assumption 2: Within the confines of this test, the strength of neither player will change.

Conclusion based on assumption 1: If both players play an arbitrarily large number of games, their ELO ratings will stay unchanged.

Now if you gain 6 rating points for winning and lose 8 for losing, and your opponent gains 8 and loses 6, then a 1:1 win ratio contradicts the first assumption that the ELO ratings are accurate. This proves that people with higher ratings have an advantage over people with lower ratings.

Instead, you must win 8 games for every 6 you lose, because we've proved that if the ELO ratings are accurate, they must stay constant over a large number of games. If a 1300 player could gain rating by playing a 1500 player over and over, then one or both of the ratings is not accurate.

Therefore, if you are going to gain 6 and lose 8, you should expect to win 8/14 = 57% of these games in a large group. It is the only solution that doesn't lead to a mathematical contradiction.

Let's say, for some odd reason, I decide to say the answer is 75%. This leads to a contradiction:

After arbitrarily many games, both opponents will have a different rating than the one they started with.

This shows that either the ELO ratings are inaccurate or the ELO system is flawed. Since we are assuming both of those things to be false, there is a contradiction in assuming any answer other than 57%. You can talk about how the problem is "impossible" or "unsolvable" for days, or you can be like IBM and start thinking about it. It's not too complicated, and it doesn't have to take into account player strategies.

This doesn't mean that I can predict the exact outcome of a match between two rated players. What it means is that if you are sitting across the table from a complete stranger and the only information you have to go on is his rating, this is the best answer you can come up with. You cannot do any better without more information.
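The zero-drift argument above can be checked in a few lines; the +6/-8 point values are the ones from the post:

```python
# Rating points at stake per game, taken from the post above:
# the higher-rated player gains 6 for a win and loses 8 for a loss.
gain, loss = 6, 8

# Claimed win chance for the higher-rated player: l / (w + l).
win_chance = loss / (gain + loss)  # 8/14, about 0.571

# If the ratings are accurate, the expected rating change per game is zero.
drift = win_chance * gain - (1 - win_chance) * loss
print(round(win_chance, 3), drift)  # 0.571 0.0
```

Any other win chance makes `drift` nonzero, which is exactly the contradiction the post describes.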

GMrisingJCLmember1

Literally what my thoughts were:


You can't give qualities like who gets to play the first move and playing strength an accurate measurement (these are intangible, though you can try, for example using Elo to measure strength).

Assume these are the only differences between the players (to avoid further unknowns and complications). There are 2 unknowns we need: the percentage advantage from having the first move, and the one from the 55%. Since all the unknowns are unlike terms, it's almost like me saying:

 

e = 5%

 

What is f?

 

Back to attempting to answer your question:

 

Imagine making your question into an equation:

 

55%+e+w

-Since 50% SHOULD be assigned to each player at the beginning, but player 1 was given 55%, there is a +5% chance of winning.

 

-A higher rating by 70 elo gives a +5% of winning.

 

 

-Therefore we can substitute this.

Wait a second, all the things said before were probably wrong; let's take a new approach:

This is a parody of the original question to show why adding in more numbers to get answers is pointless:

Assume these are the ONLY FACTORS in this scenario (to avoid more complexity).

I have a 55% chance of getting another friend.

Because I am kind to others I now have a 60% chance of getting a friend.

Since people in my school are biased towards Asians and I am Asian, what would my chance of getting a new friend be now?

Already we see a flaw in the question (it should start off with a 50% chance of getting a friend, since I either get a friend or don't, making it a 1/2 probability). But to account for it, let's say I am 2x as kind (if you follow what the question says, you get a +5% for being kind).

The point is these are intangible variables and can't be measured (unless one day someone magically makes a conversion for things like that).

BTW please don't dismiss me as stupid (I am not even over 18 yet).

Ziryab
Bittrsweet wrote:

Hello. I am trying to find the overall percentage of winning a game. Let's say that white has a 55% chance of winning a game. 

White's overall score in the database is 55%, but that is not the winning chance. It combines the wins with 1/2 the draws. White's chance of winning is closer to 40%.

SmyslovFan

Ziryab, you're trying to make things more complicated than they are. Change the word "winning" to "scoring". 

The answer has already been given.

SmyslovFan

There is another way to do this though: If White scores 55%, then the equivalent difference in rating for White is 35 Elo. If you add that Elo to the 70 Elo difference, you get a difference of 105 Elo, or a 64.7% "winning" score.

This works especially nicely with huge Elo differences such as Carlsen against a 1200. The chance of Carlsen "winning" is already +99.9% though.


http://www.pradu.us/old/Nov27_2008/Buzz/elotable.html

Notice that even in that web site, they use "winning" to mean "scoring".
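SmyslovFan's translation can be checked with the standard Elo expected-score formula (a sketch; the 35-point bonus is recomputed here rather than taken as given):

```python
import math

def expected_score(elo_diff):
    """Standard Elo expectation for the higher-rated side: 1 / (1 + 10^(-d/400))."""
    return 1 / (1 + 10 ** (-elo_diff / 400))

def elo_equivalent(score):
    """Inverse: the rating difference that produces a given expected score."""
    return -400 * math.log10(1 / score - 1)

white_bonus = elo_equivalent(0.55)           # about 34.9 Elo for White's 55% score
combined = expected_score(white_bonus + 70)  # add the 70-point rating edge
# About 64.6%; rounding the bonus to 35 (a 105-point difference) gives
# the 64.7% quoted in the post.
print(round(white_bonus, 1), round(combined * 100, 1))
```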

woton

Instead of guessing about how to perform a calculation, why not analyze several game databases by using a computer to filter out games where the players have a specific rating difference, divide that group into white wins, black wins, and draws, and average the numbers?  That's how most of these "expected results" (USCF terminology) were arrived at in the first place.
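That procedure can be sketched in a few lines. The game list below is made-up toy data standing in for a real database export:

```python
# Toy stand-in for a database export: (white_elo, black_elo, result),
# where result is White's score: 1 win, 0 loss, 0.5 draw.
games = [
    (1700, 1500, 1), (1690, 1490, 0.5), (1710, 1505, 1),
    (1705, 1500, 0), (1695, 1495, 1), (1700, 1498, 0.5),
]

# Filter to games where White outrates Black by roughly 200 points.
bucket = [r for w, b, r in games if abs((w - b) - 200) <= 15]

wins = sum(1 for r in bucket if r == 1)
draws = sum(1 for r in bucket if r == 0.5)
losses = len(bucket) - wins - draws
score = sum(bucket) / len(bucket)  # average score, the "expected result"
print(wins, draws, losses, round(score, 2))  # 3 2 1 0.67
```

On a real database you would read the rating tags and results out of PGN headers instead of a hard-coded list, but the filtering and averaging are the same.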

SmyslovFan

Woton, this isn't about how chess is played, it's about statistics. 

In a database of over 5 million games, White scores 54.95%. The OP has stipulated the win rate to be 55%. There's no need to question that number.

woton

SmyslovFan

I know that it's about statistics.  During my career, I performed a large number of experiments and did analysis to determine  mean values and standard deviations.  These values were then used to determine the risk of taking equipment out of service for maintenance, repairs, etc.

The OP wants to know the probability of white winning when there is, say, a 200 point rating difference.  Not the same thing as how often white wins overall.

This is not like rolling two dice, where you can count the number of sides, determine the total number of combinations, determine the number of combinations that add to seven, and calculate the probability of rolling a seven.

You have to run experiments and analyze the data from those experiments.  Future predictions are based on what has happened in the past, and the past keeps changing.

Note:  The data are probably already out there, Glickman had to have some basis for the RD calculation and the USCF expected results calculation.

SmyslovFan

The problem is translating between %s and Elo.

I like my solution of translating the win rate for white into Elo, then adding that to the existing Elo difference. That seems a very simple solution. 

Yes, I do recognise that the win rate for white changes as players get better. But as a simple formula, adding 35 Elo points to White's Elo to represent the difference in Elo is both elegant and pretty accurate.

woton
[COMMENT DELETED]
Robert_New_Alekhine
Fiveofswords wrote:

well as a person who studied math in school and has a rather mature grasp of mathematics, i think you just can't approach chess this way. Chess is highly resistant to stochastic reasoning. Many chess players... even good ones... already focus a bit too much on % for various openings etc. More important is looking at the position. And evaluation is intrinsically subjective.

Like personally, i find it much easier to beat players who are stronger. why? because i find their moves more predictable and i can read them more easily. I can provoke a mistake from a strong player. Against some oblivious players I don't know how to do this, i just have to wait for it.

Now I'm proud that my Chessbase has a malfunction and doesn't allow me to see percentages (not that I would want to)

woton

Maybe someone should contact the MIT Media Lab.  They were doing a similar thing for both Millionaire Chess Opens.  Given the board position and the ratings of the players, they would provide the chances of winning for each player.

Bittrsweet

Thank you SmyslovFan. Your solution to the problem is, as you said, both elegant and accurate.

u0110001101101000
SmyslovFan wrote:

The problem is translating between %s and Elo.

I like my solution of translating the win rate for white into Elo, then adding that to the existing Elo difference. That seems a very simple solution. 

Yes, I do recognise that the win rate for white changes as players get better. But as a simple formula, adding 35 Elo points to White's Elo to represent the difference in Elo is both elegant and pretty accurate.

Yes, so without a lot of data I think you're right that this is a good way to approach it.