What do you think GM Hikaru's IQ is?

crazedrat1000
DiogenesDue wrote:

Lol. So...you never bothered to click any of the links I posted, did you? If you had, you would have seen this already:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6107969/table/tbl1/?report=objectonly

Your listed values are 100% accurate...you just interpreted them poorly. Everything from 0.1 up to 0.3 is "weak", and above that is "moderate". So if you read a source telling you that the cutoff for weak is 0.1 and for moderate is 0.3, why on earth would you assume that 0.3 is the middle of the moderate range and not the minimum/start of that range? Does that sound like a precise, scientific way to interpret what you posted?

The Emory site simply doesn't say that, you did. What it says is that 0.1 is a mild correlation, and 0.3 is moderate. And I think they're speaking loosely, if you want to know the truth, because they aren't imagining this as some discrete mechanical scale the way that you are. If 0.3 is moderate, 0.29 is not going to be mild, it's going to be "basically moderate" - the scale is continuous, these are subjective descriptors, it's not some discrete mechanical scale. If you really want to interpret the strength of a correlation, you need to interpret it in context alongside other research on the same subject, considering the design of the study, the thing you're measuring, the question you're asking, and so on; there is no official formula that will give you a precise answer. That's really the truth of it; I don't even think they thought about this scale in the way that you are.

In either case, what we have is a misalignment between your source and mine. So the debate is a wash - I really think Cohen gives a much more useful and practical scale for use within social science, because it corresponds with how psychology is actually practiced in the real world. Your source is just a linear scale; it looks like it was constructed without much thought or any real consideration for practical interpretation. It is just a mindless linear progression you're referencing, making basically no attempt whatsoever to interpret the correlation within the context of psychology - which I've been pointing out the need to do all along. Obviously the people at Emory agree with me. But you can stick with your little linear scale if you want, it's not too essential.

Btw - you missed a paragraph in the preceding post, you should address that.

I like my little one-liners at the end, they tend to get under the skin of internet egomaniacs such as yourself, something I do enjoy doing.

Keep trying

TheJaguarGambit
ibrust wrote:
DiogenesDue wrote:
 

I just explained to you that the Dancey and Reidy scale *is* the de facto psychology correlation scale, and that they consider 0.1 to 0.3 a weak correlation. You can't gloss over it or pretend it's not there...you were just flat out wrong when you said it was moderate, etc. In fact, 0.22 isn't even close; it's almost dead center of the "weak" range. Trying to explain how psychology is a soft science changes nothing in this regard; it's just the rationalization you used to convince yourself (you used "common sense", perhaps).

Emory University is one of the premier health and humanities universities... I think it's also in the top 25 ranking of Ivy League universities. On their psychology department's website they have a page discussing the interpretation of effect sizes in psychology:

Correlation (emory.edu)

In psychological research, we use Cohen's (1988) conventions to interpret effect size. A correlation coefficient of .10 is thought to represent a weak or small association; a correlation coefficient of .30 is considered a moderate correlation; and a correlation coefficient of .50 or larger is thought to represent a strong or large correlation.

This is the convention they cite... it is directly contrary to your claim that 0.1-0.3 is weak, i.e. that 0.2 sits dead center of the "weak" range. What I said originally was that 0.24 was moderate. Well, my claim is much closer to Cohen's convention than yours. Now, as you've pointed out, the correlation is reduced to 0.22... so we probably have to say the effect is mild-to-moderate.

So Cohen wrote this book back in 1988; this is apparently a longstanding convention in psychology, according to the professors at Emory. It's a convention that actually makes sense within the field of psychology... On the other hand, from you I'm just hearing this rote insistence that some other specific math book is the gold standard and defines things differently, according to you... you're making comments about social science conventions you don't even appear qualified to make. I'm assuming you own this math book, but I've seen no actual citation of your math book on this issue from you, so I can't interpret what it's saying contextually... I also see no intelligent reasoning or interpretation of the practice of social psychology provided by you... so no, I don't think I acknowledge this appeal to convention that you're making. Everything I see on the web says moderate is about 0.3; this is what makes sense, what I was taught at university when I took statistics in psychology... and what Emory / Cohen go with. Clearly Cohen's work is a longstanding convention; at best the argument is a wash, and at worst you're either misreading... or misinterpreting, or stretching the truth, or just full of it, who knows.

Anyway, I find it hilarious that you're doubling down on what the math books say as if this is the most critical point... you're still missing it - I'm not even really hinging the point on this; in other fields 0.2 is certainly a fairly weak correlation. What I'm really describing is how social science is practiced. This is what's relevant, because you're presuming to do social science. A math book is generally not going to presume to teach very much about the practical aspects of social science to students; it's going to leave that for other courses and mostly just teach students the math itself. Anyone who's majored in or worked in a STEM field will tell you there is a world of difference between the book learning they got in school and how their field is actually practiced... have you ever worked in a STEM field...? I've worked in two separate STEM fields, engineering and social science. What I'm telling you is that social science is a field made up of 0.2 correlations, and so within the field you cannot just dismiss such a correlation or consider it trivial; you would be left unable to do social science. Your study, which yielded an effect of r=0.22, should not be ignored by a serious social scientist. That is the important point; you ignored it, but I'm repeating it again for you. And the social scientists who wrote that correction don't disregard the effect; they state explicitly that the conclusion of the study remains unchanged.

In reality an effect in social science is almost always predicted by at least 1-2 dozen core factors. So you end up with a complex equation for predicting your effect, one which has 1-2 dozen inputs and produces the effect as the output. The core factors are usually found through some broad analysis of hundreds of traits, and then through a process of factor analysis. So in our conversation what we'd say is that IQ is one of 1-2 dozen factors which predict chess playing ability, along with many other factors such as hours practiced, studying habits, age at which the person first learned, the person's sex, probably their current age, and half a dozen other factors you could probably identify. That's how this works in reality.
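(Aside, not part of the quoted post: a minimal sketch in Python of the kind of multi-factor model described above, with simulated data and hypothetical predictor names - IQ, hours practiced, starting age - purely to illustrate how several inputs jointly predict one outcome.)

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors (all simulated for illustration only)
iq = rng.normal(100, 15, n)
hours_practiced = rng.gamma(2.0, 500.0, n)
age_started = rng.integers(5, 30, n)

# Hypothetical "true" relationship plus noise
rating = 800 + 3 * iq + 0.4 * hours_practiced - 10 * age_started + rng.normal(0, 200, n)

# Ordinary least squares with all predictors entered at once
X = np.column_stack([np.ones(n), iq, hours_practiced, age_started])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print("intercept, iq, hours, age_started:", np.round(coef, 2))

# Compare the R^2 of the full model with the r^2 of IQ alone
pred = X @ coef
r2_full = 1 - np.sum((rating - pred) ** 2) / np.sum((rating - rating.mean()) ** 2)
r_iq = np.corrcoef(iq, rating)[0, 1]
print("R^2, full model:", round(r2_full, 3), "| r^2, IQ alone:", round(r_iq ** 2, 3))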

The other thing I will mention, yet again, is that in statistics the sample correlation r is inversely proportional to the confidence interval p. So when you have a confidence interval of p=0.001, an extremely conservative confidence interval, it becomes quite meaningless to squabble over whether a correlation r is moderate or mild-to-moderate. I could literally just change p to 0.05 or 0.01 and the correlation would shoot right up to moderate immediately; it's quite a meaningless thing you're squabbling over. But you don't really realize how meaningless it is, because again, you're not a social scientist.

I think it would be 125 - 175

DiogenesDue
ibrust wrote:

The Emory site simply doesn't say that, you did. What it says is that 0.1 is a mild correlation, and 0.3 is moderate. It ends there. And I think they're speaking loosely, if you want to know the truth, because they aren't imagining this as some discrete mechanical scale the way that you are. If 0.3 is moderate, 0.29 is not going to be mild, it's going to be "basically moderate" - the scale is continuous, these are subjective descriptors, it's not some discrete mechanical scale.

In either case, what we have is a misalignment between your source and mine. So the debate is a wash - I really think Cohen gives a much more useful and practical scale for use within social science, because it corresponds with how psychology is actually practiced in the real world. Your source is just a linear scale; it looks like it was constructed without much thought or any real consideration for practical interpretation. Obviously the people at Emory agree with me. But you can stick with your little linear scale if you want, it's not too essential.

Btw - you missed a paragraph in the preceding post, you should address that.

I like my little one-liners at the end, they tend to get under the skin of internet egomaniacs such as yourself, something I do enjoy doing.

Keep trying

I didn't miss anything (you did erase/ignore most of my reply, though). There's no misalignment of sources, either...both sources support my position.

I have a very thick skin, especially when it comes to input from people I have less than average respect for. I've had discussions with countless posters just like you, and the tactic is always the same when it reaches this point (the point where you've lost the exchange). Imply that I am overly upset about things and must be raging, and/or that I live in a basement, etc. It's the standard refuge of posters that have run out of ways to defend themselves.

Don't mistake my directness and lack of fluffy rainbows for me being out of sorts. I am just very direct when I am holding up a mirror to someone's behavior. It's measured, and intentional. But that's as far as I go...I won't cuss at you, wish that your dog dies, etc.

crazedrat1000

I ignored your post mostly, yeah, because I'm losing interest. It was mostly irrelevant; the only interesting part was the part I addressed there. The rest of it... well, there was some weird speculation on why I chose to say "meaningful" instead of "significant" somewhere - you're way too suspicious and confused, going down a rabbit hole there for reasons I don't understand. Then of course there was the "respect mah authoritaaa!!!", some meaningless attempt, it appears, to establish your status as a manager. And now it's just becoming "the scale is discrete! Why would you ever say it's continuous, how could it be?! I win I win you lose you lose!"; the conversation is really degrading at this point.

Here's the paragraph you missed though, and I'm sure you'll come up with some new way to completely not address this point / loudly proclaim victory. I'm not sure if anyone besides me is even following this conversation at this point, maybe the audience in your head cares. Overall this is getting very redundant though -

The other thing I will mention, yet again, is that in statistics the sample correlation r is inversely proportional to the confidence interval p. So when you have a confidence interval of p=0.001, an extremely conservative confidence interval, it becomes quite meaningless to squabble over whether a correlation r is moderate or mild-to-moderate. I could literally just change p to 0.05 or 0.01 and the correlation would shoot right up to moderate immediately; it's quite a meaningless thing you're squabbling over. But you don't really realize how meaningless it is, because again, you're not a social scientist.

DiogenesDue
ibrust wrote:

I ignored your post mostly, yeah, because I'm losing interest. It was mostly irrelevant; the only interesting part was the part I addressed there. The rest of it... well, there was some weird speculation on why I chose to say "meaningful" instead of "significant" somewhere - you're way too suspicious and confused, going down a rabbit hole there for reasons I don't understand. And now it's just becoming "the scale is discrete! Why would you ever say it's continuous, how could it be?! I win I win you lose you lose!"; the conversation is really degrading at this point.

Here's the paragraph you missed though -

The other thing I will mention, yet again, is that in statistics the sample correlation r is inversely proportional to the confidence interval p. So when you have a confidence interval of p=0.001, an extremely conservative confidence interval, it becomes quite meaningless to squabble over whether a correlation r is moderate or mild-to-moderate. I could literally just change p to 0.05 or 0.01 and the correlation would shoot right up to moderate immediately; it's quite a meaningless thing you're squabbling over. But you don't really realize how meaningless it is, because again, you're not a social scientist.

- You are misinterpreting again. I said that the "retraction" of the meta-analysis by its own authors changed those two words, not you.

- I never uttered anything remotely like your paraphrasing

- You added that paragraph afterwards in an edit, but here you go:

You stated that a sample correlation (r) is "inversely proportional" to the confidence interval (p), which is not accurate. Correlation coefficients (r) and p-values (p) are related, but aren't inversely proportional. The correlation coefficient (r) measures the strength and direction of the relationship between two variables, while the p-value (p) assesses the likelihood that the observed correlation occurred by chance. A small p-value (such as p < 0.001) means the result is statistically significant, but it doesn't affect the magnitude of r. The confidence interval provides a range within which the true correlation is likely to fall, but the p-value is not equivalent to or directly interchangeable with the confidence interval.

You implied that by changing the p-value threshold (from p=0.001 to p=0.05 or p=0.01), the correlation would "shoot right up to moderate". Incorrect. Changing the p-value threshold does not affect the strength of the correlation. The correlation coefficient remains constant regardless of the p-value threshold. A p-value only indicates how likely the observed correlation is to have occurred by chance. Relaxing the p-value threshold (e.g., from 0.001 to 0.05) simply makes it easier to declare the result statistically significant, but it does not change the strength of the correlation. A weak correlation (0.22, in this case) will remain weak even if you change the p-value threshold.

You dismiss whether a correlation is weak or moderate as "meaningless" due to the p-value being small (p=0.001). The distinction between weak and moderate correlations is not meaningless, even if the result is statistically significant. In fact, the strength of the correlation is just as important as statistical significance when interpreting practical importance. A statistically significant weak correlation still implies a weak relationship between variables. Statistical significance tells us whether the relationship exists or not, but the correlation coefficient tells us how strong that relationship is.
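(A minimal sketch, with simulated data, of the point above: r is a property of the data, the p-value depends on r and the sample size, and the chosen significance threshold changes neither.)

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
y = 0.22 * x + rng.normal(size=n)   # weak-ish simulated relationship

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.4g}")

# Changing the significance threshold changes the verdict, never r itself
for alpha in (0.05, 0.01, 0.001):
    print(f"alpha = {alpha}: significant = {p < alpha}, r is still {r:.3f}")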

crazedrat1000
DiogenesDue wrote:

- You added that paragraph afterwards in an edit, but here you go:

You stated that a sample correlation (r) is "inversely proportional" to the confidence interval (p), which is not accurate. Correlation coefficients (r) and p-values (p) are related, but aren't inversely proportional. The correlation coefficient (r) measures the strength and direction of the relationship between two variables, while the p-value (p) assesses the likelihood that the observed correlation occurred by chance. A small p-value (such as p < 0.001) means the result is statistically significant, but it doesn't affect the magnitude of r. The confidence interval provides a range within which the true correlation is likely to fall, but the p-value is not equivalent to or directly interchangeable with the confidence interval.

You implied that by changing the p-value threshold (from p=0.001 to p=0.05 or p=0.01), the correlation would "shoot right up to moderate". Incorrect. Changing the p-value threshold does not affect the strength of the correlation. The correlation coefficient remains constant regardless of the p-value threshold. A p-value only indicates how likely the observed correlation is to have occurred by chance. Relaxing the p-value threshold (e.g., from 0.001 to 0.05) simply makes it easier to declare the result statistically significant, but it does not change the strength of the correlation. A weak correlation (0.22, in this case) will remain weak even if you change the p-value threshold.

You dismiss whether a correlation is weak or moderate as "meaningless" due to the p-value being small (p=0.001). The distinction between weak and moderate correlations is not meaningless, even if the result is statistically significant. In fact, the strength of the correlation is just as important as statistical significance when interpreting practical importance. A statistically significant weak correlation still implies a weak relationship between variables. Statistical significance tells us whether the relationship exists or not, but the correlation coefficient tells us how strong that relationship is.

You are correct on one point, because I said sample correlation where I should have said population correlation. However, you are wrong on the broader point, namely your interpretation of this result and of the relationship between r and p:

r=0.22, p<0.001

This is the result of a left-tailed test of correlation. r here is the population correlation, not the sample correlation. You're correct that you couldn't change the confidence interval and modify the sample correlation - by changing p what you're modifying is the left tail of the confidence interval, i.e. the hypothesized population correlation. The correlation result r of the study we're talking about here is a hypothesized population correlation - that's why r is qualified by p; it must be interpreted alongside p. If this study used a normal confidence interval, like p<0.01, the confidence interval would become more narrow and the left tail of the confidence interval, which is 0.22 here - i.e. the hypothesized population correlation - would increase to a moderate correlation. That is the meaningful point I am making about the results of the study. And yes, this is an inversely proportional relationship: as the confidence interval widens, the left tail of the test, the hypothesized population correlation, approaches zero, i.e. reduces in significance... and vice versa.

What this really shows is how difficult it is to interpret statistics correctly, and how precise you really have to be. For example, I've used the term "confidence interval" loosely as well, but really p here is the lower confidence limit. The interval is actually the space between the upper and lower limits.

So anyway, the point about our conversation remains - if this study were to use a normal confidence interval (or limit), like p<0.01, it would have produced a more moderate correlation (even by your discrete interpretation of that term) between IQ and chess playing ability in the population. How much more moderate? Hard to say. But yes, this debate is meaningless. It has been meaningless the entire time.
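(For anyone following along, a minimal sketch of how the lower confidence limit for a correlation of 0.22 moves with the chosen confidence level, using the standard Fisher z-transform; the sample size n = 300 is an assumption for illustration, not the actual study's.)

import numpy as np
from scipy import stats

r, n = 0.22, 300            # n is assumed; the study's n may differ
z = np.arctanh(r)           # Fisher z-transform of the sample correlation
se = 1.0 / np.sqrt(n - 3)   # standard error of z

for conf in (0.90, 0.95, 0.99, 0.999):
    zcrit = stats.norm.ppf(1 - (1 - conf) / 2)
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    print(f"{conf:.3f} two-sided CI for r: ({lo:.3f}, {hi:.3f})")

# Note: the point estimate r = 0.22 does not move; only the interval around it does.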

DiogenesDue
ibrust wrote:

You are correct on one point, because I said sample correlation where I should have said population correlation. However, you are wrong on the broader point, namely your interpretation of this result and of the relationship between r and p:

r=0.22, p<0.001

This is the result of a left-tailed test of correlation. r here is the population correlation, not the sample correlation. You're correct that you couldn't change the confidence interval and modify the sample correlation - by changing p what you're modifying is the left tail of the confidence interval, i.e. the hypothesized population correlation. The correlation result r of the study we're talking about here is a hypothesized population correlation - that's why r is qualified by p; it must be interpreted alongside p. If this study used a normal confidence interval, like p<0.01, the confidence interval would become more narrow and the left tail of the confidence interval, which is 0.22 here - i.e. the hypothesized population correlation - would increase to a moderate correlation. That is the meaningful point I am making about the results of the study. And yes, this is an inversely proportional relationship: as the confidence interval widens, the left tail of the test, the hypothesized population correlation, approaches zero, i.e. reduces in significance... and vice versa.

What this really shows is how difficult it is to interpret statistics correctly, and how precise you really have to be. For example, I've used the term "confidence interval" loosely as well, but really p here is the lower confidence limit. The interval is actually the space between the upper and lower limits.

So anyway, the point about our conversation remains - if this study were to use a normal confidence interval (or limit), like p<0.01, it would have produced a more moderate correlation (even by your discrete interpretation of that term) between IQ and chess playing ability in the population. How much more moderate? Hard to say. But yes, this debate is meaningless. It has been meaningless the entire time.

You can't call it a left-tailed test unless you were presuming a negative correlation from the get-go, which is the opposite of the premise here...?

In any case, I am going to partially agree...things seem to be becoming meaningless, but I suspect it's because I am actually arguing with a 3rd party by proxy at this point.

DiogenesDue
ibrust wrote:

It's left-tailed because the sample correlation is above 0, so to test the null hypothesis (test whether the confidence interval includes zero) you need the lower limit of the confidence interval.

That would call for a two-tailed test, by my understanding. Right-tailed for positive correlations, left-tailed for negative correlations, and two-tailed to check for both (and for zero correlation).

In any case, I need to call it a night I am traveling tomorrow...

crazedrat1000

What we're testing is whether the confidence interval includes 0, that's the null hypothesis. The "tail" of the test is referring to the critical region, not the trailing off region that doesn't need to be tested. So yes, that was a left tailed test. The sample correlation is above zero, we want to see if the confidence interval includes zero, so the critical region to test is to the left of the sample correlation toward zero.

You're correct that Google is needed for confirmation of these things.
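(A minimal sketch of the one-sided vs. two-sided question, using simulated data; the alternative argument to scipy.stats.pearsonr assumes SciPy 1.9 or later.)

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)    # simulated positive relationship

# Same data, three different alternative hypotheses
for alt in ("two-sided", "greater", "less"):
    res = stats.pearsonr(x, y, alternative=alt)
    print(f"{alt:>9}: r = {res.statistic:.3f}, p = {res.pvalue:.4g}")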

crazedrat1000

Well if you know you have talent in something... this can encourage you to invest time in cultivating the talent. If you know your IQ is high... maybe this encourages you to invest time in educating yourself, in developing your mind. Maybe it encourages you to practice more at chess. On the other hand... if your IQ isn't super-high maybe you find this very discouraging and you don't even pursue education, maybe you give up on chess...

DiogenesDue
micwhite wrote:

Not meaning to break up what is ongoing in conversation but can I enquire what the thoughts are on the value of having a known high IQ?

A practical use I've seen of it was taking a closely related 11+ test to get into a selective school. Of course, you could argue that being in Mensa is some kind of perk, that you're in a club with "like-minded" people and that's good for the talk and groups. Some people include Mensa on their CV, but when has that really made a difference?

I have never discerned any value other than bragging rights. Mensa membership is a dime a dozen (and not to disparage the whole organization, but some chapters have been known to fake IQ test results to get more members, and thus membership dues)...

Triple Nine Society and up (more exclusive versions of Mensa) may eventually reach some kind of qualitative value, but I kind of doubt it.

Edit: I saw your "hmmm" emoji, but consider the actual set/parameters of those that seek Mensa membership:

- People who may or may not actually have Mensa-level IQs but who want to be perceived by others as being "certified" intelligent

- People who may or may not actually have Mensa-level IQs who want to reassure themselves they are "certified" as intelligent

- People that have hung out with Mensa members or at Mensa events and genuinely enjoy them and want to participate on a regular basis

Let me assure you that the 3rd set of people is the smallest set...the words "insufferable", "stilted", and "cringeworthy" spring to mind, and while these stereotypes are far from absolute, you will probably run into one or more occurrences of one or more of these reactions...but please check it out for yourself and let us know how it goes. You never know.

Alexeivich94

Humans don't even have a very good understanding of what intelligence is. Intellectual performance can come in so many different forms that it's very difficult to measure.

Elroch

Seems to be some confusion about "significant" in this discussion. You don't need a strong correlation for it to be significant. Often the word is used in such discussions to indicate some sort of statistical criterion, typically the popular frequentist definition of significance (a somewhat flawed choice, but that is another discussion), with the general purpose of supporting a belief that there is a non-random relationship between the variables.

A large sample size can make a very weak correlation statistically significant (for frequentist or Bayesian notions of significance), letting you legitimately conclude that the correlation is probably not random.
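A minimal sketch of this point with simulated data: the same weak underlying correlation is nowhere near significant at n = 30 but overwhelmingly significant at n = 10,000, without getting any stronger.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
for n in (30, 300, 10_000):
    x = rng.normal(size=n)
    y = 0.2 * x + rng.normal(size=n)   # same weak underlying relationship each time
    r, p = stats.pearsonr(x, y)
    print(f"n = {n:>6}: r = {r:.3f}, p = {p:.2e}")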

For chess, it seems obvious that people of unusually low intelligence are unlikely even to be able to play chess, immediately providing a causal relationship between chess playing ability and IQ - as long as we don't skew the sample by first ignoring all the people who can't play chess! At the other extreme there seems to be general agreement that GMs typically have highly significantly above average IQs, giving high plausibility to at least some degree of correlation of skill at chess with IQ among those who can play chess.

But note an important point - if it is true that the average IQ of a GM is 135 (plausible claim that could do with better evidence), there are several hundred times as many chess players with IQs over 135 who are not GMs! Both these facts are consistent with a significant but not very strong correlation between IQ and chess rating, which is exactly what I would expect.

Elroch
llama_l wrote:
Elroch wrote:

Seems to be some confusion about "significant" in this discussion. You don't need a strong correlation for it to be significant. Often the word is used in such discussions to indicate some sort of statistical criterion, typically the popular frequentist definition of significance (a somewhat flawed choice, but that is another discussion), with the general purpose of supporting a belief that there is a non-random relationship between the variables.

A large sample size can make a very weak correlation statistically significant (for frequentist or Bayesian notions of significance), letting you legitimately conclude that the correlation is probably not random.

For chess, it seems obvious that people of unusually low intelligence are unlikely even to be able to play chess, immediately providing a causal relationship between chess playing ability and IQ - as long as we don't skew the sample by first ignoring all the people who can't play chess! At the other extreme there seems to be general agreement that GMs typically have highly significantly above average IQs, giving high plausibility to at least some degree of correlation of skill at chess with IQ among those who can play chess.

But note an important point - if it is true that the average IQ of a GM is 135 (plausible claim that could do with better evidence), there are several hundred times as many chess players with IQs over 135 who are not GMs! Both these facts are consistent with a significant but not very strong correlation between IQ and chess rating, which is exactly what I would expect.

Googling to check gave results of Kasparov scoring 123 on Raven's and 135 "on another." I was only familiar with the 135 result.

I doubt IQ among GMs averages to 130, but it might, and anyway as you said there's obviously a correlation.

I found the 135 number plausible because it's not that extreme an IQ - 1% of people have a higher IQ (including, long ago, with all due humility, me) - and being a GM is a rather high level of achievement - one in a few hundred thousand of those who play chess at any level. The average IQ of doctors is around 130 and, while being a doctor is a very skilled and important job, it is less dominated by abstract problem solving than playing chess (not to mention the much larger number of doctors - albeit mostly explained by being a much wiser choice of profession than being a chess player!) Some sources say the average IQ of maths PhDs is higher, which is consistent with my notion that problem solving is key to IQ, and that this should be reflected in chess.
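A quick back-of-envelope check of that 1% figure, assuming IQ is normally distributed with mean 100 and standard deviation 15:

from scipy import stats

z = (135 - 100) / 15
print(f"fraction of people with IQ above 135: {stats.norm.sf(z):.2%}")   # roughly 1%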

Elroch
llama_l wrote:

Here's an IQ-like puzzle question. See how you compare against a few top GMs...

At 2 minutes in the video, pause after you hear the question because due to editing they start talking / answering within a few seconds.

https://www.youtube.com/watch?v=xlUfsrk179I&ab_channel=chess24

Only one of them reasoned out the answer, one of them guessed correctly, and one of them guessed incorrectly.

Yeah! I beat the one who got it wrong and had a specific arrangement in mind (did not prove it was optimal, but was very confident). Will cherish that forever.

 It only took about 10 seconds to see, even though I first saw a less efficient attempt that gives 75% of the number.

Ziryab
ibrust wrote:
DiogenesDue wrote:
 

I just explained to you that the Dancey and Reidy scale *is* the de facto psychology correlation scale, and that they consider 0.1 to 0.3 a weak correlation. You can't gloss over it or pretend it's not there...you were just flat out wrong when you said it was moderate, etc. In fact, 0.22 isn't even close; it's almost dead center of the "weak" range. Trying to explain how psychology is a soft science changes nothing in this regard; it's just the rationalization you used to convince yourself (you used "common sense", perhaps).

Emory University is one of the premier health and humanities universities... I think it's also in the top 25 ranking of Ivy League universities. On their psychology department's website they have a page discussing the interpretation of effect sizes in psychology:

Correlation (emory.edu)

In psychological research, we use Cohen's (1988) conventions to interpret effect size. A correlation coefficient of .10 is thought to represent a weak or small association; a correlation coefficient of .30 is considered a moderate correlation; and a correlation coefficient of .50 or larger is thought to represent a strong or large correlation.

This is the convention they cite... it is directly contrary to your claim that 0.1-0.3 is weak, i.e. that 0.2 sits dead center of the "weak" range. What I said originally was that 0.24 was moderate. Well, my claim is much closer to Cohen's convention than yours. Now, as you've pointed out, the correlation is reduced to 0.22... so we probably have to say the effect is mild-to-moderate.

So Cohen wrote this book back in 1988; this is apparently a longstanding convention in psychology, according to the professors at Emory. It's a convention that actually makes sense within the field of psychology... On the other hand, from you I'm just hearing this rote insistence that some other specific math book is the gold standard and defines things differently, according to you... you're making comments about social science conventions you don't even appear qualified to make. I'm assuming you own this math book, but I've seen no actual citation of your math book on this issue from you, so I can't interpret what it's saying contextually... I also see no intelligent reasoning or interpretation of the practice of social psychology provided by you... so no, I don't think I acknowledge this appeal to convention that you're making. Everything I see on the web says moderate is about 0.3; this is what makes sense, what I was taught at university when I took statistics in psychology... and what Emory / Cohen go with. Clearly Cohen's work is a longstanding convention; at best the argument is a wash, and at worst you're either misreading... or misinterpreting, or stretching the truth, or just full of it, who knows.

Anyway, I find it hilarious that you're doubling down on what the math books say as if this is the most critical point... you're still missing it - I'm not even really hinging the point on this; in other fields 0.2 is certainly a fairly weak correlation. What I'm really describing is how social science is practiced. This is what's relevant, because you're presuming to do social science. A math book is generally not going to presume to teach very much about the practical aspects of social science to students; it's going to leave that for other courses and mostly just teach students the math itself. Anyone who's majored in or worked in a STEM field will tell you there is a world of difference between the book learning they got in school and how their field is actually practiced... have you ever worked in a STEM field...? I've worked in two separate STEM fields, engineering and social science. What I'm telling you is that social science is a field made up of 0.2 correlations, and so within the field you cannot just dismiss such a correlation or consider it trivial; you would be left unable to do social science. Your study, which yielded an effect of r=0.22, should not be ignored by a serious social scientist. That is the important point; you ignored it, but I'm repeating it again for you. And the social scientists who wrote that correction don't disregard the effect; they state explicitly that the conclusion of the study remains unchanged.

In reality an effect in social science is almost always predicted by at least 1-2 dozen core factors. So you end up with a complex equation for predicting your effect, one which has 1-2 dozen inputs and produces the effect as the output. The core factors are usually found through some broad analysis of hundreds of traits, and then through a process of factor analysis. So in our conversation what we'd say is that IQ is one of 1-2 dozen factors which predict chess playing ability, along with many other factors such as hours practiced, studying habits, age at which the person first learned, the person's sex, probably their current age, and half a dozen other factors you could probably identify. That's how this works in reality.

The other thing I will mention, yet again, is that in statistics the sample correlation r is inversely proportional to the confidence interval p. So when you have a confidence interval of p=0.001, an extremely conservative confidence interval, it becomes quite meaningless to squabble over whether a correlation r is moderate or mild-to-moderate. I could literally just change p to 0.05 or 0.01 and the correlation would shoot right up to moderate immediately; it's quite a meaningless thing you're squabbling over. But you don't really realize how meaningless it is, because again, you're not a social scientist.

TL;DR

You lost me with utter nonsense in the first paragraph, anyway. Someone with an IQ of 142 should at least know how to Google Ivy League and learn that none of them are in the South.

Elroch

A more quantitative way of understanding the strength of a relationship is via the notion that one variable explains a certain fraction of the variance of another via the optimal linear model. This relates the variance of what is left over, after a linear model has been allowed for, to the variance of the original variable.

Where two variables have a correlation coefficient of c, each explains c^2 of the variance of the other. For example, a correlation coefficient of 0.25 corresponds to explaining about 6% of the variance.

The standard language is, however, misleading when there is a non-linear relationship, especially a deterministic one. For example, if y = x^3, all of the variance in y is explained by x using the correct cubic relationship, but the calculation using the correlation coefficient does not come to this conclusion, because a linear model is not perfect.
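A minimal sketch of both points: r^2 as the fraction of variance explained by the best linear model, and the cubic example where the Pearson coefficient understates a fully deterministic relationship.

import numpy as np

# (1) r = 0.25 -> about 6% of variance explained by the linear model
r = 0.25
print("variance explained:", r ** 2)        # 0.0625

# (2) y = x^3 is fully determined by x, yet the Pearson r^2 falls short of 1
rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
y = x ** 3
r_cubic = np.corrcoef(x, y)[0, 1]
print(f"Pearson r for y = x^3: {r_cubic:.3f}, r^2 = {r_cubic ** 2:.3f}")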

crazedrat1000
Ziryab wrote:
 

TL;DR

You lost me with utter nonsense in the first paragraph, anyway. Someone with an IQ of 142 should at least know how to Google Ivy League and learn that none of them are in the South.

It's ok, the conversation is above your head anyway, no loss here. 
But I actually got that from Google -

"Emory University is one of 25 top schools in the nation tapped as a "New Ivy" in Kaplan/Newsweek's 2007 "How to Get Into College Guide."

This is in danger of becoming a very stupid debate, but not the first.

Elroch

As an example of the quantitative significance of a relatively weak correlation coefficient like 0.2, this would mean that the variable (say IQ) explains 4% of the variance of the other variable (say chess rating). It's entirely subjective whether you view this as a lot or a little, but I'd be inclined to the latter.

Elroch
ibrust wrote:
Ziryab wrote:
 

TL;DR

You lost me with utter nonsense in the first paragraph, anyway. Someone with an IQ of 142 should at least know how to Google Ivy League and learn that none of them are in the South.

It's ok, the conversation is above your head anyway, no loss here. 
But I actually got that from Google -

As an independent observer of this discussion, you didn't. You replaced the unfamiliar term "New Ivy" with the widely known term "Ivy League" (which I believe always refers to a specific eight very high-performing universities, just as "Oxbridge" refers to a specific two in the UK).

"Emory University is one of 25 top schools in the nation tapped as a "New Ivy" in Kaplan/Newsweek's 2007 "How to Get Into College Guide."

This is in danger of becoming a very stupid debate, but not the first.

To back up my claim of independence, I don't view this carelessness as clearly indicating a lack of a high IQ. But checking facts would definitely be a habit worth acquiring.