What do you think GM Hikaru IQ is?

DiogenesDue
ibrust wrote:

Blah blah blah, ignore repeat ignore repeat double down, appeal to the crowd, repeat ad nauseam...

Anyway, I'm glad to see you didn't bother to dispute the claim: There is a significant correlation between IQ and chess playing ability. So what are you arguing about, exactly? I actually have no idea at this point.

Keep trying!

I already refuted your claim, no need to re-hash when anyone reading the specifics will come to the correct conclusion...that you never should have gone off in the first place.

Your "keep tryings" at this point are like trying to put down sandbags after you already broke the levee.

https://www.chess.com/forum/view/general/relationship-bewteen-chess-rating-and-iq?page=1

https://www.chess.com/forum/view/general/what-is-the-relationship-between-iq-and-chess-1?page=1

https://www.chess.com/forum/view/chess-players/iq-and-chess-the-real-relationship?page=1

https://www.chess.com/forum/view/general/what-does-your-chess-rating-say-about-your-overall-iq?page=1

https://www.chess.com/forum/view/general/chess-and-iq-relationship

https://www.chess.com/forum/view/general/iq-versus-chess-rating4

https://www.chess.com/forum/view/general/relation-of-chess-ability-to-iq

This list doubles or triples if we add the Bobby Fischer/Kasparov/Carlsen IQ threads. You'll note how they mostly go the same way as this thread...someone claims a strong correlation or tries to use this argument to support some dubious claim, gets slapped down, and retreats to "well, there's still some measurable correlation".

crazedrat1000

You will never refute that claim since we have a meta-analysis proving it's correct, and anyone can just read the study and see it for themselves.... you're fighting a losing battle with this one:

The relationship between cognitive ability and chess skill: A comprehensive meta-analysis - ScienceDirect

In fact, I didn't realize this, but the results actually give concrete data on the IQ scores @AZPawnStar -

Results

The participants in the studies represented a wide range of chess skill. For example, across the 7 studies that collected Elo rating, the weighted average was 2018 (SD = 177) and the range was 1311 (an amateur level of skill) to 2607 (an elite level of skill). The participants in the studies also represented a wide range of intelligence/cognitive ability. For example, among the five studies that reported full-scale IQ, the weighted mean was 120.5, and the average standard deviation was 14.8.

So in this one the average Elo is about 2000 and the average IQ about 120, and this IQ stat is compiled across 5 studies.
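For anyone who wants to see how a figure like that 120.5 is pooled, here is a minimal sketch of a sample-size-weighted mean. The per-study means and sample sizes below are hypothetical placeholders (the paper reports only the pooled value), chosen purely to illustrate the mechanics:

```python
# Sketch of a sample-size-weighted mean across studies.
# All per-study values are HYPOTHETICAL placeholders; the paper
# reports only the pooled figure (weighted mean IQ ~ 120.5).
study_means = [118.0, 124.0, 122.0, 117.0, 121.0]  # invented full-scale IQ means
study_ns    = [40, 25, 60, 30, 45]                 # invented sample sizes

weighted_mean = sum(n * m for n, m in zip(study_ns, study_means)) / sum(study_ns)
print(f"weighted mean IQ = {weighted_mean:.1f}")   # 120.5 with these placeholders
```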

The low-IQ population getting outraged and refusing to read studies isn't "slapping down" anything; the study results remain unchanged by the outrage, and they are very clear...

Nope, you just fail completely, I'm afraid.

DiogenesDue
ibrust wrote:

You will never refute that claim since we have a meta-analysis proving it's correct, and anyone can just read the study and see it for themselves.... you're fighting a losing battle with this one:

The relationship between cognitive ability and chess skill: A comprehensive meta-analysis - ScienceDirect

In fact, I didn't realize this, but the results actually give concrete data on the IQ scores @AZPawnStar -

Results

The participants in the studies represented a wide range of chess skill. For example, across the 7 studies that collected Elo rating, the weighted average was 2018 (SD = 177) and the range was 1311 (an amateur level of skill) to 2607 (an elite level of skill). The participants in the studies also represented a wide range of intelligence/cognitive ability. For example, among the five studies that reported full-scale IQ, the weighted mean was 120.5, and the average standard deviation was 14.8.

So in this one the average Elo is about 2000 and the average IQ about 120, and this IQ stat is compiled across 5 studies.

Nope, you just fail completely, I'm afraid.

You wouldn't know the difference, really. You seem unable to understand when a study is being hyped but has actual results that are less than convincing, and whose concluding correlation is *not* really strong enough to support the claims you are making.

The study proves my point more than yours...and once you read the corrigendum, it's even worse (for you).

TL;DR for other posters:

Ibrust is pushing a meta-analysis (read as: a study that did no real work and just tried to reconcile the data from other studies) that had to be amended later because the people involved made a gigantic error.

crazedrat1000

I know it probably hurts emotionally to not have a high IQ... to feel you can never achieve greatness. I'm sorry for you, however, that doesn't change reality. Not everyone gets to be the hero in the story. We aren't all Magnus Carlsen... You have to accept who you are, the fact you aren't super-smart is not the end of the world, most people aren't, and everyone has something special to offer the world. Like a smile, or something.

There is a significant correlation between chess ability and IQ.

DiogenesDue
ibrust wrote:

I know it probably hurts emotionally to not have a high IQ... to feel you can never achieve greatness. I'm sorry for you, however, that doesn't change reality. Not everyone gets to be the hero in the story. We aren't all Magnus Carlsen... You have to accept who you are, the fact you aren't super-smart is not the end of the world, most people aren't, and everyone has something special to offer the world. Like a smile, or something.

There is a significant correlation between chess ability and IQ.

My IQ is higher than Kasparov's (and my SAT score supports that), and I retired in my 40s due to my "greatness", but keep trying to come up with a narrative to support your insecurities. I think your avatar is well chosen.

Ironically, I am still posting to cement in stone that you are acting petty, and you are still posting because you are indeed petty. As I said, win-win for me.

crazedrat1000

Well let's see... my IQ is 142, yours is higher than Kasparov's and his is 135... so right here we have 3 people we know of who have Elos in the top 99th percentile, and also IQs in the top 99th percentile... and you're really smart... you can read the studies on the topic as well... and yet you still doubt the connection. You've gone your entire life succeeding academically and intellectually (except for when it comes to this debate), you've experienced firsthand the fact you succeed effortlessly at everything... you succeeded at chess too... and yet you doubt the connection. Hmmm, very strange. What rational basis do you have for such doubt? I don't understand, it is completely obvious based on the data, common sense, and your own experience that there is a significant correlation.

You've proven something I've noticed and pointed out quite often - that smart people are often very dumb as well. It's one of those interesting contradictions.

DiogenesDue
ibrust wrote:

Well let's see... my IQ is 142, yours is higher than Kasparov's and his is 135... so right here we have 3 people we know of who have Elos in the top 99th percentile, and also IQs in the top 99th percentile... and you're really smart... you can read the studies on the topic as well... and yet you still doubt the connection. You've gone your entire life succeeding academically and intellectually (except for when it comes to this debate), you've experienced firsthand the fact you succeed effortlessly at everything... you succeeded at chess too... and yet you doubt the connection. Hmmm, very strange. What rational basis do you have for such doubt? I don't understand, it is completely obvious based on the data, common sense, and your own experience that there is a significant correlation.

Except that you are wrong on all 3 counts...

- The data is inconclusive (ergo the weak correlation...and even if you try to argue for a "moderate" correlation, it falls far short of "completely obvious").

- "Common sense" is another name for confirmation bias in many cases. The second you hear the phrase, start being skeptical.

- My own experience shows the opposite. Practice and, more importantly, training from experienced chess players who can steer someone to the right resources are far and away the primary indicators. I have played FMs and state champions who seem dull in general, and I have played brilliant engineers in Silicon Valley with $400K salaries and PhDs/MBAs (who were not beginners, mind you) that I could wipe the floor with a hundred times in a row.

crazedrat1000

a) The results of that study are not inconclusive, the study concludes: r=0.24 (a moderate correlation in psychology, not a minor one, for the 3rd time) at a significance level of p < .001, i.e. 1-in-1000 odds of seeing such a correlation by chance, a very conservative threshold (a numeric sketch follows below this post). There is a significant correlation, there is no serious doubt about that.

Here is a quote from a scientist on typical effect sizes: "Whether an effect size is "acceptable" really depends on field and purpose. For example, in social psychology, Pearson correlations around 0.2 are pretty typical."

b) The claim has never been that IQ is a stronger predictor than practice. What your experience shows is that there is a lot of variance and the effect isn't super-strong, because practice matters a lot. This is something I discussed in detail in the very first post you quoted - the good studies will need to control for variables like practice, in order to isolate the effect of IQ. Because again... the question you're asking matters, we're not interested in whether a high-IQ person with very little practice can beat a well-practiced chess player, that's not an interesting question. What we want to know is whether IQ predicts the speed at which a person learns, and correlates with the max Elo attained. That's a lot more relevant to a conversation on average GM IQ. You're also exercising confirmation bias, dismissing out of hand the aspects of your experience that don't support your predetermined conclusion. And you're actually speculating as to the IQs of your FM opponents.

c) when two tasks both primarily tax short- and long-term memory, visual-spatial reasoning, and processing speed, it is not confirmation bias to suspect there will be some correlation between performance on both of them. It is a logical inference. And there is a correlation.

Nonsense, nonsense, and nonsense.
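As referenced above, here is a minimal sketch of what "significant at p < .001" means for a correlation like r = 0.24, using the standard t-test for a Pearson correlation. The pooled sample size n is a hypothetical placeholder; the thread never states it:

```python
import math
from scipy import stats  # for the Student-t distribution

r = 0.24  # correlation reported by the meta-analysis
n = 500   # HYPOTHETICAL pooled sample size, for illustration only

# Standard significance test for a Pearson correlation:
# t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom.
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
p = 2 * stats.t.sf(abs(t), df=n - 2)  # two-tailed p-value

print(f"t = {t:.2f}, p = {p:.1e}")  # with n = 500: t ~ 5.5, p far below .001
```

Note that a small p-value only says the correlation is unlikely to be zero; it says nothing about whether 0.24 counts as weak or moderate, which is the other half of this argument.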

DiogenesDue
ibrust wrote:

a) The results of that study are not inconclusive, the study concludes: r=0.24 (a moderate correlation in psychology, not a minor one, for the 3rd time) at a significance level of p < .001, i.e. 1-in-1000 odds of seeing such a correlation by chance, a very conservative threshold. There is a significant correlation, there is no serious doubt about that.

"Whether an effect size is "acceptable" really depends on field and purpose. For example, in social psychology, Pearson correlations around 0.2 are pretty typical."

b) What your experience shows is that practice matters a lot. This is something I discussed in detail in the very first post you quoted - the good studies will need to control for variables like practice, in order to isolate the effect of IQ. Because again... the question you're asking matters, we're not interested in whether a high-IQ person with very little practice can beat a well-practiced chess player, that's not an interesting question. What we want to know is whether IQ predicts the speed at which a person learns, and correlates with the max Elo attained. That's a lot more relevant to a conversation on average GM IQ.

c) when two tasks both primarily tax short- and long-term memory, visual-spatial reasoning, and processing speed, it is not confirmation bias to suspect there will be some correlation between performance on both of them. It is a logical inference. And there is a correlation.

Nonsense, nonsense, and nonsense.

How many times do I have to mention that the meta-analysis had to be corrected later for you to actually figure it out?

"The overall conclusion that cognitive ability contributes meaningfully to individual differences in chess skill is unchanged; most important, the meta-analytic average of correlations between chess skill and broad cognitive abilities is similar to the originally reported value and still statistically significant (0.24, p < .001, in the original analyses, vs. 0.22, p < .001, in the corrected analyses). However, as shown below in Table 1, there are changes in some specific conclusions. Most notably, while the correlations of chess skill with fluid intelligence (Gf) and short-term/working memory (Gsm) are unaffected, the correlations of chess skill with crystallized intelligence (Gc) and processing speed (Gs) are no longer statistically significant."

While you are still trying to argue that 0.24 is not a weak correlation (it is), 0.22 is even worse. Additionally, the finding that chess ability is affected by fluid intelligence and short-term memory, but not crystallized intelligence or processing speed, dovetails with my argument that chess ability and IQ are only (weakly) correlated in a general "a rising tide lifts all boats" fashion. Pattern recognition (which falls under fluid intelligence) and short-term memory affect pretty much everything. The finding that processing speed does not correlate significantly will be surprising to some, but not to all...speed of calculation has not helped the various super GMs who are great at blitz/bullet to become world champions. Rather, the reverse is true...the best at classical chess can become world class at faster time controls, but the reverse never seems to pan out.

P.S. Your font changes seem to indicate that you are cutting and pasting here...if you are going to bring in others' arguments, just link to them or at least identify that you are doing so.

crazedrat1000

I haven't addressed that correction because it says outright what I'm saying, that cognitive ability contributes meaningfully to individual chess skill - this is exactly what I've been saying the entire time. What is there to address? I'll just end up repeating exactly what the scientists said in that paragraph: the effect changing from 0.24 to 0.22 makes no difference... I've never claimed crystallized intelligence made a difference. I have no idea what time controls the studies used, so I can't comment on the processing speed aspect, though I do suspect that if it were bullet you would be able to find a correlation with processing speed, however if it's rapid or classical probably not... But it doesn't matter, because Gf and short-/long-term memory are enough to make the point... and again, 0.2 is typical in social psychology... and I have never claimed the effect is strong, I've claimed it's significant in the scientific sense of the word, I have already explained this repeatedly... the significance threshold is also so conservative that we are left with no doubt as to whether an effect exists.... you are grasping at straws here....

Ziryab

“Effect sizes were small-to-medium in magnitude; variance in chess skill explained by cognitive ability was similar in magnitude for Gf (6%), Gsm (6%), Gs (6%), and Gc (5%), with an average of 6%. Full-scale IQ explained < 1% of the variance in chess skill.”

With an IQ of 142, you should have read the conclusion with better comprehension.
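Ziryab's percentages follow directly from squaring the correlation: variance explained is r². A two-line check against the figures quoted in this thread:

```python
# Variance explained is the square of the correlation coefficient (r^2).
for r in (0.24, 0.22):
    print(f"r = {r:.2f} -> r^2 = {r**2:.3f} ({r**2:.1%} of variance)")
# 0.24 -> ~5.8%, 0.22 -> ~4.8% -- consistent with the ~6% per-ability
# figures and the small share of variance quoted above.
```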

DiogenesDue
ibrust wrote:

I haven't addressed that correction because it says outright what I'm saying, that cognitive ability contributes meaningfully to individual chess skill - this is exactly what I've been saying the entire time. What is there to address? I'll just end up repeating exactly what the scientists said in that paragraph: the effect changing from 0.24 to 0.22 makes no difference... I've never claimed crystallized intelligence made a difference. I have no idea what time controls the studies used, so I can't comment on the processing speed aspect, though I do suspect that if it were bullet you would be able to find a correlation with processing speed, however if it's rapid or classical probably not... But it doesn't matter, because Gf and short-/long-term memory are enough to make the point... and again, 0.2 is typical in social psychology... and I have never claimed the effect is strong, I've claimed it's significant in the scientific sense of the word, I have already explained this repeatedly... the significance threshold is also so conservative that we are left with no doubt as to whether an effect exists.... you are grasping at straws here....

Lol, no. The change to using the word "meaningful" and removing "significant" changes things by your own previous arguments. You were the one claiming, while trying to berate me, that "significant" was a concrete and not-fuzzy word in the context used - and you claim it again here in this post. "Meaningful", conversely, is not a concrete word. It is a word you use when your meta-analysis has been effectively debunked but you would rather not admit it.

As for your continued insistence that 0.22 or 0.24 are not weak correlations, specifically in psychology, I will point out that the de facto standard for psychology, Dancey and Reidy, regards 0.1 to 0.3 as a weak correlation.

crazedrat1000
Ziryab wrote:
blah blah blah look at these small numbers 

In chess I wouldn't expect IQ to explain most of the variance of chess ability, because an unpracticed chess player with a high IQ is simply not going to beat a practiced chess player with an average IQ. That's because chess is a skill-based game. It's a lot like playing guitar in this respect. I've probably acknowledged this 4 times now... The other issue is that in psychology almost any effect is explained by a dozen or more factors, and to predict it you need a complex equation with many different inputs. So we wouldn't expect IQ to explain most of the variance. For example... how frequently does a person still play the game? That's going to be another factor. When did they start? How seriously do they take it? Do they play OTB or online?

The interesting question is whether IQ predicts max Elo and learning speed. For this you need a carefully designed study... this is why there is such variety in the results of studies; results vary a lot depending on how they control the variables. For example, from the same study...

"...evidence for the relationship between chess skill and cognitive ability is inconsistent. In an early study, Djakow, Petrowski, and Rudik (1927) reported that there were no differences in visuospatial memory and general intelligence between eight grandmasters and non-chess players. More recently, in two studies, Unterrainer and colleagues found near-zero correlations between measures of cognitive ability (full-scale IQ and Raven's) and chess rating (see Unterrainer et al., 2006, Unterrainer et al., 2011). By contrast, Frydman and Lynn (1992) found that elite Belgian youth chess players were approximately one standard deviation higher than the population mean on the performance subscale of the Wechsler Intelligence Scale for Children (WISC), which primarily reflects fluid reasoning. Furthermore, the stronger players had higher WISC performance IQ scores than the weaker players. More recently, using a relatively large sample with a wide range of chess skill, Grabner, Stern, and Neubauer (2007) found a significant positive correlation (r = 0.35) between full-scale IQ and chess rating. Similarly, Ferreira and Palhares (2008) studied ranked youth chess players and found a significant positive correlation (rs = 0.32–0.46) between fluid reasoning and Elo rating. de Bruin, Kok, Leppink, and Camp (2014) had beginning youth chess students complete a chess test, in which they were shown a chess game position and asked to predict the best next move. Performance on the chess test correlated moderately (r = 0.47) with scores on the WISC."

crazedrat1000
DiogenesDue wrote:
 

Additionally, the finding that chess ability is affected by fluid intelligence and short-term memory, but not crystallized intelligence or processing speed, dovetails with my argument that chess ability and IQ are only (weakly) correlated in a general "a rising tide lifts all boats" fashion.

...

Lol, no. The change to using the word "meaningful" and removing "significant" changes things by your own previous arguments. You were the one claiming, while trying to berate me, that "significant" was a concrete and not-fuzzy word in the context used - and you claim it again here in this post. "Meaningful", conversely, is not a concrete word. It is a word you use when your meta-analysis has been effectively debunked but you would rather not admit it.

As for your continued insistence that 0.22 or 0.24 are not weak correlations, specifically in psychology, I will point out that the de facto standard for psychology, Dancey and Reidy, regards 0.1 to 0.3 as a weak correlation.

The debate has always been about whether some significant effect exists - i.e. statistically significant, significant in the scientific sense of the word. You didn't begin this debate, and you didn't set the premises of the debate - you chimed into my conversation, and now you're trying to redefine what the debate is / has been about. But in doing so you dispense with the vast majority of the disagreement; this is now a mostly meaningless conversation. I have just not really strongly argued what you are arguing about. My main recommendation would be that, in the future, you read more carefully before chiming in - like start with the first post in a conversation (the one you missed, then tried to ignore by suggesting the entire 38-page thread was the relevant context) and follow the logic from there. Had you done that in this instance you'd have saved us both a lot of time. This business on whether the effect shown in that study is moderate or weak has always been a tangent; that's certainly not what I was discussing with others before you inserted yourself into the debate.

The correction you quoted maintains the study's results are significant... there is no veiled attempt to hide something here, there is nothing to hide, you have not debunked anything, you have quoted a paragraph that has proven my point, that is all...

Now regarding this tangent of yours - I haven't questioned the fact that, from a traditional mathematical standpoint, 0.2 is a weak correlation. I'm speaking to the practice of psychology, not some idealistic notion of statistics. Statistics would just argue that practically every correlation in psychology is weak. The fact is psychology is a very soft science; you're only going to get weak correlations - and so in practice you must aggregate many factors - many weak correlations - to explain just about anything. This isn't really up for debate, it is the way things work in psychology, I assure you... appealing to a math book is really missing the point and not the way to understand how psychology is practiced in the real world.

For example, if I were a social scientist I would not dismiss an effect of 0.22 as "insignificant", or I'd be forced to dismiss practically every effect I ever found as insignificant; I'd be unable to practice social science. This would be a very dumb and ineffective line of reasoning for a social scientist to adhere to. And in this conversation we're basically presuming to act as social scientists for answering this question... so we may as well try to think like competent ones. Even if that's obviously really not the case.

MaetsNori

Being skilled at chess doesn't necessarily mean one has a high IQ.

High chess skill means that the individual has spent years obsessing over this singular activity, and has learned to excel at it as a result - often to the detriment of many other skills in life.

This is true with most things, actually. You can't be great at everything. You pick and choose what to put your time and efforts into ...

crazedrat1000

Well this is a bit of a tangent, but one thing worth pointing out... is that the ability to practice, i.e. the ability to sit there for long hours and focus your attention on one task - in this case a primarily mental one - could be closely related to, or even be an aspect of, intelligence, because it has to do with the strength of the nervous system. And I say intelligence, rather than IQ, because IQ tests how a person responds to novelty in a very short time window. However, when you really look at the great mental achievements people make... usually they're the results of years and years of intense mental effort. But even IQ is known to correlate closely with the strength of the nervous system. So when you start controlling for hours of practice, trying to isolate IQ, you may be excluding some important aspect of intelligence in the process.

DiogenesDue
ibrust wrote:

The debate has always been about whether some significant effect exists - i.e. statistically significant, significant in the scientific sense of the word. You didn't begin this debate, and you didn't set the premises of the debate - you chimed into my conversation, and now you're trying to redefine what the debate is / has been about. But in doing so you dispense with the vast majority of the disagreement; this is now a mostly meaningless conversation. I have just not really strongly argued what you are arguing about. My main recommendation would be that, in the future, you read more carefully before chiming in - like start with the first post in a conversation (the one you missed, then tried to ignore by suggesting the entire 38-page thread was the relevant context) and follow the logic from there. Had you done that in this instance you'd have saved us both a lot of time. This business on whether the effect shown in that study is moderate or weak has always been a tangent; that's certainly not what I was discussing with others before you inserted yourself into the debate.

The correction you quoted maintains the study's results are significant... there is no veiled attempt to hide something here, there is nothing to hide, you have not debunked anything, you have quoted a paragraph that has proven my point, that is all...

Now regarding this tangent of yours - I haven't questioned the fact that, from a traditional mathematical standpoint, 0.2 is a weak correlation. I'm speaking to the practice of psychology, not some idealistic notion of statistics. Statistics would just argue that practically every correlation in psychology is weak. The fact is psychology is a very soft science; you're only going to get weak correlations - and so in practice you must aggregate many factors - many weak correlations - to explain just about anything. This isn't really up for debate, it is the way things work in psychology, I assure you... appealing to a math book is really missing the point and not the way to understand how psychology is practiced in the real world.

For example, if I were a social scientist I would not dismiss an effect of 0.22 as "insignificant", or I'd be forced to dismiss practically every effect I ever found as insignificant; I'd be unable to practice social science. This would be a very dumb and ineffective line of reasoning for a social scientist to adhere to. And in this conversation we're basically presuming to act as social scientists for answering this question... so we may as well try to think like competent ones. Even if that's obviously really not the case.

I just explained to you that Dancey and Reidy *is* the de facto psychology correlation scale, and that they consider 0.1 to 0.3 a weak correlation. You can't gloss over it or pretend it's not there...you were just flat-out wrong when you said it was moderate; in fact, 0.22 isn't even close, it's almost dead center of the "weak" range. Trying to explain how psychology is a soft science changes nothing in this regard; it's just the rationalization you used to convince yourself (you used "common sense", perhaps).

I jumped into the conversation on your post, so obviously I read it...you know, the one where you dismissively explained to some other poster all the stuff that was not really correct, then told him "Keep trying!" in a very "lacking self-awareness" kind of way?

"The correction you quoted maintains the studies results are significant" is incorrect and disingenuously so, for the reasons already discussed. They specifically backed off on the statistical term to replace it with something that was subjective and ergo cannot really be argued against further. You know it, I know it, and anyone following this carefully enough knows it.

Just taking your lumps and moving on would have been the smarter play...this same thing happened to you last time you got all blustery on some other thread. To be clear, if you hadn't been sooo condescending to the other poster (not going to drag him into this) while also being sooo wrong (on that post, and yes, on some previous posts), I would have either not posted at all or been less harsh (not that I did any name-calling, in spite of apparently being a dunce?). You reap what you sow.

DiogenesDue
ibrust wrote:

Well this is a bit of a tangent, but one thing worth pointing out... is that the ability to practice, i.e. the ability to sit there for long hours and focus your attention on one task - in this case a primarily mental one - could be closely related to, or even be an aspect of, intelligence. And I say intelligence, rather than IQ, because IQ tests how a person responds to novelty in a very short time window. However, when you really look at the great mental achievements people make... usually they're the results of years and years of intense mental effort. But even IQ is known to correlate closely with the strength of the nervous system. So when you start controlling for hours of practice, trying to isolate IQ, you may be excluding some important aspect of intelligence in the process.

Maybe you should shelve the pop psychology and do a meta-analysis? Imagine the good you could do the world...imagine real hard.

Have a good evening.

crazedrat1000
DiogenesDue wrote:
 

I just explained to you that Dancey and Reidy *is* the de facto psychology correlation scale, and that they consider 0.1 to 0.3 a weak correlation. You can't gloss over it or pretend it's not there...you were just flat-out wrong when you said it was moderate; in fact, 0.22 isn't even close, it's almost dead center of the "weak" range. Trying to explain how psychology is a soft science changes nothing in this regard; it's just the rationalization you used to convince yourself (you used "common sense", perhaps).

Emory University is one of the premier health and humanities universities... I think it's also in the top 25 of the national university rankings. On their psychology department's website they have a link discussing the interpretation of effect sizes in psychology:

Correlation (emory.edu)

In psychological research, we use Cohen's (1988) conventions to interpret effect size. A correlation coefficient of .10 is thought to represent a weak or small association; a correlation coefficient of .30 is considered a moderate correlation; and a correlation coefficient of .50 or larger is thought to represent a strong or large correlation.

This is the convention they cite... it is directly contrary to your claim that 0.1-0.3 is weak, i.e. that 0.2 is in the dead center of the "weak" range. What I said originally was that 0.24 was moderate. Well, my claim is much closer to Cohen's convention than yours. Now, as you've pointed out, the correlation is reduced to 0.22... now we probably have to say the effect is mild-to-moderate.
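For concreteness, here is the Cohen (1988) convention quoted above written out as code, reading each cutoff as the start of its band - which is precisely the interpretation the two posters are disputing, so treat this as one reading, not a ruling:

```python
def cohen_label(r: float) -> str:
    """Label |r| with the Cohen (1988) anchors quoted above
    (.10 small, .30 moderate, .50 large), treating each anchor
    as the START of its band -- the contested reading."""
    r = abs(r)
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "moderate"
    if r >= 0.10:
        return "small"
    return "negligible"

print(cohen_label(0.24), cohen_label(0.22))  # both fall in the .10-.30 band
```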

So Cohen wrote this book back in 1988; this is apparently a longstanding convention in psychology, according to the professors at Emory. It's a convention that actually makes sense within the field of psychology... On the other hand, from you I'm just hearing this rote insistence that some other specific math book is the gold standard and defines things differently, according to you... you're making comments about social science conventions you don't even appear qualified to make. I'm assuming you own this math book, but I've seen no actual citation of it on this issue from you, so I can't interpret what it's saying contextually... I also see no intelligent reasoning or interpretation of the practice of social psychology provided by you... so no, I don't think I acknowledge this appeal to convention that you're making. Everything I see on the web is saying moderate is about 0.3; this is what makes sense, what I was taught at university when I took statistics in psychology... and what Emory / Cohen go with. Clearly Cohen's work is a longstanding convention; at best the argument is a wash, and at worst you're either misreading... or misinterpreting, or stretching the truth, or just full of it, who knows.

Anyway, I find it hilarious that you're doubling down on what the math books say as if this is the most critical point... you're still missing it - I'm not even really hinging the point on this; in other fields, certainly, 0.2 is a fairly weak correlation. Anyway, what I'm really describing is how social science is practiced. This is what's relevant, because you're presuming to do social science. A math book is generally not going to presume to teach very much about the practical aspects of social science to students; it's going to leave that for other courses and mostly just teach students the math itself. Anyone who's majored in or worked in a STEM field will tell you there is a world of difference between the book learning they got in school and how their field is actually practiced... have you ever worked in a STEM field...? I've worked in two separate STEM fields, engineering and social science. What I'm telling you is social science is a field made up of 0.2 correlations, and so within the field you cannot just dismiss such a correlation or consider it trivial; you would be left unable to do social science. Your study, which yielded an effect of r=0.22, should not be ignored by a serious social scientist. That is the important point; you ignored it, but I'm repeating it again for you. And the social scientists who wrote that correction don't disregard the effect; they state explicitly that the conclusion of the study remains unchanged.

In reality an effect in social science is almost always predicted by at least 1-2 dozen core factors. So you end up with a complex equation for predicting your effect, one which has 1-2 dozen inputs and produces the effect as the output. The core factors are usually found through some broad analysis of hundreds of traits, and then through a process of factor analysis. So in our conversation what we'd say is that IQ is one of 1-2 dozen factors that predict chess-playing ability, along with many other factors such as hours practiced, studying habits, the age at which the person first learned, the person's sex, probably their current age, and half a dozen other factors you could probably identify. That's how this works in reality.
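A toy version of the "many weak factors" model described above, with invented weights and simulated data, just to show how a dozen individually weak predictors can jointly explain much more variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 12  # 12 HYPOTHETICAL factors (IQ, hours practiced, start age, ...)

X = rng.normal(size=(n, k))        # independent, standardized factors
w = np.full(k, 0.2)                # each factor given a deliberately weak weight
y = X @ w + rng.normal(0, 1.0, n)  # outcome, e.g. a standardized chess rating

# Each factor alone correlates only weakly with the outcome...
r_single = [np.corrcoef(X[:, j], y)[0, 1] for j in range(k)]
print("average single-factor r:", round(float(np.mean(r_single)), 2))  # ~0.16

# ...but jointly the factors explain far more variance (R^2 via least squares).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.var(y - X @ beta) / np.var(y)
print("joint R^2:", round(float(r2), 2))  # ~0.32 with these made-up numbers
```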

The other thing I will mention, yet again, is that in statistics the correlation r needed for significance and the significance threshold p are tied together: the stricter the threshold, the stronger the correlation must be to clear it at a given sample size. So when an effect clears p=0.001, an extremely conservative threshold, it becomes quite meaningless to squabble over whether the correlation r is moderate or mild-to-moderate. At a conventional threshold of p=0.05 or 0.01 the same effect would clear the bar with enormous room to spare; it's quite a meaningless thing you're squabbling over. But you don't really realize how meaningless it is, because again, you're not a social scientist.
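That r-versus-p interplay can be made concrete: at a fixed sample size, the minimum correlation that reaches significance shrinks as the threshold is relaxed. A sketch, with a hypothetical n (the thread never states the pooled sample size):

```python
import math
from scipy import stats

n = 500  # HYPOTHETICAL sample size, for illustration only
df = n - 2

# Minimum |r| reaching two-tailed significance at level alpha:
# invert t = r * sqrt(df) / sqrt(1 - r^2)  =>  r = t / sqrt(t^2 + df)
for alpha in (0.05, 0.01, 0.001):
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    r_crit = t_crit / math.sqrt(t_crit**2 + df)
    print(f"alpha = {alpha:<5}  minimum significant |r| = {r_crit:.3f}")
# At n = 500: ~0.088, ~0.115, ~0.147 -- so r = 0.22 clears even p < .001.
```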

DiogenesDue
ibrust wrote:

Emory University is one of the premier health and humanities universities... I think it's also in the top 25 of the national university rankings. On their psychology department's website they have a link discussing the interpretation of effect sizes in psychology:

Correlation (emory.edu)

In psychological research, we use Cohen's (1988) conventions to interpret effect size. A correlation coefficient of .10 is thought to represent a weak or small association; a correlation coefficient of .30 is considered a moderate correlation; and a correlation coefficient of .50 or larger is thought to represent a strong or large correlation.

This is the convention they cite... it is directly contrary to your claim that 0.1-0.3 is weak, i.e. that 0.2 is in the dead center of the "weak" range. What I said originally was that 0.24 was moderate. Well, my claim is much closer to Cohen's convention than yours. Now, as you've pointed out, the correlation is reduced to 0.22... now we probably have to say the effect is mild-to-moderate.

So Cohen wrote this book back in 1988; this is apparently a longstanding convention in psychology, according to the professors at Emory. It's a convention that actually makes sense within the field of psychology... On the other hand, from you I'm just hearing this rote insistence that some other specific math book is the gold standard and defines things differently, according to you... you're making comments about social science conventions you don't even appear qualified to make. I'm assuming you own this math book, but I've seen no actual citation of it on this issue from you, so I can't interpret what it's saying contextually... I also see no intelligent reasoning or interpretation of the practice of social psychology provided by you... so no, I don't think I acknowledge this appeal to convention that you're making. Everything I see on the web is saying moderate is about 0.3; this is what makes sense, what I was taught at university when I took statistics in psychology... and what Emory / Cohen go with. Clearly Cohen's work is a longstanding convention; at best the argument is a wash, and at worst you're either misreading... or misinterpreting, or stretching the truth, or just full of it, who knows.

Lol. So...you never bothered to click any of the links I posted, did you? If you had, you would have seen this already:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6107969/table/tbl1/?report=objectonly

Your listed values are 100% accurate...you just interpreted them poorly. 0.3 and everything below it, down to 0.1, is "weak"; above it is "moderate". So if you read a source telling you that the cutoff for weak is 0.1 and for moderate is 0.3, why on earth would you assume that 0.3 is the middle of the moderate range and not the minimum/start of the range? Does that sound like a precise, scientific way to interpret what you posted?

Anyway, I find it hilarious that you're doubling down on what the math books say as if this is the most critical point... you're still missing it - I'm not even really hinging the point on this; in other fields, certainly, 0.2 is a fairly weak correlation. Anyway, what I'm really describing is how social science is practiced. This is what's relevant, because you're presuming to do social science. A math book is generally not going to presume to teach the practical aspects of social science to students; it's going to leave that for other courses and just teach students the math itself. Anyone who's majored in or worked in a STEM field will tell you there is a world of difference between the book learning they got in school and how their field is actually practiced... have you ever worked in a STEM field...?

Math books...?

Yes, I worked in a STEM field, managing 55 developers and managers, some of whom were pretty much like you, or so I would assume based on your Stack Exchange postings. I noticed that you did the same thing there as you do here (right down to "carry onward!")...you ran in and posted a long diatribe...unfortunately it was on a long-dead thread and nobody saw it except the admin who kindly told you how old the thread was.

I've worked in two separate STEM fields, engineering and social science. What I'm telling you is social science is a field made up of 0.2 correlations, and so within the field you cannot just dismiss such a correlation or consider it trivial; you would be left unable to do social science. Your study, which yielded an effect of r=0.22, should not be ignored by a serious social scientist. That is the important point; you ignored it, but I'm repeating it again for you. And the social scientists who wrote that correction don't disregard the effect; they state explicitly that the conclusion of the study remains unchanged.

Then why not leave "significant" right where it was? Why replace it with "meaningful" at all?

"If you gave Lt. Kendrick an order, and your orders are always followed, why would Santiago be in any danger...?"

In reality an effect in social science is almost always predicted by at least 1-2 dozen core factors. So you end up with a complex equation for predicting your effect, one which has 1-2 dozen inputs and produces the effect as the output. The core factors are usually found through some broad analysis of hundreds of traits, and then through a process of factor analysis. So in our conversation what we'd say is that IQ is one of 1-2 dozen factors that predict chess-playing ability, along with many other factors such as hours practiced, studying habits, the age at which the person first learned, the person's sex, probably their current age, and half a dozen other factors you could probably identify. That's how this works in reality.

No, "we" wouldn't say that. There's a reason social science is considered a soft science, and there's a reason why IQ has not been considered a good measurement of intelligence for the past few decades. If your job habitually calls on you to accept 0.2 correlations (and among sets of "dozens of factors", no less) as strong enough to act upon with "meaningful" resources, then I submit that you are probably just doing misguided busywork and wasting either taxpayers' or shareholders' $$$.

Note that I am not saying social science related work is not needed/necessary, but that the mess you described is no way to determine where/how to apply resources.