I just explained to you that Dancey and Reidy *is* the de facto psychology correlation scale, and that they consider 0.1 to 0.3 a weak correlation. You can't gloss over it or pretend it's not there... you were just flat out wrong when you said it was moderate. In fact, 0.22 isn't even close; it's almost dead center of the "weak" range. Trying to explain how psychology is a soft science changes nothing in this regard; it's just the rationalization you used to convince yourself (you used "common sense", perhaps).
Emory University is one of the premier health and humanities universities... I think it's also ranked in the top 25 of national universities. On their psychology department's website they have a page discussing the interpretation of effect sizes in psychology:
"In psychological research, we use Cohen's (1988) conventions to interpret effect size. A correlation coefficient of .10 is thought to represent a weak or small association; a correlation coefficient of .30 is considered a moderate correlation; and a correlation coefficient of .50 or larger is thought to represent a strong or large correlation."
This is the convention they cite... and it is directly contrary to your claim that 0.1-0.3 is weak, i.e. that 0.2 sits dead center of the "weak" range. What I said originally was that 0.24 was moderate, and my claim is much closer to Cohen's convention than yours. Now, as you've pointed out, the correlation is reduced to 0.22... so we probably have to call the effect mild-to-moderate.
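To make the two readings concrete, here's a toy sketch (Python; the .10/.30/.50 anchors are Cohen's, but the nearest-anchor rule is just my own way of formalizing the reading, not something Cohen states) of why 0.22 lands on the moderate side:

```python
# Toy sketch: classify a correlation by whichever of Cohen's (1988)
# conventional anchor values it sits closest to. The nearest-anchor
# rule is my own construction for illustration.
COHEN_ANCHORS = {0.10: "weak", 0.30: "moderate", 0.50: "strong"}

def nearest_anchor_label(r: float) -> str:
    anchor = min(COHEN_ANCHORS, key=lambda a: abs(abs(r) - a))
    return COHEN_ANCHORS[anchor]

print(nearest_anchor_label(0.22))  # 'moderate': 0.08 from .30 vs 0.12 from .10
print(nearest_anchor_label(0.24))  # 'moderate'
```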
So Cohen wrote this book back in 1988; according to the professors at Emory, this is a longstanding convention in psychology, and it's a convention that actually makes sense within the field... On the other hand, from you I'm just hearing rote insistence that some other specific math book is the gold standard and defines things differently... you're making comments about social science conventions you don't even appear qualified to make. I'm assuming you own this math book, but I've seen no actual citation of it on this issue from you, so I can't interpret what it's saying in context... I also see no intelligent reasoning about, or interpretation of, the practice of social psychology from you... so no, I don't think I acknowledge this appeal to convention that you're making. Everything I see on the web says moderate is about 0.3; that's what makes sense, what I was taught at university when I took statistics in psychology, and what Emory and Cohen go with. Clearly Cohen's work is a longstanding convention. At best the argument is a wash, and at worst you're misreading, misinterpreting, stretching the truth, or just full of it, who knows.
Anyway, I find it hilarious that you're doubling down on what the math books say as if this is the most critical point... you're still missing it: I'm not even really hinging the point on this, and in other fields 0.2 certainly is a fairly weak correlation. What I'm really describing is how social science is practiced. That's what's relevant, because you're presuming to do social science. A math book is generally not going to presume to teach very much about the practical aspects of social science; it will leave that for other courses and mostly just teach students the math itself. Anyone who's majored in or worked in a STEM field will tell you there is a world of difference between the book learning they got in school and how their field is actually practiced... have you ever worked in a STEM field? I've worked in two, engineering and social science. What I'm telling you is that social science is a field made up of 0.2 correlations, so within the field you cannot just dismiss such a correlation or consider it trivial; you would be left unable to do social science at all. Your study, which yielded an effect of r = 0.22, should not be ignored by a serious social scientist. That is the important point; you ignored it, but I'm repeating it again for you. And the social scientists who wrote that correction don't disregard the effect: they state explicitly that the conclusion of the study remains unchanged.
In reality, an effect in social science is almost always predicted by at least one to two dozen core factors. So you end up with a complex equation for predicting your effect, one which takes those one to two dozen inputs and produces the effect as the output. The core factors are usually found through some broad analysis of hundreds of traits, followed by a process of factor analysis. So in our conversation, what we'd say is that IQ is one of the one to two dozen factors that predict chess-playing ability, along with many others such as hours practiced, study habits, the age at which the person first learned, the person's sex, probably their current age, and half a dozen more you could probably identify. That's how this works in reality.
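Here's a hypothetical sketch of what that kind of multi-factor model looks like (Python; every variable, coefficient, and number below is invented purely for illustration, not taken from any real study):

```python
import numpy as np

# Invented example: chess rating predicted from several factors at once,
# with IQ as just one input among many.
rng = np.random.default_rng(0)
n = 500

iq             = rng.normal(100, 15, n)
hours_practice = rng.gamma(2.0, 500.0, n)   # lifetime practice hours
age_learned    = rng.integers(5, 30, n)     # age first learned the game
current_age    = rng.integers(10, 70, n)
is_male        = rng.integers(0, 2, n)

# Invented "true" model: rating driven by several factors plus noise.
rating = (1000 + 4*iq + 0.5*hours_practice - 10*age_learned
          - 2*current_age + 30*is_male + rng.normal(0, 150, n))

# Fit the multi-factor linear model by ordinary least squares.
X = np.column_stack([np.ones(n), iq, hours_practice,
                     age_learned, current_age, is_male])
coefs, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(dict(zip(["intercept", "iq", "hours", "age_learned",
                "current_age", "is_male"], coefs.round(2))))
```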
The other thing I will mention, yet again, is that in statistics the correlation coefficient r and the p-value measure different things: r is the effect size, while p tells you how reliably the effect was detected, and for a given sample size a larger r produces a smaller p. So when an effect comes in at p = 0.001, an extremely conservative significance level, it becomes quite meaningless to squabble over whether the correlation is moderate or mild-to-moderate. The point estimate of 0.22 carries a confidence interval around it, and depending on the sample size that interval can easily extend past 0.3 into moderate territory, so the label is quite a meaningless thing to squabble over. But you don't really realize how meaningless it is, because again, you're not a social scientist.
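A quick sketch of the arithmetic (Python; note the sample size is my assumption, chosen so that r = 0.22 comes out near p = 0.001, since n was never stated in this thread):

```python
import numpy as np
from scipy import stats

# Sketch of the r-vs-p relationship. n is an assumption, picked so
# that r = 0.22 lands near p = 0.001.
r, n = 0.22, 216

# p-value for the correlation via the t distribution.
t = r * np.sqrt((n - 2) / (1 - r**2))
p = 2 * stats.t.sf(t, df=n - 2)
print(f"p = {p:.4f}")                   # ~0.001 with this n

# 95% confidence interval via the Fisher z transform.
z = np.arctanh(r)
se = 1 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96*se), np.tanh(z + 1.96*se)
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")  # upper end crosses 0.30
```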
I think it would be 125 - 175
Lol. So...you never bothered to click any of the links I posted, did you? If you had, you would have seen this already:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6107969/table/tbl1/?report=objectonly
Your listed values are 100% accurate... you just interpreted them poorly. 0.3 and everything below it, down to 0.1, is "weak". Above that is "moderate". So if you read a source telling you that the cutoff for weak is 0.1 and for moderate is 0.3, why on earth would you assume that 0.3 is the middle of the moderate range and not its minimum/start? Does that sound like a precise, scientific way to interpret what you posted?
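If you want it spelled out, here's a trivial sketch (Python, my own construction) of that plain reading, with each cutoff treated as the floor of its band:

```python
# Plain reading of the cutoffs: each value is where a band STARTS,
# not its midpoint. My own sketch, for illustration only.
def band(r: float) -> str:
    r = abs(r)
    if r >= 0.5: return "strong"
    if r >= 0.3: return "moderate"
    if r >= 0.1: return "weak"
    return "negligible"

print(band(0.22))  # 'weak' -- below the 0.3 floor where 'moderate' begins
```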
The Emory site simply doesn't say that, you did. What it says is that 0.1 is a mild correlation and 0.3 is moderate. And I think they're speaking loosely, if you want to know the truth, because they aren't imagining this as some discrete mechanical scale the way that you are. If 0.3 is moderate, 0.29 is not going to be mild; it's going to be "basically moderate". The scale is continuous and these are subjective descriptors, not notches on some discrete mechanical scale. If you really want to interpret the strength of a correlation, you need to read it in context alongside other research on the same subject, considering the design of the study, the thing you're measuring, the question you're asking, and so on. There is no official formula that will give you a precise answer. That's really the truth of it; I don't even think they thought about this scale in the way that you are.
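One line of code shows the problem with any hard-cutoff reading (again a sketch, my own construction):

```python
# A hard cutoff forces a label flip on a 0.01 change in r, which is
# exactly why these descriptors can't be read mechanically.
def band(r: float) -> str:
    return "moderate" if abs(r) >= 0.3 else "weak"

print(band(0.29), band(0.30))  # weak moderate
```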
In either case, what we have is a misalignment between your source and mine, so the debate is a wash. I really think Cohen gives a much more useful and practical scale for use within social science, because it corresponds to how psychology is actually practiced in the real world. Your source is just a linear scale; it looks like it was constructed without much thought or any real consideration for practical interpretation. It's a mindless linear progression, making basically no attempt to interpret the correlation within the context of psychology, which is what I've been pointing out the need to do all along. Obviously the people at Emory agree with me. But you can stick with your little linear scale if you want; it's not too essential.
Btw, you missed a paragraph in the preceding post; you should address that.
I like my little one-liners at the end; they tend to get under the skin of internet egomaniacs such as yourself, and that's something I do enjoy.
Keep trying