CP6033

Looks to me like pretty soon every player with a rating above 2000 in Daily Chess is going to be accused of cheating. The argument seems to be that you must be cheating or else your rating would be lower. Again I come back to "ducking" as used to catch witches, which worked like this:
"The victim's right thumb was bound to left toe. A rope was attached to her waist and the 'witch' was thrown into a river or deep pond. If the 'witch' floated it was deemed that she was in league with the devil, rejecting the 'baptismal water'. If the 'witch' drowned she was deemed innocent."
In our context, if you win a game then you are, by definition, cheating, so you get burned at the stake. If you lose then bad luck, you're dead, but at least you didn't cheat.
I can see why engine matchup rates could be seen as evidence of cheating AS LONG AS the benchmarks are set correctly, and that last part remains to be proved, IMO. The CP loss thing, as I understand it, measures how close you are to the perfect move that the engine sees. I'm trying to get a handle on how that might work by taking a closer look at the figures provided for CP and for dd. Not much to go on but it may give me an idea. I will return!
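One common definition of CP loss - and I stress this is my assumption, not the site's published method - is simply the drop in engine evaluation from before your move to after it. A rough sketch of the idea, assuming python-chess and a local Stockfish binary on the PATH:-

import chess
import chess.engine

def cp_loss_for_move(board: chess.Board, played: chess.Move,
                     engine: chess.engine.SimpleEngine, depth: int = 18) -> int:
    """Centipawns lost by the played move, measured as the evaluation drop."""
    # Evaluate the position before the move, from the mover's point of view.
    before = engine.analyse(board, chess.engine.Limit(depth=depth))
    before_cp = before["score"].pov(board.turn).score(mate_score=10000)

    mover = board.turn
    board.push(played)
    # Evaluate after the move, still from the original mover's point of view.
    after = engine.analyse(board, chess.engine.Limit(depth=depth))
    after_cp = after["score"].pov(mover).score(mate_score=10000)
    board.pop()

    # A move the engine fully agrees with loses ~0 CP; clamp small negative
    # values caused by search noise at different depths.
    return max(0, before_cp - after_cp)

# Example: the cost of 1.a3 from the starting position.
with chess.engine.SimpleEngine.popen_uci("stockfish") as eng:
    print(cp_loss_for_move(chess.Board(), chess.Move.from_uci("a2a3"), eng))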
BTW, why is it assumed that cheating is impossible in fast format chess? Could you not have an engine running while you play?
As far as I can see, CP's various ratings show that he is an all-round strong player of roughly 2000 to 2150 strength. In fact, his chess.com ratings are LOWER than his USCF OTB rating! The problem for the witchfinders is going to be that, if you accept that CP is capable of achieving these engine matchup rates in daily chess WITHOUT cheating, then they will have to admit that the benchmarks they are using are incorrect.
The more I look at this, the more suspicious I am of the methodology.

There's a huge amount of information on engine-match analysis on the site, much of it in the Cheating Forum group, which anyone can join (I'm a member). Although the site doesn't publish its cheat-detection methods, it's accepted that they are of the kind we see above.
I believe the site's system is entirely robust & reliable, but because it aims to be scrupulously fair, it inevitably leaves a proportion of dishonest players free to play. That's the problem with probabilistic detection models: there's no neat & certain line you can draw & say that all those who step over it are cheating. All you can do is aim for such a high degree of confidence that engine use is involved that errors are practically ruled out.
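To illustrate why, here's a toy calculation - entirely my own, with an invented population size - of how many honest players a given z-score threshold would wrongly flag if their scores were normally distributed:-

from math import sqrt, erf

def innocents_flagged(z_threshold: float, honest_players: int) -> float:
    """Expected false accusations at a given one-sided z-score threshold."""
    one_sided_tail = 0.5 * (1 - erf(z_threshold / sqrt(2)))
    return honest_players * one_sided_tail

# Out of a million honest players:
for z in (2.0, 3.0, 4.0, 5.0):
    print(f"z > {z}: ~{innocents_flagged(z, 1_000_000):.1f} wrongly flagged")
# z > 2 flags ~22,750 innocents; z > 3 ~1,350; z > 4 ~32; z > 5 under one.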
But I'm no statistician & I struggle at times to understand exactly what some results mean & what conclusion they should point to.
Joe, the data on the number of 'losing' moves relates (I think) to the engine evaluation in centipawns (CP) of the particular move, not to some comparison with the best engine move for that position. So when it gives the following:-
>500 CP loss: 2/934; 0.21% (std error 0.15)
I interpret that as meaning, of the 934 qualifying(*) moves analysed, 2 moves were evaluated as losing by more than 500 CP or 5 pawns.
* A lot is written on what constitutes a 'Qualifying Move', but they are usually those that are 'out of book', so not in any database & within a certain winning tolerance. This aims to eliminate forced & obvious moves from the analysis.
For example, if you were in a winning endgame & suddenly saw a 10 move forced mate, then any engine should agree with you but it would be perverse to accuse you of being a probable engine user because you followed Houdini for 10 moves!
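Incidentally, the quoted 'std error' figures can be reproduced with the standard binomial formula se = sqrt(p(1-p)/n), which supports that reading of the numbers:-

from math import sqrt

def proportion_with_se(hits: int, n: int) -> tuple[float, float]:
    """Percentage & its binomial standard error for hits out of n moves."""
    p = hits / n
    return 100 * p, 100 * sqrt(p * (1 - p) / n)

print(proportion_with_se(2, 934))    # (0.21..., 0.15...) - the >500 CP line
print(proportion_with_se(490, 934))  # (52.46..., 1.63...) - the T1 line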
When I carry out engine match analysis of our games, I filter out all moves that are evaluated as greater than two pawns in strength. Some cheat detectors also filter T1, T2 & T3 results that vary by more than a certain amount from the next T value, the point being that there should be only a small difference between the evaluations for the results to be useful.
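In code, that kind of filter might look something like the sketch below. The 200 CP winning limit matches my two-pawn cut-off; the 30 CP gap limit is purely illustrative:-

def qualifying(eval_cp: int, top_moves_cp: list[int],
               win_limit: int = 200, gap_limit: int = 30) -> bool:
    """Decide whether a position should count in the engine-match stats.

    eval_cp: engine evaluation of the position, mover's point of view.
    top_moves_cp: evaluations of the engine's top choices, best first.
    """
    if abs(eval_cp) > win_limit:
        return False                 # position already decided either way
    if len(top_moves_cp) >= 2 and top_moves_cp[0] - top_moves_cp[1] > gap_limit:
        return False                 # best move is forced/obvious
    return True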
As for cheating in Live chess, it does take place. It's believed to happen even in the fastest games such as bullet. TT can also be 'massaged'.
I have to agree with Stephen on one thing - we come here to play chess & have fun, not to become mired in all this stuff. It gets a little depressing after a while, especially when you can't reach a useful conclusion about what you're looking at.

For me it comes back to these figures:-
T1: 490/934; 52.46% (std error 1.63)
T2: 736/934; 78.80% (std error 1.34)
T3: 809/934; 86.62% (std error 1.11)
My own:-
UNDECIDED POSITIONS
Positions: 758
T1: 205/551; 37.21% (std error 2.06)
T2: 243/441; 55.10% (std error 2.37)
T3: 257/395; 65.06% (std error 2.40)
I drew my breath in when I saw CP's figures, because most players around 1900/2000 will have numbers quite a bit below those, with a T1 in the 30% range. Having now obtained my own figures (above), it troubles me that CP's are so much higher than mine & also comparable to GM/super-GM standards of play.
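To put a number on that gap, here's a quick two-proportion z-test on the T1 figures - my own back-of-envelope check, nothing to do with the site's method:-

from math import sqrt

def two_prop_z(h1: int, n1: int, h2: int, n2: int) -> float:
    """z-score for the difference between two match-rate proportions."""
    p1, p2 = h1 / n1, h2 / n2
    pooled = (h1 + h2) / (n1 + n2)
    return (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

# CP's T1 (490/934) vs. mine (205/551):
print(two_prop_z(490, 934, 205, 551))   # ~5.7 - far beyond normal variation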
And just when you have enough to deal with, another problem arrives...
https://www.chess.com/member/cp6033
This time it's our old team member 87654321 who's done the analysis. He's pretty thick with hicetnunc - cut from the same block of granite, so to speak!
Daily team match games >30 moves.
Classic T3, up to 850 CP.
T2 above 99.99% confidence levels if 60/70/80 baseline.
26 games
UNDECIDED POSITIONS
Positions: 934
T1: 490/934; 52.46% (std error 1.63)
T2: 736/934; 78.80% (std error 1.34)
T3: 809/934; 86.62% (std error 1.11)
>0 CP loss: 417/934; 44.65% (std error 1.63)
>10 CP loss: 314/934; 33.62% (std error 1.55)
>25 CP loss: 209/934; 22.38% (std error 1.36)
>50 CP loss: 109/934; 11.67% (std error 1.05)
>100 CP loss: 58/934; 6.21% (std error 0.79)
>200 CP loss: 18/934; 1.93% (std error 0.45)
>500 CP loss: 2/934; 0.21% (std error 0.15)
CP loss mean 22.61, std deviation 55.91
LOSING POSITIONS
Positions: 0
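For what it's worth, the '>99.99% confidence' claim for T2 is easy to sanity-check against the middle (70%) baseline with a one-sided z-test - again my own rough check, not the analyst's actual method:-

from math import sqrt, erf

def confidence_vs_baseline(hits: int, n: int, p0: float) -> float:
    """Confidence (%) that the true match rate exceeds the baseline p0."""
    z = (hits / n - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 0.5 * (1 - erf(z / sqrt(2)))   # one-sided normal tail
    return 100 * (1 - p_value)

print(confidence_vs_baseline(736, 934, 0.70))  # comfortably above 99.99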
(My comment) His stats look consistent across all categories of play, with a USCF rating too:-
USCF 2160
Tactics 2486
Bullet 2036
Blitz 2052
Rapid 1921
Lessons 2454
Daily 2146
Chess 960 1907
* I've just been reminded that the T3 figures above come from a classic type of analysis & that it's better to compare the 'baseline' figures in the other thread with filtered results instead. Here are CP's filtered T3 stats:-
"Here's CP6033's filtered T3 using my normal settings. This is not particularly suspicious."
50 games
UNDECIDED POSITIONS
Positions: 1113
T1: 307/788; 38.96% (std error 1.74)
T2: 366/625; 58.56% (std error 1.97)
T3: 403/558; 72.22% (std error 1.90)
>0 CP loss: 487/1113; 43.76% (std error 1.49)
>10 CP loss: 358/1113; 32.17% (std error 1.40)
>25 CP loss: 218/1113; 19.59% (std error 1.19)
>50 CP loss: 100/1113; 8.98% (std error 0.86)
>100 CP loss: 36/1113; 3.23% (std error 0.53)
>200 CP loss: 7/1113; 0.63% (std error 0.24)
>500 CP loss: 3/1113; 0.27% (std error 0.16)
CP loss mean 18.15, std deviation 67.71
LOSING POSITIONS
Positions: 123
T1: 19/57; 33.33% (std error 6.24)
T2: 23/37; 62.16% (std error 7.97)
T3: 17/27; 62.96% (std error 9.29)
>0 CP loss: 62/123; 50.41% (std error 4.51)
>10 CP loss: 52/123; 42.28% (std error 4.45)
>25 CP loss: 43/123; 34.96% (std error 4.30)
>50 CP loss: 33/123; 26.83% (std error 4.00)
>100 CP loss: 20/123; 16.26% (std error 3.33)
>200 CP loss: 9/123; 7.32% (std error 2.35)
>500 CP loss: 3/123; 2.44% (std error 1.39)
CP loss mean 59.65, std deviation 141.38
Those look much better! I think we can rest easy in this case.