Thanks Elroch. Glad you are on our side !!
The Chess Mine and why they should be boycotted

Much respect for exposing this! I didn't see it coming at all; I always figured their success was purely due to having the top VC members and nothing more.

Doing a more thorough analysis now, having pruned all moves found in the master database. (The default is to ignore the first 10 moves by each side, which normally works fine.) Interestingly, there are quite a few games that left theory before move 10, so those contribute some extra moves to analyse.
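In case it helps anyone reproduce this, here is a minimal sketch of the difference between the two pruning rules (my own illustration, not the actual tool's code; the function and variable names are made up):

    def analysable_moves_fixed_cutoff(game_moves, cutoff_plies=20):
        # Default-style rule: skip the first 10 moves by each side (20 plies).
        return game_moves[cutoff_plies:]

    def analysable_moves_book_pruned(game_moves, book_positions):
        # Stricter rule: skip moves while the position is still found in a
        # reference database of master games; once the game leaves "theory",
        # every later move is analysed, even if that happens before move 10.
        analysable = []
        in_book = True
        for position_fen, move in game_moves:
            if in_book and position_fen in book_positions:
                continue
            in_book = False
            analysable.append((position_fen, move))
        return analysable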

Here is the post to the Chess Mine explaining the improved analysis.
The run has now completed and, as I guessed from having done this with other datasets, trimming all database moves didn't make a huge difference (e.g. T1 and T3 rose a little - i.e. slightly closer to the engine - and T2 fell a little; no change of any significance). It meant a few games got dropped, but the total number of moves analysed increased (because a lot of games left the opening book before move 10). The overall conclusion is the same: such stats are not achievable by unassisted humans.
Here is the summary.
The Chess Mine, 79 games
UNDECIDED POSITIONS
Positions: 961
T1: 308/530; 58.11% (std error 2.14)
T2: 266/318; 83.65% (std error 2.07)
T3: 237/260; 91.15% (std error 1.76)
=0 CP loss: 756/961; 78.67% (std error 1.32)
>0 CP loss: 205/961; 21.33% (std error 1.32)
>10 CP loss: 112/961; 11.65% (std error 1.04)
>25 CP loss: 27/961; 2.81% (std error 0.53)
>50 CP loss: 6/961; 0.62% (std error 0.25)
>100 CP loss: 1/961; 0.10% (std error 0.10)
>200 CP loss: 0/961; 0.00% (std error 0.00)
>500 CP loss: 0/961; 0.00% (std error 0.00)
CP loss mean 3.14, std deviation 9.01
LOSING POSITIONS
Positions: 0
Here is a graph of the cumulative CP loss in reverse chronological order. This is the average discrepancy in centipawns per move (according to a certain chess engine) between our choice and the top computer choice in all unclear positions. The first point on the graph uses only the most recent game, and each later point incorporates progressively more games, going back to the creation of the group.
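For anyone who wants to reproduce the curve, here is a minimal sketch of how such a cumulative average can be computed (my own illustration, assuming each game has already been reduced to a list of per-move CP losses in unclear positions; the names are made up, not the tool's API):

    def cumulative_cp_loss(games_newest_first):
        # games_newest_first: list of lists of CP losses, most recent game first.
        # Returns one point per game: the mean CP loss over that game and every
        # game played after it (point 1 = most recent game only, point 2 = two
        # most recent games, ..., final point = the whole history).
        points = []
        total_loss = 0.0
        total_moves = 0
        for cp_losses in games_newest_first:
            total_loss += sum(cp_losses)
            total_moves += len(cp_losses)
            points.append(total_loss / total_moves if total_moves else 0.0)
        return points

    # e.g. cumulative_cp_loss([[0, 4, 0], [12, 0], [0, 0, 7, 3]])
    # -> [1.33..., 3.2, 2.88...]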
We don't have any magic ability to detect when one move has a 3.12 centipawn higher evaluation than another, when Carlsen can only manage 6.65 centipawns and even the super-GM tournament with the strongest stats on record came in at 6.85 centipawns. We would need to be around 3400 OTB rating, I estimate.

If anyone is interested, here are our own recent stats for comparison. Pretty good, but not up to Carlsen's level, which seems reasonable. His numbers are pretty much spot on the established threshold for human play for T1/T2/T3 - roughly 50/70/80 (see his recent stats above).
Intellectual Chess Players, 38 games
UNDECIDED POSITIONS
Positions: 562
T1: 153/357; 42.86% (std error 2.62)
T2: 146/223; 65.47% (std error 3.18)
T3: 137/180; 76.11% (std error 3.18)
=0 CP loss: 350/562; 62.28% (std error 2.04)
>0 CP loss: 212/562; 37.72% (std error 2.04)
>10 CP loss: 155/562; 27.58% (std error 1.89)
>25 CP loss: 99/562; 17.62% (std error 1.61)
>50 CP loss: 45/562; 8.01% (std error 1.14)
>100 CP loss: 16/562; 2.85% (std error 0.70)
>200 CP loss: 3/562; 0.53% (std error 0.31)
>500 CP loss: 1/562; 0.18% (std error 0.18)
CP loss mean 14.60, std deviation 39.13
LOSING POSITIONS
Positions: 21
T1: 2/6; 33.33% (std error 19.25)
T2: 2/3; 66.67% (std error 27.22)
T3: 2/2; 100.00% (std error 0.00)
=0 CP loss: 14/21; 66.67% (std error 10.29)
>0 CP loss: 7/21; 33.33% (std error 10.29)
>10 CP loss: 7/21; 33.33% (std error 10.29)
>25 CP loss: 7/21; 33.33% (std error 10.29)
>50 CP loss: 5/21; 23.81% (std error 9.29)
>100 CP loss: 4/21; 19.05% (std error 8.57)
>200 CP loss: 3/21; 14.29% (std error 7.64)
>500 CP loss: 2/21; 9.52% (std error 6.41)
CP loss mean 96.76, std deviation 219.00

The Chess Mine has been heavily influenced by engine analysis for its entire existence, with its games dominated by such analysis (despite some players posting honest human analysis). Its standard of play is, as a result, similar to that of a strong engine on its own.
By contrast, our record is consistent with purely human input (there is no way to be sure, but I would hope it is 100% human!). As a result, our play is pretty strong but not super-GM level (I'm not exactly sure whether it is IM or FM level, but it's around there).
Here is what someone who co-ordinates with the chess.com cheat detection team says:
"But yeah, those numbers don't really surprise me given what I've heard and seen about them in the past. It's pretty near pure engine. Possibly even better than pure engine - it might take a decent centaur player to beat them."

To bring up to date those members who have not seen what has happened in the Chess Mine: @Ponz111 has chosen to leave, vigorously protesting that he was not cheating (or maybe it was that he had not cheated) despite not having been accused, and there is hope the play might become more consistent with purely human input. No rush to jump to conclusions, though.

I would not rule out Ponz as a cheater, as he organised the team's chess and he was right every time... Who else?

Elroch, at what point would you have enough data to repeat the analysis only on games since the cheating came to light? I for one would be very interested (and concerned) to see that.

Sorry, only just noticed this.
I noticed that the tiny sample of moves in the most recent couple of Chess Mine games was not at all suspicious (very different to the long-term stats), but it takes a lot of games to give such an observation any robustness, because of random variation.
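To put a rough number on that: the standard errors in the tables above appear to follow the usual binomial formula, sqrt(p*(1-p)/n), and that error shrinks only slowly as moves accumulate. A quick illustrative calculation (my own, not part of the tool):

    from math import sqrt

    def match_rate_std_error_pct(hits, tries):
        # Standard error, in percentage points, of an observed match rate hits/tries.
        p = hits / tries
        return 100 * sqrt(p * (1 - p) / tries)

    print(round(match_rate_std_error_pct(308, 530), 2))  # ~2.14, as in the T1 line above
    print(round(match_rate_std_error_pct(12, 20), 2))    # ~10.95 for a ~20-position sample

So a couple of recent games, contributing maybe 20 close-choice positions, could look unsuspicious purely by chance; error bars of roughly 11 percentage points mean little on their own.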

In this table, is T1 a threshold, or a time, or is it the top three moves under consideration?
T1: 308/530; 58.11% (std error 2.14)
T2: 266/318; 83.65% (std error 2.07)
T3: 237/260; 91.15% (std error 1.76)
And then I feel like I am almost catching on with the percentages, but I'm somehow failing to grasp where the numerator and denominator in each fraction come from. Please give details! Thanks!
As a result of trying to defend the Chess Mine's record to @petitbonom after a challenge was denied, I have analysed all of the Chess Mine's games using @MGleason's tool for the purpose of identifying suspect players. I wish I had done this earlier - the results are starkly indicative of substantial use of engines over the entire 85 games. I'm not going to stay in the group, and have explained why in their forum.
Here are my posts there:
Recently, I heard about the Chess Mine having a challenge declined on the basis that the other team had strong suspicions that this team's moves were influenced by engine output. I disputed this claim, as the evidence provided was statistically inadequate, and this led me to analyse this group's games in the same way as I have many times for daily chess players (several of whom have since been booted for fair play violations after chess.com did much more thorough analysis).
So, what does an analysis of the entire play of this group say? Here is the summary output of @MGleason's program for identifying suspect players for reporting, using a well-known chess engine with a modest amount of time to calculate.
It is very important to note that only unclear positions after the opening are included in these analyses. Also, the T1-T2-T3 stats cover only the smaller number of positions where there are enough moves with computer evaluations close to each other for there to be a genuinely close choice - i.e. multiple moves within half a pawn of each other (often much less) according to the engine.
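In case the numerators and denominators are unclear, here is a sketch of how I understand the T-numbers to be assembled (my reading of the tool's output, so the exact rule and threshold may differ slightly): a position only enters the T_k denominator when there is a genuinely close choice, i.e. at least k+1 moves within roughly half a pawn of the best, and it enters the numerator when the move played is one of the engine's top k choices. The =0 and >N CP loss lines, by contrast, use every unclear position.

    CLOSE_CP = 50  # assumed closeness threshold: half a pawn, in centipawns

    def t_stat(positions, k):
        # positions: list of (moves_best_first, evals_cp_best_first, played_move)
        hits = tries = 0
        for moves, evals, played in positions:
            close = [e for e in evals if evals[0] - e <= CLOSE_CP]
            if len(close) < k + 1:
                continue              # no close choice here, so not counted for T_k
            tries += 1                # denominator: close-choice positions
            if played in moves[:k]:
                hits += 1             # numerator: matched a top-k engine move
        return hits, tries, (100 * hits / tries) if tries else 0.0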
The Chess Mine, 85 games
UNDECIDED POSITIONS
Positions: 884
T1: 279/488; 57.17% (std error 2.24)
T2: 248/288; 86.11% (std error 2.04)
T3: 210/232; 90.52% (std error 1.92)
=0 CP loss: 685/884; 77.49% (std error 1.40)
>0 CP loss: 199/884; 22.51% (std error 1.40)
>10 CP loss: 88/884; 9.95% (std error 1.01)
>25 CP loss: 25/884; 2.83% (std error 0.56)
>50 CP loss: 3/884; 0.34% (std error 0.20)
>100 CP loss: 0/884; 0.00% (std error 0.00)
>200 CP loss: 0/884; 0.00% (std error 0.00)
>500 CP loss: 0/884; 0.00% (std error 0.00)
CP loss mean 2.92, std deviation 8.29
LOSING POSITIONS
Positions: 0
For comparison, here is the analysis of 94 recent standard time control games by a guy called Magnus Carlsen.
Carlsen, Magnus, 94 games
UNDECIDED POSITIONS
Positions: 1513
T1: 608/1212; 50.17% (std error 1.44)
T2: 722/1046; 69.02% (std error 1.43)
T3: 769/949; 81.03% (std error 1.27)
=0 CP loss: 1053/1513; 69.60% (std error 1.18)
>0 CP loss: 460/1513; 30.40% (std error 1.18)
>10 CP loss: 269/1513; 17.78% (std error 0.98)
>25 CP loss: 126/1513; 8.33% (std error 0.71)
>50 CP loss: 37/1513; 2.45% (std error 0.40)
>100 CP loss: 5/1513; 0.33% (std error 0.15)
>200 CP loss: 1/1513; 0.07% (std error 0.07)
>500 CP loss: 1/1513; 0.07% (std error 0.07)
CP loss mean 6.65, std deviation 21.56
LOSING POSITIONS
Positions: 20
T1: 5/10; 50.00% (std error 15.81)
T2: 5/9; 55.56% (std error 16.56)
T3: 8/8; 100.00% (std error 0.00)
=0 CP loss: 11/20; 55.00% (std error 11.12)
>0 CP loss: 9/20; 45.00% (std error 11.12)
>10 CP loss: 7/20; 35.00% (std error 10.67)
>25 CP loss: 5/20; 25.00% (std error 9.68)
>50 CP loss: 4/20; 20.00% (std error 8.94)
>100 CP loss: 3/20; 15.00% (std error 7.98)
>200 CP loss: 2/20; 10.00% (std error 6.71)
>500 CP loss: 0/20; 0.00% (std error 0.00)
CP loss mean 47.80, std deviation 99.01
What do these stats (and their comparison) say?
Well, the bottom line is that the play of this group has been a _lot_ closer to that of a pure engine in unclear positions than that of the world champion. Note that the same is also true relative to benchmarks based on world correspondence chess champions from before the engine era, whose play is no more engine-like than Carlsen's OTB chess.
For example, in unclear positions, the computer evaluation of this group's moves was on average less than 0.03 pawns worse than the engine's choice. This is extraordinarily small; no human dataset comes close to it. The idea that any humans can detect positional differences twice as accurately (according to an engine) as the world champion is kind of ridiculous - the mean CP loss of 2.92 here against Carlsen's 6.65 is a factor of more than two.
The team's move matched one of the top two engine choices - in positions where there are at least three candidate moves within 0.5 pawns - over 86% of the time, while Carlsen manages this nearly 70% of the time. The move chosen had exactly the same evaluation as the engine's choice 77% of the time, while Carlsen manages this 69% of the time.
Bottom line: I can't escape the conclusion, with very high confidence, that the past results of this group have been heavily influenced by engine assistance, and can therefore be assumed to be to a large extent due to that assistance.
It is possible that this is only a past problem, but that just isn't enough for me. I'm not interested in being part of a group with this history, and I am sorry not to have done this analysis earlier - ideally before I considered joining in the first place.
I would point out that, in principle, it is possible for anyone on chess.com to check each of the moves that contribute to the relevant statistics to see who influenced the choice of that move, and thereby get a pretty clear idea of which players have been the source of the illicit assistance.