After watching the vids, it seems very unlikely that he did not cheat.
Ivanov speaks out!

If he ever enters this site he is going to get it. Me... way, way back and not watching. His screams for help would give me nightmares.

Sounds like they caught on to him when he suddenly started playing computer-strong out of the blue. I noticed Chessbase also mentioned the Clark Smiley cheating scandal in the USCF. In that case, he didn't even have to wear a hidden device: he just pretended to be using a notation app on a handheld, when he was actually getting moves from an engine on it. He managed to cheat for several months, and of course he denied everything right up until the moment he was caught.
Whatever happened to Clark Smiley? Did the USCF impose any kind of ban, or did he just get away with it?
He got banned, though I don't know for how long. It was quite a shock to our team that he had been cheating.

I mentioned this in another thread. Make them change out their clothing in another room and into a standard uniform for playing tournaments. Something along the lines of golf.

I lean towards the opinion that he was cheating. On the other hand, I think anyone who wants to assemble evidence to this end has more work to do. Lilov made a great start with some evidence, but there are more specific things that can be done if someone wants to really nail the analysis.
I think there is a small problem with the idea of comparing his moves to "one of the top two or three Houdini suggestions." The comparison would be more meaningful if we used, for example, "within 3 centipawns of the top suggestion's evaluation." Those are the moves whose evaluations will occasionally swap positions at the top of the list.
But perhaps even more importantly, someone needs to run a large number of different engines at different settings to determine precisely which one was most likely used. Each candidate engine and setting should be tested by giving it the actual amount of time used for each move (as closely as that can be known), with a small overhead added for communication.
The point being that as convincing as the current evidence is, it would be completely damning if someone could show a 99-100% correlation given specific parameters. For example, imagine being able to say:
Using Houdini 3.x on this machine setup, and allowing 20 seconds for communication, the amount of thinking time Ivanov used on each move determines which line Houdini was suggesting at that point in time.
In short: determine exactly what he was using, and the correlation is bound to be much stronger still. It may be fairly convincing to say the moves already correspond closely, but you also have to be able to explain every move where they do not.
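To make the threshold idea above concrete, here is a rough sketch of a match-rate calculation, assuming you have already extracted the engine's candidate moves and centipawn evaluations for each position offline. The data format, helper name, and all numbers are illustrative, not from the actual games:

```python
# Hedged sketch: given the move played in each position and the engine's
# candidate moves with centipawn evaluations (precomputed offline), count how
# often the played move falls within a small evaluation window of the
# engine's top choice. All data below is made up for illustration.

def match_rate(positions, threshold_cp=3):
    """Fraction of played moves whose evaluation is within `threshold_cp`
    centipawns of the engine's best suggestion."""
    matches = 0
    for pos in positions:
        best_eval = max(pos["candidates"].values())
        played_eval = pos["candidates"].get(pos["played"])
        if played_eval is not None and best_eval - played_eval <= threshold_cp:
            matches += 1
    return matches / len(positions)

# Illustrative data: candidate moves -> centipawn evals, plus the move played.
sample = [
    {"candidates": {"Nf3": 35, "d4": 34, "c4": 20}, "played": "d4"},   # 1 cp off: match
    {"candidates": {"Qxb7": 120, "Rd1": 40}, "played": "Rd1"},         # 80 cp off: miss
    {"candidates": {"e4": 15, "e5": 15}, "played": "e4"},              # exact top: match
]

print(match_rate(sample))  # 2 of 3 moves within the window
```

Counting "within N centipawns of the top move" rather than "exact first choice" is what handles the near-ties that swap places at the top of the engine's list.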

Jamie, more scientific analysis has been done, posted by goldendog early on in the thread.
Lilov's 'analysis' is poor: he doesn't use his experience to explain why the moves were computer moves; he just has an engine running and points out when the moves come up. Nothing anyone else couldn't do.

Thanks Scott. I found the data you were talking about (http://www.cse.buffalo.edu/~regan/chess/fidelity/ACPcover-and-report.pdf) -- Much better. I'm an armchair stat guy, so I appreciate the level of detail.
I still wonder if there is any data on time taken per move that might help. It also sounds like the author did not repeat the analysis with many different engines and settings, which might nail down exactly which engine was used and yield a tighter correlation.
Even so, that was much more convincing than Lilov.

I lean towards the opinion that he was cheating. On the other hand, I think anyone who wants to assemble evidence to this end has more work to do. Lilov made a great start with some evidence, but there are more specific things that can be done if someone wants to really nail the analysis.
I've said this numerous times, but all one needs to do is load up Houdini themselves and review the games. You'll be saying to yourself "Uhh yeah, this guy obviously cheated" because unlike any other human game you review, the engine seems to be 'psychic' when predicting Ivanov's next move at almost every turn in the games he cheated in. Then review the one game at Zadar where he didn't cheat, and you'll notice the engine can't predict his moves at all (except for forced trades of course).
So instead of reading about other people's findings and saying "well I don't know if that really impresses me to assume he cheated", try reviewing the games yourself. Actually seeing the analysis in action by the engine really drives the point home.
If you actually read what other people are saying about why the process matters, instead of reacting with an emotional "Well he is guilty as hell and anyone with half a brain can see it!!!!" tirade, you might understand that some of us would like this to be a lasting and repeatable exercise.
So, while this might be a clear-cut case, the next guy won't be stupid enough to follow the engine's first (or second) choice blindly, and it won't be so obvious. When that happens, strong statistical methodology is required to prove cheating. The rules might have to be changed, and if they are, they need to be clear enough that someone can be found guilty of cheating based on a repeatable correlation, not just a (rightfully) peeved GM slapping together a video and posting it to youtube.

@chrispret, if a cheater was going to greater lengths to conceal the fact, especially if they were already a strong player, I don't think statistical evidence would be enough to prove their guilt. It's only because this guy made no effort whatsoever to hide the fact that he was following the engine that we don't need any tangible evidence.
I mostly agree with you, but Regan's methodology (http://www.cse.buffalo.edu/~regan/chess/fidelity/ACPcover-and-report.pdf) is at least repeatable. It could also be published and peer reviewed by statisticians who are true experts in the field, and once accepted it could be applied to all tournaments (after the fact, or in real time). Of course someone will still get away with it by cheating smarter...
I believe prevention is better, but to combat cheating there is a need for a multi-faceted approach.
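To give a flavor of what "repeatable statistical methodology" means here: one much cruder test than Regan's is to ask how improbable the observed engine-match count would be for an honest player with a known baseline match rate. The baseline probability and the counts below are invented purely for illustration:

```python
from math import comb

def binomial_p_value(n_moves, n_matches, baseline_p):
    """One-sided probability of seeing at least `n_matches` engine-matching
    moves out of `n_moves` if the true match rate were `baseline_p`."""
    return sum(
        comb(n_moves, k) * baseline_p**k * (1 - baseline_p) ** (n_moves - k)
        for k in range(n_matches, n_moves + 1)
    )

# Illustrative numbers only: suppose a strong honest player matches the
# engine's top choice 55% of the time, and a suspect matched 58 of 60
# non-trivial moves.
p = binomial_p_value(60, 58, 0.55)
print(f"p = {p:.2e}")  # vanishingly small under the honest baseline
```

The hard part in practice is establishing the honest baseline (it varies with position type, forcing sequences, and player strength), which is exactly what a peer-reviewed method like Regan's has to pin down before a number like this means anything.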

In order for a player, in real time, to cheat in a sophisticated manner where he chose moves that were ~4th best, he would need a colleague who was also a strong player to determine when the 4th best move was also good enough. While such a level of cheating is conceivable, it would be very difficult to do in real time.
Perhaps only the egregious cheats will ever get caught, but even that is a good starting point.
If it comes to the point where cameras in the bathrooms seems reasonable, it's already too late. I don't think we're at that point.

Interesting points being circulated here. In theory it would be easy for a very strong player to use cheating methods only in crucial situations. Such a player, say 2700+, would hardly be suspected of such a thing and could easily get away with it. Or a player at, say, 2200 who does not try to become Fischer in two weeks, but instead charts a realistic progression of improvement based on cheating; he also would not be suspected. The problem is greed... as always.
The video gives a brilliant explanation of what happened during that tournament with Ivanov. It seems the only ones defending him are ignoring the details or using the defense: "Well, he had a .00000000001% chance of doing that without an engine, so it is possible."
When they are home and figure out he cheated ....