I'm finding a similar difference between Fritz 11 and Houdini 1.5.
(screenshot: engine scores)

Such differences are not uncommon in complex positions.
As I'm playing with different engines, I'm noticing one key variable that may explain some differences: in the image, look at the number of CPUs.
My computer is a quad core, with each core running two threads.
Houdini 1.5 uses all four cores. Stockfish detects the eight hardware threads and reports 8 CPUs. The commercial engines I have installed each show 1 CPU; I may need the Deep versions of these engines to get optimal evaluation on a quad-core machine.
If you see the same difference, it may be that Houdini is giving you the more accurate assessment. Look also at the depth and kilonodes per second; Houdini's are much higher.
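If anyone wants to check this outside the GUI, here's a rough sketch of how I'd query and set an engine's Threads option from a script. It uses Python with the python-chess library; the engine path is just a placeholder, and not every engine exposes a Threads option at all:

```python
# Sketch: check whether a UCI engine exposes a "Threads" option, set it,
# then report depth, nodes per second, and score for one analysis.
# The engine path is a placeholder - point it at your own engine binary.
import chess
import chess.engine

ENGINE_PATH = "/path/to/stockfish"  # hypothetical path, adjust for your setup

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
try:
    threads_opt = engine.options.get("Threads")
    if threads_opt is not None:
        print(f"Threads option: default={threads_opt.default}, max={threads_opt.max}")
        engine.configure({"Threads": 4})  # use all four physical cores
    else:
        print("No Threads option exposed (single-CPU build?)")

    board = chess.Board()  # starting position; substitute your own FEN
    info = engine.analyse(board, chess.engine.Limit(depth=18))
    print("depth:", info.get("depth"), "nps:", info.get("nps"), "score:", info.get("score"))
finally:
    engine.quit()
```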

Imagine you had only Fritz's evaluation. What would your conclusion be?
That Tal was already winning at that stage? Not sure.

Houdini is definitely faster and better - I think Fritz 12 is mistaken here! (Famous last words)

Personally, I couldn't care less about enigmatic or differing engine assessments of games played in the exosphere of chess. I'm playing down in the troposphere, and I'm really interested in what the engines have to tell me about the games I've lost (BTW, even most of my wins turn out to be blunderfests after Fritz or Houdini gets done with them). BOTTOM LINE: Even the mediocre analysis available here tells me way more than I've usually seen in my turn-based games.

I'm no expert at engines, but one thing I do notice is that sometimes Houdini will go well past 20 ply to find the "correct" move, whereas with Rybka 4 the "correct" move seems to be ascertained almost 100% of the time before ply 17, and 95% of the time before ply 13. Having said that, Houdini 2.0c gets to 20 ply much, much faster than Rybka or any other engine.
A couple of weeks ago, I was analyzing a game that I lost OTB, and at the end of the game my opponent had a great combo that basically amounted to a forced mate. In reviewing the game with Houdini, even at the very last move it kept saying the position was even, until it got to 22 ply - then, bam, all of a sudden it went from +0.55 to +10.0 just like that. If I get a chance I'll post the start of the combo.
Hmmm, maybe Houdini is more susceptible to the "Horizon Effect"? But then, I thought it was top-rated, no?
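For what it's worth, here's roughly how I watch the score climb depth by depth, so I can see exactly which ply the jump happens at. This is just a sketch in Python with the python-chess library; the engine path and FEN are placeholders, not my actual game:

```python
# Sketch: stream the engine's output and print the evaluation at each
# completed depth, so you can spot the ply where the score suddenly jumps.
# Engine path and FEN are placeholders, not the game from this thread.
import chess
import chess.engine

ENGINE_PATH = "/path/to/houdini"  # hypothetical path
FEN = "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3"

board = chess.Board(FEN)
engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
try:
    last_depth = 0
    with engine.analysis(board, chess.engine.Limit(depth=24)) as analysis:
        for info in analysis:
            depth = info.get("depth")
            score = info.get("score")
            if depth and score and depth > last_depth:
                last_depth = depth
                # .white() converts the reported score to White's point of view.
                print(f"ply {depth:2d}: {score.white()}")
finally:
    engine.quit()
```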

I think that you have a point, paulgottlieb. They certainly usually agree on the direction. But, I'm skeptical that they never disagree. Almost never, yes. Testing this proposition, I ran both engines through a batch of games with some unclear positions. I found a few where one engine sees = and the other +/= or =/+, and a few times there were brief instances of +0.01 and -0.01. All of these changed into basic agreement after a few seconds, supporting your assertion.
There may be some complex, unclear positions in which the two disagree, though I have yet to find them.
Here's one where both see the position as equal but lean in opposite directions: +0.06 for Fritz, -0.20 for Houdini.
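If anyone wants to run the same comparison on their own positions, this is roughly how I'd script it. Python with python-chess again; both engine paths and the FEN are placeholders, and the commercial engines need a UCI-capable version to be driven this way:

```python
# Sketch: evaluate the same position with two UCI engines at a fixed depth
# and print the two scores side by side. Paths and FEN are placeholders.
import chess
import chess.engine

ENGINES = {
    "Fritz":   "/path/to/fritz_uci",   # hypothetical path
    "Houdini": "/path/to/houdini",     # hypothetical path
}
FEN = "rnbqkb1r/pp2pppp/3p1n2/8/3NP3/2N5/PPP2PPP/R1BQKB1R b KQkq - 1 5"
DEPTH = 20

board = chess.Board(FEN)
for name, path in ENGINES.items():
    engine = chess.engine.SimpleEngine.popen_uci(path)
    try:
        info = engine.analyse(board, chess.engine.Limit(depth=DEPTH))
        score = info["score"].white()        # score from White's point of view
        best = info.get("pv", [None])[0]     # first move of the principal variation
        print(f"{name:8s} depth {info.get('depth')}: {score}  best: {best}")
    finally:
        engine.quit()
```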
I did some reading on heuristic algorithms in college, and the only specific thing I can truly remember about how the AI in these evaluation profiles operates is that it is usually a mix of: what is in the game database being used, the relative values of the pieces in relation to both the squares they occupy and the number of times they are played overall, and finally what type of playing style the program is set to favor. A computer that plays a lot of positional games will evaluate, say, a Smith-Morra Gambit Declined differently than a program that focuses on attacking.
Bottom line, the results are complicated to understand or evaluate at times (perhaps we need to build programs to analyze the chess programs?)
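To give a concrete idea of that mix, the textbook version is material values plus piece-square tables. Here's a toy sketch in Python (using python-chess only for the board); the numbers are purely illustrative and not taken from any real engine:

```python
# Toy evaluation sketch: material values plus a piece-square bonus for pawns.
# Real engines use far richer terms (king safety, mobility, pawn structure,
# tuned weights, ...) - these numbers are illustrative only.
import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

# A real engine keeps one 64-entry table per piece type; here, pawns only.
# Index 0 is a1, index 63 is h8 (written from White's side of the board).
PAWN_TABLE = [
     0,  0,  0,  0,  0,  0,  0,  0,
     5, 10, 10,-20,-20, 10, 10,  5,
     5, -5,-10,  0,  0,-10, -5,  5,
     0,  0,  0, 20, 20,  0,  0,  0,
     5,  5, 10, 25, 25, 10,  5,  5,
    10, 10, 20, 30, 30, 20, 10, 10,
    50, 50, 50, 50, 50, 50, 50, 50,
     0,  0,  0,  0,  0,  0,  0,  0,
]

def evaluate(board: chess.Board) -> int:
    """Static evaluation in centipawns from White's point of view."""
    score = 0
    for square, piece in board.piece_map().items():
        value = PIECE_VALUES[piece.piece_type]
        if piece.piece_type == chess.PAWN:
            # The table is written from White's side; mirror the square for Black.
            idx = square if piece.color == chess.WHITE else chess.square_mirror(square)
            value += PAWN_TABLE[idx]
        score += value if piece.color == chess.WHITE else -value
    return score

print(evaluate(chess.Board()))  # symmetric starting position -> 0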
My first guess is that White's advantage grows as the analysis deepens. But there is often a lot more involved. Engines vary in their manner of positional evaluation.
I opened the position with three engines. Fritz 11 reaches greater depth and gives a higher evaluation for White. Engines can reach greater depth faster with more aggressive pruning, but sometimes, in complex positions, they prune away a promising line.
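To illustrate what pruning a line means: in an alpha-beta style search, once one reply is good enough to refute a move, the remaining replies are never examined. Here's a bare-bones sketch in Python with python-chess; it is nothing like a real engine's search, just the basic cutoff mechanism:

```python
# Bare-bones alpha-beta sketch: the counter shows how many times a branch was
# abandoned by a cutoff. No quiescence, no move ordering, no hash table.
import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Material-only evaluation in centipawns, from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

pruned = 0  # number of beta/alpha cutoffs (branches never fully searched)

def alphabeta(board: chess.Board, depth: int, alpha: int, beta: int) -> int:
    global pruned
    if depth == 0 or board.is_game_over():
        return material(board)
    if board.turn == chess.WHITE:      # maximizing side
        best = -10**9
        for move in board.legal_moves:
            board.push(move)
            best = max(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:          # remaining moves are pruned away
                pruned += 1
                break
        return best
    else:                              # minimizing side
        best = 10**9
        for move in board.legal_moves:
            board.push(move)
            best = min(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            beta = min(beta, best)
            if alpha >= beta:
                pruned += 1
                break
        return best

board = chess.Board()
print("eval:", alphabeta(board, 3, -10**9, 10**9), "cutoffs:", pruned)
```

The harder a real engine prunes (and it uses far more aggressive techniques than this), the deeper it can look in the same time, but a line it dismisses early is a line it never searches.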
Thanks for that - maybe there's something wrong with my PC.
I doubt it.
I don't have Fritz 12, but my screenshot shows Fritz 11, Rybka 4, and Hiarcs 12. I've just downloaded Houdini 1.5 so that I can install it on my new computer. We'll see if I can reproduce a similar result to what you found.