curiouser and curiouser...
Computer test

Thanks...
Yet a highly talented human, who would probably still lose to the computer most of the time, can evaluate the position very accurately and rather quickly.
Isn't that fascinating?
I'm not sure it's all that fascinating. The critical thing is that the computer seeks the best move -- in this position that means, above all, rejecting bxa5. So long as it does that, the numerical values it assigns other moves and positions are IMO academic. Engines aren't optimized to give the best evaluation. Engines aren't optimized to find draws. Engines are optimized to find the best move.
I think a lot of players, myself included, tend to think that since an engine plays at some ridiculous rating -- 3000+ elo or whatever -- these evaluation numbers the engine assigns must also be of '3000+ elo' quality.
In competition, against each other and against humans, engines are not evaluated -- not rewarded -- for assigning good evaluations. Engines are rewarded for winning and penalized for losing. Getting the evaluation completely wrong -- evaluating a drawn position as a loss -- does not hurt the engine's 'bottom line'; only making an incorrect move based on that evaluation affects the engine negatively.
I guess what I'm trying to say, in summary, is: The engines aren't so much incapable of spotting the draw and giving it the correct evaluation as that they can't be bothered to give the correct evaluation -- it's beside the point.
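That point -- that only the relative ordering of moves matters to the engine's results, not the absolute numbers it prints -- can be shown with a toy example. (The move names and scores below are hypothetical, not any real engine's output.)

```python
# Toy illustration: two "evaluations" that disagree wildly on absolute
# scores still pick the same move, as long as they order the moves the
# same way. Only the ordering affects which move gets played.

def pick_best(moves, evaluate):
    """Return the move with the highest score under `evaluate`."""
    return max(moves, key=evaluate)

moves = ["Kb2", "Kc1", "bxa5"]

# eval_a thinks the position is roughly equal; eval_b wrongly thinks
# White is much worse -- but both rank bxa5 dead last.
eval_a = {"Kb2": 0.0, "Kc1": 0.0, "bxa5": -5.0}.get
eval_b = {"Kb2": -2.5, "Kc1": -2.6, "bxa5": -9.0}.get

assert pick_best(moves, eval_a) == pick_best(moves, eval_b) == "Kb2"
```

Both evaluation functions reject bxa5 and play the same king move, so from the programmers' point of view the "wrong" absolute numbers of eval_b cost nothing.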

I agree with j27pyth, with the added thought that a human is capable of forming a plan, (in this case, never do anything else except a King move).
Computers are incapable of plans; after every move they evaluate the position as a new one.

Grobzilla - does your Houdini tell you that the other moves lead to a draw?
Well, not immediately, but it sees the draw by threefold repetition.
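That "by 3-fold" behavior can be sketched as simple position counting (a toy illustration of the rule, not Houdini's actual code): once the same position has occurred three times, the line is scored as a draw regardless of the material count.

```python
from collections import Counter

def is_threefold(position_history):
    """True once any position string has occurred three or more times."""
    return any(count >= 3 for count in Counter(position_history).values())

# A "shuffling" line that bounces between two positions:
history = ["pos_a", "pos_b", "pos_a", "pos_b", "pos_a"]
assert is_threefold(history)          # pos_a has occurred 3 times: draw
assert not is_threefold(history[:4])  # not yet repeated three times
```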

Crafty 20.14 has 1.bxa5 as best for a split second, then it plummets to worst.
Fritz 6 is indecisive. It likes 1.bxa5 for about 2 seconds, then it doesn't for a while, then it likes it again, then back to not liking it. Funny.

At times "doing nothing" is best---for human chess. Computers are not thinking or formulating a plan; they respond to their software. So supercomputers can beat the best GMs, but can they smell a freshly cooked apple pie, have sex, love your child, enjoy a hot summer day?

"I'm not sure it's all that fascinating."
The fascinating part is that humans, who seem decidedly weaker, can reach a correct conclusion very quickly that a computer can't reach at all. Of course, it makes little difference in practice, but that doesn't make it less fascinating. When computer chess was being developed as part of the larger development of Artificial Intelligence, the idea was to have chess programs think like humans. Botvinnik, himself a pioneer in this area, took that approach. But the divergent school of thought favored brute force or number-crunching, which ultimately proved better for computer chess programs if less suitable to AI. The human brain, or way of thinking, does offer some advantages, such as those that enable us to evaluate quickly or think beyond the calculable, but brute force, with today's advances, can surmount and outflank those advantages. Still, the beauty of the human mind juxtaposed to the power of the computer processor makes for a fascinating image.

"I'm not sure it's all that fascinating."
The fascinating part is that humans, who seem decidedly weaker, can areach a correct conclusion very quickly that a computer can't reach at all. Of course, it makes little difference in practice, but that doesn't make it less fascinating. When computer chess was being developed as part of the larger development of Artificial Intelligence, the idea was to have chess programs think like humans. Botvinnik, himself a pioneer in this area, took that approach. But the divergent school of thought favored brute force or number-crunching, which ultimately proved better for computer chess programs if less suitable to A-I. The human brain, or way of thinking, does offer some advantages, such as those that enable us to evaluate quickly or think beyond the calculable but brute force, with today's advances, can surmount and outflank those advantages. Still the beauty of the human mind juxtaposed to the power of the computer processor makes for a fascinating image.
Something I draw from today's strong engines is that what we thought of in the past as creative intelligence (in chess) has been replicated and surpassed by engines. Sure, not in every position, but sacrifices or maneuvers we used to think of as too exotic or intuitive for a computer to mimic, for example, can be found by today's computers.
So it makes me wonder what other areas of creativity that we think of as uniquely human can be simulated with a sufficiently complex computer. I tend to think we're fooling ourselves if we think what we call intuition isn't simply a higher form of calculation.

But given the option of a beautiful sacrifice that wins in 10 or a mundane grind that wins in 9, the computer will always take the 9-move mate, while most humans will take the sac. (I would, at any rate.)
I remember the early days of computers. My university had one, and it required a large room, specially prepared, to give the computer the conditions it needed to function well. We were all allowed to write some tiny programme in one of the early languages (Algol and Fortran, I think they were called) to see how the process worked - quite fun.
Chess was thought to be a particularly good field in which to strive for improvement to programming technique. Botvinnik was a prominent early computer scientist as well as a chess player.
Very slowly the chess playing programmes improved until they reached a point where they could give an average club player a run for his or her (his in those days) money. It was speculated that one day - in several hundred years time perhaps - they might even reach master strength.
The change to number crunching and the rapid advance of computers to their present level was a huge surprise. Botvinnik, I suspect, would have been disappointed.
But facility at chess does not take computers very far if compared to the human brain. There are no variables in chess and, huge as the numbers are, there are only eight squares by eight squares and so many combinations. On the basis of what has happened in chess I would be willing to believe that the number-crunching approach will prove more powerful than intuition would suggest in more fields than just chess. But not very many more.

But given the option of a beautiful sacrifice that wins in 10 or a mundane grind that wins in 9, the computer will always take the 9-move mate, while most humans will take the sac. (I would, at any rate.)
Sure. But I think if we wanted to, we could program them to win with the artistic move sometimes. Of course they don't think it's beautiful, or even think at all. They're just following the code and crunching numbers, but my thought is that a sufficiently advanced number cruncher would be indistinguishable from human creativity, intuition, etc. as evidenced just a bit by chess playing programs today.

I can surely see your point. I've heard music created by computers, and while probably technically accurate, it seemed to lack some je ne sais quoi. Of course, knowing it was computer-generated might have prejudiced me from the outset, and some human music lacks a lot too, but seldom does it feel so cold. I hope number-crunching is never indistinguishable from human creativity, but who knows?

Thanks...
Yet a highly talented human, who would probably still lose to the computer most of the time, can evaluate the position very accurately and rather quickly.
Isn't that fascinating?
I agree. These days, we tend to reduce the human brain to some super-duper calculator. But the wonderfully amazing thing about the human mind is its capacity to understand, an attribute these chess programs lack.
Absolutely, these chess programs can out-calculate the human mind. But when it comes to understanding, human beings have the advantage (it is no contest, since the computers cannot understand). A human being merely looks at the position and, given a certain level of wisdom about chess, can see (understand) that it is a drawn position.
This chess problem seems to be a good example which highlights the difference between calculating and understanding.

In February 2010 Kasparov wrote a review of Diego Rasskin-Gutman's book, "Chess Metaphors: Artificial Intelligence and the Human Mind," for the "New York Times." It was quite long, and in it Kasparov recounted his own history with chess programs. One interesting passage talked about the aftermath of his 1997 loss to the new and improved Deep Blue:
It was the specialists—the chess players and the programmers and the artificial intelligence enthusiasts—who had a more nuanced appreciation of the result. Grandmasters had already begun to see the implications of the existence of machines that could play—if only, at this point, in a select few types of board configurations—with godlike perfection. The computer chess people were delighted with the conquest of one of the earliest and holiest grails of computer science, in many cases matching the mainstream media’s hyperbole. The 2003 book Deep Blue by Monty Newborn was blurbed as follows: “a rare, pivotal watershed beyond all other triumphs: Orville Wright’s first flight, NASA’s landing on the moon….”
The AI crowd, too, was pleased with the result and the attention, but dismayed by the fact that Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion. Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force. As Igor Aleksander, a British AI and neural networks pioneer, explained in his 2000 book, How to Build a Mind:
"By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s. In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives."
It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better.

"I'm not sure it's all that fascinating."
The fascinating part is that humans, who seem decidedly weaker, can reach a correct conclusion very quickly that a computer can't reach at all. Of course, it makes little difference in practice, but that doesn't make it less fascinating.
Well, I think you missed my point (which I didn't make very clearly at all) ... I agree with you that an interesting/fascinating thing is to consider the limitations and strengths of conceptual planning vs. concrete, deterministic position-crunching decision making. You said: The fascinating part is that humans, who seem decidedly weaker, can reach a correct conclusion very quickly that a computer can't reach at all.
I absolutely agree with this in general. I was trying to make a very limited point: the inability of engines to give a good evaluation of the specific locked position you gave is less about planning vs. calculation, or the intrinsic limitations of number-crunching, than it is about the priorities encoded in the evaluation. If the engine is able to sort available moves in the correct relative order from best to worst then getting the absolute evaluation of the individual moves "right" is irrelevant to the priorities of the programmers.
This doesn't invalidate your general point at all. It was just me being annoyingly precise about a minor point while expressing it poorly. Perhaps I'm continuing along that very same path!
The planning vs. crunching problem for chess is only more pronounced in the game of Go. The branching of Go, with its huge 19x19 board, had strong humans easily defeating number-crunching programs for years -- it took all the processing power available for engines to manage a decent club level, a level strong players could annihilate. It seemed that perhaps Botvinnik had been right after all, that human-like conceptual planning was needed; he'd just been analyzing the wrong game! This has changed in recent years. It's true Go remains unconquered by engines... the very best players can still destroy the best engines. But top Go programs have made enormous strides -- it's like they've gone from Elo 2000 to Elo 2500, a huge gain. And it's only partly because of the gains in processing power. And yet it's not because they use better planning in the human sense either. The gains have come via "Monte Carlo" analysis -- essentially playing out tons of super-fast games against themselves to create a probabilistic view of the outcome of moves at the base of the move tree.
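That Monte Carlo idea can be sketched in a few lines: score each candidate move by the fraction of fast random playouts it wins, with no planning at all. (The game below is Nim -- take 1-3 stones, taking the last stone wins -- a toy stand-in for Go, chosen only so the playouts terminate quickly; all names are hypothetical.)

```python
import random

def random_playout(stones, our_turn):
    """Play random moves to the end; True if 'we' take the last stone."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return our_turn  # the side that just moved emptied the pile
        our_turn = not our_turn

def monte_carlo_move(stones, playouts=2000):
    """Pick the move whose random playouts win most often."""
    best_move, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        if stones == take:
            return take  # immediate win, no playouts needed
        wins = sum(random_playout(stones - take, our_turn=False)
                   for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = take, wins / playouts
    return best_move
```

From 10 stones the sampled win rates point at taking 2, leaving the opponent a multiple of 4 (the theoretically lost count in this game), even though no single line was calculated to the end -- a probabilistic view of the move tree's base, in miniature.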
I think that humans should make big brainiac computers with a 3000+ Elo.
But they shouldn't depend on them.
For example, you shouldn't just feed a game straight to your engine. If it was a tournament game, you should analyze it with your opponent, your coach, or a strong player. Computers can only point out tactical mistakes, but your coach can tell you what you are doing wrong.
For example, if you play a gambit, the computer will already give a winning advantage to the other side because it's greedy. So if you play passive moves after that and lose, the computer won't object, because you were going to lose anyhow.
It's OK to have smart brainiacs of chess in our society, but it isn't OK to mindlessly follow their analysis -- you should analyze your own games.
Houdini on my computer obviously gives a material advantage to Black, but the line it gives instantly is clearly a draw (it reaches 26 moves and is clearly playing for the 50-move rule). I don't know if it considered taking the Ra5, because I only gave it a few seconds and didn't see it in the analysis.