True or false? Chess will never be solved! Why?

EscherehcsE
Mathemagics wrote:
Scottrf wrote:


You really think an engine always draws against itself?

uh... it does...

Have you never seen an engine play against itself? It is ALWAYS a draw... you will never see the same engine beat itself! It's impossible! (Or it's a really bad engine that doesn't always find the best moves in x search depth.)

Don't believe me? Download Arena and try it out yourself...

An engine against a different engine, on the other hand... that's completely different, though it is still likely to draw.

Ignorance and arrogance are a horrible combination.

The_Ghostess_Lola
Scottrf wrote:

Just because a position is given as equal doesn't mean that you can play the best moves and it will stay equal.

Maybe not XX Man, but we hafta prove otherwise - and we haven't....yet. So, we haven't. A person's intuition should tell one that chess is a theoretical draw. We're still trying to make it a law - we're having trouble and pinning our hopes on a brain box. Yes, white may get an advantage....but not enuf to register a +1.

TBentley

How many games did they play?

The_Ghostess_Lola

What we have going for us everyone *Wink* is we can be pretty certain that chess has a finite # of moves available. Which will help the brute force of our toodlie tower - in tyme. It may be (Avogadro's #)^3....but I think it's out there....somewhere in lala land.

Iluvsmetuna

An engine that cannot beat itself cannot find its own flaws, so either it cannot solve chess or it is already perfect and has solved chess. Trophy, anyone?

Benzodiazepine

This is very philosophical. I'm out.

Benzodiazepine

I think chess is much like the lottery.

U always lose.

Iluvsmetuna

I recommend falling out of the other side of the bed tomorrow morning, Benz!

Ziryab
watcha wrote:

I think this is the way a layperson should grasp the intuition behind the limits of computation:

1) Information is energy

2) Energy is gravity

3) Gravity is black holes

Information is energy: this thought already occurred to von Neumann at the beginning of the information age. Von Neumann's speculation was that if you erase a bit of information, heat must be released. What an ingenious insight! And it turns out to be true: physicists have by now actually measured the heat released when erasing a single bit of information. This insight has led to research into reversible gates, which do not erase information ( Toffoli gates ). While in classical computing you have the luxury of using irreversible gates, losing information and therefore releasing heat, it turns out that quantum computation can only be done in a reversible way. Think about that. No information can be lost in quantum mechanics.
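
A quick back-of-the-envelope sketch of that limit (commonly known as Landauer's principle), assuming room temperature as an illustrative value:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

# Landauer's principle: erasing one bit releases at least k_B * T * ln(2) of heat.
heat_per_bit = k_B * T * math.log(2)
print(f"Minimum heat per erased bit at {T:.0f} K: {heat_per_bit:.2e} J")  # ~2.9e-21 J
```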

Energy is gravity. This should be trivial by now ( it was not trivial before Einstein ). Since Einstein, every child knows that 'E' is equal to 'm' times 'c' squared ( where E is energy, m is mass and c is the speed of light ). But even 'c' squared is unnecessary: if you count in natural units where the speed of light is 1, the equation reads: E = m. Simple as that. Even Newton knew that mass has gravity. If mass is equal to energy, then energy has gravity also.

Gravity is black holes. This is a consequence of Einstein's theory which Einstein himself did not like. Nevertheless it follows from his equations, and he admitted this. If you keep packing mass and energy into a limited amount of space, sooner or later the whole thing will collapse into a black hole. No surprise that the ultimate limit on the information content that can be held in a finite amount of space has come from the study of black holes ( black hole thermodynamics, to be precise ).
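
That storage limit is usually quoted as the Bekenstein bound; here is a rough illustrative sketch (the 10 cm sphere and 1 joule are arbitrary example numbers, not anything from the thread):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def max_bits(radius_m: float, energy_j: float) -> float:
    """Bekenstein bound: upper limit on the bits storable in a sphere
    of radius radius_m containing total energy energy_j."""
    return 2.0 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

print(f"{max_bits(0.1, 1.0):.2e} bits")  # ~3e25 bits for 1 J inside a 10 cm sphere
```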

Since information is energy, energy is gravity, and gravity is black holes, the storage of information has ultimate limits due to black holes.

But not only that: the speed of processing information also has limits due to energy. In order to perform computation you need a system which can alter its state to represent the change in the state of the computation ( all computation can be viewed as a series of states of a Turing machine in which state transitions are dictated by a certain fixed rule ). It turns out that in quantum mechanics the speed at which you can take a system from one state to another clearly distinguishable state depends on the average energy of the system. The faster the transitions you need, the higher the average energy of the system must be. There is a limit to computational speed per unit energy ( the Margolus-Levitin theorem ). Again, you can't build a system of arbitrarily high computational power, because it would have so much energy that it would collapse into a black hole.
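
A rough sketch of that speed limit, using 1 joule as an arbitrary example energy:

```python
h = 6.62607015e-34   # Planck constant, J*s

def max_ops_per_second(avg_energy_j: float) -> float:
    """Margolus-Levitin bound: at most 4E/h transitions to a distinguishable
    (orthogonal) state per second for a system with average energy E."""
    return 4.0 * avg_energy_j / h

print(f"{max_ops_per_second(1.0):.2e} ops/s per joule")  # roughly 6e33
```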

To put this more simply: a serious effort that gets close to solving chess will cause the world to collapse upon itself and suck everything nearby into the abyss.

TurboFish

I have seen Rybka beat itself, especially in semi-closed positions. Of course existing engines can beat themselves! Even the mighty Stockfish beats itself.

Since engines don't play perfectly in complex positions (i.e., the vast majority of positions), they must by definition sometimes play objectively inferior moves. This type of failure will always be a possibility unless a tablebase position is reached before the engine reaches its calculation horizon.

Iluvsmetuna

Looks like chess will never be solved, total bummer. Still, life is awesome nonetheless.

watcha
Ziryab wrote:

To put this more simply: a serious effort that gets close to solving chess will cause the world to collapse upon itself and suck everything nearby into the abyss.

We have a seven-man tablebase now and we have not collapsed upon ourselves.

Maybe we can risk building an eight-man tablebase without going into the abyss.

EscherehcsE

No opening book, 5-piece Nalimov TB, fixed depth of 14 ply.



watcha
TurboFish wrote:

Since engines don't play perfectly in complex positions 

Engines don't perform exhaustive tree search. What you see displayed as 'depth' is the depth of an imperfect search. No exhaustive search deeper than 12-14 plies can be performed by today's engines, which is nothing.

In an average chess position there are 35 legal moves. If you want to apply the minimax/negamax algorithm to the tree of chess, you need the following time as a function of depth ( measured in plies ):

Minimax/negamax analysis time ( secs )

Depth (plies)               Nodes   Laptop (3,500 kN/s)   Deep Blue (200 MN/s)
            1                  35                      0                      0
            2               1,225                      0                      0
            3              42,875                      0                      0
            4           1,500,625                      0                      0
            5          52,521,875                     15                      0
            6       1,838,265,625                    525                      9
            7      64,339,296,875                 18,383                    322
            8   2,251,875,390,625                643,393                 11,259
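
A small sketch reproducing the arithmetic behind the table above (a full-width tree with a uniform branching factor of 35, timed at the two quoted analysis speeds):

```python
LAPTOP_NPS = 3_500_000       # ~3,500 kN/s
DEEP_BLUE_NPS = 200_000_000  # ~200 MN/s

for depth in range(1, 9):
    nodes = 35 ** depth
    print(f"{depth} plies: {nodes:>20,} nodes   "
          f"laptop: {nodes / LAPTOP_NPS:>9,.0f} s   "
          f"Deep Blue: {nodes / DEEP_BLUE_NPS:>7,.0f} s")
```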

 

On my laptop Houdini 3 runs at an analysis speed of 3,500 kN/s. Deep Blue analyzed at a speed of 200 MN/s, which even by today's standards is high: at TCEC, the site where top engines compete for the unofficial world championship title on high-end commercially available hardware, the typical analysis speed is around 50 MN/s.

It can be seen from the table that the realistic depth ( that is, one that can be reached in a couple of minutes ) achievable by minimax/negamax is 6 plies on a laptop and 7 plies on Deep Blue.

Minimax/negamax analyzes every single position which can arise in a given number of plies, therefore all positions within the tree will have correct values. However, this is not necessary, because you can apply a trick. If a move is refuted, minimax/negamax will still search on to find out whether it can be refuted even more, but you can stop the search of a refuted move immediately, without examining any further responses to it. In this way you can cut branches off the tree, and you can do the same thing at every level recursively. This algorithm is called alpha-beta. With correct ordering of the moves ( using evaluations of shallow searches to order moves in deeper searches ), enough cuts can be made for alpha-beta to reach twice the depth that minimax/negamax can achieve. Since alpha-beta does not lose relevant information, there is no risk of cutting a meaningful branch of the tree: alpha-beta will always find the same best move that minimax/negamax would have found.
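
A minimal sketch of that idea in code (negamax with alpha-beta cuts). The evaluate / legal_moves / apply_move hooks are hypothetical placeholders, not any real engine's API, and the toy "game" at the bottom exists only to make the sketch runnable:

```python
def negamax(position, depth, alpha, beta, evaluate, legal_moves, apply_move):
    """Negamax search with alpha-beta pruning.
    The three function arguments stand in for whatever a real engine provides."""
    if depth == 0:
        return evaluate(position)          # static score from the side to move's view
    best = float("-inf")
    for move in legal_moves(position):     # good move ordering -> more cutoffs
        child = apply_move(position, move)
        score = -negamax(child, depth - 1, -beta, -alpha,
                         evaluate, legal_moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                  # this move is already refuted:
            break                          # cut the remaining replies (beta cutoff)
    return best

if __name__ == "__main__":
    # Toy usage: a "position" is just a number, each move adds 1..3, and the
    # evaluation is the number itself -- purely to show the function running.
    score = negamax(0, 4, float("-inf"), float("inf"),
                    evaluate=lambda p: p,
                    legal_moves=lambda p: [1, 2, 3],
                    apply_move=lambda p, m: p + m)
    print(score)
```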

By applying alpha-beta you can reach a search depth of 12 plies on a laptop and 14 plies on Deep Blue. This means that, by today's standards, a perfect search can only be performed to a depth of 12-14 plies, depending on the hardware. Any search deeper than this runs the risk of cutting meaningful branches off the tree. Since today's engines can reach a depth of at least 20 plies in a couple of minutes, it is evident that these searches cannot be perfect. Any move which only shows its value beyond 12-14 plies can fall victim to pruning.

EscherehcsE

None of this thread really matters anyway. In a couple of hundred years, a giant asteroid will smash into the Earth and split the planet apart.

bigpoison
Scottrf wrote:

Ignorance and arrogance are a horrible combination.

But all too common.

watcha
EscherehcsE wrote:

None of this thread really matters anyway. In a couple of hundred years, a giant asteroid will smash into the Earth and split the planet apart.

By that time human colonies on Mars will have a 20-man endgame tablebase.

EscherehcsE
watcha wrote:
EscherehcsE wrote:

None of this thread really matters anyway. In a couple of hundred years, a giant asteroid will smash into the Earth and split the planet apart.

By that time human colonies on Mars will have a 20-man endgame tablebase.

OK, next year then.

The_Ghostess_Lola

Venus watcha, Venus....Wink....

The_Ghostess_Lola

(TF) I have seen Rybka beat itself....They say Bobby Fischer used to play himself....and that he was the only person he ever really lost to.