Chess will never be solved, here's why

DiogenesDue

Those articles reinforce my point.  A single Chess position should take more CPU power to evaluate than a single Go position.  Chess engines still use deep searches and brute force calculation alongside machine learning; AlphaGo uses purely machine learning (though they did "cheat" and use human play to seed the process, unlike AlphaZero's machine learning...which ultimately will make AlphaGo's learned valuations imperfect in the end), with heuristics to decide on win probabilities, and the heuristics used are baked in every time AlphaGo plays and learns...i.e. almost all the processing power is already front-loaded and spent before a new position is evaluated.  The only evaluation that takes place for AlphaGo in a new position is "what does this position look like compared to already-evaluated positions, and what does the value network say is the best win probability?"

The whole point of machine learning is that the previous AI work is subsumed into the valuation so that the next position *doesn't* require brute force calculation.  All that AlphaGo calculates is "what worked best the last time a position like this came up?".  It's like training a dog to do tricks, but the AI can remember a gazillion steps for its tricks and performs those steps perfectly every single time.
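
To make that concrete, here's a rough toy sketch (my own illustration, not DeepMind's actual architecture) of why querying a trained value network is cheap: all the expensive work went into learning the weights, and scoring a brand-new position is just one fixed-cost forward pass.

```python
import numpy as np

# Toy "value network": the weights stand in for everything learned during
# training, i.e. the work that is already front-loaded before play begins.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((361, 128))   # 19x19 Go board flattened to 361 inputs
W2 = rng.standard_normal((128, 1))

def value_network(position):
    """One forward pass: a fixed handful of matrix multiplies, no tree search."""
    hidden = np.tanh(position @ W1)
    return np.tanh(hidden @ W2).item()  # win-probability-style score in (-1, 1)

# Scoring a never-before-seen position costs the same as scoring any other.
new_position = rng.integers(-1, 2, size=361).astype(float)  # -1/0/+1 per point
print(value_network(new_position))
```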

Machine learning for Chess works to a point, but as Stockfish has proven out, a combination of brute force calculation and machine learning probability valuations is stronger than machine learning alone.  Which is inherently obvious if you ponder it for a minute or two.
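
For contrast, a minimal sketch of the hybrid approach (nothing like real Stockfish internals; the move-generation and evaluation functions are placeholders): a brute-force alpha-beta search that still walks a move tree and calls a learned, NNUE-style evaluation at every leaf.

```python
# Negamax alpha-beta skeleton. `legal_moves`, `apply_move`, and `evaluate`
# are placeholders for whatever board representation and learned evaluator
# (e.g. an NNUE-style network) gets plugged in.

def alpha_beta(position, depth, alpha, beta, evaluate, legal_moves, apply_move):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)           # machine-learning part: static score
    best = float("-inf")
    for move in moves:                      # brute-force part: walk the move tree
        score = -alpha_beta(apply_move(position, move), depth - 1,
                            -beta, -alpha, evaluate, legal_moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                   # cutoff: prune the rest of this branch
            break
    return best
```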

If Chess has 400 possibilities after 2 moves, and Go has 130,000 possibilities after 2 moves, then if each single Go position took more CPU power than each single Chess position, AlphaGo would be running more than 325 times slower than AlphaZero on a given position using the same DeepMind hardware.  I'm pretty sure that is not the case...but feel free to prove me wrong on that.
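
For the record, the arithmetic behind that ratio (assuming 20 legal first moves per side in Chess, and 361 first moves times 360 replies in Go):

```python
chess_after_two_plies = 20 * 20    # 400 positions after one move by each side
go_after_two_plies = 361 * 360     # 129,960 -- the "roughly 130,000" figure
print(go_after_two_plies / chess_after_two_plies)   # ~325
```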

Ergo, the CPU usage for each individual Chess position > the CPU usage for each individual Go position.

MEGACHE3SE
btickler wrote:

Those articles reinforce my point.  A single Chess position should take more CPU power to evaluate than a single Go position.  Chess engines still use deep searches and brute force calculation alongside machine learning; AlphaGo uses purely machine learning (though they did "cheat" and use human play to seed the process, unlike AlphaZero's machine learning...which ultimately will make AlphaGo's learned valuations imperfect in the end), with heuristics to decide on win probabilities, and the heuristics used are baked in every time AlphaGo plays and learns...i.e. almost all the processing power is already front-loaded and spent before a new position is evaluated.  The only evaluation that takes place for AlphaGo in a new position is "what does this position look like compared to already-evaluated positions, and what does the value network say is the best win probability?"

The whole point of machine learning is that the previous AI work is subsumed into the valuation so that the next position *doesn't* require brute force calculation.  All that AlphaGo calculates is "what worked best the last time a position like this came up?".  It's like training a dog to do tricks, but the AI can remember a gazillion steps for its tricks and performs those steps perfectly every single time.

Machine learning for Chess works to a point, but as Stockfish has proven out, a combination of brute force calculation and machine learning probability valuations is stronger than machine learning alone.  Which is inherently obvious if you ponder it for a minute or two.

If Chess has 400 possibilities after 2 moves, and Go has 130,000 possibilities after 2 moves, then if each single Go position took more CPU power than each single Chess position, AlphaGo would be running more than 325 times slower than AlphaZero on a given position using the same DeepMind hardware.  I'm pretty sure that is not the case...but feel free to prove me wrong on that.

Ergo, the CPU usage for each individual Chess position > the CPU usage for each individual Go position.

bro i think ur just straight misreading the articles at this point.

you.... do realize that the equivalent heuristic is that alphago evaluates a position 300 times weaker?

DiogenesDue
MEGACHE3SE wrote:

bro i think ur just straight misreading the articles at this point.

you.... do realize that the equivalent heuristic is that alphago evaluates a position 300 times weaker?

"Bro" can you even explain what you just said using your own words?  Define "a position 300 times weaker".  What does that mean to you?  Or are you going to keep regurgitating?  Do you even know how to program, or are you just reading these articles with no understanding?  If someone told you to write a Chess or Go engine from scratch, how would you attack the problems?

MEGACHE3SE

@btickler you are making the false assumption that those engines are evaluating positions at the same strength.  

im going to repeat myself here

"If Chess has 400 possibilities after 2 moves, and Go has 130,000 possibilities after 2 moves, then if each single Go position took more CPU power than each single Chess position, AlphaGo would be running more than 325 times slower than AlphaZero on a given position using the same DeepMind hardware.  I'm pretty sure that is not the case...but feel free to prove me wrong on that." 

this makes the assumption that the evaluations are of the same strength. they arent. 

it is stated very explicitly "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves." 

"difficulty of evaluating board positions and moves"

in terms of performance against humans, alphago is at the same level as deep blue was.

deep blue had 11 GFLOPS.  Alphago uses at least 84,000 (1,200 CPUs × 70 GFLOPS each, assuming the CPUs are like those found in a regular computer).

it takes more than a thousand times the computing power, not to mention better AI, for a Go program to perform as well as a chess program
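
Working that out explicitly (taking the figures above at face value; the 70 GFLOPS per CPU is an assumption, and FLOPS are a very crude proxy for playing strength):

```python
deep_blue_gflops = 11          # figure used in this thread for Deep Blue
alphago_gflops = 1200 * 70     # 1,200 CPUs at an assumed 70 GFLOPS each = 84,000
print(alphago_gflops / deep_blue_gflops)   # ~7,636: "more than a thousand times"
```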


DiogenesDue
MEGACHE3SE wrote:

@btickler you are making the false assumption that those engines are evaluating positions at the same strength.  

im going to repeat myself here

"If Chess has 400 possibilities after 2 moves, and Go has 130,000 possibilities after 2 moves, then if each single Go position took more CPU power than each single Chess position, AlphaGo would be running more than 325 times slower than AlphaZero on a given position using the same DeepMind hardware.  I'm pretty sure that is not the case...but feel free to prove me wrong on that." 

this makes the assumption that the evaluations are of the same strength. they arent. 

it is stated very explicitly "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves." 

"difficulty of evaluating board positions and moves"

in terms of performance against humans, alphago is at the same level as deep blue was.

deep blue had 11 GFLOPS.  Alphago uses at least 84,000 (1,200 CPUs × 70 GFLOPS each, assuming the CPUs are like those found in a regular computer).

it takes more than a thousand times the computing power, not to mention better AI, for a Go program to perform as well as a chess program

No Sherlock, we're comparing AlphaGo to AlphaZero here, same hardware and two root branches of the same AI software.  Why would you even bring up Deep Blue?

There's no assumption of "strength" required.  I stated that Chess positions should take more CPU to evaluate than Go positions, one for one.  Period.  End stop.  You have supported my position with every post you have made.  The fact that you can't grok this is pretty funny.

MEGACHE3SE

" I stated that Chess positions should take more CPU to evaluate than Go positions, one for one." - which is objectively incorrect.


DiogenesDue
MEGACHE3SE wrote:

" I stated that Chess positions should take more CPU to evaluate than Go positions, one for one." - which is objectively incorrect.

Well, I've made a logical case.  I don't see you putting anything forth other than blather and quotes you don't understand yourself but that have given you some vague notion that you must be right.

tygxc

@7376

"Chess positions should take more CPU to evaluate than Go positions, one for one."
++ Solving Chess does not depend on some evaluation, but on the 7-men endgame table base.

Stockfish is designed to play, i.e. find one good move in some time limit e.g. 3 min / move.
To do that, it depends on some evaluation as it cannot calculate all the way in that time limit.

Stockfish can be used to analyse, using more time and taking e.g. 2 moves instead of 1 move into account. Then it will in part depend on evaluation, as some lines will not reach the 7-men endgame table base.

Stockfish can be used to weakly solve Chess, using much more time:
5 years on 3 cloud engines of 10^9 nodes/s, or 15000 years on a desktop,
calculating all the way to the 7-men endgame table base.
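
A rough sketch of the scheme being described (not actual Stockfish code; `probe_tablebase` and the methods on `pos` are hypothetical stand-ins): search every line until it reaches 7 men or fewer, then take the exact tablebase verdict instead of a heuristic evaluation.

```python
# Game-theoretic values from the side to move's point of view:
# +1 = win, 0 = draw, -1 = loss.

def solve(pos, probe_tablebase):
    """Exact value of `pos`, found by calculating to the 7-man tablebase."""
    if pos.piece_count() <= 7:
        return probe_tablebase(pos)            # exact win/draw/loss, no heuristic
    moves = pos.legal_moves()
    if not moves:                              # no legal moves left:
        return -1 if pos.in_check() else 0     # checkmate or stalemate
    # Full minimax for clarity; a weak solution only needs one adequate move
    # per position for the side whose strategy is being proved.
    return max(-solve(pos.make(m), probe_tablebase) for m in moves)
```

The catch, of course, is the size of the tree this has to walk, which is what the time estimates above refer to.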

MEGACHE3SE

the sources i have posted explicitly support my position.  what do you define "evaluation" as anyways?  a measure of who is winning, or by how much?  you do realize that such evaluations can be wrong?

DiogenesDue
tygxc wrote:

@7376

"Chess positions should take more CPU to evaluate than Go positions, one for one."
++ Solving Chess does not depend on some evaluation, but on the 7-men endgame table base.

Stockfish is designed to play, i.e. find one good move in some time limit e.g. 3 min / move.
To do that, it depends on some evaluation as it cannot calculate all the way in that time limit.

Stockfish can be used to analyse, using more time and taking e.g. 2 moves instead of 1 move into account. Then it will in part depend on evaluation, as some lines will not reach the 7-men endgame table base.

Stockfish can be used to weakly solve Chess, using much more time:
5 years on 3 cloud engines of 10^9 nodes/s, or 15000 years on a desktop,
calculating all the way to the 7-men endgame table base.

Shoo.  I've already refuted all your stuff, and this point is meaningless for the subtopic being discussed.

tygxc

@7381

"you do realize that such evaluations can be wrong?"
++ All evaluations are wrong to some extent:
the only right evaluation is draw / win / loss from the 7-men endgame table base.
That is also how Checkers has been weakly solved:
calculate until the exact evaluation draw / win / loss of the endgame table base.

DiogenesDue
MEGACHE3SE wrote:

the sources i have posted explicitly support my position.  what do you define "evaluation" as anyways?  a measure of who is winning, or by how much?  you do realize that such evaluations can be wrong?

Whether they are wrong (and they are *all* presumed to be wrong until each game is solved) or right is immaterial here.  Under state-of-the-art conditions for Go and Chess playing software, using the best methodologies currently available for each, an evaluation of a discrete Chess position should take more CPU power to complete than a discrete Go position, when using an NNUE hybrid of machine learning and brute-force calculation for Chess versus straight machine learning for Go.

MEGACHE3SE

lmao what "whether they are wrong or right is immaterial here"

you do realize that that is the most important part???

DiogenesDue
MEGACHE3SE wrote:

lmao what "whether they are wrong or right is immaterial here"

you do realize that that is the most important part???

Not remotely.  All "evaluations" derived for as-yet unsolved games are assumed to be wrong.  I'm just winning the point you tried to argue so you will think harder next time.  This discussion is meaningless for any other purpose.  It adds nothing to the topic, and I have no interest in your conclusions.  This is just a newspaper to the snout.

MEGACHE3SE
btickler wrote:
MEGACHE3SE wrote:

the sources i have posted explicitly support my position.  what do you define "evaluation" as anyways?  a measure of who is winning, or by how much?  you do realize that such evaluations can be wrong?

Whether they are wrong (and they are *all* presumed to be wrong until each game is solved) or right is immaterial here.  Under state-of-the-art conditions for Go and Chess playing software, using the best methodologies currently available for each, an evaluation of a discrete Chess position should take more CPU power to complete than a discrete Go position, when using an NNUE hybrid of machine learning and brute-force calculation for Chess versus straight machine learning for Go.

and yet a go position ends up taking more computing power.

your logic must be flawed somewhere.

tygxc

@7386

"Stockfish can not even solve simple 8 man TB positions."
++ If given enough time it can.
Besides you first have to prove your 8-men position is relevant, i.e. can result from optimal play from both sides. Your example cannot result from the initial position by optimal play from both sides and thus is not relevant to weakly solving Chess.

DiogenesDue
MEGACHE3SE wrote:

and yet a go position ends up taking more computing power.

your logic must be flawed somewhere.

Show your work.  Your articles say no such thing.  If you think that they do, then you do not understand them.  You keep ducking the question...do you have any software or hardware background?  Roblox does not count.

MEGACHE3SE

ok what does "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves"  mean?

MEGACHE3SE

are you seriously trying to argue that strength of evaluations doesnt matter?  

DiogenesDue
MEGACHE3SE wrote:

ok what does "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves"  mean?

It means that overall, due to Go's much larger move tree, it's the most challenging "mainstream" game to solve.  But we're not discussing that...and that imprecision is why you lost this argument.