Solving chess? With no BS. (moderated)

MARattigan

@btickler

So what exactly is wrong with it?

 

blueemu
MARattigan wrote:

@btickler

So what exactly is wrong with it?

 

Are you serious? Even the most complex QR code (version 40, 177 x 177) can store less than 3000 bytes of data. That falls short by... how much?... a factor of 5 x 10^114 or so?

Reality check: That's 5,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times too small.

DiogenesDue
MARattigan wrote:

@btickler

So what exactly is wrong with it?

QR codes were invented to direct smartphone users to websites using encoded links and to store product codes/coupons.  The maximum capacity of a postage stamp sized QR code is 7089 numbers, or 4296 alphanumeric characters.  So, a QR code stores about 4K or 0.4% of 1MB.  I would assume she was joking, which she does tend to do.

The 10-year-old USB flash drive in my pocket (at 512MB) holds over a hundred thousand times the amount of data embedded in a QR code.

Edit:  Emus are fast.

blueemu
btickler wrote:

Edit:  Emus are fast.

Urban Dictionary: Emu'd

hoodoothere
MARattigan wrote:
hoodoothere wrote:

I think that's Shannon's estimate based on an average 40 move game between players. Doesn't come remotely close to the number of possible games. In fact if chess is a win that could be less than the number of subgames in the tree of moves for one winning line.

????? 10 to the 120 power is an astronomical number considering there are only 10 to 82 power atoms in the known universe on the high end: https://www.universetoday.com/36302/atoms-in-the-universe/ 
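For scale, a quick sketch of the ratio between those two quoted figures:

```python
# Shannon's game-tree estimate vs. the high-end count of atoms in the
# observable universe, both as quoted above.
game_tree_nodes = 10**120
atoms_in_universe = 10**82

ratio = game_tree_nodes // atoms_in_universe
print(ratio)  # 10**38: even one node per atom leaves you short by this factor
```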

TestPatzer
btickler wrote:

You can't discount "bad" openings at all.  The problem is that engines, up until super recently, have gotten all their valuations from humans, a la pawn = 1, minor piece = 3, etc.  Human beings play flawed chess based on biases...we see a mate moving a queen forwards/towards the opposing king faster than we see a mate retreating the queen away to deliver check...we prefer various aesthetically pleasing patterns, we find attacking easier and preferable to defending...on and on.  These tiny but ubiquitous biases are baked right into traditional engines' evaluations.  Machine learning engines that *do not pollute their play pool by facing human opposition constantly in their learning process* (I'm looking at you in your early days, Leela) can get around this obstacle.  That's why Alpha Zero's play was shocking to GMs.  If Alpha Zero had played GMs, and not only itself, during its training, it would have "learned" how best to beat human chess players with all the biases thereof, and it would not have evolved to where it did.

Maybe the way for white to force a win is open with e4 and sac the LSB on f7, every game.  You would have discounted that as a "bad opening" and discarded it.  Roughly the same way  that GMs for decades discarded the Berlin as being bad for black, because they assumed that exchanging queens and losing castling rights was "bad"...and then they stopped analyzing based on an assumption.

In any case, the solution that tablebases are building will continue to be built backwards, so pruning openings doesn't do much of anything...it's like snipping the last millimeter of Rapunzel's hair.

Well, I mean in a practical sense. "Soft"-solving chess, until the technical data storage issues are resolved in some way. (While the progress with tablebases continues to march forward, in the background.)

And I don't mean disregarding openings based on human perception, but disregarding them if, after continuous engine testing, certain lines are shown to be losing, without fail.

(Maybe, to discount human influence in the engines, as you pointed out, we could only have "zero" type engines, like AZ and LeelaChess, test the openings.)

Imagine AlphaZero playing 1.f3 e5 against itself 50 million times (a day's work), to reach an inarguable conclusion about whether or not 1.f3 is survivable for White. (Then doing that same work for each dubious opening.)

Most openings aren't really definable at the first move, though, so, more realistically, we'd be starting at certain points, at the 15th or 20th move, more likely. Analyzing specific variations.

This, of course, wouldn't be "solving" chess, in a definitive tablebase manner, but it could help put a dent in things, until the whole "more positions than atoms in the universe" issue is resolved. At the very least, it might give us some good insights along the way.

MARattigan
blueemu wrote:
MARattigan wrote:

@btickler

So what exactly is wrong with it?

 

Are you serious? Even the most complex QR code (version 40, 177 x 177) can store less than 3000 bytes of data. That falls short by... how much?... a factor of 5 x 10^114 or so?

Reality check: That's 5,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times too small.

Yes. Bit of a red face job really.

I was aiming for 

𝓲 = (𝓽𝓼 + 𝓬 + 𝓮 + 𝓶)𝓾

where

𝓽 = number of bits to represent type and colour of piece or empty = 4

𝓼 = number of squares = 64

𝓬 = number of bits to represent castling rights = 4

𝓮 = number of bits to represent e.p. possibility and file = 4

𝓶 = number of bits to represent max possible moves in any legal position = 9

𝓾 = upper bound on legal chess positions = 10⁴⁷

This gives 𝓲 as an upper bound on the number of bits required to hold a 32 man tablebase with the depth metric replaced by a single optimal move.
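Plugging the figures above into the formula gives the bound directly; a minimal sketch (variable names mirror the post's symbols):

```python
# Upper bound on bits for a 32-man tablebase storing, per position,
# the board description plus a single optimal move.
t = 4        # bits for piece type and colour (or empty)
s = 64       # number of squares
c = 4        # bits for castling rights
e = 4        # bits for e.p. possibility and file
m = 9        # bits for max possible moves in any legal position
u = 10**47   # upper bound on legal chess positions

i = (t * s + c + e + m) * u
print(i)       # 273 * 10**47, i.e. about 2.73e49 bits
print(i // 8)  # roughly 3.4e48 bytes
```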

The back of my envelope said that if in Lola's scheme the sides of the squares were reduced to 1/1000 inch and, instead of black and white, a full 2²⁴ RGB colour range were used (which I think would have been possible in the 1960s) 𝓲 different arrangements would be accommodated.

But I wholly forgot, and it vexes me much, that the number of arrangements required to store a tablebase is not 𝓲 but 2^𝓲, so I finished up a factor of around 2^𝓲 adrift, as you pointed out.
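The gap between 𝓲 states and the 2^𝓲 arrangements that 𝓲 bits can distinguish is easy to underestimate; a sketch of its scale:

```python
import math

# i bits distinguish 2**i arrangements; conflating i with 2**i is the
# slip described above.  2**i is far too large to compute outright, but
# the number of decimal digits in 2**i is about i * log10(2).
i = (4 * 64 + 4 + 4 + 9) * 10**47   # the bit bound from the post
digits = i * math.log10(2)
print(f"2**i has about {digits:.2e} decimal digits")  # ~8.2e48 digits
```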

On the other hand Lola's post did make clear that what is important is not the number of atoms used to store a solution, but the number of arrangements, spatial, kinetic and physical, of those atoms. In classical mechanics at least, a single atom could encode all the 2-32 man tablebases.

Unfortunately I repeated my mistake in the figures I mentioned in the latter case, so my "possibly plausible future technology" becomes distinctly less plausible.

MARattigan
hoodoothere wrote:
MARattigan wrote:
hoodoothere wrote:

I think that's Shannon's estimate based on an average 40 move game between players. Doesn't come remotely close to the number of possible games. In fact if chess is a win that could be less than the number of subgames in the tree of moves for one winning line.

????? 10 to the 120 power is an astronomical number considering there are only 10 to 82 power atoms in the known universe on the high end: https://www.universetoday.com/36302/atoms-in-the-universe/ 

But it's small compared with the number of possible chess games, especially since OP says in post #2 that we're considering only games under basic rules. The FIDE basic rules no longer contain a 50 move or triple repetition rule, so there are ℵ₀ legal games.

StinkingHyena
I believe the original poster was looking for 100% provable, so tablebases, quantum computers etc. may miss the mark. I believe one of the qualities of a proof is that it is verifiable or checkable (if that’s a word, pun intended). How would you verify a tablebase or any other solution that had some massive number of variations?
hoodoothere
StinkingHyena wrote:
I believe the original poster was looking for 100% provable, so tablebases, quantum computers etc. may miss the mark. I believe one of the qualities of a proof is that it is verifiable or checkable (if that’s a word, pun intended). How would you verify a tablebase or any other solution that had some massive number of variations?

Look back. Discussion and opinions are fine. If anybody had the proof it would be published and they would be famous.

DiogenesDue

I only need backup and "proof" of things being claimed as true.  Not speculations.

In other words:

Fine:

"Chess is XYZ, and here is evidence for X, a graph of Y, and a link to an article on Z."

or 

"Chess might be XYZ, in my opinion."

Not fine:

"Chess is XYZ, there's a mountain of evidence and everyone that understands chess agrees.  No, I won't link, summarize, or even provide a high level outline of the mountain of evidence..."

MARattigan
blueemu wrote:

Opinion:

Even if you could record each node on a separate subatomic particle, there aren't enough particles in the entire observable universe (something under 10^80) to keep track of the nodes in the game-tree (around 10^120 if you only consider the first 40 moves, with no promotions allowed).

Does that mean you don't like my idea that a single atom could encode the 32 man tablebases?

blueemu
MARattigan wrote:

Does that mean you don't like my idea that a single atom could encode the 32 man tablebases?

Several of the parameters that you mention are NOT independent. Position, velocity, momentum, acceleration etc. are all inter-connected, and cannot be varied arbitrarily. Other parameters (e.g. spin) are subject to selection rules and again cannot be independently adjusted. At the atomic scale, the wave-functions of (for example) the electrons in a silicon atom overlap, so you cannot use each electron to represent something different... they can interchange their roles in a tiny fraction of a second.

I doubt that the idea would work (even in theory) in a continuum universe. It certainly wouldn't work in a quantized universe.

MARattigan
Deranged wrote:

... Imagine if you were writing a mathematical proof and you had to prove every little detail along the way. Prove Pythagoras' Theorem again. Prove that 1 + 1 = 2. It would go on forever. ...

Not true. Russell manages the second one in only 756 pages of Principia Mathematica (if you ignore the prologue stuff).

MARattigan
blueemu wrote:
MARattigan wrote:

Does that mean you don't like my idea that a single atom could encode the 32 man tablebases?

Several of the parameters that you mention are NOT independent. Position, velocity, momentum, acceleration etc. are all inter-connected, and cannot be varied arbitrarily. Other parameters (e.g. spin) are subject to selection rules and again cannot be independently adjusted. At the atomic scale, the wave-functions of (for example) the electrons in a silicon atom overlap, so you cannot use each electron to represent something different... they can interchange their roles in a tiny fraction of a second.

I doubt that the idea would work (even in theory) in a continuum universe. It certainly wouldn't work in a quantized universe.

At any given instant, in classical terms, position, velocity and acceleration are independent, as are the other parameters I mentioned.  At that instant the atom would encode the required information. Of course, if you allowed yourself the luxury of 10 atoms, that would give you ten times as many independent variables.

As I said, I don't understand quantum mechanics. How many states could be encoded in that case?  It seems intuitively strange that one would need more atoms than there are in the universe to achieve the same result (but of course quantum theory is a bit strange).

blueemu

It's a large topic. I'll limit myself to a few remarks and examples:

In a quantized universe, position and velocity (technically, position and momentum) are mutually exclusive properties. You cannot determine both to arbitrary precision, and the more accurately you determine one of them, the larger the uncertainty you introduce in the other.
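To put rough numbers on that trade-off, a sketch using the standard relation Δx·Δp ≥ ħ/2 (the 1 Å localization scale is an assumption picked for illustration):

```python
# Minimum momentum and velocity uncertainty forced on an electron
# localized to roughly atomic scale, via the relation dx * dp >= hbar/2.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837e-31       # electron mass, kg
dx = 1e-10                # position uncertainty, m (~1 angstrom)

dp = hbar / (2 * dx)      # minimum momentum uncertainty, kg*m/s
dv = dp / m_e             # corresponding velocity uncertainty, m/s
print(f"dp >= {dp:.2e} kg*m/s")   # ~5.3e-25
print(f"dv >= {dv:.2e} m/s")      # ~5.8e5, about 0.2% of light speed
```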

Look at it as a thought experiment. Suppose you had an electron, and you wanted to determine its position as accurately as you could. The obvious way to find out where it is, is to look at it. But this involves bouncing light off the electron. 

Now, light comes in packets called photons. And photons have momentum. When you bounce a photon off that electron, some of the momentum will be transferred, and the electron will recoil... so whatever its position was before you looked at it, that position information will no longer be a valid description of your electron. In effect, just by observing it we have introduced random changes.

And it's actually worse than that. Ideally, you would want to use the weakest photon that you can, to disturb the electron as little as possible. But the energy of a photon is directly proportional to its frequency (and... more importantly for our current purposes... inversely proportional to its wavelength). A weak photon will be very "mushy" and spread out. Photons in the radio spectrum can be tens of kilometers across! 

It won't help us encode information to know "where our electron is" to within the nearest ten or twenty kilometers! We will need precision on the atomic or sub-atomic scale. And that means using much more compact (and far higher energy) photons... hard gamma rays instead of radio waves.
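The energy-wavelength trade-off here is just E = hc/λ; a quick sketch comparing the two regimes (the specific wavelengths are illustrative assumptions):

```python
# Photon energy E = h*c / wavelength, for a long radio photon vs. one
# short enough to resolve atomic scale.
h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # joules per electron-volt

for name, wavelength in [("radio, 10 km", 1e4),
                         ("atomic scale, 1 angstrom", 1e-10)]:
    E = h * c / wavelength
    print(f"{name}: {E:.2e} J = {E / eV:.2e} eV")
# The atomic-scale photon carries ~1e14 times the energy (~12 keV, hard
# X-ray territory; sub-atomic precision pushes on into gamma rays).
```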

... and when a hard gamma ray smashes into an electron, it will not only rip it completely out of the atom (and scatter most of the other electrons as well), it very well might transform it into a different particle entirely... a muon perhaps.

This is by no means the only problem with the possibility (or perhaps, impossibility) that you put forward. Energy and Time are another pair of incompatible properties, in the sense that the more accurately one is determined, the more uncertain the other one becomes. And this is a rock that tears another hole in the bottom of your boat.

MARattigan

@blueemu

I've read as much before.

I was really only talking about atoms. Are you saying that no information at all can be encoded in the physical states of an atom? I would expect a reduction in the potentially readable values (from ℵ₁ to something finite, say), but does it really disappear altogether?

hoodoothere

I thought in quantum mechanics position was irrelevant and atomic structures exist only as a cloud of probability? But it's been a long time since I studied it; not my field.

blueemu

@MARattigan

It depends on what you mean by "encoded". A single "read" operation randomizes the information. In what sense can the information be said to be "encoded" if you can't access it without replacing it with gibberish?

Even assuming that you can overcome that difficulty, each atom could only store one "bit", no? Because the write operation for the second bit would randomize the encoded information for the first bit.

One problem is that only four fundamental forces are known, so you cannot use "write" methods that are transparent to all your previous operations. In theory, you could use photons to write the first bit into the atom, gravitons (if indeed they exist) to write the second bit... and then you are stuck, since the bosons responsible for mediating the electroweak and strong nuclear forces are also charge carriers (and therefore, stir up virtual photons as they move). Your attempt to write the third bit will erase the first one.

All this quite overlooks the energy scale involved. As I mentioned, in order to access the atomic or subatomic scales you will need to use hard gamma rays; photons so energetic that they will spontaneously trigger the creation of matter/anti-matter pairs. Which will do a wonderful job of randomizing your stored information.

This doesn't look like the sort of problem that can be solved by advances in technology. It looks like the sort of problem that is encoded right into the laws of nature... Planck's constant, the Heisenberg principle, complementarity, etc.

blueemu

@btickler : Are we still on topic?