
Does chess yield the largest meaningful finite number?

DumbLove

How is chess meaningful in the sense of mathematics?

It isn't.

Sure, it's all relative.

But you tell me: how does this even matter?

If I gave you the answer, would you instantly become a better chess player? No, you wouldn't.

Maybe over time, but not right away.

Go train. Get better.

johnyoudell

no

watcha

I don't compare Go to CAPTCHAs. I point out that the human mind has hard-coded abilities (like image recognition) at which it is still better than computers. Go is like a mosaic picture, and making sense of it on larger boards, where simple tree search gets too costly for computers, may be easier for humans.

markxue
what about this
bbeanzonpizza
jaaas wrote:
sloughterchess wrote:

If I liked to torture computers, I would require them to identify every legal position in chess and all its variants! I suspect that this number is at least 1×10^1000.

"Chess and all it's variants" is vastly imprecise. For instance, you could come up with a variant featuring a board of infinite size.

Otherwise, if you want some mindbendingly large numbers that somehow relate to the physical world, you might calculate the rough number of Planck cubes within the Universe, then the rough number of Planck frames it has existed for, and then do some permutations involving those numbers (like: how many possible results can you get when ordering all Planck frames in all possible ways, while in any Planck frame all the Planck cubes could be ordered in any way possible?). Good luck.

I am aware this is 7 years old, and no one cares, but I had nothing to do today and decided to have some fun. I changed the problem a little bit.

First things first, let's see how big and old the universe is today in Planck units:

Rc = current radius: 4.4e26 m  = 2.7e61 Planck lengths

Tc = current age: 13.8e9 years = 8.1e60 Planck times

I'm going to assume a spherical universe. This isn't a good approximation, but it helps immensely later.

Because of this assumption: 

V = volume of the universe = (4 Pi/3) * radius^3 ~ 8.5e184 Planck volumes (or cubes)

Now let's ask this question (instead of just ordering all of the Planck cubes once, I decided to add a spicier step): how many overall organization schemes are there, if at each time step we choose any ordering of the Planck cubes?

This is rather trivial if the number of cubes stays the same throughout time: the total number of organization schemes is (#cubes!)^(#time steps). Using the Stirling approximation, that is roughly (10^(#cubes * log10(#cubes) - #cubes/ln(10)))^(#time steps) ~ (10^(1e187 - 1e185))^(1e61) (all rounded to the nearest power of 10) ~ 10^(1e248), so much larger than the number of possible chess positions.
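To put a number on that with a few lines of Python (a quick sketch; Stirling is applied by hand, since the factorial itself is hopeless to compute directly):

```python
import math

N_CUBES = 8.5e184   # Planck volumes in the current universe (from above)
N_STEPS = 8.1e60    # Planck times the universe has existed

# Stirling in base 10: log10(N!) ~ N*log10(N) - N/ln(10)
log10_factorial = N_CUBES * math.log10(N_CUBES) - N_CUBES / math.log(10)

# (N!)^(N_STEPS): raising to a power multiplies the log
log10_total = log10_factorial * N_STEPS
print(f"log10(#schemes) ~ {log10_total:.1e}")   # ~1.3e248, i.e. ~10^(1e248)
```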

 

This isn't physical, though: the volume of the universe is constantly expanding, and I thought it would be fun to calculate the possible number of configurations of all cubes throughout the universe's evolutionary history.

 

I have no idea how the universe has actually expanded, but I am going to assume it expanded with a constant outward velocity, which is probably terribly wrong.

Therefore, the radius of the universe as a function of time looks like (setting r(0) = 0): 

r(t) = A*t 

with

A = Rc/Tc ~ 3.4 Planck Lengths/Planck Time

Fun fact: this also means the universal expansion is on average (since I assumed a constant rate) 3.4 times the speed of light (yes, expansion can be FTL; here's some reading on that).

Since we assume spherical universe geometry, the volume of the universe at time t, V(t), can be expressed as:

V(t) = (4 Pi/3) * r(t)^3 = (4 Pi/3) (A t)^3

We define the number of orderings at each time as L(t):

L(t) = V(t)!

(If you're having trouble seeing this: there are V(t) options for the first cube in the order, then V(t)-1 for the second, and so on. Multiplying all these options together gives the total number of orderings at timestep t, which is just V(t) * (V(t)-1) * ... * 2 * 1 = V(t)! = L(t).)

Now, the total number of unique evolutions H for the universe (the number of unique sequences of orderings when moving through timeframes t = 1, 2, 3, ..., Tc) is:

H := Product[L(t), {t, 1, Tc}]

This can be recast as

Log[H] = Sum[Log[L(t)], {t, 1, Tc}]

It is infeasible to compute this sum all the way to Tc, since that would involve summing 8e60 terms. Instead, we must find an approximating function for L(t) or Log[L(t)] that we can explicitly integrate. Working with Log[L(t)] will be easier, since the actual value of L(t) would in all likelihood cause overflow errors.

A commonly used approximation from statistical mechanics helps here. The Stirling approximation states that for sufficiently large n,

Log[n!] ~ n Log[n] - n

Even nicer, the Stirling approximation applies just as well when n is a polynomial function of t.

Therefore Log[L(t)] = Log[V(t)!] ~ V(t) Log[V(t)] - V(t) =: Lapprox(t)

and the approximate number of unique evolutions can be expressed as 

Happrox(T) = Exp[Integrate[Lapprox(t), {t, 0, T}]]

With some moderate algebra and simple calculus, this integral can be evaluated exactly:

Happrox(T) ~ Exp[(Pi A^3 / 12) T^4 (-7 + 4 Log[(4/3) Pi A^3 T^3])]
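A quick numerical check of the closed form against a brute-force sum (a Python sketch; math.lgamma gives the exact Log[V(t)!], and the exact figures shift a few percent with how A is rounded):

```python
import math

A = 2.7e61 / 8.1e60          # Rc/Tc (quoted as ~3.4 above)
C = 4 * math.pi / 3 * A**3   # so V(t) = C * t^3

def log_H_integral(T):
    """Closed-form integral of Lapprox(t) from 0 to T (formula above)."""
    return (math.pi * A**3 / 12) * T**4 * (-7 + 4 * math.log(C * T**3))

T = 2000
exact = sum(math.lgamma(C * t**3 + 1) for t in range(1, T + 1))  # true Log-sum
approx = log_H_integral(T)
print(f"sum ~ {exact:.3e}, integral ~ {approx:.3e}, "
      f"rel. error ~ {abs(exact - approx) / exact:.2%}")  # both ~1.6e16, ~0.1% apart
```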

However, our approximations aren't perfect. I plotted the relative error of this approximation against the true sum value for the first few thousand time steps.

It's pretty clear that the approximation gets better after a few thousand steps (<0.1% error in the estimate of Log[H] after 2000 steps). Important aside: the actual error in H will be much larger than 0.1%, since that is only the error in the estimate of Log[H]. However, with numbers this large we only care about order of magnitude, and the log error tells us how far off in magnitude we are; 0.1% error in magnitude sounds acceptable to me. So let's improve our estimate of the number of evolutionary paths by summing the log of the number of organization schemes exactly for the first 2000 steps (call this Log[H0]), and then using the integral approximation Happrox(t) after that.

Log[HImproved(T)] = Log[H0] + Log[Happrox(T)] - Log[Happrox(2000)]

Log[H0] = Sum[Log[L(t)], {t, 0, 2000}] ~ 1.681e16

Log[Happrox(2000)] ~ 1.680e16

Now let's view the relative error of our improved approximation versus the true value:

The error approaches 0 (though slowly) and is at most ~0.05%, so I am happy with these approximations. Using this, we get an estimate for Log[H]:

Log[H] ~ Log[Happrox(Tc)] ~ 7.2e247

H ~ exp(7.2e247) ~ 10^(3.1e247). A little bit smaller than if we could have organized all of today's cubes at every time step in the past.

Perhaps in more understandable terms, H ~ (googolplex)^(3×10^147). Oceans larger than the number of possible chess positions, but oceans and oceans smaller than Graham's number, TREE(3), etc. I had fun doing this; please let me know if you think there are any mistakes.
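For anyone who wants to reproduce the end-to-end number, here's a compact sketch (exact lgamma sum for the first 2000 steps, the closed-form integral after that; values land within a few percent of the above, depending on the rounding of A):

```python
import math

A = 2.7e61 / 8.1e60           # Rc/Tc (quoted as ~3.4 above)
C = 4 * math.pi / 3 * A**3    # V(t) = C * t^3
Tc = 8.1e60                   # current age of the universe, Planck times

def log_H_integral(T):
    return (math.pi * A**3 / 12) * T**4 * (-7 + 4 * math.log(C * T**3))

# exact Log-sum for the first 2000 steps, integral approximation afterwards
log_H0 = sum(math.lgamma(C * t**3 + 1) for t in range(1, 2001))
log_H = log_H0 + log_H_integral(Tc) - log_H_integral(2000)

log10_H = log_H / math.log(10)
print(f"Log[H] ~ {log_H:.1e}")                         # ~7e247
print(f"H      ~ 10^({log10_H:.1e})")                  # ~10^(3e247)
print(f"       ~ googolplex^({log10_H / 1e100:.0e})")  # ~googolplex^(3e147)
```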

AunTheKnight

I think the number of possible positions is called Shannon's Number. Unless I'm mistaken, there are more possible positions than atoms in the universe, but what about protons, neutrons, quarks, and leptons?

AunTheKnight
thund3rduck wrote:
[the Planck-cube calculation above, snipped for length]

Are you a mathematics/physics professor?

bbeanzonpizza
AunTheKnight wrote:
[the Planck-cube calculation above, snipped for length]

Are you a mathematics/physics professor?

I am a grad student in nuclear physics but maybe one day haha

AunTheKnight

Ah. Good luck! I wish I were that smart.

bbeanzonpizza

Yes, it looks like Shannon's number is ~10^120, which is definitely more than the number of Planck volumes in the current universe. A fun thing to try: could you encode every chess position, and the best response move, if you turned all the universe's mass into information? This paper estimates the minimum mass-energy to store a bit of information at ~3.4e-36 kg (as E/c^2). The mass-energy of ordinary matter in the observable universe is ~1.5e53 kg, or 4.4e88 minimum bit energies.

The minimum encoding for a chess game (including whose move it is, castling rights, en passant) is probably 188 bits. However, it would also be nice to encode the best move for whoever is to play, which requires an additional (3+6) bits (which piece type, and which square it's on) + 6 bits (the square it lands on) = 15 bits, so each position needs 188 + 15 = 203 bits. This corresponds to a minimum energy of ~7.0e-34 kg (as E/c^2), or about 390 eV.

Therefore the maximum number of chess positions with best play indicated that can be stored in the universe is ~2.1e86 positions.

If dark matter and dark energy are also included in the massive chess memory bank, about 5.7e87 positions can be stored. This is still ~10^-32 of Shannon's estimate, which is also a lower bound on the number of chess positions. Such a complex game.
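The arithmetic, as a quick sketch (the per-bit mass is the paper's figure quoted above; the ~5% ordinary-matter fraction is my assumption, so the dark-sector number lands near, not exactly on, the 5.7e87 above):

```python
BIT_MASS = 3.4e-36            # kg of mass-energy per stored bit (quoted above)
ORDINARY_MATTER = 1.5e53      # kg, ordinary matter in the observable universe
BITS_PER_POSITION = 188 + 15  # position encoding + best-move encoding

positions = ORDINARY_MATTER / BIT_MASS / BITS_PER_POSITION
print(f"ordinary matter only: {positions:.1e} positions")  # ~2.2e86

# including the dark sector, assuming ordinary matter is ~5% of the total
total = ORDINARY_MATTER / 0.05
positions_dark = total / BIT_MASS / BITS_PER_POSITION
print(f"with dark matter/energy: {positions_dark:.1e} positions")  # ~4e87
```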

bbeanzonpizza
thund3rduck wrote:

[the storage estimate above, snipped for length]

Actually, this is all wrong, sorry about that. I just realized that Shannon's number is an estimate of the number of distinct games, not the number of positions. The number of legal chess positions has an upper bound of ~10^47; at 203 bits per position, the minimum bit-storage mass comes to roughly 7e13 kg, a spherical asteroid a few kilometers in diameter.
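As a sketch of that last step (the asteroid density is an assumed illustrative value):

```python
import math

BIT_MASS = 3.4e-36        # kg per stored bit (quoted above)
POSITIONS = 1e47          # upper bound on legal positions
BITS_PER_POSITION = 203

mass = POSITIONS * BITS_PER_POSITION * BIT_MASS           # ~6.9e13 kg
density = 2000.0                                          # kg/m^3 (assumed)
radius = (3 * mass / (4 * math.pi * density)) ** (1 / 3)  # solid sphere
print(f"mass ~ {mass:.1e} kg, diameter ~ {2 * radius / 1e3:.1f} km")  # ~4 km
```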

AunTheKnight

Ooh. So the Shannon Number is the number of possible games, and the number of possible legal positions is different?

bbeanzonpizza
AunTheKnight wrote:

Ooh. So the Shannon Number is the number of possible games, and the number of possible legal positions is different?

Shannon calculated a prediction for both the number of possible games (10^120, which was a lower bound and which I believe has since been increased) and the number of possible legal positions (he actually predicted the number of "sensible" positions to be 10^40; not sure whether that was an upper or lower bound, but it was probably pretty off-the-cuff anyway). The current best upper bound for legal positions is 10^46.7, so Shannon may not be far off with his guess for sensible positions.

bbeanzonpizza
thund3rduck wrote:
AunTheKnight wrote:
[the Shannon's-number exchange above, snipped for length]

Also interesting to note: if pawn promotions are ignored, the upper bound for legal positions falls all the way to ~2e40, since the pawns can't do funky queen things once they promote.

DiogenesDue

188 bits would be a chess position, not a game.  Plus you need to add, at a minimum, some bits for storing the evaluation results (assuming you are trying to solve chess) and some type of indexing.

You should probably peruse this thread...it's extremely long, however.

bbeanzonpizza
AunTheKnight wrote:

I think the number of possible positions is called Shannon's Number. Unless I'm mistaken, there are more possible positions than atoms in the universe, but what about protons, neutrons, quarks, and leptons?

Interesting! Protons and neutrons are each baryons composed of three quarks. For now we'll set aside other non-baryonic matter, except for the electrons around nuclei. The number of quarks + electrons is clearly higher for larger nuclei, but larger nuclei are rare compared to H and He in the universe. We can create a weighted value of the number of quarks + electrons each element contributes by finding its universal atomic abundance P(Z) and multiplying by its number of quarks + electrons:

M(Z) = 3A(Z) + Z(m_p - m_e)/m_p ~ 3A(Z) + Z

where A(Z) is the isotope-averaged atomic mass of the element, and m_p, m_e are the proton and electron masses, respectively. Since m_p ~ 2000 m_e, the electron mass can be safely ignored.

The number of quarks + electrons for each element is scaled by its atomic abundance to give an effective number of contributions per atom in the universe (particle contribution):

As can be seen in the plot (omitted here), H and He dominate (note the log vertical scale).

 

The sum over all elements is the effective number of quarks + electrons per atom. Call this X:

X = Sum[M(Z) P(Z), {Z, 1, ~100}]

X ~ 7.6 particles/atom

So if you use every quark and electron around nuclei, you have about 7.6 times as many places to store things.
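For anyone who wants to fiddle with the weighting, here's a sketch (the abundances are rough cosmic mass fractions I've assumed, since the original table isn't shown; they land near, not exactly on, the 7.6 above):

```python
# M(Z) = 3*A + Z: three quarks per nucleon, plus Z electrons,
# weighted by rough cosmic mass-fraction abundances (assumed values)
elements = {
    # symbol: (A, Z, abundance)
    "H":  (1,   1, 0.739),
    "He": (4,   2, 0.240),
    "O":  (16,  8, 0.0104),
    "C":  (12,  6, 0.0046),
    "Ne": (20, 10, 0.0013),
    "Fe": (56, 26, 0.0011),
    "N":  (14,  7, 0.0010),
}

X = sum((3 * A + Z) * w for A, Z, w in elements.values())
print(f"X ~ {X:.1f} particles per atom")  # ~7.4 with these numbers
```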

 

bbeanzonpizza
btickler wrote:

188 bits would be a chess position, not a game.  Plus you need to add, at a minimum, some bits for storing the evaluation results (assuming you are trying to solve chess) and some type of indexing.

You should probably peruse this thread...it's extremely long, however.

Sorry for the confusion; I was focused on the positions rather than the games. The 188 bits, I believe, is enough to encode castling rights, en passant capture opportunities, and whose turn it is. I then added 15 bits for the best move (3 for piece type, 6 for the initial square, 6 for the final square), which sums to 203 bits. The color of the best move does not need to be specified, since that is contained in the 188 bits. But now I'm also seeing 240-bit minimums from others, so I could be totally overlooking something.

It would also be impossible to figure out where the next position was stored without using more space to encode where each position is located, unless you found some neat organization scheme for the data based on the position itself. It is also probably important to denote whether a capture is occurring, but I could be wrong. Again, this is the position, and the assumption is that the best move is known and encoded with the position. Of the two, having whole games mapped out does sound more pleasing, even though neither will ever be possible.

DiogenesDue

In terms of smaller particles, it's not just whether we can find smaller particles, it's whether we can store information using them (with non-destructive read/write operations).

It's a fun problem to talk about, but it's not happening in our lifetimes (nor probably our grandchildren's...), unless there's a tremendous and unforeseeable breakthrough (and quantum computing is not that breakthrough... though it might get us closer).