...
I did react, only for an instant, when you called me a random person the other day
But did he call you a truly random person?
No, that was the real source of my chagrin. He implied I'm pseudo-random.
I am sure I can appear patronising, supercilious and various other things, but really all I am trying to do is draw people's attention to knowledge that exists (and which I have no personal credit for!) in a neutral way.
Like @llama36, I am not emotional, unless enthusiasm for knowledge counts.
You have never appeared as any of those to me. In fact, I've been impressed that you seem to have gone out of your way not to insult me when I've said things that were clearly wrong.
It's always wrong to insult anyone, no matter how wrong they may be. Divide and rule!
More seriously, considering the infinite sequence of coin tosses, the first thing to accept is that it's not real or physical. It's impossible. But it is intuitively a meaningful thing. It might also be a useful thing (like infinite decimal numbers are).
Another useful point is that, once considered as an abstract object, time disappears. You can accurately conceive of an infinite number of coin tosses all taking place simultaneously, merely labelled by the natural numbers (this is how infinite stochastic processes are defined).
Given that, the intuition about the sequence never completing can be discarded. Just as Zeno's paradox doesn't help you understand real-valued functions, that intuition doesn't help you understand the Bernoulli process.
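The time-free picture described above can be illustrated with a short sketch (my own illustration, not from the thread): each natural number is mapped to its own toss, so no toss is "before" any other.

```python
import random

def toss(n: int) -> str:
    """The n-th toss of a fixed (pseudo-random) sequence indexed by the naturals.

    Seeding a fresh PRNG with n makes every toss independently addressable:
    there is no notion of earlier or later tosses, only labels.
    """
    return "H" if random.Random(n).random() < 0.5 else "T"

# Any toss can be inspected in any order, without "performing" the ones
# before it.
print(toss(10**9), toss(0), toss(42))
```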
Perhaps it has a different meaning from what is obviously a nominal probability pattern to mathematicians.
It would be meaningless to suggest that one could perform an infinite number of infinitely long series of coin flips. So it would be equally meaningless to ask which of us would be right about such a series of series: your expectation (and that of the mathematics dept.) would be about one series with zero heads and maybe one with zero tails, whereas I think that neither could possibly occur. You see, there's no practical difference to speak of. Nothing that could make a difference to probability calculations in practice.
But I'm right and they're wrong.
Please try to understand the next paragraph. I have posted similar things several times and you have been too lazy to get the point. This has consequences.
If you are unfamiliar with the idea of events being possible and having probability zero, I can assure you this is ubiquitous for a large class of random variables that are widely important in applications. As I have pointed out, one example is when you have typical real-valued probability distributions like the normal distribution (surely you are familiar with this?). With such a random variable (which outputs a single real value, with the values being more likely to be in the "thick" centre of the distribution and less likely to be in the thinner tails), the probability of any specific result is zero. Of course, it makes no sense to think of all results as being impossible - the whole idea is that you always get some result! Every individual result is possible, and every individual result has probability zero. Ask literally anyone with a reasonable amount of statistical knowledge.
Infinite objects are unintuitive until you get some familiarity working with them. Even ones that are very familiar like the real numbers.
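A numerical way to see the probability-zero point, as a stdlib-only sketch (the normal CDF here is built from `math.erf`): as the interval around a single value shrinks, the probability shrinks toward zero, even though every interval, and every point, remains possible.

```python
import math

def normal_cdf(x: float) -> float:
    # Standard-normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Probability of landing within eps of the single value x = 1:
for eps in (0.1, 0.001, 0.00001):
    p = normal_cdf(1 + eps) - normal_cdf(1 - eps)
    print(f"eps = {eps}: P = {p:.8f}")
```

In the limit, the "interval" is a single point and the probability is exactly zero, yet some point always occurs.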
in ppl years it like 105.
Oh, if time is the problem, there may be tricks to speed it up. Everything in the computer is 1s and 0s, so it works with them very quickly. Just pretend 1s are heads and 0s are tails (or the other way around).
Then you can use simple operations like AND and bit shifting, which computers do very quickly.
For example, you can XOR two bits: it returns 0 if they're the same and 1 if they're different. (AND doesn't quite work for this, since 0 AND 0 also outputs zero.) Then you shift to the next bit and do it again.
Then count: every 1 bit increments a counter, so the counter ends up as the number of heads. The difference between the counter and half your starting number is the "extra". Add the "extra" to half the starting number for heads, and subtract it from half the starting number for tails (or vice versa).
I've never tried this with python, but I assume it's possible. You can also start experimenting with C++ for example. Once you can do one language it's pretty easy to try some of the same things in a new one. Python tends to be slower which is why I mention it.
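Here's a sketch of the bits-are-flips idea in Python (my own sketch, not from the poster): `random.getrandbits` produces a million flips in one call, and counting the 1 bits tallies the heads.

```python
import random

def flip_count(n: int) -> tuple[int, int]:
    """Simulate n fair coin flips at once: 1 = heads, 0 = tails."""
    bits = random.getrandbits(n)      # n random bits in a single call
    heads = bin(bits).count("1")      # tally the 1s (or int.bit_count() on 3.10+)
    return heads, n - heads

n = 1_000_000
heads, tails = flip_count(n)
print(heads, tails, heads - n // 2)   # last value is the "extra" heads
```

This avoids a million separate calls into the random module, which is where most of the time goes in a naive loop.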
So....in a nutshell, any finite number divided by infinity is zero......correct? Isn't that what this is saying?
I understand that where there are infinite possibilities, each has a probability of zero. Another way to express it would be "infinitesimal". If one of those possibilities occurs, then it's possible with a nominal probability of zero.
I've always understood that. It does not mean that particular possibility will occur. It means it can occur, provided conditions allow it. I believe I've just spotted the mistake. It's an understandable mistake and it took me two seconds to see it.
The key is "provided conditions allow it". In atomic physics, elementary particles appear to display all types of behaviour that we can conceive of. They have spins, charges, they appear and disappear etc. That's because we live in a three-dimensional, temporal environment, which allows us to envisage them. Particles display behaviours that we understand and they more or less display all that we can imagine to be possible and more besides. Living cells evolved from non-living matter, because what other mechanism is possible? Some people get uptight when I say that living cells evolved from non-living matter but I have no problem with it, because I see evolution in its broadest possible sense.
We have to ask ourselves whether the conditions of an infinite series of coin tosses permit no heads. You see, the idea of infinity is working opposite to normal. Infinite time allows all possible elemental combinations and makes the development of living matter a certainty. However, the argument is that it is nominally possible for there to be no heads, yet the effect of an infinite series makes that impossible. It has to be assumed to be impossible, but then we start to get problems. For instance, is zero heads impossible but one in infinity possible? Where is the magic boundary that makes some proportions possible and others not?
I'm sure this is the magical problem, that is perceived to prevent my way of looking at it being seen to be true.
In my opinion, all that is necessary is a completely new branch of maths that will enable exploration of a possible asymptotic convergence. Why can't we have infinity plus one tails and negative one heads? Precisely. The convergence isn't at zero. Why isn't zero heads an option? You're fond of definitions. We could define infinity to be such that it's an ideal, if we wished, rather than pretending it's an uncountably big number. The point is, it isn't a number. Can I blame generations of mathematicians for not understanding that? No.
Would I expect more philosophers to understand me than mathematicians? Yes.
But also, summing infinitely many of those zeros together = 1
One way to think of it is: the probability shrinks proportionally to 1/n while the number of outcomes grows proportionally to n. Since the sum of probabilities started as 1, and the proportions cancel out, the sum remains 1.
At least that's my way of making sense of it. I'm sure there are formal ways to do it.
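The cancellation can be made exact with rationals; here's a small stdlib sketch: each of n equally likely outcomes has probability 1/n, which heads to zero as n grows, yet n of them always sum to exactly 1.

```python
from fractions import Fraction

for n in (10, 1_000, 1_000_000):
    p = Fraction(1, n)          # probability of any single outcome
    print(n, float(p), p * n)   # p shrinks toward 0, but n * p is exactly 1
```

(For genuinely infinite, continuous cases the bookkeeping is done with integrals rather than sums, but the intuition is the same.)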
But also, summing infinitely many of those zeros together = 1
Infinity multiplied by zero is not 1.
That's because infinity x 0 = 0 x infinity.
The meaning is that we have no infinities.
Do we have a stick of rhubarb? It wasn't mentioned so we don't. Infinity was mentioned but it was specified that we have none of them.
So we don't have much.
The only way round what I just explained is to invent a new branch of maths where
a x b does not equal b x a.
Now, I do know that is a real thing. But does it apply? Maybe it might **where conditions allow it**.
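Noncommutative multiplication is indeed a real thing; matrix multiplication is the standard example. A quick sketch in pure Python with 2x2 matrices:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(matmul(A, B))   # [[2, 1], [1, 1]]
print(matmul(B, A))   # [[1, 1], [1, 2]]  -- so A x B != B x A
```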
More formally, the integral of a PDF from -inf to inf = 1.
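That identity can be checked numerically; here's a sketch for the standard normal (a plain Riemann sum, stdlib only):

```python
import math

def normal_pdf(x: float) -> float:
    # Standard-normal probability density function.
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# Riemann sum over [-10, 10]; the tails beyond that are negligible.
step = 0.001
total = step * sum(normal_pdf(-10.0 + i * step) for i in range(20_000))
print(round(total, 6))   # approximately 1.0
```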
@4206
the probability of any specific result is zero
Question: When you are talking about an infinite series, what is meant by a "specific result"?
Also, don't get me wrong. I've thought that infinity x zero = 1 for many years, because I did have formal maths training. But it also = zero, and that's partly the point of my arguments here.
PDF? Please don't assume I know what that means.
PDF is probability density function.
When you take the integral of the PDF, you get the CDF or cumulative distribution function.
If a distribution is continuous, then there are infinitely many values it can take.
When you have a CDF equation, and you plug in a number, it will tell you the probability of getting that number or less. For example plugging in 5, it will tell you the probability of getting 5 or less. For the exact value of 5 (or any exact value), it will tell you the probability is zero.
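A sketch of that in code (stdlib only; the CDF comes from `math.erf`, and the mean of 5 and sd of 2 are just made-up example parameters):

```python
import math

def normal_cdf(x: float, mean: float = 0.0, sd: float = 1.0) -> float:
    """P(X <= x) for a normal random variable X."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

# "Probability of getting 5 or less" (here 5 is the mean, so it's 0.5):
print(normal_cdf(5, mean=5, sd=2))                     # 0.5
# Probability of landing in an interval: F(b) - F(a)
print(normal_cdf(6, 5, 2) - normal_cdf(4, 5, 2))
# Probability of exactly 5: F(5) - F(5) = 0
print(normal_cdf(5, 5, 2) - normal_cdf(5, 5, 2))       # 0.0
```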
For example if the experiment is to flip a coin 2 times, there are 4 individual results:
HH
HT
TH
TT
You could ask the question "what is the probability of getting at least one H?"
or
"What is the probability of getting the result HT?"
When there are infinitely many results, it's not useful to ask questions like the 2nd (an exact result). You ask questions like the first one (where the answer is a collection of results, in this case 3 have at least one H, so the probability is 0.75).
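The two-flip example can be enumerated directly; a quick sketch:

```python
from itertools import product

# All individual results of flipping a coin twice.
outcomes = ["".join(flips) for flips in product("HT", repeat=2)]
print(outcomes)                                    # ['HH', 'HT', 'TH', 'TT']

# "At least one H" is a collection of results:
at_least_one_h = [o for o in outcomes if "H" in o]
print(len(at_least_one_h) / len(outcomes))         # 0.75

# An exact individual result:
print(sum(o == "HT" for o in outcomes) / len(outcomes))   # 0.25
```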
Python is slow for such applications... it's an interpreted language, not compiled. Some Python developers will claim it is compiled *and* interpreted, because Python compiles source to bytecode, but that bytecode is then interpreted by the Python virtual machine each time the program runs, so it is an interpreted language, and the interpreting step is wasted cycles vs. a compiled executable.
Compiled executables are themselves somewhat inefficient, but very few people write Assembly Language code anymore, so a good compiler is about as fast as you are going to get.
The app itself is simple. You could write a coin flip app in a batch file, but that would also be incredibly slow.
The reason I asked the question is that Elroch made reference to that in Quote 4206, so I was actually asking Elroch.
I apologize, I didn't check the number.
hilarious... my computer blew up this morning! I was doing a trillion coin flips w/ a built-in parity check and it died! No smoke or anything like that; fortunately a shutdown-restart saved it from its own funeral. In ppl years it's like 105.
Maybe do it in batches. You could do 1 billion, but do it 1000 times. Have the computer record the results each time and start over.
As a check: for 1 trillion flips, on average you'll have about 800,000 more of H or T, putting the majority side at about 50.00004%.
Earlier in the topic Elroch noted the number of flips and the square of the number of "extra" flips have a linear relationship.
From my graph I can tell you the relationship is about 0.6366 (that's 2/pi).
So for example, for X flips: multiply X by 0.6366, then take the square root, and that's the predicted (or rather "average") number of extras. The simulation won't match that exactly, and it might be half that or a quarter of that, but it's a useful sanity check: if you're off by a factor of 10, you might want to double-check things.
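Putting that sanity check into code, a sketch (0.6366 is the 2/pi factor in the average |heads - tails| formula sqrt(2n/pi)):

```python
import math
import random

def predicted_excess(n: int) -> float:
    # Average |heads - tails| for n fair flips: sqrt(0.6366 * n), i.e. sqrt(2n/pi).
    return math.sqrt(0.6366 * n)

def simulated_excess(n: int) -> int:
    heads = bin(random.getrandbits(n)).count("1")
    return abs(heads - (n - heads))

n = 1_000_000
print(round(predicted_excess(n)))   # about 798 for a million flips
for _ in range(3):
    print(simulated_excess(n))      # same order of magnitude, varies run to run
```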