Tossing an unknown biased coin

Assume that there are two biased coins. Coin A is heavily biased towards heads, with the probability of heads equal to 0.9, and Coin B is heavily biased towards tails, with the probability of tails equal to 0.9. Now, we select one of the coins uniformly at random and toss it twice. Let’s call the outcomes Y_1 and Y_2. The question is: what is the mutual information between Y_1 and Y_2?

I put a question similar to the one above on a midterm, and I didn’t expect it to stump the entire class.

All students thought that the mutual information I(Y_1;Y_2)=0 because the two outcomes are independent. When we toss a coin repeatedly, the outcomes are supposed to be independent, right? Yes, but only when we know which coin we are tossing.

Think intuitively with the above example. Suppose we tossed the coin not twice but ten times, and got ten heads. What would we expect the outcome to be if we tossed it one more time?

An intelligent guess should be another head. Given the ten heads we got earlier, there is a very high chance that the picked coin is Coin A, and so the next toss is very likely to be a head as well.
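To make this intuition concrete, here is a minimal Python sketch (the numbers follow the setup above; the variable names are mine) that computes the posterior probability of holding Coin A after ten heads, and the predictive probability that the next toss is a head:

# Setup from above: Pr(X=A) = Pr(X=B) = 0.5, Pr(H|A) = 0.9, Pr(H|B) = 0.1.
p_heads = {"A": 0.9, "B": 0.1}
prior = {"A": 0.5, "B": 0.5}

# Likelihood of ten heads under each coin, then Bayes' rule.
likelihood = {c: p_heads[c] ** 10 for c in prior}
evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior_A = prior["A"] * likelihood["A"] / evidence
print(posterior_A)  # ~0.9999999997

# Predictive probability that the eleventh toss is a head.
print(posterior_A * 0.9 + (1 - posterior_A) * 0.1)  # ~0.9

The posterior is essentially 1, so the predictive probability of an eleventh head is essentially 0.9.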

Now, the same argument holds when we go back to the original setup. When the first toss is a head, the second toss is likely to be a head as well. So Y_1 and Y_2 are by no means independent.

So what is I(Y_1; Y_2)?

I(Y_1;Y_2)=H(Y_1)+H(Y_2)-H(Y_1,Y_2)=2H(Y_1)-H(Y_1,Y_2), where the second equality holds because Y_1 and Y_2 have the same marginal distribution by symmetry.

Let’s compute H(Y_1) and H(Y_1,Y_2).

Denote by X the coin that we pick.

Pr(Y_1=H)=Pr(X=A)\cdot 0.9+Pr(X=B)\cdot 0.1=0.5

So H(Y_1)=H(0.5)=1 bit.

Now, for H(Y_1,Y_2), note that

p_{Y_1,Y_2}(H,H)=Pr(X=A)(0.9\cdot 0.9) + Pr(X=B)(0.1\cdot 0.1)=0.41

p_{Y_1,Y_2}(T,T)=Pr(X=A)(0.1\cdot 0.1)+Pr(X=B)(0.9\cdot 0.9)=0.41

p_{Y_1,Y_2}(T,H)=p_{Y_1,Y_2}(H,T)=Pr(X=A)(0.1\cdot 0.9)+Pr(X=B)(0.9\cdot 0.1)=0.09

Therefore, H(Y_1,Y_2)=H([0.41,0.41,0.09,0.09])=-2(0.41\log_2 0.41)-2(0.09\log_2 0.09)\approx 1.68 bits.

So I(Y_1;Y_2)=H(Y_1)+H(Y_2)-H(Y_1,Y_2)=2-1.68=0.32 bits.
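As a sanity check, here is a short Python sketch (my own code, not part of the original derivation) that recomputes the mutual information directly from the joint pmf derived above:

from math import log2

# Joint pmf of (Y1, Y2) computed above.
joint = {("H", "H"): 0.41, ("H", "T"): 0.09,
         ("T", "H"): 0.09, ("T", "T"): 0.41}

def entropy(probs):
    # Shannon entropy in bits, skipping zero-probability outcomes.
    return -sum(p * log2(p) for p in probs if p > 0)

# Marginal pmfs of Y1 and Y2.
p_y1 = [sum(p for (a, _), p in joint.items() if a == y) for y in "HT"]
p_y2 = [sum(p for (_, b), p in joint.items() if b == y) for y in "HT"]

mi = entropy(p_y1) + entropy(p_y2) - entropy(joint.values())
print(mi)  # ~0.32 bits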

Note that this is an example where variables are conditionally independent but not independent. More precisely, we have Y_1 \not\bot Y_2 but Y_1 \bot Y_2|X.
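We can check the conditional side numerically as well. Once the coin X is known, the two tosses are independent and the conditional mutual information vanishes. A minimal sketch under the same setup (the helper names are mine):

from math import log2

def mi_bits(joint):
    # Mutual information (in bits) of a joint pmf given as {(y1, y2): p}.
    p1, p2 = {}, {}
    for (a, b), p in joint.items():
        p1[a] = p1.get(a, 0) + p
        p2[b] = p2.get(b, 0) + p
    return sum(p * log2(p / (p1[a] * p2[b]))
               for (a, b), p in joint.items() if p > 0)

def two_tosses(ph):
    # Joint pmf of two independent tosses of a coin with head probability ph.
    p = {"H": ph, "T": 1 - ph}
    return {(a, b): p[a] * p[b] for a in "HT" for b in "HT"}

# I(Y1;Y2|X) = Pr(X=A) I(Y1;Y2|X=A) + Pr(X=B) I(Y1;Y2|X=B) = 0
print(0.5 * mi_bits(two_tosses(0.9)) + 0.5 * mi_bits(two_tosses(0.1)))  # ~0.0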

A probability education trap

I wondered a little why none of my students could answer the above question. I blame a trap that is embedded in most elementary probability courses. We are always introduced to consecutive coin-tossing or dice-throwing examples in which each subsequent event is independent of the earlier ones. In those examples, we always assume that the probabilities of all outcomes are known, but this assumption is rarely emphasized. As the example above shows, even though each toss or throw is independent given the current coin or die, overall the events are not independent when the statistics behind the coin or die are unknown.

Actually, this also makes some “pseudo-science” not quite so unscientific after all. For example, we tend to believe that the gender of a newborn is close to random and hence unpredictable. But what if there is some hidden variable that affects the gender of a newborn, and that factor has a strong bias towards one gender over the other? Then it is perhaps not so unlikely for someone to have five consecutive girls or six consecutive boys. (Of course, I too believe that the probability of a newborn being a boy or a girl is very close to one half.) A less extreme example may occur at the casino. If we believe that the odds of the slot machines in a casino are not all perfectly tuned (before the digital age, that was probably much more likely), then there is a chance that some machine has a higher average reward than another. A gambler carefully selecting which machine to play is then following an essential winning strategy, not a superstition. Of course, this is just the multi-armed bandit problem.
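As a toy illustration of that last point, here is a minimal epsilon-greedy sketch of a two-machine bandit. The payout probabilities 0.55 and 0.45 are made up for illustration, and epsilon-greedy is just one simple strategy, not necessarily the best one:

import random

# Made-up payout probabilities for two machines; machine 0 is slightly better.
payout = [0.55, 0.45]
estimate = [0.0, 0.0]   # running average reward per machine
pulls = [0, 0]
epsilon = 0.1           # exploration rate
total = 0

for _ in range(10000):
    if random.random() < epsilon:
        arm = random.randrange(2)                        # explore
    else:
        arm = max(range(2), key=lambda a: estimate[a])   # exploit
    reward = 1 if random.random() < payout[arm] else 0
    pulls[arm] += 1
    # Incremental update of the average observed reward for this machine.
    estimate[arm] += (reward - estimate[arm]) / pulls[arm]
    total += reward

print(pulls, [round(e, 3) for e in estimate], total)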

Independence and conditional independence

Independence and conditional independence are among the most basic concepts in probability. Two random variables are independent if, as the term suggests, the outcome of one variable does not affect the outcome of the other. Mathematically, two variables X and Y are independent if

p(x,y) = p(x) p(y) for any outcome x of variable X and outcome y of variable Y. We often denote this independence by X \bot Y.

Let’s inspect this definition more carefully. Given Y=y, by the definition of conditional probability, the probability of X=x is

Pr(X=x|Y=y)=\frac{p(x,y)}{p(y)}=\frac{p(x)p(y)}{p(y)}=p(x). Indeed, the probability does not depend on the outcome of Y, and so X and Y are independent.

Similarly, when we say X and Y are conditionally independent given Z, it means that once Z is known, X and Y become independent. Note that X and Y do not need to be independent to start with. Many students have the misconception that independence implies conditional independence, or vice versa. In fact, the two are logically unrelated: we can easily find examples where variables are independent but not conditionally independent, and vice versa; examples where variables are both independent and conditionally independent; and, of course, cases where neither holds.

Mathematically, we denote this conditional independence by X \bot Y|Z, and we have

p(x,y|z)=p(x|z)p(y|z) for any x, y, and z.

Note that the definition above implies that p(x|y,z)=\frac{p(x,y|z)}{p(y|z)}=p(x|z). Hence, given z, the probability of X=x indeed does not depend on the outcome of Y, which is exactly the conditional independence we anticipated.
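To tie the definition back to the coin example, here is a short Python sketch (same setup as the first section; the helper names are mine) showing that Y_1 and Y_2 fail the unconditional factorization p(y_1,y_2)=p(y_1)p(y_2), while the conditional factorization given X holds:

priors = {"A": 0.5, "B": 0.5}
p_head = {"A": 0.9, "B": 0.1}

def p_cond(y, x):
    # p(y | x): outcome probability for the known coin x.
    return p_head[x] if y == "H" else 1 - p_head[x]

for y1 in "HT":
    for y2 in "HT":
        # Marginalize out the coin to get the unconditional joint and marginals.
        joint = sum(priors[x] * p_cond(y1, x) * p_cond(y2, x) for x in priors)
        m1 = sum(priors[x] * p_cond(y1, x) for x in priors)
        m2 = sum(priors[x] * p_cond(y2, x) for x in priors)
        print(y1, y2, round(joint, 2), "vs", round(m1 * m2, 2))
        # e.g. H H: 0.41 vs 0.25, so p(y1,y2) != p(y1)p(y2): not independent.

# Given X = x, however, the joint is p_cond(y1, x) * p_cond(y2, x) by the model,
# so p(y1,y2|x) = p(y1|x) p(y2|x) and Y1 is independent of Y2 given X.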

 
