Conditional Probability

In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A|B) or occasionally P_B(A). This can also be understood as the fraction of the probability of B that intersects with A: P(A|B) = P(A ∩ B) / P(B).

For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. For example, the conditional probability that someone unwell is coughing might be 75%, in which case we would have that P(Cough) = 5% and P(Cough|Sick) = 75%. Note, however, that there need not be a causal relationship or dependence between A and B, and they do not have to occur simultaneously.

P(A|B) may or may not be equal to P(A) (the unconditional probability of A). If P(A|B) = P(A), then events A and B are said to be independent: in such a case, knowledge about either event does not alter the likelihood of the other. P(A|B) (the conditional probability of A given B) typically differs from P(B|A). For example, if a person has dengue fever, they might have a 90% chance of testing positive for the disease. In this case, what is being measured is that if event B (having dengue) has occurred, the probability of A (testing positive) given that B occurred is 90%: P(A|B) = 90%. Alternatively, if a person tests positive for dengue fever, they may have only a 15% chance of actually having this rare disease due to high false positive rates. In this case, the probability of the event B (having dengue) given that the event A (testing positive) has occurred is 15%: P(B|A) = 15%. It should be apparent now that falsely equating the two probabilities can lead to various errors of reasoning, as commonly seen in base rate fallacies.
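To make the gap between the two directions concrete, here is a minimal Python sketch computing P(dengue | positive) from P(positive | dengue) via Bayes' theorem. The prevalence and false-positive rate below are illustrative assumptions, not figures from the text; only the 90% sensitivity is taken from the example above.

```python
# Illustrative sketch: why P(dengue | positive) can be far below P(positive | dengue).
# Prevalence and false-positive rate are assumed for illustration only.
p_dengue = 0.001              # assumed prevalence of dengue in the population
p_pos_given_dengue = 0.90     # sensitivity P(positive | dengue), as in the text
p_pos_given_healthy = 0.005   # assumed false-positive rate P(positive | no dengue)

# Law of total probability: P(positive)
p_pos = p_pos_given_dengue * p_dengue + p_pos_given_healthy * (1 - p_dengue)

# Bayes' theorem: P(dengue | positive)
p_dengue_given_pos = p_pos_given_dengue * p_dengue / p_pos

print(f"P(positive | dengue) = {p_pos_given_dengue:.2f}")   # 0.90
print(f"P(dengue | positive) = {p_dengue_given_pos:.2f}")   # ~0.15 under these assumptions
```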

While conditional probabilities can provide extremely useful information, limited information is often supplied or at hand. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B). Another option is to display conditional probabilities in a conditional probability table to illuminate the relationship between events.

Definition

Illustration of conditional probabilities with an Euler diagram. The unconditional probability P(A) = 0.30 + 0.10 + 0.12 = 0.52. However, the conditional probabilities are P(A|B1) = 1, P(A|B2) = 0.12 ÷ (0.12 + 0.04) = 0.75, and P(A|B3) = 0.
On a tree diagram, branch probabilities are conditional on the event associated with the parent node. (Here, the overbars indicate that the event does not occur.)
Venn Pie Chart describing conditional probabilities

Conditioning on an event

Kolmogorov definition

Given two events A and B from the sigma-field of a probability space, with the unconditional probability of B being greater than zero (i.e., P(B) > 0), the conditional probability of A given B is defined to be the quotient of the probability of the joint of events A and B, and the probability of B:

P(A|B) = P(A ∩ B) / P(B),

where P(A ∩ B) is the probability that both events A and B occur. This may be visualized as restricting the sample space to situations in which B occurs. The logic behind this equation is that if the possible outcomes for A and B are restricted to those in which B occurs, this set serves as the new sample space.

Note that the above equation is a definition—not a theoretical result. We just denote the quantity P(A ∩ B) / P(B) as P(A|B), and call it the conditional probability of A given B.
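As a concrete check of this definition on a finite sample space, the following Python sketch (the helper names are my own, not from the text) computes a conditional probability by restricting the uniform measure on a 52-card deck to the conditioning event:

```python
from fractions import Fraction
from itertools import product

# Finite sample space: a standard 52-card deck (rank, suit), all equally likely.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['clubs', 'diamonds', 'hearts', 'spades']
omega = list(product(ranks, suits))

def prob(event):
    """P(event) under the uniform measure on omega."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond_prob(a, b):
    """Kolmogorov definition: P(A | B) = P(A and B) / P(B), requiring P(B) > 0."""
    p_b = prob(b)
    assert p_b > 0, "the conditioning event must have positive probability"
    return prob(lambda w: a(w) and b(w)) / p_b

is_king = lambda w: w[0] == 'K'
is_face = lambda w: w[0] in ('J', 'Q', 'K')

print(prob(is_king))                 # 1/13
print(cond_prob(is_king, is_face))   # 1/3: restricting the sample space to the 12 face cards
```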

As an axiom of probability

Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:

P(A ∩ B) = P(A|B) P(B).

Although mathematically equivalent, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of A ∩ B, and it introduces a symmetry with the summation axiom for mutually exclusive events, P(A ∪ B) = P(A) + P(B).

As the probability of a conditional event

Conditional probability can be defined as the probability of a conditional event A_B. The Goodman–Nguyen–Van Fraassen construction defines such a conditional event in terms of repeated trials, and it can be shown that

P(A_B) = P(A ∩ B) / P(B),

which meets the Kolmogorov definition of conditional probability.

Conditioning on an event of probability zero

If P(B) = 0, then according to the definition, P(A|B) is undefined.

The case of greatest interest is that of a random variable Y, conditioned on a continuous random variable X resulting in a particular outcome x. The event {X = x} has probability zero and, as such, cannot be conditioned on.

Instead of conditioning on X being exactly x, we could condition on it being closer than distance ε away from x. The event {x − ε < X < x + ε} will generally have nonzero probability and hence can be conditioned on. We can then take the limit as ε shrinks to zero: lim(ε→0) P(A | x − ε < X < x + ε).

For example, if two continuous random variables X and Y have a joint density f(x, y), then by L'Hôpital's rule the limit of P(Y ∈ U | x − ε < X < x + ε) as ε → 0 equals the ratio ∫_U f(x, y) dy / ∫ f(x, y) dy, where the denominator integrates over all y.

The resulting limit is the conditional probability distribution of Y given X and exists when the denominator, the probability density f_X(x), is strictly positive.
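A rough numerical illustration of this limiting construction (all parameters below are assumed, not from the text): the Monte Carlo sketch estimates P(Y > 0 | x0 − ε < X < x0 + ε) for a correlated Gaussian pair and shrinking ε, and the estimates settle toward the value given by the conditional density at X = x0.

```python
import random

# Monte Carlo illustration of conditioning on a small window around X = x0.
# The correlation, sample size, and the event Y > 0 are assumed for illustration.
random.seed(0)
rho, n, x0 = 0.8, 100_000, 1.0

samples = []
for _ in range(n):
    x = random.gauss(0, 1)
    y = rho * x + (1 - rho**2) ** 0.5 * random.gauss(0, 1)  # Y is correlated with X
    samples.append((x, y))

for eps in (1.0, 0.3, 0.1):
    window = [(x, y) for x, y in samples if abs(x - x0) < eps]
    p_est = sum(1 for _, y in window if y > 0) / len(window)
    print(f"eps={eps:>4}: P(Y > 0 | |X - x0| < eps) ~ {p_est:.3f}")

# As eps shrinks, the estimates approach the conditional probability given X = x0 exactly
# (about 0.91 for this correlation and x0).
```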

It is tempting to define the undefined probability using this limit, but this cannot be done in a consistent manner. In particular, it is possible to find random variables X and W and values x, w such that the events {X = x} and {W = w} are identical but the resulting limits are not: lim(ε→0) P(A | x − ε ≤ X ≤ x + ε) ≠ lim(ε→0) P(A | w − ε ≤ W ≤ w + ε).

The Borel–Kolmogorov paradox demonstrates this with a geometrical argument.

Conditioning on a discrete random variable

Let X be a discrete random variable and its possible outcomes denoted V. For example, if X represents the value of a rolled die then V is the set {1, 2, 3, 4, 5, 6}. Let us assume for the sake of presentation that X is a discrete random variable, so that each value in V has a nonzero probability.

For a value x in V and an event A, the conditional probability is given by P(A | X = x). Writing

c(x, A) = P(A | X = x)

for short, we see that it is a function of two variables, x and A.

For a fixed A, we can form the random variable Y = c(X, A). It represents an outcome of P(A | X = x) whenever a value x of X is observed.

The conditional probability of A given X can thus be treated as a random variable Y with outcomes in the interval [0, 1]. From the law of total probability, its expected value is equal to the unconditional probability of A: E[P(A | X)] = P(A).
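For instance, taking X to be the first of two fair dice and A the event that their sum is at most 5 (the dice example worked out below), a short sketch confirms that the expectation of P(A | X) recovers P(A):

```python
from fractions import Fraction

# X is the first die; A is the event that the sum of two fair dice is at most 5.
def p_A_given_X(x):
    """P(A | X = x): the second die must show at most 5 - x."""
    favourable = max(0, min(6, 5 - x))
    return Fraction(favourable, 6)

# E[P(A | X)] over the uniform distribution of X on {1, ..., 6}.
expected = sum(Fraction(1, 6) * p_A_given_X(x) for x in range(1, 7))
print(expected)  # 5/18, i.e. 10/36 = P(A), matching the law of total probability
```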

Partial conditional probability

The partial conditional probability P(A | B1 ≡ b1, …, Bm ≡ bm) is about the probability of event A given that each of the condition events Bi has occurred to a degree bi (degree of belief, degree of experience) that might be different from 100%. Frequentistically, partial conditional probability makes sense if the conditions are tested in experiment repetitions of appropriate length n. Such n-bounded partial conditional probability can be defined as the conditionally expected average occurrence of event A in testbeds of length n that adhere to all of the probability specifications Bi ≡ bi, i.e.:

P^n(A | B1 ≡ b1, …, Bm ≡ bm) = E(Ā^n | B̄1^n = b1, …, B̄m^n = bm),

where Ā^n and B̄i^n denote the average occurrence of A and Bi over the n repetitions.

Based on that, partial conditional probability can be defined as

P(A | B1 ≡ b1, …, Bm ≡ bm) = lim(n→∞) P^n(A | B1 ≡ b1, …, Bm ≡ bm),

where bi·n is required to be an integer.

Jeffrey conditionalization is a special case of partial conditional probability, in which the condition events must form a partition:

P(A | B1 ≡ b1, …, Bm ≡ bm) = Σ(i=1..m) bi P(A | Bi).
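A minimal numerical sketch of Jeffrey conditionalization, with an assumed two-event partition and assumed probabilities (none of these numbers come from the text):

```python
# Jeffrey conditionalization sketch with an assumed two-event partition.
# B1 and B2 partition the space; evidence shifts their probabilities to b1 and b2.
p_A_given_B1 = 0.7   # assumed
p_A_given_B2 = 0.2   # assumed
b1, b2 = 0.6, 0.4    # new (partial) degrees of belief in B1 and B2; they must sum to 1

# Updated probability of A: the b_i-weighted average of the conditional probabilities.
p_A_updated = b1 * p_A_given_B1 + b2 * p_A_given_B2
print(p_A_updated)   # ~0.5
```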

Example

Suppose that somebody secretly rolls two fair six-sided dice, and we wish to compute the probability that the face-up value of the first one is 2, given the information that their sum is no greater than 5.

  • Let D1 be the value rolled on die 1.
  • Let D2 be the value rolled on die 2.

Probability that D1 = 2

Table 1 shows the sample space of 36 combinations of rolled values of the two dice, each of which occurs with probability 1/36, with the numbers displayed in the body of the table being the sums D1 + D2.

D1 = 2 in exactly 6 of the 36 outcomes; thus P(D1 = 2) = 6/36 = 1/6:

Table 1 (cell entries are D1 + D2)
          D2:  1    2    3    4    5    6
  D1 = 1       2    3    4    5    6    7
  D1 = 2       3    4    5    6    7    8
  D1 = 3       4    5    6    7    8    9
  D1 = 4       5    6    7    8    9   10
  D1 = 5       6    7    8    9   10   11
  D1 = 6       7    8    9   10   11   12

Probability that D1 + D2 ≤ 5

Table 2 shows that D1 + D2 ≤ 5 for exactly 10 of the 36 outcomes, thus P(D1 + D2 ≤ 5) = 10/36:

Table 2 (cell entries are D1 + D2)
          D2:  1    2    3    4    5    6
  D1 = 1       2    3    4    5    6    7
  D1 = 2       3    4    5    6    7    8
  D1 = 3       4    5    6    7    8    9
  D1 = 4       5    6    7    8    9   10
  D1 = 5       6    7    8    9   10   11
  D1 = 6       7    8    9   10   11   12

Probability that D1 = 2 given that D1 + D2 ≤ 5

Table 3 shows that for 3 of these 10 outcomes, D1 = 2.

Thus, the conditional probability P(D1 = 2 | D1 + D2 ≤ 5) = 3/10 = 0.3:

Table 3 (cell entries are D1 + D2)
          D2:  1    2    3    4    5    6
  D1 = 1       2    3    4    5    6    7
  D1 = 2       3    4    5    6    7    8
  D1 = 3       4    5    6    7    8    9
  D1 = 4       5    6    7    8    9   10
  D1 = 5       6    7    8    9   10   11
  D1 = 6       7    8    9   10   11   12

Here, in the earlier notation for the definition of conditional probability, the conditioning event B is that D1 + D2 ≤ 5, and the event A is D1 = 2. We have P(A | B) = P(A ∩ B) / P(B) = (3/36) / (10/36) = 3/10, as seen in the tables.
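The three counts read off the tables can also be verified by brute-force enumeration of the 36 outcomes; a minimal Python sketch:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

A = [(d1, d2) for d1, d2 in outcomes if d1 == 2]        # D1 = 2
B = [(d1, d2) for d1, d2 in outcomes if d1 + d2 <= 5]   # D1 + D2 <= 5
A_and_B = [w for w in A if w in B]

print(Fraction(len(A), len(outcomes)))    # 1/6   (Table 1)
print(Fraction(len(B), len(outcomes)))    # 5/18, i.e. 10/36 (Table 2)
print(Fraction(len(A_and_B), len(B)))     # 3/10  (Table 3)
```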

Use in inference

In statistical inference, the conditional probability is an update of the probability of an event based on new information. The new information can be incorporated as follows:

  • Let A, the event of interest, be in the sample space, say (X,P).
  • The occurrence of the event A knowing that event B has or will have occurred, means the occurrence of A as it is restricted to B, i.e. A ∩ B.
  • Without the knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A).
  • The probability of A knowing that event B has or will have occurred will be the probability of A ∩ B relative to P(B), the probability that B has occurred.
  • This results in P(A | B) = P(A ∩ B) / P(B) whenever P(B) > 0, and 0 otherwise.

This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov axioms. This conditional probability measure also could have resulted by assuming that the relative magnitude of the probability of A with respect to X will be preserved with respect to B (cf. a Formal Derivation below).

The wording "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A). This is consistent with the frequentist interpretation, which is the first definition given above.

Statistical independence

Events A and B are defined to be statistically independent if

P(A ∩ B) = P(A) P(B).

If P(B) is not zero, then this is equivalent to the statement that

P(A | B) = P(A).

Similarly, if P(A) is not zero, then

P(B | A) = P(B)

is also equivalent. Although the derived forms may seem more intuitive, they are not the preferred definition as the conditional probabilities may be undefined, and the preferred definition is symmetrical in A and B.
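As a quick illustration of the definition (the events chosen here are my own examples), one pair of dice events satisfies P(A ∩ B) = P(A)P(B) exactly, while the pair from the worked example above does not:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Exact probability under the uniform measure on two dice."""
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

# Independent: "first die is even" and "second die shows 6".
A = lambda w: w[0] % 2 == 0
B = lambda w: w[1] == 6
print(prob(lambda w: A(w) and B(w)) == prob(A) * prob(B))  # True

# Not independent: "first die is 2" and "sum is at most 5".
C = lambda w: w[0] == 2
D = lambda w: w[0] + w[1] <= 5
print(prob(lambda w: C(w) and D(w)) == prob(C) * prob(D))  # False
```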

Independent events vs. mutually exclusive events

The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero).

             If statistically independent    If mutually exclusive
P(A | B)     = P(A)                          = 0
P(B | A)     = P(B)                          = 0
P(A ∩ B)     = P(A) P(B)                     = 0

In fact, mutually exclusive events cannot be statistically independent (unless both of them are impossible), since knowing that one occurs gives information about the other (in particular, that the latter will certainly not occur).

Common fallacies

These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.

Assuming conditional probability is of similar size to its inverse

A geometric visualisation of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that P(A|B) P(B) = P(B|A) P(A), i.e. P(A|B) = P(B|A) P(A) / P(B). Similar reasoning can be used to show that P(Ā|B) = P(B|Ā) P(Ā) / P(B), etc.

In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics. The relationship between P(A|B) and P(B|A) is given by Bayes' theorem:

P(B | A) = P(A | B) P(B) / P(A), i.e. P(B | A) / P(A | B) = P(B) / P(A).

That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).

Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that P(A) ≈ P(A|B). These probabilities are linked through the law of total probability:

P(A) = Σ_n P(A ∩ B_n) = Σ_n P(A | B_n) P(B_n),

where the events (B_n) form a countable partition of Ω.

This fallacy may arise through selection bias. For example, in the context of a medical claim, let S_C be the event that a sequela (chronic disease) S occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases, C does not cause S (so that P(S_C) is low). Suppose also that medical attention is only sought if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(S_C) is high. The actual probability observed by the doctor is P(S_C | H).
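A small simulation sketch makes the gap between P(S_C) and P(S_C | H) concrete; all of the rates below are assumed purely for illustration:

```python
import random

# Assumed illustrative rates, not figures from the text.
random.seed(1)
p_sequela = 0.02          # P(S_C): C rarely leads to the chronic sequela S
p_help_if_sequela = 0.9   # patients with the sequela almost always seek help
p_help_otherwise = 0.01   # others almost never do

n = 100_000
patients = []
for _ in range(n):
    s = random.random() < p_sequela
    h = random.random() < (p_help_if_sequela if s else p_help_otherwise)
    patients.append((s, h))

seen_by_doctor = [s for s, h in patients if h]
print(sum(s for s, _ in patients) / n)              # ~0.02, the true P(S_C)
print(sum(seen_by_doctor) / len(seen_by_doctor))    # ~0.65, the P(S_C | H) the doctor observes
```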

Over- or under-weighting priors

Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is conservatism.

Formal derivation

Formally, P(A | B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures.

Let Ω be a sample space with elementary events {ω}, and let P be the probability measure with respect to the σ-algebra of Ω. Suppose we are told that the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. All events that are not in B will have null probability in the new distribution. For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of P in which the probability of B is one, and every event that is not in B, therefore, has a null probability. Hence, for some scale factor α, the new distribution must satisfy:

1. ω ∈ B: P(ω | B) = α P(ω)
2. ω ∉ B: P(ω | B) = 0
3. Σ over ω ∈ Ω of P(ω | B) = 1.

Substituting 1 and 2 into 3 to select α:

1 = Σ over ω ∈ Ω of P(ω | B) = Σ over ω ∈ B of α P(ω) = α P(B), so α = 1 / P(B).

So the new probability distribution is

1. ω ∈ B: P(ω | B) = P(ω) / P(B)
2. ω ∉ B: P(ω | B) = 0.

Now for a general event A,

P(A | B) = Σ over ω ∈ A ∩ B of P(ω | B) = Σ over ω ∈ A ∩ B of P(ω) / P(B) = P(A ∩ B) / P(B).
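The rescaling argument can be spelled out on a small finite sample space; the following sketch (the measure and the event B are assumed for illustration) builds the new distribution by zeroing outcomes outside B and scaling by α = 1/P(B):

```python
from fractions import Fraction

# A finite sample space with an assumed (non-uniform) probability measure.
P = {'w1': Fraction(1, 2), 'w2': Fraction(1, 4), 'w3': Fraction(1, 8), 'w4': Fraction(1, 8)}
B = {'w2', 'w3'}                                 # assumed conditioning event

p_B = sum(P[w] for w in B)                       # P(B) = 3/8
alpha = 1 / p_B                                  # the scale factor from the derivation

# New measure: outcomes outside B get probability 0, those inside B are rescaled.
P_given_B = {w: (alpha * P[w] if w in B else Fraction(0)) for w in P}

for w, p in P_given_B.items():
    print(w, p)                                  # w1 0, w2 2/3, w3 1/3, w4 0
print(sum(P_given_B.values()))                   # 1, as required by the axioms
```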

Licensing

Content obtained and/or adapted from:

  • Conditional probability, Wikipedia (https://en.wikipedia.org/wiki/Conditional_probability) under a CC BY-SA license