Probability
Dr. Akshay

Introduction

The word "probable" is used very commonly in everyday language to mean something which is very likely to happen. One function of statistical methods is to provide techniques for making inductive inferences and for measuring the degree of uncertainty of such inferences. The long-term regularity of outcomes provides us with a measure of the amount of chance to which a particular trial is subject. This measure of chance is called probability.

THE PROBABILITY SCALE

Chance is measured on a probability scale with zero at one end and unity at the other. The top end of the scale, marked unity, represents absolute certainty; the bottom end, marked zero, represents absolute impossibility. The other points on the scale, falling between 0 and 1, indicate the relative chance of occurrence of the outcome. When we are unable to place any odds between the occurrence and the non-occurrence of an outcome, we say that P = 1/2, that is, the event can happen or not happen with equal odds.

MEASUREMENT OF PROBABILITY

A priori or Classical Probability

Suppose that we toss an ideal coin. What is the probability of getting heads? Since there are only two ways the coin can fall, heads or tails, and since the coin is well balanced, one would expect it to fall heads and tails with about equal frequency; hence, in the long run one would expect it to fall heads about one-half of the time, and so the probability of the outcome heads is 1/2. This kind of reasoning prompted the following classical definition of probability: if there are a total of n mutually exclusive and equally likely outcomes of a trial, and if nA of these outcomes have an attribute A, then the probability of A is the fraction nA/n.
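A minimal Python sketch, assuming a fair six-sided die and taking "even face" as the attribute A, illustrates the classical count-based definition:

```python
from fractions import Fraction

# Classical probability: P(A) = nA / n for n equally likely outcomes.
outcomes = [1, 2, 3, 4, 5, 6]                   # sample space of a fair die
event_A = [x for x in outcomes if x % 2 == 0]   # attribute A: an even face

p_A = Fraction(len(event_A), len(outcomes))
print(p_A)  # 1/2
```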

A posteriori or Frequency Probability

In the previous sub-section the evaluation of probability was done in some simple cases making use of our intuitive notion of chance. However, in more complex situations the evaluation of probability will have to be based on observational or experimental evidence. The estimate of the probability of a specified outcome based on a series of independent trials is given by:

Probability = (number of times the outcome occurred) / (total number of trials)

Sometimes this probability is referred to as statistical probability, frequency or empirical probability or a posteriori probability, i.e., after the event. For example, if we want to know the probability of success of a surgical procedure, a review of past experience of this surgical procedure under similar conditions will provide the data for estimating this probability. The longer the series we have, the closer the estimate would be to the true value.
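A short simulation sketch, tossing a hypothetical fair coin in Python, illustrates how the empirical estimate settles toward the true value as the series lengthens:

```python
import random

random.seed(1)

# Empirical probability of heads from repeated independent coin tosses.
for trials in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(trials))
    print(f"{trials:>9} trials: estimate = {heads / trials:.4f}")
# The estimates settle near the true value 0.5 as the series lengthens.
```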

Laws of Probability for Independent Events

There are two important laws of probability which are useful for finding probabilities in complex situations where the events concerned are independent.

Additive Law

If an event can occur in any one of several mutually exclusive ways, the probability of that event is the sum of the individual probabilities of the different ways in which it can occur. For example, when a die is tossed, what is the probability of getting 2 or 4 or 6?

The probability of 2 = 1/6
The probability of 4 = 1/6
The probability of 6 = 1/6

So the probability of 2 or 4 or 6 is 1/6 + 1/6 + 1/6 = 3/6 = 1/2
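A quick sketch, assuming the same fair die, confirms the result both by summing individual probabilities and by direct counting:

```python
from fractions import Fraction

die = range(1, 7)       # equally likely faces of a fair die
event = {2, 4, 6}       # mutually exclusive outcomes of interest

p_each = [Fraction(1, 6) for _ in event]   # individual probabilities
print(sum(p_each))                         # 1/2, by the additive law
print(Fraction(len(event), len(die)))      # 1/2, by direct counting
```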

Multiplication Law

The probability of the simultaneous occurrence of two or more independent events is the product of the individual probabilities. For example, in tossing 2 coins:

Probability of heads on one coin = 1/2
Probability of heads on the other coin = 1/2

Thus, the probability of heads on both coins = 1/2 × 1/2 = 1/4
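Enumerating the four equally likely outcomes of two hypothetical fair coins confirms the product rule:

```python
from fractions import Fraction
from itertools import product

# Sample space of two independent coin tosses: HH, HT, TH, TT.
space = list(product("HT", repeat=2))
both_heads = [s for s in space if s == ("H", "H")]

print(Fraction(len(both_heads), len(space)))  # 1/4 = 1/2 * 1/2
```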

Conditional Probability

In some situations the chance of occurrence of a particular event depends on some other event; the multiplication law above is not applicable to such dependent events. For example, the chance that a patient with some disease survives the next year depends, of course, on his having survived to the present time and on the current status of his disease. Such a probability is called a conditional probability.

Let P(A) represent the probability of occurrence of event A, and P(B) that of event B. Let P(AB) represent the probability of the simultaneous occurrence of events A and B. Then, by definition, the probability that event A occurs given that event B has already occurred, written P(A/B), is

P(A/B) = P(AB) / P(B)

The general rule of multiplication, in its modified form in terms of conditional probabilities, becomes

P(A and B) = P(B) × P(A/B) or P(A) × P(B/A)

Example: From past experience with the illnesses of his patients, a doctor has gathered the following information about a population:

5% feel that they have cancer and do have cancer; 45% feel that they have cancer and don't have cancer; 10% do not feel that they have cancer and do have it; and the remaining 40% feel that they do not have cancer and really do not have it. Denoting the events as A when the patient feels he has cancer, and B when the patient has cancer, we have

P(AB) = 0.05, P(A) = 0.50, P(B) = 0.15

The probability that a patient has cancer, given that he feels he has it, is

P(B/A) = P(AB)/P(A) = 0.05/0.50 = 0.10

The probability that he feels he has cancer, given that he does have it, is

P(A/B) = P(AB)/P(B) = 0.05/0.15 = 0.33
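These figures can be checked directly from the joint probabilities; the variable names in this sketch are illustrative:

```python
# Joint probabilities from the doctor's records.
p_feels_and_has = 0.05   # feels he has cancer and does
p_feels_not_has = 0.45   # feels he has cancer and does not
p_not_feels_has = 0.10   # does not feel he has it but does
p_not_feels_not = 0.40   # neither feels it nor has it

p_A = p_feels_and_has + p_feels_not_has   # P(A) = 0.50
p_B = p_feels_and_has + p_not_feels_has   # P(B) = 0.15

print(round(p_feels_and_has / p_A, 2))    # P(B/A) = 0.1
print(round(p_feels_and_has / p_B, 2))    # P(A/B) = 0.33
```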

BAYES' THEOREM

Usually the physician knows the conditional probability of a particular symptom (or a positive test) for a particular disease. However, it is important that he know the conditional probability of the disease for an individual patient, given the particular symptom (or positive test). A theorem attributed to Thomas Bayes (1702-61) provides the means to derive the latter probability from the former. The theorem is illustrated below by an example.

This example concerns bacteriuria and pyelonephritis in pregnancy. Suppose it is known that roughly 6 per cent of pregnant women attending a prenatal clinic at a large urban hospital have bacteriuria.

Consider the two events: A, a pregnant woman has bacteriuria; and A1, she does not have bacteriuria. Since A and A1 are mutually exclusive and complementary,

P(A) = 0.06, P(A1) = 1 - 0.06 = 0.94

Suppose it is further known that 30 per cent of bacteriuric and 1 per cent of non-bacteriuric pregnant women proceed to develop pyelonephritis. Using B to denote the occurrence of pyelonephritis, and B1 its absence:

Notation   Event                                                            Probability
B/A        Pyelonephritis, given that the pregnant woman is bacteriuric         0.30
B/A1       Pyelonephritis, given that the pregnant woman is non-bacteriuric     0.01

1. With these definitions, consider the following probability questions:

(a) What is the chance of a pregnant woman having both bacteriuria and pyelonephritis? Using the multiplicative law:
P(A and B) = P(B/A) P(A) = (0.30)(0.06) = 0.0180

(b) What is the chance of a pregnant woman not having bacteriuria but having pyelonephritis? Using the multiplicative law:
P(A1 and B) = P(B/A1) P(A1) = (0.01)(0.94) = 0.0094

2. What is the chance of pyelonephritis? In this particular example, pyelonephritis can occur in two mutually exclusive ways, with or without bacteriuria. Hence, application of the additive law to the probabilities determined in 1(a) and 1(b) gives:
P(pyelonephritis) = P(B) = P(A and B) + P(A1 and B) = 0.0180 + 0.0094 = 0.0274

3. Finally, with the knowledge that a pregnant woman has developed pyelonephritis, what is the chance that she had been bacteriuric? In the notation developed, the question is: what is P(A/B), the probability of bacteriuria given that the pregnant woman has pyelonephritis? From the alternative form of the multiplicative law of the preceding section and the answers to 1(a) and 2 above:

P(A/B) = P(A and B) / P(B) = 0.0180/0.0274 = 0.6569

In other words, if a pregnant woman has developed pyelonephritis there is a 65.7 per cent chance that she had been bacteriuric. Substituting letters for numbers and working backward from the expression in the answer to Question 3, it may be seen that

P(A/B) = P(A and B) / P(B)
       = P(A and B) / [P(A and B) + P(A1 and B)]
       = P(B/A) P(A) / [P(B/A) P(A) + P(B/A1) P(A1)]

The last expression is the usual formulation of Bayes' theorem.
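The whole calculation can be reproduced in a few lines of Python using the figures above:

```python
# Bayes' theorem with the bacteriuria/pyelonephritis figures from the example.
p_A = 0.06            # P(A): bacteriuria
p_A1 = 1 - p_A        # P(A1): no bacteriuria
p_B_given_A = 0.30    # P(B/A): pyelonephritis given bacteriuria
p_B_given_A1 = 0.01   # P(B/A1): pyelonephritis given no bacteriuria

p_B = p_B_given_A * p_A + p_B_given_A1 * p_A1   # total probability, 0.0274
p_A_given_B = p_B_given_A * p_A / p_B           # Bayes' theorem

print(round(p_B, 4), round(p_A_given_B, 4))     # 0.0274 0.6569
```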

APPLICATION OF BAYES' THEOREM IN DETERMINING DIAGNOSTIC EFFICACY

Bayes' theorem has been used frequently to evaluate the performance of diagnostic and screening tests. Assessing a new test begins with the identification of a group of subjects known to have the disorder or disease of interest, using an accepted reference test known as the gold standard. Let us denote subjects with the disease as D+ and those without as D-. Further, let us denote subjects who are positive for the new diagnostic test as T+ and those negative as T-.

                 Disease
Test         +        -        Total
  +          a        b        a+b
  -          c        d        c+d
Total       a+c      b+d

Sensitivity: This is the probability that a diseased individual will have a positive test result, and hence is the true positive rate (TPR) of the test. In conditional probability notation, sensitivity is written P(T+/D+). From the Table, Sensitivity = a/(a + c).

Specificity: This is the probability that a disease-free individual will have a negative test result, and is the true negative rate (TNR) of the test. In conditional probability notation, specificity is written P(T-/D-). From the Table, Specificity = d/(b + d).

False negative rate (FNR): This is the probability that a diseased individual will have a negative test result. In conditional probability notation, FNR is written P(T-/D+), and from the Table, FNR = c/(a + c).

False positive rate (FPR): This is the probability that a disease-free individual will have a positive test result. In conditional probability notation, FPR is written P(T+/D-), and from the Table, FPR = b/(b + d).

Two other indicators, very useful in clinical practice, can also be calculated: the positive and negative predictive values.

Predictive value positive (PVP): This is the probability that an individual with a positive test result has the disease. PVP is also known as the posterior probability, or post-test probability, of disease. In conditional probability notation, PVP is P(D+/T+), and from the Table, PVP = a/(a + b).

Predictive value negative (PVN): This is the probability that an individual with a negative test result does not have the disease. In conditional probability notation, PVN is written P(D-/T-), and from the Table, PVN = d/(c + d).
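A small helper function (the name and the example counts here are hypothetical) computes all six indicators from the 2×2 table:

```python
def test_metrics(a, b, c, d):
    """Indicators from a 2x2 table: a=TP, b=FP, c=FN, d=TN."""
    return {
        "sensitivity (TPR)": a / (a + c),
        "specificity (TNR)": d / (b + d),
        "FNR": c / (a + c),
        "FPR": b / (b + d),
        "PVP": a / (a + b),
        "PVN": d / (c + d),
    }

# Hypothetical counts: 90 true positives, 30 false positives,
# 10 false negatives, 870 true negatives.
for name, value in test_metrics(90, 30, 10, 870).items():
    print(f"{name}: {value:.3f}")
```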

The formula for PVP can be written as:

PVP = (Prevalence)(Sensitivity) / [(Prevalence)(Sensitivity) + (1 - Prevalence)(1 - Specificity)]

The formula for PVN may be written as:

PVN = (1 - Prevalence)(Specificity) / [(1 - Prevalence)(Specificity) + (Prevalence)(1 - Sensitivity)]

In general, the more sensitive a test, the better its PVN; and the more specific a test, the better its PVP.
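These prevalence-based forms, which are simply Bayes' theorem applied to the test, can be checked with a short sketch; the example prevalence, sensitivity, and specificity are assumed values:

```python
def predictive_values(prevalence, sensitivity, specificity):
    """PVP and PVN from prevalence via Bayes' theorem."""
    pvp = (prevalence * sensitivity) / (
        prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    )
    pvn = ((1 - prevalence) * specificity) / (
        (1 - prevalence) * specificity + prevalence * (1 - sensitivity)
    )
    return pvp, pvn

# A fairly accurate test applied at 6% prevalence.
pvp, pvn = predictive_values(0.06, 0.90, 0.95)
print(f"PVP = {pvp:.3f}, PVN = {pvn:.3f}")  # PVP ≈ 0.535, PVN ≈ 0.993
```

Note how even a quite accurate test yields a modest PVP at low prevalence, which is why post-test probability, not sensitivity alone, should guide interpretation.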

Sometimes it is useful to show graphically the relationship between sensitivity and specificity for a diagnostic test. Such a diagram is known as the receiver operating characteristic (ROC) curve, and it provides a simple tool for applying the predictive value method to the choice of a positivity criterion. The ROC curve is constructed by plotting the true positive rate (sensitivity) against the false positive rate (1 - specificity) for several choices of the positivity criterion. ROC curves can also be used to compare two diagnostic tests. The area under the curve represents the overall accuracy of a test; the larger the area, the better the test.
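The construction can be sketched numerically; the scores and the threshold grid below are invented for illustration:

```python
# ROC points: sweep the positivity criterion over hypothetical test scores.
diseased = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4]   # scores of D+ subjects
healthy = [0.7, 0.5, 0.45, 0.3, 0.2, 0.1]     # scores of D- subjects

for threshold in (0.2, 0.4, 0.6, 0.8):
    tpr = sum(s >= threshold for s in diseased) / len(diseased)  # sensitivity
    fpr = sum(s >= threshold for s in healthy) / len(healthy)    # 1 - specificity
    print(f"criterion {threshold}: TPR = {tpr:.2f}, FPR = {fpr:.2f}")
# Plotting TPR against FPR for these criteria traces the ROC curve;
# the area under it summarizes the test's overall accuracy.
```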

THANK YOU…