Unit 2: Testing of Hypothesis. Null and Alternative Hypotheses, Type I and Type II Errors, Level of Significance, P-Value
Slide Content
Basics of Testing Hypothesis
Ravinandan A P, Assistant Professor, Sree Siddaganga College of Pharmacy, Tumkur
Hypothesis: Null and Alternative; Type I and Type II Errors; Level of Significance; P-Value
Contents
Sample, population, large sample, small sample; null hypothesis, alternative hypothesis; sampling, essence of sampling, types of sampling; Type I error, Type II error; standard error of the mean (SEM), with pharmaceutical examples.
Parametric tests: t-test (one-sample, pooled or unpaired, and paired), ANOVA (one-way and two-way), Least Significant Difference.
Definition: A statistical hypothesis is an assumption or statement, which may or may not be true, concerning one or more populations.
Examples: 1) The mean height of the SSCPT students is 1.63 m. 2) There is no difference between the distributions of P. falciparum and P. vivax malaria in India (i.e., they are distributed in equal proportions).
The null & alternative hypotheses The main hypothesis which we wish to test is called the null hypothesis, since acceptance of it commonly implies “no effect” or “ no difference.” It is denoted by the symbol H O . 5 Ravinandan A P
Hypothesis (diagram slide; only the title is recoverable)
Type I Error
A Type I error is committed by rejecting the null hypothesis when in reality it is true. The probability of committing a Type I error is denoted by α:
α = P(Type I error) = P(rejecting the null hypothesis when it is true)
Type II Error
A Type II error is committed by accepting the null hypothesis when in reality it is false. The probability of committing a Type II error is denoted by β:
β = P(Type II error) = P(accepting the null hypothesis when it is false)
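To make the Type I error rate concrete, here is a minimal simulation sketch (not from the slides; the normal distribution, sample size of 30, and α = 0.05 are assumptions for illustration). It runs many experiments in which the null hypothesis is actually true, so every rejection is a Type I error, and the observed rejection rate should come out near α.

```python
# Monte Carlo sketch: when H0 is true, a test run at alpha = 0.05
# should reject (commit a Type I error) in roughly 5% of experiments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_experiments = 10_000
false_rejections = 0

for _ in range(n_experiments):
    # H0 is true here: the population mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value <= alpha:
        false_rejections += 1  # a Type I error

print(f"Observed Type I error rate: {false_rejections / n_experiments:.3f}")
```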
Ideally, hypothesis testing would carry no chance of either type of error, but in practice it is not possible to eliminate both. Hence, we fix the probability of one error (the Type I error), i.e., α, and try to minimize the probability of the other (the Type II error).
Examples
1) H0: μ = 1.63 m (from the previous example).
2) At present, only 60% of patients with leukemia survive more than 6 years. A pharmacist develops a new drug. Of 40 patients, chosen at random, on whom the new drug is tested, 26 are alive after 6 years. Is the new drug better than the former treatment?
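The leukemia example can be checked with an exact binomial test. A minimal sketch (assuming scipy >= 1.7 for `binomtest`): H0 is that the survival proportion is still 0.60, and H1 is that the new drug improves it.

```python
# H0: p = 0.60 vs H1: p > 0.60, with 26 of 40 patients surviving.
from scipy.stats import binomtest

result = binomtest(k=26, n=40, p=0.60, alternative="greater")
print(f"Observed proportion: {26 / 40:.3f}")   # 0.650
print(f"One-sided p-value:   {result.pvalue:.3f}")
```

Because the one-sided p-value here comes out well above 0.05 (roughly 0.3), these data alone do not give strong evidence that the new drug is better than the former treatment.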
Hypothesis testing offers us two choices:
1. Conclude that the difference between the two groups is so large that it is unlikely to be due to chance alone: reject the null hypothesis and conclude that the groups really do differ.
2. Conclude that the difference between the two groups could be explained by chance alone: accept the null hypothesis, at least for now.
Note that you could be making a mistake either way!
Hypothesis testing outcomes

Decision                       | Null hypothesis true | Null hypothesis false
Do not reject null hypothesis  | Correct decision     | Type II error
Reject null hypothesis         | Type I error         | Correct decision
Tests of Significance
These tests are mathematical methods for finding the probability (P), or relative frequency, with which an observed difference would occur by chance alone. The difference may be between the means or proportions of a sample and the population, or between the estimates of the experimental and control groups.
Methods of determining the significance of a difference are used to draw inferences and conclusions. Common tests in use are the Z test, t test, and χ² test. The Z and t tests express the observed difference in terms of the standard error (SE), which is a measure of the variation in sample estimates that occurs by chance.
The stages in performing a test of significance:
1. State the null hypothesis of no (or chance) difference, and the alternative hypothesis.
2. Determine P, the probability that your estimates would occur by chance, and accordingly accept or reject the null hypothesis.
3. Draw a conclusion on the basis of the P value, i.e., decide whether the observed difference is due to chance or to the play of some external factor on the sample under study.
A worked sketch of these stages follows below.
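As a concrete illustration of these stages, here is a minimal sketch of a one-sample z test against the earlier H0: μ = 1.63 m. The population SD, sample size, and sample mean below are invented for illustration, not taken from the slides.

```python
# Stage 1: state the hypotheses.
#   H0: mu = 1.63 m    H1: mu != 1.63 m
import math
from scipy.stats import norm

mu0 = 1.63
sigma, n = 0.10, 50        # assumed population SD and sample size
sample_mean = 1.66         # assumed observed sample mean

# Stage 2: determine P via the test statistic.
se = sigma / math.sqrt(n)            # standard error of the mean (SEM)
z = (sample_mean - mu0) / se
p_value = 2 * norm.sf(abs(z))        # two-sided p-value

# Stage 3: draw a conclusion at alpha = 0.05.
print(f"z = {z:.2f}, P = {p_value:.4f}")
print("Reject H0" if p_value <= 0.05 else "Fail to reject H0")
```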
Level of Significance
The maximum probability of rejecting the null hypothesis when it is true (i.e., the maximum probability of a Type I error) is known as the level of significance. It is denoted by α. These probabilities are generally taken as 0.05, 0.01, or 0.001, or as percentages: 5%, 1%, or 0.1%.
If the calculated p-value is smaller than or equal to 0.05, the null hypothesis is rejected and the result is called "statistically significant". The limit of 5% is called the significance level. In theory, any value between 0 and 1 could be used as the significance level, but only small values such as 0.01, 0.05, or 0.10 are useful. In medicine, 0.05 has been established as the standard.
Power of a Test
The probability of accepting the null hypothesis when it is false is denoted by β. The power of the test is 1 − β: the probability of correctly rejecting a false null hypothesis, i.e., power = P(reject H0 | H0 is false).
Power and Sample Size

Decision  | H true            | H false
Retain H  | Correct retention | Type II error
Reject H  | Type I error      | Correct rejection

Two types of decision errors:
Type I error: erroneous rejection of a true H; α ≡ probability of a Type I error.
Type II error: erroneous retention of a false H; β ≡ probability of a Type II error.
Power
β ≡ probability of a Type II error: β = Pr(retain H | H false) (the "|" is read as "given").
1 − β ≡ "power", the probability of avoiding a Type II error: 1 − β = Pr(reject H | H false).
In biostatistics, power is the probability of correctly rejecting a false null hypothesis, i.e., the probability of detecting an effect when it is present. It is calculated as 1 − β, where β is the probability of making a Type II error (concluding that the null hypothesis is correct when it is not). For example, if the Type II error rate is 0.2, the statistical power is 1 − 0.2 = 0.8. A power closer to 1 indicates a test that is better at detecting a false null hypothesis.
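Power can also be estimated directly by simulation. A minimal sketch (the true mean of 0.5, sample size of 30, and α = 0.05 are assumptions for illustration): here H0 (μ = 0) is false, so the fraction of experiments that reject H0 estimates the power, 1 − β.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, true_mean = 0.05, 30, 0.5
n_sims = 10_000
rejections = 0

for _ in range(n_sims):
    # H0 (mu = 0) is false: samples come from a mean-0.5 population.
    sample = rng.normal(loc=true_mean, scale=1.0, size=n)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value <= alpha:
        rejections += 1          # a correct rejection

power = rejections / n_sims      # estimate of 1 - beta
print(f"Estimated power: {power:.2f}")
print(f"Estimated beta:  {1 - power:.2f}")
```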
Power is the probability of correctly rejecting the null hypothesis; we are typically only interested in the power of a test when the null is in fact false. This definition also makes clear that power is a conditional probability: the null hypothesis makes a statement about parameter values, but the power of the test is conditional on what the values of those parameters really are.
Factors that can affect power include:
- Significance level (α): the probability of concluding that the null hypothesis is incorrect when it is in fact correct.
- Sample size: planning the sample size to keep α and β low helps ensure the study is meaningful without being too expensive or difficult.
- Variability: the variance of the measured response variable also affects power.
- Effect size: increasing the effect size increases power.
- Research design: in a within-subjects design, each participant is tested in all treatments, so individual differences are less likely to affect the results; in a between-subjects design, each participant takes part in only one treatment, so individual differences may have a greater impact.
The sketch below illustrates the sample-size effect.
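Here is a minimal sketch of the sample-size effect, using a common normal approximation for the power of a two-sided one-sample test with standardized effect size d (the effect size of 0.5 SD and α = 0.05 are assumptions for illustration):

```python
# Approximate power: Phi(d * sqrt(n) - z_{1 - alpha/2}).
import math
from scipy.stats import norm

alpha, d = 0.05, 0.5                  # assumed effect size of 0.5 SD
z_crit = norm.ppf(1 - alpha / 2)

for n in (10, 20, 40, 80):
    power = norm.cdf(d * math.sqrt(n) - z_crit)
    print(f"n = {n:3d}  approximate power = {power:.2f}")
```

All else held constant, power grows toward 1 as the sample size increases.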
P-Value
The p-value of a test is the smallest value of α for which the null hypothesis would be rejected. An alternative definition is the probability of obtaining a result at least as extreme as the experimental result if the null hypothesis is true. Smaller p-values mean more significant differences between the null hypothesis and the sample result.
What is P?
P depends on the observed outcome: P is the fraction of studies that, by chance alone, would produce data more discrepant from H than that observed in this particular study. P-values measure the strength of the evidence, but not the importance of the result.
Interpretation
The P-value answers the question: what is the probability of the observed test statistic, or one more extreme, when H is true? Thus, smaller and smaller P-values provide stronger and stronger evidence against H. A small P-value means strong evidence.
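As a small illustration, the P-value for an observed z statistic can be computed directly from the standard normal distribution (the value z = 2.1 is invented for illustration):

```python
from scipy.stats import norm

z_observed = 2.1                           # assumed observed test statistic
p_one_sided = norm.sf(z_observed)          # P(Z >= z) under H
p_two_sided = 2 * norm.sf(abs(z_observed))

print(f"One-sided P-value: {p_one_sided:.4f}")
print(f"Two-sided P-value: {p_two_sided:.4f}")
```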
Interpretation Conventions*
P > 0.10          non-significant evidence against H
0.05 < P ≤ 0.10   marginally significant evidence against H
0.01 < P ≤ 0.05   significant evidence against H
P ≤ 0.01          highly significant evidence against H

Examples: P = 0.27 is non-significant evidence against H; P = 0.01 is highly significant evidence against H.
* It is unwise to draw firm borders for "significance".