Unit 2: Testing of Hypothesis - Parametric Tests - Biostatistics and Research Methodology
About This Presentation
Unit 2: Testing of Hypothesis - Parametric Tests.
Parametric tests: t-test (sample, pooled or unpaired, and paired); ANOVA (one-way and two-way); Least Significant Difference.
Size: 962.52 KB
Language: en
Added: Aug 27, 2024
Slides: 47 pages
Slide Content
Basics of Testing Hypothesis - Parametric Tests. Ravinandan A P, Assistant Professor, Sree Siddaganga College of Pharmacy, Tumkur
Contents: Sample, population, large sample, small sample, null hypothesis, alternative hypothesis, sampling, essence of sampling, types of sampling, Type I error, Type II error, standard error of the mean (SEM), with pharmaceutical examples. Parametric tests: t-test (sample, pooled or unpaired, and paired); ANOVA (one-way and two-way); Least Significant Difference.
Syllabus Content, Unit II. Parametric tests: t-test (sample, pooled or unpaired, and paired); ANOVA (one-way and two-way); Least Significant Difference.
Level of Significance - Parametric Data
Parametric statistical tests assume that the observations are independent (except when paired), that the data are randomly drawn from a normally distributed population of values, and that the dependent variable is interval-level data. Parametric tests are sensitive to sample size.
Non-parametric tests also assume that the observations are independent and randomly drawn from the population, but they do not require the population of values to be normally distributed. When comparing two independent groups using rank statistics (e.g., the rank sum), the distributions are expected to have a similar shape within each group. The key difference is that a parametric test relies on an assumed statistical distribution for the data, whereas a non-parametric test does not depend on any particular distribution.
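The contrast above can be sketched in code. This is an illustrative example, not part of the slides: it assumes SciPy is available and uses made-up data, running a parametric t-test and its non-parametric counterpart (the Mann-Whitney U rank test) on the same two independent samples.

```python
# Illustrative sketch (hypothetical data): a parametric and a non-parametric
# test applied to the same two independent groups using SciPy.
from scipy import stats

group_a = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0]
group_b = [4.2, 4.8, 4.5, 5.0, 4.4, 4.6]

# Parametric: independent-samples t-test (assumes normality)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric: Mann-Whitney U rank test (no normality assumption)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test:       t = {t_stat:.3f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
```

Both tests ask whether the two groups differ; the rank test does so without assuming the underlying populations are normal.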
T-test: Definition (Meaning). The t-test is a statistical technique applied to test the significance of the difference between two groups of individuals or samples.
Criteria for applying the t-test: random variables; quantitative data; variables normally distributed; sample size less than 30.
Application of the t-test: it is applied to find the significance of the difference between two means, either as an unpaired t-test or as a paired t-test.
Degrees of Freedom. The degrees of freedom denote the number of values that a researcher has the freedom to choose. Degrees of freedom, often represented by v or df, is the number of independent pieces of information used to calculate a statistic; it is calculated as the sample size minus the number of restrictions. The concept can be explained by an analogy: in X + Y = 10, df = 1 (once X is chosen, Y is fixed); in X + Y + Z = 15, df = 2. For a paired t-test, df = n - 1; for an unpaired t-test, df = N1 + N2 - 2.
TEST FOR STATISTICAL SIGNIFICANCE. It involves: 1. Student's t-test; 2. the chi-square test. Student's t-test was devised by W.S. Gosset in 1908 and perfected by R.A. Fisher in 1926. It has two versions: 1. the unpaired t-test and 2. the paired t-test. The ratio of the observed difference between the two means of small samples to the standard error of that difference is denoted by the letter t.
Gosset showed that this ratio follows a distinct distribution, called the t distribution. The probability of occurrence (p) of the calculated t value is determined by reference to a t table. If the calculated t value exceeds the value given under p = 0.05 in the table, the result is said to be significant at the 5% level; the null hypothesis is rejected and the alternative hypothesis is accepted.
If the calculated t value is less than the value given under p = 0.05 in the table, the result is said to be non-significant at the 5% level; the null hypothesis is accepted and the alternative hypothesis is rejected.
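The table lookup described above can also be done programmatically. The sketch below is illustrative and not part of the slides: it assumes SciPy is available, and the calculated t value used is hypothetical.

```python
# Illustrative sketch: using SciPy in place of a printed t table.
from scipy import stats

alpha = 0.05
df = 17                 # e.g., N1 + N2 - 2 for an unpaired test
t_calc = 2.45           # hypothetical calculated t statistic

t_crit = stats.t.ppf(1 - alpha / 2, df)     # two-tailed critical value at 5%
p_value = 2 * stats.t.sf(abs(t_calc), df)   # two-tailed p-value

print(f"critical t (5%, df={df}) = {t_crit:.2f}")
if abs(t_calc) > t_crit:
    print("Significant at the 5% level: reject the null hypothesis")
else:
    print("Not significant at the 5% level: retain the null hypothesis")
```

The decision rule is exactly the one on the slide: compare the calculated t with the tabulated critical value at p = 0.05.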
Unpaired t-test: a general technique used to test whether a variable differs between two groups; it does not require that the two groups be paired in any way. A typical example is comparing a variable in one group treated with drug A against another group treated with drug B.
Calculations. When the same parameter (e.g., BP) is studied under two conditions, normal and drug-treated, this t-test can answer whether the change due to the drug treatment is significant. The value of t is determined by the formula: t = (X̄1 - X̄2) / √(SE1² + SE2²), where X̄1 and X̄2 are the means of the two sets of data and SE1 and SE2 are the standard errors of the two means, respectively.
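The formula above can be sketched directly in code. This is an illustrative example (hypothetical blood-pressure readings, SciPy assumed available): the manual function implements t = (X̄1 - X̄2) / √(SE1² + SE2²), which with separate standard errors corresponds to SciPy's Welch variant of `ttest_ind`.

```python
# Sketch of the unpaired t formula: t = (mean1 - mean2) / sqrt(SE1^2 + SE2^2).
# Data are hypothetical.
import math
from scipy import stats

bp_control = [120, 118, 125, 122, 119, 121]
bp_treated = [110, 112, 108, 115, 111, 109]

def unpaired_t(x, y):
    """t from the slide's formula, using each group's own standard error."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    # sample SD with n-1 divisor, then SE = SD / sqrt(n)
    sdx = math.sqrt(sum((v - mx) ** 2 for v in x) / (len(x) - 1))
    sdy = math.sqrt(sum((v - my) ** 2 for v in y) / (len(y) - 1))
    se_x, se_y = sdx / math.sqrt(len(x)), sdy / math.sqrt(len(y))
    return (mx - my) / math.sqrt(se_x ** 2 + se_y ** 2)

t_manual = unpaired_t(bp_control, bp_treated)
t_scipy, p = stats.ttest_ind(bp_control, bp_treated, equal_var=False)
print(f"manual t = {t_manual:.3f}, scipy (Welch) t = {t_scipy:.3f}")
```

The agreement between the two values confirms that the slide's formula and the library routine compute the same statistic.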
Example: the following table shows the gain in body weight of two lots of adults maintained on two different diets (high and low protein). Calculate whether the observed change in body weight is due to diet.
n1 = 12, total = 1440, X̄1 = 120; n2 = 7, total = 707, X̄2 = 101. t = (X̄1 - X̄2) / √(SE1² + SE2²) = 19/10.4 ≈ 1.83. For t = 1.83 at 17 degrees of freedom, p > 0.05; the observed difference in body-weight gain between the two diets is not significant.
Paired t-test. It is a useful technique when we are investigating a variable in two groups where there is a meaningful one-to-one correspondence between the data points in one group and those in the other. It is used for the analysis of paired data: the same animals, humans, or tissues are exposed to both treatments, so each subject serves as its own control. The observed difference in each pair is calculated, and t is determined by the formula: t = |d̄| / √(s²/(n - 1)), where d̄ is the mean of the pair differences, s is the standard deviation of the differences, and n is the number of matched pairs. The number of degrees of freedom is (n - 1).
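The paired procedure above can be sketched as follows. This is an illustrative example (hypothetical before/after measurements, SciPy assumed available): it forms the per-pair differences and computes t = d̄ / (s_d/√n), the standard equivalent of the slide's formula, then checks the result against `scipy.stats.ttest_rel`.

```python
# Sketch of the paired t-test on hypothetical before/after data.
import math
from scipy import stats

before = [11.2, 10.8, 11.5, 10.9, 11.1, 10.7]
after  = [12.0, 11.6, 12.1, 11.8, 11.9, 11.5]

d = [a - b for a, b in zip(after, before)]   # difference within each pair
n = len(d)
d_bar = sum(d) / n                           # mean difference
s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))  # SD of differences
t_manual = d_bar / (s_d / math.sqrt(n))      # df = n - 1

t_scipy, p = stats.ttest_rel(after, before)
print(f"manual t = {t_manual:.3f}, scipy t = {t_scipy:.3f}, p = {p:.4f}")
```

The slide writes the denominator as √(s²/(n - 1)); that form is equivalent when s is computed with an n divisor, and both reduce to the same statistic shown here.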
Example: Hb values were estimated before and after treatment with vitamin B12 therapy in 6 human volunteers. Determine whether the observed changes are significant, i.e., due to the drug therapy.
d̄ = ∑d/n; t = 5.92; df = 5; p < 0.01. Vitamin B12 therapy has brought about a significant change in Hb content.
ANOVA (Analysis of Variance). It is used for the comparison of more than two samples. ANOVA compares the variation in the means between the treatments with the variation within the treatments. If the between-group and within-group variations are of approximately the same size, there is no significant difference between the control and treatment groups. ANOVA thus tells us about differences between the treatment and control groups.
Many studies involve comparisons between more than two groups of subjects. ANOVA is an abbreviation of the method's full name, ANalysis Of VAriance, invented by R.A. Fisher in the 1920s.
Assumptions in ANOVA: the observations are independent; the parent populations from which the observations are taken are normally distributed; the various treatment and environmental effects are additive in nature; the populations have the same variance σ² (or standard deviation σ); the samples are simple random samples; the samples are independent of each other.
ANOVA stands for analysis of variance; it is one of the most extensively used statistical tests. ANOVA is used when more than two treatments are compared in an experiment, typically in a completely randomized design, and the means of two or more groups are compared. Example: in a clinical trial, 150 patients are to be assigned to 3 treatment groups, one placebo and two active. The optimal assignment results in an equal number in each group, N/t units per group, i.e., 150/3 = 50; so 50 patients are assigned per treatment.
Analysis of Variance. The ANOVA F-test is a comparison of the average variability between groups to the average variability within groups. The variability within each group is a measure of the spread of the data within that group. The variability between groups is a measure of the spread of the group means around the overall mean for all groups combined.
ANOVA: F-statistic. If the variability between groups is large relative to the variability within groups, the F-statistic will be large. If the variability between groups is similar to or smaller than the variability within groups, the F-statistic will be small. If the F-statistic is large enough, the null hypothesis that all means are equal is rejected.
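The F-test described above can be sketched with SciPy's one-way ANOVA routine. This is an illustrative example, not from the slides: the three groups below are hypothetical, and `f_oneway` is assumed available.

```python
# Illustrative sketch: one-way ANOVA on three hypothetical groups.
# f_oneway computes the F statistic (between-group variability relative
# to within-group variability) and its p-value.
from scipy import stats

group1 = [23, 25, 21, 24, 22]
group2 = [30, 31, 29, 32, 28]
group3 = [24, 26, 23, 25, 27]

f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

Here group2's mean is well separated from the others, so the between-group variability dominates the within-group variability and F is large, leading to rejection of the null hypothesis of equal means.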
[Figure: worked ANOVA illustration with socioeconomic status (SES) as the grouping factor]
ANOVA. One-way ANOVA: when we compare more than two groups based on one factor (independent variable), this is called one-way ANOVA. For example, it is used if a manufacturing company wants to compare the productivity of three or more employees based on working hours. Two-way ANOVA: a two-way ANOVA tests the two main effects of the two factors and the one two-way interaction effect between them.
For one-way ANOVA, the total sum of squares consists of the between-groups sum of squares and the within-groups sum of squares. The between-groups sum of squares represents differences among treatments; large values indicate large treatment differences (if the treatment means are identical, the between-groups sum of squares will be zero on average). The within-groups sum of squares represents differences within a treatment, i.e., error: the differences among objects within a treatment are a measure of the variability of the observations. An ANOVA table consists of: source of variation, degrees of freedom, sum of squares, and mean square. In one-way ANOVA the sources consist of the between, within, and total terms.
The sum of squares divided by its df is known as the mean square (between-groups mean square, within-groups mean square). The df for treatment is t - 1; the df for error (within treatment) is n - t, where n is the total number of observations. Example: treatment of animals with 4 different drugs causes an alteration in the brain-tissue concentration of ACh. Is there any significant difference between the treatment groups and the control group?
Correction factor (C.F.) = T²/n = (410)²/20 = 8405, where T is the grand total and n is the total number of observations in all groups.
Total sum of squares = ∑X² - C.F. = 8824 - 8405 = 419
Between-groups sum of squares = ∑((∑X)²/n per group) - C.F. = 8689 - 8405 = 284
Error sum of squares = (total sum of squares) - (between-groups sum of squares) = 419 - 284 = 135
Between-groups df = number of groups - 1 = 5 - 1 = 4
Total df = total observations in all groups - 1 = 20 - 1 = 19
Error df = total df - between-groups df = 19 - 4 = 15
Mean square (between groups) = between-groups sum of squares / df = 284/4 = 71
Mean square (error) = 135/15 = 9
F = between-groups mean square / error mean square = 71/9 = 7.89
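The arithmetic above can be reproduced step by step from the slide's summary totals (grand total 410, 20 observations in 5 groups, ∑X² = 8824, and 8689 for the sum of squared group totals over group sizes); only those published summary values are used here.

```python
# Reproducing the slide's one-way ANOVA computation from its summary totals.
grand_total = 410
n_obs = 20
n_groups = 5
sum_x_sq = 8824          # sum of X^2 over all observations (from the slide)
sum_group_sq = 8689      # sum of (group total^2 / group size) (from the slide)

cf = grand_total ** 2 / n_obs                 # correction factor = 8405
total_ss = sum_x_sq - cf                      # total sum of squares = 419
between_ss = sum_group_sq - cf                # between-groups SS = 284
error_ss = total_ss - between_ss              # error SS = 135

df_between = n_groups - 1                     # 4
df_total = n_obs - 1                          # 19
df_error = df_total - df_between              # 15

ms_between = between_ss / df_between          # 71
ms_error = error_ss / df_error                # 9
f_ratio = ms_between / ms_error
print(f"F = {f_ratio:.2f}")                   # F = 7.89
```

Each intermediate quantity matches the corresponding line of the slide's worked example.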
Least Significant Difference (LSD). It is a method for making pairwise comparisons between the means of a quantitative variable coming from three or more independent groups. Two treatment means are declared significantly different (using LSD) if their difference exceeds LSD = t(α/2, error df) × √(2 × MSE / n), where MSE is the error mean square from the ANOVA and n denotes the number of observations involved in the computation of a treatment mean.
The LSD test is a statistical method that compares the means of two or more sample groups to determine whether they are significantly different. It is used in the context of analysis of variance (ANOVA) when the F-ratio suggests that the null hypothesis should be rejected, i.e., when the difference between the population means is significant. The LSD test then compares the populations in pairs to identify which ones have statistically different means.
The LSD test provides a threshold, called the LSD value, below which mean differences are considered not significant; differences between pairs of means are compared to this threshold to identify significantly different pairs. The LSD test assumes that the data are normally distributed and the group variances homogeneous. It can be used, for example, in phase III clinical trials to compare two doses of an active treatment against placebo.
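The LSD threshold can be sketched numerically. This is an illustrative example (SciPy assumed available) using the standard formula LSD = t(α/2, error df) × √(2·MSE/n), plugged with the numbers from the worked ANOVA example earlier in this unit (MSE = 9, error df = 15, and n = 4 observations per group, since 20 observations were split across 5 groups).

```python
# Sketch: computing the LSD threshold for the worked ANOVA example.
import math
from scipy import stats

alpha = 0.05
mse = 9            # error mean square from the ANOVA table
df_error = 15      # error degrees of freedom
n_per_group = 4    # observations per treatment mean

t_crit = stats.t.ppf(1 - alpha / 2, df_error)   # two-tailed critical t
lsd = t_crit * math.sqrt(2 * mse / n_per_group)
print(f"LSD = {lsd:.2f}")
# Any pair of treatment means differing by more than this value is
# declared significantly different at the 5% level.
```

Pairs of group means whose difference exceeds this LSD value would be flagged as significantly different.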