INTRODUCTION
The F-test was developed by R.A. Fisher. It is used to test the equality of variances through hypothesis testing. Because the F value is based on the ratio of two variances, the test is also known as the variance ratio test.
Objectives
A two-tailed F test is used to check whether the variances of two given samples (or populations) are equal or not. If the F test instead checks whether one population variance is greater than or less than the other, it becomes a one-tailed F test.
The following conditions are critical for using the F test to compare the variances of two populations:
Normality: the populations must follow a normal distribution.
Independent and random selection of sample items: the sample items must be selected independently and at random.
More than unity: the variance ratio must be equal to or greater than one; it cannot be less than one, because the larger variance estimate is always divided by the smaller one.
Additive property: the total of the different variance components equals the total variance, i.e., the variance between samples plus the variance within samples.
Formula
F statistic for large samples: F = σ1² / σ2², where σ1² > σ2²
Here σ1² is the variance of the first population and σ2² is the variance of the second population.
F statistic for small samples: F = s1² / s2², where s1² > s2²
Here s1² is the variance of the first sample and s2² is the variance of the second sample.
Variance is the square of the standard deviation:
s² = ∑(x - x̄)² / (n - 1)
Degrees of freedom (ν):
ν1 (numerator) = n1 - 1 (sample with the larger variance)
ν2 (denominator) = n2 - 1 (sample with the smaller variance)
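As a concrete illustration of these formulas, here is a minimal Python sketch; the two samples below are made up for the example and are not taken from the text.

```python
# Minimal sketch of the small-sample formulas above; the data are illustrative.
sample1 = [12, 15, 11, 18, 14, 16]
sample2 = [22, 20, 25, 19, 23, 21, 24]


def sample_variance(xs):
    """s^2 = sum((x - x_bar)^2) / (n - 1)"""
    n = len(xs)
    x_bar = sum(xs) / n
    return sum((x - x_bar) ** 2 for x in xs) / (n - 1)


s1_sq = sample_variance(sample1)
s2_sq = sample_variance(sample2)

# The larger variance goes in the numerator, so F is always >= 1.
if s1_sq >= s2_sq:
    F, nu1, nu2 = s1_sq / s2_sq, len(sample1) - 1, len(sample2) - 1
else:
    F, nu1, nu2 = s2_sq / s1_sq, len(sample2) - 1, len(sample1) - 1

print(f"F = {F:.3f} with degrees of freedom ({nu1}, {nu2})")
```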
The calculated F value is then compared with the tabulated F value for ν1 and ν2 at the 1% or 5% level of significance.
If the calculated F value < the table F value, the null hypothesis is accepted and there is no significant difference between the two variances.
If the calculated F value > the table F value, the null hypothesis is rejected and there is a significant difference between the two variances.
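Assuming an F value and degrees of freedom have already been calculated, the comparison can be sketched as follows; scipy.stats.f.ppf stands in for the printed F table, and the numbers are illustrative.

```python
# Decision rule sketch: compare a calculated F with the tabulated F value.
from scipy import stats

F_calculated = 2.10          # illustrative calculated F value
nu1, nu2 = 9, 11             # illustrative degrees of freedom
alpha = 0.05                 # 5% level of significance

F_table = stats.f.ppf(1 - alpha, nu1, nu2)   # tabulated (critical) F value

if F_calculated < F_table:
    print(f"{F_calculated:.2f} < {F_table:.2f}: accept H0 (no significant difference)")
else:
    print(f"{F_calculated:.2f} >= {F_table:.2f}: reject H0 (significant difference)")
```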
Properties of F-distribution
The F-distribution curve is positively skewed to the right, with a range of 0 to ∞ and a median of roughly 1.
The value of F is always zero or positive; it can never be negative.
The shape of the distribution depends on the degrees of freedom of the numerator (ν1) and the denominator (ν2).
The degree of skewness decreases as the degrees of freedom of the numerator and denominator increase.
The F-distribution curve is never exactly symmetrical, but it approaches symmetry as the degrees of freedom increase.
Uses of F-Test
There are different types of F tests, each serving a different purpose.
In statistics, an F-test of equality of variances tests the null hypothesis that two normal populations have the same variance.
The F statistic is also used to test the equality of several means, as in ANOVA (analysis of variance); a brief sketch is given below.
F-tests for linear regression models test whether any of the independent variables in a multiple linear regression are significant, i.e., whether there is a linear relationship between the dependent variable and at least one of the independent variables.
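For the equality-of-means use mentioned above, a one-way ANOVA can be run with scipy.stats.f_oneway; this is only a sketch with made-up group data.

```python
# One-way ANOVA sketch: the F statistic tests whether several group means are equal.
from scipy import stats

group1 = [23, 25, 21, 27, 24]   # illustrative data
group2 = [30, 28, 33, 29, 31]
group3 = [26, 24, 27, 25, 28]

f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A p-value below the chosen significance level suggests at least one mean differs.
```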
Steps to conduct F-test
Choose the test: note down the independent and dependent variables and check that the samples can be assumed to be normally distributed.
Determine the statistical hypotheses.
State the level of significance.
Compute the critical F value from the F table.
Calculate the test statistic: place the larger variance in the numerator and the smaller variance in the denominator, each with degrees of freedom n - 1.
Finally, draw the statistical conclusion: reject the null hypothesis if the test statistic falls in the critical region.
A sketch of these steps as a single function is given below.
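One possible way to wrap the steps in a reusable function is sketched here; the function name and its default 5% significance level are choices made for this example, not part of the text.

```python
# Sketch of the steps above as one function (two-sample F test for equal variances).
from scipy import stats


def f_test_equal_variances(sample1, sample2, alpha=0.05):
    """Return the F statistic, degrees of freedom, critical value and decision."""
    def sample_variance(xs):
        n = len(xs)
        x_bar = sum(xs) / n
        return sum((x - x_bar) ** 2 for x in xs) / (n - 1)

    v1, v2 = sample_variance(sample1), sample_variance(sample2)

    # Larger variance in the numerator, smaller in the denominator.
    if v1 >= v2:
        F, df1, df2 = v1 / v2, len(sample1) - 1, len(sample2) - 1
    else:
        F, df1, df2 = v2 / v1, len(sample2) - 1, len(sample1) - 1

    # Critical value at the chosen level of significance, then the conclusion.
    F_crit = stats.f.ppf(1 - alpha, df1, df2)
    reject_null = F > F_crit
    return F, df1, df2, F_crit, reject_null
```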
Question 1
Two random samples were drawn from two normal populations and their values are:
A: 16, 17, 25, 26, 32, 34, 38, 40, 42
B: 14, 16, 24, 28, 32, 35, 37, 42, 43, 45, 47
Test whether the two populations have the same variance at the 5% level of significance.
Solution:
Given data
A: 16, 17, 25, 26, 32, 34, 38, 40, 42; n1 = 9; x̄ = 270/9 = 30
B: 14, 16, 24, 28, 32, 35, 37, 42, 43, 45, 47; n2 = 11; x̄ = 363/11 = 33
Variance of sample A = ∑(x - x̄)²/(n1 - 1) = 734/8 = 91.75
Variance of sample B = ∑(x - x̄)²/(n2 - 1) = 1298/10 = 129.8
Sample B has the larger variance, so F = 129.8/91.75 = 1.415, with ν1 = 11 - 1 = 10 and ν2 = 9 - 1 = 8.
The table F value for (10, 8) degrees of freedom at the 5% level of significance is 3.35. Since the calculated F value (1.415) is less than the table value, the null hypothesis is accepted: the two populations can be taken to have the same variance.
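As a check on this calculation, here is a minimal Python sketch for the given data; scipy.stats.f.ppf is used in place of the printed F table.

```python
# Reproducing the worked example: F test for the variances of samples A and B.
from scipy import stats

A = [16, 17, 25, 26, 32, 34, 38, 40, 42]
B = [14, 16, 24, 28, 32, 35, 37, 42, 43, 45, 47]


def sample_variance(xs):
    n = len(xs)
    x_bar = sum(xs) / n
    return sum((x - x_bar) ** 2 for x in xs) / (n - 1)


var_A, var_B = sample_variance(A), sample_variance(B)   # 91.75 and 129.8

# Sample B has the larger variance, so it goes in the numerator.
F = var_B / var_A                                        # about 1.415
df1, df2 = len(B) - 1, len(A) - 1                        # 10 and 8
F_table = stats.f.ppf(0.95, df1, df2)                    # about 3.35

print(f"F = {F:.3f}, table value F({df1},{df2}) = {F_table:.3f}")
# F is below the table value, so the null hypothesis of equal variances
# is accepted at the 5% level of significance.
```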