The difference between parametric and non-parametric tests
Slide 1: Assumptions for parametric tests
- The data should be recorded on at least an interval scale.
- Observations (or errors) must be independent, although some analyses can handle non-independent errors.

Measurement scales, from low to high:
- Qualitative scales: nominal scale; ordinal (ranking) scale.
- Quantitative scales: interval scale; ratio scale.

Note that transformation from a higher to a lower scale is possible: tests for lower scales can be applied to data on higher scales, but tests for higher scales cannot be applied to data on lower scales.
Slide 2: Assumptions for parametric tests (continued)
- The data should be recorded on at least an interval scale.
- Observations or errors must be independent.
- Errors (or the data) are normally distributed and have constant variance.

[Figures: heteroscedasticity in the case of ANOVA; homo-/heteroscedasticity in the case of regression.]

NB: To test for equal variances, use Bartlett's test if your data follow a normal (bell-shaped) distribution. If your samples are small, or your data are not normal (or you don't know whether they are), use Levene's test.
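As a minimal sketch of how these two variance tests can be run, assuming SciPy is available, the snippet below applies scipy.stats.bartlett and scipy.stats.levene to three simulated groups; the group means, spreads, and sample sizes are invented for illustration.

```python
# Sketch: testing for equal variances across groups (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=10.0, scale=2.0, size=30)
group_c = rng.normal(loc=10.0, scale=3.5, size=30)  # deliberately larger spread

# Bartlett's test: appropriate when each group is roughly normally distributed.
stat_b, p_b = stats.bartlett(group_a, group_b, group_c)

# Levene's test: more robust when samples are small or non-normal.
stat_l, p_l = stats.levene(group_a, group_b, group_c)

print(f"Bartlett: statistic={stat_b:.3f}, p={p_b:.4f}")
print(f"Levene:   statistic={stat_l:.3f}, p={p_l:.4f}")
```

A small p-value in either test suggests that the constant-variance assumption is violated.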
Slide 3: Examples of parametric tests
ANOVA, the t-test, linear regression, and the Pearson correlation coefficient.

Problems with ecological datasets:
- Small sample sizes.
- Frequent violation of the assumptions.

In that case, either transform your data (to restore normality and homoscedasticity; see the sketch below) or use a non-parametric test.
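To illustrate the transformation route, here is a hedged sketch: it simulates a right-skewed sample (the lognormal parameters are arbitrary) and uses the Shapiro-Wilk test from SciPy to check normality before and after a log transform.

```python
# Sketch: log-transforming skewed data toward normality (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=1.0, sigma=0.8, size=40)  # right-skewed sample

_, p_raw = stats.shapiro(skewed)          # normality check on the raw data
_, p_log = stats.shapiro(np.log(skewed))  # normality check after log transform

print(f"Shapiro-Wilk p (raw data):        {p_raw:.4f}")  # typically small: non-normal
print(f"Shapiro-Wilk p (log-transformed): {p_log:.4f}")  # typically larger: closer to normal
```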
Slide 4: Advantages and disadvantages of nonparametric tests

Advantages of nonparametric tests:
- Applicable even if the sample size is (very) small.
- No normality assumption (distribution-free).
- No homoscedasticity assumption.
- Data on ordinal or nominal scales are possible (but interval/ratio data work too).

Disadvantage of nonparametric tests:
- Lower power than parametric tests. The power of a test = 1 − β = the probability of rejecting H0 when it is in fact false (and thus should be rejected!).

More information: https://statisticsbyjim.com/hypothesis-testing/nonparametric-parametric-tests/
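As one concrete example, the Mann-Whitney U test is a nonparametric counterpart of the two-sample t-test: it assumes neither normality nor equal variances, and it works on ordinal data. The sketch below applies scipy.stats.mannwhitneyu to two invented groups.

```python
# Sketch: Mann-Whitney U test on two small, invented samples.
from scipy import stats

control   = [3, 5, 4, 6, 2, 5, 4]    # e.g. counts per plot (hypothetical)
treatment = [7, 9, 6, 8, 10, 7, 9]

u_stat, p_value = stats.mannwhitneyu(control, treatment, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.4f}")
```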
Slide 5: Possible outcomes of a test

The statistical outcome (reject or fail to reject the null hypothesis) can be crossed with the unknown reality (H0 true or the alternative hypothesis H1 true):
- Reject H0 (p < α) when H0 is in fact true: Type I error, a "false positive" (probability α).
- Reject H0 (p < α) when H1 is in fact true: correct outcome, a "correct positive" (probability 1 − β).
- Fail to reject H0 (p > α) when H0 is in fact true: correct outcome, a "correct negative" (probability 1 − α).
- Fail to reject H0 (p > α) when H1 is in fact true: Type II error, a "false negative" (probability β).

NB: a "positive" result means rejection of H0; failing to reject H0 is not the same as accepting H0.
Slide 6: Possible outcomes of a test (continued)

- Probability of a Type I error = α = the probability of rejecting H0 when it is in fact true; the larger α, the more likely it is that H0 is rejected falsely.
- Probability of a Type II error = β = the probability of failing to reject H0 when it is in fact false (i.e. when H1 is true).
- The power of a test = 1 − β = the probability of rejecting H0 when it is in fact false (and thus should be rejected!).
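Both probabilities can be made concrete by simulation. The sketch below (sample size, effect size, and seed are arbitrary choices, not taken from the slides) repeatedly runs a two-sample t-test: once with H0 true, to estimate the Type I error rate, and once with a real effect, to estimate the power.

```python
# Sketch: Monte Carlo estimates of alpha and power for a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, n_sim = 0.05, 20, 5000

# H0 true: both samples share the same mean, so rejections are false positives.
false_pos = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(n_sim)
)

# H0 false: a true mean shift of 0.8 SD, so rejections are correct positives.
true_pos = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.8, 1, n)).pvalue < alpha
    for _ in range(n_sim)
)

print(f"Estimated Type I error rate: {false_pos / n_sim:.3f}")  # close to alpha
print(f"Estimated power (1 - beta):  {true_pos / n_sim:.3f}")
```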
Slide 7: Possible outcomes of a test (continued)

P-value < α: reject H0; there is a significant difference, and the probability of being wrong is α. At α = 0.05, you can be at least 95% sure that you are correct when a hypothesis test finds a significant difference.

P-value > α: fail to reject H0; we did not find a significant difference. This is not the same as accepting H0 (i.e. concluding significant homogeneity). There is no way to reliably estimate how likely you are to be wrong (a Type II error).

Statistical tests are set up in a way that gives researchers relative certainty about the result only when the test shows a significant difference.