Understanding ANOVA: Statistical Analysis for Business
palwinderkaurhkc
Nov 01, 2025
Understanding ANOVA
(Analysis of Variance)
A Guide to Comparing Group Means
What is ANOVA?
Moving beyond the t-test to compare three or more groups at once.
Why Not Just Use t-Tests?
Comparing multiple groups (e.g., A vs. B, B vs. C, A vs. C) with individual t-tests seems logical, but it has a hidden flaw.
+ Each test has a chance of a "false positive" (Type I Error), typically 5% (α = 0.05).
+ When you run multiple tests, these error chances add up quickly. This is called **Type I Error Inflation**.
+ With just 3 groups (three pairwise comparisons), your *family-wise error rate* (the chance of at least one false positive) jumps to about 14%!
Solution: ANOVA tests all group means simultaneously, keeping the overall error rate at 5%.
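The inflation above can be checked with a couple of lines of arithmetic. A minimal sketch, assuming the pairwise tests are independent (FWER = 1 − (1 − α)^k for k tests):

```python
# Family-wise error rate for k independent tests at alpha = 0.05:
#   FWER = 1 - (1 - alpha)^k
alpha = 0.05

# 3 groups need 3 pairwise t-tests; 4 groups need 6.
for k in (1, 3, 6):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k} tests -> family-wise error rate = {fwer:.1%}")
```

For k = 3 this gives 14.3%, matching the "about 14%" figure on the slide; with 4 groups (6 tests) it already exceeds 26%.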
How ANOVA Thinks: Analyzing Variance
Core Idea: ANOVA compares two types of variance to see if the group means are different.
1. Between-Group Variance (The "Signal"): How much do the group *means* vary from the overall *grand mean*? A large variance here suggests the groups are different.
2. Within-Group Variance (The "Noise"): How much do individual data points vary from their *own group's mean*? This represents the random, natural variation.
The F-Statistic: It's the ratio of these two variances:
F = (Variance Between Groups) / (Variance Within Groups)
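The ratio above can be computed from first principles. A minimal sketch in plain Python, using made-up score data for three hypothetical groups:

```python
# F-ratio from first principles (hypothetical, made-up data).
groups = [
    [75, 72, 78, 74, 76],   # e.g. Method A scores
    [82, 85, 80, 84, 83],   # Method B
    [71, 74, 70, 73, 72],   # Method C
]

k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations
grand_mean = sum(sum(g) for g in groups) / n

# Between-group sum of squares: group means around the grand mean (df = k - 1)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: points around their own group mean (df = n - k)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

ms_between = ss_between / (k - 1)    # the "signal"
ms_within = ss_within / (n - k)      # the "noise"
F = ms_between / ms_within
print(f"F({k - 1}, {n - k}) = {F:.2f}")
```

Dividing each sum of squares by its degrees of freedom turns it into a mean square, so the two variances are on a comparable scale before taking the ratio.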
The Logic of the F-Ratio
Large F-Ratio (Significant): This happens when the "Signal" is much stronger than the "Noise." The difference between the groups is much larger than the random variation within them. This suggests the groups are truly different.
Small F-Ratio (Not Significant): This happens when the "Signal" is weak and drowned out by the "Noise." The difference between the groups is small compared to the random variation within them. The groups are not considered different.
Types of ANOVA
Choosing the right test for your experimental design.
One-Way vs. Two-Way ANOVA
One-Way ANOVA: Use this when you have one independent variable (factor) with 3 or more levels.
Question: Does *teaching method* (A, B, or C) affect *test scores*?
Two-Way ANOVA: Use this when you have two independent variables (factors).
Question: How do *teaching method* AND *study time* (Low, High) affect *test scores*?
Interaction Effect: A Two-Way ANOVA also tests if the factors interact.
Example: Method A only works well with *high* study time, while Method B works well for both.
The "Rules" of ANOVA
For your ANOVA results to be valid, your data should meet these key assumptions:
Normality: The data within each group should be approximately normally distributed. (You can check this with a histogram or a Shapiro-Wilk test.)
Homogeneity of Variance: The variance (spread) of the data should be roughly equal across all groups. (You can check this with Levene's Test.)
Independence: The observations must be independent. This means one participant's score does not influence another's (e.g., participants are not in multiple groups).
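The first two checks named above (Shapiro-Wilk, Levene) are available in SciPy. A minimal sketch, assuming three made-up score samples; the independence assumption is a property of the study design and cannot be tested from the numbers alone:

```python
import scipy.stats as stats

# Hypothetical score samples for three teaching methods (made-up numbers).
a = [75, 72, 78, 74, 76, 73, 77, 75]
b = [82, 85, 80, 84, 83, 81, 86, 82]
c = [71, 74, 70, 73, 72, 69, 75, 71]

# Normality: Shapiro-Wilk per group (p > 0.05 -> no evidence of non-normality).
for name, sample in [("A", a), ("B", b), ("C", c)]:
    stat, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk {name}: W = {stat:.3f}, p = {p:.3f}")

# Homogeneity of variance: Levene's test across all groups at once.
stat, p = stats.levene(a, b, c)
print(f"Levene: W = {stat:.3f}, p = {p:.3f}")
```

Note the logic is inverted relative to the main ANOVA: here a *large* p-value is the reassuring outcome, because each test's null hypothesis is that the assumption holds.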
Interpreting the Results
Understanding the F-Statistic and the all-important p-value.
Example: Teaching Method Study
Mean test scores:
Method A (Control): 75%
Method B (Interactive): —
Method C (Video): 72%
An ANOVA test was run to compare the mean test scores of the three methods.
The output gives us: F(2, 87) = 8.45, p = 0.0004. (The degrees of freedom follow from the design: 3 groups gives df₁ = k − 1 = 2, and 90 total observations gives df₂ = N − k = 87.)
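The reported p-value can be reproduced from the F-statistic and its degrees of freedom using the F distribution's survival function. A minimal sketch with SciPy:

```python
from scipy.stats import f

# p-value = P(F >= 8.45) under the F(2, 87) distribution,
# i.e. the survival function evaluated at the observed statistic.
p = f.sf(8.45, dfn=2, dfd=87)
print(f"p = {p:.4f}")  # matches the reported p = 0.0004
```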
Drawing a Conclusion
Interpreting the p-value
Our significance level (a) is 0.05.
Our p-value is 0.0004.
Since p < a (0.0004 < 0.05), we reject the
null hypothesis.
What Does This Mean?
We have strong evidence that not all group
means are equal. There is a statistically
significant difference in test scores based on
the teaching method.
Next Step: ANOVA doesn't tell us *which* groups are different. We must run a post-hoc test (like Tukey's HSD) to find out (e.g., is B > A? is B > C?).
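The post-hoc step mentioned above is available directly in SciPy (1.8 and later). A minimal sketch, assuming made-up score samples for the three methods rather than the study's actual data:

```python
from scipy.stats import tukey_hsd

# Hypothetical score samples for the three methods (made-up numbers).
a = [75, 72, 78, 74, 76, 73]   # Method A (Control)
b = [82, 85, 80, 84, 83, 81]   # Method B (Interactive)
c = [71, 74, 70, 73, 72, 69]   # Method C (Video)

# Tukey's HSD compares every pair of groups while controlling the
# family-wise error rate, avoiding the t-test inflation problem.
result = tukey_hsd(a, b, c)
print(result)  # pairwise mean differences, confidence intervals, p-values
```

Each pairwise p-value in the result is already adjusted for multiple comparisons, so each can be read against α = 0.05 directly.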