Probability and Statistics for Engineers


Probability & Statistics for Engineers & Scientists,
by Walpole, Myers, Myers & Ye
Chapter 3 Notes
Class notes for ISE 201
San Jose State University
Industrial & Systems Engineering Dept.
Steve Kennedy
Spring 2007

A Random Variable
So far, we have seen how to calculate the probability of
occurrence of various outcomes in the sample space of
an experiment, and to calculate the probability that
various events (subsets of S) occur.
A random variable is a function that associates a real
number with each element in a sample space.
•Example: 3 components can either be defective (D) or not (N).
•What is the sample space for this situation?
•S = {NNN, NND, NDN, NDD, DNN, DND, DDN, DDD}
•Let random variable X denote the number of defective
components in each sample point.
•Describe P(X ≥ 2) in words.
•What are the values of P(X = x), for x = 0, 1, 2, 3?
•.125, .375, .375, .125 (verified in the sketch below).

Discrete and Continuous Sample Spaces
A discrete sample space has a finite or countably
infinite number of points (outcomes).
•Example of countably infinite: experiment consists of flipping
a coin until a head occurs.
•S = ?
•S = {H, TH, TTH, TTTH, TTTTH, TTTTTH, …}
•S has a countably infinite number of sample points.
A continuous sample space has an infinite number of
points, equivalent to the number of points on a line
segment.
•For discrete sample spaces, we know that the sum of all of the
probabilities of the points in the sample space equals 1 (illustrated
below for the coin-flip example).
•Addition won't work for continuous sample spaces. Here,
P(X = x) = 0 for any given value of x.
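A quick sketch for the coin-flip example, under the assumption that the coin is fair, so the sample point needing n flips has probability (1/2)^n; the partial sums of these countably many probabilities approach 1:

```python
# Coin-flip-until-first-head example: the sample point requiring n flips
# has probability (1/2)**n for a fair coin, and the countably infinite
# collection of probabilities still sums to 1 in the limit.
partial_sum = sum(0.5 ** n for n in range(1, 31))
print(partial_sum)   # 0.9999999990686774, approaching 1 as terms are added
```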

Discrete Distributions
For a discrete random variable X, we generally look at
the probability P(X = x) of X taking on each value x.
Often, the probability can be expressed in a formula,
f(x) = P(X = x).
The set of ordered pairs (x, f(x)) is called the
probability distribution or probability function of X.

Note that f(x) ≥ 0 and Σ f(x) = 1, summing over all x
(checked in the sketch below).
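A sketch that packages the earlier 3-component example as a formula, f(x) = C(3, x)(1/2)^3 (the binomial pmf with n = 3, p = 1/2, consistent with the equal-likelihood assumption), and checks both properties:

```python
from math import comb

# Candidate pmf for the 3-component example: f(x) = C(3, x) * (1/2)**3.
def f(x):
    return comb(3, x) * 0.5 ** 3

support = range(4)
assert all(f(x) >= 0 for x in support)                  # f(x) >= 0
assert abs(sum(f(x) for x in support) - 1.0) < 1e-12    # sum of f(x) = 1
print([(x, f(x)) for x in support])   # the ordered pairs (x, f(x))
```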

Cumulative Distribution & Plotting
The cumulative distribution, denoted F(x), of a discrete
random variable X with distribution f(x), is
F(x) = P(X ≤ x)
•How is F(x) calculated?
•F(x) = Σ f(t), summed over all t ≤ x.
It is useful to plot both a probability distribution and
the corresponding cumulative distribution.
•Typically, the values of f(x) versus x are plotted using a
probability histogram.
•Cumulative distributions are also plotted using a similar type
of histogram/step function (tabulated in the sketch below).
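A sketch tabulating f(x) and F(x) for the 3-component pmf used above (plotting is omitted; the running sums are the heights of the step function):

```python
from itertools import accumulate
from math import comb

# F(x) = sum of f(t) over t <= x, tabulated for the 3-component pmf.
f = [comb(3, x) * 0.5 ** 3 for x in range(4)]
F = list(accumulate(f))   # running sums give the step-function CDF
for x, (fx, Fx) in enumerate(zip(f, F)):
    print(f"x = {x}: f(x) = {fx:.3f}, F(x) = {Fx:.3f}")
# F jumps by f(x) at each x and reaches 1 at the largest value of x.
```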

Continuous Distributions
Continuous distributions have an infinite number of
points in the sample space, so for a given value of x,
what is P(X = x)?
•P(X = x) = 0.
•Otherwise the probabilities couldn’t sum to 1.
What we can calculate is the probability that X lies in a
given interval, such as P(a < X < b) or P(X < c).
•Since the probability of any individual point is 0,
P(a < X < b) = P(a ≤ X ≤ b),
so the endpoints can be included or not.
For continuous distributions, f(x) is called a probability
density function.

Probability Density Functions
If f(x) is a continuous probability density function,
•f(x) ≥ 0, as before.
•What corresponds to Σ f(x) = 1 for discrete distributions?
•∫ f(x) dx = 1, integrated from −∞ to ∞.
•And, P(a ≤ X ≤ b) = ?
•P(a ≤ X ≤ b) = ∫ f(x) dx, integrated from a to b.
The cumulative distribution of a continuous random
variable X is?
•F(x) = P(X ≤ x) = ?
•F(x) = ∫ f(t) dt, integrated from −∞ to x.
What is P(a < X < b) in terms of F(x)?
•P(a < X ≤ b) = F(b) − F(a) (see the sketch below).
•If discrete, must use "a < X", and not "a ≤ X", above.
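A sketch with an illustrative density not taken from the slides: the exponential pdf f(x) = e^(−x) for x ≥ 0, whose CDF is F(x) = 1 − e^(−x), so interval probabilities follow from F(b) − F(a):

```python
from math import exp

# F(x) = 1 - e**(-x) is the CDF of the exponential pdf f(x) = e**(-x), x >= 0.
def F(x):
    return 1.0 - exp(-x) if x >= 0 else 0.0

a, b = 1.0, 2.0
print(F(b) - F(a))   # P(1 < X <= 2) ≈ 0.2325

# Cross-check with a crude midpoint-rule integral of f over [a, b].
n = 100_000
dx = (b - a) / n
print(sum(exp(-(a + (i + 0.5) * dx)) * dx for i in range(n)))  # same value
```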

Joint Probability Distributions
Given a pair of discrete random variables on the same
sample space, X and Y, the joint probability
distribution of X and Y is

f(x, y) = P(X = x, Y = y)
f(x, y) is the probability that X = x and Y = y occur simultaneously.
The usual rules hold for joint probability distributions:
•f(x, y) ≥ 0
•Σ Σ f(x, y) = 1, summing over all x and y
•For any region A in the xy plane,
P[(X, Y) ∈ A] = Σ f(x, y), summed over all (x, y) in A.
For continuous joint probability distributions, the
sums above are replaced with integrals.
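A sketch using a hypothetical joint pmf (the table values are invented for illustration and chosen to sum to 1) that checks both rules and evaluates a region probability:

```python
from fractions import Fraction

# A hypothetical joint pmf for X in {0, 1} and Y in {0, 1, 2}.
f = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 4), (0, 2): Fraction(1, 8),
    (1, 0): Fraction(1, 8), (1, 1): Fraction(1, 4), (1, 2): Fraction(1, 8),
}
assert all(p >= 0 for p in f.values())   # f(x, y) >= 0
assert sum(f.values()) == 1              # double sum equals 1

# P[(X, Y) in A] for the region A = {(x, y): x + y <= 1}.
print(sum(p for (x, y), p in f.items() if x + y <= 1))   # 1/2
```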

Marginal Distributions
The marginal distribution of X alone or Y alone can be
calculated from the joint distribution function as
follows:
•g(x) = Σ f(x, y), summed over y, and h(y) = Σ f(x, y),
summed over x, if discrete
•g(x) = ∫ f(x, y) dy and h(y) = ∫ f(x, y) dx if continuous
In other words, g(x) = P(X = x) is the sum (or integral) of
f(x, y) over all values of y (computed in the sketch below).
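A sketch computing both marginals of the same hypothetical joint pmf used above:

```python
from collections import defaultdict
from fractions import Fraction

# Marginals of the hypothetical joint pmf from the previous sketch.
f = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 4), (0, 2): Fraction(1, 8),
    (1, 0): Fraction(1, 8), (1, 1): Fraction(1, 4), (1, 2): Fraction(1, 8),
}
g, h = defaultdict(Fraction), defaultdict(Fraction)
for (x, y), p in f.items():
    g[x] += p   # g(x): sum over all y
    h[y] += p   # h(y): sum over all x
print(dict(g))  # {0: Fraction(1, 2), 1: Fraction(1, 2)}
print(dict(h))  # {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
```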

Conditional Distributions
For either discrete or continuous random variables, X
and Y, the conditional distribution of Y, given that
X = x, is
f(y | x) = f(x, y) / g(x) if g(x) > 0
and
f(x | y) = f(x, y) / h(y) if h(y) > 0
X and Y are statistically independent if
f(x, y) = g(x) h(y)
for all x and y within their range.
A similar equation holds for n mutually statistically
independent jointly distributed random variables.
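A sketch of f(y | x) = f(x, y) / g(x) on the same hypothetical joint pmf:

```python
from fractions import Fraction

# Conditional distribution of Y given X = x for the hypothetical joint pmf.
f = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 4), (0, 2): Fraction(1, 8),
    (1, 0): Fraction(1, 8), (1, 1): Fraction(1, 4), (1, 2): Fraction(1, 8),
}
x = 0
g_x = sum(p for (xi, _), p in f.items() if xi == x)    # marginal g(0) = 1/2
cond = {y: f[(x, y)] / g_x for (xi, y) in f if xi == x}
print(cond)   # f(y | x=0): {0: 1/4, 1: 1/2, 2: 1/4}
```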

Statistical Independence
The definition of independence is as before:
•Previously, P(A | B) = P(A) and P(B | A) = P(B).
•How about in terms of the conditional distributions?
•f(x | y) = g(x) and f(y | x) = h(y).
•The other way to demonstrate independence?
•f(x, y) = g(x) h(y) for all x, y in range (checked in the sketch below).
Similar formulas also apply to more than two mutually
independent random variables.
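A sketch applying the product test to the hypothetical joint pmf from the earlier slides; it passes because that table was in fact constructed as a product of its marginals:

```python
from collections import defaultdict
from fractions import Fraction

# Check f(x, y) == g(x) * h(y) for every (x, y) in the hypothetical table.
f = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 4), (0, 2): Fraction(1, 8),
    (1, 0): Fraction(1, 8), (1, 1): Fraction(1, 4), (1, 2): Fraction(1, 8),
}
g, h = defaultdict(Fraction), defaultdict(Fraction)
for (x, y), p in f.items():
    g[x] += p
    h[y] += p
print(all(f[(x, y)] == g[x] * h[y] for (x, y) in f))   # True -> independent
```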