Random process.pptx

Slide Content

Overview of random variables and random processes.

Table of Contents
◊ 1 Introduction
◊ 2 Probability
◊ 3 Random Variables
◊ 4 Statistical Averages
◊ 5 Random Processes
◊ 6 Mean, Correlation and Covariance Functions
◊ 7 Transmission of a Random Process through a Linear Filter
◊ 8 Power Spectral Density
◊ 9 Gaussian Process
◊ 10 Noise
◊ 11 Narrowband Noise

Introduction
◊ The Fourier transform is a mathematical tool for the representation of deterministic signals.
◊ Deterministic signals: the class of signals that may be modeled as completely specified functions of time.
◊ A signal is "random" if it is not possible to predict its precise value in advance.
◊ A random process consists of an ensemble (family) of sample functions, each of which varies randomly with time.
◊ A random variable is obtained by observing a random process at a fixed instant of time.

Probability
◊ Probability theory is rooted in phenomena that, explicitly or implicitly, can be modeled by an experiment with an outcome that is subject to chance.
◊ Example: the experiment may be the observation of the result of tossing a fair coin. In this experiment, the possible outcomes of a trial are "heads" or "tails".
◊ If an experiment has K possible outcomes, then for the k-th possible outcome we have a point called the sample point, which we denote by s_k. With this basic framework, we make the following definitions:
◊ The set of all possible outcomes of the experiment is called the sample space, which we denote by S.
◊ An event corresponds to either a single sample point or a set of sample points in the space S.

Probability
◊ A single sample point is called an elementary event.
◊ The entire sample space S is called the sure event; the null set ∅ is called the null or impossible event.
◊ Two events are mutually exclusive if the occurrence of one event precludes the occurrence of the other event.
◊ A probability measure P is a function that assigns a non-negative number to an event A in the sample space S and satisfies the following three properties (axioms):
1. 0 ≤ P[A] ≤ 1   (5.1)
2. P[S] = 1   (5.2)
3. If A and B are two mutually exclusive events, then P[A ∪ B] = P[A] + P[B]   (5.3)

Conditional Probability

Conditional Probability
◊ Example 5.1 Binary Symmetric Channel
◊ This channel is said to be discrete in that it is designed to handle discrete messages.
◊ The channel is memoryless in the sense that the channel output at any time depends only on the channel input at that time.
◊ The channel is symmetric, which means that the probability of receiving symbol 1 when symbol 0 is sent is the same as the probability of receiving symbol 0 when symbol 1 is sent.
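
A minimal simulation sketch of a binary symmetric channel (assuming Python with NumPy; the crossover probability p = 0.1 and the sample size are illustrative choices, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1                      # assumed crossover probability of the BSC
n = 200_000                  # number of transmitted symbols

sent = rng.integers(0, 2, size=n)        # equiprobable 0/1 source
flip = rng.random(n) < p                 # each symbol flips independently with probability p
received = sent ^ flip                   # memoryless channel: output depends only on the current input

# Conditional (transition) probabilities estimated from the simulation
p_1_given_0 = np.mean(received[sent == 0] == 1)
p_0_given_1 = np.mean(received[sent == 1] == 0)
print(p_1_given_0, p_0_given_1)          # both close to p, illustrating the symmetry of the channel
```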

Conditional Probability

Random Variables
◊ We denote the random variable as X(s) or just X.
◊ X is a function.
◊ A random variable may be discrete or continuous.
◊ Consider the random variable X and the probability of the event X ≤ x. We denote this probability by P[X ≤ x]. To simplify our notation, we write
F_X(x) = P[X ≤ x]   (5.15)
◊ The function F_X(x) is called the cumulative distribution function (cdf) or simply the distribution function of the random variable X.
◊ The distribution function F_X(x) has the following properties:
1. 0 ≤ F_X(x) ≤ 1
2. F_X(x_1) ≤ F_X(x_2) if x_1 ≤ x_2

Random Variables
◊ There may be more than one random variable associated with the same random experiment.

Random Variables
◊ If the distribution function is continuously differentiable, then
f_X(x) = dF_X(x)/dx   (5.17)
◊ f_X(x) is called the probability density function (pdf) of the random variable X.
◊ The probability of the event x_1 < X ≤ x_2 equals
P[x_1 < X ≤ x_2] = P[X ≤ x_2] - P[X ≤ x_1] = F_X(x_2) - F_X(x_1) = ∫_{x_1}^{x_2} f_X(x) dx   (5.19)
◊ Equivalently, F_X(x) = ∫_{-∞}^{x} f_X(ξ) dξ.
◊ A probability density function must always be a nonnegative function, with a total area of one.

Random Variables
◊ Example 5.2 Uniform Distribution
f_X(x) = 0 for x ≤ a;  1/(b - a) for a < x ≤ b;  0 for x > b
F_X(x) = 0 for x ≤ a;  (x - a)/(b - a) for a < x ≤ b;  1 for x > b
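
A quick numerical check of the uniform cdf (assuming Python with NumPy; the interval endpoints a and b are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.0, 5.0                                # assumed interval endpoints
samples = rng.uniform(a, b, size=100_000)

# Empirical CDF at a few points versus F_X(x) = (x - a) / (b - a) on (a, b]
for x in (2.5, 3.5, 4.5):
    empirical = np.mean(samples <= x)
    exact = (x - a) / (b - a)
    print(x, empirical, exact)
```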

Random Variables
◊ Several Random Variables
◊ Consider two random variables X and Y. We define the joint distribution function F_{X,Y}(x, y) as the probability that the random variable X is less than or equal to a specified value x and that the random variable Y is less than or equal to a specified value y:
F_{X,Y}(x, y) = P[X ≤ x, Y ≤ y]   (5.23)
◊ Suppose that the joint distribution function F_{X,Y}(x, y) is continuous everywhere, and that the partial derivative
f_{X,Y}(x, y) = ∂²F_{X,Y}(x, y) / (∂x ∂y)   (5.24)
exists and is continuous everywhere. We call the function f_{X,Y}(x, y) the joint probability density function of the random variables X and Y.

Random Variables
◊ Several Random Variables
◊ The joint distribution function F_{X,Y}(x, y) is a monotone-nondecreasing function of both x and y, and
∫_{-∞}^{∞} ∫_{-∞}^{∞} f_{X,Y}(ξ, η) dξ dη = 1
◊ Marginal density:
f_X(x) = ∫_{-∞}^{∞} f_{X,Y}(x, η) dη   (5.27)
◊ Suppose that X and Y are two continuous random variables with joint probability density function f_{X,Y}(x, y). The conditional probability density function of Y given that X = x is defined by
f_Y(y | x) = f_{X,Y}(x, y) / f_X(x)   (5.28)

Random Variables
◊ Several Random Variables
◊ If the random variables X and Y are statistically independent, then knowledge of the outcome of X can in no way affect the distribution of Y:
f_Y(y | x) = f_Y(y)
f_{X,Y}(x, y) = f_X(x) f_Y(y)   (5.32)
P[X ∈ A, Y ∈ B] = P[X ∈ A] P[Y ∈ B]   (5.33)

Random Variables
◊ Example 5.3 Binomial Random Variable
◊ Consider a sequence of coin-tossing experiments where the probability of a head is p, and let X_n be the Bernoulli random variable representing the outcome of the n-th toss.
◊ Let Y be the number of heads that occur on N tosses of the coin:
Y = Σ_{n=1}^{N} X_n
P[Y = y] = C(N, y) p^y (1 - p)^{N - y},  where C(N, y) = N! / (y! (N - y)!)
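
A sketch comparing the simulated sum of Bernoulli trials with the binomial pmf (assuming Python with NumPy; N = 10 and p = 0.3 are illustrative values):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
N, p = 10, 0.3                 # assumed number of tosses and head probability
trials = 100_000

# Y = sum of N Bernoulli(p) outcomes, repeated over many independent trials
Y = rng.binomial(1, p, size=(trials, N)).sum(axis=1)

for y in range(N + 1):
    empirical = np.mean(Y == y)
    exact = comb(N, y) * p**y * (1 - p)**(N - y)
    print(y, round(empirical, 4), round(exact, 4))
```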

Statistical Averages
◊ The expected value or mean of a random variable X is defined by
μ_X = E[X] = ∫_{-∞}^{∞} x f_X(x) dx   (5.36)
◊ Function of a Random Variable
◊ Let X denote a random variable, and let g(X) denote a real-valued function defined on the real line. We denote
Y = g(X)   (5.37)
◊ To find the expected value of the random variable Y:
E[Y] = ∫_{-∞}^{∞} y f_Y(y) dy = ∫_{-∞}^{∞} g(x) f_X(x) dx   (5.38)

Statistical Averages
◊ Example 5.4 Cosinusoidal Random Variable
◊ Let Y = g(X) = cos(X), where X is a random variable uniformly distributed in the interval (-π, π):
f_X(x) = 1/(2π) for -π < x < π;  0 otherwise
E[Y] = ∫_{-π}^{π} cos(x) (1/(2π)) dx = [sin(x)/(2π)] evaluated from -π to π = 0
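
A one-line Monte Carlo check of this expectation (assuming Python with NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-np.pi, np.pi, size=1_000_000)   # X ~ Uniform(-pi, pi)
print(np.mean(np.cos(x)))                        # E[cos X] should be close to 0
```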

Statistical Averages
◊ Moments
◊ For the special case of g(X) = X^n, we obtain the n-th moment of the probability distribution of the random variable X; that is,
E[X^n] = ∫_{-∞}^{∞} x^n f_X(x) dx   (5.39)
◊ Mean-square value of X:
E[X²] = ∫_{-∞}^{∞} x² f_X(x) dx   (5.40)
◊ The n-th central moment is
E[(X - μ_X)^n] = ∫_{-∞}^{∞} (x - μ_X)^n f_X(x) dx   (5.41)

Statistical Averages
◊ For n = 2 the second central moment is referred to as the variance of the random variable X, written as
var[X] = E[(X - μ_X)²] = ∫_{-∞}^{∞} (x - μ_X)² f_X(x) dx   (5.42)
◊ The variance of a random variable X is commonly denoted as σ_X².
◊ The square root of the variance is called the standard deviation of the random variable X.
◊ Expanding the square gives
σ_X² = var[X] = E[X² - 2μ_X X + μ_X²] = E[X²] - 2μ_X E[X] + μ_X² = E[X²] - μ_X²   (5.44)
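
A numeric check of the identity (5.44) on sampled data (assuming Python with NumPy; the exponential distribution is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=1_000_000)   # any distribution works; exponential chosen for illustration

var_direct = np.mean((x - x.mean())**2)          # E[(X - mu_X)^2]
var_identity = np.mean(x**2) - x.mean()**2       # E[X^2] - mu_X^2, Eq. (5.44)
print(var_direct, var_identity)                  # the two estimates agree
```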

Statistical Averages
◊ Chebyshev Inequality
◊ Suppose X is an arbitrary random variable with finite mean m_x and finite variance σ_x². For any positive number δ:
P[|X - m_x| ≥ δ] ≤ σ_x² / δ²
◊ Proof:
σ_x² = ∫_{-∞}^{∞} (x - m_x)² p(x) dx ≥ ∫_{|x - m_x| ≥ δ} (x - m_x)² p(x) dx ≥ δ² ∫_{|x - m_x| ≥ δ} p(x) dx = δ² P[|X - m_x| ≥ δ]

Statistical Averages
◊ Chebyshev Inequality
◊ Another way to view the Chebyshev bound is by working with the zero-mean random variable Y = X - m_x.
◊ Define a function g(Y) as:
g(Y) = 1 for |Y| ≥ δ;  0 for |Y| < δ,  so that E[g(Y)] = P[|Y| ≥ δ]
◊ Upper-bound g(Y) by the quadratic (Y/δ)², i.e., g(Y) ≤ (Y/δ)².
◊ The tail probability then satisfies
E[g(Y)] ≤ E[(Y/δ)²] = E[Y²]/δ² = σ_y²/δ² = σ_x²/δ²

Statistical Averages
◊ Chebyshev Inequality
◊ A quadratic upper bound on g(Y) is used in obtaining the tail probability (Chebyshev bound).
◊ For many practical applications, the Chebyshev bound is extremely loose.
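
A short sketch illustrating how loose the bound can be for a Gaussian random variable (assuming Python with NumPy; the unit-variance Gaussian is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)   # zero-mean, unit-variance Gaussian

for delta in (1.0, 2.0, 3.0):
    tail = np.mean(np.abs(x) >= delta)               # actual P(|X - m_x| >= delta)
    bound = 1.0 / delta**2                            # Chebyshev bound sigma^2 / delta^2
    print(delta, tail, bound)                         # the bound greatly overestimates the true tail
```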

Statistical Averages
◊ The characteristic function φ_X(υ) is defined as the expectation of the complex exponential function exp(jυX), as shown by
φ_X(υ) = E[exp(jυX)] = ∫_{-∞}^{∞} f_X(x) exp(jυx) dx   (5.45)
◊ In other words, the characteristic function φ_X(υ) is the Fourier transform of the probability density function f_X(x).
◊ Analogous with the inverse Fourier transform:
f_X(x) = (1/2π) ∫_{-∞}^{∞} φ_X(υ) exp(-jυx) dυ   (5.46)

Statistical Averages
◊ Characteristic Functions
◊ The first moment (mean) can be obtained by:
E[X] = m_x = -j dΨ(jv)/dv, evaluated at v = 0
◊ Since the differentiation process can be repeated, the n-th moment can be calculated by:
E[X^n] = (-j)^n d^nΨ(jv)/dv^n, evaluated at v = 0

Statistical Averages
◊ Characteristic Functions
◊ Determining the PDF of a sum of statistically independent random variables:
Y = Σ_{i=1}^{n} X_i
Ψ_Y(jv) = E[e^{jvY}] = E[exp(jv Σ_{i=1}^{n} X_i)] = ∫...∫ (Π_{i=1}^{n} e^{jv x_i}) p(x_1, x_2, ..., x_n) dx_1 dx_2 ... dx_n
◊ Since the random variables are statistically independent, p(x_1, x_2, ..., x_n) = p(x_1) p(x_2) ... p(x_n), and therefore
Ψ_Y(jv) = Π_{i=1}^{n} Ψ_{X_i}(jv)
◊ If the X_i are iid (independent and identically distributed), then
Ψ_Y(jv) = [Ψ_X(jv)]^n

Statistical Averages
◊ Characteristic Functions
◊ The PDF of Y is determined from the inverse Fourier transform of Ψ_Y(jv).
◊ Since the characteristic function of the sum of n statistically independent random variables is equal to the product of the characteristic functions of the individual random variables, it follows that the PDF of Y is the n-fold convolution of the PDFs of the X_i (the product in the transform domain corresponds to convolution in the original domain).
◊ Usually, the n-fold convolution is more difficult to perform than the characteristic-function method in determining the PDF of Y.

Statistical Averages
◊ Example 5.5 Gaussian Random Variable
◊ The probability density function of a Gaussian random variable is defined by:
f_X(x) = (1/√(2π σ_X²)) exp(-(x - μ_X)²/(2σ_X²)),  -∞ < x < ∞
◊ The characteristic function of a Gaussian random variable with mean m_x and variance σ² is (Problem 5.1):
Ψ(jv) = ∫_{-∞}^{∞} e^{jvx} (1/√(2πσ²)) e^{-(x - m_x)²/(2σ²)} dx = e^{jv m_x - v²σ²/2}
◊ It can be shown that the central moments of a Gaussian random variable are given by:
E[(X - m_x)^k] = 1·3·5···(k - 1) σ^k for even k;  0 for odd k

Statistical Averages
◊ Example 5.5 Gaussian Random Variable (cont.)
◊ The sum of n statistically independent Gaussian random variables is also a Gaussian random variable.
◊ Proof:
Y = Σ_{i=1}^{n} X_i
Ψ_Y(jv) = Π_{i=1}^{n} Ψ_{X_i}(jv) = Π_{i=1}^{n} e^{jv m_i - v²σ_i²/2} = e^{jv m_y - v²σ_y²/2}
where m_y = Σ_{i=1}^{n} m_i and σ_y² = Σ_{i=1}^{n} σ_i².
◊ Therefore, Y is Gaussian-distributed with mean m_y and variance σ_y².
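
A quick simulation check that the sum of independent Gaussians has the summed mean and variance (assuming Python with NumPy; the three component means and standard deviations are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
means = np.array([1.0, -2.0, 0.5])      # assumed means of three independent Gaussians
stds = np.array([1.0, 2.0, 0.5])        # assumed standard deviations

samples = rng.normal(means, stds, size=(1_000_000, 3))
y = samples.sum(axis=1)                 # Y = X1 + X2 + X3

print(y.mean(), means.sum())            # mean of Y equals the sum of the means
print(y.var(), (stds**2).sum())         # variance of Y equals the sum of the variances
```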

Statistical Averages
◊ Joint Moments
◊ Consider next a pair of random variables X and Y. A set of statistical averages of importance in this case are the joint moments, namely, the expected value of X^i Y^k, where i and k may assume any positive integer values. We may thus write
E[X^i Y^k] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x^i y^k f_{X,Y}(x, y) dx dy   (5.51)
◊ A joint moment of particular importance is the correlation defined by E[XY], which corresponds to i = k = 1.
◊ Covariance of X and Y:
cov[XY] = E[(X - E[X])(Y - E[Y])] = E[XY] - μ_X μ_Y   (5.53)

Statistical Averages
◊ Correlation coefficient of X and Y:
ρ = cov[XY] / (σ_X σ_Y)   (5.54)
◊ σ_X and σ_Y denote the standard deviations of X and Y.
◊ We say X and Y are uncorrelated if and only if cov[XY] = 0.
◊ Note that if X and Y are statistically independent, then they are uncorrelated.
◊ The converse of the above statement is not necessarily true.
◊ We say X and Y are orthogonal if and only if E[XY] = 0.
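
A small sketch of the "uncorrelated does not imply independent" point (assuming Python with NumPy; the pair X ~ N(0,1), Y = X² is a standard illustrative counterexample, not one from the slides):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=1_000_000)
y = x**2                                  # Y is completely determined by X, so X and Y are not independent

cov_xy = np.mean(x * y) - x.mean() * y.mean()
rho = cov_xy / (x.std() * y.std())
print(cov_xy, rho)                        # both close to 0: uncorrelated, yet clearly dependent
```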

Statistical Averages
◊ Example 5.6 Moments of a Bernoulli Random Variable
◊ Consider the coin-tossing experiment where the probability of a head is p. Let X be a random variable that takes the value 0 if the result is a tail and 1 if it is a head. We say that X is a Bernoulli random variable.
P[X = x] = 1 - p for x = 0;  p for x = 1;  0 otherwise
E[X] = Σ_k k P[X = k] = 0·(1 - p) + 1·p = p
σ_X² = Σ_k (k - μ_X)² P[X = k] = p²(1 - p) + (1 - p)² p = p(1 - p)
E[X_j X_k] = E[X_j] E[X_k] = p² for j ≠ k;  E[X_k²] = Σ_k k² P[X = k] = p for j = k

Random Processes
◊ An ensemble of sample functions (figure).

Random Processes
◊ At any given time instant, the value of a stochastic process is a random variable indexed by the parameter t. We denote such a process by X(t).
◊ In general, the parameter t is continuous, whereas X may be either continuous or discrete, depending on the characteristics of the source that generates the stochastic process.
◊ The noise voltage generated by a single resistor or a single information source represents a single realization of the stochastic process. It is called a sample function.

Random Processes
◊ The set of all possible sample functions constitutes an ensemble of sample functions or, equivalently, the stochastic process X(t).
◊ In general, the number of sample functions in the ensemble is assumed to be extremely large; often it is infinite.
◊ Having defined a stochastic process X(t) as an ensemble of sample functions, we may consider the values of the process at any set of time instants t_1 > t_2 > t_3 > ... > t_n, where n is any positive integer.
◊ In general, the random variables X_{t_i} = X(t_i), i = 1, 2, ..., n, are characterized statistically by their joint PDF p(x_{t_1}, x_{t_2}, ..., x_{t_n}).

Random Processes
◊ Stationary Stochastic Processes
◊ Consider another set of n random variables X_{t_i + t} = X(t_i + t), i = 1, 2, ..., n, where t is an arbitrary time shift. These random variables are characterized by the joint PDF p(x_{t_1 + t}, x_{t_2 + t}, ..., x_{t_n + t}).
◊ The joint PDFs of the random variables X_{t_i} and X_{t_i + t}, i = 1, 2, ..., n, may or may not be identical. When they are identical, i.e., when
p(x_{t_1}, x_{t_2}, ..., x_{t_n}) = p(x_{t_1 + t}, x_{t_2 + t}, ..., x_{t_n + t})
for all t and all n, the process is said to be stationary in the strict sense (SSS).
◊ When the joint PDFs are different, the stochastic process is non-stationary.

Random Processes
◊ Averages for a stochastic process are called ensemble averages.
◊ The n-th moment of the random variable X_{t_i} is defined as:
E[X_{t_i}^n] = ∫_{-∞}^{∞} x_{t_i}^n p(x_{t_i}) dx_{t_i}
◊ In general, the value of the n-th moment will depend on the time instant t_i if the PDF of X_{t_i} depends on t_i.
◊ When the process is stationary, p(x_{t_i + t}) = p(x_{t_i}) for all t. Therefore, the PDF is independent of time and, as a consequence, the n-th moment is independent of time.

Random Processes
◊ Two random variables: X_{t_i} = X(t_i), i = 1, 2.
◊ The correlation between them is measured by the joint moment:
E[X_{t_1} X_{t_2}] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x_{t_1} x_{t_2} p(x_{t_1}, x_{t_2}) dx_{t_1} dx_{t_2}
◊ Since this joint moment depends on the time instants t_1 and t_2, it is denoted by R_X(t_1, t_2).
◊ R_X(t_1, t_2) is called the autocorrelation function of the stochastic process.
◊ For a stationary stochastic process, the joint moment depends only on the time difference:
E[X_{t_1} X_{t_2}] = R_X(t_1, t_2) = R_X(t_1 - t_2) = R_X(τ)
◊ The autocorrelation function is even: R_X(-τ) = E[X_{t_1} X_{t_1 + τ}] = E[X_{t_1'} X_{t_1' - τ}] = R_X(τ), with t_1' = t_1 + τ.
◊ Average power in the process X(t): R_X(0) = E[X_t²].
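
A sketch of estimating R_X(t_1, t_2) by ensemble averaging (assuming Python with NumPy; the first-order autoregressive process used here is only an illustrative stationary process, not one from the slides):

```python
import numpy as np

rng = np.random.default_rng(8)
n_realizations, n_samples = 20_000, 200
a, sigma = 0.9, 1.0                     # assumed AR(1) coefficient and drive-noise standard deviation

# Ensemble of sample functions of a stationary AR(1) process, started in steady state
x = np.zeros((n_realizations, n_samples))
x[:, 0] = rng.normal(scale=sigma / np.sqrt(1 - a**2), size=n_realizations)
for k in range(1, n_samples):
    x[:, k] = a * x[:, k - 1] + rng.normal(scale=sigma, size=n_realizations)

# Ensemble-average estimate of R_X(t1, t1 + lag) at several lags
t1 = 100
for lag in (0, 1, 5, 10):
    r_hat = np.mean(x[:, t1] * x[:, t1 + lag])
    r_theory = (sigma**2 / (1 - a**2)) * a**lag   # known autocorrelation of a stationary AR(1)
    print(lag, round(r_hat, 3), round(r_theory, 3))   # depends only on the lag, as expected for a stationary process
```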

Random Processes
◊ Wide-Sense Stationary (WSS)
◊ A wide-sense stationary process has the property that the mean value of the process is independent of time (a constant) and that the autocorrelation function satisfies the condition R_X(t_1, t_2) = R_X(t_1 - t_2).
◊ Wide-sense stationarity is a less stringent condition than strict-sense stationarity.

Random Processes
◊ Auto-Covariance Function
◊ The auto-covariance function of a stochastic process is defined as:
μ(t_1, t_2) = E[(X_{t_1} - m(t_1))(X_{t_2} - m(t_2))] = R_X(t_1, t_2) - m(t_1) m(t_2)
◊ When the process is stationary, the auto-covariance function simplifies to:
μ(t_1, t_2) = μ(t_1 - t_2) = μ(τ) = R_X(τ) - m²
◊ For a Gaussian random process, higher-order moments can be expressed in terms of first and second moments. Consequently, a Gaussian random process is completely characterized by its first two moments.

Mean, Correlation and Covariance Functions
◊ Consider a random process X(t). We define the mean of the process X(t) as the expectation of the random variable obtained by observing the process at some time t, as shown by
μ_X(t) = E[X(t)] = ∫_{-∞}^{∞} x f_{X(t)}(x) dx   (5.57)
◊ A random process is said to be stationary to first order if the distribution function (and therefore the density function) of X(t) does not vary with time:
f_{X(t_1)}(x) = f_{X(t_2)}(x) for all t_1 and t_2, so that μ_X(t) = μ_X   (5.59)
◊ The mean of the random process is then a constant.
◊ The variance of such a process is also constant.

Mean, Correlation and Covariance Functions
◊ We define the autocorrelation function of the process X(t) as the expectation of the product of two random variables X(t_1) and X(t_2):
R_X(t_1, t_2) = E[X(t_1) X(t_2)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x_1 x_2 f_{X(t_1),X(t_2)}(x_1, x_2) dx_1 dx_2   (5.60)
◊ We say a random process X(t) is stationary to second order if the joint distribution f_{X(t_1),X(t_2)}(x_1, x_2) depends only on the difference between the observation times t_1 and t_2:
R_X(t_1, t_2) = R_X(t_2 - t_1) for all t_1 and t_2   (5.61)
◊ The autocovariance function of a stationary random process X(t) is written as
C_X(t_1, t_2) = E[(X(t_1) - μ_X)(X(t_2) - μ_X)] = R_X(t_2 - t_1) - μ_X²   (5.62)

Mean, Correlation and Covariance Functions
◊ For convenience of notation, we redefine the autocorrelation function of a stationary process X(t) as
R_X(τ) = E[X(t + τ) X(t)] for all t   (5.63)
◊ This autocorrelation function has several important properties:
1. R_X(0) = E[X²(t)]   (5.64)
2. R_X(τ) = R_X(-τ)   (5.65)
3. |R_X(τ)| ≤ R_X(0)   (5.67)
◊ Proof of (5.64) can be obtained from (5.63) by putting τ = 0.

5.6 Mean, Correlation and Covariance Functions
◊ Proof of (5.65):
R_X(τ) = E[X(t + τ) X(t)] = E[X(t) X(t + τ)] = R_X(-τ)
◊ Proof of (5.67):
E[(X(t + τ) ± X(t))²] ≥ 0
E[X²(t + τ)] ± 2E[X(t + τ) X(t)] + E[X²(t)] ≥ 0
2R_X(0) ± 2R_X(τ) ≥ 0
-R_X(0) ≤ R_X(τ) ≤ R_X(0),  i.e., |R_X(τ)| ≤ R_X(0)

5.6 Mean, Correlation and Covariance Functions
◊ The physical significance of the autocorrelation function R_X(τ) is that it provides a means of describing the "interdependence" of two random variables obtained by observing a random process X(t) at times τ seconds apart.

5.6 Mean, Correlation and Covariance Functions
◊ Example 5.7 Sinusoidal Signal with Random Phase
◊ Consider a sinusoidal signal with random phase:
X(t) = A cos(2πf_c t + Θ),  where f_Θ(θ) = 1/(2π) for -π ≤ θ ≤ π;  0 elsewhere
R_X(τ) = E[X(t + τ) X(t)]
= (A²/2) E[cos(4πf_c t + 2πf_c τ + 2Θ)] + (A²/2) E[cos(2πf_c τ)]
= (A²/2) ∫_{-π}^{π} cos(4πf_c t + 2πf_c τ + 2θ) (1/2π) dθ + (A²/2) cos(2πf_c τ)
= (A²/2) cos(2πf_c τ)   (5.74)
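
A Monte Carlo check of this result by ensemble averaging over the random phase (assuming Python with NumPy; the amplitude, frequency, observation time and lag are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(9)
A, fc = 2.0, 5.0                                           # assumed amplitude and frequency
n_realizations = 200_000

theta = rng.uniform(-np.pi, np.pi, size=n_realizations)   # random phase, uniform on (-pi, pi)
t, tau = 0.13, 0.04                                        # arbitrary observation time and lag

x_t = A * np.cos(2 * np.pi * fc * t + theta)
x_t_tau = A * np.cos(2 * np.pi * fc * (t + tau) + theta)

r_hat = np.mean(x_t_tau * x_t)                             # ensemble-average estimate of R_X(tau)
r_theory = (A**2 / 2) * np.cos(2 * np.pi * fc * tau)       # Eq. (5.74)
print(r_hat, r_theory)
```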

5.6 Mean, Correlation and Covariance Functions
◊ Averages for Joint Stochastic Processes
◊ Let X(t) and Y(t) denote two stochastic processes and let X_{t_i} ≡ X(t_i), i = 1, 2, ..., n, and Y_{t'_j} ≡ Y(t'_j), j = 1, 2, ..., m, represent the random variables at times t_1 > t_2 > ... > t_n and t'_1 > t'_2 > ... > t'_m, respectively. The two processes are characterized statistically by their joint PDF
p(x_{t_1}, x_{t_2}, ..., x_{t_n}, y_{t'_1}, y_{t'_2}, ..., y_{t'_m})
◊ The cross-correlation function of X(t) and Y(t), denoted by R_{xy}(t_1, t_2), is defined as the joint moment:
R_{xy}(t_1, t_2) = E[X_{t_1} Y_{t_2}] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x_{t_1} y_{t_2} p(x_{t_1}, y_{t_2}) dx_{t_1} dy_{t_2}
◊ The cross-covariance is:
μ_{xy}(t_1, t_2) = R_{xy}(t_1, t_2) - m_x(t_1) m_y(t_2)

5.6 Mean, Correlation and Covariance Functions
◊ Averages for Joint Stochastic Processes
◊ When the processes are jointly and individually stationary, we have R_{xy}(t_1, t_2) = R_{xy}(t_1 - t_2) and μ_{xy}(t_1, t_2) = μ_{xy}(t_1 - t_2). In that case
R_{xy}(-τ) = E[X_{t_1} Y_{t_1 + τ}] = E[X_{t'_1 - τ} Y_{t'_1}] = R_{yx}(τ)
◊ The stochastic processes X(t) and Y(t) are said to be statistically independent if and only if
p(x_{t_1}, x_{t_2}, ..., x_{t_n}, y_{t'_1}, y_{t'_2}, ..., y_{t'_m}) = p(x_{t_1}, x_{t_2}, ..., x_{t_n}) p(y_{t'_1}, y_{t'_2}, ..., y_{t'_m})
for all choices of t_i and t'_j and for all positive integers n and m.
◊ The processes are said to be uncorrelated if
R_{xy}(t_1, t_2) = E[X_{t_1}] E[Y_{t_2}],  i.e., μ_{xy}(t_1, t_2) = 0

5.6 Mean, Correlation and Covariance Functions
◊ Example 5.9 Quadrature-Modulated Processes
◊ Consider a pair of quadrature-modulated processes X_1(t) and X_2(t):
X_1(t) = X(t) cos(2πf_c t + Θ)
X_2(t) = X(t) sin(2πf_c t + Θ)
R_12(τ) = E[X_1(t) X_2(t - τ)]
= E[X(t) X(t - τ) cos(2πf_c t + Θ) sin(2πf_c t - 2πf_c τ + Θ)]
= E[X(t) X(t - τ)] E[cos(2πf_c t + Θ) sin(2πf_c t - 2πf_c τ + Θ)]
= (1/2) R_X(τ) E[sin(4πf_c t - 2πf_c τ + 2Θ) - sin(2πf_c τ)]
= -(1/2) R_X(τ) sin(2πf_c τ)
◊ Note that R_12(0) = E[X_1(t) X_2(t)] = 0.

5.6 Mean, Correlation and Covariance Functions
◊ Ergodic Processes
◊ In many instances, it is difficult or impossible to observe all sample functions of a random process at a given time.
◊ It is often more convenient to observe a single sample function for a long period of time.
◊ For a sample function x(t), the time average of the mean value over an observation period 2T is
μ_{x,T} = (1/2T) ∫_{-T}^{T} x(t) dt   (5.84)
◊ For many stochastic processes of interest in communications, the time averages and ensemble averages are equal, a property known as ergodicity.
◊ This property implies that whenever an ensemble average is required, we may estimate it by using a time average.
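
A sketch comparing a time average over one sample function with an ensemble average at a fixed instant (assuming Python with NumPy; the random-phase sinusoid reuses Example 5.7, with illustrative amplitude, frequency and observation length):

```python
import numpy as np

rng = np.random.default_rng(10)
A, fc = 1.0, 5.0                          # assumed amplitude and frequency
t = np.linspace(0.0, 100.0, 200_001)      # long observation interval for the time average

# Time average over one sample function (one fixed phase)
theta_single = rng.uniform(-np.pi, np.pi)
time_avg = np.mean(A * np.cos(2 * np.pi * fc * t + theta_single))

# Ensemble average at a fixed time instant, over many sample functions
thetas = rng.uniform(-np.pi, np.pi, size=100_000)
ensemble_avg = np.mean(A * np.cos(2 * np.pi * fc * 0.3 + thetas))

print(time_avg, ensemble_avg)             # both close to 0, the mean of the process
```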

5.6 Mean, Correlation and Covariance Functions
◊ Cyclostationary Processes (in the wide sense)
◊ There is another important class of random processes commonly encountered in practice, whose mean and autocorrelation function exhibit periodicity:
μ_X(t_1 + T) = μ_X(t_1)
R_X(t_1 + T, t_2 + T) = R_X(t_1, t_2)
for all t_1 and t_2.
◊ Modeling the process X(t) as cyclostationary adds a new dimension, namely the period T, to the partial description of the process.

5.7 Transmission of a Random Process Through a Linear Filter
◊ Suppose that a random process X(t) is applied as input to a linear time-invariant filter of impulse response h(t), producing a new random process Y(t) at the filter output.
◊ Assume that X(t) is a wide-sense stationary random process.
◊ The mean of the output random process Y(t) is given by
μ_Y(t) = E[Y(t)] = E[∫_{-∞}^{∞} h(τ_1) X(t - τ_1) dτ_1]
= ∫_{-∞}^{∞} h(τ_1) E[X(t - τ_1)] dτ_1
= ∫_{-∞}^{∞} h(τ_1) μ_X(t - τ_1) dτ_1   (5.86)

5.7 Transmission of a Random Process Through a Linear Filter
◊ When the input random process X(t) is wide-sense stationary, the mean μ_X(t) is a constant μ_X, and then the mean μ_Y(t) is also a constant μ_Y:
μ_Y = μ_X ∫_{-∞}^{∞} h(τ_1) dτ_1 = μ_X H(0)   (5.87)
where H(0) is the zero-frequency (dc) response of the system.
◊ The autocorrelation function of the output random process Y(t) is given by:
R_Y(t, u) = E[Y(t) Y(u)] = E[∫_{-∞}^{∞} h(τ_1) X(t - τ_1) dτ_1 ∫_{-∞}^{∞} h(τ_2) X(u - τ_2) dτ_2]
= ∫_{-∞}^{∞} dτ_1 h(τ_1) ∫_{-∞}^{∞} dτ_2 h(τ_2) E[X(t - τ_1) X(u - τ_2)]
= ∫_{-∞}^{∞} dτ_1 h(τ_1) ∫_{-∞}^{∞} dτ_2 h(τ_2) R_X(t - τ_1, u - τ_2)

5.7 Transmission of a Random Process Through a Linear Filter
◊ When the input X(t) is a wide-sense stationary random process, the autocorrelation function of X(t) is only a function of the difference between the observation times:
R_Y(τ) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(τ_1) h(τ_2) R_X(τ - τ_1 + τ_2) dτ_1 dτ_2   (5.90)
◊ If the input to a stable linear time-invariant filter is a wide-sense stationary random process, then the output of the filter is also a wide-sense stationary random process.
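
A minimal discrete-time sketch of Eq. (5.87), filtering a WSS input and checking the output mean (assuming Python with NumPy; the input mean and the FIR impulse response are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(11)
mu_x = 2.0
x = mu_x + rng.normal(size=1_000_000)        # WSS input: constant mean plus white fluctuation

h = np.array([0.5, 0.3, 0.2])                # assumed FIR impulse response
y = np.convolve(x, h, mode="valid")          # output of the LTI filter

H0 = h.sum()                                 # zero-frequency (dc) response H(0)
print(y.mean(), mu_x * H0)                   # output mean matches mu_X * H(0), Eq. (5.87)
```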

5.8 Power Spectral Density
◊ The Fourier transform of the autocorrelation function R_X(τ) is called the power spectral density S_X(f) of the random process X(t):
S_X(f) = ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ   (5.91)
R_X(τ) = ∫_{-∞}^{∞} S_X(f) exp(j2πfτ) df   (5.92)
◊ Equations (5.91) and (5.92) are basic relations in the theory of spectral analysis of random processes, and together they constitute what are usually called the Einstein-Wiener-Khintchine relations.
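
A discrete-time sketch of this Fourier-transform pair (assuming Python with NumPy; the FIR shaping filter and segment length are arbitrary illustrative choices): the DFT of an ensemble-averaged circular autocorrelation estimate coincides with the averaged periodogram.

```python
import numpy as np

rng = np.random.default_rng(12)
n, n_segments = 128, 5000
h = np.array([1.0, 0.8, 0.4])          # assumed FIR shaping filter, just to produce a colored WSS process

# Many independent segments of the same WSS process (white noise shaped by h)
w = rng.normal(size=(n_segments, n + len(h) - 1))
x = np.array([np.convolve(wi, h, mode="valid") for wi in w])

# Ensemble-average estimate of the circular autocorrelation R_X(k) for lags 0..n-1
R = np.array([np.mean(x * np.roll(x, -k, axis=1)) for k in range(n)])

# Discrete Wiener-Khintchine: the DFT of R_X equals the averaged periodogram
S_from_R = np.real(np.fft.fft(R))
S_periodogram = np.mean(np.abs(np.fft.fft(x, axis=1))**2, axis=0) / n

print(np.allclose(S_from_R, S_periodogram))
```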

5.8 Power Spectral Density
◊ Properties of the Power Spectral Density
◊ Property 1: S_X(0) = ∫_{-∞}^{∞} R_X(τ) dτ   (5.93)
Proof: let f = 0 in Eq. (5.91).
◊ Property 2: E[X²(t)] = ∫_{-∞}^{∞} S_X(f) df   (5.94)
Proof: let τ = 0 in Eq. (5.92) and note that R_X(0) = E[X²(t)].
◊ Property 3: S_X(f) ≥ 0 for all f   (5.95)
◊ Property 4: S_X(-f) = S_X(f)   (5.96)
Proof: from (5.91),
S_X(-f) = ∫_{-∞}^{∞} R_X(τ) exp(j2πfτ) dτ = ∫_{-∞}^{∞} R_X(-τ) exp(-j2πfτ) dτ = S_X(f)
using R_X(-τ) = R_X(τ).

Proof of Eq. (5.95)
◊ It can be shown that (see Eq. (5.106))
S_Y(f) = |H(f)|² S_X(f)
R_Y(τ) = ∫_{-∞}^{∞} S_Y(f) exp(j2πfτ) df = ∫_{-∞}^{∞} |H(f)|² S_X(f) exp(j2πfτ) df
E[Y²(t)] = R_Y(0) = ∫_{-∞}^{∞} |H(f)|² S_X(f) df ≥ 0 for any H(f)
◊ Suppose we let |H(f)|² = 1 for an arbitrarily small interval f_1 ≤ f ≤ f_2, and H(f) = 0 outside this interval. Then we have
∫_{f_1}^{f_2} S_X(f) df ≥ 0
This is possible if and only if S_X(f) ≥ 0 for all f.
◊ Conclusion: S_X(f) ≥ 0 for all f.

5.8 Power Spectral Density
◊ Example 5.10 Sinusoidal Signal with Random Phase
◊ Consider the random process X(t) = A cos(2πf_c t + Θ), where Θ is a uniformly distributed random variable over the interval (-π, π).
◊ The autocorrelation function of this random process is given in Example 5.7:
R_X(τ) = (A²/2) cos(2πf_c τ)   (5.74)
◊ Taking the Fourier transform of both sides of this relation:
S_X(f) = (A²/4) [δ(f - f_c) + δ(f + f_c)]   (5.97)

5.8 Power Spectral Density
◊ Example 5.12 Mixing of a Random Process with a Sinusoidal Process
◊ A situation that often arises in practice is that of mixing (i.e., multiplication) of a WSS random process X(t) with a sinusoidal signal cos(2πf_c t + Θ), where the phase Θ is a random variable that is uniformly distributed over the interval (0, 2π).
◊ We wish to determine the power spectral density of the random process Y(t) defined by
Y(t) = X(t) cos(2πf_c t + Θ)   (5.101)
◊ We note that the random variable Θ is independent of X(t).

5.8 Power Spectral Density
◊ Example 5.12 Mixing of a Random Process with a Sinusoidal Process (continued)
◊ The autocorrelation function of Y(t) is given by:
R_Y(τ) = E[Y(t + τ) Y(t)]
= E[X(t + τ) cos(2πf_c t + 2πf_c τ + Θ) X(t) cos(2πf_c t + Θ)]
= E[X(t + τ) X(t)] E[cos(2πf_c t + 2πf_c τ + Θ) cos(2πf_c t + Θ)]
= (1/2) R_X(τ) E[cos(2πf_c τ) + cos(4πf_c t + 2πf_c τ + 2Θ)]
= (1/2) R_X(τ) cos(2πf_c τ)
◊ Taking the Fourier transform:
S_Y(f) = (1/4) [S_X(f - f_c) + S_X(f + f_c)]   (5.103)

5.8 Power Spectral Density
◊ Relation Among the Power Spectral Densities of the Input and Output Random Processes
◊ Let S_Y(f) denote the power spectral density of the output random process Y(t) obtained by passing the random process X(t) through a linear filter of transfer function H(f):
S_Y(f) = ∫_{-∞}^{∞} R_Y(τ) e^{-j2πfτ} dτ
Using Eq. (5.90), R_Y(τ) = ∫∫ h(τ_1) h(τ_2) R_X(τ - τ_1 + τ_2) dτ_1 dτ_2, we get
S_Y(f) = ∫∫∫ h(τ_1) h(τ_2) R_X(τ - τ_1 + τ_2) e^{-j2πfτ} dτ_1 dτ_2 dτ
Let τ_0 = τ - τ_1 + τ_2, i.e., τ = τ_0 + τ_1 - τ_2:
S_Y(f) = ∫ h(τ_1) e^{-j2πfτ_1} dτ_1 ∫ h(τ_2) e^{j2πfτ_2} dτ_2 ∫ R_X(τ_0) e^{-j2πfτ_0} dτ_0
= H(f) H*(f) S_X(f) = |H(f)|² S_X(f)   (5.106)

5.8 Power Spectral Density
◊ Example 5.13 Comb Filter
◊ Consider the filter of Figure (a), consisting of a delay line and a summing device. We wish to evaluate the power spectral density of the filter output Y(t).

5.8 Power Spectral Density
◊ Example 5.13 Comb Filter (continued)
◊ The transfer function of this filter is
H(f) = 1 - exp(-j2πfT) = 1 - cos(2πfT) + j sin(2πfT)
|H(f)|² = [1 - cos(2πfT)]² + sin²(2πfT) = 2[1 - cos(2πfT)] = 4 sin²(πfT)
◊ Because of the periodic form of this frequency response (Fig. (b)), the filter is sometimes referred to as a comb filter.
◊ The power spectral density of the filter output is:
S_Y(f) = |H(f)|² S_X(f) = 4 sin²(πfT) S_X(f)
◊ If fT is very small,
S_Y(f) ≈ 4π²f²T² S_X(f)   (5.107)
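
A quick numerical check of the comb-filter magnitude response (assuming Python with NumPy; the delay T and the sample frequencies are illustrative):

```python
import numpy as np

T = 1e-3                                       # assumed delay of the comb filter
f = np.linspace(0.0, 3.0 / T, 7)               # a few frequencies spanning several periods of the response

H = 1.0 - np.exp(-1j * 2 * np.pi * f * T)      # H(f) = 1 - exp(-j 2 pi f T)
print(np.abs(H)**2)                            # squared magnitude of the transfer function
print(4 * np.sin(np.pi * f * T)**2)            # matches 4 sin^2(pi f T)
```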

5.9 Gaussian Process
◊ A random variable Y is defined by the linear functional
Y = ∫_0^T g(t) X(t) dt
◊ We refer to Y as a linear functional of X(t). By contrast, Y = Σ_{i=1}^{N} a_i X_i, where the a_i are constants and the X_i are random variables, is a linear function of the X_i.
◊ If the weighting function g(t) is such that the mean-square value of the random variable Y is finite, and if the random variable Y is a Gaussian-distributed random variable for every g(t) in this class of functions, then the process X(t) is said to be a Gaussian process.
◊ In other words, the process X(t) is a Gaussian process if every linear functional of X(t) is a Gaussian random variable.
◊ The Gaussian process has many properties that make analytic results possible.
◊ The random processes produced by physical phenomena are often such that a Gaussian model is appropriate.
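
A sketch checking that a discretized linear functional of a Gaussian process behaves like a Gaussian random variable (assuming Python with NumPy; the weighting function g(t) = e^{-t} and the white Gaussian X(t) are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(13)
n, dt = 500, 0.01
t = np.arange(n) * dt
g = np.exp(-t)                                    # assumed weighting function g(t)

# Discretized linear functional Y ~ sum g(t_k) X(t_k) dt of a white Gaussian process X(t)
X = rng.normal(size=(100_000, n))
Y = (X * g).sum(axis=1) * dt

z = (Y - Y.mean()) / Y.std()
print(np.mean(z**3), np.mean(z**4))               # skewness ~ 0 and kurtosis ~ 3, consistent with a Gaussian Y
```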