Stochastic Processes and Their Applications

1,486 views 128 slides Dec 08, 2023

Slide Content

Stochastic Processes and Their Applications. Dr. Rahul Pandya (IIT-Dh)

Course content (syllabus): Introduction to stochastic processes: definitions and examples, the Bernoulli process, and the Poisson process. Markov processes, stationary and ergodic processes. Markov chains: discrete-time and continuous-time Markov chains, classification of states, and the notion of a stationary distribution. Simulating stochastic processes such as Gaussian processes, the Poisson process, Markov chains, and Brownian motion. Introduction to Markov chain Monte Carlo methods, hidden Markov chains, and Markov decision processes. Statistics: MLE, MAP, and Bayesian estimation; sufficient statistics; and the Cramer-Rao lower bound.

Reference books:
Introduction to Probability by Dimitri P. Bertsekas and John N. Tsitsiklis: https://www.vfu.bg/en/e-Learning/Math--Bertsekas_Tsitsiklis_Introduction_to_probability.pdf
Introduction to Probability Models by Sheldon Ross: http://mitran-lab.amath.unc.edu/courses/MATH768/biblio/introduction-to-prob-models-11th-edition.PDF
Stochastic Processes by Sheldon Ross
Stochastic Processes and Models by David Stirzaker

Stochastic process (Ref.: Introduction to Probability by Dimitri P. Bertsekas and John N. Tsitsiklis). A stochastic process is a mathematical model of a probabilistic experiment that evolves in time and generates a sequence of numerical values. For example, a stochastic process can be used to model: (a) the sequence of daily prices of a stock; (b) the sequence of scores in a football game; (c) the sequence of failure times of a machine; (d) the sequence of hourly traffic loads at a node of a communication network; (e) the sequence of radar measurements of the position of an airplane.

Two major categories of stochastic processes. Arrival-type processes: here we are interested in occurrences that have the character of an "arrival," such as message receptions at a receiver, job completions in a manufacturing cell, customer purchases at a store, etc. The inter-arrival times (the times between successive arrivals) are independent random variables. If the inter-arrival times are geometrically distributed, we have a Bernoulli process; if they are exponentially distributed, we have a Poisson process.

Markov processes: experiments that evolve in time and in which the future evolution exhibits a probabilistic dependence on the past. For example, the future daily prices of a stock typically depend on past prices.

Bernoulli process: a sequence X1, X2, . . . of independent Bernoulli random variables Xi with P(Xi = 1) = P(success at the ith trial) = p and P(Xi = 0) = 1 - p. Given an arrival process, one is often interested in random variables such as the number of arrivals within a certain time period, or the time until the first arrival.
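As a quick illustration (not from the slides), here is a minimal simulation of a Bernoulli process; the function name and the choice p = 0.3 are ours:

```python
import random

def bernoulli_process(p, n, seed=0):
    """Simulate n slots of a Bernoulli process with arrival probability p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

trials = bernoulli_process(0.3, 10_000)
print(sum(trials) / len(trials))  # empirical arrival rate, close to p = 0.3
```

Each slot is an independent trial, so the empirical arrival rate converges to p as the number of slots grows.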

Bernoulli Process and Its Properties

Memorylessness or fresh-start property. Past trials provide no information about the outcomes of future trials. Suppose a Bernoulli process has been running for n time steps, producing X1, X2, . . . , Xn. The sequence of future trials Xn+1, Xn+2, . . . consists of independent Bernoulli trials, and these future trials are independent of the past ones. We conclude that, starting from any given point in time, the future is also modelled by a Bernoulli process, which is independent of the past. We refer to this as the fresh-start property of the Bernoulli process.

Memorylessness and the fresh-start property of the Bernoulli process. The number T - n of trials until the first success after time n has a geometric distribution with parameter p, and is independent of the past. For any given time n, the sequence of random variables Xn+1, Xn+2, . . . (the future of the process) is a Bernoulli process that is independent of X1, . . . , Xn (the past of the process).

Inter-arrival times. One random variable associated with the Bernoulli process is the time of the kth success, denoted Yk; another is the kth inter-arrival time, denoted Tk. For example, if T1 = 3, T2 = 5, T3 = 2, T4 = 1, then Y1 = 3, Y2 = 8, Y3 = 10, Y4 = 11.
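The relation between inter-arrival times and arrival times is a cumulative sum; this snippet just accumulates the slide's example values:

```python
from itertools import accumulate

# Inter-arrival times from the slide's example.
T = [3, 5, 2, 1]
# The k-th arrival time Y_k is the cumulative sum of the first k inter-arrival times.
Y = list(accumulate(T))
print(Y)  # [3, 8, 10, 11], matching the slide
```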

Exercise. A computer executes two types of tasks, priority and nonpriority, and operates in discrete time units (slots). A priority task arises with probability p at the beginning of each slot, independently of other slots, and requires one full slot to complete. A nonpriority task is executed at a given slot only if no priority task is available. In this context, it may be important to know the probabilistic properties of the time intervals available for nonpriority tasks. Consider: (a) T = the time index of the first idle slot; (b) B = the length (number of slots) of the first busy period; (c) I = the length of the first idle period.

We recognize T as a geometrically distributed random variable with parameter 1 - p. Its PMF is pT(k) = p^(k-1)(1 - p), k = 1, 2, . . . , and its mean and variance are E[T] = 1/(1 - p) and var(T) = p/(1 - p)^2. A priority task arises with probability p at the beginning of each slot; we always handle the priority task first, and only if there is no priority task do we execute a nonpriority task.

Illustration of busy (B) and idle (I) periods. Top diagram: T = 4, B = 3, and I = 2. Bottom diagram: T = 1, I = 5, and B = 4. Here T = the time index of the first idle slot, B = the length (number of slots) of the first busy period, and I = the length of the first idle period.

Let us now consider the first busy period. It starts with the first busy slot, call it slot L. (In the top diagram, L = 1; in the bottom diagram, L = 6.) The number Z of subsequent slots until (and including) the first subsequent idle slot has the same distribution as T, because the Bernoulli process starts fresh at time L + 1. We then notice that Z = B and conclude that B has the same PMF as T. If we reverse the roles of idle and busy slots, and interchange p with 1 - p, we see that the length I of the first idle period has the same PMF as the time index of the first busy slot, so that pI(k) = (1 - p)^(k-1) p, k = 1, 2, . . .

Bernoulli Process and Its Properties

Exercise. In each minute of basketball play, Alice commits a single foul with probability p and no foul with probability 1 - p. The numbers of fouls in different minutes are assumed to be independent. Alice will foul out of the game once she commits her sixth foul, and will play 30 minutes if she does not foul out. What is the PMF of Alice's playing time? We model fouls as a Bernoulli process with parameter p. Alice's playing time Z is equal to Y6, the time until the sixth foul, except if Y6 is larger than 30, in which case her playing time is 30, the duration of the game; that is, Z = min{Y6, 30}. The random variable Y6 has a Pascal PMF of order 6, given by pY6(t) = C(t - 1, 5) p^6 (1 - p)^(t-6) for t = 6, 7, . . . To determine the PMF pZ(z) of Z, we first consider the case where z is between 6 and 29; for z in this range, pZ(z) = pY6(z).
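A sketch of the resulting PMF computation, assuming a hypothetical foul probability p = 0.1 (the slide leaves p symbolic); pascal_pmf implements the order-k Pascal PMF quoted above:

```python
from math import comb

def pascal_pmf(t, k, p):
    """P(Y_k = t): probability that the k-th success occurs at trial t."""
    return comb(t - 1, k - 1) * p**k * (1 - p)**(t - k)

p = 0.1  # hypothetical per-minute foul probability
p_z = {z: pascal_pmf(z, 6, p) for z in range(6, 30)}  # Z = Y_6 for 6 <= z <= 29
p_z[30] = 1 - sum(p_z.values())                       # Z = 30 whenever Y_6 > 29
print(sum(p_z.values()))  # approximately 1: a valid PMF
```

Assigning all of the remaining probability mass to z = 30 captures the truncation Z = min{Y6, 30}.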

Splitting of a Bernoulli process. Starting with a Bernoulli process in which there is a probability p of an arrival at each time, consider splitting it as follows. Whenever there is an arrival, we either keep it (with probability q) or discard it (with probability 1 - q), and the decisions to keep or discard are independent for different arrivals. If we focus on the process of arrivals that are kept, we see that it is a Bernoulli process: in each time slot there is a probability pq of a kept arrival, independently of what happens in other slots. For the same reason, the process of discarded arrivals is also a Bernoulli process, with probability p(1 - q) of a discarded arrival at each time slot.

Merging of independent Bernoulli processes. In the reverse situation, we start with two independent Bernoulli processes (with parameters p and q, respectively) and merge them into a single process: an arrival is recorded in the merged process if and only if there is an arrival in at least one of the two original processes, which happens with probability p + q - pq. Since different time slots in either of the original processes are independent, different slots in the merged process are also independent. Thus, the merged process is Bernoulli, with success probability p + q - pq at each time step.
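The merge probability p + q - pq can be sanity-checked by simulation; the parameter values 0.2 and 0.3 and the function name are ours:

```python
import random

def merged_bernoulli(p, q, n, seed=1):
    """Merge two independent Bernoulli processes: arrival if either has one."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a = rng.random() < p   # arrival in the first process
        b = rng.random() < q   # arrival in the second process
        hits += a or b
    return hits / n

rate = merged_bernoulli(0.2, 0.3, 100_000)
print(rate)  # close to p + q - p*q = 0.44
```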

The Poisson Approximation to the Binomial

Exercise. As a rule of thumb, the Poisson approximation to the binomial is valid to several decimal places if n >= 100, p <= 0.01, and λ = np. Garry Kasparov, the world chess champion (as of 1999), plays against 100 amateurs in a large simultaneous exhibition. It has been estimated from past experience that Kasparov wins 99% of his games in such exhibitions on average (in precise probabilistic terms, we assume that he wins each game with probability 0.99, independently of other games). What are the probabilities that he will win 100 games, 98 games, 95 games, and 90 games?

We model the number of games X that Kasparov does not win as a binomial random variable with parameters n = 100 and p = 0.01. The probabilities that he will win 100, 98, 95, and 90 games can be computed both exactly (binomial with n = 100, p = 0.01) and with the Poisson approximation, λ = np = 100 × 0.01 = 1.
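A small sketch comparing the exact binomial probabilities with their Poisson approximations; k counts losses, so k = 0, 2, 5, 10 correspond to winning 100, 98, 95, and 90 games:

```python
from math import comb, exp, factorial

n, p = 100, 0.01          # games, loss probability per game
lam = n * p               # Poisson parameter, equal to 1

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

for k in (0, 2, 5, 10):   # k losses = winning 100 - k games
    print(k, round(binom_pmf(k), 4), round(poisson_pmf(k), 4))
```

The two columns agree to several decimal places, as the rule of thumb predicts.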

Exercise. A packet consisting of a string of n symbols is transmitted over a noisy channel. Each symbol has probability p = 0.0001 of being transmitted in error, independently of errors in the other symbols. How small should n be in order for the probability of incorrect transmission (at least one symbol in error) to be less than 0.001? Each symbol transmission is viewed as an independent Bernoulli trial, so the probability of a positive number S of errors in the packet is P(S > 0) = 1 - (1 - 0.0001)^n, and for this probability to be less than 0.001 we must have 1 - (1 - 0.0001)^n < 0.001. We can also use the Poisson approximation with λ = np = 0.0001n, under which P(S = 0) is approximately e^(-0.0001n). Given that n must be an integer, both methods lead to the same conclusion: n can be at most 10.

The Poisson process. To see the need for a continuous-time version of the Bernoulli process, consider a possible model of traffic accidents within a city. We can start by discretizing time into one-minute periods and record a "success" during every minute in which there is at least one traffic accident. Assuming the traffic intensity to be constant over time, the probability of an accident should be the same during each period. Under the additional (and quite plausible) assumption that different time periods are independent, the sequence of successes becomes a Bernoulli process. Note that in real life, two or more accidents during the same one-minute interval are certainly possible, but the Bernoulli process model does not keep track of the exact number of accidents. In particular, it does not allow us to calculate the expected number of accidents within a given period. One way around this difficulty is to choose the length of a time period to be very small, so that the probability of two or more accidents becomes negligible. But how small should it be? A second? A millisecond? Instead of answering this question, it is preferable to consider a limiting situation where the length of the time period becomes zero, and work with a continuous-time model: an arrival process in which any real number t is a possible arrival time.

Notation for the Poisson process:
λ: arrival rate or intensity
τ: length of a time interval
k: number of arrivals in a given time interval of length τ

Definition of the Poisson process. An arrival process is called a Poisson process with rate λ if it has the following properties: (a) Time-homogeneity: the probability P(k, τ) of k arrivals is the same for all intervals of the same length τ. (b) Independence: the number of arrivals during a particular interval is independent of the history of arrivals outside this interval. (c) Small-interval probabilities: the probabilities P(k, τ) satisfy P(0, τ) = 1 - λτ + o(τ), P(1, τ) = λτ + o1(τ), and P(k, τ) = ok(τ) for k >= 2, where the o(τ), o1(τ), and ok(τ) terms satisfy o(τ)/τ → 0 as τ → 0.

Poisson process properties. The first property states that arrivals are "equally likely" at all times: the arrivals during any time interval of length τ are statistically the same, in the sense that they obey the same probability law. This is a counterpart of the assumption that the success probability p in a Bernoulli process is constant over time. To interpret the second property, consider a particular interval [t, t′], of length t′ - t. The unconditional probability of k arrivals during that interval is P(k, t′ - t). Suppose now that we are given complete or partial information on the arrivals outside this interval. Property (b) states that this information is irrelevant: the conditional probability of k arrivals during [t, t′] remains equal to the unconditional probability P(k, t′ - t). This property is analogous to the independence of trials in a Bernoulli process. The third property is critical. The o(τ) and o1(τ) terms are meant to be negligible in comparison to τ when the interval length τ is very small. They can be thought of as the O(τ^2) terms in a Taylor series expansion of P(k, τ). Thus, for small τ, the probability of a single arrival is roughly λτ, plus a negligible term, and the probability of zero arrivals is roughly 1 - λτ. Note that the probability of two or more arrivals is negligible in comparison to P(1, τ) as τ becomes smaller.

Bernoulli approximation of the Poisson process. Fix an interval of length τ and partition it into n = τ/δ periods of small length δ. Different periods are independent, by property (b). Furthermore, each period has one arrival with probability approximately equal to λδ, or zero arrivals with probability approximately equal to 1 - λδ. Therefore, the process being studied can be approximated by a Bernoulli process, with the approximation becoming more and more accurate the smaller δ is chosen. Thus the probability P(k, τ) of k arrivals in time τ is approximately the same as the (binomial) probability of k successes in n = τ/δ independent Bernoulli trials with success probability p = λδ at each trial. Keeping the length τ of the interval fixed, we let the period length δ decrease to zero. The number n of periods then goes to infinity, while the product np remains constant and equal to λτ. Under these circumstances, we saw in the previous section that the binomial PMF converges to a Poisson PMF with parameter λτ. We are then led to the important conclusion that P(k, τ) = e^(-λτ) (λτ)^k / k!, k = 0, 1, . . .
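The limit can be observed numerically: shrinking the slot length δ drives the binomial slot model toward the Poisson PMF. The parameter choices λ = 2, τ = 1, k = 3 are ours:

```python
from math import comb, exp, factorial

lam, tau, k = 2.0, 1.0, 3

def binomial_approx(delta):
    """P(k arrivals in [0, tau]) from n = tau/delta Bernoulli slots, p = lam*delta."""
    n = round(tau / delta)
    p = lam * delta
    return comb(n, k) * p**k * (1 - p)**(n - k)

poisson = exp(-lam * tau) * (lam * tau)**k / factorial(k)
for delta in (0.1, 0.01, 0.001):
    print(delta, binomial_approx(delta), poisson)
```

As δ decreases, the binomial value approaches the Poisson value with parameter λτ = 2.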

Nτ stands for the number of arrivals during a time interval of length τ. Nτ has a Poisson PMF with parameter λτ, so E[Nτ] = λτ and var(Nτ) = λτ.

Let us now derive the probability law for the time T of the first arrival, assuming that the process starts at time zero. Note that we have T > t if and only if there are no arrivals during the interval [0, t]. Therefore P(T > t) = P(0, t) = e^(-λt), so the CDF is FT(t) = 1 - e^(-λt) for t >= 0. Differentiating the CDF, we obtain the PDF fT(t) = λe^(-λt), t >= 0: the first arrival time is exponentially distributed with parameter λ.

Poisson Process

The Bernoulli process as the discrete-time version of the Poisson process

Exercise. We get email according to a Poisson process at a rate of λ = 0.2 messages per hour, and we check our email every hour. What is the probability of finding 0 new messages? 1 new message? These probabilities can be found using the Poisson PMF with λτ = 0.2: P(0, 1) = e^(-0.2) ≈ 0.819 and P(1, 1) = 0.2e^(-0.2) ≈ 0.164.

Exercise (continued). Suppose that we have not checked our email for a whole day. What is the probability of finding no new messages? We use the Poisson PMF again and obtain P(0, 24) = e^(-0.2 × 24) = e^(-4.8) ≈ 0.008.
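Both email answers follow from the same Poisson PMF; a minimal check:

```python
from math import exp, factorial

def poisson_pmf(k, lam, tau):
    """P(k arrivals in an interval of length tau) for a rate-lam Poisson process."""
    mu = lam * tau
    return exp(-mu) * mu**k / factorial(k)

lam = 0.2  # messages per hour, from the example
print(poisson_pmf(0, lam, 1))   # about 0.819
print(poisson_pmf(1, lam, 1))   # about 0.164
print(poisson_pmf(0, lam, 24))  # about 0.008: no mail all day
```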

Homework (Example 5.9). During rush hour, from 8 am to 9 am, traffic accidents occur according to a Poisson process with a rate μ of 5 accidents per hour. Between 9 am and 11 am, they occur as an independent Poisson process with a rate ν of 3 accidents per hour. What is the PMF of the total number of accidents between 8 am and 11 am?

Alternative description of the Poisson process. 1. Start with a sequence of independent exponential random variables T1, T2, . . . , with common parameter λ, and let these stand for the inter-arrival times. 2. Record an arrival at times T1, T1 + T2, T1 + T2 + T3, etc.

The kth arrival time. The kth arrival time is equal to the sum of the first k inter-arrival times, Yk = T1 + T2 + · · · + Tk, and the latter are independent exponential random variables with common parameter λ. The mean and variance of Yk are E[Yk] = k/λ and var(Yk) = k/λ^2.

Erlang PDF. The PDF of Yk is fYk(y) = λ^k y^(k-1) e^(-λy) / (k - 1)!, y >= 0, known as the Erlang PDF of order k.

Erlang PDF (heuristic derivation). For a small δ, the product δ·fYk(y) is the probability that the kth arrival occurs between times y and y + δ. When δ is very small, the probability of more than one arrival during the interval [y, y + δ] is negligible. Thus, the kth arrival occurs between y and y + δ if and only if the following two events A and B occur: (a) event A: there is an arrival during the interval [y, y + δ]; (b) event B: there are exactly k - 1 arrivals before time y. The probabilities of these two events are P(A) ≈ λδ and P(B) = P(k - 1, y) = e^(-λy) (λy)^(k-1) / (k - 1)!.

Since A and B are independent, we have δ·fYk(y) ≈ P(A)P(B) = λδ · e^(-λy)(λy)^(k-1)/(k - 1)!, and dividing by δ yields the Erlang PDF of order k.

Exercise. You call the IRS hotline and you are told that you are the 56th person in line, excluding the person currently being served. Callers depart according to a Poisson process with a rate of λ = 2 per minute. How long will you have to wait on average until your service starts, and what is the probability you will have to wait for more than an hour? By the memoryless property, the remaining service time of the person currently being served is exponentially distributed with parameter 2. The service times of the 55 persons ahead of you are also exponential with the same parameter, and all of these random variables are independent. Thus, your waiting time Y is Erlang of order 56, with mean E[Y] = 56/λ = 28 minutes. The probability that you have to wait for more than an hour is P(Y > 60), obtained by integrating the Erlang PDF from 60 to infinity.
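A sketch of the computation: the mean is 56/λ, and P(Y > 60) equals the probability that a rate-2 Poisson process makes fewer than 56 arrivals in 60 minutes (an equivalent restatement we use to avoid integrating the Erlang PDF):

```python
from math import exp

lam, order, t = 2.0, 56, 60.0
mean_wait = order / lam
print(mean_wait)  # 28.0 minutes on average

# P(Y > 60) = P(fewer than 56 arrivals in 60 minutes), a Poisson tail with mu = 120
mu = lam * t
term, prob = exp(-mu), 0.0
for k in range(order):       # sum the Poisson PMF for k = 0, ..., 55
    prob += term
    term *= mu / (k + 1)     # recurrence: next term = term * mu / (k + 1)
print(prob)  # tiny: waiting more than an hour is extremely unlikely
```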

Exercise. Two light bulbs have independent and exponentially distributed lifetimes T1 and T2, with parameters λ1 and λ2, respectively. What is the distribution of the first time Z = min{T1, T2} at which a bulb burns out? We have P(Z > z) = P(T1 > z)P(T2 > z) = e^(-λ1 z) e^(-λ2 z) = e^(-(λ1 + λ2)z), which is recognized as the tail of the exponential CDF with parameter λ1 + λ2. Thus, the minimum of two independent exponentials with parameters λ1 and λ2 is an exponential with parameter λ1 + λ2.
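The λ1 + λ2 result can be sanity-checked by simulation; the parameter values below are ours:

```python
import random

rng = random.Random(2)
l1, l2, n = 1.0, 3.0, 200_000
# Sample the minimum of two independent exponential lifetimes many times.
z = [min(rng.expovariate(l1), rng.expovariate(l2)) for _ in range(n)]
print(sum(z) / n)  # close to 1/(l1 + l2) = 0.25, the mean of an exponential(4)
```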

Homework (Example 5.16, More on Competing Exponentials). Three light bulbs have independent exponentially distributed lifetimes with a common parameter λ. What is the expectation of the time until the last bulb burns out?

Markov Chains

Discrete-time Markov chains. The Bernoulli and Poisson processes are memoryless; we now turn to processes where the future depends on, and can be predicted to some extent from, what has happened in the past. We emphasize models where the effect of the past on the future is summarized by a state, which changes over time according to given probabilities. In a discrete-time Markov chain, the state changes at certain discrete time instants, indexed by an integer variable n. At each time step n, the Markov chain has a state, denoted by Xn, which belongs to a finite set S of possible states, called the state space. Without loss of generality, and unless there is a statement to the contrary, we will assume that S = {1, . . . , m} for some positive integer m. The Markov chain is described in terms of its transition probabilities pij: whenever the state happens to be i, there is probability pij that the next state is equal to j.

The key assumption underlying Markov processes is that the transition probabilities pij apply whenever state i is visited, no matter what happened in the past, and no matter how state i was reached. Mathematically, we assume the Markov property, which requires that P(Xn+1 = j | Xn = i, Xn-1 = in-1, . . . , X0 = i0) = P(Xn+1 = j | Xn = i) = pij for all times n, all states i and j, and all possible sequences i0, . . . , in-1 of earlier states. The transition probabilities pij must of course be nonnegative and sum to one: Σj pij = 1 for all i.

Specification of Markov Models

Transition probability matrix. All of the elements of a Markov chain model can be encoded in a transition probability matrix, a two-dimensional array whose element at the ith row and jth column is pij. The model can also be visualized as a transition probability graph, whose nodes are the states and whose arcs are the possible transitions; by recording the numerical values of pij near the corresponding arcs, one can visualize the entire model in a way that makes some of its major properties readily apparent.

Exercise. Alice is taking a probability class; in each week she is either up-to-date or she has fallen behind. If she is up-to-date in a given week, the probability that she will be up-to-date (or behind) in the next week is 0.8 (or 0.2, respectively). If she is behind in a given week, the probability that she will be up-to-date (or behind) in the next week is 0.6 (or 0.4, respectively). We assume that these probabilities do not depend on whether she was up-to-date or behind in previous weeks, so the problem has the typical Markov chain character (the future depends on the past only through the present).

The transition probability graph for this chain has two states, up-to-date (state 1) and behind (state 2), with p11 = 0.8, p12 = 0.2, p21 = 0.6, and p22 = 0.4.

Exercise. A fly moves along a straight line in unit increments. At each time period, it moves one unit to the left with probability 0.3, one unit to the right with probability 0.3, and stays in place with probability 0.4, independently of the past history of movements. A spider is lurking at positions 1 and m: if the fly lands there, it is captured by the spider, and the process terminates. We want to construct a Markov chain model, assuming that the fly starts in one of the positions 2, . . . , m - 1. Let us introduce states 1, 2, . . . , m, and identify them with the corresponding positions of the fly. The nonzero transition probabilities are p11 = pmm = 1, and, for i = 2, . . . , m - 1: pij = 0.3 if j = i - 1 or j = i + 1, and pii = 0.4.
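The transition matrix for this chain can be written down directly; m = 5 is a hypothetical size chosen for illustration:

```python
m = 5  # positions 1..m; spiders wait at positions 1 and m (size chosen for illustration)
P = [[0.0] * m for _ in range(m)]
P[0][0] = P[m - 1][m - 1] = 1.0      # absorbing states: the fly is captured
for i in range(1, m - 1):            # interior positions: left / stay / right
    P[i][i - 1], P[i][i], P[i][i + 1] = 0.3, 0.4, 0.3
for row in P:
    print(row)
```

Every row sums to 1, as the transition probabilities of a Markov chain must.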

Markov property. For the probability of a path we have P(X0 = i0, X1 = i1, . . . , Xn = in) = P(Xn = in | X0 = i0, . . . , Xn-1 = in-1) P(X0 = i0, . . . , Xn-1 = in-1) = p_{in-1 in} P(X0 = i0, . . . , Xn-1 = in-1), where the last equality made use of the Markov property. We then apply the same argument to the term P(X0 = i0, . . . , Xn-1 = in-1) and continue similarly, until we eventually obtain P(X0 = i0, . . . , Xn = in) = P(X0 = i0) p_{i0 i1} · · · p_{in-1 in}. If the initial state X0 is given and is known to be equal to some i0, a similar argument yields P(X1 = i1, . . . , Xn = in | X0 = i0) = p_{i0 i1} p_{i1 i2} · · · p_{in-1 in}.

For the spider-and-fly example, the probability of any particular sequence of fly positions is obtained by multiplying the corresponding transition probabilities along the path.

n-step transition probabilities. Many Markov chain problems require the calculation of the probability law of the state at some future time, conditioned on the current state. This probability law is captured by the n-step transition probabilities, defined by rij(n) = P(Xn = j | X0 = i). In words, rij(n) is the probability that the state after n time periods will be j, given that the current state is i. It can be calculated using the following basic recursion, known as the Chapman-Kolmogorov equation.

Chapman-Kolmogorov equation: rij(n) = Σk rik(n - 1) pkj, starting with rij(1) = pij.

To verify the formula, we apply the total probability theorem as follows: rij(n) = P(Xn = j | X0 = i) = Σk P(Xn = j | Xn-1 = k, X0 = i) P(Xn-1 = k | X0 = i) = Σk pkj rik(n - 1), where the middle step conditions on the state after n - 1 periods and the last step uses the Markov property.
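In matrix form, the Chapman-Kolmogorov recursion says the n-step probabilities rij(n) are the entries of the nth power of the transition matrix. A sketch using Alice's chain from the earlier exercise:

```python
def mat_mul(A, B):
    """Square-matrix product: one step of the Chapman-Kolmogorov recursion."""
    size = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]

# Alice's chain: state 0 = up-to-date, state 1 = behind.
P = [[0.8, 0.2], [0.6, 0.4]]
R = P
for _ in range(19):  # after the loop, R = P^20 holds the 20-step probabilities
    R = mat_mul(R, P)
print(R)  # both rows are already close to the steady state (0.75, 0.25)
```

Both rows of P^20 nearly coincide: for large n, rij(n) no longer depends on the initial state i, which previews the steady-state convergence theorem below.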

Classification of states. Some states, after being visited once, are certain to be revisited, while for other states this may not be the case. We say that a state j is accessible from a state i if for some n the n-step transition probability rij(n) is positive, i.e., if there is positive probability of reaching j, starting from i, after some number of time periods. Let A(i) be the set of states that are accessible from i. We say that i is recurrent if for every j that is accessible from i, i is also accessible from j; that is, for all j in A(i) we have that i belongs to A(j). A state is called transient if it is not recurrent; in particular, there are then states j in A(i) such that i is not accessible from j.

CLASSIFICATION OF STATES

If i is a recurrent state, the states in A(i) form a recurrent class (or simply class), meaning that states in A(i) are all accessible from each other, and no state outside A(i) is accessible from them. For a recurrent state i, we have A(i) = A(j) for all j that belong to A(i), as can be seen from the definition of recurrence.
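These definitions translate directly into code: compute A(i) by graph reachability over transitions with positive probability, then test whether i ∈ A(j) for every j ∈ A(i). A sketch (the 3-state chain is a hypothetical example with one transient state):

```python
def accessible_from(P, i):
    """Return A(i): the set of states reachable from i with positive probability."""
    A, stack = {i}, [i]
    while stack:
        k = stack.pop()
        for j, p in enumerate(P[k]):
            if p > 0 and j not in A:
                A.add(j)
                stack.append(j)
    return A

def is_recurrent(P, i):
    """State i is recurrent iff i is accessible from every j in A(i)."""
    return all(i in accessible_from(P, j) for j in accessible_from(P, i))

# Hypothetical chain: states 0 and 1 form a recurrent class;
# state 2 can reach them but is never revisited, so it is transient.
P = [[0.5, 0.5, 0.0],
     [0.7, 0.3, 0.0],
     [0.4, 0.0, 0.6]]
print([is_recurrent(P, i) for i in range(3)])  # → [True, True, False]
```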

Markov Chain Decomposition

Single class of recurrent states

Every state can be revisited, so all states are recurrent.

Single class of recurrent states (1 and 2) and one transient state (3)

States 1 and 2 can be revisited, whereas state 3 cannot be revisited.

Classes

Two classes of recurrent states (the class {1} and the class {4, 5}) and two transient states (2 and 3).

Periodicity

A recurrent class can be further characterized by the presence or absence of a certain periodic pattern in the times at which a state is visited. In particular, a recurrent class is said to be periodic if its states can be grouped in d > 1 disjoint subsets S_1, . . . , S_d so that all transitions from one subset lead to the next subset.
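One simple, brute-force way to check this is to take the gcd of the return times n for which r_ii(n) > 0: a recurrent class is aperiodic exactly when this gcd is 1. A sketch, assuming the whole chain is a single recurrent class (the matrices are made-up examples):

```python
from math import gcd
import numpy as np

def period(P, i, n_max=50):
    """gcd of all return times n <= n_max with r_ii(n) > 0 (brute force);
    the class of state i is aperiodic when this gcd equals 1."""
    d = 0
    R = np.eye(len(P))
    for n in range(1, n_max + 1):
        R = R @ P
        if R[i, i] > 1e-12:
            d = gcd(d, n)
    return d

# Hypothetical examples:
flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])   # deterministic alternation: period 2
lazy = np.array([[0.5, 0.5],
                 [1.0, 0.0]])   # positive self-transition at state 0: aperiodic
print(period(flip, 0), period(lazy, 0))  # → 2 1
```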


Steady-State Convergence Theorem

Consider a Markov chain with a single recurrent class, which is aperiodic. Then, the states j are associated with steady-state probabilities π_j that have the following properties:
(a) lim_{n→∞} r_ij(n) = π_j, for all i and j.
(b) The π_j are the unique solution of the system π_j = Σ_k π_k p_kj, together with Σ_k π_k = 1.
(c) π_j = 0 for all transient states j, and π_j > 0 for all recurrent states j.

Stationary distribution

Since the steady-state probabilities π_j sum to 1, they form a probability distribution on the state space, called the stationary distribution. If the initial state is chosen according to this distribution, i.e., P(X_0 = j) = π_j, then, using the total probability theorem, we have

P(X_1 = j) = Σ_k P(X_0 = k) p_kj = Σ_k π_k p_kj = π_j.

Thus, if the initial state is chosen according to the stationary distribution, all subsequent states will have the same distribution.

Balance equations

Taking the limit as n → ∞ in the Chapman-Kolmogorov equation r_ij(n) = Σ_k r_ik(n − 1) p_kj yields the balance equations

π_j = Σ_k π_k p_kj,

which, together with the normalization equation Σ_k π_k = 1, determine the steady-state probabilities.

Exercise

Consider a two-state Markov chain with transition probabilities p_11, p_12, p_21, p_22. The balance equations are

π_1 = π_1 p_11 + π_2 p_21,  π_2 = π_1 p_12 + π_2 p_22.

Note that the above two equations are dependent, since they are both equivalent to

π_1 p_12 = π_2 p_21.
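For any concrete choice of transition probabilities, the stationary distribution can be found by solving the balance equations together with the normalization condition. A sketch (the numbers below are illustrative, not the ones from the slide):

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 via least squares."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # balance + normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Illustrative two-state chain.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
pi = stationary_distribution(P)
# Check against the dependent-equation form: pi_1 * p_12 == pi_2 * p_21.
print(pi)  # → roughly [0.75, 0.25]
```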


Homework-Exercise

Example 6.5. An absent-minded professor has two umbrellas that she uses when commuting from home to the office and back. If it rains and an umbrella is available in her location, she takes it. If it is not raining, she always forgets to take an umbrella. Suppose that it rains with probability p each time she commutes, independently of other times. What is the steady-state probability that she gets wet on a given day?
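A Monte Carlo simulation is a handy way to sanity-check whatever steady-state answer you derive. The sketch below simulates the commutes directly; the value of p and the trip count are arbitrary choices for illustration:

```python
import random

def wet_fraction(p, trips=200_000, seed=0):
    """Fraction of commutes on which the professor gets wet:
    it rains and no umbrella is at her current location."""
    rng = random.Random(seed)
    here = 2  # umbrellas at her current location (2 - here at the other end)
    wet = 0
    for _ in range(trips):
        if rng.random() < p:           # it rains on this commute
            if here == 0:
                wet += 1               # no umbrella available: she gets wet
                here = 2               # the destination holds both umbrellas
            else:
                here = (2 - here) + 1  # she brings one umbrella along
        else:
            here = 2 - here            # she forgets; umbrellas stay put
    return wet / trips

print(round(wet_fraction(0.5), 3))
```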

Birth-Death Processes

A birth-death process is a Markov chain in which the states are linearly arranged and transitions can only occur to a neighbouring state, or else leave the state unchanged.

Transition probability graph for a birth-death process

For a birth-death process, the balance equations can be substantially simplified. Let us focus on two neighbouring states, say, i and i + 1. In any trajectory of the Markov chain, a transition from i to i + 1 has to be followed by a transition from i + 1 to i, before another transition from i to i + 1 can occur. Therefore, the frequency of transitions from i to i + 1, which is π_i b_i, must be equal to the frequency of transitions from i + 1 to i, which is π_{i+1} d_{i+1}. This leads to the local balance equation

π_i b_i = π_{i+1} d_{i+1}.
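The local balance equation gives π_{i+1} = π_i b_i / d_{i+1}, so the stationary distribution can be built up state by state and then normalized. A sketch with made-up birth/death probabilities:

```python
def birth_death_stationary(b, d):
    """Stationary distribution of a birth-death chain from local balance:
    pi[i+1] = pi[i] * b[i] / d[i+1].  b[i] is the up-probability from
    state i, d[i] the down-probability from state i."""
    pi = [1.0]
    for i in range(len(b) - 1):
        pi.append(pi[i] * b[i] / d[i + 1])
    total = sum(pi)
    return [x / total for x in pi]

# Hypothetical 4-state chain with constant birth/death probabilities.
b = [0.3, 0.3, 0.3, 0.0]   # no birth from the top state
d = [0.0, 0.6, 0.6, 0.6]   # no death from the bottom state
pi = birth_death_stationary(b, d)
# With b/d = 1/2, each successive state is half as likely: pi = [8,4,2,1]/15.
```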


Random Walk with Reflecting Barriers

A person walks along a straight line and, at each time period, takes a step to the right with probability b, and a step to the left with probability 1 − b. The person starts in one of the positions 1, 2, . . . , m, but if he reaches position 0 (or position m + 1), his step is instantly reflected back to position 1 (or position m, respectively). Equivalently, we may assume that when the person is in position 1 or m, he stays in that position with probability 1 − b and b, respectively. This gives a Markov chain model whose states are the positions 1, . . . , m.

Random Walk with Reflecting Barriers

The local balance equations are

π_i b = π_{i+1}(1 − b),  i = 1, . . . , m − 1.

Thus, π_{i+1} = ρ π_i, where ρ = b/(1 − b), and π_i = ρ^{i−1} π_1 for all i. Using the normalization Σ_i π_i = 1, we obtain

π_i = ρ^{i−1} / (1 + ρ + · · · + ρ^{m−1}),  i = 1, . . . , m.
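Since local balance gives π_{i+1} = ρ π_i with ρ = b/(1 − b), the stationary probabilities form a truncated geometric sequence. A quick sketch (m and b are arbitrary illustrative values):

```python
def reflecting_walk_stationary(m, b):
    """pi_i = rho**(i-1) / sum_j rho**(j-1), with rho = b / (1 - b),
    for the reflecting random walk on positions 1..m."""
    rho = b / (1 - b)
    weights = [rho ** (i - 1) for i in range(1, m + 1)]
    total = sum(weights)
    return [w / total for w in weights]

pi = reflecting_walk_stationary(m=5, b=0.5)
# b = 1/2 makes rho = 1, so all m positions are equally likely.
print(pi)  # → [0.2, 0.2, 0.2, 0.2, 0.2]
```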

Statistical Multiplexing
Ref. Book: Data Networks by Dimitri Bertsekas and Robert Gallager

In statistical multiplexing, the packets of all traffic streams are merged into a single queue and transmitted or served on a first-come first-serve basis. (Figure: queuing/buffering, showing shared and pledged buffering.)

QUEUEING MODELS - LITTLE'S THEOREM

Customers arrive at random times to obtain service.

Problem statement: estimate the following.
1. The average number of customers in the system (i.e., the "typical" number of customers either waiting in queue or undergoing service).
2. The average delay per customer (i.e., the "typical" time a customer spends waiting in queue plus the service time).

These quantities will be estimated in terms of:
1. The customer arrival rate (i.e., the "typical" number of customers entering the system per unit time).
2. The customer service rate (i.e., the "typical" number of customers the system serves per unit time when it is constantly busy).


Little's Theorem

N: average number of customers in the system
λ: mean arrival rate
T: mean time a customer spends in the system

Little's Theorem: N = λT.


Examples of Little's Theorem

Let λ be the arrival rate of packets at a transmission line, N_Q the average number of packets waiting in queue (but not under transmission), and W the average time spent by a packet waiting in queue (not including the transmission time). Little's Theorem gives

N_Q = λW.

If X̄ is the average transmission time, then Little's Theorem gives

ρ = λX̄,

where ρ is the average number of packets under transmission. Since at most one packet can be under transmission, ρ is also the line's utilization factor, i.e., the proportion of time that the line is busy transmitting a packet.
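These relations are pure arithmetic once the rates are known. The sketch below plugs illustrative numbers (made up, not from the slides) into N_Q = λW and ρ = λX̄:

```python
lam = 40.0    # packet arrival rate (packets/sec); illustrative value
W = 0.05      # average queueing delay per packet (sec); illustrative
xbar = 0.01   # average transmission time (sec); illustrative

N_Q = lam * W           # Little: average number of packets waiting in queue
rho = lam * xbar        # average number under transmission = utilization
N = lam * (W + xbar)    # Little on the whole system: queue plus transmission

print(N_Q, rho, N)
```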

THE M/M/1 QUEUEING SYSTEM

The M/M/1 queuing system consists of a single queuing station with a single server (in a communication context, a single transmission line). Customers arrive according to a Poisson process with rate λ, and the probability distribution of the service time is exponential with mean 1/µ sec.

In this notation, the first letter indicates the nature of the arrival process: M stands for memoryless, which here means a Poisson process (i.e., exponentially distributed interarrival times), G stands for a general distribution of interarrival times, and D stands for deterministic interarrival times (e.g., M/M/1, G/M/1, D/M/1). The second letter indicates the nature of the probability distribution of the service times (M, G, and D stand for exponential, general, and deterministic distributions, respectively). The last number indicates the number of servers.
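A sketch packaging the standard M/M/1 steady-state formulas, ρ = λ/µ, N = ρ/(1 − ρ), and T = N/λ (the latter by Little's theorem); the rates below are illustrative values:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 quantities (requires lam < mu for stability)."""
    assert lam < mu, "queue is unstable unless lambda < mu"
    rho = lam / mu          # utilization factor
    N = rho / (1 - rho)     # average number of customers in the system
    T = N / lam             # average time in system (Little: N = lam * T)
    W = T - 1 / mu          # average time waiting in queue
    N_Q = lam * W           # average number waiting (Little again)
    return {"rho": rho, "N": N, "T": T, "W": W, "N_Q": N_Q}

m = mm1_metrics(lam=8.0, mu=10.0)
# rho = 0.8, N ≈ 4 customers, T ≈ 0.5, W ≈ 0.4, N_Q ≈ 3.2
```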


Markov chain formulation


Discrete-time Markov chain for the M/M/1 system


Average Number of Customers in the System vs. Utilization Factor

Delay (Waiting in the Queue + Service Time)

M/M/m: The m-Server Case

The M/M/m queuing system has m servers. A customer at the head of the queue is routed to any server that is available.

M/M/m: The m-Server Case

Global balance equations for the steady-state probabilities p_n:

λ p_{n−1} = n µ p_n,  n = 1, . . . , m,
λ p_{n−1} = m µ p_n,  n > m.
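Solving these balance equations leads to the standard Erlang C expression for the probability that an arrival finds all m servers busy and must queue. A sketch using that textbook closed form (not transcribed from the slide; the rates are illustrative):

```python
from math import factorial

def erlang_c(lam, mu, m):
    """Probability that an arriving customer must wait in an M/M/m queue,
    with offered load a = lam/mu and rho = a/m < 1."""
    a = lam / mu
    rho = a / m
    assert rho < 1, "unstable system"
    # p0 from the normalization of the steady-state probabilities
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(m))
                + a**m / (factorial(m) * (1 - rho)))
    return (a**m / (factorial(m) * (1 - rho))) * p0

pq = erlang_c(lam=15.0, mu=1.0, m=20)   # 15 Erlangs of load on 20 servers
print(round(pq, 4))
```

For m = 1 the expression reduces to ρ, the M/M/1 probability that the single server is busy, which is a convenient sanity check.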


The m-Server Loss System

Consider a system which is identical to the M/M/m system, except that if an arrival finds all m servers busy, it does not enter the system and is lost instead (it is not reattempted). The last m in the M/M/m/m notation indicates the limit on the number of customers in the system. This model is used widely in telephony (in circuit-switched networks): customers in the system correspond to active telephone conversations, the m servers represent a single transmission line consisting of m circuits, and the average service time 1/µ is the average duration of a telephone conversation. Objective: find the blocking probability, i.e., the steady-state probability that all circuits are busy, in which case an arriving call is refused.
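The blocking probability of this system is given by the Erlang B formula, p_m = (a^m/m!) / Σ_{n=0}^{m} a^n/n! with offered load a = λ/µ. The sketch below uses the standard numerically stable recursion instead of evaluating factorials directly (the traffic values are illustrative):

```python
def erlang_b(a, m):
    """Blocking probability for M/M/m/m via the standard recursion
    B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    B = 1.0
    for k in range(1, m + 1):
        B = a * B / (k + a * B)
    return B

# Illustrative: 10 Erlangs of offered telephone traffic on 12 circuits.
print(round(erlang_b(10.0, 12), 4))
```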


The Infinite-Server Case

Continuous-Time Markov Chains

When the time between transitions takes values from a continuous range, the process is known as a continuous-time Markov chain.

Limit Theorems

THE WEAK LAW OF LARGE NUMBERS

The weak law of large numbers asserts that the sample mean of a large number of independent identically distributed random variables is very close to the true mean, with high probability. Consider a sequence X_1, X_2, . . . of independent identically distributed random variables with mean μ and variance σ², and define the sample mean by

M_n = (X_1 + · · · + X_n) / n.

Then E[M_n] = μ and var(M_n) = σ²/n.


Chebyshev Inequality

The Chebyshev inequality asserts that if the variance of a random variable is small, then the probability that it takes a value far from its mean is also small. Applied to the sample mean M_n, it yields

P(|M_n − μ| ≥ ε) ≤ σ² / (n ε²).

We observe that for any fixed ε > 0, the right-hand side of this inequality goes to zero as n increases.

THE WEAK LAW OF LARGE NUMBERS

For every ε > 0, we have

P(|M_n − μ| ≥ ε) → 0, as n → ∞.
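A quick simulation makes the law visible: the empirical probability that the sample mean deviates from μ by more than ε shrinks as n grows, and stays below the Chebyshev bound σ²/(nε²). A sketch with Bernoulli(1/2) samples (an arbitrary choice of distribution):

```python
import random

def deviation_probability(n, eps, trials=2000, seed=1):
    """Empirical P(|M_n - mu| >= eps) for the mean of n Bernoulli(1/2) draws."""
    rng = random.Random(seed)
    mu = 0.5
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < 0.5 for _ in range(n)) / n
        if abs(mean - mu) >= eps:
            hits += 1
    return hits / trials

small_n, large_n = deviation_probability(20, 0.1), deviation_probability(500, 0.1)
print(small_n, large_n)   # the deviation probability drops sharply with n
```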

Convergence of a Deterministic Sequence

Let a_1, a_2, . . . be a sequence of real numbers, and let a be a real number. We say that the sequence a_n converges to a, or lim_{n→∞} a_n = a, if for every ε > 0 there exists some n_0 such that |a_n − a| ≤ ε for all n ≥ n_0.



The Central Limit Theorem

According to the weak law of large numbers, the distribution of the sample mean M_n is increasingly concentrated in the near vicinity of the true mean μ. In particular, its variance tends to zero. On the other hand, the variance of the sum S_n = X_1 + · · · + X_n = nM_n increases to infinity, and the distribution of S_n cannot be said to converge to anything meaningful. An intermediate view is obtained by considering the deviation S_n − nμ of S_n from its mean nμ, and scaling it by a factor proportional to 1/√n. What is special about this particular scaling is that it keeps the variance at a constant level. The central limit theorem asserts that the distribution of this scaled random variable approaches a normal distribution. More specifically, let X_1, X_2, . . . be a sequence of independent identically distributed random variables with mean μ and variance σ². We define

Z_n = (S_n − nμ) / (σ√n).
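Simulation shows the standardization at work: Z_n has mean 0 and variance 1 for every n, and for moderately large n its distribution is close to standard normal. A sketch using uniform(0, 1) summands (so μ = 1/2 and σ² = 1/12):

```python
import random
from math import sqrt

def simulate_Z(n, samples=50_000, seed=7):
    """Draw realizations of Z_n = (S_n - n*mu) / (sigma*sqrt(n))
    for sums of n uniform(0, 1) random variables."""
    rng = random.Random(seed)
    mu, sigma = 0.5, sqrt(1 / 12)
    return [(sum(rng.random() for _ in range(n)) - n * mu) / (sigma * sqrt(n))
            for _ in range(samples)]

z = simulate_Z(30)
frac_below_1 = sum(v <= 1.0 for v in z) / len(z)
# For a standard normal, P(Z <= 1) = Phi(1) ~ 0.8413; the empirical
# fraction should land close to that value.
print(round(frac_below_1, 3))
```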

The Central Limit Theorem

The CDF of Z_n converges to the standard normal CDF

Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−x²/2} dx,

in the sense that lim_{n→∞} P(Z_n ≤ z) = Φ(z), for every z.


Normal Approximation Based on the Central Limit Theorem

If S_n = X_1 + · · · + X_n, where the X_i are independent identically distributed random variables with mean μ and variance σ², and n is large, the probability P(S_n ≤ c) can be approximated by treating S_n as if it were normal:
1. Calculate the mean nμ and the variance nσ² of S_n.
2. Calculate the normalized value z = (c − nμ)/(σ√n).
3. Use the approximation P(S_n ≤ c) ≈ Φ(z), where Φ(z) is available from standard normal CDF tables.

Exercise

We load on a plane 100 packages whose weights are independent random variables that are uniformly distributed between 5 and 50 pounds. What is the probability that the total weight will exceed 3000 pounds? It is not easy to calculate the CDF of the total weight and the desired probability, but an approximate answer can be quickly obtained using the central limit theorem. We want to calculate P(S_100 > 3000), where S_100 is the sum of the weights of the 100 packages. The mean and the variance of the weight of a single package are

μ = (5 + 50)/2 = 27.5,  σ² = (50 − 5)²/12 = 168.75.

Based on the formulas for the mean and variance of the uniform PDF, we thus calculate the normalized value

z = (3000 − 100 · 27.5) / √(100 · 168.75) = 250/129.9 = 1.92,

and use the approximation P(S_100 ≤ 3000) ≈ Φ(1.92) = 0.9726, so that the desired probability is P(S_100 > 3000) ≈ 1 − 0.9726 = 0.0274.
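The same normal approximation is easy to evaluate directly with the error function, Φ(z) = (1 + erf(z/√2))/2 (a table lookup at z = 1.92 gives 0.0274; direct evaluation of Φ at the unrounded z differs only in the fourth decimal):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return (1 + erf(z / sqrt(2))) / 2

n, lo, hi, threshold = 100, 5.0, 50.0, 3000.0
mu = (lo + hi) / 2                  # 27.5 pounds per package
var = (hi - lo) ** 2 / 12           # 168.75
z = (threshold - n * mu) / sqrt(n * var)
p_exceed = 1 - Phi(z)
print(round(z, 2), round(p_exceed, 4))  # z ~ 1.92, p ~ 0.027
```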
