hidden markov model presentation best resources

wereg76514 21 views 16 slides Jun 02, 2024



Slide Content

Hidden Markov Model

A Hidden Markov Model (HMM) is a statistical model that describes the probabilistic relationship between a sequence of observations and a sequence of hidden states. The hidden states are the underlying variables that generate the observed data, but they are not directly observable. The observations are the variables that are actually measured.

An HMM captures the relationship between the hidden states and the observations using two sets of probabilities: the transition probabilities and the emission probabilities. The transition probabilities describe the probability of moving from one hidden state to another. The emission probabilities describe the probability of observing a given output from a given hidden state.

Consider a scenario with two weather states, "Sunny" and "Rainy", and two possible observations, "Umbrella" and "No Umbrella".

Transition probabilities:
P(Sunny to Sunny) = 0.7
P(Sunny to Rainy) = 0.3
P(Rainy to Sunny) = 0.4
P(Rainy to Rainy) = 0.6

Emission probabilities:
P(Umbrella | Sunny) = 0.2
P(No Umbrella | Sunny) = 0.8
P(Umbrella | Rainy) = 0.6
P(No Umbrella | Rainy) = 0.4

Initial state probabilities:
P(Initial Sunny) = 0.6
P(Initial Rainy) = 0.4
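These parameters can be encoded directly as tables; a minimal Python sketch (the dictionary layout is my own, the numbers are the slide's):

```python
# The weather HMM from the example, encoded as plain dictionaries.
states = ["Sunny", "Rainy"]
observations = ["Umbrella", "No Umbrella"]

# Transition probabilities: P(next state | current state)
trans = {
    "Sunny": {"Sunny": 0.7, "Rainy": 0.3},
    "Rainy": {"Sunny": 0.4, "Rainy": 0.6},
}

# Emission probabilities: P(observation | hidden state)
emit = {
    "Sunny": {"Umbrella": 0.2, "No Umbrella": 0.8},
    "Rainy": {"Umbrella": 0.6, "No Umbrella": 0.4},
}

# Initial state probabilities
start = {"Sunny": 0.6, "Rainy": 0.4}

# Sanity check: every probability distribution must sum to 1.
for dist in [start, *trans.values(), *emit.values()]:
    assert abs(sum(dist.values()) - 1.0) < 1e-9
```

Each row of the transition and emission tables is a conditional distribution, which is why every inner dictionary must sum to one.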

Viterbi algorithm

The Viterbi algorithm is a dynamic programming algorithm for making inferences from a trained model and some observed data: given an observation sequence, it determines the most likely sequence of hidden states.

1. Initialization (t=1): start with the initial state probabilities, which represent your belief about the possible hidden states at the first time step. The probability of each state being the starting state is its initial state probability times the emission probability for the first observation.

2. Recursion (t=2 to the end of the sequence): for each state at the current time step, multiply each probability from the previous time step by the transition probability into that state and by the emission probability for the current observation, and keep the maximum. Store the maximum probability and the corresponding back pointer (the predecessor state that achieved it) for each state at each time step.

3. Termination: pick the state with the highest probability at the final time step and follow the back pointers to recover the most likely state sequence.

Initial step (t=1):
P(Sunny at t=1) = P(Initial Sunny) * P(Umbrella | Sunny) = 0.6 * 0.2 = 0.12
P(Rainy at t=1) = P(Initial Rainy) * P(Umbrella | Rainy) = 0.4 * 0.6 = 0.24

Next step (t=2):
P(Sunny at t=2) = max(P(Sunny at t=1) * P(Sunny to Sunny), P(Rainy at t=1) * P(Rainy to Sunny)) * P(Umbrella | Sunny) = max(0.12 * 0.7, 0.24 * 0.4) * 0.2 = 0.096 * 0.2 = 0.0192 (best predecessor: Rainy)
P(Rainy at t=2) = max(P(Sunny at t=1) * P(Sunny to Rainy), P(Rainy at t=1) * P(Rainy to Rainy)) * P(Umbrella | Rainy) = max(0.12 * 0.3, 0.24 * 0.6) * 0.6 = 0.144 * 0.6 = 0.0864 (best predecessor: Rainy)
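The procedure can be sketched in a few lines of Python; the function and variable names (viterbi, V, back) are my own, while the probabilities are the weather example's:

```python
# Minimal Viterbi sketch for the weather example above.
def viterbi(obs, states, start, trans, emit):
    # Initialization (t=1): prior times emission for the first observation.
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = [{}]
    # Recursion: for each state, keep the best path probability and a
    # back pointer to the predecessor that achieved it.
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] * trans[p][s])
            V[t][s] = V[t - 1][prev] * trans[prev][s] * emit[s][obs[t]]
            back[t][s] = prev
    # Termination: follow back pointers from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return V, path[::-1]

states = ["Sunny", "Rainy"]
start = {"Sunny": 0.6, "Rainy": 0.4}
trans = {"Sunny": {"Sunny": 0.7, "Rainy": 0.3},
         "Rainy": {"Sunny": 0.4, "Rainy": 0.6}}
emit = {"Sunny": {"Umbrella": 0.2, "No Umbrella": 0.8},
        "Rainy": {"Umbrella": 0.6, "No Umbrella": 0.4}}

V, path = viterbi(["Umbrella", "Umbrella"], states, start, trans, emit)
# V[0]: Sunny 0.12, Rainy 0.24; V[1]: Sunny 0.0192, Rainy 0.0864
# Most likely state sequence: Rainy, Rainy
```

Running it on the two-observation sequence "Umbrella, Umbrella" reproduces the worked values, with Rainy as the best predecessor of both t=2 states.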

Mr. X is happy on some days and angry on others. We can only observe whether he smiles, frowns, laughs, or yells, not his actual emotional state. Let us start on day 1 in the happy state. There is exactly one state transition per day, either to the happy state or to the angry state.

1. What is P(q2 = Happy)? We know he was happy on day 1, so
P(q2 = Happy) = P(q2 = Happy | q1 = Happy) = 0.8.

2. What is P(o2 = frown)? We do not know his state on day 2 (we only know he was happy on day 1), so the probability of the observation is the sum, over all possible hidden state transitions, of the transition probability times the observation probability:
P(o2 = frown) = P(Happy | Happy) * P(frown | Happy) + P(Angry | Happy) * P(frown | Angry) = 0.8 * 0.1 + 0.2 * 0.5 = 0.18.

3. What is P(q2 = Happy | o2 = frown)? By Bayes' rule,
P(q2 = Happy | o2 = frown) = P(o2 = frown | q2 = Happy) * P(q2 = Happy) / P(o2 = frown) = (0.1 * 0.8) / 0.18 ≈ 0.444.
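Questions 2 and 3 can be checked numerically; a minimal sketch, using the emission probabilities P(frown | Happy) = 0.1 and P(frown | Angry) = 0.5 that appear in question 4, and P(Happy | Happy) = 0.8 from question 1:

```python
# Probabilities for Mr. X's example (emissions taken from question 4).
p_HH, p_AH = 0.8, 0.2        # P(Happy|Happy), P(Angry|Happy)
p_f_H, p_f_A = 0.1, 0.5      # P(frown|Happy), P(frown|Angry)

# Question 2: marginalize over the unknown day-2 state
# (we know q1 = Happy with certainty).
p_o2_frown = p_HH * p_f_H + p_AH * p_f_A          # 0.8*0.1 + 0.2*0.5 = 0.18

# Question 3: Bayes' rule for the posterior of the day-2 state.
p_H2_given_frown = (p_f_H * p_HH) / p_o2_frown    # 0.08 / 0.18 ≈ 0.444
```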

4. What is P(o1 = frown, o2 = frown, o3 = frown, o4 = frown, o5 = frown, q1 = Happy, q2 = Angry, q3 = Angry, q4 = Angry, q5 = Angry) if π = [0.7, 0.3]? That is, the probability of the observation sequence "frown frown frown frown frown" jointly with the state sequence "Happy Angry Angry Angry Angry", where π holds the initial probabilities. The first transition is Happy to Angry and the remaining three are Angry to Angry (the arithmetic takes P(A|A) = 0.2, the same value as P(A|H)):
P(f f f f f, H A A A A)
= P(f|H) * P(f|A) * P(f|A) * P(f|A) * P(f|A) * π(H) * P(A|H) * P(A|A) * P(A|A) * P(A|A)
= 0.1 * 0.5 * 0.5 * 0.5 * 0.5 * 0.7 * 0.2 * 0.2 * 0.2 * 0.2
= 0.000007
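The chain product above generalizes to any fixed (state, observation) path; a minimal sketch, assuming P(Angry | Angry) = 0.2 and hence P(Happy | Angry) = 0.8, which is what the product's numbers imply:

```python
# Joint probability of one fixed state/observation path:
# pi(q1) * emit(o1|q1) * product over t of trans(q_t|q_{t-1}) * emit(o_t|q_t)
def joint_prob(state_seq, obs_seq, pi, trans, emit):
    p = pi[state_seq[0]] * emit[state_seq[0]][obs_seq[0]]
    for t in range(1, len(state_seq)):
        p *= trans[state_seq[t - 1]][state_seq[t]] * emit[state_seq[t]][obs_seq[t]]
    return p

pi = {"H": 0.7, "A": 0.3}
# P(A|A) = 0.2 (and P(H|A) = 0.8) is an assumption consistent with the
# slide's product 0.2 * 0.2 * 0.2 * 0.2.
trans = {"H": {"H": 0.8, "A": 0.2}, "A": {"H": 0.8, "A": 0.2}}
emit = {"H": {"frown": 0.1}, "A": {"frown": 0.5}}

p = joint_prob(list("HAAAA"), ["frown"] * 5, pi, trans, emit)  # ≈ 0.000007
```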