Description:
- Each variable is modeled as a Markov chain, so the conditional probability of the next state depends only on the current state
- first order Markov Chain
- Past and future independent given the present
- Each time step only depends on the previous
- Stationarity assumption: the transition probabilities are the same at every time step
- transition probability: P(Xt∣Xt−1)
- But we need to update our model (beliefs) as we see new observations
- Underlying Markov chain over hidden states X; we observe outputs (effects) at each time step
- Conditional independence: HMMs have two important independence properties:
- Markov hidden process: future depends on past via the present
- Current observation independent of all else given current state
- Evidence variables are not guaranteed to be independent of each other; they are correlated through the hidden state
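As a concrete sketch, an HMM is specified by a transition model and an emission model (plus an initial distribution). The two-state weather/umbrella model below is hypothetical, with made-up probabilities chosen only for illustration:

```python
# Hypothetical two-state HMM: hidden states are the weather,
# observations are whether we see an umbrella. Values are illustrative.

STATES = ["rain", "sun"]
OBSERVATIONS = ["umbrella", "no_umbrella"]

# Transition model P(X_{t+1} | X_t): outer key is X_t, inner key is X_{t+1}.
# The same table is used at every step (stationarity assumption).
T = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}

# Emission model P(E_t | X_t): the observation depends only on the current state.
E = {
    "rain": {"umbrella": 0.9, "no_umbrella": 0.1},
    "sun":  {"umbrella": 0.2, "no_umbrella": 0.8},
}

# Sanity check: every conditional distribution must sum to 1.
for row in list(T.values()) + list(E.values()):
    assert abs(sum(row.values()) - 1.0) < 1e-9
```

Each row of T and E is a conditional distribution given one value of the conditioning variable, which is why the rows (not the columns) must sum to 1.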
Filtering with HMMs:
- Define B(X) as the belief distribution over states
- Filtering, or monitoring, is the task of tracking the distribution Bt(X)=P(Xt∣e1,...,et) over time
- We start with B1(X) in an initial setting, usually uniform
- As time passes, or we get observations, we update B(X)
Inference: find state given evidence
- We are given evidence at each time and want to know Bt(X)=P(Xt∣e1:t)
- Idea: start with B1(X)=P(X1) and derive Bt+1 recursively in terms of Bt, using the conditional independence properties above
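The recursion above can be sketched as a single update function that maps Bt and a new observation to Bt+1 (predict under the transition model, weight by the evidence likelihood, normalize). The transition model T and emission model E below are hypothetical illustration values, not part of the notes:

```python
# Hypothetical two-state model (made-up probabilities for illustration).
T = {"rain": {"rain": 0.7, "sun": 0.3},
     "sun":  {"rain": 0.3, "sun": 0.7}}          # P(X_{t+1} | X_t)
E = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
     "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}  # P(e | X)

def filter_step(belief, evidence, T, E):
    """One recursive filtering update: B(Xt) and e_{t+1} -> B(X_{t+1})."""
    states = list(belief)
    # Passage of time: P(X_{t+1} | e_{1:t}) = sum_{x_t} P(X_{t+1} | x_t) B(x_t)
    predicted = {x2: sum(T[x1][x2] * belief[x1] for x1 in states)
                 for x2 in states}
    # Observation: multiply by P(e_{t+1} | X_{t+1}), then renormalize.
    unnorm = {x: E[x][evidence] * predicted[x] for x in states}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

# Start uniform (B1) and fold in a sequence of observations.
belief = {"rain": 0.5, "sun": 0.5}
for e in ["umbrella", "umbrella", "no_umbrella"]:
    belief = filter_step(belief, e, T, E)
```

After two umbrella sightings the belief shifts toward rain; the final no-umbrella observation pulls it back toward sun.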
Passage of time:
- Assume that B(Xt)=P(Xt∣e1:t)
- After 1 time step from t, P(Xt+1∣e1:t)=∑xtP(Xt+1,xt∣e1:t)=∑xtP(Xt+1∣xt,e1:t)P(xt∣e1:t)
- At t+1 we will observe et+1, but in this step the state Xt+1 depends only on xt: given xt, it is independent of e1:t (Markov property)
- So P(Xt+1∣e1:t)=∑xtP(Xt+1∣xt)P(xt∣e1:t), where P(Xt∣e1:t) is known recursively as B(Xt)
- Or compactly B′(Xt+1)=∑xtP(Xt+1∣xt)B(xt)
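In code, this compact update is just a marginalization over the current state under the transition model. T below is the same kind of hypothetical two-state transition table used for illustration (made-up values):

```python
# Hypothetical transition model P(X_{t+1} | X_t) (illustration values).
T = {"rain": {"rain": 0.7, "sun": 0.3},
     "sun":  {"rain": 0.3, "sun": 0.7}}

def elapse_time(belief, T):
    """Passage of time: B'(X_{t+1}) = sum_{x_t} P(X_{t+1} | x_t) B(x_t)."""
    return {x2: sum(T[x1][x2] * belief[x1] for x1 in belief)
            for x2 in belief}

# Starting certain of rain, one elapsed step spreads the belief
# according to the transition probabilities out of "rain".
b = elapse_time({"rain": 1.0, "sun": 0.0}, T)  # -> {"rain": 0.7, "sun": 0.3}
```

Note this step uses no evidence at all; folding in et+1 is the separate observation step.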
Two steps: passage of time + observation