Description:
- Each variable is modeled as a Markov chain, so the conditional probability of the next state depends only on the last state
- first order Markov Chain
- Past and future independent given the present
- Each time step only depends on the previous
- Stationarity assumption: the transition probabilities are the same at every time step
- transition probability: P(X_t | X_{t-1})
- But we need to update our model (beliefs) as we see new observations
- An underlying Markov chain over hidden states X; we observe outputs (effects) at each time step
- Conditional independence: HMMs have two important independence properties:
- Markov hidden process: future depends on past via the present
- Current observation independent of all else given current state
- Evidence variables are not guaranteed to be independent because they are correlated by the hidden state
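The structure above can be sketched as a tiny HMM. The states, observations, and probabilities below are hypothetical illustrations (a rain/sun weather chain with umbrella observations), not from the source:

```python
import random

# Hypothetical two-state HMM: hidden weather, observed umbrella use.
states = ["rain", "sun"]

# Stationary transition model P(X_t | X_{t-1}): same at every time step.
transition = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}

# Emission model P(E_t | X_t): the observation depends only on the current state.
emission = {
    "rain": {"umbrella": 0.9, "no_umbrella": 0.1},
    "sun":  {"umbrella": 0.2, "no_umbrella": 0.8},
}

def sample(dist):
    """Draw one outcome from a {value: probability} dict."""
    r = random.random()
    cum = 0.0
    for value, p in dist.items():
        cum += p
        if r < cum:
            return value
    return value  # guard against floating-point rounding

def simulate(steps, start="rain"):
    """First-order Markov chain: each next state depends only on the last state."""
    x, trajectory = start, []
    for _ in range(steps):
        x = sample(transition[x])   # Markov step: uses only the previous state
        e = sample(emission[x])     # observation given the current state alone
        trajectory.append((x, e))
    return trajectory
```

Note the conditional independence in the code: `sample(emission[x])` looks only at the current state `x`, never at earlier states or observations.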
Filtering with HMMs:
- Define B(X) as the belief distribution over states
- Filtering, or monitoring, is the task of tracking the distribution B_t(X) = P(X_t | e_1, ..., e_t) over time
- We start with B_1(X) in an initial setting, usually uniform
- As time passes, or we get observations, we update B(X)
Inference: find state given evidence
- We are given evidence at each time step and want to know B_t(X) = P(X_t | e_{1:t})
- Idea: start with P(X_1) and derive B_t in terms of B_{t-1} using the conditional independence assumptions
- Equivalently, derive B_{t+1} in terms of B_t
Passage of time:
- Assume that B(X_t) = P(X_t | e_{1:t})
- After one time step from t, P(X_{t+1} | e_{1:t}) = Σ_{x_t} P(X_{t+1}, x_t | e_{1:t}) = Σ_{x_t} P(X_{t+1} | x_t, e_{1:t}) P(x_t | e_{1:t})
- At t+1 we observe e_{t+1}, but the state X_{t+1} depends only on X_t and doesn't depend on e_{1:t}, so P(X_{t+1} | x_t, e_{1:t}) = P(X_{t+1} | x_t)
- P(X_{t+1} | e_{1:t}) = Σ_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t}), where P(X_t | e_{1:t}) is known recursively: it equals B(X_t)
- Or compactly: B'(X_{t+1}) = Σ_{x_t} P(X_{t+1} | x_t) B(x_t)
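The compact recursion B'(X_{t+1}) = Σ_{x_t} P(X_{t+1} | x_t) B(x_t) is one pass over the states. A minimal sketch, assuming a hypothetical two-state transition model:

```python
# Hypothetical two-state transition model P(X_{t+1} | x_t).
transition = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}

def elapse_time(belief, transition):
    """Time-elapse update: B'(X_{t+1}) = sum over x_t of P(X_{t+1} | x_t) * B(x_t)."""
    new_belief = {s: 0.0 for s in belief}
    for x_t, b in belief.items():           # sum over the previous state x_t
        for x_next, p in transition[x_t].items():
            new_belief[x_next] += p * b
    return new_belief

B = {"rain": 0.5, "sun": 0.5}       # uniform initial belief B_1(X)
B = elapse_time(B, transition)      # for this symmetric model, stays uniform
```

No normalization is needed here: if B(X_t) sums to 1 and each transition row sums to 1, the updated belief also sums to 1.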
Two steps: passage of time + observation
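The two steps can be combined into one filtering update per time step. A sketch assuming the standard observation update B(X_{t+1}) ∝ P(e_{t+1} | X_{t+1}) B'(X_{t+1}), with hypothetical rain/sun and umbrella models:

```python
# Hypothetical two-state models (rain/sun states, umbrella observations).
transition = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun":  {"rain": 0.3, "sun": 0.7}}
emission = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
            "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def filter_step(belief, evidence):
    """One filtering update: passage of time, then observation (with normalization)."""
    # 1. Passage of time: B'(X_{t+1}) = sum over x_t of P(X_{t+1} | x_t) * B(x_t)
    predicted = {s: sum(transition[x][s] * b for x, b in belief.items())
                 for s in belief}
    # 2. Observation: weight by P(e_{t+1} | X_{t+1}), then renormalize
    unnorm = {s: emission[s][evidence] * p for s, p in predicted.items()}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

B = {"rain": 0.5, "sun": 0.5}           # uniform B_1(X)
for e in ["umbrella", "umbrella"]:      # update beliefs as evidence arrives
    B = filter_step(B, e)               # seeing umbrellas shifts belief toward rain
```

The normalization constant z is where the evidence "corrects" the prediction: without it the weighted belief would no longer sum to 1.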