Definition:

  • Given a Bayes' net, how can we collect samples from it?

Sample from Given Distribution

Prior Sampling
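As a minimal sketch of prior sampling: draw each variable in topological order from P(X | Parents(X)), ignoring any evidence. The two-node net Cloudy → Rain and all its CPT numbers below are invented for illustration, not from the notes.

```python
import random

# Hypothetical two-node net: Cloudy -> Rain (numbers invented).
P_CLOUDY = 0.5                           # P(+c)
P_RAIN_GIVEN = {True: 0.8, False: 0.2}   # P(+r | cloudy)

def prior_sample(rng=random):
    """One prior sample: draw each variable in topological order
    from P(X | Parents(X))."""
    cloudy = rng.random() < P_CLOUDY
    rain = rng.random() < P_RAIN_GIVEN[cloudy]
    return {"Cloudy": cloudy, "Rain": rain}
```

With many such samples, the empirical frequency of any event converges to its probability under the net's joint distribution.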

Rejection Sampling:

  • For conditional queries, keep only the samples consistent with the evidence; e.g., if the evidence is that a variable is positive, reject every sample in which that variable is not positive
  • If the evidence is unlikely, most samples will be rejected
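A sketch of rejection sampling on a hypothetical two-node net Cloudy → Rain (all CPT numbers invented): draw prior samples and keep only those that match the evidence.

```python
import random

# Hypothetical two-node net: Cloudy -> Rain (numbers invented).
P_CLOUDY = 0.5                           # P(+c)
P_RAIN_GIVEN = {True: 0.8, False: 0.2}   # P(+r | cloudy)

def rejection_sample(evidence, n, rng=random):
    """Draw n prior samples; keep only those consistent with the evidence."""
    kept = []
    for _ in range(n):
        cloudy = rng.random() < P_CLOUDY
        rain = rng.random() < P_RAIN_GIVEN[cloudy]
        sample = {"Cloudy": cloudy, "Rain": rain}
        if all(sample[var] == val for var, val in evidence.items()):
            kept.append(sample)
    return kept
```

Note the waste this illustrates: every rejected sample is discarded work, and if the evidence is rare almost nothing survives.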

Likelihood Weighting:

  • Fix the evidence variables to their observed values and sample the rest; weight each sample by the probability of the evidence given its parents
    • Evidence variables are clamped to their observed values rather than sampled
    • Each sample carries its own weight
  • Instead of rejecting samples that disagree with the evidence, multiply in a weight reflecting how likely the evidence values are given the sampled parents
  • For each sample:
    • Input: evidence instantiation
    • w = 1.0
    • For i = 1, 2, …, n:
      • If Xi is an evidence variable:
        • xi = observation for Xi
        • w = w * P(xi | Parents(Xi))
      • Else: sample xi from P(Xi | Parents(Xi))
    • Return (x1, …, xn), w
  • Likelihood weighting doesn’t solve all our problems
    • Evidence only influences the choice of downstream variables, but not upstream ones (root variable isn’t more likely to get a value matching the evidence)
  • We would like to consider evidence when we sample every variable (leads to Gibbs sampling)
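The procedure above can be sketched on the same hypothetical two-node net Cloudy → Rain (CPT numbers invented): Rain is clamped to its observed value and the sample's weight picks up P(rain | cloudy).

```python
import random

# Hypothetical two-node net: Cloudy -> Rain (numbers invented).
P_CLOUDY = 0.5                           # P(+c)
P_RAIN_GIVEN = {True: 0.8, False: 0.2}   # P(+r | cloudy)

def weighted_sample(evidence, rng=random):
    """One likelihood-weighted sample: clamp evidence variables and
    multiply their probability given sampled parents into the weight."""
    w = 1.0
    sample = {}
    sample["Cloudy"] = rng.random() < P_CLOUDY      # not evidence: sample it
    if "Rain" in evidence:
        sample["Rain"] = evidence["Rain"]           # clamp to observation
        p = P_RAIN_GIVEN[sample["Cloudy"]]
        w *= p if evidence["Rain"] else (1.0 - p)   # w *= P(rain | cloudy)
    else:
        sample["Rain"] = rng.random() < P_RAIN_GIVEN[sample["Cloudy"]]
    return sample, w

def estimate(query_var, evidence, n, rng=random):
    """Weighted frequency of query_var=True over n weighted samples."""
    num = den = 0.0
    for _ in range(n):
        s, w = weighted_sample(evidence, rng)
        den += w
        if s[query_var]:
            num += w
    return num / den
```

No sample is ever rejected here, but notice the failure mode the notes describe: Cloudy (upstream of the evidence) is still sampled from its prior, so unlikely evidence just produces many tiny weights.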

Gibbs Sampling:

  • Procedure: keep track of a full instantiation x1, x2, …, xn. Start with an arbitrary instantiation consistent with the evidence. Sample one variable at a time, conditioned on all the rest, but keep the evidence fixed. Keep repeating this for a long time.
  • Property: in the limit of repeating this infinitely many times the resulting samples come from the correct distribution (i.e. conditioned on evidence).
  • Rationale: both upstream and downstream variables condition on evidence.
  • In contrast: likelihood weighting only conditions on upstream evidence, and hence the weights it obtains can sometimes be very small. The sum of weights over all samples indicates how many “effective” samples were obtained, so we want high weights.
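A minimal Gibbs sketch on a hypothetical three-node net Cloudy → Sprinkler, Cloudy → Rain (all CPT numbers invented): with Rain observed, the chain repeatedly resamples each non-evidence variable from its distribution conditioned on everything else (its Markov blanket).

```python
import random

# Hypothetical net: Cloudy -> Sprinkler, Cloudy -> Rain (numbers invented).
P_C = 0.5                        # P(+c)
P_S = {True: 0.1, False: 0.5}    # P(+s | cloudy)
P_R = {True: 0.8, False: 0.2}    # P(+r | cloudy)

def cond_cloudy(state):
    """P(Cloudy=True | everything else): proportional to
    P(c) * P(sprinkler | c) * P(rain | c)  (Cloudy's Markov blanket)."""
    def joint(c):
        p = P_C if c else 1.0 - P_C
        p *= P_S[c] if state["Sprinkler"] else 1.0 - P_S[c]
        p *= P_R[c] if state["Rain"] else 1.0 - P_R[c]
        return p
    return joint(True) / (joint(True) + joint(False))

def gibbs(evidence, n_steps, burn_in, rng=random):
    """Start from an instantiation consistent with the evidence, then
    resample one non-evidence variable at a time, evidence held fixed."""
    state = {"Cloudy": True, "Sprinkler": True, **evidence}
    samples = []
    for t in range(n_steps):
        state["Cloudy"] = rng.random() < cond_cloudy(state)
        if "Sprinkler" not in evidence:
            # Sprinkler has no children, so its conditional is just P(s | c).
            state["Sprinkler"] = rng.random() < P_S[state["Cloudy"]]
        if t >= burn_in:
            samples.append(dict(state))
    return samples
```

Because Cloudy is resampled conditioned on the observed Rain, the upstream variable is pulled toward the evidence at every step, which is exactly what likelihood weighting fails to do.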