Module 2 - Approximate inference in graphical models

Bayesian models are at the core of many machine learning applications. In such models, random variables are used to represent both the observed data and the unknown (or latent) variables of the model. Due to their probabilistic nature, these models can systematically represent and cope with the uncertainty that is inherent in most data. Furthermore, they offer a high degree of flexibility in constructing complex models from simple parts. For instance, this can be accomplished by modeling both observed and latent variables using a probabilistic graphical model, which encodes the structure of the model through the conditional independencies among these variables.
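
For concreteness, here is a standard textbook illustration (not specific to this module; see Chapter 8 of PRML under Prerequisites below): a directed graphical model over variables x_1, \ldots, x_n encodes these independencies through a factorization of the joint distribution,

    p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid \mathrm{pa}(x_i)),

where \mathrm{pa}(x_i) denotes the parents of x_i in the graph. Each missing edge in the graph thus corresponds to a conditional independence assumption.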

A central task in Bayesian modeling is to infer the latent variables from data; that is, to compute the posterior probability distribution over the latent variables conditioned on the data. In most cases this leads to an intractable integration problem, which calls for approximate inference algorithms. In this course module we will introduce some of the most widely used algorithms of this type, including Markov chain Monte Carlo, approximate message-passing, and variational inference, with a particular emphasis on inference in probabilistic graphical models.
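
To make the inference problem concrete: writing z for the latent variables and x for the data, Bayes' theorem gives the posterior

    p(z \mid x) = \frac{p(x \mid z)\, p(z)}{p(x)},  where  p(x) = \int p(x \mid z)\, p(z)\, dz,

and it is typically the normalizing integral p(x) that cannot be computed in closed form, which is what the approximate methods above are designed to work around.

As a small taste of what such a method looks like, the sketch below implements random-walk Metropolis, a basic MCMC algorithm that draws approximate samples from a target density known only up to its normalizing constant. This is a minimal illustration with an arbitrary toy target, not part of the course material:

    import numpy as np

    def metropolis(log_target, x0, n_samples, step=0.5, rng=None):
        # Random-walk Metropolis: sample from a density known up to a constant.
        rng = np.random.default_rng() if rng is None else rng
        x = x0
        samples = np.empty(n_samples)
        for i in range(n_samples):
            proposal = x + step * rng.standard_normal()       # symmetric random-walk proposal
            log_alpha = log_target(proposal) - log_target(x)  # log acceptance ratio
            if np.log(rng.uniform()) < log_alpha:             # accept with probability min(1, alpha)
                x = proposal
            samples[i] = x
        return samples

    # Toy target: unnormalized standard normal, log p(x) = -x**2 / 2.
    samples = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)
    print(samples.mean(), samples.var())  # should be close to 0 and 1

Note that the acceptance ratio only involves a ratio of target densities, so the unknown normalizing constant p(x) cancels; this is precisely what makes MCMC applicable to posterior inference.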


Teachers

Fredrik Lindsten, fredrik.lindsten@liu.se (examiner)
Johan Alenlöv, johan.alenlov@liu.se
David Broman, dbro@kth.se


Examination

The basic examination for the course is a hand-in assignment that is to be solved in groups of two students. The deadline for handing in your solutions is November 18, 12.00. There will be a couple of practical sessions during the course days, where you will be able to work together on your assignment.

Please sign up for a group no later than October 22 (go to People->Module 2: Group), since the same groups will be used for peer review of the preparatory exercise (see below). 

Specialization: The specialization for Module 2 consists of an additional set of hand-in assignments, to be solved individually on top of the basic examination. (Hence, you can sign up for a group for the basic examination regardless of whether your teammate does the specialization or not.) More information will be posted before the start of the course.

Prerequisites

To get the most out of the course days in Linköping, you should be familiar with Bayesian statistics, graphical models and basic message-passing algorithms.

  • For an introduction to Bayesian statistics, we recommend reading Sections 1.1-1.2 of Chris Bishop's book Pattern Recognition and Machine Learning, Springer 2006 [PRML].
  • Chapter 2 of PRML introduces several probability distributions that will be used in the module, as well as the notion of conjugate priors. This chapter will also be useful for solving the preparatory exercise.
  • Graphical models and exact inference on trees are covered in module 1 of this course, but if you want to refresh your memory you can have a look at Chapter 8 of PRML.

Preparations

For everyone to get some hands-on experience with Bayesian modeling, we ask you to solve a short preparatory exercise before the first session. The exercise is to be solved individually. It will not be graded by the teachers; instead, we ask you to send your solutions to your teammate for peer review.

The deadline is October 23, 17:00. When you receive the solutions from your teammate, have a quick look, and if you spot any issues or anything is unclear, discuss them with your teammate so that you are on the same page when you come to the course.

In summary:

  • Find a teammate by signing up for a group no later than October 22, 17:00. (People->Module 2: Group)
  • Solve the preparatory exercise on your own and send your solutions to your teammate by October 23, 17:00.
  • Discuss the solutions with your teammate if you think it's necessary.


Schedule 

Please see this page for a detailed schedule.