Probabilistic Graphical Models

Winter 2021 (COMP766-002)

Administrative

Class: Tuesdays and Thursdays 10:05-11:25 AM
Office hours: Thursdays 11:30-12:30 PM
TA: Arnab Kumar Mondal
TA office hours: TBD

Course Description

Graphical models are powerful probabilistic modeling tools. They can model the complex behavior of systems of interacting variables through local relations specified using a graph. These probabilistic models represent the conditional dependencies between subsets of variables in a compressed and elegant form. The graphical model framework has achieved remarkable success across various domains, from near-optimal codes for communication to the state of the art in combinatorial optimization; these models are widely used in bioinformatics, robotics, vision, natural language processing, and machine learning. In this course, we study both directed and undirected models, exact and approximate inference procedures, and learning methods for complete and partial observations.
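As a minimal sketch of the "compressed form" idea, using the course's Python/NumPy prerequisites (the chain network and the numbers below are illustrative, not taken from the course materials): a directed graph A → B → C lets the joint distribution be stored as small local conditional tables instead of one full joint table, and the graph structure implies a conditional independence.

```python
import numpy as np

# A chain-structured Bayesian network A -> B -> C over binary variables.
# The joint factorizes as p(a, b, c) = p(a) p(b | a) p(c | b),
# so the 2^3 - 1 = 7 free joint parameters reduce to 1 + 2 + 2 = 5.
p_a = np.array([0.6, 0.4])                    # p(a)
p_b_given_a = np.array([[0.7, 0.3],           # p(b | a=0)
                        [0.2, 0.8]])          # p(b | a=1)
p_c_given_b = np.array([[0.9, 0.1],           # p(c | b=0)
                        [0.5, 0.5]])          # p(c | b=1)

# Recover the full joint table by multiplying the local factors.
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
assert np.isclose(joint.sum(), 1.0)           # a valid distribution

# Conditional independence encoded by the graph: C is independent of A given B,
# i.e. p(c | a, b) does not depend on a.
p_ab = joint.sum(axis=2)
p_c_given_ab = joint / p_ab[:, :, None]
assert np.allclose(p_c_given_ab[0], p_c_given_ab[1])
```

The saving is modest here, but for n binary variables in a chain it is 2n − 1 parameters versus 2^n − 1 for the unstructured joint.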


Prerequisites:

Basic familiarity with probability theory and algorithm design is required. Assignments require familiarity with Python and NumPy. Background in AI, and in particular (applied) machine learning, is highly recommended.

Main textbook:

Koller, Daphne, and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009.

Further readings:


Course Material:

Zoom links, assignments, announcements, slides, project descriptions, and other course materials are posted on myCourses.

Outline

Representation
  • Syllabus, review of probability theory (slides, chapter 2)
  • Bayesian Networks (slides, chapter 3)
  • Markov Networks (slides) and converting directed to/from undirected models (slides, chapter 4)
  • Local and Conditional Probability Models (slides, chapter 5)
  • Gaussian Network Models (slides, chapter 6)
Inference
  • Variable Elimination (slides, chapter 7)
  • Junction Trees and Belief Propagation (slides, chapter 10)
  • Variational Inference (chapters 8, 11, 13)
    • Exponential Family and Variational Inference (slides)
    • Loopy Belief Propagation and Bethe Free Energy (slides)
    • Naive Mean-Field (slides)
  • Maximum a Posteriori Inference (slides, chapter 13)
  • Sampling Based Inference (chapter 12)
    • Monte Carlo Inference in Graphical Models (slides)
    • Markov Chain Monte Carlo (slides)
Learning
  • Overview: Objectives in Learning (slides, chapter 16)
  • Maximum likelihood and Bayesian Estimation in Directed Models (slides, chapter 17)
  • Structure learning in Directed Models (slides, chapter 18)
  • Parameter Learning in Undirected Models (slides, chapter 20)
  • Learning with Partial Observations (slides, chapter 19)
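To preview the flavor of the inference material above, here is a minimal sketch of variable elimination (the first inference topic), on an illustrative three-variable chain A → B → C with made-up numbers. Marginals are computed by summing out one variable at a time with local sum-product steps, rather than by building the full joint table.

```python
import numpy as np

# Variable elimination on the chain A -> B -> C:
#   p(c) = sum_b p(c | b) * sum_a p(a) p(b | a)
p_a = np.array([0.6, 0.4])                    # p(a)
p_b_given_a = np.array([[0.7, 0.3],           # p(b | a=0)
                        [0.2, 0.8]])          # p(b | a=1)
p_c_given_b = np.array([[0.9, 0.1],           # p(c | b=0)
                        [0.5, 0.5]])          # p(c | b=1)

# Eliminate A: intermediate factor m(b) = sum_a p(a) p(b | a)
m_b = (p_a[:, None] * p_b_given_a).sum(axis=0)

# Eliminate B: marginal p(c) = sum_b m(b) p(c | b)
p_c = (m_b[:, None] * p_c_given_b).sum(axis=0)

assert np.isclose(p_c.sum(), 1.0)
print(p_c)  # [0.7 0.3]
```

Each elimination step touches only the factors mentioning the eliminated variable, which is what makes the algorithm efficient on sparse graphs; the cost is governed by the size of the largest intermediate factor, a theme developed in chapters 7 and 10.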

Academic Integrity

“McGill University values academic integrity. Therefore, all students must understand the meaning and consequences of cheating, plagiarism and other academic offences under the Code of Student Conduct and Disciplinary Procedures” (see www.mcgill.ca/students/srr/honest/ for more information). (Approved by Senate on 29 January 2003)