Probabilistic Graphical Models

Winter 2021 (COMP766-002)

Administrative

Class: Tuesdays and Thursdays 10:05-11:25 AM
Office hours: Thursdays 11:30-12:30 PM
TA: Arnab Kumar Mondal
TA office hours: TBD

Course Description

Graphical models are powerful probabilistic modeling tools. They can model the complex behavior of systems of interacting variables through local relations specified using a graph. These probabilistic models represent the conditional dependencies between subsets of variables in a compressed and elegant form. The graphical model framework has achieved remarkable success across various domains, from near-optimal codes for communication to the state of the art in combinatorial optimization; these models are widely used in bioinformatics, robotics, vision, natural language processing, and machine learning. In this course, we study both directed and undirected models, exact and approximate inference procedures, and learning methods for complete and partial observations.
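As a small illustration of the "compressed form" mentioned above, the sketch below builds a toy three-variable chain Bayesian network in NumPy. The network and all probability tables are invented for this example and are not part of the course materials; the point is only that the directed graph lets three small tables stand in for a full joint table.

```python
import numpy as np

# Hypothetical chain Bayesian network A -> B -> C over binary variables.
# The graph encodes the factorization P(A, B, C) = P(A) P(B|A) P(C|B),
# so three small tables replace the full 2x2x2 joint table.
p_a = np.array([0.6, 0.4])            # P(A)
p_b_given_a = np.array([[0.7, 0.3],   # P(B | A=0)
                        [0.2, 0.8]])  # P(B | A=1)
p_c_given_b = np.array([[0.9, 0.1],   # P(C | B=0)
                        [0.5, 0.5]])  # P(C | B=1)

# Recover the joint distribution by broadcasting; axes are (A, B, C).
joint = (p_a[:, None, None]
         * p_b_given_a[:, :, None]
         * p_c_given_b[None, :, :])

assert np.isclose(joint.sum(), 1.0)   # a valid probability distribution

# Any marginal follows by summing out the other variables, e.g. P(C):
p_c = joint.sum(axis=(0, 1))
print(p_c)  # -> [0.7 0.3]
```

With binary variables the saving is trivial, but for n variables the full joint has 2^n entries while the factorized form needs only one small table per node.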


Prerequisites:

Basic familiarity with probability theory and algorithm design is required. Assignments require familiarity with Python and NumPy. Background in AI, and in particular (applied) machine learning, is highly recommended.

Main textbook:

Koller, Daphne, and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009.

Further readings:


Course Material:

Zoom links, assignments, announcements, slides, project descriptions, and other course materials are posted on myCourses.

Outline

Representation
  • Syllabus, review of probability theory
  • Bayesian Networks
  • Markov Networks and converting between directed and undirected models
  • Local and Conditional Probability Models
  • Gaussian Network Models
Inference
  • Complexity of Inference and Variable Elimination
  • Junction Trees and Belief Propagation
  • Variational Inference
    • Exponential Family and Variational Inference
    • Loopy Belief Propagation and Bethe Free Energy
    • Naive Mean-Field
  • Maximum a Posteriori Inference
  • Sampling Based Inference
    • Monte Carlo Inference in Graphical Models
    • Markov Chain Monte Carlo
Learning
  • Overview: Objectives in Learning Graphical Models
  • Maximum likelihood and Bayesian Estimation in Directed Models
  • Structure learning in Directed Models (?)
  • Parameter Learning in Undirected Models
  • Learning with Partial Observations
  • Causality (?)
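To preview the flavor of the inference topics above, here is a hedged sketch of variable elimination on the same kind of toy chain network (all tables invented for illustration). Eliminating variables one at a time keeps every intermediate factor small, which is the source of the complexity savings discussed under "Complexity of Inference and Variable Elimination".

```python
import numpy as np

# Hypothetical chain A -> B -> C with binary variables; tables are
# made up for this sketch. Computing P(C) by building the full joint
# costs O(k^n); eliminating variables in order costs O(n * k^2).
p_a = np.array([0.6, 0.4])            # P(A)
p_b_given_a = np.array([[0.7, 0.3],   # P(B | A=0)
                        [0.2, 0.8]])  # P(B | A=1)
p_c_given_b = np.array([[0.9, 0.1],   # P(C | B=0)
                        [0.5, 0.5]])  # P(C | B=1)

# Eliminate A: message m(b) = sum_a P(a) P(b|a), a factor over B only.
m_b = p_a @ p_b_given_a

# Eliminate B: P(c) = sum_b m(b) P(c|b).
p_c = m_b @ p_c_given_b

print(p_c)  # -> [0.7 0.3], matching a sum over the full joint table
```

Each step here is one small matrix-vector product; on longer chains this is exactly the sum-product recursion that junction trees and belief propagation generalize to arbitrary graphs.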

Academic Integrity

“McGill University values academic integrity. Therefore, all students must understand the meaning and consequences of cheating, plagiarism and other academic offences under the Code of Student Conduct and Disciplinary Procedures” (see www.mcgill.ca/students/srr/honest/ for more information). (Approved by Senate on 29 January 2003)