Graphical Models
Note: Whilst every effort is made to keep the syllabus and assessment records for this course correct, the precise details must be checked with the lecturer(s).
Code: M056 (also taught as GI08)
Year: 4
Prerequisites: none listed
Term: 1
Taught By: David Barber (100%)
Aims: The module provides an entry into probabilistic modelling and reasoning, primarily for discrete-variable systems. Very little continuous-variable calculus is required, and students more familiar with discrete mathematics should find the course digestible. The emphasis is on demonstrating the potential applications of the techniques in plausible real-world scenarios related to information retrieval and analysis. Concrete challenges include questionnaire analysis, low-density parity-check error correction, and collaborative filtering of Netflix data.
Learning Outcomes: Students will learn the basics of discrete graphical models, in particular inference algorithms in both singly and multiply connected structures. To cement their understanding, students must demonstrate the skills they have acquired by tackling several real-world challenges. The course should inspire and motivate students with the real-world applications of the theory, and provide a sufficiently strong platform for the study of more complex real-valued models.
Content:
Introduction: Review of probability theory and motivations for reasoning under uncertainty. Bayesian reasoning and probability.
Graphical Models and Belief Networks: A more formal and general specification of directed and undirected models, their semantics and independence statements.
The Junction Tree Algorithm: Exact inference in discrete singly-connected graphs (Belief Propagation). Derivation of inference in multiply-connected graphs.
Learning: Using data to learn the tables in discrete graphs using maximum likelihood. Examples including diagnostic systems. The Expectation-Maximisation algorithm.
Common Models in Machine Learning and Applications: Models related to data clustering, analysis and visualisation. Mixtures of Bernoulli distributions for questionnaire analysis. Hidden Markov Models and applications in sequence clustering and analysis. Low-density parity-check error correction as an example of belief propagation. Probabilistic Latent Semantic Analysis (PLSA) and information retrieval. Applications to web link analysis.
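To give a flavour of the exact inference topics listed above, the following is a minimal sketch (not course material; all probability tables are illustrative) of belief propagation on a three-variable binary chain, where passing messages forward avoids enumerating the full joint distribution.

```python
import numpy as np

# Chain x1 - x2 - x3 of binary variables:
# p(x1, x2, x3) = p(x1) p(x2|x1) p(x3|x2).
# All numbers below are made up for illustration.
p1 = np.array([0.6, 0.4])                # p(x1)
p2g1 = np.array([[0.7, 0.3],             # p(x2|x1): rows index x1
                 [0.2, 0.8]])
p3g2 = np.array([[0.9, 0.1],             # p(x3|x2): rows index x2
                 [0.5, 0.5]])

# Sum-product message passing: marginalise one variable at a time
# instead of summing over all 2^3 joint states.
m12 = p1 @ p2g1      # message to x2: sum_x1 p(x1) p(x2|x1) = p(x2)
m23 = m12 @ p3g2     # message to x3: sum_x2 p(x2) p(x3|x2) = p(x3)
print(m23)           # marginal p(x3)

# Brute-force check: build the full joint and marginalise directly.
joint = p1[:, None, None] * p2g1[:, :, None] * p3g2[None, :, :]
print(joint.sum(axis=(0, 1)))  # same p(x3)
```

On a chain of length n the message-passing cost is linear in n, whereas the brute-force joint grows exponentially; this is the essence of exact inference in singly-connected graphs.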
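The maximum-likelihood table learning mentioned under Learning can also be sketched briefly. With complete data, the ML estimate of a conditional probability table in a discrete belief network is just the normalised counts; the data below are hypothetical, not from the coursework.

```python
import numpy as np

# Hypothetical complete-data samples of (x1, x2), each variable binary.
data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]

# Count joint occurrences of (x1, x2).
counts = np.zeros((2, 2))
for x1, x2 in data:
    counts[x1, x2] += 1

# Maximum-likelihood estimate of p(x2|x1): normalise each row
# so that it sums to one over x2.
p2g1 = counts / counts.sum(axis=1, keepdims=True)
print(p2g1)
```

With incomplete data the counts are no longer directly available, which is exactly where the Expectation-Maximisation algorithm covered in the course comes in.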
Method of Instruction:
Lecture presentations
Assessment:
The course has the following assessment components:
- Written Examination (2.5 hours, 70%)
- Coursework Section (1 piece, 30%)
To pass this course, students must:
- Obtain an overall pass mark of 50% for all sections combined
The examination rubric is: TBC

Resources:
D.J.C. MacKay: Information Theory, Inference and Learning Algorithms. Cambridge University Press (2003)
Christopher M. Bishop: Pattern Recognition and Machine Learning. Springer (2006)
D. Barber: Machine Learning: A Probabilistic Approach (available as lecture notes)