David MacKay
OLD Summary of the course

Lecture plan, suggested reading, and suggested questions
NB: these suggestions are old, so they don't map perfectly onto the 2005 book. (See also the supervision recommendations here.)
Week 1:
Friday
Lecture notes:
Ch 1 - Intro to information theory
Exercise to do before Wednesday:
Ex 4.2 (p.71)
Suggested reading:
Ch 2 - Probabilities and Inference
Suggested examples
Ex 1.2 (p.13), 1.8 (p.19)
Week 1:
Wednesday
Lecture notes:
Ch 5 (now Ch 4) - Source Coding Theorem
Suggested reading:
rest of Ch 5 (this is the toughest bit of the course)
Suggested examples
Ex 2.14, 2.15, 2.18, 2.20, 2.28 (p.40)
Week 2:
Lecture notes:
Ch 6 (now Ch 5) - Symbol Codes
Suggested examples
Ex 6.19, 6.20, 6.25 (p.111-2)
Week 3:
Lecture notes:
Ch 7 (now Ch 6) - Stream Codes; (Lempel-Ziv not examinable)
Reading :
Ch 9 (now 8) - Correlated random variables
Suggested examples
Ex 7.4, 7.8 (p.131), 8.3, 8.5 (p.147)
Ex 9.1, 9.5, 9.7, 9.8
Week 4:
Lecture notes:
Ch 10-11 (now 9-10) - Communication over noisy channel; Channel coding theorem
Suggested examples
Ex 10.12, 10.13, 10.15
Week 5:
Reading:
Ch 3 - More on inference; Ch 12 (12.1-12.2) (now 11.1-11.2) - Inference for Gaussian channels
Suggested examples
Ex 3.3, 10.19, 10.20, 11.4
Lecture notes:
Ch's 24, 25*, 27* (now 29, 30, 32) - Monte Carlo methods
Week 6:
Lecture notes:
Ch 28 (now 33) - Variational methods
Reading:
Ch 13 (now 12) - Hash codes, efficient information retrieval;
Ch 22 - Inference; Ch 26 (now 31) - Ising models.
(Ch 13 is not examinable, but I want you to think about the question 'how to make a content-addressable memory?')
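One simple non-neural answer to that question, in the spirit of the hash codes of Ch 13, is to let the address of each item be a hash of its content, so lookup never needs an externally assigned index. A minimal Python sketch (the class name and the choice of SHA-256 are illustrative, not from the book):

```python
import hashlib


class ContentAddressableMemory:
    """Toy content-addressable store: items are retrieved by a
    digest of their own content rather than by an assigned address."""

    def __init__(self):
        self._store = {}

    def put(self, content: bytes) -> str:
        # The address *is* a hash of the content, so identical
        # content always maps to the same address.
        key = hashlib.sha256(content).hexdigest()
        self._store[key] = content
        return key

    def get(self, key: str) -> bytes:
        return self._store[key]


cam = ContentAddressableMemory()
key = cam.put(b"information theory")
assert cam.get(key) == b"information theory"
```

Note this sketch only supports exact-match retrieval; recovering an item from a *partial* or noisy cue, as a brain or a Hopfield network (Ch 42) can, is the harder part of the question.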
Suggested examples
24.5, 24.9, 26.3 (p.363).
Week 7:
Lecture notes:
Ch 31, 32, 34 (now 38, 39, 41) - Neuron.
Reading:
Ch 33 (details optional)
Suggested examples
28.2, 30.1 (p.383); 32.2 (p.402), 32.5 (p.407)
Week 8:
Lecture notes:
Ch 35, 36 (now 42, 43) - Hopfield networks and Boltzmann machines.
Suggested examples
Automatic clustering: 22.3 (p.304); 35.3, 35.4 (p.441)

Site last modified Sat Sep 16 17:35:51 BST 2006