Information Theory 2014
General Procedure
For the final exam of the course, every student picks an information-theoretic subject from the list below and studies it. Every student then prepares and gives a 20-minute talk about the chosen topic, followed by 10 minutes of questions.
Detailed Procedure
- Look through the list of topics below. By Wednesday, 3 December, 13:00, send us an ordered list of your top 3 preferences by email. Note that you are welcome to add your own topic of choice; see below. Let us know as well if you have any constraints for the presentation slots during the exam week (Wed 17 Dec, Thu 18 Dec, all 13:00-16:00).
- Study the material and think about how you want to present the subject to your fellow students in a 20-minute talk.
- Write down in a few bullet points how you want to structure your talk. Send this list to Philip and Chris by Wednesday, 10 December 2014, 13:00.
- If you want, we can then arrange a personal meeting. In this meeting, we will discuss your talk outline and you can ask questions about the studied material (you are, of course, welcome to ask questions at any time).
- We will create the presentation schedule; you give a 20-minute talk, followed by 10 minutes of questions from the audience. (Yes, we will be asking questions.)
Presentation
A presentation should make your fellow students care about the subject and convey the main ideas of the topic/paper. Try to show how your topic connects to something we have learned in class. You should present the main result(s) with some precision, and it would be nice to give some key ingredients of the proofs (where appropriate; some proofs are nasty and/or not interesting). Do not spend all your time putting up formula after formula: prepare a presentation you would like to listen to. It is important to stay on time; you leave a bad impression if your 20-minute talk lasts 25 minutes or more. Here are a few general tips:
- Keep it simple! The talk is only 20 minutes, and the goal is really not to impress anybody with technical details, but to convey the main message to your fellow students, who do not know anything about your topic.
- Use at least some slides; 20 minutes is probably not enough for a blackboard-only talk. Presenting from slides comes with the danger of going (way) too fast. Make sure everybody in the room can follow you. A rough average of one minute per slide tends to be reasonable.
- Practice your talk in order to get an estimate of the right timing!
Grades
Your grade for the final presentation will be determined by the quality of the presentation and your ability to answer questions about the subject (we will use this list for the evaluation). You do not have to stick to the indicated sources; on the contrary, you are encouraged to find other articles or sources that are relevant for the selected topic. The final presentation counts for 1/2 of your final grade for the course (the other 1/2 is determined by the homework exercises).
Research tools
Scientific papers can most easily be found online via Google or Google Scholar. As an alternative, almost all universities have a subscription to SpringerLink, allowing you to access these articles for free from their computers. If neither works, feel free to contact us.
Suggested topics for final presentation
Note that you are very welcome to suggest a topic yourself. In that case, send us (at least) one link to an academic paper about the subject. We will let you know quickly if that subject is appropriate.
Some subjects are far more challenging than others. The following list is roughly ordered in decreasing difficulty. A good talk about a challenging subject is more impressive than a good talk about an easy subject, and the grade will reflect that.
Some topics are copied with kind permission from Mathias Madsen's course on information theory.
Rate Distortion Theory
Rate-distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate \(R\), that should be communicated over a channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given distortion \(D\).
Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate-distortion functions.
One of the most intriguing aspects of this theory is that joint descriptions are more efficient than individual descriptions. It is simpler to describe an elephant and a chicken with one description than to describe each alone. This is true even for independent random variables. It is simpler to describe \(X_1\) and \(X_2\) together (at a given distortion for each) than to describe each by itself. Why don't independent problems have independent solutions? The answer is found in the geometry. Apparently, rectangular grid points (arising from independent descriptions) do not fill up the space efficiently.
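For a concrete feel: for a Bernoulli(\(p\)) source with Hamming distortion, [CT] derive the closed form \(R(D) = H(p) - H(D)\) for \(0 \le D \le \min(p, 1-p)\), and \(R(D) = 0\) beyond that. The sketch below merely evaluates this formula (the function names are our own):
```python
import math

def h2(q):
    """Binary entropy in bits; h2(0) = h2(1) = 0 by convention."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def rate_distortion_bernoulli(p, d):
    """R(D) = H(p) - H(D) for 0 <= D <= min(p, 1-p), else 0
    (Bernoulli(p) source under Hamming distortion)."""
    if d >= min(p, 1 - p):
        return 0.0
    return h2(p) - h2(d)

# A fair-coin source reconstructed with at most 11% bit errors needs
# only about 0.5 bits per symbol instead of 1.
print(rate_distortion_bernoulli(0.5, 0.11))  # ~0.50
```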
Study it in Section 10.4 of [MacKay] and Chapter 10 of [CT].
Kolmogorov Complexity
The great mathematician Kolmogorov culminated a lifetime of research in mathematics, complexity, and information theory with his definition in 1965 of the intrinsic descriptive complexity of an object. In the treatment in the course, the object \(X\) has been a random variable drawn according to a probability mass function \(P_X(x)\). If \(X\) is random, there is a sense in which the descriptive complexity of the event \(X = x\) is \(\log \frac{1}{P_X(x)}\) , because \(\lceil{\log \frac{1}{P_X(x)}}\rceil\) is the number of bits required to describe \(x\) by an optimal code. One notes immediately that the descriptive complexity of such an object depends on the probability distribution.
Kolmogorov went further. He defined the algorithmic (descriptive) complexity of an object to be the length of the shortest binary computer program that describes the object. (Apparently, a computer, the most general form of data decompressor, will after a finite amount of computation, use this description to exhibit the object described.) Thus, the Kolmogorov complexity of an object dispenses with the probability distribution. Kolmogorov made the crucial observation that the definition of complexity is essentially computer independent. It is an amazing fact that the expected length of the shortest binary computer description of a random variable is approximately equal to its entropy. Thus, the shortest computer description acts as a universal code which is uniformly good for all probability distributions. In this sense, algorithmic complexity is a conceptual precursor to entropy.
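Kolmogorov complexity itself is uncomputable, but the length of a compressed description is a computable stand-in for the same intuition. A minimal illustration (ours, not from [CT]), using zlib as the decompressor:
```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Length of a zlib description of `data`: a crude, computable
    upper-bound proxy for Kolmogorov complexity (up to decoder overhead)."""
    return len(zlib.compress(data, 9))

regular = b"01" * 5_000        # 10,000 bytes with a very short "program"
random_ = os.urandom(10_000)   # 10,000 bytes with (w.h.p.) no structure at all

print(description_length(regular))  # small: the pattern compresses away
print(description_length(random_))  # close to 10,000: nothing to exploit
```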
Learn more about the topic in Chapter 14 of [CT].
Stopping Rules and Wald's Equality
A stopping rule is a rule for deciding when to stop doing something, based on a finite amount of past data - for instance, when to leave the casino based on your past gambling record, or when to stop a series of clinical trials based on the results so far. Wald's equality is a theorem stating that the average gain associated with a specific stopping rule equals the expected gain from each individual trial times the expected time before you stop (a small simulation check is sketched after this list). Some things you might want to do with this topic are:
- Give a clear and careful proof of Wald's equality.
- Discuss what the theorem entails for gambling. Is it bad news?
- Discuss the implications of Wald's equality for statistics.
- Consider the gambling strategy "Stop when you're ahead": Before you enter the casino, you decide to leave once you have reached a certain level of capital \(c\). What is the probability that you will ever leave the casino if you use this stopping rule?
- Think about the connection between stopping rules and Turing machines with infinite input tapes: A binary input tape can be seen as a path through a binary tree, and a Turing machine can be seen as a deterministic rule for deciding whether you should stop or continue when you reach a particular node.
- Consider the relation to finite stopping problems like the secretary problem.
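As a sanity check (not a proof), here is a minimal simulation comparing the empirical average gain under one concrete stopping rule with the product \(E[X] \cdot E[N]\) that Wald's equality predicts; the game parameters are arbitrary:
```python
import random

def play_once(p_win=0.4, up=3, down=-10):
    """One casino visit: bet 1 unit per round (+1 w.p. p_win, else -1),
    stop upon reaching `up` or `down`. Returns (gain, number of rounds)."""
    capital, rounds = 0, 0
    while down < capital < up:
        capital += 1 if random.random() < p_win else -1
        rounds += 1
    return capital, rounds

trials = 200_000
total_gain = total_rounds = 0
for _ in range(trials):
    gain, rounds = play_once()
    total_gain += gain
    total_rounds += rounds

# Wald: E[S_N] = E[X] * E[N], with E[X] = 0.4 - 0.6 = -0.2 per round.
print(total_gain / trials)            # empirical E[S_N]
print(-0.2 * total_rounds / trials)   # E[X] * empirical E[N] -- should agree
```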
A good place to start learning about this topic is Robert Gallager's MIT course on stochastic processes.
Entropy Rates of a Stochastic Process
The asymptotic equipartition property (AEP) established that \(n H(X)\) bits suffice on average to describe \(n\) iid random variables. But what if the random variables are dependent? In particular, what if they form a stationary process? In Chapter 4 of [CT], it is shown that, just as in the iid case, the entropy \(H(X_1, X_2, \ldots, X_n)\) grows (asymptotically) linearly in \(n\) at rate \(H(\mathcal{X})\), the entropy rate of the process.
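For a stationary Markov chain, this entropy rate reduces to the stationary average of the per-state transition entropies (see Chapter 4 of [CT]). A minimal sketch, with a two-state chain chosen arbitrarily:
```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits/symbol) of a stationary Markov chain with
    transition matrix P: the stationary average of the row entropies."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    mu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    mu = mu / mu.sum()                        # stationary distribution
    logP = np.log2(np.where(P > 0, P, 1.0))   # convention: 0 * log 0 = 0
    row_entropies = -(P * logP).sum(axis=1)
    return float(mu @ row_entropies)

# A sticky two-state chain: stays put w.p. 0.9, switches w.p. 0.1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(entropy_rate(P))   # ~0.469 bits/symbol, well below the iid bound of 1
```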
Gambling and Data Compression
At first sight, information theory and gambling seem to be unrelated. But as you can read in Chapter 6 of [CT], there is a strong duality between the growth rate of investment in a horse race and the entropy rate of the horse race. Indeed, the sum of the growth rate and the entropy rate is a constant. In the process of proving this, it is argued that the financial value of side information is equal to the mutual information between the horse race and the side information. In order to work on this topic, you will have to understand parts of the previous topic (Chapter 4 of [CT]) as well.
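A quick numerical check of this conservation law (the variable names are ours): with proportional betting \(b^* = p\) on \(m\) horses at fair \(m\)-for-1 odds, the optimal doubling rate \(W^*\) and the entropy \(H(p)\) sum to \(\log_2 m\):
```python
import numpy as np

def doubling_rate(b, p, o):
    """W(b, p) = sum_i p_i log2(b_i o_i): the exponential growth rate of
    wealth when betting a fraction b_i of capital on horse i at odds o_i."""
    return float(np.sum(p * np.log2(b * o)))

def entropy(p):
    return float(-np.sum(p * np.log2(p)))

p = np.array([0.5, 0.25, 0.125, 0.125])   # win probabilities of 4 horses
o = np.full(4, 4.0)                        # fair 4-for-1 odds

W_star = doubling_rate(p, p, o)            # betting b* = p is optimal
print(W_star + entropy(p))                 # 2.0 = log2(4), a constant
```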
Codebreaking for Traditional Cipher Systems
Information theory provides general insights about the security of various encryption schemes. However, cracking an "insecure" cipher often requires a lot of clever probabilistic inference, for instance in the form of sampling schemes like Gibbs sampling of the posterior distribution over encryption keys. Historically, cracking the German Enigma encryption machine was such a task; you can read about it in Section 18.4 of [MacKay]. More recently, Kevin Knight and colleagues have also written a number of papers on codebreaking from an explicitly Bayesian viewpoint (e.g., this one). If you like a programming challenge, you can also try to decipher the encrypted texts listed under this topic on Mathias' page and report on your findings.
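The Enigma is out of reach for a short code snippet, but the simplest instance of the same inference idea fits in a few lines: score every candidate key of a Caesar cipher by its log-likelihood under English letter frequencies and keep the best one. Everything below (the frequency table, the example text) is our own toy illustration:
```python
import math

# Approximate English letter frequencies for a-z.
FREQ = [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
        0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
        0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001]

def decrypt(text, k):
    """Undo a Caesar cipher by shifting every letter back by k."""
    return "".join(chr((ord(c) - 97 - k) % 26 + 97) if c.islower() else c
                   for c in text)

def log_likelihood(text):
    """Log-probability of the letters under a unigram model of English."""
    return sum(math.log(FREQ[ord(c) - 97]) for c in text if c.islower())

ciphertext = "wkh hqhpb dwwdfnv dw gdzq"   # plaintext shifted by k = 3
best_key = max(range(26), key=lambda k: log_likelihood(decrypt(ciphertext, k)))
print(best_key, decrypt(ciphertext, best_key))   # 3 the enemy attacks at dawn
```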
Randomness Extraction and Privacy Amplification
In cryptography, we want to limit the amount of information an eavesdropper learns. One way to make sure that a certain \(n\)-bit string \(X\) is close to unknown to an eavesdropper Eve who holds side information \(Y\) about \(X\) is to extract the randomness out of \(X\) by picking a random function \(f\) mapping \(n\) bits to fewer bits, so that \(f(X)\) is nearly uniform from Eve's point of view. Study the details of this privacy-amplification procedure in Sections 8 and 9 of the [CF] script.
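One standard way to instantiate such a random function is a two-universal hash, e.g. multiplication by a uniformly random binary matrix over GF(2); whether this matches the exact family used in the [CF] script is for you to check. A minimal sketch:
```python
import secrets

def random_matrix(m, n):
    """A uniformly random m x n binary matrix, defining f(x) = Mx over GF(2)."""
    return [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]

def extract(M, x):
    """Map the n-bit string x down to m bits: each output bit is an
    inner product (mod 2) of x with one row of M."""
    return [sum(m_ij & x_j for m_ij, x_j in zip(row, x)) % 2 for row in M]

n, m = 16, 4                                   # keep 4 of 16 bits
x = [secrets.randbelow(2) for _ in range(n)]   # the raw string X
M = random_matrix(m, n)                        # the random function f
print(extract(M, x))                           # f(X): m nearly uniform bits
```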
Hamming Codes
If you send messages through a binary channel with a fixed and symmetric probability of bit flips (i.e., 0 being received as 1 or vice versa), then your task as an encoder is essentially to choose codewords that are as far apart as possible in terms of bit-flip distance. A method for creating such codebooks was invented by Hamming in the 1940s. It turns out that his encoding method can be expressed as a certain kind of matrix multiplication, and the choice of encoding scheme for the binary symmetric channel thus corresponds to a choice of matrix (a small worked example is sketched after this list). The reverse inference problem then similarly becomes a linear-algebra problem. Things you can do under this topic:
- Read Hamming's original paper and a more modern presentation of his ideas.
- Explain how the (7,4) Hamming code works.
- Explain how Hamming codes can be represented as matrices.
- Find the error rate of a Hamming code and show how it depends on the bit flip probability, the length of the codeword, and the number of parity check bits.
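A minimal sketch of the (7,4) code in systematic form (matrix conventions differ across presentations; this is one standard choice):
```python
import numpy as np

# Generator and parity-check matrices of the (7,4) Hamming code in
# systematic form: the first 4 codeword bits are the message itself.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    """Encoding is matrix multiplication over GF(2): c = m G."""
    return msg @ G % 2

def correct(received):
    """Syndrome decoding: s = H r matches the column of H at the
    position of a single flipped bit (and is zero if none flipped)."""
    s = H @ received % 2
    for i in range(7):
        if np.array_equal(s, H[:, i]):
            received = received.copy()
            received[i] ^= 1          # flip the offending bit back
            break
    return received

msg = np.array([1, 0, 1, 1])
codeword = encode(msg)                   # [1 0 1 1 0 1 0]
corrupted = codeword.copy()
corrupted[2] ^= 1                        # the channel flips one bit
print(np.array_equal(correct(corrupted), codeword))   # True
```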
Uniqueness of the Logarithmic Uncertainty Measure
In his 1948 paper, Shannon gave a small set of axioms that is sufficient to uniquely select entropy as a measure of uncertainty (up to the choice of logarithmic base). Other authors since have used slightly different but equivalent sets of axioms, and many textbooks on information theory contain a version of this theorem (one common axiomatization is sketched after this list). Some things you can do under this topic are:
- Choose a specific set of axioms and prove that entropy is the only function fulfilling them.
- Compare different axiomatizations.
- Discuss which of the axioms constitute the most substantial assumptions, and whether they can be relaxed.
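For orientation, one common axiomatization, essentially Shannon's own (other textbooks use slightly different but equivalent sets), requires of an uncertainty measure \(H(p_1, \ldots, p_n)\):
- Continuity: \(H\) is continuous in the \(p_i\).
- Monotonicity: \(A(n) = H(\frac{1}{n}, \ldots, \frac{1}{n})\) is monotonically increasing in \(n\).
- Grouping: splitting one choice into successive choices adds entropies weighted by branch probabilities, \(H(p_1, p_2, p_3, \ldots, p_n) = H(p_1 + p_2, p_3, \ldots, p_n) + (p_1 + p_2)\, H\!\left(\frac{p_1}{p_1 + p_2}, \frac{p_2}{p_1 + p_2}\right)\).
The only functions satisfying all three are \(H = -K \sum_i p_i \log p_i\) with constant \(K > 0\).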