CS103 ‐ Topics in Representation Learning, Information Theory and Control

Caltech, Winter Term 2019
Time and location: Fridays 9-11am, 106 Annenberg

Course description

Prerequisites: Background in machine learning, probability, and information theory

Summary: This course will cover current topics in information theory, deep representation learning, and control. We will use an information-theoretic formalization to study properties of the representations learned by deep networks, with a focus on invariance (e.g., to translations and shape variability) and compositionality of the representation. We will then see how these representations can be learned, through implicit or explicit biases, by deep networks, and exploited to perform efficient exploration and control.
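As a pointer to the formalization used later in the course, the trade-off between task sufficiency and invariance is often expressed through the Information Bottleneck Lagrangian (a week-4 topic); a sketch, with z the learned representation of the input x and y the task variable:

```latex
% Information Bottleneck Lagrangian: trade off sufficiency for the task y
% (low conditional entropy H(y|z)) against minimality of the representation
% (low mutual information I(x;z)); beta controls the trade-off.
\mathcal{L}\big(p(z \mid x)\big) \;=\; H(y \mid z) \;+\; \beta \, I(x; z)
```

Minimizing I(x; z) while keeping the representation sufficient for y is what yields invariance to nuisance factors of the input.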

Course Details


Instructor: Alessandro Achille (achille@cs.ucla.edu)


Weeks  Topics
1      Introduction: overview of the class, embodied intelligence (sensing, cognition, action), role of representations
2-3    Invariant representations: overview of invariant representations, different formalizations (group invariance/equivariance, contractive representations, statistical independence), deep convolutional representations
4      Task-relevant information and information-theoretic formalization of invariance: review of information theory, rate-distortion theory, Information Bottleneck, Actionable Information, minimality and invariance
5-6    Learning invariant representations: MDL principle, Variational Auto-Encoders, stochastic optimization, SGD as Variational Inference, Kramers' rate
7      Life-long learning of compositional representations: compositionality and disentanglement, current formalizations (total correlation, linear action, causality), β-VAEs
8-9    Information and actions: Variational Inference in Control/Reinforcement Learning, Visual Turing Test, information-theoretic duality for control
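To make the week-7 β-VAE topic concrete, here is a minimal sketch of the β-VAE objective with a diagonal-Gaussian posterior and closed-form KL term (function and variable names are illustrative, not taken from the course materials):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted KL term.

    The KL is the closed-form divergence between the Gaussian posterior
    N(mu, diag(exp(log_var))) and the standard normal prior N(0, I).
    """
    recon = np.sum((x - x_recon) ** 2)  # squared-error reconstruction term
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)  # KL(q(z|x) || N(0, I))
    return recon + beta * kl

# Toy usage: a posterior that matches the prior contributes zero KL.
x = np.zeros(4)
x_recon = np.zeros(4)
mu = np.zeros(2)
log_var = np.zeros(2)
print(beta_vae_loss(x, x_recon, mu, log_var))  # 0.0 when posterior matches prior
```

Setting beta > 1 penalizes the information the latent code carries about the input more heavily, which is the mechanism β-VAEs use to encourage disentangled representations.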


Lecture slides will be posted here, together with links to suggested readings for each class.

Suggested Readings

Link to Google Spreadsheet