Course at a Glance
9.520/6.860, Class 01
Instructor: Tomaso Poggio
Description
We introduce and motivate the main theme of much of the course, setting the problem of supervised learning from examples as the ill-posed problem of approximating a multivariate function from sparse data. We present an overview of the theoretical part of the course and sketch the connection between classical Regularization Theory, with its RKHS-based algorithms, and Learning Theory. We briefly describe several applications, ranging from vision and computer graphics to finance and neuroscience. The last third of the course will be on data representations for learning and deep learning. It will introduce recent theoretical developments towards (a) understanding why deep learning works and (b) a new phase in machine learning, beyond classical supervised learning: how to learn, in an unsupervised way, representations that significantly decrease the sample complexity of supervised learning.
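The course page itself contains no code; as a rough illustration of the RKHS-regularization theme mentioned above (not part of the original material), the sketch below implements Tikhonov regularization with the square loss, i.e. kernel ridge regression with a Gaussian kernel. All function names, the kernel choice, and the parameter values are illustrative assumptions, not the course's prescribed algorithm.

```python
# Minimal sketch (illustrative, not from the course page): Tikhonov regularization
# in an RKHS with square loss, i.e. kernel ridge regression with a Gaussian kernel.
# By the representer theorem, the minimizer of
#   (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
# has the form f(x) = sum_i c_i K(x, x_i), with coefficients solving (K + lam*n*I) c = y.
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||A[i] - B[j]||^2 / (2 sigma^2))."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2 * A @ B.T)
    return np.exp(-sq_dists / (2 * sigma**2))

def fit_krr(X, y, lam=1e-2, sigma=1.0):
    """Solve (K + lam*n*I) c = y for the representer coefficients c."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict_krr(X_train, c, X_test, sigma=1.0):
    """Evaluate f(x) = sum_i c_i K(x, x_i) at the test points."""
    return gaussian_kernel(X_test, X_train, sigma) @ c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(50, 1))                   # sparse training inputs
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)    # noisy samples of a target function
    c = fit_krr(X, y, lam=1e-3, sigma=0.5)
    X_test = np.linspace(-3, 3, 200)[:, None]
    y_hat = predict_krr(X, c, X_test, sigma=0.5)
    print("max |f(x) - sin(x)| on the grid:", np.max(np.abs(y_hat - np.sin(X_test[:, 0]))))
```

The regularization parameter lam trades off fidelity to the sparse data against the RKHS norm of the solution, which is what turns the ill-posed function-approximation problem into a well-posed one.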
Slides
Slides for this lecture: PDF.
Video
Lecture video recording: Class 01
Relevant Reading
- Mnih et al. (DeepMind), Human-level control through deep reinforcement learning, Nature 518, pp. 529-533, 2015.
- Nature Insight: Machine Intelligence (with a review article on deep learning), Nature, Vol. 521, No. 7553, pp. 435-482, 2015.