Multi-task Learning and Structured Sparsity

Speaker: Massimiliano PONTIL (University College London)

Time: 3.00 PM

Date: Thursday March 14th 2013

Location: Statistics Seminar Room L550B, 5th floor, Library Building

Abstract:
A fundamental limitation of supervised learning is the cost incurred by the preparation of the large training samples required for good generalization. A potential remedy is offered by multi-task learning: in many cases, while individual sample sizes are rather small, samples are available for a large number of learning tasks which share some constraining or generative property. If this property is sufficiently simple, it should allow for better estimation of the individual tasks despite their small individual sample sizes. In this talk we review different classes of regularizers which implement task relatedness assumptions, building upon ideas from kernel methods and sparse estimation. We address the predictive properties of the methods and describe optimisation techniques to solve the underlying regularization problem. Finally, we report on some applications of the methods to problems arising in affective computing, computer vision and user modelling.
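One concrete regularizer of the kind discussed in the abstract is the l2,1 (group) norm on a matrix whose rows are features and whose columns are tasks: it couples the tasks by encouraging them to share a common set of relevant features. The sketch below is illustrative only (the function names and the gradient-based solver structure are our own, not a description of the speaker's methods); it shows the penalty and its proximal operator, the core building block of proximal-gradient methods for such problems.

```python
import numpy as np

def l21_norm(W):
    """Sum of the Euclidean norms of the rows of W.

    W has shape (n_features, n_tasks); a zero row means the
    corresponding feature is discarded jointly across all tasks.
    """
    return np.sum(np.linalg.norm(W, axis=1))

def prox_l21(W, t):
    """Proximal operator of t * l2,1 norm: row-wise soft thresholding.

    Rows with norm below t are set exactly to zero, which is what
    produces joint (structured) sparsity across tasks.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# Toy example: feature 0 is useful for both tasks, feature 1 is not.
W = np.array([[3.0, 4.0],
              [0.1, 0.1]])
penalty = l21_norm(W)          # 5.0 + norm of the small row
W_shrunk = prox_l21(W, 1.0)    # small row is zeroed out entirely
```

In a full proximal-gradient solver one would alternate a gradient step on the empirical risk with a call to `prox_l21`; the row-wise thresholding is what selects a common feature subset for all tasks.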

Biography:
Massimiliano Pontil is Professor of Computational Statistics and Machine Learning in the Department of Computer Science at University College London. He received a Laurea degree and a PhD in Physics from the University of Genova in 1994 and 1999, respectively. His research interests are in the field of machine learning, with a focus on regularization methods, convex optimization and statistical estimation. He has published about 100 research papers on these topics, regularly serves on the programme committees of the leading conferences in the field, is an associate editor of the Machine Learning Journal and of Statistics and Computing, and is a member of the scientific advisory board of the Max Planck Institute for Intelligent Systems.

Series: Statistics Seminar Series