Sparsity in Machine Learning and Statistics
Cumberland Lodge, 1 - 3 April 2009

Workshop Description

Sparse estimation (or sparse recovery) is playing an increasingly important role in the statistics and machine learning communities. Several methods relying on the notion of sparsity (e.g. penalty methods such as the Lasso, the Dantzig selector, etc.) have recently been developed in both fields. Many of the key theoretical ideas and statistical analyses of these methods have been developed independently, but there is increasing awareness of the potential for cross-fertilization of ideas between statistics and machine learning.

Furthermore, there are interesting links between Lasso-type methods and boosting (particularly LP-boosting), and there has been renewed interest in sparse Bayesian methods. Sparse estimation is also important in unsupervised methods (sparse PCA, etc.). Recent machine learning techniques for multi-task learning and collaborative filtering have been proposed that impose sparsity constraints on matrices (low rank, structured sparsity, etc.). At the same time, sparsity is playing an important role in a variety of application fields, ranging from image and video reconstruction and compression to speech classification and text and sound analysis.

The overall goal of the workshop is to bring together machine learning researchers and statisticians working on this timely research topic, to encourage the exchange of ideas between the two communities, and to discuss further developments and the theoretical underpinnings of these methods.