Computational optimization for tensor decompositions

March 29 to April 2, 2010

at the

American Institute of Mathematics, Palo Alto, California

organized by

Rasmus Bro, Michael Friedlander, Tamara G. Kolda, and Stephen Wright

This workshop, sponsored by AIM and the NSF, will be devoted to facilitating the development of new decomposition methods and to providing fundamentally new insights into both tensor decompositions and numerical optimization. During the past decade, there has been an explosion of interest in tensor decompositions as an important mathematical tool in fields such as psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, and graph analysis. Tensor decompositions are generalizations of matrix decompositions (which have proved to be a vital tool in many areas of science and engineering) to N-way tensors, where N is greater than 2. In many circumstances, N-way tensor representations provide a much more natural framework for expressing relationships between elements of a data set than traditional matrix representations do. Tensor decompositions have the potential to revolutionize our scientific capabilities in such applications as environmental monitoring, medical diagnostics, cybersecurity, anti-doping testing, telecommunications, and more.
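To fix notation, the following is a standard statement of the CP (CANDECOMP/PARAFAC) model, one of the central decompositions in this area; it is included here for illustration and is not drawn from the announcement itself. A three-way tensor X of size I x J x K is approximated by a sum of R rank-one tensors:

\[
  \mathcal{X} \;\approx\; \sum_{r=1}^{R} a_r \circ b_r \circ c_r,
  \qquad \text{i.e.,} \qquad
  x_{ijk} \;\approx\; \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr},
\]

where \(a_r\), \(b_r\), and \(c_r\) are the columns of factor matrices A, B, and C, and \(\circ\) denotes the vector outer product. Matrix factorizations such as the SVD correspond to the case N = 2.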

Further advances in tensor decompositions depend critically on advanced optimization algorithms. Computational tools have not changed significantly in the past four decades and are often based on a simple alternating least squares (ALS) approach, which is frequently slow and comes with no guarantee of convergence to a useful solution. Despite these drawbacks, ALS remains the method of choice because it is quite general and admits useful modifications (for example, to problems with missing data). Recent work has shown that optimization methods other than ALS can provide superior solutions in specific situations. Our workshop seeks to further this line of research by bringing together leading experts in numerical optimization and tensor decompositions, with the purpose of developing optimization-based tensor decomposition methods that are robust, accurate, numerically stable, and scalable. Furthermore, these methods should allow constraints such as nonnegativity to be imposed on the parameters; they should handle missing data efficiently and accurately; they should exploit sparse data and support sparse solutions; and they should accommodate formulations that involve alternative loss functions such as (generalized) weighted least squares.
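As a point of reference for the discussion above, here is a minimal sketch of ALS for a rank-R CP decomposition of a three-way tensor, written in NumPy. It is illustrative only: the names cp_als, khatri_rao, and unfold are our own choices, not part of any workshop software, and a production code would use the normal-equations form of each update rather than an explicit pseudoinverse.

    import numpy as np

    def khatri_rao(U, V):
        # Column-wise Khatri-Rao product: row u*n + v holds U[u, :] * V[v, :].
        m, R = U.shape
        n, _ = V.shape
        return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

    def unfold(X, mode):
        # Mode-n matricization: move the chosen mode to the front, flatten the rest.
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def cp_als(X, R, n_iters=50, seed=0):
        # Fit factors A, B, C so that x_ijk ~ sum_r A[i,r] * B[j,r] * C[k,r].
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A = rng.standard_normal((I, R))
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        for _ in range(n_iters):
            # Each update is an ordinary linear least-squares solve with the
            # other two factors held fixed -- the "alternating" in ALS.
            A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
            B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
            C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C

The simplicity of this loop is exactly what the paragraph above alludes to: each subproblem is easy to constrain (for example, by replacing the least-squares solve with a nonnegative one) or to reweight when data are missing, but nothing in the iteration guarantees convergence to a useful solution.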

The goal of this workshop is to foster a new scientific community that will facilitate the development of new decomposition methods and provide fundamentally new insights into both tensor decompositions and numerical optimization. This community is expected to have an impact in many diverse areas (including those listed above) in the years to come.

The workshop will differ from typical conferences in some regards. Participants will be invited to suggest open problems and questions before the workshop begins, and these will be posted on the workshop website. These include specific problems on which there is hope of making some progress during the workshop, as well as more ambitious problems which may influence the future activity of the field. Lectures at the workshop will be focused on familiarizing the participants with the background material leading up to specific problems, and the schedule will include discussion and parallel working sessions.

The deadline to apply for support to participate in this workshop has passed.

For more information, email workshops@aimath.org.


