Computational optimization for tensor decompositions

March 29 to April 2, 2010

at the

American Institute of Mathematics, San Jose, California

organized by

Rasmus Bro, Michael Friedlander, Tamara G. Kolda, and Stephen Wright

Original Announcement

This workshop will be devoted to facilitating the development of new decomposition methods and to providing fundamentally new insights into both tensor decompositions and numerical optimization. During the past decade, there has been an explosion of interest in tensor decompositions as an important mathematical tool in fields such as psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, and graph analysis. Tensor decompositions are generalizations of matrix decompositions (which have proved to be a vital tool in many areas of science and engineering) to N-way tensors, where N is greater than 2. In many circumstances, N-way tensor representations provide a much more natural framework for representing relationships among the elements of a data set than do traditional matrix representations. Tensor decompositions have the potential to revolutionize our scientific capabilities in applications such as environmental monitoring, medical diagnostics, cybersecurity, anti-doping testing, telecommunications, and more.
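As one concrete instance of such a generalization (a canonical example, not the only decomposition in the workshop's scope), the CP (CANDECOMP/PARAFAC) decomposition expresses a three-way tensor as a sum of R rank-one outer products, in direct analogy with a rank-R matrix factorization:

\[
\mathcal{X} \approx \sum_{r=1}^{R} a_r \circ b_r \circ c_r,
\qquad \text{i.e.} \qquad
x_{ijk} \approx \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr},
\]

where \(\circ\) denotes the vector outer product. With only two modes, this reduces to the familiar matrix factorization \(X \approx AB^{T}\).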

Further advances in tensor decompositions depend critically on advanced optimization algorithms. Computational tools have not changed significantly in the past four decades and are typically based on a simple alternating least squares (ALS) approach, which can be slow and comes with no guarantee of convergence to a useful solution. Despite these drawbacks, ALS remains the method of choice because it is quite general and because it admits useful modifications (for example, to problems with missing data). Recent work has shown that optimization methods other than ALS can provide superior solutions in specific situations. Our workshop seeks to further this line of research by bringing together leading experts in numerical optimization and tensor decompositions, with the purpose of developing optimization-based tensor decomposition methods that are robust, accurate, numerically stable, and scalable. Furthermore, these methods should allow constraints such as nonnegativity to be imposed on the parameters; they should handle missing data efficiently and accurately; they should exploit sparse data and support sparse solutions; and they should accommodate formulations that involve alternative loss functions such as (generalized) weighted least squares.
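To make the ALS baseline concrete, here is a minimal sketch of CP-ALS for a dense 3-way tensor in NumPy. This is an illustration under our own assumptions, not a reference implementation from the workshop: the function names are ours, the iteration count is fixed rather than governed by a convergence test, and none of the extensions discussed above (constraints, missing data, sparsity, alternative losses) are included.

import numpy as np

def khatri_rao(U, V):
    # Column-wise Khatri-Rao product; row i*V.shape[0] + j equals U[i, :] * V[j, :].
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def unfold(X, mode):
    # Mode-`mode` matricization, consistent with C-order reshaping.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def cp_als(X, rank, n_iter=100, seed=0):
    # Alternating least squares for a rank-`rank` CP model of a 3-way tensor X.
    # Each update below is an exact linear least-squares solve for one factor
    # with the other two held fixed; the Hadamard product of Gram matrices,
    # e.g. (B.T @ B) * (C.T @ C), is the Gram matrix of the Khatri-Rao product.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in X.shape)
    for _ in range(n_iter):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

Each inner update is a linear least-squares solve, which is what makes ALS simple and general; the overall problem remains nonconvex, which is why convergence to a useful solution is not guaranteed.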

The goal of this workshop is to foster a new scientific community that will facilitate the development of new decomposition methods and provide fundamentally new insights into both tensor decompositions and numerical optimization. This community is expected to have an impact in many diverse areas (including those listed above) in the years to come.

Material from the workshop

A list of participants.

The workshop schedule.

A report on the workshop activities.

Papers arising from the workshop:

All-at-once optimization for coupled matrix and tensor factorizations
by Evrim Acar, Tamara G. Kolda, and Daniel M. Dunlavy

Musings on Multilinear Fitting
by Martin J. Mohlenkamp