Goals of the chair

Many problems in machine learning and signal processing involve the optimisation of a loss function with respect to a set of parameters of interest.

A common choice is the quadratic loss because it enjoys mathematical properties that make it convenient for optimisation.

However, from a modelling point of view, the quadratic loss corresponds to a Gaussian observation model that does not always match the geometry of the data.

This is the case when dealing with nonnegative, integer-valued or binary data for which non-quadratic losses are more suitable.
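As a small illustration of the point above, the sketch below (not from the project itself) contrasts the quadratic loss with the generalised Kullback-Leibler divergence, a standard non-quadratic loss for nonnegative data; the function names are ours.

```python
import numpy as np

def quadratic_loss(x, y):
    # Quadratic loss: implicitly assumes Gaussian residuals
    return 0.5 * np.sum((x - y) ** 2)

def gen_kl_divergence(x, y):
    # Generalised KL divergence: d(x | y) = x log(x/y) - x + y,
    # defined for x, y > 0; better suited to nonnegative data
    return np.sum(x * np.log(x / y) - x + y)

x = np.array([1.0, 2.0, 0.5])
y = np.array([1.2, 1.8, 0.6])
print(quadratic_loss(x, y), gen_kl_divergence(x, y))
```

Both losses vanish when the two arguments coincide, but the generalised KL divergence penalises deviations asymmetrically and scale-dependently, which is often a better fit for nonnegative measurements such as counts or spectrogram magnitudes.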

The aim of AMINA is to advance the theory and methodology of optimisation with non-quadratic loss functions using the framework of majorisation-minimisation (MM). MM consists in iteratively building and minimising a locally tight upper bound of the loss. In other words, it resorts to the iterative optimisation of a local approximation. This is an intuitive and yet powerful optimisation framework that does not require stringent assumptions. MM algorithms decrease the value of the loss at every iteration and do not require tuning parameters. Well-designed upper bounds can finely capture the local curvature of the loss, resulting in efficient updates. Though MM can be traced back to the 1970s, it has enjoyed a significant revival in the last ten years. It subsumes well-known algorithms such as (Bregman) proximal gradient methods, Expectation-Maximisation (EM) or the Concave-Convex Procedure (CCP).
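To make the MM principle concrete, here is a minimal, self-contained sketch (our own textbook example, not one of the project's algorithms): minimising f(t) = Σᵢ |yᵢ - t| by majorising each |u| with the tight quadratic bound |u| ≤ u²/(2|uₖ|) + |uₖ|/2 at the current iterate. Minimising the surrogate yields a simple weighted-mean update, and the loss decreases at every iteration.

```python
import numpy as np

def mm_median(y, t0=0.0, n_iter=50, eps=1e-12):
    # MM for the l1 location problem: each iteration minimises a
    # locally tight quadratic upper bound of f(t) = sum_i |y_i - t|.
    t = t0
    for _ in range(n_iter):
        w = 1.0 / (np.abs(y - t) + eps)   # curvature of the majorizer
        t = np.sum(w * y) / np.sum(w)     # closed-form surrogate minimiser
    return t

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(mm_median(y))  # approaches the median of y
```

Note the two MM hallmarks visible here: no step-size tuning, and monotone decrease of the loss, since each surrogate touches f at the current iterate and lies above it everywhere else.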
AMINA tackles challenging problems related to the design and convergence of MM algorithms in four innovative machine learning and signal processing settings:
1) non-alternating updates for nonnegative matrix factorisation (NMF),
2) phase retrieval with the beta-divergence,
3) unbalanced optimal transport for audio interpolation,
4) stochastic MM for deep learning.
Designing efficient optimisation algorithms with convergence guarantees is a crucial step in building trustworthy AI systems.
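For context on topic 1), the classical multiplicative updates for NMF with the KL divergence (Lee and Seung) are themselves MM-derived: each update minimises a surrogate of D(V | WH). The sketch below is this standard baseline, not the project's new non-alternating methods.

```python
import numpy as np

def kl_nmf(V, rank, n_iter=200, seed=0, eps=1e-9):
    # Classical MM-derived multiplicative updates for KL-NMF.
    # Each update minimises an upper bound of D(V | WH) in one factor.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones((m, n)) + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones((m, n)) @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((6, 8)))
W, H = kl_nmf(V, rank=3)
```

The multiplicative form keeps W and H nonnegative without any projection step, which is exactly the kind of structural benefit a well-designed MM surrogate can deliver.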
