About me

I am a postdoctoral researcher in the Parietal team @ INRIA Saclay since spring 2018, working on convolutional dictionary learning.

My research interests touch several areas of Machine Learning, Signal Processing and High-Dimensional Statistics. In particular, I have been working on convolutional dictionary learning, studying both its computational aspects and its possible applications to pattern analysis. I am also interested in the theoretical properties of learned optimization algorithms such as LISTA.

I did my PhD under the supervision of Nicolas Vayatis and Laurent Oudre, in the CMLA @ ENS Paris-Saclay. My PhD focused on convolutional representations and their applications to physiological signals. I am also involved in open-source projects such as joblib and loky, for parallel scientific computing.

Latest publications and projects

Learning step sizes for unfolded sparse coding
Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort. May 2019, preprint.
This paper presents a theoretical study of how LISTA can learn to accelerate computation compared to ISTA, using larger step sizes adapted to the sparsity of the solution estimate. This mechanism is the only one that ensures asymptotic convergence to the Lasso estimator.
Sparse coding is typically solved by iterative optimization techniques, such as the Iterative Shrinkage-Thresholding Algorithm (ISTA). Unfolding and learning weights of ISTA using neural networks is a practical way to accelerate estimation. In this paper, we study the selection of adapted step sizes for ISTA. We show that a simple step size strategy can improve the convergence rate of ISTA by leveraging the sparsity of the iterates. However, it is impractical in most large-scale applications. Therefore, we propose a network architecture where only the step sizes of ISTA are learned. We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes. Experiments show that our method is competitive with state-of-the-art networks when the solutions are sparse enough.
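
To make this step-size mechanism concrete, here is a minimal NumPy sketch of ISTA for the Lasso, with an optional variant that takes larger steps computed from the support of the current iterate. This is only an illustration under simplifying assumptions (the oracle algorithm studied in the paper includes additional safeguards, and all names below are chosen for the example):

    import numpy as np

    def soft_thresh(z, mu):
        # Soft-thresholding, the proximal operator of mu * ||.||_1.
        return np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)

    def ista(D, x, reg, n_iter=200, adaptive=False):
        # Solve the Lasso: min_z 0.5 * ||x - D @ z||^2 + reg * ||z||_1.
        L = np.linalg.norm(D, ord=2) ** 2  # global Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)
            step = 1.0 / L
            if adaptive:
                # When the iterate is sparse with support S, the Lipschitz
                # constant restricted to the columns D[:, S] satisfies
                # L_S <= L, so the step 1 / L_S is at least as large as 1 / L.
                S = np.flatnonzero(z)
                if S.size > 0:
                    L_S = np.linalg.norm(D[:, S], ord=2) ** 2
                    step = 1.0 / L_S
            z = soft_thresh(z - step * grad, step * reg)
        return z

Computing L_S at every iteration is what makes this strategy impractical at scale, hence the proposal to learn the step sizes instead.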
Learning step sizes for unfolded sparse coding slides
Jul 2019, at the Parietal seminar, INRIA Saclay
In this talk, we show that adaptive analytic step sizes can be used to improve ISTA. These step sizes can also be learned with a neural network architecture similar to LISTA, which is theoretically the only way to learn to accelerate ISTA asymptotically.
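As an illustration of the learned variant, here is a minimal PyTorch sketch of an unfolded network where the dictionary stays fixed and only one step size per layer is trained. This is a simplified sketch of the idea, not the implementation from the paper:

    import torch

    class StepLISTA(torch.nn.Module):
        # Unfolded ISTA where only the per-layer step sizes are learned;
        # the layer weights stay tied to the dictionary D.
        def __init__(self, D, reg, n_layers=10):
            super().__init__()
            self.D = torch.as_tensor(D, dtype=torch.float32)
            self.reg = reg
            L = torch.linalg.matrix_norm(self.D, ord=2) ** 2
            # Initialize every layer with the safe ISTA step 1 / L.
            self.steps = torch.nn.Parameter(torch.ones(n_layers) / L)

        def forward(self, x):
            z = torch.zeros(x.shape[0], self.D.shape[1])
            for step in self.steps:
                grad = (z @ self.D.T - x) @ self.D
                # Gradient step followed by soft-thresholding, with the
                # threshold scaled by the learned step size.
                z = z - step * grad
                z = torch.sign(z) * torch.relu(z.abs() - step * self.reg)
            return z

Such a network has only n_layers free parameters, which matches the theoretical result quoted above: for a network that converges to the Lasso solution, the last layers can only implement ISTA with learned step sizes.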
Loky Aug 2018
The aim of this project is to provide a robust, cross-platform and cross-version implementation of the ProcessPoolExecutor class of concurrent.futures. It features:
  • Deadlock-free implementation: one of the major concerns with the standard multiprocessing and concurrent.futures libraries is the ability of the Pool/Executor to handle crashes of worker processes. This library intends to fix those possible deadlocks and send back meaningful errors.

  • Consistent spawn behavior: all processes are started using fork/exec on POSIX systems. This ensures safer interactions with third-party libraries.

  • Reusable executor: a strategy to avoid respawning a complete executor every time. A singleton executor can be reused (and dynamically resized if necessary) across consecutive calls to limit spawning and shutdown overhead. The worker processes can be shut down automatically after a configurable idle timeout to free system resources (see the usage sketch below).
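
A short usage sketch of the reusable executor, built on loky's get_reusable_executor factory (the parameter values here are arbitrary; timeout is the idle delay, in seconds, before workers are shut down):

    from loky import get_reusable_executor

    def inv(x):
        return 1.0 / x

    if __name__ == "__main__":
        # The first call spawns the worker processes.
        executor = get_reusable_executor(max_workers=4, timeout=2)
        print(list(executor.map(inv, range(1, 6))))

        # A later call returns the same executor, resized to 8 workers,
        # instead of spawning a brand new pool.
        executor = get_reusable_executor(max_workers=8)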


python, multiprocessing, parallel computing