About me

I am a postdoctoral researcher in the Parietal team @ INRIA Saclay since spring 2018, working on convolutional dictionary learning.

My research interests span several areas of Machine Learning, Signal Processing and High-Dimensional Statistics. In particular, I have been working on Convolutional Dictionary Learning, studying both its computational aspects and its applications to pattern analysis. I am also interested in the theoretical properties of learned optimization algorithms such as LISTA.

I did my PhD under the supervision of Nicolas Vayatis and Laurent Oudre, at the CMLA @ ENS Paris-Saclay. My PhD focused on convolutional representations and their applications to physiological signals. I am also involved in open-source projects for parallel scientific computing, such as joblib and loky.

Latest publication and projects

Sparsity-based blind deconvolution of neural activation signal in fMRI
Hamza Cherkaoui, Thomas Moreau, Abderrahim Halimi, Philippe Ciuciu, May 2019, in the proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
In this work, we formulate the joint estimation of the HRF and the neural activation signal as a semi-blind deconvolution problem.
The estimation of the hemodynamic response function (HRF) in functional magnetic resonance imaging (fMRI) is critical to deconvolve a time-resolved neural activity signal and gain insight into the underlying cognitive processes. Existing methods propose to estimate the HRF using the experimental paradigm (EP) in task fMRI as a surrogate of neural activity. These approaches induce a bias, as they do not account for latencies in the cognitive responses compared to the EP, and they cannot be applied to resting-state data, since no EP is available. In this work, we formulate the joint estimation of the HRF and the neural activation signal as a semi-blind deconvolution problem. Its solution can be approximated using an efficient alternate minimization algorithm. The proposed approach is applied to task fMRI data for validation purposes and compared to a state-of-the-art HRF estimation technique. Numerical experiments suggest that our approach is competitive with existing methods while not requiring EP information.
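
A toy illustration of this alternate minimization idea on a 1D deconvolution problem is sketched below. It is not the paper's algorithm: the signal model, the ISTA updates for the sparse activation and the least-squares kernel update (with a unit-norm rescaling to fix the scale ambiguity) are simplifying assumptions made for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: a sparse activation signal convolved with an HRF-like kernel.
    n, k = 300, 25
    h_true = np.exp(-np.arange(k) / 5.0) - 0.4 * np.exp(-np.arange(k) / 10.0)
    h_true /= np.linalg.norm(h_true)
    u_true = np.zeros(n)
    u_true[rng.choice(n, size=8, replace=False)] = rng.uniform(1, 3, size=8)
    y = np.convolve(u_true, h_true)[:n] + 0.01 * rng.standard_normal(n)

    lmbd = 0.1                                       # sparsity level (illustrative)
    u, h = np.zeros(n), np.r_[1.0, np.zeros(k - 1)]  # initial guesses

    for _ in range(50):
        # u-step: a few ISTA iterations on 0.5 * ||y - h * u||^2 + lmbd * ||u||_1
        step = 1.0 / np.linalg.norm(h, 1) ** 2       # 1 / Lipschitz upper bound
        for _ in range(20):
            residual = np.convolve(u, h)[:n] - y
            grad = np.convolve(residual, h[::-1])[k - 1:k - 1 + n]
            u = u - step * grad
            u = np.sign(u) * np.maximum(np.abs(u) - step * lmbd, 0)  # soft-threshold

        # h-step: least squares on the kernel given the current activation u.
        U = np.zeros((n, k))
        for s in range(k):
            U[s:, s] = u[:n - s]
        h, *_ = np.linalg.lstsq(U, y, rcond=None)

        # Fix the scale ambiguity: keep ||h|| = 1 and rescale u accordingly.
        scale = np.linalg.norm(h) + 1e-12
        h, u = h / scale, u * scale
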
Distributed Convolutional Dictionary Learning (DiCoDiLe): Pattern Discovery in Large Images and Signals (slides)
Apr 2019, at the Parietal seminar, INRIA Saclay
DiCoDiLe: a distributed and asynchronous algorithm, employing locally greedy coordinate descent and an asynchronous locking mechanism that does not require a central server.
Convolutional dictionary learning (CDL) estimates a shift-invariant basis adapted to multidimensional data. CDL has proven useful for image denoising and inpainting, as well as for pattern discovery in multivariate signals. As the estimated patterns can be positioned anywhere in the signals or images, optimization techniques face the difficulty of working in extremely high dimensions, with millions of pixels or time samples, in contrast to standard patch-based dictionary learning. To address this optimization problem, this work proposes a distributed and asynchronous algorithm, employing locally greedy coordinate descent and an asynchronous locking mechanism that does not require a central server. This algorithm can be used to distribute the computation over a number of workers which scales linearly with the encoded signal's size. Experiments confirm these scaling properties, which allow us to learn patterns on large-scale images from the Hubble Space Telescope.
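
To give an idea of the greedy coordinate descent step at the heart of DiCoDiLe, the sketch below runs greedy coordinate descent on a plain (non-convolutional) Lasso sparse coding problem, on a single worker. The convolutional structure, the local segments and the distributed asynchronous updates of the actual algorithm are left out, and all names and values are illustrative.

    import numpy as np

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def greedy_cd_lasso(D, y, lmbd, n_iter=500):
        """Greedy coordinate descent for 0.5 * ||y - D @ z||^2 + lmbd * ||z||_1,
        assuming the columns of D have unit l2 norm."""
        p = D.shape[1]
        z = np.zeros(p)
        residual = y.copy()                   # residual = y - D @ z
        for _ in range(n_iter):
            # Closed-form candidate update for every coordinate.
            z_new = soft_threshold(z + D.T @ residual, lmbd)
            delta = z_new - z
            i = np.argmax(np.abs(delta))      # greedy choice: largest move
            if abs(delta[i]) < 1e-10:         # no coordinate moves: converged
                break
            residual -= delta[i] * D[:, i]    # keep the residual up to date
            z[i] = z_new[i]
        return z

    # Toy usage on random data.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((50, 120))
    D /= np.linalg.norm(D, axis=0)
    z_true = np.zeros(120)
    z_true[rng.choice(120, size=5, replace=False)] = 1.0
    y = D @ z_true + 0.01 * rng.standard_normal(50)
    z_hat = greedy_cd_lasso(D, y, lmbd=0.05)
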
Loky Aug 2018
The aim of this project is to provide a robust, cross-platform and cross-version implementation of the ProcessPoolExecutor class of concurrent.futures.
The aim of this project is to provide a robust, cross-platform and cross-version implementation of the ProcessPoolExecutor class of concurrent.futures. It features:
  • Deadlock-free implementation: one of the major concerns with the standard multiprocessing and concurrent.futures libraries is the ability of the Pool/Executor to handle crashes of worker processes. This library intends to fix those possible deadlocks and send back meaningful errors.

  • Consistent spawn behavior: all processes are started using fork/exec on POSIX systems, which ensures safer interactions with third-party libraries.

  • Reusable executor: a strategy to avoid respawning a complete executor for every call. A singleton pool can be reused (and dynamically resized if necessary) across consecutive calls to limit spawning and shutdown overhead. The worker processes can be shut down automatically after a configurable idle timeout to free system resources (see the usage sketch below).
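
A minimal usage sketch of the reusable executor, assuming loky is installed; the worker function and the parameter values below are only illustrative:

    from loky import get_reusable_executor

    def slow_square(x):
        import time
        time.sleep(0.1)          # simulate some work
        return x ** 2

    # Returns a singleton executor, reused (and resized if needed) across calls;
    # idle workers are shut down after `timeout` seconds.
    executor = get_reusable_executor(max_workers=4, timeout=2)
    results = list(executor.map(slow_square, range(8)))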


python, multiprocessing, parallel computing