My research interests include weakly supervised learning and generalization in deep neural networks. I aim to build bridges between deep learning training mechanisms and more established machine learning techniques such as linear models and ensemble methods.
I did my PhD at Mila in Québec, under the joint supervision of Pascal Vincent and Guillaume Lajoie. Previously, I was an engineer wearing multiple hats at Eco-Adapt, where I worked with time series from various industrial sensors, developing automated algorithms to make sense of these data streams. Prior to that, I studied at École des Mines.
Here is my academic CV.
|Jul 4, 2023
|I will present our paper Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty at the Conférence sur l’Apprentissage Automatique (CAp) in Strasbourg.
|May 2, 2023
|I am now a postdoctoral researcher at Orange Labs in Vincent Lemaire’s group.
|Apr 20, 2023
|I successfully defended my PhD Thesis: Deep networks training and generalization: insights from linearization (manuscript, slides) 🥳
|Dec 21, 2022
|Our paper Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty was accepted to TMLR. (code)
|Sep 23, 2022
|New pre-print Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty available.
EKFAC — Fast approximate natural gradient descent in a Kronecker-factored eigenbasis. Advances in Neural Information Processing Systems, 2018.
NTKAlign — Implicit regularization via neural feature alignment. International Conference on Artificial Intelligence and Statistics, 2021.
LazyHasty — Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty. Transactions on Machine Learning Research, 2022.