news

Jan 28, 2025 I will attend EGC in Strasbourg to present our paper Calibration des modèles d’apprentissage pour l’amélioration des détecteurs automatiques d’exemples mal-étiquetés, and the IACD workshop to present our paper Mislabeled examples detection viewed as probing machine learning models: concepts, survey and extensive benchmark.
Oct 17, 2024 Our paper Mislabeled examples detection viewed as probing machine learning models: concepts, survey and extensive benchmark was accepted to TMLR. (code, video)
May 02, 2024 I joined Orange Innovation as a permanent research scientist in the Responsible and Impactful AI program.
Jul 04, 2023 I will present our paper Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty at the Conférence sur l’Apprentissage Automatique (CAp) in Strasbourg.
May 02, 2023 I am now a postdoctoral researcher at Orange Labs in Vincent Lemaire’s group.
Apr 20, 2023 I successfully defended my PhD thesis: Deep networks training and generalization: insights from linearization (manuscript, slides) 🥳
Dec 21, 2022 Our paper Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty was accepted to TMLR. (code)
Sep 23, 2022 New pre-print Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty available.
Jul 22, 2022 I will present our recent work Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty (paper, poster) at the SCIS workshop at ICML 2022.
Jul 23, 2021 Presentation of our workshop paper Continual learning and Deep Networks: an Analysis of the Last Layer with Timothée Lesort at the Theory of Continual Learning workshop at ICML.
Jun 14, 2021 Presentation of our paper Implicit Regularization via Neural Feature Alignment at the Conférence sur l’Apprentissage Automatique 2021.
Apr 21, 2021 I will present NNGeometry at the PyTorch Ecosystem Day.
Feb 19, 2021 I will give a talk titled Optimization and generalization through the lens of the linearization of neural networks training dynamics to Roger Grosse’s group at the Vector Institute in Toronto.