Thomas George

I am a researcher at Orange Innovation in the Responsible and Impactful AI program.
My research interests include:
- Generalization and explainability of deep neural networks
- Weakly supervised learning
- Causality in machine learning
I did my PhD at Mila in Québec under the joint supervision of Pascal Vincent and Guillaume Lajoie, followed by a post-doc in Vincent Lemaire's group at Orange Innovation. Before that, I was an engineer wearing multiple hats at Eco-Adapt, where I worked on time series from industrial sensors, implementing automated algorithms to make sense of these data streams. Prior to that, I studied at École des Mines.
Here is my academic CV.
news
Feb 10, 2025 | New post on Orange Innovation's Hello Future blog: how to make AI systems explainable? (French, English)
Jan 28, 2025 | I will attend EGC in Strasbourg to present our paper Calibration des modèles d'apprentissage pour l'amélioration des détecteurs automatiques d'exemples mal-étiquetés, and the IACD workshop, where I will present our paper Mislabeled examples detection viewed as probing machine learning models: concepts, survey and extensive benchmark.
Jan 23, 2025 | Pierre Nodet and I will present our recent work on weakly supervised learning at the LFI/LIP6 seminar: Apprentissage faiblement supervisé: algorithmes biqualité et détection automatisée d'exemples mal-étiquetés.
Oct 17, 2024 | Our paper Mislabeled examples detection viewed as probing machine learning models: concepts, survey and extensive benchmark was accepted to TMLR. (code, video)
May 02, 2024 | I joined Orange Innovation as a permanent research scientist in the Responsible and Impactful AI program.
latest posts
Feb 15, 2019 | Derivatives through a batch norm layer
Nov 09, 2018 | What is the empirical Fisher?
Oct 29, 2018 | How to compute the Fisher of a conditional when applying natural gradient to neural networks?