Thomas George


I am a postdoctoral researcher at Orange Labs in Vincent Lemaire’s group.

My research interests include weakly supervised learning and generalization in deep neural networks. In particular, I am interested in building bridges between the training mechanisms of deep networks and more established machine learning techniques such as linear models and ensemble methods.
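As a rough illustration of the kind of bridge I have in mind (notation mine; this is a standard sketch of the lazy regime rather than a statement of any specific result): linearizing a network $f_\theta$ around its initialization $\theta_0$ via a first-order Taylor expansion,

$$f_\theta(x) \approx f_{\theta_0}(x) + \nabla_\theta f_{\theta_0}(x)^\top (\theta - \theta_0),$$

yields a model that is linear in its parameters $\theta$, so that tools from the linear-models literature can be brought to bear on deep network training.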

I did my PhD at Mila in Québec, under the joint supervision of Pascal Vincent and Guillaume Lajoie. Previously, I was an engineer wearing multiple hats at Eco-Adapt, where I worked with time series from various industrial sensors and developed automated algorithms to make sense of these data streams. Prior to that, I studied at École des Mines.

Here is my academic CV.

news

Apr 20, 2023 I successfully defended my PhD thesis, Deep networks training and generalization: insights from linearization (manuscript, slides) 🥳
Dec 21, 2022 Our paper Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty was accepted to TMLR. (code)
Sep 23, 2022 New preprint Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty is now available.
Jul 22, 2022 I will present our recent work Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty (paper, poster) at the SCIS workshop at ICML 2022.
Jul 23, 2021 Timothée Lesort and I presented our workshop paper Continual learning and Deep Networks: an Analysis of the Last Layer at the Theory of Continual Learning workshop at ICML.

selected publications

  1. EKFAC
Fast approximate natural gradient descent in a Kronecker-factored eigenbasis
    Thomas George, César Laurent, Xavier Bouthillier, and 2 more authors
    Advances in Neural Information Processing Systems, 2018
  2. NTKAlign
    Implicit regularization via neural feature alignment
    Aristide Baratin, Thomas George, César Laurent, and 4 more authors
    In International Conference on Artificial Intelligence and Statistics, 2021
  3. LazyHasty
    Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty
    Thomas George, Guillaume Lajoie, and Aristide Baratin
    Transactions on Machine Learning Research, 2022