Thomas Fel
Publication Activity (10 Years)
Years Active: 2020-2024
Publications (10 Years): 34
Top Topics
Object Detection
Diffusion Models
Natural Images
Neural Network
Top Venues
CoRR
NeurIPS
CVPR
ACL (Findings)
Publications
Chris J. Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George A. Alvarez:
Feature Accentuation: Revealing 'What' Features Respond to in Natural Images.
CoRR (2024)

Drew Linsley, Ivan Felipe Rodríguez, Thomas Fel, Michael Arcaro, Saloni Sharma, Margaret S. Livingstone, Thomas Serre:
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex.
CoRR (2023)

Léo Andéol, Thomas Fel, Florence De Grancey, Luca Mossina:
Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling.
COPA (2023)

Thomas Fel, Agustin Picard, Louis Béthune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre:
CRAFT: Concept Recursive Activation FacTorization for Explainability.
CVPR (2023)

Katherine L. Hermann, Hossein Mobahi, Thomas Fel, Michael C. Mozer:
On the Foundations of Shortcut Learning.
CoRR (2023)

Drew Linsley, Ivan F. Rodriguez Rodriguez, Thomas Fel, Michael Arcaro, Saloni Sharma, Margaret S. Livingstone, Thomas Serre:
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex.
NeurIPS (2023)

Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Lore Goetschalckx, Laurent Gardes, Thomas Serre:
Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization.
NeurIPS (2023)

Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas Asher:
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP.
ACL (Findings) (2023)

Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre:
Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization.
CoRR (2023)

Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre:
Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
ICML (2023)

Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre:
Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
CoRR (2023)

Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas Asher:
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks.
CoRR (2023)

Mathieu Serrurier, Franck Mamalet, Thomas Fel, Louis Béthune, Thibaut Boissin:
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective.
NeurIPS (2023)

Thomas Fel, Victor Boutin, Louis Béthune, Rémi Cadène, Mazda Moayeri, Léo Andéol, Mathieu Chalvidal, Thomas Serre:
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation.
NeurIPS (2023)

Drew Linsley, Pinyuan Feng, Thibaut Boissin, Alekh Karkada Ashok, Thomas Fel, Stephanie Olaiya, Thomas Serre:
Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception.
CoRR (2023)

Léo Andéol, Thomas Fel, Florence De Grancey, Luca Mossina:
Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling.
CoRR (2023)

Sabine Muzellec, Léo Andéol, Thomas Fel, Rufin VanRullen, Thomas Serre:
Gradient strikes back: How filtering out high frequencies improves explanations.
CoRR (2023)

Thomas Fel, Melanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, Claire Nicodème, Thomas Serre:
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis.
CVPR (2023)

Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Béthune, Léo Andéol, Mathieu Chalvidal, Thomas Serre:
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation.
CoRR (2023)

Paul Novello, Thomas Fel, David Vigouroux:
Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure.
CoRR (2022)

Thomas Fel, Ivan Felipe Rodríguez, Drew Linsley, Thomas Serre:
Harmonizing the object recognition strategies of deep neural networks with humans.
CoRR (2022)

Thomas Fel, Melanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, Claire Nicodeme, Thomas Serre:
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis.
CoRR (2022)

Paul Novello, Thomas Fel, David Vigouroux:
Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure.
NeurIPS (2022)

Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poche, Justin Plakoo, Rémi Cadène, Mathieu Chalvidal, Julien Colin, Thibaut Boissin, Louis Béthune, Agustin Picard, Claire Nicodeme, Laurent Gardes, Grégory Flandin, Thomas Serre:
Xplique: A Deep Learning Explainability Toolbox.
CoRR (2022)

Thomas Fel, Ivan F. Rodriguez Rodriguez, Drew Linsley, Thomas Serre:
Harmonizing the object recognition strategies of deep neural networks with humans.
NeurIPS (2022)

Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre:
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods.
NeurIPS (2022)

Mathieu Serrurier, Franck Mamalet, Thomas Fel, Louis Béthune, Thibaut Boissin:
When adversarial attacks become interpretable counterfactual explanations.
CoRR (2022)

Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre:
How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks.
WACV (2022)

Thomas Fel, Agustin Picard, Louis Béthune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre:
CRAFT: Concept Recursive Activation FacTorization for Explainability.
CoRR (2022)

Mohit Vaishnav, Thomas Fel, Ivan Felipe Rodríguez, Thomas Serre:
Conviformers: Convolutionally guided Vision Transformer.
CoRR (2022)

Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre:
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis.
NeurIPS (2021)

Thomas Fel, Julien Colin, Rémi Cadène, Thomas Serre:
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods.
CoRR (2021)

Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre:
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis.
CoRR (2021)

Thomas Fel, David Vigouroux:
Representativity and Consistency Measures for Deep Neural Network Explanations.
CoRR (2020)