Maximilian Dreyer
Publication Activity (10 Years)
Years Active: 2020-2024
Publications: 15
Top Topics
Quantization Noise
Latent Space
Bias Correction
Top Venues
CoRR
Nat. Mac. Intell.
xxAI@ICML
MICCAI (2)
Publications
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin: From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space. AAAI (2024)
Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin: Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression. CoRR (2024)
Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek: AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers. CoRR (2024)
Maximilian Dreyer, Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin: PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits. CoRR (2024)
Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer: Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification. CoRR (2024)
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin: From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space. CoRR (2023)
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: From attribution maps to human-understandable explanations through Concept Relevance Propagation. Nat. Mac. Intell. 5 (9) (2023)
Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin: Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations. CoRR (2023)
Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin: Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. CoRR (2023)
Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations. CVPR Workshops (2023)
Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin: Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. MICCAI (2) (2023)
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation. CoRR (2022)
Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations. CoRR (2022)
Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin: ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. CoRR (2021)
Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin: ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. xxAI@ICML (2020)