Leander Weber
Publication Activity (10 Years)
Years Active: 2020-2024
Publications (10 Years): 16
Top Topics
Explanatory Power
Fuzzy Artmap
Formal Model
Neural Network
Top Venues
CoRR
Inf. Fusion
ICIP
CD-MAKE
Publications
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne: Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test. CoRR (2024)
Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations. CVPR (2023)
Anna Hedström, Leander Weber, Daniel Krakowczyk, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne: Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. J. Mach. Learn. Res. 24 (2023)
Leander Weber, Jim Berend, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Layer-wise Feedback Propagation. CoRR (2023)
Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek: Beyond explaining: Opportunities and challenges of XAI-based model improvement. Inf. Fusion 92 (2023)
Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations. CoRR (2022)
Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin: Finding and removing Clever Hans: Using explanation methods to debug and improve deep models. Inf. Fusion 77 (2022)
Franz Motzkus, Leander Weber, Sebastian Lapuschkin: Measurably Stronger Explanation Reliability Via Model Canonization. ICIP (2022)
Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek: Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement. CoRR (2022)
Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin: Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI. CoRR (2022)
Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin: Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI. CD-MAKE (2022)
Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin: PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging. CoRR (2022)
Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne: Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations. CoRR (2022)
Franz Motzkus, Leander Weber, Sebastian Lapuschkin: Measurably Stronger Explanation Reliability via Model Canonization. CoRR (2022)
Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder: Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. CoRR (2020)
Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder: Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. ICPR (2020)