Liam Fowl
Publication Activity (Last 10 Years)
Years Active: 2019–2024
Publications (Last 10 Years): 38
Top Topics
Transfer Learning
Feature Representations
Private Data
Neural Nets
Top Venues
CoRR
NeurIPS
ICASSP
ICML
Publications
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum. Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion. CoRR (2024)
Beltrán Labrador, Guanlong Zhao, Ignacio López-Moreno, Angelo Scorza Scarpati, Liam Fowl, Quan Wang. Exploring Sequence-to-Sequence Transformer-Transducer Models for Keyword Spotting. ICASSP (2023)
Harrison Foley, Liam Fowl, Tom Goldstein, Gavin Taylor. Execute Order 66: Targeted Data Poisoning for Reinforcement Learning. CoRR (2022)

Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping-Yeh Chiang, Yehuda Dar, Richard G. Baraniuk, Micah Goldblum, Tom Goldstein. Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective. CVPR (2022)

Beltrán Labrador, Guanlong Zhao, Ignacio López-Moreno, Angelo Scorza Scarpati, Liam Fowl, Quan Wang. Exploring Sequence-to-Sequence Transformer-Transducer Models for Keyword Spotting. CoRR (2022)

Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein. Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification. CoRR (2022)

Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein. Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning. CoRR (2022)

Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein. Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models. CoRR (2022)

Hossein Souri, Liam Fowl, Rama Chellappa, Micah Goldblum, Tom Goldstein. Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. NeurIPS (2022)

Pedro Sandoval Segura, Vasu Singla, Liam Fowl, Jonas Geiping, Micah Goldblum, David Jacobs, Tom Goldstein. Poisons that are learned faster are more effective. CVPR Workshops (2022)

Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping-Yeh Chiang, Yehuda Dar, Richard G. Baraniuk, Micah Goldblum, Tom Goldstein. Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective. CoRR (2022)

Pedro Sandoval Segura, Vasu Singla, Liam Fowl, Jonas Geiping, Micah Goldblum, David Jacobs, Tom Goldstein. Poisons that are learned faster are more effective. CoRR (2022)

Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein. Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification. ICML (2022)
Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, Tom Goldstein. Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models. CoRR (2021)

Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojciech Czaja, Tom Goldstein. Adversarial Examples Make Strong Poisons. NeurIPS (2021)

Liam Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, Wojtek Czaja, Tom Goldstein. Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release. CoRR (2021)

Hossein Souri, Micah Goldblum, Liam Fowl, Rama Chellappa, Tom Goldstein. Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. CoRR (2021)

Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein. Adversarial Examples Make Strong Poisons. CoRR (2021)

Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein. What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors. CoRR (2021)

Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta. Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff. ICASSP (2021)

Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein. DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations. CoRR (2021)
Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu. Headless Horseman: Adversarial Attacks on Transfer Learning Models. ICASSP (2020)

W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein. Understanding Generalization Through Visualizations. ICBINB@NeurIPS (2020)

Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein. Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks. ICML (2020)

Micah Goldblum, Liam Fowl, Tom Goldstein. Adversarially Robust Few-Shot Learning: A Meta-Learning Approach. NeurIPS (2020)

W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein. MetaPoison: Practical General-purpose Clean-label Data Poisoning. CoRR (2020)

Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein. Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks. CoRR (2020)

Liam Fowl, Micah Goldblum, Arjun Gupta, Amr Sharaf, Tom Goldstein. Random Network Distillation as a Diversity Metric for Both Image and Text Generation. CoRR (2020)

Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta. Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff. CoRR (2020)

Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein. Adversarially Robust Distillation. AAAI (2020)

Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein. Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching. CoRR (2020)

W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein. MetaPoison: Practical General-purpose Clean-label Data Poisoning. NeurIPS (2020)

Neehar Peri, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson. Deep k-NN Defense Against Clean-Label Data Poisoning Attacks. ECCV Workshops (1) (2020)

Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu. Headless Horseman: Adversarial Attacks on Transfer Learning Models. CoRR (2020)
Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson. Strong Baseline Defenses Against Clean-Label Poisoning Attacks. CoRR (2019)

W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein. Understanding Generalization through Visualizations. CoRR (2019)

Micah Goldblum, Liam Fowl, Tom Goldstein. Robust Few-Shot Learning with Adversarially Queried Meta-Learners. CoRR (2019)

Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein. Adversarially Robust Distillation. CoRR (2019)