RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models.

Bichen Wang, Yuzhe Zi, Yixin Sun, Yanyan Zhao, Bing Qin
Published in: CoRR (2024)