PromptIntern: Saving Inference Costs by Internalizing Recurrent Prompt during Large Language Model Fine-tuning.

Jiaru Zou, Mengyu Zhou, Tao Li, Shi Han, Dongmei Zhang
Published in: CoRR (2024)