LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples.
Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Li Yuan
Published in: CoRR (2023)
Keyphrases
training examples
specific features
feature set
feature extraction
low level
classification accuracy
Gabor filters
false positives
image classification
distinctive features
invariant features
salient features
key features
co-occurrence
supervised learning
semi-supervised
image features
training data