Annotation alignment: Comparing LLM and human annotations of conversational safety.

Rajiv Movva, Pang Wei Koh, Emma Pierson
Published in: CoRR (2024)