How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN.
R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz. Published in: CoRR (2021)
Keyphrases
- language model
- text generation
- training data
- natural language generation
- language modeling
- natural language
- n-gram
- probabilistic model
- learning algorithm
- training set
- classification accuracy
- context sensitive
- statistical language models
- novelty detection
- natural language processing