ALE: A Simulation-Based Active Learning Evaluation Framework for the Parameter-Driven Comparison of Query Strategies for NLP.
Philipp Kohl, Nils Freyer, Yoka Krämer, Henri Werth, Steffen Wolf, Bodo Kraft, Matthias Meinecke, Albert Zündorf
Published in: DeLTA (2023)
Keyphrases
- evaluation framework
- active learning
- relevance feedback
- evaluation process
- evaluation methodology
- evaluation measures
- query processing
- user queries
- evaluation metrics
- natural language processing
- query expansion
- query suggestion
- information extraction
- text mining
- data sources
- query formulation
- natural language
- learning algorithm
- WordNet
- semantic annotation
- retrieval systems
- search queries
- web search
- benchmark datasets
- error rate
- co-occurrence
- datasets
- supervised learning
- semi-supervised
- domain knowledge
- training data
- information retrieval
- machine learning