LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models.
Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John J. Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A. Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li
Published in: CoRR (2023)
Keyphrases
- language model
- legal reasoning
- language modeling
- case based reasoning
- document retrieval
- defeasible reasoning
- n-gram
- probabilistic model
- inference rules
- information retrieval
- speech recognition
- legal cases
- statistical language models
- query expansion
- test collection
- language modelling
- smoothing methods
- relevance model
- retrieval model
- vector space model
- artificial intelligence and law
- ad hoc information retrieval
- context sensitive
- language models for information retrieval
- pseudo relevance feedback
- term dependencies
- query specific
- document ranking
- okapi bm25
- language modeling framework
- modal logic
- hidden markov models