DevEval: A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories
Jia Li, Ge Li, Yunfei Zhao, Yongmin Li, Huanyu Liu, Hao Zhu, Lecheng Wang, Kaibo Liu, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yuqi Zhu, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, Yongbin Li
Published in: CoRR (2024)
Keyphrases
- code generation
- manually annotated
- real world
- application development
- code generator
- ground truth
- software development
- formal specification
- modeling language
- software reuse
- design patterns
- model driven
- source code
- rapid prototyping
- case study
- development environment