DevEval: A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories
Jia Li, Ge Li, Yunfei Zhao, Yongmin Li, Huanyu Liu, Hao Zhu, Lecheng Wang, Kaibo Liu, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yuqi Zhu, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, Yongbin Li, Bin Gu, Mengfei Yang
Published in: ACL (Findings) (2024)
Keyphrases
- code generation
- manually annotated
- real world
- relation extraction
- ground truth
- code generator
- application development
- software development
- model driven
- modeling language
- source code
- software reuse
- formal specification
- rapid prototyping
- case study
- artificial intelligence
- design patterns
- learning objects
- automatic extraction
- data processing
- semi supervised
- process model
- data management
- domain specific
- information extraction
- expert systems
- web services