Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval

Bibliographic Details
Title: Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval
Authors: Wu, Jiarong; Chen, Songqiang; Cao, Jialun; Lo, Hau Ching; Cheung, Shing-Chi
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Software Engineering, Computer Science - Computation and Language
Abstract: Existing code generation benchmarks for Large Language Models (LLMs), such as HumanEval and MBPP, are designed to study LLMs' end-to-end performance: the benchmarks feed a problem description in natural language as input and examine the code generated in specific programming languages. However, the evaluation scores obtained this way provide little insight into the bottleneck of code generation -- whether LLMs struggle with their problem-solving capability or their language-coding capability. To answer this question, we construct PseudoEval, a multilingual code generation benchmark that provides a solution written in pseudocode as input. By doing so, the bottleneck of code generation in various programming languages can be isolated and identified. Our study yields several interesting findings. For example, we identify that the bottleneck of LLMs in Python programming is problem-solving, while Rust struggles relatively more with language-coding. Our study also indicates that problem-solving capability may transfer across programming languages, whereas language-coding requires more language-specific effort, especially for undertrained programming languages. Finally, we release the pipeline for constructing PseudoEval to facilitate its extension to existing benchmarks. PseudoEval is available at: https://anonymous.4open.science/r/PseudocodeACL25-7B74.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2502.19149
Accession Number: edsarx.2502.19149
Database: arXiv