BigCodeBench
https://bigcode-bench.github.io/
{[email protected]; [email protected]}
ABSTRACT
Task automation has been greatly empowered by the recent advances in Large Language Models (LLMs) via Python code, with tasks ranging from software engineering development to general-purpose reasoning. While current benchmarks have shown that LLMs can solve tasks using programs like human developers, the majority of their evaluations are limited to short, self-contained algorithmic tasks or standalone function calls. Solving challenging and practical tasks requires the capability of utilizing diverse function calls as tools to efficiently implement functionalities like data analysis and web development. In addition, using multiple tools to solve a task requires compositional reasoning to accurately understand complex instructions. Fulfilling both of these characteristics can pose a great challenge for LLMs. To assess how well LLMs can solve challenging and practical tasks via programs, we introduce BigCodeBench, a benchmark that challenges LLMs to invoke multiple function calls as tools from 139 libraries and 7 domains for 1,140 fine-grained tasks. To evaluate LLMs rigorously, each task encompasses an average of 5.6 test cases with 99% branch coverage. In addition, we propose a natural-language-oriented variant of BigCodeBench, BigCodeBench-Instruct, that automatically transforms the original docstrings into short instructions containing only essential information. Our extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores up to 60%, significantly lower than the human performance of 97%. The results underscore the need for further advancements in this area.
∗ The work does not relate to the author's position at Amazon.
Figure 1: Programming tasks in BigCodeBench are structured with complex instructions in the
docstrings, annotated by experts. The behavior of the solution is evaluated against a class of rigorous
test cases with the proper environment setup.
1 INTRODUCTION
Task automation, including competitive programming (Li et al., 2022; Hendrycks et al., 2021; Jain
et al., 2024), GitHub issue resolution (Yang et al.), and question answering (Gao et al., 2023; Chen
et al.), has attracted significant interest from academia and industry to facilitate the development of
advanced models for code, especially in Python (Wang et al.). With recent advances in data-centric
deep learning techniques, large language models (LLMs) trained on large-scale corpora have shown
superior capabilities of translating textual inputs to syntactically correct and functional programs.
However, widely-used benchmarks like HumanEval (Chen et al., 2021) and MBPP (Austin et al.,
2021) only contain short, self-contained, and algorithm-focused programming tasks and have been
saturated by recent model releases. In this work, we aim to close the evaluation gap between these
isolated coding exercises and real-world programming, asking: Can LLMs solve more challenging and more practical tasks via programs?
Solving programming tasks in real-world scenarios typically has two main characteristics: (1) Diverse
Function Calls as Tools.1 A complex programming task often requires the invocation of diverse
function call sequences as tools (Robillard & DeLine, 2011; Hu et al., 2018; Qin et al., 2023). To
avoid reinventing the wheel, domain-specific libraries are designed to cover function calls (or APIs)
with comprehensive functionalities; and (2) Complex Instructions. Due to the complexity of performing various programming tasks, instructions may require compositional reasoning to perform a sequence of functionalities in the correct order (e.g., input data manipulation, error message handling, and specific output formatting) (Wiegers & Beatty, 2013; Paetsch et al., 2003; Partsch, 2012). For instance, creating a network application that includes functionality for retrieving responses from an HTTPS server (Figure 1) requires integrating several components with multiple function calls, such as managing SSL contexts, handling socket connections, and ensuring the response is returned in the specified format.
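For illustration, a minimal sketch of the kind of solution such a task expects is shown below; the function name, signature, and exact requirements are hypothetical and only meant to convey how SSL contexts, socket handling, and output formatting are combined in a single task.

```python
import socket
import ssl


def task_func(server_name: str, server_port: int = 443, buffer_size: int = 4096) -> str:
    """Retrieve a response from an HTTPS server and return it as a decoded string."""
    # Set up a TLS context with default certificate verification.
    context = ssl.create_default_context()
    # Open the TCP connection and wrap it with TLS before sending the request.
    with socket.create_connection((server_name, server_port)) as sock:
        with context.wrap_socket(sock, server_hostname=server_name) as tls_sock:
            request = f"GET / HTTP/1.1\r\nHost: {server_name}\r\nConnection: close\r\n\r\n"
            tls_sock.sendall(request.encode("utf-8"))
            # Return the response in the expected format (a UTF-8 string).
            return tls_sock.recv(buffer_size).decode("utf-8", errors="replace")
```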
Building a high-quality execution-based benchmark and an environment that simulates the aforementioned tasks with practical and challenging constraints is non-trivial. First, it is hard to naturally
source self-contained programming tasks with complex instructions, unlike the short code exercises
in HumanEval. Although most GitHub repositories contain realistic source code, the functions inside
these repositories often require cross-file information (Ding et al., 2024). Second, real-world program-
ming scenarios are extremely diverse (Zhout et al., 2023). Existing benchmarks (Zan et al., 2022b;
Lai et al., 2023) only focus on popular scenarios like data science. Third, mainstream programming
benchmarks have a significant number of ambiguous or under-specified specifications, resulting in
inaccurate evaluation results (Siddiq et al., 2024; Jain et al.). While there have been attempts to
improve the data quality with LLMs (Jain et al., 2023), LLMs have their own biases and cannot reliably
perform refinement (Zheng et al., 2024a).
1 In this work, we refer to "tools" as library-oriented but non-object-oriented code function-call APIs, as discussed in Gorilla OpenFunctions. These function-calling APIs are typically seen in common external Python packages like NumPy, Scikit-learn, and Pandas.
Figure 2: Each programming task in BigCodeBench is created through a three-stage construction
process. The task quality is controlled by the human-LLM collaboration.
To construct massive high-quality programming tasks, we propose a novel framework (Figure 2) that
uses collaboration between LLMs and human experts to build a rigorous execution-based benchmark,
BigCodeBench. Particularly, we utilize LLMs to source programming tasks, refactor programs,
and add test cases, under constant human supervision. Our benchmark contains 1,140 rich-context
and multi-tool-use programming tasks in Python, covering 723 function calls from 139 popular
libraries across 7 domains. As we aim to let models reason about any suitable function calls to complete the
tasks via code, we design the unit test cases in an open-ended manner and examine certain behaviors
based on different inputs (Jain et al.). We assess two common programming scenarios: (1) Code
Completion (BigCodeBench-Complete), which evaluates the capability of code generation
based on the structured docstrings; and (2) Instruction to Code (BigCodeBench-Instruct),
which evaluates the ability to complete programming tasks based on natural-language-oriented
(NL-oriented) instructions. While BigCodeBench-Complete emphasizes structured docstring
prompts, BigCodeBench-Instruct challenges LLMs to generate precise code without relying
on non-essential details like interactive Python examples.
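To illustrate the difference between the two splits, a hypothetical task is rendered in both formats below; the function name, docstring fields, and wording are illustrative rather than copied from the benchmark.

```python
# BigCodeBench-Complete: a structured docstring prompt that the model must complete.
import socket
import ssl


def task_func(server_name, server_port, buffer_size=4096):
    """
    Retrieve the response from an HTTPS server and return it as a string.

    Parameters:
        server_name (str): Hostname of the HTTPS server.
        server_port (int): Port of the HTTPS server.
        buffer_size (int): Maximum number of bytes to read.

    Returns:
        str: The decoded response from the server.

    Requirements:
        - ssl
        - socket

    Example:
        >>> response = task_func('www.example.com', 443)
    """


# BigCodeBench-Instruct: the same task condensed into an NL-oriented instruction,
# without parameter listings or interactive examples.
INSTRUCTION = (
    "Write a function task_func(server_name, server_port, buffer_size=4096) that "
    "connects to an HTTPS server over TLS, sends a request, and returns the decoded "
    "response as a string."
)
```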
Through extensive studies on 60 models, we find that LLMs still struggle to invoke multiple function
calls from cross-domain libraries and follow complex instructions to solve programming tasks
using Python. Specifically, the best-performing LLM, GPT-4o, solves merely 60% of tasks on
BigCodeBench-Complete and less than 50% on BigCodeBench-Instruct, indicating that
LLMs themselves lack the ability to align with human expectations when instructions are more
natural. Interestingly, we find that some instruction-tuned LLMs like GPT-4 constantly refuse to repeat the essential context given in long instructions and thus fail the test cases. Furthermore, LLMs perform differently when using domain-specific function calls as tools. We also demonstrate the strong positive correlations between mainstream benchmarks and BigCodeBench, validating our
evaluation results.
2 BENCHMARK CONSTRUCTION
One intuitive method would be to rely on source code repositories to construct the function-level
programming tasks. However, most repository-level functions require cross-file information, such as
customized modules (Ding et al., 2024), which are hard to self-contain and document. We argue that
leveraging LLMs to synthesize customized programming tasks can be more viable (Wei et al., 2023;
Luo et al., 2023), especially when overseen by humans. Given a code snippet of API usage with a brief
human instruction as the seed example (Figure 2), an LLM is instructed to enrich the programming
intent and refine the corresponding implementation by using diverse libraries. Specifically, we
use seed examples from ODEX (Wang et al., 2023c), a benchmark containing intent-paired code
skeletons from Stack Overflow. We use the GPT-4 API2, the strongest LLM at the time of data sourcing, to synthesize the programming tasks. To help the LLM synthesize self-contained and relevant programming tasks based on the seed example, we instruct the model with a 2-shot in-context demonstration (Appendix J.1) crafted by the lead annotator.
2 We use the model version gpt-4-0613.
As previous studies have shown that LLMs favor their own generations (Zheng et al., 2024a; Pan-
ickssery et al., 2024), such phenomena can make the model evaluation unfair. We mitigate the model
biases with an obfuscation and perturbation process. We first replace the semantic-rich program
entry points with dummy function names. In addition, we perturb the natural language descriptions
in docstrings with the back-translation method of NL-Augmenter (Dhole et al., 2023), inspired by
ReCode (Wang et al., 2023a). After validating the post-processed programs with an abstract syntax
tree parser, we collected 4,718 function-level programming samples.
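A minimal sketch of this validation step is shown below, assuming the post-processed programs are stored as strings; the helper names and data layout are hypothetical.

```python
import ast


def is_valid_program(source: str) -> bool:
    """Return True if the program parses into a valid Python abstract syntax tree."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


def filter_samples(samples: list[dict]) -> list[dict]:
    """Keep only samples whose obfuscated and perturbed program still parses."""
    return [sample for sample in samples if is_valid_program(sample["program"])]
```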
Programs synthesized by LLMs may contain various issues, such as undeclared variables and runtime
bugs. Without proper verification, the implementation cannot directly serve as a ground-truth solution.
To construct a high-quality execution-based benchmark, we need to add test cases that can rigorously
verify the correctness of programs and identify any bugs. However, it takes non-trivial effort for
human developers to understand synthesized programs and properly refactor them with thorough
testing. To improve the program quality and ease human annotation, we propose a conversion-driven
framework for program refactoring and test case generation, inspired by Xia & Zhang (2023). Specifically, we utilize the Code Interpreter session in the web-based GPT-4, which is backed by a Linux-based virtual environment with pre-installed Python packages. We engage 13 authors as human annotators (including the lead annotator) for this step, assigning each annotator 100 randomly sampled programming tasks based on their preferences for code topics (Appendix J.2.1). We detail
the design of this human-in-the-loop framework from human and LLM aspects as follows:
Human Aspect Human developers possess varying preferences and levels of familiarity with
specific data types and programming scenarios. To aid human annotators in providing more precise
feedback for refining programs with GPT-4, we have defined 10 data types (e.g., SQL, CSV, and
Python built-in types) and task scenarios (e.g., data analysis, networking, and visualization). The GPT-4 API is utilized to automatically classify each program according to these categories, with detailed descriptions available in Appendix J.2.1. The annotators' role is to continually instruct GPT-4 to refactor the programs and to provide continuous feedback to guide the model whenever it fails to self-debug or incorrectly refactors the program.
LLM Aspect To effectively guide GPT-4 in the iterative refinement of programs and test cases,
we provide detailed annotation guidelines in Appendix J.2.2 as an initial prompt. These guidelines
encompass two high-level instructions: (1) Refine the function, including its docstrings, to enhance
realism and reduce ambiguity; and (2) Write unit tests to ensure the functional correctness of the program with respect to the given description. Specifically, the model is taught to follow a step-by-step refinement process:
(1) Remove unused libraries and add necessary libraries if they are missing in the code snippet; (2)
Reformat docstrings to adhere to PEP-257 conventions; (3) Align program implementations with
the instructions inside docstrings; (4) Write and execute unit tests to ensure they pass; and (5) Add
refined programs and test cases to files for downloading.
During interactions with GPT-4, we identify two main drawbacks in the Code Interpreter session.
First, GPT-4 struggles to write proper test cases when mocking tests are employed. While the model
can generate high-level designs of mocking tests, it often fails to understand how test cases should be
constructed based on execution feedback. Second, GPT-4 can become stuck while resolving runtime bugs, leading to iterative refinement until the session times out. Continuous human feedback on
viable solutions is essential to address these issues and ensure the model stays on track.
During the post-processing of the annotated data, we observe that a significant number of test cases
were incomplete. This issue arises because GPT-4 often omits partial content when writing long
contexts into files. After removing all invalid programs via program analysis, we end up with 1,223
refactored programming tasks with paired test cases.
Pre-Evaluation To enhance the benchmark quality, we perform a dry run to pre-evaluate an LLM other than the GPT-4 used in previous steps. We choose the GPT-3.5-Turbo API3 to generate solutions based on the examined task prompts. By understanding how the model fails on the tasks, annotators may add essential details to the docstrings to clarify the task instructions, but avoid describing step-by-step implementations.
3 We use model version gpt-3.5-turbo-1106.
Cross-Checking To further validate data quality and ensure consistency across all programming
tasks in the testing environment, 7 additional annotators refine and finalize the data annotated by
others. This round focuses on the utility of docstrings and test cases. We automatically parse the
docstrings and ask annotators to manually correct the docstring structures, specifically addressing
task descriptions, function parameters, expected returns, exception handling, required modules, and
interactive examples. Additionally, we remove unused imported modules via program analysis. For
the interactive examples, we ensure their correctness via doctest, except for those requiring system
and network access. The confirmed programming tasks are finally validated by the automated test
workflows in GitHub Container Registry, where the test cases are automatically run against the task
implementations in a configured sandbox environment. To ensure the finalized data quality, we
randomly assign 33 finalized task prompts to the 11 annotators (the lead annotator was excluded;
one annotator was unavailable at the time) to write the solutions. The lead annotator conducts the
evaluation of the solutions, finding that 97% (32 out of 33) of sampled tasks can pass all test cases.
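As a rough illustration of the doctest-based check on interactive examples (not the exact script used in our workflow), the snippet below imports a task's ground-truth solution as a module and runs the >>> examples embedded in its docstrings:

```python
import doctest
import importlib


def check_interactive_examples(module_name: str) -> bool:
    """Run the >>> examples in the module's docstrings and report whether all of them pass."""
    module = importlib.import_module(module_name)
    results = doctest.testmod(module, verbose=False)
    return results.failed == 0
```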
When instruction tuning LLMs using NL data, the input is mainly an NL instruction, and the target
output is the NL or source code for completion (Muennighoff et al., 2023). This training objective
aligns with downstream applications such as multi-turn dialogue, where users ask questions or
provide task-specific instructions to the models. While existing programming benchmarks commonly format verbose instructions as docstrings (Chen et al., 2021), users may instruct the models to generate code samples with less verbose NL context. Although there have been similar attempts, such as HumanEvalPack (Muennighoff et al., 2023), to address this limitation, their inputs still lack some naturalness. Generally, users tend to describe the high-level idea of functionalities and
avoid redundant information (e.g., parameters) or too-specific details (e.g., interactive examples).
Thus, we create BigCodeBench-Instruct, a benchmark variant that prompts LLMs to solve programming tasks with more NL-oriented instructions, assessing the model's ability to understand
human requirements correctly. Based on the task prompts created in BigCodeBench-Complete,
we design a set of parsing rules and transform them into more natural instructions (Figure 3). For
quality control, 5 authors who do not participate in previous stages inspect the randomly sampled
prompts and their corresponding ground-truth solutions and reach an agreement on the alignment
between the instruction prompts and task implementations.
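To give a flavor of these parsing rules, the sketch below keeps only the free-text description of a docstring and drops the structured sections (parameters, returns, requirements, and interactive examples); the actual rule set used for BigCodeBench-Instruct is more elaborate, so this is only an approximation.

```python
import re

STRUCTURED_SECTIONS = ("Parameters:", "Returns:", "Raises:", "Requirements:", "Example:", "Examples:")


def docstring_to_instruction(signature: str, docstring: str) -> str:
    """Turn a structured docstring into a condensed NL-oriented instruction."""
    kept_lines = []
    for line in docstring.splitlines():
        stripped = line.strip()
        if stripped.startswith(STRUCTURED_SECTIONS):
            break  # drop everything from the first structured section onward
        if stripped.startswith(">>>"):
            continue  # drop interactive examples
        kept_lines.append(stripped)
    description = re.sub(r"\s+", " ", " ".join(kept_lines)).strip()
    return f"{description} You should write self-contained code starting with: {signature}"
```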
3 BENCHMARK STATISTICS
[Figure 4 excerpt: e.g., the Time domain (10% of tasks) covers libraries such as datetime, time, pytz, dateutil, holidays, and calendar, with function calls like datetime.datetime.now, time.time, time.sleep, and datetime.datetime.strptime.]
Figure 4: Examples of tools in BigCodeBench are illustrated. Each function call belongs to a
domain-specific library. The distribution of each domain is computed based on the frequency of
domain-specific libraries appearing per task. For example, "63%" for "Computation" means that 63% of tasks in BigCodeBench use at least one computation library.
Overall Statistics The first part of Table 1 presents an overview comparison between
BigCodeBench and other mainstream function-level Python programming benchmarks. We note
that the full DS-1000 dataset emphasizes the perturbed tasks to avoid model memorization and thus
includes an original subset for reference. Therefore, we also include statistics for non-perturbed
problems (DS-1000 Orig.). As the provided statistics suggest, BigCodeBench provides a much more rigorous execution-based evaluation and has longer task prompts that contain complex instructions.
The ground-truth solutions are also longer than in prior benchmarks, indicating that the tasks in-
side BigCodeBench require more complex implementation logic. To illustrate the programming
complexity, we measure cyclomatic complexity, which is the number of independent paths through the task solution.
Overall Statistics

Benchmark                # Task   Test (Avg.)      Prompt (Avg.)             Solution (Avg.)
                                  #      Cov.      Char. (Code)       Line   Char.   Line   C.C.
HumanEval                164      7.8    98%       450.6 (450.6)      13.7   180.9   6.8    3.6
DS-1000                  1,000    1.6    98%       871.8 (193.9)      29.1   138.1   5.1    1.6
DS-1000 (Orig.)          452      1.5    98%       831.4 (201.2)      26.2   115.5   4.2    1.4
ODEX                     945      1.8    96%       87.5 (87.5)        1.0    50.4    1.9    1.4
BigCodeBench-Complete    1,140    5.6    99%       1112.5 (1112.5)    33.5   426.0   10.0   3.1
BigCodeBench-Instruct    1,140    5.6    99%       663.2 (124.0)      11.7   426.0   10.0   3.1

Tool Statistics

Benchmark                            # Dom.   # Lib. (Std. / Ext.)   # Call (Std. / Ext.)   Tasks (Avg.)        Combination
                                                                                            # Lib.   # Call     # Lib.   # Calls   # Dom.
HumanEval (Chen et al., 2021)        3        4 / 0                  7 / 0                  0.1      0.1        6        8         5
DS-1000 (Lai et al., 2023)           4        5 / 9                  7 / 321                0.8      1.1        66       331       24
DS-1000 (Orig.) (Lai et al., 2023)   4        4 / 9                  5 / 289                0.9      1.3        59       260       23
ODEX (Wang et al., 2023c)            7        40 / 26                128 / 102              0.6      0.5        105      202       20
BigCodeBench                         7        77 / 62                281 / 442              2.8      4.7        577      1045      56
We notice that BigCodeBench has a similar complexity to HumanEval, much
higher than DS-1000 and ODEX. The high cyclomatic complexity indicates that solving the tasks in
BigCodeBench requires non-trivial reasoning ability from the programming perspective.
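As a rough illustration of how this statistic can be computed (the paper does not state which tool was used, so the choice of the third-party radon package here is an assumption), one can measure the cyclomatic complexity of a ground-truth solution as follows:

```python
# pip install radon
from radon.complexity import cc_visit

solution = '''
def task_func(values):
    total = 0
    for v in values:
        if v > 0:
            total += v
    return total
'''

# cc_visit parses the source and returns one block per function/method with its complexity.
for block in cc_visit(solution):
    print(block.name, block.complexity)  # task_func 3 (base path + for + if)
```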
4 EVALUATION
Our evaluation uses the unbiased version of Pass@K (Chen et al., 2021) to accurately assess the
functional correctness of generated code snippets by LLMs. To make general observations, we
extensively evaluate 60 state-of-the-art LLMs on BigCodeBench-Complete and 35 instruction-
tuned LLMs on BigCodeBench-Instruct. Specifically, following prior works (Roziere et al.,
2023; Liu et al., 2024; Lai et al., 2023), we report Pass@1 with greedy decoding for the main
7
BigCode Technical Report
experiments in the zero-shot setting. To investigate more thoroughly, we compute Pass@1 and
Pass@5 results with random sampling, generating N (N=5) samples with a temperature of 0.8 and top-p of 0.95, in Appendix L. While it is encouraged to generate many more (N ≥ K) samples to avoid bias, we take the lower bound due to limited computational resources. We use the same prompts for code generation as Liu et al. (2024), given in Appendix K.
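For reference, the unbiased Pass@k estimator of Chen et al. (2021) can be computed as below; this is the standard numerically stable formulation rather than code taken from our evaluation harness.

```python
import numpy as np


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k: n generated samples, c of which pass all test cases."""
    if n - c < k:
        return 1.0
    # 1 - C(n - c, k) / C(n, k), expanded as a product to avoid large binomial coefficients.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))


print(pass_at_k(n=5, c=2, k=1))  # 0.4: the chance a single random sample passes
```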
We first evaluate the task-solving performance of each LLM and summarize the findings as follows.
We show the main results in Figure 6. As models constantly omit essential code in their generations and hence fail the tests, we calibrate the generation quality by adding the missing setup. The Pearson's
r correlation between the model ranks on BigCodeBench-Complete and BigCodeBench-
Instruct is 0.982, indicating a strong alignment. In addition, model rankings show the signs of
scaling laws (Kaplan et al., 2020), where bigger models can solve more tasks. We also observe that
there is still some performance gap between the top closed models and open ones. Detailed studies
can be found in Appendix M. We highlight a few findings as follows:
[Figure 6: Calibrated Pass@1 on BigCodeBench-Complete (top) and BigCodeBench-Instruct (bottom), covering closed and open models grouped by size (> 70B, ~34B, ~14B, ~7B, < 3B, and undisclosed), with original and calibrated scores per model.]
Instruction-tuned LLMs omit essential details of long code prompts Interestingly, we ob-
serve that instruction-tuned LLMs can omit the essential import statements of the given prompts
in BigCodeBench-Complete, which can lead to task failure due to the lack of proper module
and constant definitions. The omission is likely to happen when models need to repeat the long
context in the response. Such behaviors are denoted as “model laziness” in long-context inter-
actions, similar to the observations in Section 2.2. Due to the limited prompt length of existing
programming benchmarks (Table 1), there is no quantitative evidence of the laziness phenomenon
in prior code generation benchmarks. To understand how laziness can affect the model perfor-
mance, we calibrate the generation quality by adding the missing setup (e.g., import statements and
global constants). When comparing the calibrated Pass@1 and the original ones in the top figure
of Figure 6 and Appendix L, we find that GPT-4 tends to omit much more context and perform
poorly on BigCodeBench-Complete, consistent with the previous community discussion4 and
confirmed by OpenAI (OpenAI, 2024b). While instruction-tuned LLMs have an average performance degradation of 0.8% on BigCodeBench-Complete, there is a less than 0.3% difference on BigCodeBench-Instruct, which validates the hypothesis that models omit more information from longer inputs.
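A minimal sketch of this calibration is given below, assuming each task record stores its import statements and global constants separately from the docstring; the field names and helper are hypothetical.

```python
def calibrate_generation(generation: str, setup_lines: list[str]) -> str:
    """Prepend import statements and global constants that the model omitted."""
    missing = [line for line in setup_lines if line.strip() and line not in generation]
    return "\n".join(missing + [generation])


task_setup = ["import ssl", "import socket", "BUFFER_SIZE = 4096"]
raw_generation = "def task_func(server_name, server_port):\n    ...\n"
print(calibrate_generation(raw_generation, task_setup))
```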
LLMs are sensitive to the verbosity of programming instructions From the bottom figure
of Figure 6, we notice that LLMs perform much worse on BigCodeBench-Instruct than
BigCodeBench-Complete with an average decrease of 8.5% on Pass@1, while maintaining
similar rankings. This observation indicates that LLMs still lack the proper understanding of
condensed human requirements since the task instructions of BigCodeBench-Instruct are
less verbose. While it is possible that the lower verbosity may introduce more ambiguity, we
make sure that the instructions transformed from BigCodeBench-Complete do not lose the
key information from the human perspective. In addition, we find that models with lower scores
on BigCodeBench-Complete degrade less on BigCodeBench-Instruct, as they are already limited by their programming capability.
Table 2: Tool-use comparisons between all generated solutions (Sol.) and ground truths (GT.) on BigCodeBench-Complete.

Correlations with other benchmarks:
                 BigCodeBench-Complete    BigCodeBench-Instruct
                 r        p               r        p
HumanEval+       0.849    0.861           0.864    0.894
LiveCodeBench    0.853    0.898           0.778    0.815
6 RELATED WORK
Large Language Models for Code With the rise of LLMs, there have been various models trained
on code. Codex (Chen et al., 2021) marks the first base LLM pre-trained on code, which was used
as the backbone model for GitHub Copilot. More recently, pre-trained base code models (Nijkamp
et al., 2022; Li et al., 2023; Lozhkov et al., 2024; Roziere et al., 2023) have been built to perform
accurate code completion. Later, with the advance in instruction tuning (Ouyang et al., 2022),
LLMs can generate the code snippet that aligns with the given NL instruction. Instruction-tuned
code models (Muennighoff et al., 2023; Luo et al., 2023; Wei et al., 2023) are generally better at programming than their base counterparts.
Several benchmarks have been built to challenge code LLMs on application-specific scenarios by using specific sets of tools
and libraries. However, they focus on simple intents and use limited specific function calls per
programming task, making their evaluation less challenging and realistic. Benchmarks like SWE-
bench (Jimenez et al., 2023) are built to evaluate the performance of a code agent framework
(e.g., iterated prompting, real-time environment interaction, and long-context exploration). Our
BigCodeBench focuses on evaluating the fundamental code generation capability of LLMs, which
is also an essential path toward strong code agents. In addition, SWE-bench is constructed from GitHub repositories with existing test cases, which limits the diversity of the tasks considered. In contrast, our collaborative LLM-human annotation procedure allows generating tasks from seed queries collected from ODEX (Wang et al., 2023c), tackling a broader range of software tasks.
7 CONCLUSION
We introduce BigCodeBench, a new high-quality programming benchmark constructed via the
collaboration between human experts and LLMs, assessing the capability of tool use and complex
instruction following. Through the extensive evaluation of 60 LLMs, we find that models still have a long way to go to achieve perfection on this benchmark, and we share a few findings that can potentially improve their performance. We urge the community to work on more advanced LLMs for code and to continually build upon our benchmark and the extended BigCodeBench-Hard (Appendix F), as discussed in our long-term roadmap (Appendix G).
ACKNOWLEDGEMENTS
We truly appreciate the BigCode community for making many great works happen, including The Stack (Lozhkov
et al., 2024), SantaCoder (Allal et al., 2023), StarCoder (Li et al., 2023), OctoPack (Muennighoff
et al., 2023), Astraios (Zhuo et al., 2024), StarCoder2 (Lozhkov et al., 2024), and StarCoder2-
Instruct (BigCode, 2024). BigCodeBench cannot be built without the support of the BigCode
community. We thank Sea AI Lab and MASSIVE for providing the computational resources. The
project is also partially supported by Terry Yue Zhuo’s CSIRO’s Data61 PhD Scholarships and
Xiaoning Du’s Google Research Scholar Program Award. For the benchmark, we particularly thank
Xiangru Tang, Dmitry Abulkhanov, Noah Ziems, Chengran Yang, Jiamou Sun, Nickil Maveli, and
Lili Bo for their eagerness to participate. We are extremely grateful to the EvalPlus team for their
open-source evaluation framework. We also thank Loubna Ben Allal, Zhiruo Wang, Zhensu Sun,
Sean Hughes, and Zhou Yang for their valuable comments.
REFERENCES
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report.
arXiv preprint arXiv:2303.08774, 2023.
Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Unified pre-training for pro-
gram understanding and generation. In Proceedings of the 2021 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies, pp.
2655–2668, 2021.
Mistral AI. Mistral. https://mistral.ai/news/mistral-large/, 2024a.
Mistral AI. Mixtral-8x22b-v0.1. https://huggingface.co/mistralai/
Mixtral-8x22B-v0.1, 2024b.
Mistral AI. Mixtral-8x22b-instruct-v0.1. https://huggingface.co/mistralai/
Mixtral-8x22B-Instruct-v0.1, 2024c.
AI@Meta. Llama 3 model card. 2024. URL https://github.com/meta-llama/llama3/
blob/main/MODEL_CARD.md.
Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz
Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. Santacoder: don’t
reach for the stars! In Deep Learning for Code (DL4C) Workshop, 2023.
Saswat Anand, Edmund K Burke, Tsong Yueh Chen, John Clark, Myra B Cohen, Wolfgang
Grieskamp, Mark Harman, Mary Jean Harrold, Phil McMinn, Antonia Bertolino, et al. An
orchestrated survey of methodologies for automated software test case generation. Journal of
systems and software, 86(8):1978–2001, 2013.
AI Anthropic. The claude 3 model family: Opus, sonnet, haiku. Claude-3 Model Card, 2024.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan,
Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language
models. arXiv preprint arXiv:2108.07732, 2021.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu,
Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan,
Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin
Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng
Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou,
Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. arXiv preprint arXiv:2309.16609,
2023.
Emily M Bender and Batya Friedman. Data statements for natural language processing: Toward miti-
gating system bias and enabling better science. Transactions of the Association for Computational
Linguistics, 6:587–604, 2018.
BigCode. starcoder2-15b-instruct-v0.1. https://huggingface.co/bigcode/
starcoder2-15b-instruct-v0.1, 2024.
Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald
Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, et al. Multipl-e:
a scalable and polyglot approach to benchmarking neural code generation. IEEE Transactions on
Software Engineering, 49(7):3675–3691, 2023.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting:
Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine
Learning Research.
Xinyun Chen, Maxwell Lin, Nathanael Schaerli, and Denny Zhou. Teaching large language models
to self-debug. In The 61st Annual Meeting Of The Association For Computational Linguistics,
2023.
Colin Clement, Dawn Drain, Jonathan Timcheck, Alexey Svyatkovskiy, and Neel Sundaresan. Pymt5:
multi-mode translation of natural language and python code with transformers. In Proceedings
of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp.
9052–9065, 2020.
DeepSeek-AI. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model,
2024.
Kaustubh Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood,
Abinaya Mahadiran, Simon Mille, Ashish Shrivastava, Samson Tan, et al. Nl-augmenter: A
framework for task-sensitive natural language augmentation. Northern European Journal of
Language Technology, 9(1), 2023.
Yangruibo Ding, Zijian Wang, Wasi Ahmad, Hantian Ding, Ming Tan, Nihal Jain, Murali Krishna
Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, et al. Crosscodeeval: A diverse
and multilingual benchmark for cross-file code completion. Advances in Neural Information
Processing Systems, 36, 2024.
Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng,
Chaofeng Sha, Xin Peng, and Yiling Lou. Classeval: A manually-crafted benchmark for evaluating
llms on class-level code generation. arXiv preprint arXiv:2308.01861, 2023.
Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr,
Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer Singh, et al. What’s in my big data? arXiv
preprint arXiv:2310.20707, 2023.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou,
Bing Qin, Ting Liu, Daxin Jiang, et al. Codebert: A pre-trained model for programming and
natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020,
pp. 1536–1547, 2020.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. In International Conference on Machine
Learning, pp. 10764–10799. PMLR, 2023.
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach,
Hal Daumé Iii, and Kate Crawford. Datasheets for datasets. Communications of the ACM, 64(12):
86–92, 2021.
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, LIU Shujie, Long Zhou, Nan Duan,
Alexey Svyatkovskiy, Shengyu Fu, et al. Graphcodebert: Pre-training code representations with
data flow. In International Conference on Learning Representations.
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao
Bi, Y Wu, YK Li, et al. Deepseek-coder: When the large language model meets programming–the
rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin
Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence
with apps. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and
Benchmarks Track (Round 2), 2021.
Xing Hu, Ge Li, Xin Xia, David Lo, Shuai Lu, and Zhi Jin. Summarizing source code with transferred
api knowledge. In Proceedings of the 27th International Joint Conference on Artificial Intelligence,
pp. 2269–2275, 2018.
Edward Hughes, Michael Dennis, Jack Parker-Holder, Feryal Behbahani, Aditi Mavalankar, Yuge
Shi, Tom Schaul, and Tim Rocktaschel. Open-endedness is essential for artificial superhuman
intelligence. arXiv preprint arXiv:2406.04268, 2024.
Naman Jain, Manish Shetty, Tianjun Zhang, King Han, Koushik Sen, and Ion Stoica. R2e: Turning
any github repository into a programming agent environment. In ICML 2024.
Naman Jain, Tianjun Zhang, Wei-Lin Chiang, Joseph E Gonzalez, Koushik Sen, and Ion Stoica.
Llm-assisted code cleaning for training accurate code generators. arXiv preprint arXiv:2311.14904,
2023.
Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando
Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free
evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.
Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R
Narasimhan. Swe-bench: Can language models resolve real-world github issues? In The Twelfth
International Conference on Learning Representations, 2023.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott
Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models.
arXiv preprint arXiv:2001.08361, 2020.
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. In Proceedings of naacL-HLT, volume 1,
pp. 2. Minneapolis, Minnesota, 2019.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in neural information processing systems, 35:
22199–22213, 2022.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph
Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model
serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems
Principles, pp. 611–626, 2023.
Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-tau Yih,
Daniel Fried, Sida Wang, and Tao Yu. Ds-1000: A natural and reliable benchmark for data science
code generation. In International Conference on Machine Learning, pp. 18319–18345. PMLR,
2023.
Maxime Lamothe, Yann-Gaël Guéhéneuc, and Weiyi Shang. A systematic review of api evolution
literature. ACM Computing Surveys (CSUR), 54(8):1–36, 2021.
Raymond Li, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone,
Christopher Akiki, LI Jia, Jenny Chim, Qian Liu, et al. Starcoder: may the source be with you!
Transactions on Machine Learning Research, 2023.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom
Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation
with alphacode. Science, 378(6624):1092–1097, 2022.
Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by
chatgpt really correct? rigorous evaluation of large language models for code generation. Advances
in Neural Information Processing Systems, 36, 2024.
Tianyang Liu, Canwen Xu, and Julian McAuley. Repobench: Benchmarking repository-level code
auto-completion systems. In The Twelfth International Conference on Learning Representations.
Renze Lou, Kai Zhang, and Wenpeng Yin. A comprehensive survey on instruction following. arXiv
preprint arXiv:2303.10475, 2023.
Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane
Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, et al. Starcoder 2 and the stack v2: The
next generation. arXiv preprint arXiv:2402.19173, 2024.
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin
Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. Codexglue: A machine learning benchmark
dataset for code understanding and generation. In Thirty-fifth Conference on Neural Information
Processing Systems Datasets and Benchmarks Track (Round 1).
Qingzhou Luo, Farah Hariri, Lamyaa Eloussi, and Darko Marinov. An empirical analysis of flaky
tests. In Proceedings of the 22nd ACM SIGSOFT international symposium on foundations of
software engineering, pp. 643–653, 2014.
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing
Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with
evol-instruct. In The Twelfth International Conference on Learning Representations, 2023.
Mayank Mishra, Matt Stallone, Gaoyuan Zhang, Yikang Shen, Aditya Prasad, Adriana Meza So-
ria, Michele Merler, Parameswaran Selvam, Saptha Surendran, Shivdeep Singh, et al. Gran-
ite code models: A family of open foundation models for code intelligence. arXiv preprint
arXiv:2405.04324, 2024.
Niklas Muennighoff, Qian Liu, Armel Randy Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue
Zhuo, Swayam Singh, Xiangru Tang, Leandro Von Werra, and Shayne Longpre. Octopack:
Instruction tuning code large language models. In The Twelfth International Conference on
Learning Representations, 2023.
Jinjie Ni, Fuzhao Xue, Xiang Yue, Yuntian Deng, Mahir Shah, Kabir Jain, Graham Neubig, and
Yang You. Mixeval: Deriving wisdom of the crowd from llm benchmark mixtures. arXiv preprint
arXiv:2406.06565, 2024.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese,
and Caiming Xiong. Codegen: An open large language model for code with multi-turn program
synthesis. In The Eleventh International Conference on Learning Representations, 2022.
OpenAI. Gpt-3.5-turbo. https://openai.com/index/
introducing-chatgpt-and-whisper-apis/, 2023.
OpenAI. Gpt-4o. https://openai.com/index/hello-gpt-4o/, 2024a.
OpenAI. Gpt-4-turbo. https://openai.com/index/
new-embedding-models-and-api-updates/, 2024b.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow
instructions with human feedback. Advances in neural information processing systems, 35:27730–
27744, 2022.
Frauke Paetsch, Armin Eberlein, and Frank Maurer. Requirements engineering and agile software
development. In WET ICE 2003. Proceedings. Twelfth IEEE International Workshops on Enabling
Technologies: Infrastructure for Collaborative Enterprises, 2003., pp. 308–313. IEEE, 2003.
Arjun Panickssery, Samuel R Bowman, and Shi Feng. Llm evaluators recognize and favor their own
generations. arXiv preprint arXiv:2404.13076, 2024.
Helmut A Partsch. Specification and transformation of programs: a formal approach to software
development. Springer Science & Business Media, 2012.
Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model
connected with massive apis. arXiv preprint arXiv:2305.15334, 2023.
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei
Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint
arXiv:2304.08354, 2023.
Alec Radford. Improving language understanding by generative pre-training. 2018.
N Reimers. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint
arXiv:1908.10084, 2019.
Martin P Robillard and Robert DeLine. A field study of api learning obstacles. Empirical Software
Engineering, 16:703–732, 2011.
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code.
arXiv preprint arXiv:2308.12950, 2023.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Yu Wu,
and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language
models. arXiv preprint arXiv:2402.03300, 2024.
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion:
Language agents with verbal reinforcement learning. Advances in Neural Information Processing
Systems, 36, 2024.
Mohammed Latif Siddiq, Simantika Dristi, Joy Saha, and Joanna Santos. Quality assessment of
prompts used in code generation. arXiv preprint arXiv:2404.10155, 2024.
Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak,
Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models
based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
Junjie Wang, Yuchao Huang, Chunyang Chen, Zhe Liu, Song Wang, and Qing Wang. Software
testing with large language models: Survey, landscape, and vision. IEEE Transactions on Software
Engineering, 2024a.
Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar,
Samson Tan, Baishakhi Ray, Parminder Bhatia, et al. Recode: Robustness evaluation of code
generation models. In The 61st Annual Meeting Of The Association For Computational Linguistics,
2023a.
Shuai Wang, Liang Ding, Li Shen, Yong Luo, Bo Du, and Dacheng Tao. Oop: Object-oriented
programming evaluation benchmark for large language models. arXiv preprint arXiv:2401.06628,
2024b.
Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji.
Executable code actions elicit better llm agents. In Forty-first International Conference on Machine
Learning.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. Codet5: Identifier-aware unified pre-
trained encoder-decoder models for code understanding and generation. In Proceedings of the
2021 Conference on Empirical Methods in Natural Language Processing, pp. 8696–8708, 2021.
Yue Wang, Hung Le, Akhilesh Gotmare, Nghi Bui, Junnan Li, and Steven Hoi. Codet5+: Open
code large language models for code understanding and generation. In Proceedings of the 2023
Conference on Empirical Methods in Natural Language Processing, pp. 1069–1088, 2023b.
Zhiruo Wang, Shuyan Zhou, Daniel Fried, and Graham Neubig. Execution-based evaluation for
open-domain code generation. In Findings of the Association for Computational Linguistics:
EMNLP 2023, pp. 1271–1290, 2023c.
Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. Magicoder: Source code is
all you need. arXiv preprint arXiv:2312.02120, 2023.
Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. Magicoder: Empowering
code generation with oss-instruct. In Forty-first International Conference on Machine Learning,
2024.
Karl E Wiegers and Joy Beatty. Software requirements. Pearson Education, 2013.
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe
Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents:
A survey. arXiv preprint arXiv:2309.07864, 2023.
Chunqiu Steven Xia and Lingming Zhang. Keep the conversation going: Fixing 162 out of 337 bugs
for $0.42 each using chatgpt. arXiv preprint arXiv:2304.00385, 2023.
Fanjia Yan, Huanzhi Mao, Charlie Cheng-Jie Ji, Tianjun Zhang, Shishir G. Patil, Ion Stoica,
and Joseph E. Gonzalez. Berkeley function calling leaderboard. https://gorilla.cs.
berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html,
2024.
Weixiang Yan, Haitian Liu, Yunkun Wang, Yunzhe Li, Qian Chen, Wen Wang, Tingyu Lin, Weishan
Zhao, Li Zhu, Shuiguang Deng, et al. Codescope: An execution-based multilingual multitask
multidimensional benchmark for evaluating llms on code understanding and generation. arXiv
preprint arXiv:2311.08588, 2023.
John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan,
and Ofir Press. Swe-agent: Agent-computer interfaces enable automated software engineering.
Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng
Zhu, Jianqun Chen, Jing Chang, et al. Yi: Open foundation models by 01. ai. arXiv preprint
arXiv:2403.04652, 2024.
Hao Yu, Bo Shen, Dezhi Ran, Jiaxin Zhang, Qi Zhang, Yuchi Ma, Guangtai Liang, Ying Li, Qianxiang
Wang, and Tao Xie. Codereval: A benchmark of pragmatic code generation with generative pre-
trained models. In Proceedings of the 46th IEEE/ACM International Conference on Software
Engineering, pp. 1–12, 2024.
Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Wang Yongji, and Jian-Guang Lou. When language
model meets private library. In Findings of the Association for Computational Linguistics: EMNLP
2022, pp. 277–288, 2022a.
Daoguang Zan, Bei Chen, Dejian Yang, Zeqi Lin, Minsu Kim, Bei Guan, Yongji Wang, Weizhu Chen,
and Jian-Guang Lou. Cert: continual pre-training on sketches for library-oriented code generation.
arXiv preprint arXiv:2206.06888, 2022b.
Daoguang Zan, Bei Chen, Fengji Zhang, Dianjie Lu, Bingchao Wu, Bei Guan, Wang Yongji, and
Jian-Guang Lou. Large language models meet nl2code: A survey. In Proceedings of the 61st
Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp.
7443–7464, 2023.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. Advances in Neural Information Processing Systems, 36, 2024a.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi
Wang, Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual bench-
marking on humaneval-x. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining, pp. 5673–5684, 2023.
Tianyu Zheng, Ge Zhang, Tianhao Shen, Xueling Liu, Bill Yuchen Lin, Jie Fu, Wenhu Chen, and
Xiang Yue. Opencodeinterpreter: Integrating code generation with execution and refinement. arXiv
preprint arXiv:2402.14658, 2024b.
Xin Zhout, Kisub Kim, Bowen Xu, Jiakun Liu, DongGyun Han, and David Lo. The devil is in the
tails: How long-tailed code distributions impact large language models. In 2023 38th IEEE/ACM
International Conference on Automated Software Engineering (ASE), pp. 40–52. IEEE, 2023.
Qihao Zhu, Qingyuan Liang, Zeyu Sun, Yingfei Xiong, Lu Zhang, and Shengyu Cheng. Grammart5:
Grammar-integrated pretrained encoder-decoder neural model for code. In Proceedings of the
IEEE/ACM 46th International Conference on Software Engineering, pp. 1–13, 2024.
Terry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian
Liu, and Niklas Muennighoff. Astraios: Parameter-efficient instruction tuning code large language
models. arXiv preprint arXiv:2401.00788, 2024.
APPENDIX
Contents
A Contributions
B Datacard
C Data Sheet
   C.1 Motivation
   C.2 Composition/Collection Process/Preprocessing/Cleaning/Labeling and Use
   C.3 Distribution
   C.4 Maintenance
D Data Contamination
F BigCodeBench-Hard
H Artifacts
I Tool Statistics
   I.1 Analysis
   I.2 Comparison to Existing Programming Benchmarks
   I.3 Version Control
   I.4 Domain Classification
K Evaluation Setup
   K.1 Inference
   K.2 Execution
   K.3 Prompt Template
M Qualitative Studies
P Development Timeline
A CONTRIBUTIONS
PROJECT LEADERSHIP
BENCHMARK CONSTRUCTION
Chien Vu, Jenny Chim, Han Hu, Haolan Zhan, Xiaoheng Hong, Wenhao Yu, Niklas Muennighoff,
Jean Kaddor, Wen-Ding Li, Junda He, Ming Xu, Zhihan Zhang, Ratnadira Widyasari, Indraneil Paul,
Simon Brunner, Imam Nur Bani Yusuf, Thong Hoang, Chen Gong, Armel Zebaze, Prateek Yadav,
Terry Yue Zhuo
EXPERIMENT
EVALUATION FRAMEWORK
ANALYSIS
Terry Yue Zhuo, Binyuan Hui, Zhoujun Cheng, Alex Gu, Naman Jain
PAPER WRITING
PRESENTATION EDITING
Niklas Muennighoff, Indraneil Paul, David Lo, Zijian Wang, Daniel Fried, Binyuan Hui, Qian Liu,
Jean Kaddor, Jiawei Liu, Imam Nur Bani Yusuf, Chen Gong
Daniel Fried, Niklas Muennighoff, Qian Liu, Zijian Wang, Binyuan Hui, Xiaoning Du, David Lo,
Jiawei Liu, Harm de Vries, Leandro von Werra
B DATACARD
We follow Bender & Friedman (2018) to create the datacard for BigCodeBench, where we aim to summarize and centralize all information that might be relevant for the benchmark analysis.
Language Variety Information about our annotators’ nationality will not be provided, as the
constructed benchmark is hardly related to regional or social dialects. However, we confirm that all
communications during the annotation process are in mainstream English (en-US). We note that the
first language of some annotators is not English, which can introduce some inaccurate expressions to
the task prompts in BigCodeBench.
Curators Demographic The benchmark construction requires great annotation effort from the curators, who are involved in the process detailed in Section 2. They come from the following population:
• Age:
– 36-45: 5% (1/20)
– 1-3: 5% (1/20)
• Academic Background:
– Bachelor: 5% (1/20)
C DATA SHEET
Besides the provided Datacard, we follow the documentation framework provided by Gebru et al. (2021).
C.1 MOTIVATION
Our dataset aims to provide a thorough assessment of the capability to solve programming tasks. Particularly, we focus on the challenges and practicality of the tasks, and pinpoint two
main characteristics that few benchmarks highlight: (1) Diverse Function Calling; and (2) Complex
Instruction Following. This dataset will help stakeholders better understand the fundamental abilities
and limitations associated with deploying LLMs.
We believe that there are three main expectations of a good execution-based programming benchmark:
• The benchmark should be easy to use and efficient in evaluating the fundamental capabilities
of LLMs. Repository-level benchmarks (e.g., SWE-bench (Yang et al.)) are not suitable for
this purpose.
• The benchmark should be practical, covering various programming scenarios. Algorithm-
specific benchmarks (e.g., HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021))
are unsuitable. Domain-specific benchmarks (e.g., DS-1000 (Lai et al., 2023)) are also
unsuitable for this purpose.
• The benchmark should be challenging, where the tasks require LLMs’ strong compositional
reasoning capabilities and instruction-following capabilities. The benchmarks with simple
tasks (e.g., ODEX (Wang et al., 2023c)) are unsuitable.
BigCodeBench is the first benchmark that meets all three expectations. It is an easy-to-use
benchmark that evaluates LLMs with practical and challenging programming tasks, accompanied by
an end-to-end evaluation framework BigCodeBench. We aim to assess how well LLMs can solve
programming tasks in an open-ended setting.
C.2 COMPOSITION/COLLECTION PROCESS/PREPROCESSING/CLEANING/LABELING AND USE
The answers are described in our paper as well as the GitHub repository: https://github.com/bigcode-project/bigcodebench-annotation.
C.3 DISTRIBUTION
C.3.1 WILL THE DATASET BE DISTRIBUTED TO THIRD PARTIES OUTSIDE OF THE ENTITY (E.G., COMPANY, INSTITUTION, ORGANIZATION) ON BEHALF OF WHICH THE DATASET WAS CREATED?
No. Our dataset will be managed and maintained by the BigCode community (https://www.
bigcode-project.org/).
C.4 MAINTENANCE
Please contact Terry Yue Zhuo ([email protected]) and the BigCode Project
([email protected]), who are responsible for maintenance.
C.4.2 WILL THE DATASET BE UPDATED (E.G., TO CORRECT LABELING ERRORS, ADD NEW INSTANCES, DELETE INSTANCES)?
Yes. If we include more tasks or find any errors, we will correct the dataset hosted on Hugging Face
and GitHub and update the results in the leaderboard accordingly. It will be updated on our website.
For dataset contributions and evaluation modifications, the most efficient way to reach us is via GitHub
pull requests. For more questions, contact Terry Yue Zhuo ([email protected]) and the
BigCode Project ([email protected]), who are responsible for maintenance.
D DATA CONTAMINATION
While BigCodeBench tasks are constructed from scratch, there are still some concerns regarding potential data contamination. Therefore, we conduct N-gram contamination experiments on ODEX intents (English) and an anonymized archive of Stack Overflow used by StarCoder2 (Lozhkov et al., 2024), which may be correlated with BigCodeBench instructions. We also evaluate StarCoderData (Python) (Li et al., 2023), which has been widely used as code training data for various LLMs. We focus on the overlaps between the BigCodeBench instructions and the queries contained in these data sources, using 10-gram and 13-gram setups (Brown, 2020; Elazar et al., 2023; Shao et al., 2024; Guo et al., 2024; Bai et al., 2023) to indicate potential data contamination. Due to the significant computational resources required, the 10-gram overlap on StarCoderData timed out and is thus omitted.
As shown in Table 4, the likelihood of our task descriptions being contaminated by existing data is extremely low. With the stricter 10-gram configuration, no more than 2.5% of BigCodeBench tasks overlap with the tested data sources.
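To make the setup concrete, the sketch below shows a simplified version of the word-level n-gram overlap check described above; the tokenization, corpus handling, and parallelization of the actual experiments may differ.

```python
def ngrams(text, n):
    # Word-level, lower-cased n-grams, mirroring the 10-/13-gram setups above.
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlaps(task_prompt, corpus_docs, n=10):
    # A task counts as potentially contaminated if any of its n-grams
    # also appears in any document from the reference corpus.
    task_grams = ngrams(task_prompt, n)
    return any(task_grams & ngrams(doc, n) for doc in corpus_docs)

# Example: the fraction of flagged tasks corresponds to the percentages reported in Table 4.
tasks = ["Read a CSV file and plot the histogram of one column."]
corpus = ["How do I plot a histogram of a pandas column read from a CSV file?"]
contamination_rate = sum(overlaps(t, corpus) for t in tasks) / len(tasks)
```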
et al., 2024b), their tasks are limited in terms of quantity, complexity, and diversity. Later, benchmarks
like RepoBench (Liu et al.), CrossCodeEval (Ding et al., 2024), CoderEval (Yu et al., 2024), and
SWE-bench (Yang et al.) are designed to evaluate the performance of a code agent framework, which
includes iterated prompting, real-time environment interaction, and long-context exploration. Our
benchmark focuses on evaluating the fundamental code generation capability of LLMs, which is also
an essential path toward strong code agents. Additionally, SWE-bench is constructed from GitHub
repositories with existing test cases, which limits the diversity of the tasks considered. In contrast,
our collaborative LLM-human annotator procedure allows for generating tasks driven by real-world
software engineering requirements, similar to the queries from StackOverflow, tackling a broader
range of software tasks. Furthermore, our benchmark emphasizes some important aspects that have
not been well discussed in the programming domain, like open-endedness (Hughes et al., 2024),
multi-tool use (Qin et al., 2023), and instruction-following (Lou et al., 2023).
F BIGCODEBENCH-HARD
Running the full set of BigCodeBench can be burdensome for common users, especially when evaluating a large model on both BigCodeBench-Complete and BigCodeBench-Instruct. To reduce evaluation costs, we release BigCodeBench-Hard, a minimal high-quality subset that serves as a proxy for the full set.
As illustrated in Figure 8, the workflow to construct BigCodeBench-Hard is mainly inspired by MixEval (Ni et al., 2024), which utilizes a small number of benchmark samples to align with user-facing evaluation. While MixEval focuses on general-domain evaluation and considers only code generation tasks with minimal samples from MBPP and HumanEval, we extend the idea to make code generation evaluation more user-centric. Specifically, we follow these steps to create BigCodeBench-Hard:
Figure 8: Workflow for constructing BigCodeBench-Hard: Stack Overflow queries and BigCodeBench tasks are embedded with Sentence Transformers, candidate tasks are retrieved via the dot product of the embeddings, and the retrieved tasks are filtered by library usage (> 2), solution length (> average), and solve rate to form BigCodeBench-Hard.
First, we choose an anonymized archive of Stack Overflow that has been preprocessed by the BigCode community; details of the preprocessing can be found in the StarCoder2 report (Lozhkov et al., 2024). The archive contains 10.4 million questions and answers, covering diverse programming languages and topics, making it a good source of user queries.
To bridge the query source and BigCodeBench, we leverage all-mpnet-base-v2, a pre-
trained sentence embedding model recommended by the Sentence Transformers documenta-
tion (Reimers, 2019). This model, trained on a mixture of text and code data, is suitable for
identifying similarities between programming queries and BigCodeBench tasks.
We use the model to retrieve the most similar tasks for each query in the Stack Overflow archive,
ranking them by the dot product between normalized embeddings. Based on manual inspection of the
retrieved tasks, we conclude that a similarity score above 0.7 is a good threshold for task selection. By
applying this threshold, we obtain 6,895 queries and 626 BigCodeBench tasks after deduplication.
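A minimal sketch of this retrieval step is shown below, assuming the Sentence Transformers library; the query and task strings are illustrative placeholders rather than actual Stack Overflow or BigCodeBench data.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")

# Illustrative inputs: Stack Overflow-style queries and BigCodeBench task prompts.
queries = ["How can I extract all hyperlinks from a web page and save them to a CSV file?"]
tasks = ["Scrape a given URL, collect every hyperlink, and write the links to a CSV file."]

# With normalized embeddings, the dot product equals cosine similarity.
q_emb = model.encode(queries, normalize_embeddings=True)
t_emb = model.encode(tasks, normalize_embeddings=True)
scores = q_emb @ t_emb.T

# Keep (query, task) pairs whose similarity exceeds the 0.7 threshold.
retained = [(q, t) for q in range(len(queries)) for t in range(len(tasks)) if scores[q, t] > 0.7]
```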
We illustrate the alignment between the Stack Overflow queries (Figure 9) and the BigCodeBench
tasks (Figure 10). As shown in the figures, both the query and the task prompt revolve around web
scraping to extract hyperlinks from web pages, using Python libraries to handle HTTP requests and
parse HTML. Both involve interaction with CSV files to either read input URLs or write output
data. While the specific implementation details differ, the core objective of extracting and handling
hyperlink data from web pages is a shared aspect, aligning their overall scope closely.
However, the 626 retrieved tasks are still too many for practical evaluation. To improve evaluation efficiency, we further filter the tasks by difficulty. Unlike the construction of MixEval-Hard, we define the following, more explainable criteria: (1) Library Usage: each task in BigCodeBench emphasizes compositional reasoning for coding and requires the use of at least two libraries. For BigCodeBench-Hard, we keep only the tasks that require more than two libraries, challenging the models to choose more diverse function calls as tools to solve the tasks. (2) Solution Length: we set the threshold at 426 tokens, the average solution length of the tasks in BigCodeBench. The ground-truth solution provides a reference for task complexity, and tasks with longer solutions are more challenging to solve. (3) Solve Rate: we compute the solve rate per task based on all the evaluated models on the leaderboard, defined as the number of models that solve the task divided by the total number of models. We deem tasks with a solve rate below 50% as hard.
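The three criteria can be applied as a straightforward filter; the sketch below uses hypothetical per-task records, so the field names are illustrative and not the released schema.

```python
AVG_SOLUTION_TOKENS = 426  # average ground-truth solution length in BigCodeBench

# Hypothetical per-task statistics gathered from the benchmark and the leaderboard.
tasks = [
    {"task_id": "BigCodeBench/0", "num_libraries": 3, "solution_tokens": 510, "solve_rate": 0.31},
    {"task_id": "BigCodeBench/1", "num_libraries": 2, "solution_tokens": 380, "solve_rate": 0.78},
]

hard_subset = [
    t for t in tasks
    if t["num_libraries"] > 2                        # (1) more than two libraries
    and t["solution_tokens"] > AVG_SOLUTION_TOKENS   # (2) longer-than-average solution
    and t["solve_rate"] < 0.50                       # (3) solved by fewer than half of the models
]
```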
Through comparison, we notice that the model performance on BigCodeBench-Hard differs signif-
icantly from the one on the full set of BigCodeBench. We suggest that these differences arise from
the imbalanced distribution of target domains and a large number of easy tasks in BigCodeBench,
resulting in a slight misalignment between the evaluation and user-facing tasks. For example, GPT-
4o-2024-05-13 and GPT-4-0613 may be overfitting to the easy tasks in BigCodeBench, leading to
low performance on BigCodeBench-Hard.
To validate the effectiveness of BigCodeBench-Hard, we use a private leaderboard, SEAL-Coding
curated by Scale AI, as a reference. The SEAL-Coding leaderboard is designed to evaluate models on
a set of user-facing tasks across various application domains and programming languages. Specifically,
SEAL-Coding compares four of the best closed LLMs on Python, with the following rankings: (1)
GPT-4-Turbo Preview, (2) Claude 3.5 Sonnet, (3) GPT-4o, and (4) Gemini 1.5 Pro (May 2024).
These rankings align with our results based on the average score of the Complete and Instruct
splits of BigCodeBench-Hard, indicating that BigCodeBench-Hard is more user-centric and
challenging for model evaluation.
We encourage the community to use BigCodeBench-Hard when the budget is limited, and the
evaluation needs to be more user-centric. Additionally, we note that BigCodeBench-Hard can be
dynamic by design, depending on user queries and the evaluated models. We can periodically update
BigCodeBench-Hard to keep the evaluation challenging and user-centric.
For dataset contributions and evaluation modifications, the most efficient way to reach us is via GitHub
pull requests. For more questions, please contact Terry Yue Zhuo ([email protected])
and BigCode Project ([email protected]), who are responsible for mainte-
nance.
G.1 LIMITATIONS
Given the limited time and budget we had to develop the initial benchmark, we foresee several limitations and aim to address them step by step.
Multilingualism One of the main limitations is that BigCodeBench is Python-only and cannot be easily extended to other programming languages. As the function calls are mostly language-specific, it is hard to find packages or libraries in other languages with exactly the same functionalities. However, given that Python is the most flexible and popular programming language for automating various tasks, BigCodeBench may fulfill most of the community's needs. In the meantime, we are still seeking efficient approaches to construct BigCodeBench-like programming tasks with tools in other languages, without much human effort.
Saturation Another potential criticism is that some LLMs can still perform reasonably well on BigCodeBench, considering that the best models resolve no more than 30% of real-world GitHub issues on SWE-bench; one might argue that our benchmark is not challenging enough. However, we note that the low performance on SWE-bench is likely due to under-specified instructions and misaligned test cases5. Compared to SWE-bench, we make the programming tasks much less ambiguous and ensure that the authors can pass their own solutions during annotation.
Reliability During the execution-based evaluation, we noticed that some test cases are flaky (Luo et al., 2014), which results in uncertainty across multiple test runs without any code changes. We have progressively resolved identified issues, such as missing setup of random states and improper removal of non-existent files. However, the remaining cases are trickier; for example, a socket query can time out or be refused due to an unstable connection. That said, we keep the uncontrollable variation of Pass@1 under 0.6%. We plan to continually enhance the reliability of all test cases with the help of the community. To maintain high reproducibility, we host a real-time code execution sandbox in the Hugging Face space.
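One of the fixes mentioned above, seeding random states before every test, can be illustrated with the following minimal unittest sketch; it is not taken from the benchmark itself.

```python
import random
import unittest

import numpy as np

class TestCases(unittest.TestCase):
    def setUp(self):
        # Fix all sources of randomness before each test to avoid flaky outcomes.
        random.seed(42)
        np.random.seed(42)

    def test_reproducible_samples(self):
        first = [random.randint(1, 100) for _ in range(5)]
        random.seed(42)
        second = [random.randint(1, 100) for _ in range(5)]
        self.assertEqual(first, second)

if __name__ == "__main__":
    unittest.main()
```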
Rigorousness While we achieve high test coverage for the ground-truth solutions in BigCodeBench, this does not guarantee that any code generated by LLMs will be correctly assessed against the existing test cases. Previous works like EvalPlus (Liu et al., 2024) have attempted to extend limited test cases by augmenting the input-output pairs via LLM- and mutation-based strategies. However, it is challenging to adapt EvalPlus to the test harness in BigCodeBench, as the harness only examines the expected program behaviors during runtime (e.g., mocking tests). Furthermore, the function calls used by LLMs to pass test cases are more nondeterministic, making it difficult for traditional test generation methods (Anand et al., 2013) to cover all possible scenarios. Therefore, we still consider LLM-based test generation (Wang et al., 2024a) promising, but it requires proper designs. Specifically, a possible approach is to collect all the generated solutions that pass the current test cases and have a capable LLM (e.g., GPT-4o) harden the test harness with self-refinement in a sandbox environment.
5 https://github.com/princeton-nlp/SWE-bench/issues/72
Generalization One intuitive question is "How well do the models generalize to unseen tools and tasks?" Currently, BigCodeBench only covers common libraries and daily programming tasks. It would be interesting to benchmark models on programming tasks that use emerging libraries like transformers and langchain. Crafting new high-quality programming tasks requires huge effort, as demonstrated in this paper. There are two efficient approaches to building a programming benchmark for model generalization. First, we can instruction-tune an LLM on BigCodeBench data with additional information about libraries and function calls. The trained model is expected to generate programming tasks with proper test cases based on the given library or function call details. However, the quality of such data synthesis is unknown, making the practicality questionable. Another way is to replace the function calls and libraries in BigCodeBench with synthetic names, simulating unseen ones. A similar approach (Zan et al., 2022b) has been used for code generation on unknown libraries. Although this method may lose the naturalness of software development, the construction process is more controllable and practical.
Evolution Naturally, libraries can become obsolete or be updated (Lamothe et al., 2021), which means that the source code data for model training will constantly evolve. Thus, the models may not memorize function calls from a deprecated library version. This poses a challenge for any tool-dependent programming benchmark to correctly examine model capability without periodic updates. Another related concern is test set contamination due to the evolving training data. A recent blog post6 suggests maintaining both a public set and a private set for better evaluation. For future releases, we aim to perform the benchmark evolution both publicly and privately, and host the private test set internally.
Interaction Recent interest centers on the concept of LLMs as Agents (Xi et al., 2023), which is deemed a path towards artificial general intelligence. Specifically, LLMs will be grounded in a less constrained sandbox environment, where they can interact with given applications such as the web browser and terminal. The environment can help unlock capabilities like self-debugging (Chen et al., 2023) and self-reflection (Shinn et al., 2024). We intend to work in this direction and see how well LLMs as Agents can perform on BigCodeBench.
G.2 BIGCODEBENCH-OOD
G.3 BIGCODEBENCH-INTERACT
H ARTIFACTS
All artifacts created in our work are listed below.
LeaderBoard:
- GitHub: https://bigcode-bench.github.io/
- Hugging Face: https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard
Dataset (v0.1.0):
- GitHub: https://github.com/bigcode-project/bigcodebench-annotation/releases/tag/v0.1.0
- Hugging Face: https://huggingface.co/datasets/bigcode/bigcodebench
- Croissant: https://huggingface.co/api/datasets/bigcode/bigcodebench/croissant
Annotation Framework:
- GitHub: https://github.com/bigcode-project/bigcodebench-annotation
Evaluation Framework:
- GitHub: https://github.com/bigcode-project/bigcodebench
- PyPI: https://pypi.org/project/bigcodebench/
6 https://www.jasonwei.net/blog/evals
I TOOL STATISTICS
I.1 ANALYSIS
Figure 12: Library density comparisons (panels include DS-1000 (Orig.) and DS-1000). We sort the libraries by frequency count, showcasing the long-tail distribution to highlight the broad diversity within BigCodeBench.
Figure 13: Function call density comparisons. We sort function calls by frequency count, showcasing the long-tail distribution to highlight the broad diversity within BigCodeBench.
Table 6: Depth (complexity, measured in solution characters) and breadth (diversity, measured in function calls) comparisons to existing programming benchmarks in Python.
"calendar": "Time",
"cgi": "Network",
"chardet": "Network",
"cmath": "Computation",
"codecs": "Cryptography",
"collections": "General",
"cryptography": "Cryptography",
"csv": "System",
"ctypes": "System",
"datetime": "Time",
"dateutil": "Time",
"difflib": "General",
"django": "Network",
"docx": "System",
"email": "Network",
"faker": "General",
"flask": "Network",
"flask_login": "Network",
"flask_mail": "Network",
"flask_restful": "Network",
"fnmatch": "General",
"folium": "Visualization",
"functools": "General",
"geopy": "Network",
"getpass": "System",
"glob": "System",
"gzip": "System",
"hashlib": "Cryptography",
"heapq": "General",
"hmac": "Cryptography",
"html": "Network",
"http": "Network",
"importlib": "General",
"inspect": "General",
"io": "System",
"ipaddress": "Network",
"itertools": "General",
"json": "System",
"keras": "Computation",
"librosa": "Computation",
"logging": "System",
"lxml": "Network",
"math": "Computation",
"matplotlib": "Visualization",
"mechanize": "Network",
"mimetypes": "Network",
"multiprocessing": "System",
"nltk": "Computation",
"numpy": "Computation",
"openpyxl": "System",
"operator": "General",
"os": "System",
"pandas": "Computation",
"pathlib": "System",
"pickle": "System",
"pkgutil": "General",
"platform": "System",
"prettytable": "General",
"psutil": "System",
"pytesseract": "Computation",
"pytz": "Time",
"queue": "General",
"random": "General",
"re": "General",
"requests": "Network",
"rsa": "Cryptography",
"scipy": "Computation",
"seaborn": "Visualization",
"secrets": "Cryptography",
"select": "System",
"sendgrid": "Network",
"shutil": "System",
"sklearn": "Computation",
"smtplib": "Network",
"socket": "Network",
"soundfile": "Computation",
"sqlite3": "System",
"ssl": "Network",
"statistics": "Computation",
"statsmodels": "Computation",
"string": "General",
"struct": "System",
"subprocess": "System",
"sys": "System",
"tarfile": "System",
"tensorflow": "Computation",
"texttable": "General",
"textwrap": "General",
"threading": "System",
"time": "Time",
"turtle": "Visualization",
"types": "General",
"unicodedata": "General",
"urllib": "Network",
"uuid": "General",
"warnings": "General",
"werkzeug": "Network",
"wordninja": "Computation",
"wtforms": "Network",
"xlwt": "System",
"xml": "Network",
"xmltodict": "Network",
"yaml": "System",
"zipfile": "System",
"Levenshtein": "Computation",
"ast": "General",
"configparser": "System",
"cv2": "Computation",
"decimal": "General",
"enum": "General",
"errno": "System",
"flask_wtf": "Network",
"ftplib": "Network",
"gensim": "Computation",
"geopandas": "Computation",
"holidays": "Time",
"mpl_toolkits": "Visualization",
"natsort": "General",
"pyquery": "Network",
"python_http_client": "Network",
"regex": "General",
"shapely": "Computation",
"shlex": "System",
"signal": "System",
"skimage": "Computation",
"sympy": "Computation",
"textblob": "Computation",
"typing": "General",
"wikipedia": "Network",
"wordcloud": "Visualization",
"zlib": "System",
"aspose": "System",
"builtins": "General",
"locale": "System",
"imp": "System",
"docxtpl": "System",
"selenium": "Network",
"IPython": "Computation",
"filecmp": "System",
"multidict": "General",
"sqlalchemy": "System",
"obspy": "Computation",
"pprint": "General",
"xlrd": "System",
"argparse": "General",
"torch": "Computation",
"copy": "General"
}
Data Synthesis Prompt:
Based on the following simple example, write more complex scenarios and invoke multiple Python libraries to solve each problem.
The written intent should align with a more specific and practical scenario, but should still be easy to do functional correctness assertion.
For each scenario, write a single Python function with the rewritten intent.
Please include requirements and terminal-based input-output examples in the function docstring.
The function should contain complex logic like if-else statements and loops.
You have to use more than three Python libraries for a scenario. Write imports and variable definitions outside the function.
Try to avoid using web APIs if possible.
If there are any constants (e.g. strings and numeric values) used in the functions, you need to declare them before the function.
If data is used, you need to provide sample data in the comment.
Try to return values for correctness assertion.
Each programming scenario and intent should be separated by the special token `GPT_ODEX_BREAK`.
Generate two examples with two scenarios:
{"task_id": 4530069, "prompt": "def f_4530069():\n\treturn ", "suffix": "", "canonical_solution": "datetime.now(pytz.utc)", "test_start": "\nimport pytz\nimport time\nfrom datetime import datetime, timezone\n\ndef check(candidate):", "test": ["\n    assert (candidate() - datetime(1970, 1, 1).replace(tzinfo=timezone.utc)).total_seconds() - time.time() <= 1\n"], "entry_point": "f_4530069", "intent": "get a value of datetime.today() in the UTC time zone", "library": ["datetime", "pytz", "time"]}
Scenario 1:
pandas, pytz, datetime, random, matplotlib
‘‘‘python
import pandas as pd
import pytz
from datetime import datetime
from random import randint
import matplotlib.pyplot as plt
# Constants
CITIES = [’New York’, ’London’, ’Beijing’, ’Tokyo’, ’Sydney’]
WEATHER_CONDITIONS = [’Sunny’, ’Cloudy’, ’Rainy’, ’Snowy’, ’Stormy’]
# Time zones for the cities
TIMEZONES = {
    ’New York’: ’America/New_York’,
    ’London’: ’Europe/London’,
    ’Beijing’: ’Asia/Shanghai’,
    ’Tokyo’: ’Asia/Tokyo’,
    ’Sydney’: ’Australia/Sydney’
}
def generate_weather_report(utc_datetime):
"""
Generate a report of weather conditions for a list of cities across various
time zones at a given time (UTC).
Parameters:
utc_datetime (datetime): The datetime in UTC.
Returns:
DataFrame: A pandas DataFrame with weather conditions for the cities.
Requirements:
- pandas
- pytz
- datetime
- random
- matplotlib.pyplot
Example:
>>> utc_time = datetime(2023, 6, 15, 12, 0, 0, tzinfo=pytz.UTC)
>>> report = generate_weather_report(utc_time)
>>> print(report)
>>> report[’Weather Condition’].value_counts().plot(kind=’bar’)
"""
report_data = []
return report_df
‘‘‘
‘GPT_ODEX_BREAK‘
Scenario 2:
pytz, datetime, numpy, dateutil
‘‘‘python
import pytz
from datetime import datetime
import numpy as np
from dateutil.parser import parse
# Constants
LEAP_SECONDS = np.array([1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980,
1981, 1982, 1983, 1985, 1988, 1990, 1993, 1994, 1997,
1999, 2006, 2009, 2012, 2015, 2016, 2020])
def total_seconds_since_date(date_str, from_tz, to_tz):
    """
    Parameters:
date_str (str): The date string in "yyyy-mm-dd hh:mm:ss" format.
from_tz (str): The timezone of the given date string.
to_tz (str): The timezone to which the current time should be converted.
Returns:
int: The total seconds.
Requirements:
- datetime
- pytz
- numpy
- dateutil.parser
Example:
>>> total_seconds_since_date(’1970-01-01 00:00:00’, ’UTC’, ’America/New_York’)
"""
from_tz = pytz.timezone(from_tz)
to_tz = pytz.timezone(to_tz)
given_date = parse(date_str).replace(tzinfo=from_tz)
current_date = datetime.now().astimezone(to_tz)
total_seconds += leap_seconds
return int(total_seconds)
‘‘‘
# You should output the suitable labels in a list format, such as ["CSV", "DataFrames"].
J.2.2 GUIDELINES
## Annotation Guideline:
- Remove the library imports that are not used in the code.
- Import libraries before the function declaration.
- Check if the usage of these libraries is reasonable. For example, if the description asks to complete a functionality that can be implemented without any of these libraries, then the usage of these library APIs is not reasonable. You need to check Step 2 for more details to modify the description so that it can make use of the imported libraries.
#### Description
- Check if the expression of the description is clear and concise. If not, you need to modify the description to make it clear and concise.
- The description must mention the following five things:
  - Functionality
  - Input
  - Output to be returned
  - Requirements of the imported libraries/modules to be used
  - 1 or 2 examples of the input and output of the function
- Mention the necessary data structure if the function requires data manipulation.
- You must not change the overall functionality of the function, nor remove any libraries/modules that are imported in the function, in order to accommodate the blackbox testing.
- Check if the function takes input parameters. If not, you need to modify the description and the function to make it take input parameters.
#### Example
- Check if the function implementation is correct. If not, you need to repair the function implementation to make it correct.
- Check if the function uses any constants.
  - If yes and the description has not mentioned any of them, you need to either leave off the argument so it takes the default value, or modify the description to mention them.
  - For example, for the `plt.hist(counts, bins=np.arange(len(counts)+1)-0.5, rwidth=0.8)` in `f_0`, the description should mention the specific way to compute the bins with `np.arange(len(counts)+1)-0.5` and use `rwidth=0.8`.
- Check if the function has return values. If not, you need to modify the function to make it return the values for the test cases to check.
  - If the function requires writing or showing some data, you shall either return the data or make the function take a path for file storage or a variable for data visualization. For example, if the function requires showing a plot, you must return the plot (via Axes).
  - If the function requires checking some properties and uses `print()` to show the results, you need to modify this function to return these results. The revised function should amalgamate these properties with the preexisting return values, thereby facilitating more versatile utilization of the outputs in the rest of your code.
- Consider this original function:

      def check_properties(original_list):
          is_empty = not bool(original_list)
          print(f"Is the list empty? {is_empty}")
          length = len(original_list)
          print(f"Length of the list is {length}")

      check_properties([1, 2, 3, 4])

  This function checks two properties of a list: whether it's empty and its length. It then prints these results. However, these results can't be used elsewhere in your program.

      def check_properties(original_list):
          is_empty = not bool(original_list)
          length = len(original_list)
          return is_empty, length

  In this modified version, the function returns the two properties instead of printing them. This allows you to capture the returned values in list_empty and list_length variables and use them elsewhere in your program.
- If you return any formats of values (e.g. `string`), make sure that you mention the format in the description. It is better for assessing the correctness of the function implementation.
### Step4: Run The Function and Write Blackbox Test Cases
- The function is contained in a file named `function.py`, and you are required to write a blackbox testing function named `run_tests()` that contains assertion-based blackbox test cases in `test.py`.
- If any of the following data types are used for manipulation, you need to manually design the data or utilize 3rd party libraries and websites (e.g. `Faker` and `unittest.mock`) to generate or mock the test data. You can use the "file://" protocol to access local HTML files, and any URL request APIs should work correctly with this protocol. (See https://chat.openai.com/share/84ba0dc9-067d-4eb0-a4d4-d8f4a77ff1a5)
- You should test the possible attributes of the written data. For example, if you return a plot, you need to test whether the plot contains the correct title, x-axis, y-axis and data points.
- To formalize the test case writing, you need to write with the following function:
```python
def run_tests():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(TestCases))
    runner = unittest.TextTestRunner()
    runner.run(suite)

class TestCases(unittest.TestCase):
    def test_case_1(self):
        # Input 1: write your test case here.
        # Provide a series of unit tests to test all attributes of the returned values
        pass

    def test_case_2(self):
        # Input 2: write your test case here.
        # Provide a series of unit tests to test all attributes of the returned values
        pass

    def test_case_3(self):
        # Input 3: write your test case here.
        # Provide a series of unit tests to test all attributes of the returned values
        pass
```
- Each blackbox test method should be tested with a unique input value and asserted for the return values.
### Working
After reading the guideline, refine `function.py` and write a single test program named `run_tests()` in `test.py`, which should contain at least *five* different input cases.
When testing with data, you need to generate the data by yourself or use 3rd party libraries and websites (e.g. `Faker` and `unittest.mock`) to generate or mock the test data. For example, you should design the HTML webpage to meet the functionality and test scenarios. For any numeric values, you need to represent them in the data.
Make the test data as complex as possible to mimic real-world data. For example, if the function requires reading a CSV file, you need to design the CSV file to meet the functionality and test scenarios.
You cannot remove any libraries or modules used in the original function. However, you can add new libraries or modules to the function.
Make sure you have tested all return values and the possible attributes of the written data.
Keep testing the function until you are satisfied with the function implementation and the test cases.
If any tested properties are not mentioned in the description, you need to modify the description to mention them.
As we will provide the function stub for programmers to implement on their own, please make sure there is no ambiguity in the description and the function implementation. If there is any ambiguity, you need to modify the description and the function implementation to make them clear and concise. Think about the possible questions that programmers may ask and try to answer them in the description.
Execute the tests to make sure `function.py` passes all blackbox test cases. Otherwise, you need to modify the function implementation to make it pass all blackbox test cases.
Note that the `Faker` library has already been installed in the environment and you can use it freely.
Download the refined `function.py`, the written `test.py`, and the created `test_data.zip` if it exists.
# Annotation Guidelines
## Environment Setup
You are given a file named `requirements.txt`; please set up a Python environment with version 3.8.10. You can use Anaconda or a Python virtual environment for this purpose.
Please note that this environment will be the same as the one used by OpenAI GPT-4 Advanced Data Analysis (or Code Interpreter). Although it is expected that most APIs are stable, it is safer to use consistent library versions. You are encouraged to use more libraries covered in requirements.txt to enrich each sample.
## Annotation Goal
## Expected Annotation
## Issues To Be Addressed
You may notice the existence of the following issues in the given Python script:
- `Function Name` has not been obfuscated.
  - Replace the name with `f`.
- `Docstring` is unclear, ambiguous, impractical or not well aligned with `Solution`.
  - `Function Description` should describe a practical functionality.
  - You should either refine the `Function Description` or `Solution`. Choose the one more feasible to be done.
  - Make sure at least 2 correct `Running Examples` are included.
- `Solution` does not use all imported libraries or APIs.
  - Try to refine the `Programming Problem` so that the `Function Description` implies the corresponding API usage and such APIs are correctly invoked in `Solution`.
  - If (a) is difficult to complete, remove the unused import statements.
- `Solution` uses APIs that are not included in `Import Statement`.
  - Add the corresponding import statements if these APIs are necessary to complete the functionality. Otherwise, refine `Function Description` so that the functionality will require such APIs.
- `Solution` uses less than 2 libraries.
  - You should refine `Programming Problem` so that the functionality must require the API usage and invoke APIs from at least 2 distinct libraries. You can use ChatGPT (GPT-4) in Advanced Data Analysis for inspiration.
- `Solution` uses APIs in `random` or the random functionality.
  - Initialize the random seed for each `TestCases` to control the behavior.
- `Solution` contains dummy code.
  - Based on your understanding, replace the dummy code with the actual implementation of each part.
- `Solution` contains display functionality, such as `print()` and `matplotlib.pyplot.show()`.
  - If the function requires you to write or show some data, you shall either return the data or make the function take a path for file storage or a variable for data visualization. For example, if the function requires you to show a plot, you must return the plot (via Axes). If there is a specific attribute inside the object, you should mention it in the `Docstring` and test it inside `TestCases`; for example, the plot contains the specific title or label names. You should make sure that these attributes are either stated in `Docstring` or implied by `Docstring`.
  - If the function requires checking some properties and uses `print()` to show the results, you need to modify this function to return these results. The revised function should amalgamate these properties with the preexisting return values, thereby facilitating more versatile utilization of the outputs in the rest of your code.
  - Refer to Step 3 in the guidelines of the previous stage.
It assumes that you now have full control of `Programming Problem`. Write the test cases to validate whether the returned results are equal to certain values. For the returned objects, validate whether the attributes are equal to certain values.
- Test cases do not use any deterministic values as expected outputs.
  - Come up with the expected outputs after testing.
- `TestCases` uses libraries or APIs that are not included in `Import Statement`.
  - Add the corresponding import statements if these APIs are necessary to complete the testing. Otherwise, remove such APIs.
- `TestCases` contains test cases that do not work for `Solution`.
  - Repair these test cases or replace them with better cases.
- `TestCases` does not test all attributes of the returned object, where these attributes are implied by `Function Description` or completed by `Solution`.
  - Add lines of code to test these attributes.
  - If these attributes are not mentioned or implied by `Function Description`, try to describe them in `Function Description`.
- `TestCases` does not test the files that result from `Solution`.
  - Some files are created during the execution of `Programming Problem`. Add necessary lines of code to test the attributes of these files in each test case.
- `TestCases` is wrapped in `run_tests`.
  - Separate these two.
- Test cases in `TestCases` are duplicated or used to test the same behavior.
  - Remove them if there is a huge overlap. Replace them with more complex test cases. Make sure that at least five test cases are included.
- Test data used in `TestCases` is missing.
  - You need to manually design the data or utilize 3rd party libraries and websites (e.g. `Faker` and `unittest.mock`) to generate or mock the test data. Refer to Step 4 in the guidelines of the previous stage.
6. `Solution` uses APIs in `random`, but does not pass a random seed to `Function Parameters`:
   - When using random functionalities, for reproducibility, it's good practice to allow the user to set a seed.
   - Example:
     - Before: `random.randint(1,10)`
     - After: `random.seed(seed); random.randint(1,10)`
9. `TestCases` uses libraries or APIs that are not included in `Import Statement`:
   - Similar to the solution, all external libraries or functions used in the test cases should be imported.
10. `TestCases` contains test cases that do not work for `Solution`:
   - All test cases should be aligned with the function's behavior to ensure they test the function correctly.
11. `TestCases` does not test all attributes of the returned object:
   - If the function returns an object with multiple attributes or methods, the test cases should validate all of them to ensure complete coverage. For example, when plotting data on a graph, you might get an `AxesSubplot` object in return. This object has various attributes, like the title, x-label, y-label, and the data points themselves. You should test all of these attributes if they are required in the functionality.
12. `TestCases` does not test the files that result from `Solution`:
   - If the function creates or modifies files, the test cases should validate these files to ensure the function works as expected.
14. Test cases in `TestCases` are duplicated or used to test the same behavior:
   - Redundant test cases should be removed to keep the test suite concise and focused.
K EVALUATION SETUP
K.1 INFERENCE
We perform all the model inference on A100 GPUs, except for the closed models, for which we rely on their officially documented APIs.
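For the open models, inference with vLLM roughly follows the sketch below; the model name and decoding parameters are illustrative and not the exact settings used in our experiments.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.0, max_tokens=1024)  # greedy-style decoding

# The prompt template mirrors Figure 14; {prompt} is replaced by a task prompt.
prompt = (
    "Please provide a self-contained Python script that solves the following problem "
    "in a markdown code block:\n{prompt}"
)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```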
K.2 EXECUTION
We conduct the execution mainly on the Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz, composed of
2 sockets, with 18 cores per socket.
Please provide a self-contained Python script that solves the following problem in a markdown code
block:
{prompt}
Figure 14: Prompt template for models supported by vLLM (Kwon et al., 2023).
Please generate self-contained code to solve the following problem in a Python markdown block:
{prompt}
Please generate self-contained code to complete the following problem wrapped in a Python mark-
down block:
{prompt}
Closed LLMs perform better than open LLMs We notice that most strong LLMs on BigCodeBench are non-permissive, led by the models from OpenAI and Anthropic. Among the open LLMs, the best model, Llama3-instruct-70B, slightly outperforms Claude-3-Sonnet and ranks 5th on both BigCodeBench-Complete and BigCodeBench-Instruct. The most permissive LLMs are relatively small and hence lag behind the non-permissive ones.
Jack of all trades, master of most Figure 7 shows the top 5 instruction-tuned LLMs ranked on BigCodeBench-Complete. The best overall base LLMs, like Dpsk-Coder-Base-6.7B, excel in most domains but still fall short in certain ones. We suggest that this domain-specific specialty likely results from the training data.
Table 9: Calibrated Pass@1 comparison of plain prompting and zero-shot CoT prompting on both
BigCodeBench and BigCodeBench-Hard.
We provide some preliminary studies on the effectiveness of zero-shot chain-of-thought (CoT) prompting (Kojima et al., 2022) on GPT-4o and Gemini-1.5-Pro by appending "\nLet’s think step by step." to the end of the original prompt. From Table 9, we observe no significant advantage to using zero-shot CoT for BigCodeBench tasks, indicating that the prompting technique is not sufficiently reliable for complex compositional reasoning. We therefore conclude that prompt design is tricky, and customized prompting is not recommended for the official evaluation setup. For our prompt design, we followed the same approach used in EvalPlus for consistency.
M QUALITATIVE STUDIES
Our benchmark exposes instances where even the strongest models struggle to use APIs correctly, failures that may not be exposed by simple benchmarks like HumanEval. Here, we show some examples with GPT-4o.
Example 1: In the following example, the model fails to use itertools.cycle correctly, producing an infinite loop.
from collections import Counter
import random
from itertools import cycle
# Constants
ELEMENTS = [’A’, ’B’, ’C’, ’D’, ’E’, ’F’, ’G’, ’H’, ’I’, ’J’]
def task_func(l):
"""
Create a counter from a list "l" and move the first 3 elements to the end of the list.
Parameters:
- l (list): A list of elements that the function will process.
Returns:
- counter (collections.Counter): A frequency counter that maps elements from the input list to their ←-
frequencies in the first 30 elements of the cycled, shuffled list.
Requirements:
- collections
- random
- itertools
Example:
>>> random.seed(42)
>>> task_func(ELEMENTS)
Counter({’I’: 3, ’F’: 3, ’G’: 3, ’J’: 3, ’E’: 3, ’A’: 3, ’B’: 3, ’H’: 3, ’D’: 3, ’C’: 3})
"""
# Move the first 3 elements to the end of the list
l = l[3:] + l[:3]
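The listing above is truncated, but the docstring asks for the frequencies of the first 30 elements of the cycled, shuffled list. A minimal sketch of that intended behavior, bounding the infinite iterator with itertools.islice instead of looping over cycle directly, is shown below; it is an illustration, not the reference solution.

```python
from collections import Counter
from itertools import cycle, islice
import random

def count_first_30(l):
    # Move the first 3 elements to the end, shuffle, then count the first 30
    # elements of the cycled list; islice keeps the infinite iterator bounded.
    l = l[3:] + l[:3]
    random.shuffle(l)
    return Counter(islice(cycle(l), 30))

random.seed(42)
print(count_first_30(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']))
```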
Example 2: In this example, the model uses the semantics of isinstance incorrectly. Here, isinstance(j, (int, float)) does not actually check if j is an int or float, but rather the type (int, float).
import pandas as pd
import seaborn as sns
def task_func(data):
"""
Analyze a dataset by calculating the average of values across each row and visualizing the correlation←-
matrix as a
heatmap.
Parameters:
data (numpy.array): 2D array where each row represents a record and each column represents a feature
Returns:
tuple: A tuple containing:
- DataFrame: A pandas DataFrame enhanced with an ’Average’ column that represents the mean across ←-
each row.
- Axes: The matplotlib Axes object showing the heatmap of the correlations.
Requirements:
- pandas
- numpy
- seaborn
Raises:
ValueError: If the input data is not a 2D array or if it contains non-numeric data.
Example:
>>> data = np.array([[1, 2, 3, 4, 5, 6, 7, 8], [8, 7, 6, 5, 4, 3, 2, 1]])
>>> df, ax = task_func(data)
>>> print(df[’Average’].to_string(index=False))
4.5
4.5
"""
# Check if data is a 2D array
if not isinstance(data, (list, np.ndarray)) or not all(isinstance(i, (list, np.ndarray)) for i in data←-
):
raise ValueError("Input data is not a 2D array")
# Check if data contains only numeric values
if not all(all(isinstance(j, (int, float)) for j in i) for i in data):
Example 3: Here, the model mishandles the case where the directory does not exist, through incorrect use of os.path.dirname: for a bare filename such as 'data.pkl', os.path.dirname returns an empty string, and os.makedirs('') raises an error.
import pickle
import os
Parameters:
- filename (str): The path of the file to be overwritten with serialized data.
- data (object): The object to serialize and write to the file.
Returns:
- bool: True if the operation is successful, False otherwise.
Requirements:
- pickle
- os
Example:
>>> result = task_func(’data.pkl’, {’key’: ’value’})
>>> print(result)
True
"""
try:
# Check if the directory exists
dir_name = os.path.dirname(filename)
if not os.path.exists(dir_name):
# Create directory if it does not exist
os.makedirs(dir_name)
return True
except Exception as e:
print(f"An error occurred: {e}")
return False
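A sketch of the corrected handling is shown below: the directory is created only when the path actually has a non-empty directory component. This is an illustration rather than the ground-truth solution.

```python
import os
import pickle

def save_pickle(filename, data):
    dir_name = os.path.dirname(filename)
    if dir_name:                               # '' for a bare filename like 'data.pkl'
        os.makedirs(dir_name, exist_ok=True)   # create the directory only when needed
    try:
        with open(filename, "wb") as f:
            pickle.dump(data, f)
        return True
    except Exception as e:
        print(f"An error occurred: {e}")
        return False
```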
def task_func(data_list):
"""
Unzips a list of tuples and calculates the mean of the numeric values for
each position.
Parameters:
- data_list (list of tuples): The data to process, structured as a list of tuples. Each tuple can ←-
contain alphanumeric values.
Returns:
- list: A list of mean values for each numeric position across the tuples. Non-numeric positions are ←-
ignored.
An empty list is returned if the input list (data_list) is empty.
Requirements:
- numpy
- itertools
Example:
>>> task_func([(’a’, 1, 2), (’b’, 2, 3), (’c’, 3, 4), (’d’, 4, 5), (’e’, 5, 6)])
[nan, 3.0, 4.0]
>>> task_func([(1, ’a’, 2), (’a’, 3, 5), (’c’, 1, -2)])
[1.0, 2.0, 1.6666666666666667]
"""
if not data_list:
return []
Example 5: In the following example, the model outputs code that uses different encodings for password_hash and expected_password_hash, so even when the passwords are the same, the response indicates a failed login.
def task_func(data):
"""
This method is designed to handle the authentication process in a web application context.
It expects input in the form of a dictionary with ’username’ and ’password’ keys. The password
is expected to be a base64-encoded SHA-256 hash. The method decodes and authenticates these ←-
credentials
against predefined values (for demonstration purposes, it checks if the username is ’admin’ and the
password hash matches the hash of ’password’). Based on the authentication result, it returns an ←-
appropriate
HTTP response.
Parameters:
data (dict): A dictionary with ’username’ and ’password’ keys.
Returns:
django.http.HttpResponse: An HttpResponse indicating the login result.
HttpResponseBadRequest if the data is invalid.
Raises:
KeyError, UnicodeDecodeError, binascii.Error, ValueError if the input dictionary is invalid.
Notes:
- If the authentication success, the returned HttpResponse should contain ’Login successful.’ with ←-
status 400.
- If the authentication fails, the returned HttpResponse should contain ’Login failed.’ with status ←-
401.
- If the input data is invalid (i.e., password is a non-base64, missing keys), the function return ←-
HttpResponseBadRequest and it contains ’Bad Request.’
Examples:
>>> from django.conf import settings
>>> if not settings.configured:
... settings.configure()
>>> data = {’username’: ’admin’, ’password’: base64.b64encode(hashlib.sha256(’password’.encode()).←-
digest()).decode()}
>>> response = task_func(data)
>>> response.status_code == 200 and ’Login successful.’ in response.content.decode()
False
Requirements:
- django.http
- django.conf
- base64
- hashlib
- binascii
"""
try:
# Check if the required keys are present
if ’username’ not in data or ’password’ not in data:
raise KeyError(’Missing keys in input data’)
username = data[’username’]
password_base64 = data[’password’]
Example 6: This example showcases a failure to understand the nuances of the wikipedia library. When page_title does not match a Wikipedia page verbatim, the wikipedia.page method raises a wikipedia.exceptions.DisambiguationError rather than a wikipedia.exceptions.PageError.
import wikipedia
from wordcloud import WordCloud
import matplotlib.pyplot as plt
def task_func(page_title):
"""
Create a word cloud from the text of a Wikipedia page.
Parameters:
page_title (str): The title of the Wikipedia page.
Returns:
matplotlib.axes.Axes: The Axes object of the plotted data. Is None if there is no wikipedia page with ←-
the title given as input.
Requirements:
- wikipedia
- wordcloud.WordCloud
- matplotlib.pyplot
Example:
>>> ax = task_func(’Python (programming language)’)
"""
try:
# Fetch the Wikipedia page content
page_content = wikipedia.page(page_title).content
return ax
except wikipedia.exceptions.PageError:
# Return None if the page does not exist
return None
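A corrected sketch would catch both exception types; the error handling shown here is illustrative and not the benchmark's reference solution.

```python
import wikipedia

def page_content_or_none(page_title):
    try:
        return wikipedia.page(page_title).content
    except (wikipedia.exceptions.DisambiguationError, wikipedia.exceptions.PageError):
        # Ambiguous titles raise DisambiguationError; missing pages raise PageError.
        return None
```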
Example 7: In this Network example, the error actually originates from the regex. For the first URL, instead of www.google.com, the regex extracts www.google.com, (with a trailing comma), which leads to errors down the line.
import re
import socket
import urllib.parse
def task_func(myString):
"""
Extracts all URLs from a given string, analyzes each URL to extract the domain, and retrieves the IP ←-
address of each domain.
Parameters:
myString (str): The string from which URLs are extracted. The string should contain valid URLs ←-
starting with http or https.
Returns:
dict: A dictionary with domains as keys and their respective IP addresses (IPv4) as values. If a ←-
domain cannot be resolved, the IP address will be None.
Requirements:
- re
- urllib.parse
- socket
Raises:
socket.gaierror if the domain cannot be resolved
Example:
>>> task_func("Check these links: http://www.google.com, https://www.python.org")
{’www.google.com’: ’172.217.12.142’, ’www.python.org’: ’151.101.193.223’}
"""
return domain_ip_map
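One way to avoid capturing the trailing comma is to exclude punctuation such as commas from the URL pattern before parsing the domain; the sketch below is illustrative, not the ground-truth solution.

```python
import re
import socket
import urllib.parse

def domain_ip_map(my_string):
    # Exclude whitespace and commas so 'http://www.google.com,' yields 'www.google.com'.
    urls = re.findall(r"https?://[^\s,]+", my_string)
    result = {}
    for url in urls:
        domain = urllib.parse.urlparse(url).netloc
        try:
            result[domain] = socket.gethostbyname(domain)
        except socket.gaierror:
            result[domain] = None
    return result
```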
Example 8: In this example, the model misunderstands the geolocation API's URL structure. The issue is in the line response = requests.get(f"https://ipinfo.io/{ip_address}/json"), where the /json suffix should not be present; this leads to a hostname-not-found error.
import socket
import requests
def task_func(host):
"""
This function resolves the IP address of the given host and then uses the IP address
to fetch geolocation information from the ipinfo.io API. The function is robust against
various common errors, such as invalid hostnames, network issues, or problems with the
geolocation service.
Parameters:
host (str): The hostname to be resolved.
Returns:
dict: A dictionary containing the IP address and geolocation information if successful.
Raises:
ValueError: If ’host’ is None or an empty string.
ConnectionError: If there is a problem connecting to the geolocation service.
Example:
>>> result = task_func(’google.com’)
>>> ’ip_address’ in result and ’geolocation’ in result
True
>>> task_func(’’)
Traceback (most recent call last):
...
ValueError: Host must be a non-empty string.
Requirements:
- socket
- requests
"""
if not host:
raise ValueError("Host must be a non-empty string.")
try:
ip_address = socket.gethostbyname(host)
except socket.gaierror:
raise ValueError("Invalid hostname.")
try:
response = requests.get(f"https://ipinfo.io/{ip_address}/json")
response.raise_for_status()
geolocation = response.json()
except requests.RequestException as e:
raise ConnectionError(f"Error connecting to the geolocation service: {e}")
return {
"ip_address": ip_address,
"geolocation": geolocation
}
Example 9: This example shows a case where there is a simple solution using numpy's flatten, but the generated solution does not use it and therefore gets the function incorrect.
def task_func(dimension, seed=42):
"""
Create a 2D numeric array (matrix) of a given dimension with random integers between 1 and 100,
and a flat list of all elements in the matrix.
Parameters:
- dimension (int): The dimension of the square matrix to be created. It must be a positive integer.
Returns:
tuple: A tuple containing:
- A 2D numpy array of the given dimension with random integers between 1 and 100.
Requirements:
- numpy
- itertools
Example:
>>> matrix, flat_list = task_func(3)
>>> print(matrix)
[[52 93 15]
[72 61 21]
[83 87 75]]
>>> print(flat_list)
[52, 93, 15, 72, 61, 21, 83, 87, 75]
"""
# Set the random seed for reproducibility
np.random.seed(seed)
# Ground Truth
def task_func(dimension, seed=42):
np.random.seed(seed) # Ensure reproducible results
if dimension <= 0:
raise ValueError("The dimension must be a positive integer")
matrix = np.random.randint(1, 101, size=(dimension, dimension))
flat_list = matrix.flatten().tolist()
return matrix, flat_list
Example 10: This is another example where the model chooses to use the wrong function. Here, it
uses np.mean instead of np.nanmean (which the ground truth solution uses). This leads to the
failure of a test with incomplete data.
import pandas as pd
import numpy as np
import itertools
def task_func(data_list=[(’a’, 1, 2.1), (’b’, 2, 3.2), (’c’, 3, 4.3), (’d’, 4, 5.4), (’e’, 5, 6.5)]):
"""
Calculate the mean of numerical values in each position across tuples in a list.
Non-numeric values are ignored, and means are computed only from available data.
That means that missing data in some of the tuples is simply ignored.
A DataFrame with one columns named ’Mean Value’ which contains the mean values for all tuple positions←-
.
The index is according to this scheme: ’Position i’ where i is the current position.
If an empty list is passed, then an empty DataFrame is returned.
Parameters:
data_list (list of tuples): A list containing tuples of mixed data types (string, int, float, etc.).
Defaults to [(’a’, 1, 2.1), (’b’, 2, 3.2), (’c’, 3, 4.3), (’d’, 4, 5.4), (’e’, 5, 6.5)]
Returns:
DataFrame: A pandas DataFrame with the mean values of the numerical data at each position.
Requirements:
- pandas
- numpy
- itertools
Example:
>>> df = task_func()
>>> print(df)
Mean Value
Position 0 NaN
Position 1 3.0
Position 2 4.3
>>> data = [(’a’, ’1’, 2.1), (’b’, 21, ’c’), (12, 3, 4.3), ([’d’], 4, 5.4), (’e’, 5, 6.5)]
>>> df = task_func(data)
>>> print(df)
Mean Value
Position 0 NaN
Position 1 8.25
Position 2 4.3
"""
if not data_list:
return pd.DataFrame(columns=[’Mean Value’])
means = []
for col in transposed:
# Filter out non-numeric values
return df
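The difference is easy to see on a small array with missing data; assuming NaN marks the missing entries, np.mean propagates it while np.nanmean ignores it:

```python
import numpy as np

values = [1.0, np.nan, 3.0]
print(np.mean(values))     # nan: a single missing value poisons the ordinary mean
print(np.nanmean(values))  # 2.0: missing values are ignored, as the task description requires
```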
HumanEval Test: The HumanEval tests only consider input-output assertions, which work only for simple programs without configuration and environment setup. We show several of its tests below as an example.
METADATA = { ’author’: ’jt’, ’dataset’: ’test’ }
def check(candidate):
assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True
assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False
assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True
assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False
assert candidate([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True
assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True
assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False
BigCodeBench Unit Test: We demonstrate the design of BigCodeBench unit tests as follows, where we mock various scenarios for the network connection and validate the program behaviours. Compared to the input-output assertions used in HumanEval and other benchmarks like APPS, our unit tests require substantial human effort to design and cover various settings.
# Requirements SetUp
import unittest
from unittest.mock import patch
import http.client
import ssl
import socket
O BIGCODEBENCH: EVALUATION INFRASTRUCTURE
In this section, we document the usage of bigcodebench, the evaluation infrastructure for
BigCodeBench. We note that the prototype of bigcodebench is based on EvalPlus (Liu
et al., 2024).
bigcodebench.evaluate \
--model meta-llama/Meta-Llama-3.1-8B-Instruct \
--split [complete|instruct] \
--subset [full|hard] \
--backend [vllm|openai|anthropic|google|mistral|hf]
P DEVELOPMENT TIMELINE
04/2023 - 05/2023 Project Initiation.
135 C Artifacts Created in BigCodeBench
136
Table 1: Artifacts created in our work.

Artifact              Host          Link
LeaderBoard           GitHub        https://bigcode-bench.github.io/
                      Hugging Face  https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard
Dataset (v0.1.0)      GitHub        https://github.com/bigcode-project/bigcodebench-annotation/releases/tag/v0.1.0
                      Hugging Face  https://huggingface.co/datasets/bigcode/bigcodebench
                      Croissant     https://huggingface.co/api/datasets/bigcode/bigcodebench/croissant
Annotation Framework  GitHub        https://github.com/bigcode-project/bigcodebench-annotation
Evaluation Framework  GitHub        https://github.com/bigcode-project/bigcodebench
                      PyPI          https://pypi.org/project/bigcodebench/
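As an illustration, the released dataset can be inspected directly from the Hugging Face entry above. The following is a minimal sketch, assuming the `datasets` library is installed; it simply prints whatever splits and task fields the hosted dataset exposes.

```python
# Minimal sketch: load the benchmark from the Hugging Face dataset listed in
# Table 1 and inspect its structure (split and field names are whatever the
# hosted dataset currently exposes).
from datasets import load_dataset

ds = load_dataset("bigcode/bigcodebench")  # returns a DatasetDict of all splits
print(ds)                                  # available splits and task counts

split_name = next(iter(ds))                # pick the first available split
print(ds[split_name].column_names)         # names of the task fields
print(ds[split_name][0])                   # one task record
```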
D Detailed Benchmark Construction

D.1 Data Synthesis Prompt
Based on the following simple example, write more complex scenarios and invoke multiple Python libraries to solve each problem.
The written intent should align with a more specific and practical scenario, but should still be easy to do functional correctness assertion.
For each scenario, write a single Python function with the rewritten intent.
Please include requirements and terminal-based input-output examples in the function docstring.
The function should contain complex logic like if-else statements and loops.
You have to use more than three Python libraries for a scenario. Write imports and variable definitions outside the function.
Try to avoid using web APIs if possible.
If there are any constants (e.g. strings and numeric values) used in the functions, you need to declare them before the function.
If data is used, you need to provide sample data in the comment.
Try to return values for correctness assertion.
Each programming scenario and intent should be separated by the special token `GPT_ODEX_BREAK`.

Generate two examples with two scenarios:
{"task_id": 4530069, "prompt": "def f_4530069():\n\treturn ", "suffix": "", "canonical_solution": "datetime.now(pytz.utc)", "test_start": "\nimport pytz\nimport time\nfrom datetime import datetime, timezone\n\ndef check(candidate):", "test": ["\n    assert (candidate() - datetime(1970, 1, 1).replace(tzinfo=timezone.utc)).total_seconds() - time.time() <= 1\n"], "entry_point": "f_4530069", "intent": "get a value of datetime.today() in the UTC time zone", "library": ["datetime", "pytz", "time"]}
Scenario 1:
pandas, pytz, datetime, random, matplotlib
```python
import pandas as pd
import pytz
from datetime import datetime
from random import randint
import matplotlib.pyplot as plt

# Constants
CITIES = ['New York', 'London', 'Beijing', 'Tokyo', 'Sydney']
WEATHER_CONDITIONS = ['Sunny', 'Cloudy', 'Rainy', 'Snowy', 'Stormy']

# Time zones for the cities
TIMEZONES = {
    'New York': 'America/New_York',
    'London': 'Europe/London',
    'Beijing': 'Asia/Shanghai',
    'Tokyo': 'Asia/Tokyo',
    'Sydney': 'Australia/Sydney'
}

def generate_weather_report(utc_datetime):
    """
    Generate a weather report for the predefined cities in their local time zones, given a datetime in UTC.

    Parameters:
    utc_datetime (datetime): The datetime in UTC.

    Returns:
    DataFrame: A pandas DataFrame with weather conditions for the cities.

    Requirements:
    - pandas
    - pytz
    - datetime
    - random
    - matplotlib.pyplot

    Example:
    >>> utc_time = datetime(2023, 6, 15, 12, 0, 0, tzinfo=pytz.UTC)
    >>> report = generate_weather_report(utc_time)
    >>> print(report)
    >>> report['Weather Condition'].value_counts().plot(kind='bar')
    """
    report_data = []
    for city in CITIES:
        city_tz = pytz.timezone(TIMEZONES[city])
        city_time = utc_datetime.astimezone(city_tz)
        weather = WEATHER_CONDITIONS[randint(0, len(WEATHER_CONDITIONS) - 1)]
        report_data.append([city, city_time, weather])

    report_df = pd.DataFrame(report_data, columns=['City', 'Local Time', 'Weather Condition'])
    return report_df
```
Scenario 2:
datetime, pytz, numpy, dateutil
```python
from datetime import datetime
import pytz
import numpy as np
from dateutil.parser import parse

# Constants
LEAP_SECONDS = np.array([1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980,
                         1981, 1982, 1983, 1985, 1988, 1990, 1993, 1994, 1997,
                         1999, 2006, 2009, 2012, 2015, 2016, 2020])

def total_seconds_since_date(date_str, from_tz, to_tz):
    """
    Calculate the total seconds that have passed since a given datetime from the current time
    in different timezones considering the leap seconds.

    Parameters:
    date_str (str): The date string in "yyyy-mm-dd hh:mm:ss" format.
    from_tz (str): The timezone of the given date string.
    to_tz (str): The timezone to which the current time should be converted.

    Returns:
    int: The total seconds.

    Requirements:
    - datetime
    - pytz
    - numpy
    - dateutil.parser

    Example:
    >>> total_seconds_since_date('1970-01-01 00:00:00', 'UTC', 'America/New_York')
    """
    from_tz = pytz.timezone(from_tz)
    to_tz = pytz.timezone(to_tz)
    given_date = parse(date_str).replace(tzinfo=from_tz)
    current_date = datetime.now().astimezone(to_tz)

    total_seconds = (current_date - given_date).total_seconds()
    leap_years = LEAP_SECONDS[np.logical_and(LEAP_SECONDS >= given_date.year, LEAP_SECONDS <= current_date.year)]
    leap_seconds = len(leap_years)

    total_seconds += leap_seconds

    return int(total_seconds)
```
Above is the illustration.

Generate five complex scenarios based on the following simple example:
D.2 Semi-automatic Program Refactoring and Testing Case Generation
We introduce the design from the human and LLM aspects as follows:

Human Aspect Human developers possess varying preferences and levels of familiarity with specific data types and programming scenarios. To aid human annotators in providing more precise feedback for refining programs with GPT-4, we have defined 10 data types (e.g., SQL, CSV, and Python built-in types) and task scenarios (e.g., data analysis, networking, and visualization). The GPT-4 API is utilized to automatically classify each program according to these categories, with detailed descriptions available in Appendix D.2.1. The annotators' role is to continually instruct GPT-4 to refactor the programs and to provide continuous feedback to guide the model whenever it fails to self-debug or incorrectly refactors the program.
LLM Aspect To effectively guide GPT-4 in the iterative refinement of programs and test cases, we provide detailed annotation guidelines in Appendix D.2.2 as an initial prompt. These guidelines encompass two high-level instructions: (1) refine the function, including its docstrings, to enhance realism and reduce ambiguity, and (2) write unit tests to ensure the functional correctness of the given program description. Specifically, the model is taught to follow a step-by-step refinement process: (1) remove unused libraries and add necessary libraries if they are missing in the code snippet; (2) reformat docstrings to adhere to PEP-257 conventions; (3) align program implementations with the instructions inside docstrings; (4) write and execute unit tests to ensure they pass; and (5) add refined programs and test cases to files for downloading.
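As a concrete, purely hypothetical illustration of this refinement loop (not an actual benchmark task), the sketch below shows a toy stub before refinement and its refined counterpart with a PEP-257-style docstring and an accompanying unit test; the names `f_0` and `TestCases` follow the conventions used in the guidelines.

```python
# Before refinement (sketch): unused import, vague docstring, prints instead of
# returning, so it cannot be checked by unit tests.
#
#   import os
#   import collections
#   def task(words):
#       """Count stuff."""
#       print(collections.Counter(words))

# After refinement (sketch): the unused `os` import is removed, the docstring is
# reformatted and aligned with the implementation, and a value is returned.
import collections
import unittest


def f_0(words):
    """Count how often each word occurs in a list of words.

    Parameters:
    words (list of str): The words to count.

    Returns:
    dict: A mapping from each word to its frequency.

    Example:
    >>> f_0(["a", "b", "a"])
    {'a': 2, 'b': 1}
    """
    return dict(collections.Counter(words))


class TestCases(unittest.TestCase):
    def test_case_1(self):
        self.assertEqual(f_0(["a", "b", "a"]), {"a": 2, "b": 1})

    def test_case_2(self):
        self.assertEqual(f_0([]), {})
```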
During interactions with GPT-4, we identify two main drawbacks in the Code Interpreter session. First, GPT-4 struggles to write proper test cases when mocking tests are employed. While the model can generate high-level designs of mocking tests, it often fails to understand how test cases should be constructed based on execution feedback. Second, GPT-4 can become stuck while resolving runtime bugs, leading to iterative refinement until the session times out. Continuous human feedback on viable solutions is essential to address these issues and ensure the model stays on track.
D.2.1 Programming Task Classification Prompt

# Choose the most suitable labels for the given program:
SQL
CSV
DataFrames
Time
JSON
XML
HTML
Image
Text
Built-in Data Structure
Analysis
Networking
Processing
Visualization
File Storage
Encryption

# You should output the suitable labels in a list format, such as ["CSV", "DataFrames"].
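A minimal sketch of how a program might be auto-labeled with this prompt through the OpenAI API follows; the model name, message framing, output parsing, and the helper `classify_program` are illustrative assumptions rather than the exact pipeline used by our annotation framework.

```python
# Sketch: send the classification prompt plus a program's source code to the
# chat completions API and parse the returned Python-style list of labels.
import ast
from openai import OpenAI

CLASSIFICATION_PROMPT = (
    "# Choose the most suitable labels for the given program:\n"
    "SQL, CSV, DataFrames, Time, JSON, XML, HTML, Image, Text, "
    "Built-in Data Structure, Analysis, Networking, Processing, "
    "Visualization, File Storage, Encryption\n"
    '# You should output the suitable labels in a list format, such as ["CSV", "DataFrames"].\n'
)

def classify_program(source_code: str) -> list:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": CLASSIFICATION_PROMPT + "\n" + source_code}],
    )
    # The prompt asks for a Python-style list, e.g. ["CSV", "DataFrames"].
    return ast.literal_eval(response.choices[0].message.content.strip())
```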
D.2.2 Guidelines

## Annotation Guideline:
You are given a function inside "function.py". The goal is to:
1) refine the function including its docstrings in order to make the function more realistic and less ambiguous. This means when you see the function stub and docstring, you should be able to implement exactly the same functionality as the given function body;
2) write blackbox unit tests to ensure the functional correctness of the given function. You should also make the function easy to test.
### Step 1: Check Library Imports

#### Import Statement
- Remove the library imports that are not used in the code.
- Import libraries before the function declaration.

#### Library Usage
- Check if the usage of these libraries is reasonable. For example, if the description asks to complete a functionality that can be implemented without any of these libraries, then the usage of these libraries' APIs is not reasonable. You need to check Step 2 for more details to modify the description so that it can make use of the imported libraries.
### Step 2: Check Docstring

#### Description
- Check if the expression of the description is clear and concise. If not, you need to modify the description to make it clear and concise.
- The description must mention the following five things:
    - Functionality
    - Input
    - Output to be returned
    - Requirements of the imported libraries/modules to be used
    - 1 or 2 examples of the input and output of the function
- Mention the necessary data structure if the function requires data manipulation.
- You must not change the overall functionality of the function, or remove any libraries/modules that are imported in the function, to accommodate the blackbox testing.

#### Input Parameters
- Check if the function takes input parameters. If not, you need to modify the description and the function to make it take input parameters.

#### Example
- Provide 1 or 2 examples of the input and output of the function.
- ```bash
  >>> f_0("hello")
  "hello world"
  >>> f_0("I")
  "love you"
  ```
- ```bash
  >>> f_0("image_1.jpg")
  <module 'matplotlib.pyplot'>
  >>> f_0("image_2.jpg")
  <module 'matplotlib.pyplot'>
  ```
### Step 3: Check Function Implementation
- Check if the function implementation is correct. If not, you need to repair the function implementation to make it correct.
- Check if the function uses any constants.
    - If yes and the description has not mentioned any of them, you need to either leave off the argument so it takes the default value, or modify the description to mention them.
    - For example, for the `plt.hist(counts, bins=np.arange(len(counts)+1)-0.5, rwidth=0.8)` in `f_0`, the description should mention the specific way to compute the bins with `np.arange(len(counts)+1)-0.5` and the use of `rwidth=0.8`.
- Check if the function has return values. If not, you need to modify the function to make it return the values for the test cases to check.
- If the function requires writing or showing some data, you shall either return the data or make the function take a path for file storage or a variable for data visualization. For example, if the function requires showing a plot, you must return the plot (via Axes).
- If the function requires checking some properties and uses `print()` to show the results, you need to modify this function to return these results. The revised function should amalgamate these properties with the preexisting return values, thereby facilitating more versatile utilization of the outputs in the rest of your code.
- Consider this original function:

      def check_properties(original_list):
          is_empty = not bool(original_list)
          print(f"Is the list empty? {is_empty}")
          length = len(original_list)
          print(f"Length of the list is {length}")

  This function checks two properties of a list: whether it's empty and its length. It then prints these results. However, these results can't be used elsewhere in your program.

  Now, let's modify the function to return these results:
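A minimal sketch of such a modification, assuming the two checked properties are simply returned as a tuple:

```python
def check_properties(original_list):
    # Return the checked properties instead of printing them so that the
    # results can be reused and asserted elsewhere in the program.
    is_empty = not bool(original_list)
    length = len(original_list)
    return is_empty, length
```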
- You should test the possible attributes of that written data. For example, if you return a plot, you need to test if the plot contains the correct title, x-axis, y-axis and data points.
- To formalize the test case writing, you need to write it within the following scaffold (a sketch is given after this list).
- Each blackbox test method should be tested with a unique input value and asserted for the return values.
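A minimal sketch of the test scaffold referred to above, assuming the standalone `TestCases` class and `run_tests()` helper described later under Expected Annotation:

```python
import unittest

class TestCases(unittest.TestCase):
    def test_case_1(self):
        # call the refined function with a unique input and assert the
        # returned values (and any attributes of returned objects)
        pass

    # write more tests here (at least five distinct input cases in total)

def run_tests():
    # only the code needed to execute the test cases above
    suite = unittest.TestLoader().loadTestsFromTestCase(TestCases)
    unittest.TextTestRunner().run(suite)

if __name__ == "__main__":
    run_tests()
```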
### Working
After reading the guideline, refine `function.py` and write a single test program named `run_tests()` in `test.py`, which should contain at least *five* different input cases.
When testing with data, you need to generate the data by yourself or use 3rd-party libraries and websites (e.g. `Faker` and `unittest.mock`) to generate or mock the test data. For example, you should design the HTML webpage to meet the functionality and test scenarios. For any numeric values, you need to represent them in the data.
Make the test data as complex as possible to mimic real-world data. For example, if the function requires reading a CSV file, you need to design the CSV file to meet the functionality and test scenarios.

You cannot remove any libraries or modules used in the original function. However, you can add new libraries or modules to the function.
Make sure you have tested all return values and the possible attributes of the written data.
Keep testing the function until you are satisfied with the function implementation and the test cases.
If any tested properties are not mentioned in the description, you need to modify the description to mention them.

As we will provide the function stub for programmers to implement on their own, please make sure there is no ambiguity in the description and the function implementation. If there is any ambiguity, you need to modify the description and the function implementation to make them clear and concise. Think about the possible questions that programmers may ask and try to answer them in the description.
Execute to make sure `function.py` will pass all blackbox test cases. Otherwise, you need to modify the function implementation to make it pass all blackbox test cases.
You are given a file named `requirements.txt`; please set up a Python environment with version 3.8.10. You can use Anaconda or a Python virtual environment for this purpose.

Use the following command to install all required libraries:
```sh
pip install -U -r requirements.txt
```

Please note that this environment will be the same as the one used by OpenAI GPT-4 Advanced Data Analysis (or Code Interpreter). Although it is expected that most APIs are stable, it is safer to use consistent library versions. You are encouraged to use more libraries covered in requirements.txt to enrich each sample.
## Annotation Goal
The goal is to:
1) Refine the function, including its docstrings, in order to make the function more realistic and less ambiguous. This means when you see the function stub and docstring, you should be able to implement exactly the same functionality as the given function body. Add more library APIs if necessary;
2) Write additional blackbox unit tests to ensure the functional correctness of the given function. You should consider as many corner cases as possible.
## Expected Annotation
You are given a Python script to execute and annotate.

We define the following terms:
`Programming Problem` contains three parts:
- [Standalone] `Import Statement`
    - The imported libraries, submodules or functions should be used in the following function implementation.
    - Add missing libraries, submodules or functions if one is used but not imported.
- `Problem Function`
    - `Function Signature` and its corresponding `Docstring`
        - The `Function Name` should be obfuscated in the format of `f_[NUMBER]` to ensure anonymity. [NUMBER] should be the one inside the file name.
        - The docstrings (an example can be found in the Google Python Style Guide) should contain:
            - A `Functionality Description`.
            - `Function Parameters` and their `Function Parameter Descriptions`.
            - 2-3 `Running Examples` in the Python interpreter and their expected outputs.
- `Solution`
    - The function implementation to fulfil the functionality described in the `Docstring`.

`Test Suite` contains three parts:
- [Standalone] `Import Statement`
    - The imported libraries, submodules or functions should be used in the following tests.
- [Standalone] `TestCases` class
    - The class should contain at least five distinct test cases.
    - The test cases should be aligned with the docstring description. Test cases should not assert any attributes which are not specifically mentioned.
    - The test cases should cover as many branches in the `Problem Function` as possible. In order to get the complete coverage, you should use the command `coverage run -m unittest f_[NUMBER]_[NAME].py && coverage report -m`. Replace `f_[NUMBER]_[NAME].py` with the file name you are testing with. Ignore the missing lines in the `Test Suite`.
    - `Programming Problem` should be able to pass all these test cases. This means the scripts should run successfully without any failed test cases when you run `python XXX.py` in the terminal.
- [Standalone] `run_tests` function
    - The function should only contain the code helping to execute the test cases.
## Issues To Be Addressed
You may notice the existence of the following issues in the given Python script:
- `Function Name` has not been obfuscated.
    - Replace the name with `f`.
- `Docstring` is unclear, ambiguous, impractical or not well aligned with `Solution`.
    - `Function Description` should describe a practical functionality.
    - You should either refine the `Function Description` or `Solution`. Choose the one more feasible to be done.
    - Make sure at least 2 correct `Running Examples` are included.
- `Solution` does not use all imported libraries or APIs.
    - (a) Try to refine the `Programming Problem` so that the `Function Description` implies the corresponding API usage and such APIs are correctly invoked in `Solution`.
    - If (a) is difficult to complete, remove the unused import statements.
- `Solution` uses APIs that are not included in `Import Statement`.
    - Add the corresponding import statements if these APIs are necessary to complete the functionality. Otherwise, refine `Function Description` so that the functionality will require such APIs.
- `Solution` uses fewer than 2 libraries.
    - You should refine `Programming Problem` so that the functionality must require the API usage and invoke APIs from at least 2 distinct libraries. You can use ChatGPT (GPT-4) in Advanced Data Analysis for inspiration.
- `Solution` uses APIs in `random` or the random functionality.
    - Initialize the random seed for each `TestCases` to control the behavior.
- `Solution` contains dummy code.
    - Based on your understanding, replace the dummy code with the actual implementation of each part.
- `Solution` contains display functionality, such as `print()` and `matplotlib.pyplot.show()`.
    - If the function requires you to write or show some data, you shall either return the data or make the function take a path for file storage or a variable for data visualization. For example, if the function requires you to show a plot, you must return the plot (via Axes). If there is a specific attribute inside the object, you should mention it in the `Docstring` and test it inside `TestCases`. For example, the plot contains the specific title or label names. You should make sure that these attributes are either stated in `Docstring` or implied by `Docstring`.
    - If the function requires checking some properties and uses `print()` to show the results, you need to modify this function to return these results. The revised function should amalgamate these properties with the preexisting return values, thereby facilitating more versatile utilization of the outputs in the rest of your code.
    - Refer to Step 3 in the guidelines of the previous stage.
- Global constants before `Problem Function`.
    - If the global constants are used as sample inputs in the `Solution`, remove them and write your own test input in `TestCases`.
    - If the global constants are unused, remove them directly.
- Test cases inside `TestCases` only check the range of returned results or fail to test in a specific way.
    - It is assumed that you now have full control of `Programming Problem`. Write the test cases to validate if the returned results are equal to certain values. For the returned objects, validate if the attributes are equal to certain values.
- Test cases do not use any deterministic values as expected outputs.
    - Come up with the expected outputs after testing.
- `TestCases` uses libraries or APIs that are not included in `Import Statement`.
    - Add the corresponding import statements if these APIs are necessary to complete the testing. Otherwise, remove such APIs.
- `TestCases` contains test cases that do not work for `Solution`.
    - Repair these test cases or replace them with better cases.
- `TestCases` does not test all attributes of the returned object, where these attributes are implied by `Function Description` or completed by `Solution`.
    - Add lines of code to test these attributes.
    - If these attributes are not mentioned or implied by `Function Description`, try to describe them in `Function Description`.
- `TestCases` does not test the files that result from `Solution`.
    - Some files are created during the execution of `Programming Problem`. Add necessary lines of code to test the attributes of these files in each test case.
- `TestCases` is wrapped in `run_tests`.
    - Separate these two.
- Test cases in `TestCases` are duplicated or used to test the same behavior.
    - Remove them if there is a huge overlap. Replace them with more complex test cases. Make sure that at least five test cases are included.
- Test data used in `TestCases` is missing.
    - You need to manually design the data or utilize 3rd-party libraries and websites (e.g. `Faker` and `unittest.mock`) to generate or mock the test data. Refer to Step 4 in the guidelines of the previous stage.
- Lack of return value.
    - Functions should have clear return values to indicate the result, and these should be tested in the `TestCases`.
- Lack of corner cases.
    - Corner cases should be considered in the function or in the test cases.
- Lack of error handling.
    - You should add necessary checks for null inputs, incorrect data types, or values out of expected ranges to deal with incorrect input formats (a sketch is given after this list).
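A minimal illustration of the kind of input checking this last item asks for; the function, its arguments, and the error messages are hypothetical:

```python
def f(values, scale=1.0):
    """Scale a list of numbers, rejecting malformed input up front."""
    if values is None:
        raise ValueError("values must not be None")
    if not isinstance(values, list):
        raise TypeError("values must be a list of numbers")
    if scale <= 0:
        raise ValueError("scale must be a positive number")
    return [v * scale for v in values]
```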
## Further Explanation of Each Issue

1. `Function Name` has not been obfuscated:
    - The given function should have a generic name such as `f` to ensure anonymity. This prevents the user from inferring the function's purpose based on its name.
    - Example:
        Before: `def calculate_average(nums):`
        After: `def f(nums):`
2. `Docstring` is unclear, ambiguous, impractical or not well aligned with `Solution`:
    - The function's docstring should provide a clear and concise description of its purpose, expected inputs, outputs, and examples of usage. If the description is vague or doesn't match the function's behavior, it can lead to confusion.
    - Example:
        Before: `"""Calculates something."""`
        After: `"""Calculates the average of a list of numbers."""`
3. `Solution` does not use all imported libraries or APIs:
    - If libraries are imported but not used in the function, it indicates redundant code or a mismatch between the problem description and the solution.
    - Example:
        Before: `import math` (but no usage of `math` in the function)
        After: Remove `import math` or ensure it's used in the function.
4. `Solution` uses APIs that are not included in `Import Statement`:
    - All external libraries or functions used in the solution should be imported at the beginning of the script to ensure the code runs without errors.
    - Example:
        If using `sqrt` from the `math` library in the function, ensure `from math import sqrt` is present at the beginning.
5. `Solution` does not use any library APIs:
    - The problem should be designed in a way that requires the usage of library APIs to solve it, ensuring the challenge of integrating external tools.
    - Example:
        If the problem is to calculate the square root, the solution should leverage the `math.sqrt` function.
6. `Solution` uses APIs in `random`, but does not pass a random seed to `Function Parameters`:
    - When using random functionalities, for reproducibility, it's good practice to allow the user to set a seed.
    - Example:
        Before: `random.randint(1, 10)`
        After: `random.seed(seed); random.randint(1, 10)`
11. `TestCases` does not test all attributes of the returned object:
    - If the function returns an object with multiple attributes or methods, the test cases should validate all of them to ensure complete coverage. For example, when plotting data on a graph, you might get an `AxesSubplot` object in return. This object has various attributes, like the title, x-label, y-label, and the data points themselves. You should test all of these attributes if they are required in the functionality.
12. `TestCases` does not test the files that result from `Solution`:
    - If the function creates or modifies files, the test cases should validate these files to ensure the function works as expected.
13. `TestCases` is wrapped in `run_tests`:
    - The test cases and the function to run them should be separated for clarity.
14. Test cases in `TestCases` are duplicated or used to test the same behavior:
    - Redundant test cases should be removed to keep the test suite concise and focused.