code-bert-score

CodeBERTScore: an automatic metric for code generation, based on BERTScore

https://github.com/neulab/code-bert-score

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
    3 of 17 committers (17.6%) from academic institutions
  • Institutional organization owner
    Organization neulab has institutional domain (www.cs.cmu.edu)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.9%) to scientific vocabulary

Keywords

bert bertscore code code-bert-score code-bertscore codebert codebertscore score

Keywords from Contributors

transformer cryptocurrency cryptography jax
Last synced: 6 months ago

Repository

CodeBERTScore: an automatic metric for code generation, based on BERTScore

Basic Info
  • Host: GitHub
  • Owner: neulab
  • License: mit
  • Language: Jupyter Notebook
  • Default Branch: main
  • Homepage:
  • Size: 24.6 MB
Statistics
  • Stars: 199
  • Watchers: 5
  • Forks: 15
  • Open Issues: 4
  • Releases: 2
Topics
bert bertscore code code-bert-score code-bertscore codebert codebertscore score
Created over 3 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License Citation

README.md

CodeBERTScore

This is the official implementation of the paper:

Shuyan Zhou, Uri Alon, Sumit Agarwal, Graham Neubig, CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code

CodeBERTScore is an Automatic Evaluation Metric for Code, based on BERTScore. This repository is based on the code of BERTScore, and we are grateful to the authors for releasing their code.

April 2023 - CodeBERTScore is now available on PyPI, which means that you can simply pip install code-bert-score!


Example:

Figure (a) shows a reference code snippet in Java. Figures (b) and (c) show two generated predictions. Among these two candidates and given the reference, BLEU prefers (scores higher) the code in (b), which is not functionally equivalent to the reference, while CodeBERTScore prefers the code in (c), which is functionally equivalent to the reference.

How does it work?

Like BERTScore, CodeBERTScore leverages the pre-trained contextual embeddings from a model such as CodeBERT and matches tokens in the candidate and reference snippets by cosine similarity. Unlike BERTScore, CodeBERTScore also encodes natural language input or other context along with the generated code, but does not use that context to compute cosine similarities.
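
To make the matching step concrete, here is a minimal, illustrative sketch of BERTScore-style greedy matching over token embeddings. It is not the library's internal implementation and omits IDF weighting, special-token masking, and the context handling described above:

```python
import torch
import torch.nn.functional as F

def greedy_match_scores(cand_emb: torch.Tensor, ref_emb: torch.Tensor):
    """Toy BERTScore-style matching.

    cand_emb: [num_candidate_tokens, dim] contextual embeddings of the candidate.
    ref_emb:  [num_reference_tokens, dim] contextual embeddings of the reference.
    """
    cand = F.normalize(cand_emb, dim=-1)
    ref = F.normalize(ref_emb, dim=-1)
    sim = cand @ ref.T                        # pairwise cosine similarities
    precision = sim.max(dim=1).values.mean()  # each candidate token -> best reference token
    recall = sim.max(dim=0).values.mean()     # each reference token -> best candidate token
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```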

This example shows how CodeBERTScore can compute the similarity between the Python expressions x ** 0.5 and math.sqrt(x), which are functionally equivalent, even though they have very few overlapping tokens.

Usage

```python
import code_bert_score
pred_results = code_bert_score.score(cands=predictions, refs=refs, lang='python')
```

Where pred_results is a 4-tuple of (precision, recall, F1, F3), and each element is a 1-D tensor of scores, one per prediction-reference pair. F3 is similar to the well-known F1 score, but considers recall 3 times as important as precision. See the definition on Wikipedia.
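
For reference, F3 is the standard F-beta score with beta = 3. A minimal sketch of the formula (illustrative, not the library's code):

```python
def f_beta(precision: float, recall: float, beta: float = 3.0) -> float:
    # Standard F-beta: recall is weighted beta times as heavily as precision.
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# With precision = 0.8 and recall = 0.6: F1 ~= 0.686, while F3 ~= 0.615,
# i.e. F3 sits much closer to recall.
print(f_beta(0.8, 0.6, beta=1.0), f_beta(0.8, 0.6))
```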

See our example.py script. Additional details are shown in the original BERTScore demo notebook.

Huggingface Models

We fine-tuned the microsoft/codebert-base-mlm model for 1,000,000 steps (with batch_size=32) on several languages separately.

We released the following models to the Huggingface hub:
  • neulab/codebert-python (the default model for lang='python')
  • neulab/codebert-javascript (the default model for lang='javascript' or 'js')
  • neulab/codebert-c (the default model for lang='c')
  • neulab/codebert-cpp (the default model for lang='cpp' or 'c++')
  • neulab/codebert-java (the default model for lang='java')

The appropriate model will be loaded automatically when passing the lang argument to the score(..) function, for example: lang='python'. For other uses, these models can be loaded using (for example):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("neulab/codebert-python")
model = AutoModelForMaskedLM.from_pretrained("neulab/codebert-python")
```
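
Continuing from the snippet above, a hypothetical example of extracting contextual token embeddings from the loaded model (illustrative only; the scoring library handles tokenization, layer selection, and masking itself):

```python
import torch

inputs = tokenizer("math.sqrt(x)", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
embeddings = outputs.hidden_states[-1]  # last-layer contextual embeddings
print(embeddings.shape)                 # [1, num_tokens, hidden_dim]
```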

Additional Features

  • We found that in NL->Code problems, more accurate results are achieved by encoding the NL sources with the code prediction, but then measuring similarity only for the encoded code:

pred_results = code_bert_score.score(cands=predictions, refs=refs, lang='python', sources=sources)

  • We also found that using Inverse Document Frequencies improves the results, similarly to the original BERTScore. We included an example script, compute_idf.py, that shows how to precompute them. The resulting dictionary can then be used with the argument idf=idf_dict. Our IDF dicts can be found in ./idf_dicts/.

  • Tuning the layer from which the similarity is computed is also helpful, using num_layers=N where N is between 5 and 10 (see the combined example after this list).

  • We found that more accurate results are achieved by encoding the entire input, but measuring the similarity only between non-punctuation and non-whitespace tokens. To disable the removal of punctuation tokens, use no_punc=False.
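
Putting these options together, a sketch of a call that uses all of them (the argument names are the ones described above; num_layers=7 is just one value in the suggested 5-10 range, and idf_dict is a precomputed IDF dictionary):

```python
pred_results = code_bert_score.score(
    cands=predictions,
    refs=refs,
    lang='python',
    sources=sources,    # encode the NL context together with each prediction
    idf=idf_dict,       # precomputed inverse-document-frequency weights
    num_layers=7,       # layer to take the embeddings from
    no_punc=False,      # keep punctuation tokens when measuring similarity
)
```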

See also our example.py script. Additional details are shown in the original BERTScore demo notebook.

Training

The run_mlm.py script can be used to fine-tune the base model microsoft/codebert-base-mlm on specific languages.

Evaluation

The code to reproduce the results in the paper can be found in the evaluation directory.

Human Evaluation

We find that CodeBERTScore is more correlated with human preference compared to a variety of common metrics. See more details in the paper.

Functional Correctness

We find that CodeBERTScore is more correlated with functional correctness compared to a variety of common metrics. See more details in the paper.

Citation

@article{zhou2023codebertscore,
  url = {https://arxiv.org/abs/2302.05527},
  author = {Zhou, Shuyan and Alon, Uri and Agarwal, Sumit and Neubig, Graham},
  title = {CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code},
  publisher = {arXiv},
  year = {2023},
}

Owner

  • Name: NeuLab
  • Login: neulab
  • Kind: organization
  • Location: Pittsburgh, PA

Graham Neubig's Lab at LTI/CMU

GitHub Events

Total
  • Issues event: 8
  • Watch event: 35
  • Issue comment event: 7
  • Fork event: 1
Last Year
  • Issues event: 8
  • Watch event: 35
  • Issue comment event: 7
  • Fork event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 146
  • Total Committers: 17
  • Avg Commits per committer: 8.588
  • Development Distribution Score (DDS): 0.678
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Felix Wu f****u@g****m 47
urialon u****1@g****m 40
Tiiiger z****x@g****m 33
Varsha Kishore v****e@d****u 5
Shuyan Zhou a****8@g****m 5
varshakishore v****6@g****m 3
Ethan Perez p****z@n****u 3
Jihyung Moon m****g@g****m 1
Jin Yong (Jeffrey) Yoo j****7@g****m 1
Praveenkumar p****8@g****m 1
Radhika Dua r****7@g****m 1
Ziad Amerr 7****r 1
dougian d****2@g****m 1
isabelcabezasm i****e@m****m 1
Yoh Okuno y****o@r****p 1
Varsha Kishore v****2@g****u 1
lwaekfjlk 1****2@q****m 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 10
  • Total pull requests: 5
  • Average time to close issues: 28 days
  • Average time to close pull requests: about 2 hours
  • Total issue authors: 10
  • Total pull request authors: 4
  • Average comments per issue: 2.0
  • Average comments per pull request: 0.6
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 5
  • Pull requests: 0
  • Average time to close issues: 3 days
  • Average time to close pull requests: N/A
  • Issue authors: 5
  • Pull request authors: 0
  • Average comments per issue: 0.8
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • terryyz (1)
  • lenijwp (1)
  • jackswl (1)
  • btyu (1)
  • marahgh (1)
  • higgsbosonprose (1)
  • shmuelfomberg (1)
  • VichyTong (1)
  • athmanar (1)
  • auxtern (1)
Pull Request Authors
  • isabelcabezasm (4)
  • urialon (1)
  • lwaekfjlk (1)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi: 1,017 last month
  • Total docker downloads: 724
  • Total dependent packages: 1
  • Total dependent repositories: 1
  • Total versions: 2
  • Total maintainers: 1
pypi.org: code-bert-score

PyTorch implementation of Code BERT score

  • Versions: 2
  • Dependent Packages: 1
  • Dependent Repositories: 1
  • Downloads: 1,017 Last month
  • Docker Downloads: 724
Rankings
Docker downloads count: 2.8%
Stargazers count: 6.7%
Dependent packages count: 10.1%
Forks count: 10.2%
Average: 10.7%
Downloads: 12.6%
Dependent repos count: 21.6%
Maintainers (1)
Last synced: 6 months ago