
Software Readability Dataset

This repository contains the dataset used to build and evaluate the readability model presented in:

"A General Software Readability Model," Jonathan Dorn and Westley Weimer, University of Virginia

The dataset consists of human-annotated code snippets sampled from real open-source projects and labeled for perceived readability. At the time of publication, it was the largest such dataset collected for software readability research.


📦 Dataset Summary

| Property | Value |
| --- | --- |
| Total snippets | 360 |
| Programming languages | Java, Python, CUDA |
| Snippet lengths | ~10, ~30, ~50 lines |
| Human annotators | ≈ 5,468 |
| Total ratings | ≈ 76,741 readability votes |
| Rating scale | 1–5 Likert (1 = very unreadable, 5 = very readable) |
| Annotator background | students + industry (1,000+ with 5+ years of professional experience) |
| Source projects | 30 open-source repositories |

The dataset was collected through an IRB-approved online survey. Each participant viewed 20 random snippets and rated their readability. Samples were drawn automatically and uniformly from repositories, without manual curation, to avoid bias.
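
If the individual votes are distributed as per-rating records, they can be aggregated into a mean readability score per snippet. The sketch below assumes a hypothetical CSV layout (ratings.csv with snippet_id, annotator_id, and score columns); these names are illustrative, not the dataset's documented schema.

```python
import pandas as pd

# Hypothetical schema: one row per individual readability vote.
# File and column names are assumptions, not the dataset's documented layout.
ratings = pd.read_csv("ratings.csv")  # columns: snippet_id, annotator_id, score (1-5)

# Aggregate the 1-5 Likert votes into a mean score and vote count per snippet.
per_snippet = (
    ratings.groupby("snippet_id")["score"]
    .agg(mean_score="mean", num_votes="count")
    .reset_index()
)

print(per_snippet.sort_values("mean_score").head())
```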


🧠 Tasks Supported

This dataset supports several research directions:

🎯 Core Tasks

  • readability prediction (regression or classification); a minimal baseline sketch follows this list
  • feature engineering for code comprehension
  • cross-language readability comparison
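
As a toy illustration of the readability-prediction task, the sketch below fits a ridge regression on a few hand-rolled surface features (average line length, comment density, maximum indentation) against per-snippet mean scores. The snippets, scores, and feature choices are placeholders for illustration; they are not the features or model from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

def surface_features(code: str) -> list[float]:
    """Toy surface features; the paper's model uses a much richer feature set."""
    lines = code.splitlines() or [""]
    avg_line_len = float(np.mean([len(l) for l in lines]))
    comment_density = sum(l.strip().startswith(("#", "//", "*")) for l in lines) / len(lines)
    max_indent = max(len(l) - len(l.lstrip()) for l in lines)
    return [avg_line_len, comment_density, max_indent]

# Placeholder data: in practice, load the snippets and their mean 1-5 ratings.
snippets = [
    "x = 1\n# add one\ny = x + 1\n",
    "def f(a,b,c,d,e):\n    return a+b+c+d+e*a-b/c\n",
]
mean_scores = [4.2, 2.1]

X = np.array([surface_features(s) for s in snippets])
y = np.array(mean_scores)

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X))  # predicted readability on the 1-5 scale
```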

🌍 Languages & Projects

Languages sampled:

  • Java (large, object-oriented)
  • Python (indentation-sensitive)
  • CUDA (GPU programming)

Each language was sampled from 10 real open-source repositories (30 total), including widely used projects such as:

  • Liferay Portal
  • SQuirreL SQL Client
  • Docutils
  • GPUMLib

(see paper for full list)


📝 Annotation Protocol

  • ratings used a 1–5 Likert scale:
    • 1 → very unreadable
    • 5 → very readable
  • code was syntax-highlighted in the survey
  • annotators could revise earlier answers
  • snippets were shown as-is; they were not required to be syntactically complete
  • annotators provided experience metadata (see the filtering sketch after this list)
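
Because each rating is tied to annotator experience metadata, one natural analysis is restricting to votes from experienced developers. As in the earlier sketch, the file and column names (ratings.csv, years_experience, snippet_id, score) are illustrative assumptions about the per-vote records, not the documented schema.

```python
import pandas as pd

# Same hypothetical per-vote layout as above; column names are assumptions.
ratings = pd.read_csv("ratings.csv")  # snippet_id, annotator_id, score, years_experience

# Keep only votes from annotators reporting 5+ years of professional experience.
experienced = ratings[ratings["years_experience"] >= 5]

# Compare mean readability per snippet between all votes and experienced-only votes.
comparison = pd.DataFrame({
    "all_votes": ratings.groupby("snippet_id")["score"].mean(),
    "experienced_only": experienced.groupby("snippet_id")["score"].mean(),
})
print(comparison.corr())
```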


🤝 Citation

If you use this dataset, please cite:

@inproceedings{dorn2012readability,
  title={A general software readability model},
  author={Dorn, Jonathan and Weimer, Westley},
  booktitle={International Conference on Software Engineering},
  year={2012}
}

🙌 Acknowledgements

Special thanks to the thousands of anonymous survey respondents, Udacity participants, and Reddit programmers who contributed ratings.


Contact

If you have questions or would like to extend the dataset, feel free to open an issue or discussion in the repo.

