Software Readability Dataset
This repository contains the dataset used to build and evaluate the readability model presented in:
*A General Software Readability Model*, Jonathan Dorn and Westley Weimer, University of Virginia
The dataset consists of human-annotated code snippets sampled from real open-source projects and labeled for perceived readability. It is the largest such dataset collected for software readability research to date.
Dataset Summary
| Property | Value |
|---|---|
| Total snippets | 360 |
| Programming languages | Java, Python, CUDA |
| Snippet lengths | ~10, ~30, ~50 lines |
| Human annotators | ≈ 5,468 |
| Total ratings | ≈ 76,741 readability votes |
| Rating scale | 1–5 Likert (1 = very unreadable, 5 = very readable) |
| Annotator background | students + industry (1,000+ with 5+ years professional experience) |
| Source projects | 30 open-source repositories |
The dataset was collected through an IRB-approved online survey. Each participant viewed 20 random snippets and rated their readability. Samples were drawn automatically and uniformly from repositories, without manual curation, to avoid bias.
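For orientation, here is a minimal sketch of loading and sanity-checking the raw votes with pandas. The file name `ratings.csv` and the column names (`snippet_id`, `annotator_id`, `rating`) are assumptions for illustration; the actual files in this repository may be laid out differently.

```python
import pandas as pd

# Hypothetical layout: one row per individual vote, with columns
# snippet_id, language, annotator_id, rating (1-5 Likert).
ratings = pd.read_csv("ratings.csv")

# Basic sanity checks against the figures in the summary table above.
print("snippets:", ratings["snippet_id"].nunique())       # expected ~360
print("annotators:", ratings["annotator_id"].nunique())   # expected ~5,468
print("votes:", len(ratings))                              # expected ~76,741
print(ratings["rating"].describe())                        # 1-5 Likert scale
```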
Tasks Supported
This dataset supports several research directions:
Core Tasks
- Readability prediction (regression or classification; see the baseline sketch after this list)
- Feature engineering for code comprehension
- Cross-language readability comparison
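As a concrete starting point for the prediction task, the sketch below derives a few shallow surface features from each snippet and fits a regressor against the mean human rating. The file names, column names, and feature set are illustrative assumptions, not part of the released dataset or the model from the paper.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: snippets.csv (snippet_id, language, code) and
# ratings.csv (snippet_id, annotator_id, rating).
snippets = pd.read_csv("snippets.csv")
ratings = pd.read_csv("ratings.csv")

# Target: mean readability score per snippet.
target = ratings.groupby("snippet_id")["rating"].mean()
snippets = snippets.set_index("snippet_id").loc[target.index]

def shallow_features(code: str) -> list[float]:
    """A few simple surface features; real readability models use richer ones."""
    lines = code.splitlines() or [""]
    return [
        len(lines),                                              # snippet length
        sum(len(l) for l in lines) / len(lines),                 # mean line length
        max(len(l) for l in lines),                              # max line length
        code.count("(") + code.count(")"),                       # parenthesis density
        sum(l.strip().startswith(("#", "//")) for l in lines),   # comment lines
    ]

X = [shallow_features(c) for c in snippets["code"]]
y = target.values

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R^2:", scores.mean())
```

Classification variants of the task typically threshold the mean rating (for example, above vs. below the median) rather than regressing on the raw score.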
Languages & Projects
Languages sampled:
- Java (large, object-oriented)
- Python (indentation-sensitive)
- CUDA (GPU programming)
Each language was sampled from 10 real open-source repositories (30 in total), including widely used projects such as:
- Liferay Portal
- SQuirreL SQL Client
- Docutils
- GPUMLib
(see paper for full list)
Annotation Protocol
- Ratings used a 1–5 Likert scale (see the aggregation sketch after this list):
  - 1 = very unreadable
  - 5 = very readable
- Code was syntax-highlighted in the survey.
- Annotators could revise earlier answers.
- Snippets were shown as sampled and did not need to be syntactically complete.
- Annotators provided experience metadata.
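Because individual annotators can use the 1–5 scale differently (some rate leniently, others harshly), a common preprocessing step is to z-score each annotator's votes before averaging per snippet. This is a general technique, not a step prescribed by the paper, and the column names below are assumptions.

```python
import pandas as pd

# Hypothetical layout: one row per vote with columns
# snippet_id, annotator_id, rating (1-5 Likert).
votes = pd.read_csv("ratings.csv")

def zscore(series: pd.Series) -> pd.Series:
    """Normalize one annotator's ratings to zero mean and unit variance."""
    std = series.std(ddof=0)
    return (series - series.mean()) / std if std > 0 else series * 0.0

# Control for per-annotator leniency/harshness.
votes["rating_z"] = votes.groupby("annotator_id")["rating"].transform(zscore)

# Aggregate to one readability score per snippet, raw and normalized.
per_snippet = votes.groupby("snippet_id").agg(
    mean_rating=("rating", "mean"),
    mean_rating_z=("rating_z", "mean"),
    n_votes=("rating", "size"),
)
print(per_snippet.head())
```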
Citation
If you use this dataset, please cite:
@inproceedings{dorn2012readability,
  title     = {A general software readability model},
  author    = {Dorn, Jonathan and Weimer, Westley},
  booktitle = {International Conference on Software Engineering},
  year      = {2012}
}
Acknowledgements
Special thanks to the thousands of anonymous survey respondents, Udacity participants, and Reddit programmers who contributed ratings.
Contact
If you have questions or would like to extend the dataset, feel free to open an issue or discussion in the repo.