pretty_name: Hugging Face Datasets GitHub Issues
size_categories:
- 1K<n<10K
---
# Dataset Card for Hugging Face Datasets GitHub Issues

## Dataset Details

### Dataset Description

This dataset is a collection of **GitHub Issue and Pull Request metadata** scraped from the public **`huggingface/datasets`** repository using the GitHub REST API. It contains **3,261 distinct issue records** (excluding pull requests), extracted from an initial fetch of 7,808 samples, and covers the repository's activity up to late 2025.

The records form a rich corpus of **real-world technical communication**, primarily in **English (en)**, drawn from the **open-source software development** domain of the `datasets` library. The dataset was curated to offer a clean, usable source for tasks such as **issue classification** and **summarization** in a highly technical context.
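A minimal loading sketch with the 🤗 `datasets` library; the repository id used here is hypothetical, so substitute the dataset's actual Hub id:

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the actual Hub id of this dataset.
issues = load_dataset("ahmedfarazsyk/github-issues", split="train")

print(issues)              # features and row count (3,261 issues)
print(issues[0]["title"])  # title of the first record
```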
## Uses

This simplified dataset primarily supports tasks based on textual content and intrinsic labels (`state`).

**text-classification: Issue State Classification**
The dataset can be used to train a model for **Issue State Classification**: predicting whether an issue/pull request is currently `open` or `closed` based solely on its `title` and `body` text. Success on this task is typically measured by a high **Accuracy** or **F1-score**.
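A hedged preprocessing sketch for this task (the Hub id is hypothetical, as above; the binary label mapping is one reasonable choice, not part of the dataset):

```python
from datasets import load_dataset

issues = load_dataset("ahmedfarazsyk/github-issues", split="train")  # hypothetical id

# Concatenate title and body as the input text; map state -> integer label.
def to_state_example(record):
    text = (record["title"] or "") + "\n\n" + (record["body"] or "")
    return {"text": text, "label": 0 if record["state"] == "open" else 1}

clf_dataset = issues.map(to_state_example, remove_columns=issues.column_names)
```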
**other:issue-summarization**
The dataset can be used to train a model for **Issue Summarization** (sequence-to-sequence): generating a concise title/summary from the longer `body` text of the issue. Success on this task is typically measured by a high **ROUGE-L** score.
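A sketch of building `body` → `title` pairs for this task (hypothetical Hub id; the `document`/`summary` column names are arbitrary):

```python
from datasets import load_dataset

issues = load_dataset("ahmedfarazsyk/github-issues", split="train")  # hypothetical id

# body -> title pairs for sequence-to-sequence training; drop records lacking text.
pairs = (issues
         .filter(lambda r: r["body"] is not None and r["title"] is not None)
         .map(lambda r: {"document": r["body"], "summary": r["title"]},
              remove_columns=issues.column_names))
```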
**other:issue-type-classification**
The dataset can be used to train a model for **Issue Type Classification**: classifying whether a record represents a bug, a feature request, or a general question (derived or annotated from the `title` and `body` fields).
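Since type labels are not shipped with the dataset, they must be derived; a naive keyword heuristic, purely illustrative, might look like this:

```python
# Illustrative only: real annotations would require manual labeling
# or the GitHub `labels` field rather than keyword matching.
def guess_issue_type(record):
    text = ((record["title"] or "") + " " + (record["body"] or "")).lower()
    if any(kw in text for kw in ("bug", "error", "traceback")):
        return "bug"
    if any(kw in text for kw in ("feature", "request", "support for")):
        return "feature_request"
    return "question"
```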
### Languages

The primary language represented in the dataset is **English (en)** (BCP-47 code: `en`), consistent with standard practice for global open-source projects. The text is technical, informal, and conversational, reflecting developer communication on GitHub.
## Dataset Structure

Each record follows the raw GitHub REST API issue schema. The annotated sketch below maps every field to its type; comments mark elided sub-fields:

```jsonc
{
  "url": "String",
  "repository_url": "String",
  "labels_url": "String",
  "comments_url": "String",
  "events_url": "String",
  "html_url": "String",
  "id": "Integer",
  "node_id": "String",
  "number": "Integer",
  "title": "String",
  "user": {
    "login": "String",
    "id": "Integer",
    "node_id": "String",
    "avatar_url": "String",
    // ... (other user fields)
    "site_admin": "Boolean"
  },
  "labels": [
    // List of Structs
    {
      "name": "String",
      "color": "String",
      "id": "Integer"
      // ... (other label fields)
    }
  ],
  "state": "String",
  "locked": "Boolean",
  "assignee": "Null/Struct",
  "assignees": [
    // List of Structs (User objects)
  ],
  "milestone": "Null/Struct",
  "comments": "Integer",
  "created_at": "Timestamp",
  "updated_at": "Timestamp",
  "closed_at": "Timestamp/Null",
  "author_association": "String",
  "type": "String",
  "active_lock_reason": "Null/String",
  "draft": "Boolean",
  "pull_request": {
    "url": "String",
    "html_url": "String",
    "diff_url": "String",
    "patch_url": "String",
    "merged_at": "Timestamp/Null"
  },
  "body": "String",
  "closed_by": "Null/Struct",
  "reactions": {
    "url": "String",
    "total_count": "Integer"
    // ... (other reaction counts)
  },
  "timeline_url": "String",
  "performed_via_github_app": "Null",
  "state_reason": "Null/String"
}
```
### Data Fields

All original fields are retained except for `milestone`. This includes complex nested structures and all timestamps.

- `id`: int64. The unique GitHub identifier.
- `number`: int64. The sequential issue/PR number.
- `title`: string. The title of the issue/PR (primary input).
- `state`: string. The current status (`open` or `closed`).
- `comments`: int32. The number of comments.
- `created_at`: timestamp. The creation time.
- `updated_at`: timestamp. The last update time.
- `closed_at`: timestamp. The closure time (null if open).
- `user`: struct. Metadata about the author.
- `labels`: list[struct]. A list of applied labels.
- `pull_request`: struct. Metadata specific to a pull request (includes the `merged_at` timestamp); null for the retained records, since PRs were filtered out.
- `body`: string. The main text description (primary input).
- `assignees`: list[struct]. A list of assigned users.
- `reactions`: struct. Reaction counts (e.g., +1, heart).
- `milestone`: (dropped) the only field removed during normalization.
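To make the nesting concrete, a small access sketch (hypothetical Hub id again):

```python
from datasets import load_dataset

issues = load_dataset("ahmedfarazsyk/github-issues", split="train")  # hypothetical id

record = issues[0]
print(record["state"], record["comments"])             # top-level scalars
print(record["user"]["login"])                         # field inside a nested struct
print([label["name"] for label in record["labels"]])   # names from a list[struct]
```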
### Data Splits

| Split Name | Sample Type | Number of Examples |
| --- | --- | --- |
| initial_fetch | Issues & PRs | 7,808 |
| train | Pure Issues | 3,261 |
### Criteria for Splitting

The data was collected as a single stream and is presented as one split (`train`); the `initial_fetch` row above describes the raw collection before pull requests were filtered out.
## Dataset Creation

### Curation Rationale

The dataset was curated to provide a high-quality corpus of pure issues for supervised learning. The primary motivations were to:

1. Separate issues from PRs to avoid class confusion.
2. Augment issues with their comments to provide the full conversational context needed for realistic AI applications such as automatic issue response or summarization.
### Initial Data Collection and Normalization

- **Process:** Data was collected using the GitHub REST API (`/repos/huggingface/datasets/issues`) with `state=all`, resulting in 7,808 total records (issues and PRs).
- **Separation:** The initial records were filtered to keep only those where the `pull_request` field was null, resulting in 3,261 issues (see the sketch below).
- **Augmentation:** A subsequent fetch retrieved all associated comments for each of the 3,261 issues.
- **Normalization:** Only the `milestone` field was explicitly dropped before saving the `.jsonl` file; all other original fields were retained intact.
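A minimal sketch of the first two steps, assuming a paginated fetch with `requests` and a placeholder token (the exact client code is not part of this card):

```python
import requests

# GITHUB_TOKEN is a placeholder; an authenticated client avoids low rate limits.
headers = {"Authorization": "Bearer GITHUB_TOKEN"}
url = "https://api.github.com/repos/huggingface/datasets/issues"

records, page = [], 1
while True:
    resp = requests.get(url, headers=headers,
                        params={"state": "all", "per_page": 100, "page": page})
    batch = resp.json()
    if not batch:
        break
    records.extend(batch)
    page += 1

# Separation: the issues endpoint also returns PRs, which carry a
# "pull_request" key; keeping records without it yields the pure issues.
issues = [r for r in records if r.get("pull_request") is None]
```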
### Who are the source language producers?

The data was human-generated by developers, engineers, and community members interacting with the `huggingface/datasets` repository on GitHub.
## Personal and Sensitive Information

Fields such as `user` contain publicly available GitHub login names and user IDs, which are considered personal identifiers. The raw text in the `body` field may also contain indirect personal or sensitive information posted by users.
## Considerations for Using the Data

### Discussion of Biases

The primary bias is selection bias: the data is specific to the `datasets` library and is heavily skewed toward technical, engineering, and machine learning terminology.
## Additional Information

### Licensing Information

The source data is derived from a public GitHub repository released under the Apache-2.0 License.