

ICE-ID Dataset

Overview

ICE-ID is a benchmark dataset of Icelandic census records (1703–1920) for longitudinal identity resolution. It includes cleaned tabular features and a temporal graph of person records across census waves.

Files

raw_data/
├─ people.csv
├─ manntol_einstaklingar_new.csv
├─ parishes.csv
├─ districts.csv
├─ counties.csv

artifacts/
├─ row_labels.csv           # row_id, person mapping
├─ rows_with_person.csv     # linked subset of rows
├─ iceid_ml_ready.npz       # CSR matrix with numeric, one-hot, ordinal features
├─ temporal_graph.pt        # PyTorch Geometric data (edge_index, node_id)

Requirements

  • Python 3.7+
  • numpy, pandas, scipy, scikit-learn, torch, torch-geometric

Install via:

pip install numpy pandas scipy scikit-learn torch torch-geometric

Preprocessing

Run the preprocessing script to generate artifacts:

python preprocess.py --data-dir raw_data --out-dir artifacts

Usage

  • Load iceid_ml_ready.npz for ML features (rows × features).
  • Use row_labels.csv to map rows to persons.
  • Load temporal_graph.pt in PyG:
    import torch
    from torch_geometric.data import Data

    d = torch.load('artifacts/temporal_graph.pt')  # dict with 'edge_index' and 'node_id'
    graph = Data(edge_index=d['edge_index'])       # build a PyG graph from the edge list
    graph.node_id = d['node_id']                   # attach node ids to the graph
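For the tabular artifacts, the loading pattern can be sketched as follows. This is a minimal illustration, assuming iceid_ml_ready.npz was saved with scipy.sparse.save_npz and that the matrix rows align with row_id in row_labels.csv; the snippet uses tiny stand-in data so it runs without the actual files.

```python
import io
import numpy as np
import pandas as pd
import scipy.sparse as sp

# Stand-in for artifacts/iceid_ml_ready.npz (assumed scipy.sparse.save_npz format);
# in practice: X = sp.load_npz('artifacts/iceid_ml_ready.npz')
X = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 0.0]]))

# Stand-in for artifacts/row_labels.csv (columns: row_id, person; person has NaNs
# for unlinked rows); in practice: labels = pd.read_csv('artifacts/row_labels.csv')
labels = pd.read_csv(io.StringIO("row_id,person\n0,11848\n1,\n2,16476\n"))

# Keep only rows with a resolved person id, then align features to labels
linked = labels.dropna(subset=["person"])
X_linked = X[linked["row_id"].to_numpy()]

print(X_linked.shape)  # (2, 2): only rows 0 and 2 have a person id
```

Filtering on person before indexing into the matrix mirrors the role of rows_with_person.csv, which holds the linked subset of rows.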

Code requirements

  • The repository is split across Hugging Face and GitHub. The code required to run the pipeline and interact with the dataset can be found at https://github.com/IIIM-IS/ICE-ID-2.0. Please match the folder structure as indicated in the README found there.

License

MIT
