CAMELYON16 - Multiple Instance Learning (MIL)
Important: This dataset is part of the torchmil library.
This repository provides an adapted version of the CAMELYON16 dataset tailored for Multiple Instance Learning (MIL), designed for use with the CAMELYON16Dataset class from torchmil. CAMELYON16 is a widely used benchmark in MIL research, making this adaptation particularly valuable for developing and evaluating MIL models.
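As a rough illustration, loading the dataset through torchmil might look like the sketch below. The import path and constructor arguments are assumptions, not the confirmed torchmil API; consult the torchmil documentation for the actual signature.

# Minimal usage sketch. Import path and arguments are assumed,
# not confirmed against the torchmil API.
from torchmil.datasets import CAMELYON16Dataset

dataset = CAMELYON16Dataset(
    root="/data/Camelyon16_MIL",  # directory with the extracted archives (assumed)
    patch_size=512,               # selects the patches_512 subset (assumed parameter)
    features="features_name",     # which precomputed features to load (assumed parameter)
)
bag = dataset[0]  # one bag per WSI: patch features, coordinates, and labels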
About the Original CAMELYON16 Dataset
The original CAMELYON16 dataset contains whole-slide images (WSIs) of hematoxylin and eosin (H&E) stained lymph node sections. The task is to identify whether each slide contains metastatic tissue and to localize it precisely. The dataset includes high-quality pixel-level annotations marking the metastases.
Dataset Description
We have preprocessed the WSIs by extracting relevant patches and computing features for each patch using various feature extractors.
- A patch is labeled as positive (patch_label=1) if more than 50% of its pixels are annotated as metastatic.
- A WSI is labeled as positive (label=1) if it contains at least one positive patch.
This means a slide is considered positive if there is any evidence of metastatic tissue.
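In code, the labeling rule amounts to the following. This is a minimal NumPy sketch for illustration only: the binary-mask input format is an assumption, and this is not the preprocessing script used to build the dataset.

import numpy as np

def patch_label(mask: np.ndarray) -> int:
    # Positive (patch_label=1) when more than 50% of the patch's
    # pixels are annotated as metastatic; mask is binary (assumed format).
    return int(mask.mean() > 0.5)

def wsi_label(patch_labels: np.ndarray) -> int:
    # Positive (label=1) if the WSI contains at least one positive patch.
    return int(patch_labels.any())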
Directory Structure
After extracting the contents of the .tar.gz archives, the following directory structure is expected:
root
├── patches_{patch_size}
│   ├── features
│   │   └── features_{features_name}
│   │       ├── wsi1.npy
│   │       ├── wsi2.npy
│   │       └── ...
│   ├── labels
│   │   ├── wsi1.npy
│   │   ├── wsi2.npy
│   │   └── ...
│   ├── patch_labels
│   │   ├── wsi1.npy
│   │   ├── wsi2.npy
│   │   └── ...
│   └── coords
│       ├── wsi1.npy
│       ├── wsi2.npy
│       └── ...
└── splits.csv
Each .npy file corresponds to a single WSI. The splits.csv file defines train/test splits for standardized experimentation.
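If you prefer to bypass torchmil, the arrays for a single WSI can be loaded directly with NumPy. Below is a sketch assuming a patch size of 512, a hypothetical feature extractor name (resnet50), and a WSI called wsi1; the splits.csv column layout is also an assumption.

import numpy as np
import pandas as pd

root = "root"   # path to the extracted archive
name = "wsi1"   # one .npy file per WSI

# Extractor name "resnet50" is a placeholder; use the one you downloaded.
features = np.load(f"{root}/patches_512/features/features_resnet50/{name}.npy")
label = np.load(f"{root}/patches_512/labels/{name}.npy")               # bag label (0 or 1)
patch_labels = np.load(f"{root}/patches_512/patch_labels/{name}.npy")  # per-patch labels
coords = np.load(f"{root}/patches_512/coords/{name}.npy")              # patch coordinates in the WSI

splits = pd.read_csv(f"{root}/splits.csv")  # train/test assignment per WSI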