
| Project | Code | Dataset | Docker | Models | arXiv |

Benchmarking VLMs for detection, tumor/lesion size estimation, and angle/distance measurement from medical images

30.8M annotated samples | multi-modality | multi-anatomy | 3D/2D medical images
```bibtex
@misc{yao2025medvisiondatasetbenchmarkquantitative,
      title={MedVision: Dataset and Benchmark for Quantitative Medical Image Analysis},
      author={Yongcheng Yao and Yongshuo Zong and Raman Dutt and Yongxin Yang and Sotirios A Tsaftaris and Timothy Hospedales},
      year={2025},
      eprint={2511.18676},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.18676},
}
```
## News

- [Oct 8, 2025] Release of MedVision dataset v1.0.0
## Change Log

For essential updates, check the change log.
## Datasets
File structure: raw data will be automatically downloaded and processed; our annotations are included in each dataset folder.

The MedVision dataset consists of public medical images and quantitative annotations from this study. MRI: Magnetic Resonance Imaging; CT: Computed Tomography; PET: Positron Emission Tomography; US: Ultrasound; b-box: bounding box; T/L: tumor/lesion size; A/D: angle/distance; HF: Hugging Face; GC: Grand-Challenge; * redistributed.
| Dataset | Anatomy | Modality | Annotation | Availability | Source | # Sample b-box (Train/Test) | # Sample T/L (Train/Test) | # Sample A/D (Train/Test) | Status |
|---|---|---|---|---|---|---|---|---|---|
| AbdomenAtlas | abdomen | CT | b-box | open | HF | 6.8 / 2.9M | 0 | 0 | ✅ |
| AbdomenCT-1K | abdomen | CT | b-box | open | Zenodo | 0.7 / 0.3M | 0 | 0 | ✅ |
| ACDC | heart | MRI | b-box | open | HF*, others | 9.5 / 4.8K | 0 | 0 | ✅ |
| AMOS22 | abdomen | CT, MRI | b-box | open | Zenodo | 0.8 / 0.3M | 0 | 0 | ✅ |
| autoPET-III | whole body | CT, PET | b-box, T/L | open | HF*, others | 22 / 9.7K | 0.5 / 0.2K | 0 | ✅ |
| BCV15 | abdomen | CT | b-box | open | HF*, Synapse | 71 / 30K | 0 | 0 | ✅ |
| BraTS24 | brain | MRI | b-box, T/L | open | HF*, Synapse | 0.8 / 0.3M | 7.9 / 3.1K | 0 | ✅ |
| CAMUS | heart | US | b-box | open | HF*, others | 0.7 / 0.3M | 0 | 0 | ✅ |
| Ceph-Bio-400 | head and neck | X-ray | b-box, A/D | open | HF*, others | 0 | 0 | 5.3 / 2.3K | ✅ |
| CrossModDA | brain | MRI | b-box | open | HF*, Zenodo | 3.0 / 1.0K | 0 | 0 | ✅ |
| FeTA24 | fetal brain | MRI | b-box, A/D | registration | Synapse | 34 / 15K | 0 | 0.2 / 0.1K | ✅ |
| FLARE22 | abdomen | CT | b-box | open | HF*, others | 72 / 33K | 0 | 0 | ✅ |
| HNTSMRG24 | head and neck | MRI | b-box, T/L | open | Zenodo | 18 / 6.6K | 1.0 / 0.4K | 0 | ✅ |
| ISLES24 | brain | MRI | b-box | open | HF*, GC | 7.3 / 2.5K | 0 | 0 | ✅ |
| KiPA22 | kidney | CT | b-box, T/L | open | HF*, GC | 26 / 11K | 2.1 / 1.0K | 0 | ✅ |
| KiTS23 | kidney | CT | b-box, T/L | open | HF*, GC | 80 / 35K | 5.9 / 2.6K | 0 | ✅ |
| MSD | multiple | CT, MRI | b-box, T/L | open | others | 0.2 / 0.1M | 5.3 / 2.2K | 0 | ✅ |
| OAIZIB-CM | knee | MRI | b-box | open | HF | 0.5 / 0.2M | 0 | 0 | ✅ |
| SKM-TEA | knee | MRI | b-box | registration | others | 0.2 / 0.1M | 0 | 0 | ✅ |
| ToothFairy2 | tooth | CT | b-box | registration | others | 1.0 / 0.4M | 0 | 0 | ✅ |
| TopCoW24 | brain | CT, MRI | b-box | open | HF*, Zenodo | 43 / 20K | 0 | 0 | ✅ |
| TotalSegmentator | multiple | CT, MRI | b-box | open | HF*, Zenodo | 9.6 / 4.0M | 0 | 0 | ✅ |
| Total | | | | | | 22 / 9.2M | 23 / 9.6K | 5.6 / 2.4K | |
⚠️ The following datasets do not allow redistribution. You need to apply for access from the data owners, (optionally) upload the data to your private HF dataset repo, and set the corresponding environment variables.
| Dataset | Source | Host Platform | Env Var |
|---|---|---|---|
| FeTA24 | https://www.synapse.org/Synapse:syn25649159/wiki/610007 | Synapse | SYNAPSE_TOKEN |
| SKM-TEA | https://aimi.stanford.edu/datasets/skm-tea-knee-mri | Hugging Face | MedVision_SKMTEA_HF_ID |
| ToothFairy2 | https://ditto.ing.unimore.it/toothfairy2/ | Hugging Face | MedVision_ToothFairy2_HF_ID |
For SKM-TEA and ToothFairy2, you need to process the raw data and upload the preprocessed data to your private HF dataset repo. To use a private HF dataset, set `HF_TOKEN` and log in with `hf auth login --token $HF_TOKEN --add-to-git-credential`.
## New Datasets Guide

To add new datasets, check this blog for an introduction to the MedVision dataset.
## Requirement

Note: `trust_remote_code` is no longer supported in `datasets>=4.0.0`; install the library with `pip install datasets==3.6.0`.
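As a quick sanity check, the version requirement can be expressed as a tiny helper. This is our sketch, not part of MedVision; it only encodes the fact stated above that dataset scripts were removed in `datasets` 4.0.0.

```python
# Sketch: dataset scripts (and trust_remote_code) were removed in datasets 4.0.0,
# so MedVision's loading script requires a 3.x release such as 3.6.0.
# `supports_dataset_scripts` is a hypothetical helper, not a MedVision API.
def supports_dataset_scripts(datasets_version: str) -> bool:
    major = int(datasets_version.split(".")[0])
    return major < 4
```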
## Use

```python
import os
from datasets import load_dataset

# Set data folder
os.environ["MedVision_DATA_DIR"] = "<your/data/folder>"

# Pick a dataset config name and split
config = "<config-name>"  # e.g., "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test"  # "test" for a testing-set config; "train" for a training-set config

# Get dataset
ds = load_dataset(
    "YongchengYAO/MedVision",
    name=config,
    trust_remote_code=True,
    split=split_name,
)
```
List of config names: see ./info.
## Environment Variables

```bash
# Set where data will be saved; the complete dataset requires ~1 TB
export MedVision_DATA_DIR=<your/data/folder>

# Force download and processing of raw images; defaults to "False"
export MedVision_FORCE_DOWNLOAD_DATA="False"

# Force installation of the dataset codebase; defaults to "False"
export MedVision_FORCE_INSTALL_CODE="False"
```
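These flags are string-valued ("True"/"False"). A plausible way a loader could interpret them is sketched below; `env_flag` is our illustration, and the actual parsing in `MedVision.py` may differ.

```python
import os

# Illustrative: read a string-valued boolean flag such as
# MedVision_FORCE_DOWNLOAD_DATA, defaulting to "False" as documented above.
def env_flag(name: str, default: str = "False") -> bool:
    return os.environ.get(name, default).strip().lower() == "true"
```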
## Essential Dataset Concepts

We cover some essential concepts that help you use the MedVision dataset with ease.
### Concepts: Dataset & Data Configuration

- MedVision: the collection of public imaging data and our annotations
- dataset: name of a public dataset, such as `BraTS24`, `MSD`, `OAIZIB-CM`
- data-config: name of a predefined subset
  - naming convention: `{dataset}_{annotation-type}_{task-ID}_{slice}_{split}`
    - dataset: see details above
    - annotation-type:
      - `BoxSize`: detection annotations (bounding box)
      - `TumorLesionSize`: tumor/lesion size annotations
      - `BiometricsFromLandmarks`: angle/distance annotations
    - task-ID: `Task[xx]` (note: this is a local ID within the dataset, not a global ID in MedVision)
      - For datasets with multiple image-mask pairs, tasks are defined in `medvision_ds/datasets/*/preprocess_*.py` (source: medvision_ds)
      - e.g., detection tasks for the `BraTS24` dataset are defined in the `benchmark_plan` in `medvision_ds/datasets/BraTS24/preprocess_detection.py`
    - slice: [`Sagittal`, `Coronal`, `Axial`]
    - split: [`Train`, `Test`]
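Given this convention, a config name can be split into its parts. The helper below is hypothetical (not a MedVision API) and assumes the dataset name itself contains no underscores, which holds for names like `OAIZIB-CM` and `BraTS24`.

```python
# Hypothetical helper: split a MedVision data-config name into its components,
# following {dataset}_{annotation-type}_{task-ID}_{slice}_{split}.
# Assumes the dataset name contains no underscores.
def parse_config_name(config: str) -> dict:
    dataset, annotation_type, task_id, slice_name, split = config.split("_")
    return {
        "dataset": dataset,
        "annotation_type": annotation_type,
        "task_id": task_id,
        "slice": slice_name,
        "split": split,
    }
```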
### What's returned from the MedVision dataset?

We only share the annotations (https://huggingface.co/datasets/YongchengYAO/MedVision/tree/main/Datasets). The data loading script MedVision.py handles raw image downloading and processing. The fields returned in each sample are defined as follows.

⚠️ In MedVision.py, the class `MedVision(GeneratorBasedBuilder)` defines the feature dict, and the method `_generate_examples()` builds the dataset.
Code block in `MedVision(GeneratorBasedBuilder)`:

```python
"""
MedVision dataset.

NOTE: To update the features returned by the load_dataset() method, the following
should be updated:
- the feature dict in this class
- the dict yielded by the _generate_examples() method
"""

# The feature dict for the task:
# - Mask-Size
features_dict_MaskSize = {
    "dataset_name": Value("string"),
    "taskID": Value("string"),
    "taskType": Value("string"),
    "image_file": Value("string"),
    "mask_file": Value("string"),
    "slice_dim": Value("uint8"),
    "slice_idx": Value("uint16"),
    "label": Value("uint16"),
    "image_size_2d": Sequence(Value("uint16"), length=2),
    "pixel_size": Sequence(Value("float16"), length=2),
    "image_size_3d": Sequence(Value("uint16"), length=3),
    "voxel_size": Sequence(Value("float16"), length=3),
    "pixel_count": Value("uint32"),
    "ROI_area": Value("float16"),
}

# The feature dict for the task:
# - Box-Size
features_dict_BoxSize = {
    "dataset_name": Value("string"),
    "taskID": Value("string"),
    "taskType": Value("string"),
    "image_file": Value("string"),
    "mask_file": Value("string"),
    "slice_dim": Value("uint8"),
    "slice_idx": Value("uint16"),
    "label": Value("uint16"),
    "image_size_2d": Sequence(Value("uint16"), length=2),
    "pixel_size": Sequence(Value("float16"), length=2),
    "image_size_3d": Sequence(Value("uint16"), length=3),
    "voxel_size": Sequence(Value("float16"), length=3),
    "bounding_boxes": Sequence(
        {
            "min_coords": Sequence(Value("uint16"), length=2),
            "max_coords": Sequence(Value("uint16"), length=2),
            "center_coords": Sequence(Value("uint16"), length=2),
            "dimensions": Sequence(Value("uint16"), length=2),
            "sizes": Sequence(Value("float16"), length=2),
        },
    ),
}

# The feature dict for the task:
# - Biometrics-From-Landmarks
features_dict_BiometricsFromLandmarks = {
    "dataset_name": Value("string"),
    "taskID": Value("string"),
    "taskType": Value("string"),
    "image_file": Value("string"),
    "landmark_file": Value("string"),
    "slice_dim": Value("uint8"),
    "slice_idx": Value("uint16"),
    "image_size_2d": Sequence(Value("uint16"), length=2),
    "pixel_size": Sequence(Value("float16"), length=2),
    "image_size_3d": Sequence(Value("uint16"), length=3),
    "voxel_size": Sequence(Value("float16"), length=3),
    "biometric_profile": {
        "metric_type": Value("string"),
        "metric_map_name": Value("string"),
        "metric_key": Value("string"),
        "metric_value": Value("float16"),
        "metric_unit": Value("string"),
        "slice_dim": Value("uint8"),
    },
}

# The feature dict for the task:
# - Tumor-Lesion-Size
features_dict_TumorLesionSize = {
    "dataset_name": Value("string"),
    "taskID": Value("string"),
    "taskType": Value("string"),
    "image_file": Value("string"),
    "landmark_file": Value("string"),
    "mask_file": Value("string"),
    "slice_dim": Value("uint8"),
    "slice_idx": Value("uint16"),
    "label": Value("uint16"),
    "image_size_2d": Sequence(Value("uint16"), length=2),
    "pixel_size": Sequence(Value("float16"), length=2),
    "image_size_3d": Sequence(Value("uint16"), length=3),
    "voxel_size": Sequence(Value("float16"), length=3),
    "biometric_profile": Sequence(
        {
            "metric_type": Value("string"),
            "metric_map_name": Value("string"),
            "metric_key_major_axis": Value("string"),
            "metric_value_major_axis": Value("float16"),
            "metric_key_minor_axis": Value("string"),
            "metric_value_minor_axis": Value("float16"),
            "metric_unit": Value("string"),
        },
    ),
}
```
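For orientation, here is how the Box-Size fields might be combined to obtain a physical box size in millimeters. The sample below is hand-constructed to match the shape of `features_dict_BoxSize`, and `box_sizes_mm` is our illustration, not a MedVision function; note that `datasets` can return a `Sequence` of dicts as a dict of lists, so real samples may need reshaping first.

```python
# Illustration only: physical (mm) box size from pixel dimensions and pixel
# spacing, using the field names from features_dict_BoxSize.
def box_sizes_mm(sample: dict) -> list:
    sx, sy = sample["pixel_size"]  # in-plane pixel spacing (mm/pixel)
    return [
        (box["dimensions"][0] * sx, box["dimensions"][1] * sy)
        for box in sample["bounding_boxes"]
    ]
```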
Code block in `_generate_examples`:

```python
# Task type: Mask-Size
if taskType == "Mask-Size":
    flatten_slice_profiles = (
        MedVision_BenchmarkPlannerSegmentation.flatten_slice_profiles_2d
    )
    if imageSliceType.lower() == "sagittal":
        slice_dim = 0
    elif imageSliceType.lower() == "coronal":
        slice_dim = 1
    elif imageSliceType.lower() == "axial":
        slice_dim = 2
    slice_profile_flattened = flatten_slice_profiles(biometricData, slice_dim)
    for idx, case in enumerate(slice_profile_flattened):
        # Skip cases with a mask size smaller than 200 pixels
        if case["pixel_count"] < 200:
            continue
        else:
            yield idx, {
                "dataset_name": dataset_name,
                "taskID": taskID,
                "taskType": taskType,
                "image_file": os.path.join(dataset_dir, case["image_file"]),
                "mask_file": os.path.join(dataset_dir, case["mask_file"]),
                "slice_dim": case["slice_dim"],
                "slice_idx": case["slice_idx"],
                "label": case["label"],
                "image_size_2d": case["image_size_2d"],
                "pixel_size": case["pixel_size"],
                "image_size_3d": case["image_size_3d"],
                "voxel_size": case["voxel_size"],
                "pixel_count": case["pixel_count"],
                "ROI_area": case["ROI_area"],
            }

# Task type: Box-Size
if taskType == "Box-Size":
    if imageType.lower() == "2d":
        flatten_slice_profiles = (
            MedVision_BenchmarkPlannerDetection.flatten_slice_profiles_2d
        )
        if imageSliceType.lower() == "sagittal":
            slice_dim = 0
        elif imageSliceType.lower() == "coronal":
            slice_dim = 1
        elif imageSliceType.lower() == "axial":
            slice_dim = 2
        slice_profile_flattened = flatten_slice_profiles(biometricData, slice_dim)
        for idx, case in enumerate(slice_profile_flattened):
            # Skip cases with multiple bounding boxes in the same slice
            if len(case["bounding_boxes"]) > 1:
                continue
            # Skip cases with a bounding box smaller than 10 pixels in any dimension
            elif (
                case["bounding_boxes"][0]["dimensions"][0] < 10
                or case["bounding_boxes"][0]["dimensions"][1] < 10
            ):
                continue
            else:
                yield idx, {
                    "dataset_name": dataset_name,
                    "taskID": taskID,
                    "taskType": taskType,
                    "image_file": os.path.join(dataset_dir, case["image_file"]),
                    "mask_file": os.path.join(dataset_dir, case["mask_file"]),
                    "slice_dim": case["slice_dim"],
                    "slice_idx": case["slice_idx"],
                    "label": case["label"],
                    "image_size_2d": case["image_size_2d"],
                    "pixel_size": case["pixel_size"],
                    "image_size_3d": case["image_size_3d"],
                    "voxel_size": case["voxel_size"],
                    "bounding_boxes": case["bounding_boxes"],
                }

# Task type: Biometrics-From-Landmarks
if taskType == "Biometrics-From-Landmarks":
    if imageType.lower() == "2d":
        flatten_slice_profiles = (
            MedVision_BenchmarkPlannerBiometry.flatten_slice_profiles_2d
        )
        if imageSliceType.lower() == "sagittal":
            slice_dim = 0
        elif imageSliceType.lower() == "coronal":
            slice_dim = 1
        elif imageSliceType.lower() == "axial":
            slice_dim = 2
        slice_profile_flattened = flatten_slice_profiles(biometricData, slice_dim)
        for idx, case in enumerate(slice_profile_flattened):
            yield idx, {
                "dataset_name": dataset_name,
                "taskID": taskID,
                "taskType": taskType,
                "image_file": os.path.join(dataset_dir, case["image_file"]),
                "landmark_file": os.path.join(dataset_dir, case["landmark_file"]),
                "slice_dim": case["slice_dim"],
                "slice_idx": case["slice_idx"],
                "image_size_2d": case["image_size_2d"],
                "pixel_size": case["pixel_size"],
                "image_size_3d": case["image_size_3d"],
                "voxel_size": case["voxel_size"],
                "biometric_profile": case["biometric_profile"],
            }

# Task type: Biometrics-From-Landmarks-Distance
if taskType == "Biometrics-From-Landmarks-Distance":
    if imageType.lower() == "2d":
        flatten_slice_profiles = (
            MedVision_BenchmarkPlannerBiometry.flatten_slice_profiles_2d
        )
        if imageSliceType.lower() == "sagittal":
            slice_dim = 0
        elif imageSliceType.lower() == "coronal":
            slice_dim = 1
        elif imageSliceType.lower() == "axial":
            slice_dim = 2
        slice_profile_flattened = flatten_slice_profiles(biometricData, slice_dim)
        for idx, case in enumerate(slice_profile_flattened):
            if case["biometric_profile"]["metric_type"] == "distance":
                yield idx, {
                    "dataset_name": dataset_name,
                    "taskID": taskID,
                    "taskType": taskType,
                    "image_file": os.path.join(dataset_dir, case["image_file"]),
                    "landmark_file": os.path.join(dataset_dir, case["landmark_file"]),
                    "slice_dim": case["slice_dim"],
                    "slice_idx": case["slice_idx"],
                    "image_size_2d": case["image_size_2d"],
                    "pixel_size": case["pixel_size"],
                    "image_size_3d": case["image_size_3d"],
                    "voxel_size": case["voxel_size"],
                    "biometric_profile": case["biometric_profile"],
                }

# Task type: Biometrics-From-Landmarks-Angle
if taskType == "Biometrics-From-Landmarks-Angle":
    if imageType.lower() == "2d":
        flatten_slice_profiles = (
            MedVision_BenchmarkPlannerBiometry.flatten_slice_profiles_2d
        )
        if imageSliceType.lower() == "sagittal":
            slice_dim = 0
        elif imageSliceType.lower() == "coronal":
            slice_dim = 1
        elif imageSliceType.lower() == "axial":
            slice_dim = 2
        slice_profile_flattened = flatten_slice_profiles(biometricData, slice_dim)
        for idx, case in enumerate(slice_profile_flattened):
            if case["biometric_profile"]["metric_type"] == "angle":
                yield idx, {
                    "dataset_name": dataset_name,
                    "taskID": taskID,
                    "taskType": taskType,
                    "image_file": os.path.join(dataset_dir, case["image_file"]),
                    "landmark_file": os.path.join(dataset_dir, case["landmark_file"]),
                    "slice_dim": case["slice_dim"],
                    "slice_idx": case["slice_idx"],
                    "image_size_2d": case["image_size_2d"],
                    "pixel_size": case["pixel_size"],
                    "image_size_3d": case["image_size_3d"],
                    "voxel_size": case["voxel_size"],
                    "biometric_profile": case["biometric_profile"],
                }

# Task type: Tumor-Lesion-Size
if taskType == "Tumor-Lesion-Size":
    if imageType.lower() == "2d":
        # Get the target label for the task
        target_label = benchmark_plan["tasks"][int(taskID) - 1]["target_label"]
        flatten_slice_profiles = (
            MedVision_BenchmarkPlannerBiometry_fromSeg.flatten_slice_profiles_2d
        )
        if imageSliceType.lower() == "sagittal":
            slice_dim = 0
        elif imageSliceType.lower() == "coronal":
            slice_dim = 1
        elif imageSliceType.lower() == "axial":
            slice_dim = 2
        slice_profile_flattened = flatten_slice_profiles(biometricData, slice_dim)
        for idx, case in enumerate(slice_profile_flattened):
            # Skip cases with multiple fitted ellipses in the same slice
            if len(case["biometric_profile"]) > 1:
                continue
            else:
                yield idx, {
                    "dataset_name": dataset_name,
                    "taskID": taskID,
                    "taskType": taskType,
                    "image_file": os.path.join(dataset_dir, case["image_file"]),
                    "mask_file": os.path.join(dataset_dir, case["mask_file"]),
                    "landmark_file": os.path.join(dataset_dir, case["landmark_file"]),
                    "slice_dim": case["slice_dim"],
                    "slice_idx": case["slice_idx"],
                    "label": target_label,
                    "image_size_2d": case["image_size_2d"],
                    "pixel_size": case["pixel_size"],
                    "image_size_3d": case["image_size_3d"],
                    "voxel_size": case["voxel_size"],
                    "biometric_profile": case["biometric_profile"],
                }
```
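The repeated sagittal/coronal/axial branching in `_generate_examples` amounts to a fixed mapping, which could be written once as a lookup. This is a refactoring sketch, not code from MedVision.py.

```python
# Slice orientation -> array dimension, as used throughout _generate_examples.
SLICE_DIMS = {"sagittal": 0, "coronal": 1, "axial": 2}

def get_slice_dim(image_slice_type: str) -> int:
    """Return the array dimension for a slice orientation name (case-insensitive)."""
    return SLICE_DIMS[image_slice_type.lower()]
```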
## Dataset Building Workflow

### Workflow

(Workflow diagrams: MedVision Dataset Building Workflow, black and white variants)
There are a few avenues to control the dataset loading and building behavior:

- Rebuild dataset (Arrow files): use the `download_mode` argument in `load_dataset()` (docs).
  - [1] Set `download_mode="force_redownload"` to ignore the cached Arrow files and trigger the data loading script `MedVision.py` to rebuild the dataset.
- Redownload raw data:
  - [2] `MedVision_FORCE_DOWNLOAD_DATA`: set this environment variable to `True` to force re-downloading of raw images and annotations.
  - [3] `.downloaded_datasets.json`: this tracker file records download status. Removing a dataset's entry triggers a re-download of the raw data for that dataset.

⚠️ How to properly update/redownload raw data?

If you need to update raw data (images, masks, landmarks) using [2] or [3], you MUST ALSO use [1] (`download_mode="force_redownload"`). Why? If Hugging Face finds a valid cached dataset (Arrow files), it will load it directly and skip running the script entirely. Without running the script, the environment variable [2] or tracker file [3] will never be checked.

Summary:
- Update Arrow files/fields only: use [1].
- Update raw data: use [1] AND ([2] or [3]).
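These rules can be condensed into a small helper that returns which knobs to set. This is illustrative; the function name is ours, not part of MedVision.

```python
def rebuild_settings(update_raw_data: bool) -> dict:
    """Which settings to use when rebuilding the MedVision dataset (sketch).

    [1] download_mode="force_redownload" is always required, because a valid
        Arrow cache would otherwise short-circuit the loading script entirely.
    [2] MedVision_FORCE_DOWNLOAD_DATA="True" additionally refreshes raw data.
    """
    settings = {"download_mode": "force_redownload"}  # [1]
    if update_raw_data:
        settings["MedVision_FORCE_DOWNLOAD_DATA"] = "True"  # [2]
    return settings
```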
We will maintain a change log for essential updates.
## Examples

Running this for the first time downloads the raw data and builds the dataset:

```python
import os
from datasets import load_dataset

# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd

# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test"  # "test" for a testing-set config; "train" for a training-set config

# Get dataset
ds = load_dataset(
    "YongchengYAO/MedVision",
    name=config,
    trust_remote_code=True,
    split=split_name,
)
```
Running the same script again uses the cached dataset:

```python
import os
from datasets import load_dataset

# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd

# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test"  # "test" for a testing-set config; "train" for a training-set config

# Get dataset
ds = load_dataset(
    "YongchengYAO/MedVision",
    name=config,
    trust_remote_code=True,
    split=split_name,
)
```
Adding `download_mode="force_redownload"` skips raw data downloading and rebuilds the dataset:

```python
import os
from datasets import load_dataset

# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd

# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test"  # "test" for a testing-set config; "train" for a training-set config

# Get dataset
ds = load_dataset(
    "YongchengYAO/MedVision",
    name=config,
    trust_remote_code=True,
    split=split_name,
    download_mode="force_redownload",
)
```
Adding `download_mode="force_redownload"` and `os.environ["MedVision_FORCE_DOWNLOAD_DATA"] = "True"` redownloads the raw data and rebuilds the dataset:

```python
import os
from datasets import load_dataset

# Set data folder
wd = os.path.join(os.getcwd(), "Data-testing")
os.makedirs(wd, exist_ok=True)
os.environ["MedVision_DATA_DIR"] = wd

# Pick a dataset config name and split
config = "OAIZIB-CM_BoxSize_Task01_Axial_Test"
split_name = "test"  # "test" for a testing-set config; "train" for a training-set config

# Force redownload
os.environ["MedVision_FORCE_DOWNLOAD_DATA"] = "True"

# Get dataset
ds = load_dataset(
    "YongchengYAO/MedVision",
    name=config,
    trust_remote_code=True,
    split=split_name,
    download_mode="force_redownload",
)
```
## Download Mode in MedVision Dataset

(Advanced) Understand how the customized dataset loading script `MedVision.py` changes the behavior of `download_mode` in `load_dataset()`.

`download_mode` can be one of: `"reuse_dataset_if_exists"` (default), `"reuse_cache_if_exists"`, `"force_redownload"`.

Default behavior of `download_mode` in `load_dataset()`:

| download_mode | Downloads | Dataset |
|---|---|---|
| `reuse_dataset_if_exists` (default) | Reuse | Reuse |
| `reuse_cache_if_exists` | Reuse | Fresh |
| `force_redownload` | Fresh | Fresh |

`download_mode` in the MedVision dataset:

| download_mode | Downloads | Dataset |
|---|---|---|
| `reuse_dataset_if_exists` (default) | Reuse | Reuse |
| `reuse_cache_if_exists` | Reuse | Fresh |
| `force_redownload` (`MedVision_FORCE_DOWNLOAD_DATA=False`) | Reuse | Fresh |
| `force_redownload` (`MedVision_FORCE_DOWNLOAD_DATA=True`) | Fresh | Fresh |
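The MedVision-specific behavior in the second table can be captured as a lookup. This is a sketch mirroring the table, not actual MedVision.py code; it returns a (downloads, dataset) pair of "reuse" or "fresh".

```python
# (downloads, dataset) cache behavior per the MedVision table above.
def medvision_cache_behavior(download_mode: str, force_download_data: bool = False):
    if download_mode == "reuse_dataset_if_exists":
        return ("reuse", "reuse")
    if download_mode == "reuse_cache_if_exists":
        return ("reuse", "fresh")
    if download_mode == "force_redownload":
        # Raw data is only refreshed when MedVision_FORCE_DOWNLOAD_DATA=True
        return ("fresh", "fresh") if force_download_data else ("reuse", "fresh")
    raise ValueError(f"unknown download_mode: {download_mode}")
```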
## Advanced Usage

The dataset codebase medvision_ds can be used to scale the dataset, including adding new annotation types and datasets.

### Install

```bash
pip install "git+https://huggingface.co/datasets/YongchengYAO/MedVision.git#subdirectory=src"
pip show medvision_ds
```

or

```bash
# First, install the benchmark codebase: medvision_bm
pip install "git+https://github.com/YongchengYAO/MedVision.git"
# Then, install the dataset codebase: medvision_ds
python -m medvision_bm.benchmark.install_medvision_ds --data_dir <local-data-folder>
```
### Use utility functions for image processing

```python
from medvision_ds.utils.data_conversion import (
    convert_nrrd_to_nifti,
    convert_mha_to_nifti,
    convert_nii_to_niigz,
    convert_bmp_to_niigz,
    copy_img_header_to_mask,
    reorient_niigz_RASplus_batch_inplace,
)
from medvision_ds.utils.preprocess_utils import (
    split_4d_nifti,
)
```
### Examples of dataset scaling

Set up an automatic data processing pipeline:
- Download preprocessed data from HF: medvision_ds/datasets/OAIZIB_CM/download.py
- Download and process data from source: medvision_ds/datasets/BraTS24/download_raw.py

Prepare annotations:
- Generate b-box annotations from segmentation masks
- Generate tumor/lesion size (T/L) annotations from segmentation masks
- Generate angle/distance (A/D) annotations from landmarks
## License: CC-BY-NC-4.0

This repository is released under CC-BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0/).

✅ Do
- Copy and redistribute the material in any medium or format.
- Adapt, remix, transform, and build upon the material.
- Use it privately or in non-commercial educational, research, or personal projects.
🚫 Do not
- Use the material for commercial purposes.
Requirements
| Requirement | Description |
|---|---|
| Attribution (BY) | You must give appropriate credit, provide a link to this dataset, and indicate if changes were made. |
| NonCommercial (NC) | You may not use the material for commercial purposes. |
| Indicate changes | If you modify the work, you must note that it has been changed. |