```mermaid
graph LR
    image_sets_partitioner["image_sets_partitioner"]
    image_reader["image_reader"]
    image_loader["image_loader"]
    image_type["image_type"]
    affine_augmentation["affine_augmentation"]
    image_window["image_window"]
    sampler__["sampler_*"]
    image_window_dataset["image_window_dataset"]
    image_sets_partitioner -- "provides organized file paths to" --> image_reader
    image_reader -- "delegates low-level image reading to" --> image_loader
    image_loader -- "returns standardized image data to" --> image_reader
    image_reader -- "uses for consistent data encapsulation" --> image_type
    image_reader -- "integrates for data augmentation" --> affine_augmentation
    image_reader -- "passes preprocessed data to" --> image_window_dataset
    image_window_dataset -- "relies on for patch properties" --> image_window
    image_window_dataset -- "employs for patch coordinate generation" --> sampler__
```

Details

The NiftyNet input pipeline subsystem is designed to efficiently load, preprocess, and prepare medical image data for deep learning models. It orchestrates a flow from raw image file paths to network-ready data tensors. The image_sets_partitioner initiates the process by organizing file paths. The image_reader then acts as a central data ingestion hub, utilizing image_loader for low-level file reading and image_type for consistent data representation. Data augmentation is handled by affine_augmentation. Finally, image_window_dataset prepares the data into TensorFlow datasets, relying on image_window for patch definitions and sampler_* components for generating patch coordinates.

image_sets_partitioner

Manages the initial organization and partitioning of raw image file paths into distinct datasets (e.g., training, validation, inference).

Related Classes/Methods:
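The partitioning step can be sketched as a simple ratio-based split of file paths. This is an illustrative stand-in, not NiftyNet's actual `ImageSetsPartitioner` API (which reads split ratios and subject IDs from its configuration); the function name and signature here are hypothetical.

```python
import random

def partition_file_paths(paths, ratios=(0.7, 0.2, 0.1), seed=0):
    """Split file paths into training/validation/inference subsets.

    Hypothetical sketch of the partitioning idea; NiftyNet's own
    ImageSetsPartitioner is driven by its configuration file.
    """
    assert abs(sum(ratios) - 1.0) < 1e-9, "split ratios must sum to 1"
    shuffled = list(paths)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_valid = int(n * ratios[1])
    return {
        "training": shuffled[:n_train],
        "validation": shuffled[n_train:n_train + n_valid],
        "inference": shuffled[n_train + n_valid:],
    }
```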

image_reader

Acts as the primary interface for loading image data, managing input sources, and orchestrating initial preprocessing layers. It serves as a central hub for data ingestion.

Related Classes/Methods:
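The hub role described above — pull a file path, delegate loading, then run preprocessing layers in order — can be sketched as follows. The class and parameter names are illustrative; the real `ImageReader` is configured from sections of the NiftyNet application config rather than constructed like this.

```python
class SimpleImageReader:
    """Conceptual stand-in for image_reader's role as a data-ingestion hub."""

    def __init__(self, file_paths, load_fn, preprocessing_layers=()):
        self.file_paths = list(file_paths)
        self.load_fn = load_fn                  # low-level loader (image_loader role)
        self.layers = list(preprocessing_layers)

    def __call__(self, idx):
        # Load one subject's image, then apply each preprocessing
        # layer (e.g. normalisation, augmentation) in sequence.
        image = self.load_fn(self.file_paths[idx])
        for layer in self.layers:
            image = layer(image)
        return image
```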

image_loader

Provides low-level functionality for reading diverse image file formats and standardizing them into a consistent nibabel-based representation. It is a utility component supporting image_reader, offering functions such as load_image_obj and the ImageAsNibabel class for wrapping image data.

Related Classes/Methods:
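One common way to support "diverse image file formats" behind a single entry point is an extension-based dispatch table. The registry below is a hypothetical sketch of that idea, not NiftyNet's actual load_image_obj implementation (which normalizes everything to nibabel image objects).

```python
# Illustrative loader registry: dispatch on file extension and return a
# standardized (data, metadata) pair. All names here are hypothetical.
LOADERS = {}

def register_loader(extension):
    """Decorator registering a loader function for one file extension."""
    def decorator(fn):
        LOADERS[extension] = fn
        return fn
    return decorator

def load_image_file(path):
    """Find a loader whose extension matches the path and invoke it."""
    for extension, fn in LOADERS.items():
        if path.endswith(extension):
            return fn(path)
    raise ValueError("no loader registered for %s" % path)
```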

image_type

Encapsulates image data and its associated metadata, ensuring a standardized and consistent representation of image volumes throughout the pipeline. Key classes include Loadable, DataFromFile, and SpatialImage2D, which manage file paths, data types, and image properties.

Related Classes/Methods:
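The encapsulation idea — pairing pixel data's source path with spatial metadata in a small class hierarchy — can be sketched with dataclasses. These are simplified stand-ins: the real Loadable, DataFromFile, and SpatialImage2D classes carry considerably more state (interpolation orders, axis codes, multi-modal inputs).

```python
from dataclasses import dataclass

@dataclass
class DataFromFile:
    """Base: ties image data to its source file (illustrative sketch)."""
    file_path: str
    name: str = "image"

@dataclass
class SpatialImage2D(DataFromFile):
    """Adds 2-D spatial metadata such as voxel spacing and array shape."""
    pixdim: tuple = (1.0, 1.0)   # voxel spacing in mm
    shape: tuple = (0, 0)

    @property
    def spatial_rank(self):
        return 2
```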

affine_augmentation

Applies random affine transformations (e.g., rotation, scaling, shearing) to image data, serving as a key preprocessing step for data augmentation.

Related Classes/Methods:
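A minimal 2-D version of such a transformation composes a random rotation with a random isotropic scaling. This is a simplified sketch of the augmentation step, not NiftyNet's affine_augmentation layer, which also handles shearing and full 3-D volumes; the function names are illustrative.

```python
import numpy as np

def random_affine_2d(rng, max_rotation=np.pi / 12, scale_range=(0.9, 1.1)):
    """Compose a random 2-D rotation and isotropic scaling into one matrix."""
    theta = rng.uniform(-max_rotation, max_rotation)
    scale = rng.uniform(*scale_range)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return scale * rotation

def transform_points(affine, points):
    """Apply the affine to an (N, 2) array of spatial coordinates."""
    return points @ affine.T
```

Because the matrix is scale times a pure rotation, its determinant equals the squared scale factor, which gives a quick sanity check on the sampled transform.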

image_window

Defines and manages the properties and structure of image windows or patches that are extracted from larger images, crucial for processing large medical volumes. This component is responsible for handling spatial and temporal dimensions of image patches.

Related Classes/Methods:
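The core responsibility — describing a patch's shape and validating it against a source volume — can be sketched as a small value class. This is an illustrative sketch, not the real ImageWindow class, which also tracks per-modality shapes and TensorFlow placeholder info.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageWindow:
    """Minimal spec of a patch extracted from a larger volume (sketch)."""
    spatial_shape: tuple          # e.g. (64, 64, 64) patch size

    def n_voxels(self):
        n = 1
        for dim in self.spatial_shape:
            n *= dim
        return n

    def fits_in(self, volume_shape):
        # A window is only valid if it fits inside the source volume.
        return all(w <= v for w, v in zip(self.spatial_shape, volume_shape))
```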

sampler_*

A set of specialized components responsible for generating spatial coordinates for extracting image patches/windows based on various sampling strategies (e.g., uniform, weighted). These samplers provide the coordinates to image_window_dataset for patch extraction.

Related Classes/Methods:
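The simplest such strategy, uniform sampling, draws window corner coordinates so that every window lies fully inside the volume. The sketch below illustrates that idea with hypothetical names; a weighted sampler would instead draw corners from a probability map.

```python
import random

def uniform_window_coordinates(volume_shape, window_shape, n_samples, seed=0):
    """Return corner coordinates of uniformly sampled windows (sketch).

    Each coordinate is drawn so the full window fits inside the volume.
    """
    rng = random.Random(seed)
    coords = []
    for _ in range(n_samples):
        corner = tuple(rng.randint(0, v - w)          # inclusive upper bound
                       for v, w in zip(volume_shape, window_shape))
        coords.append(corner)
    return coords
```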

image_window_dataset

The final stage of the input pipeline, responsible for creating a TensorFlow dataset from the preprocessed image data. It handles patch extraction, padding, and batching to produce efficient, network-ready data tensors.

Related Classes/Methods:
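The extraction-and-batching step can be sketched as a plain Python generator: slice a window out of the volume at each sampled corner and group the windows into fixed-size batches. This is a conceptual stand-in; the real image_window_dataset wraps the same idea in a tf.data.Dataset built from a Python generator, with padding handled separately.

```python
import numpy as np

def window_batches(volume, coords, window_shape, batch_size):
    """Extract windows at the given corners and yield them in batches (sketch)."""
    batch = []
    for corner in coords:
        # Build one slice per spatial dimension: [corner, corner + window).
        slices = tuple(slice(c, c + w) for c, w in zip(corner, window_shape))
        batch.append(volume[slices])
        if len(batch) == batch_size:
            yield np.stack(batch)
            batch = []
    if batch:                      # flush the final partial batch
        yield np.stack(batch)
```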