API Reference

This document is intended for developers of ivadomed; it describes the API functions and classes.

Loader API

loader.film

normalize_metadata(ds_in, clustering_models, debugging, metadata_type, train_set=False)[source]

Categorize each metadata value using a KDE clustering method, then apply a one-hot-encoding.

Parameters
  • ds_in (BidsDataset) – Dataset with metadata.

  • clustering_models – Clustering models trained on the metadata of the training set.

  • debugging (bool) – If True, extended verbosity and intermediate outputs.

  • metadata_type (str) – Choice between ‘mri_params’, ‘contrasts’ or the name of a column from the participants.tsv file.

  • train_set (bool) – Indicates if the input dataset is the training dataset (True) or the validation or testing dataset (False).

Returns

Dataset with normalized metadata. If train_set is True, the one-hot encoder model is also returned.

Return type

BidsDataset

class Kde_model[source]

Bases: object

Kernel Density Estimation.

Apply this clustering method to metadata values, using the sklearn implementation.

Attributes
  • kde (sklearn.neighbors.KernelDensity)

  • minima (float) – Local minima.

__init__()[source]
clustering_fit(dataset, key_lst)[source]

This function creates clustering models for each metadata type, using the Kernel Density Estimation algorithm.

Parameters
  • dataset (list) – Data to cluster.

  • key_lst (list of str) – Names of the metadata to cluster.

Returns

Clustering model for each metadata type in a dictionary where the keys are the metadata names.

Return type

dict

check_isMRIparam(mri_param_type, mri_param, subject, metadata)[source]

Check if a given metadata belongs to the MRI parameters.

Parameters
  • mri_param_type (str) – Metadata type name.

  • mri_param (list) – List of MRI params names.

  • subject (str) – Current subject name.

  • metadata (dict) – Metadata.

Returns

True if mri_param_type is part of mri_param.

Return type

bool

get_film_metadata_models(ds_train, metadata_type, debugging=False)[source]

Get FiLM models.

This function pulls the clustering and one-hot encoder models that are used by FiLMedUnet. It also calls the normalization of metadata.

Parameters
  • ds_train (MRI2DSegmentationDataset) – Training dataset.

  • metadata_type (str) – e.g., mri_params, contrasts.

  • debugging (bool) –

Returns

dataset, one-hot encoder and KDE model

Return type

MRI2DSegmentationDataset, OneHotEncoder, KernelDensity

store_film_params(gammas, betas, metadata_values, metadata, model, film_layers, depth, film_metadata)[source]

Store FiLM params.

Parameters
  • gammas (dict) –

  • betas (dict) –

  • metadata_values (list) – list of the batch sample’s metadata values (e.g., T2w, astrocytoma)

  • metadata (list) –

  • model (nn.Module) –

  • film_layers (list) –

  • depth (int) –

  • film_metadata (str) – Metadata of interest used to modulate the network (e.g., contrast, tumor_type).

Returns

gammas, betas

Return type

dict, dict

save_film_params(gammas, betas, metadata_values, depth, ofolder)[source]

Save FiLM params as npy files.

These parameters can be further used for visualisation purposes. They are saved in the ofolder with .npy format.

Parameters
  • gammas (dict) –

  • betas (dict) –

  • metadata_values (list) – List of the batch sample’s metadata values (e.g., T2w, T1w, if the metadata type used is contrast).

  • depth (int) –

  • ofolder (str) –

loader.loader

loader.utils

Object Detection API

object_detection.utils

Evaluation API

Losses API

class MultiClassDiceLoss(classes_of_interest=None)[source]

Bases: torch.nn.modules.module.Module

Multi-class Dice Loss.

Inspired from https://arxiv.org/pdf/1802.10508.

Parameters

classes_of_interest (list) – List containing the indices of the classes whose Dice will be added to the loss. If None, all classes are considered.

Attributes
  • classes_of_interest (list) – List containing the indices of the classes whose Dice will be added to the loss. If None, all classes are considered.

  • dice_loss (DiceLoss) – Class computing the Dice loss.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(classes_of_interest=None)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(prediction, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
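For illustration, a short sketch with random tensors (the shapes and the choice of classes_of_interest are assumptions, not requirements of the API; the ivadomed.losses module is the one named in the base classes below):

    import torch
    import torch.nn.functional as F
    from ivadomed.losses import MultiClassDiceLoss

    # (batch, class, height, width): 3-class soft predictions and one-hot targets
    prediction = torch.softmax(torch.randn(4, 3, 32, 32), dim=1)
    target = F.one_hot(torch.randint(0, 3, (4, 32, 32)), num_classes=3)
    target = target.permute(0, 3, 1, 2).float()

    # Only classes 1 and 2 contribute to the loss (e.g., background at index 0 is ignored)
    criterion = MultiClassDiceLoss(classes_of_interest=[1, 2])
    loss = criterion(prediction, target)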

class DiceLoss(smooth=1.0)[source]

Bases: torch.nn.modules.module.Module

DiceLoss.

See also

Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. “V-net: Fully convolutional neural networks for volumetric medical image segmentation.” 2016 fourth international conference on 3D vision (3DV). IEEE, 2016.

Parameters

smooth (float) – Value to avoid division by zero when images and predictions are empty.

Attributes

smooth (float) – Value to avoid division by zero when images and predictions are empty.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(smooth=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(prediction, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
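A minimal usage sketch (tensor shapes are illustrative; DiceLoss is imported from ivadomed.losses, the module named in the base classes above):

    import torch
    from ivadomed.losses import DiceLoss

    # Batch of 8 single-channel 2D soft predictions and binary targets
    prediction = torch.rand(8, 1, 64, 64, requires_grad=True)
    target = (torch.rand(8, 1, 64, 64) > 0.5).float()

    criterion = DiceLoss(smooth=1.0)
    loss = criterion(prediction, target)  # call the module, not forward(), so hooks run
    loss.backward()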

class BinaryCrossEntropyLoss[source]

Bases: torch.nn.modules.module.Module

Binary Cross Entropy Loss.

Attributes

loss_fct (BCELoss) – Binary cross entropy loss function from torch library.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__()[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(prediction, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class FocalLoss(gamma=2, alpha=0.25, eps=1e-07)[source]

Bases: torch.nn.modules.module.Module

FocalLoss.

See also

Lin, Tsung-Yi, et al. “Focal loss for dense object detection.” Proceedings of the IEEE international conference on computer vision. 2017.

Parameters
  • gamma (float) – Value from 0 to 5 that controls the trade-off between easy background and hard ROI training examples. If set to 0, equivalent to cross-entropy.

  • alpha (float) – Value from 0 to 1, usually corresponding to the inverse of class frequency to address class imbalance.

  • eps (float) – Epsilon to avoid division by zero.

Attributes
  • gamma (float) – Value from 0 to 5 that controls the trade-off between easy background and hard ROI training examples. If set to 0, equivalent to cross-entropy.

  • alpha (float) – Value from 0 to 1, usually corresponding to the inverse of class frequency to address class imbalance.

  • eps (float) – Epsilon to avoid division by zero.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(gamma=2, alpha=0.25, eps=1e-07)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
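A short sketch with random tensors, assuming sigmoid outputs in [0, 1] and a sparse binary target (the setting Focal loss is designed for):

    import torch
    from ivadomed.losses import FocalLoss

    prediction = torch.sigmoid(torch.randn(2, 1, 48, 48))   # soft predictions in [0, 1]
    target = (torch.rand(2, 1, 48, 48) > 0.9).float()       # mostly background, few ROI voxels

    criterion = FocalLoss(gamma=2, alpha=0.25)
    loss = criterion(prediction, target)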

class FocalDiceLoss(beta=1, gamma=2, alpha=0.25)[source]

Bases: torch.nn.modules.module.Module

FocalDiceLoss.

See also

Wong, Ken CL, et al. “3D segmentation with exponential logarithmic loss for highly unbalanced object sizes.” International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.

Parameters
  • beta (float) – Value from 0 to 1, indicating the weight of the dice loss.

  • gamma (float) – Value from 0 to 5 that controls the trade-off between easy background and hard ROI training examples. If set to 0, equivalent to cross-entropy.

  • alpha (float) – Value from 0 to 1, usually corresponding to the inverse of class frequency to address class imbalance.

Attributes
  • beta (float) – Value from 0 to 1, indicating the weight of the dice loss.

  • gamma (float) – Value from 0 to 5 that controls the trade-off between easy background and hard ROI training examples. If set to 0, equivalent to cross-entropy.

  • alpha (float) – Value from 0 to 1, usually corresponding to the inverse of class frequency to address class imbalance.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(beta=1, gamma=2, alpha=0.25)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class GeneralizedDiceLoss(epsilon=1e-05, include_background=True)[source]

Bases: torch.nn.modules.module.Module

GeneralizedDiceLoss.

See also

Sudre, Carole H., et al. “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations.” Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer, Cham, 2017. 240-248.

Parameters
  • epsilon (float) – Epsilon to avoid division by zero.

  • include_background (bool) – If True, then an extra channel is added, which represents the background class.

Attributes
  • epsilon (float) – Epsilon to avoid division by zero.

  • include_background (bool) – If True, then an extra channel is added, which represents the background class.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(epsilon=1e-05, include_background=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class TverskyLoss(alpha=0.7, beta=0.3, smooth=1.0)[source]

Bases: torch.nn.modules.module.Module

Tversky Loss.

See also

Salehi, Seyed Sadegh Mohseni, Deniz Erdogmus, and Ali Gholipour. “Tversky loss function for image segmentation using 3D fully convolutional deep networks.” International Workshop on Machine Learning in Medical Imaging. Springer, Cham, 2017.

Parameters
  • alpha (float) – Weight of false positive voxels.

  • beta (float) – Weight of false negative voxels.

  • smooth (float) – Epsilon to avoid division by zero when both the numerator and denominator of the Tversky index are zero.

Attributes
  • alpha (float) – Weight of false positive voxels.

  • beta (float) – Weight of false negative voxels.

  • smooth (float) – Epsilon to avoid division by zero when both the numerator and denominator of the Tversky index are zero.

Notes

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(alpha=0.7, beta=0.3, smooth=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

tversky_index(y_pred, y_true)[source]

Compute Tversky index.

Parameters
  • y_pred (torch Tensor) – Prediction.

  • y_true (torch Tensor) – Target.

Returns

Tversky index.

Return type

float

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class FocalTverskyLoss(alpha=0.7, beta=0.3, gamma=1.33, smooth=1.0)[source]

Bases: ivadomed.losses.TverskyLoss

Focal Tversky Loss.

See also

Abraham, Nabila, and Naimul Mefraz Khan. “A novel focal tversky loss function with improved attention u-net for lesion segmentation.” 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, 2019.

Parameters
  • alpha (float) – Weight of false positive voxels.

  • beta (float) – Weight of false negative voxels.

  • gamma (float) – Typically between 1 and 3. Controls the trade-off between easy background and hard ROI training examples.

  • smooth (float) – Epsilon to avoid division by zero when both the numerator and denominator of the Tversky index are zero.

Attributes

gamma (float) – Typically between 1 and 3. Controls the trade-off between easy background and hard ROI training examples.

Notes

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(alpha=0.7, beta=0.3, gamma=1.33, smooth=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class L2loss[source]

Bases: torch.nn.modules.module.Module

Euclidean loss, also known as L2 loss. Computes the sum of the squared differences between the two images.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__()[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class AdapWingLoss(theta=0.5, alpha=2.1, omega=14, epsilon=1)[source]

Bases: torch.nn.modules.module.Module

Adaptive Wing loss. Used for heatmap ground truth.

See also

Wang, Xinyao, Liefeng Bo, and Li Fuxin. “Adaptive wing loss for robust face alignment via heatmap regression.” Proceedings of the IEEE International Conference on Computer Vision. 2019.

Parameters
  • theta (float) – Threshold between the linear and non-linear parts of the loss.

  • alpha (float) – Used to adapt the loss shape to the input shape and make the loss smooth at 0 (background). It needs to be slightly above 2 to maintain ideal properties.

  • omega (float) – Multiplicative factor for the non-linear part of the loss.

  • epsilon (float) – Factor to avoid gradient explosion. It must not be too small.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(theta=0.5, alpha=2.1, omega=14, epsilon=1)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class LossCombination(losses_list, params_list=None)[source]

Bases: torch.nn.modules.module.Module

Loss that sums other implemented losses.

Parameters
  • losses_list (list) – List of the losses to sum; each element is a string naming an implemented loss, e.g., losses_list = ["L2loss", "DiceLoss"].

  • params_list (list) – List of parameters for the loss at the same index; each element is either None or a dictionary defining the loss parameters, e.g., params_list = [None, {"param1": 0.5}]. If no params_list is given, all default parameters are used.

Returns

Sum of the losses computed on (input, target) with the given parameters.

Return type

tensor

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__init__(losses_list, params_list=None)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
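A minimal sketch that sums two of the losses documented above with their default parameters (per-loss parameters can also be passed through params_list at the matching index, as described above):

    import torch
    from ivadomed.losses import LossCombination

    criterion = LossCombination(losses_list=["L2loss", "DiceLoss"])  # default parameters for both losses

    prediction = torch.rand(2, 1, 32, 32)
    target = (torch.rand(2, 1, 32, 32) > 0.5).float()
    loss = criterion(prediction, target)  # sum of L2loss and DiceLoss on (prediction, target)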

Main API

Metrics API

class MetricManager(metric_fns)[source]

Bases: object

Computes specified metrics and stores them in a dictionary.

Parameters

metric_fns (list) – List of metric functions.

Attributes
  • metric_fns (list) – List of metric functions.

  • result_dict (dict) – Dictionary storing metrics.

  • num_samples (int) – Number of samples.

__init__(metric_fns)[source]
__call__(prediction, ground_truth)[source]

Call self as a function.
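A usage sketch, assuming MetricManager and the metric functions below are importable from ivadomed.metrics:

    import numpy as np
    from ivadomed.metrics import MetricManager, dice_score, precision_score

    metric_mgr = MetricManager([dice_score, precision_score])

    prediction = np.asarray([0, 0.5, 1])
    ground_truth = np.asarray([0, 1, 1])
    metric_mgr(prediction, ground_truth)   # accumulates the metrics for this batch

    print(metric_mgr.result_dict)          # dictionary storing the computed metrics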

numeric_score(prediction, groundtruth)[source]

Computation of statistical numerical scores:

  • FP = Soft False Positives

  • FN = Soft False Negatives

  • TP = Soft True Positives

  • TN = Soft True Negatives

Robust to hard or soft input masks. For example:

    prediction = np.asarray([0, 0.5, 1])
    groundtruth = np.asarray([0, 1, 1])

leads to FP = 1.5.

Note: It assumes input values are between 0 and 1.

Parameters
  • prediction (ndarray) – Binary prediction.

  • groundtruth (ndarray) – Binary groundtruth.

Returns

FP, FN, TP, TN

Return type

float, float, float, float

dice_score(im1, im2, empty_score=nan)[source]

Computes the Dice coefficient between im1 and im2.

Compute a soft Dice coefficient between im1 and im2: twice the sum of the element-wise product of the two masks, divided by the sum of each mask’s sum. If both images are empty, then it returns empty_score.

Parameters
  • im1 (ndarray) – First array.

  • im2 (ndarray) – Second array.

  • empty_score (float) – Returned value if both input arrays are empty.

Returns

Dice coefficient.

Return type

float
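For example, with a soft prediction and a hard ground truth (values chosen for illustration; the ivadomed.metrics import path is assumed from the section name):

    import numpy as np
    from ivadomed.metrics import dice_score

    im1 = np.asarray([0, 0.5, 1])   # soft mask
    im2 = np.asarray([0, 1, 1])     # hard mask
    # soft Dice per the definition above: 2 * 1.5 / (1.5 + 2), i.e. about 0.857
    print(dice_score(im1, im2))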

mse(im1, im2)[source]

Compute the Mean Squared Error.

Compute the Mean Squared Error between the two images, i.e. the mean of the squared differences.

Parameters
  • im1 (ndarray) – First array.

  • im2 (ndarray) – Second array.

Returns

Mean Squared Error.

Return type

float

hausdorff_score(prediction, groundtruth)[source]

Compute the directed Hausdorff distance between two N-D arrays.

Parameters
  • prediction (ndarray) – First array.

  • groundtruth (ndarray) – Second array.

Returns

Hausdorff distance.

Return type

float

precision_score(prediction, groundtruth, err_value=0.0)[source]

Positive predictive value (PPV).

Precision equals the number of true positive voxels divided by the sum of true and false positive voxels. True and false positives are computed on soft masks, see "numeric_score".

Parameters
  • prediction (ndarray) – First array.

  • groundtruth (ndarray) – Second array.

  • err_value (float) – Value returned in case of error.

Returns

Precision score.

Return type

float

recall_score(prediction, groundtruth, err_value=0.0)[source]

True positive rate (TPR).

Recall equals the number of true positive voxels divided by the sum of true positive and false negative voxels. True positive and false negative values are computed on soft masks, see "numeric_score".

Parameters
  • prediction (ndarray) – First array.

  • groundtruth (ndarray) – Second array.

  • err_value (float) – Value returned in case of error.

Returns

Recall score.

Return type

float

specificity_score(prediction, groundtruth, err_value=0.0)[source]

True negative rate (TNR).

Specificity equals the number of true negative voxels divided by the sum of true negative and false positive voxels. True negative and false positive values are computed on soft masks, see "numeric_score".

Parameters
  • prediction (ndarray) – First array.

  • groundtruth (ndarray) – Second array.

  • err_value (float) – Value returned in case of error.

Returns

Specificity score.

Return type

float

intersection_over_union(prediction, groundtruth, err_value=0.0)[source]

Intersection of two (soft) arrays over their union (IoU).

Parameters
  • prediction (ndarray) – First array.

  • groundtruth (ndarray) – Second array.

  • err_value (float) – Value returned in case of error.

Returns

IoU.

Return type

float

accuracy_score(prediction, groundtruth, err_value=0.0)[source]

Accuracy.

Accuracy equals the number of true positive and true negative voxels divided by the total number of voxels. True positive/negative and false positive/negative values are computed on soft masks, see "numeric_score".

Parameters
  • prediction (ndarray) – First array.

  • groundtruth (ndarray) – Second array.

Returns

Accuracy.

Return type

float

multi_class_dice_score(im1, im2)[source]

Dice score for multi-label images.

Multi-class Dice score equals the average of the Dice score for each class. The first dimension of the input arrays is assumed to represent the classes.

Parameters
  • im1 (ndarray) – First array.

  • im2 (ndarray) – Second array.

Returns

Multi-class dice.

Return type

float

plot_roc_curve(tpr, fpr, opt_thr_idx, fname_out)[source]

Plot ROC curve.

Parameters
  • tpr (list) – True positive rates.

  • fpr (list) – False positive rates.

  • opt_thr_idx (int) – Index of the optimal threshold.

  • fname_out (str) – Output filename.

plot_dice_thr(thr_list, dice_list, opt_thr_idx, fname_out)[source]

Plot Dice results against thresholds.

Parameters
  • thr_list (list) – Thresholds list.

  • dice_list (list) – Dice results.

  • opt_thr_idx (int) – Index of the optimal threshold.

  • fname_out (str) – Output filename.

Postprocessing API

nifti_capable(wrapped)[source]

Decorator to make a given function compatible with input being Nifti objects.

Parameters

wrapped – Given function.

Returns

Function’s return.

binarize_with_low_threshold(wrapped)[source]

Decorator to set low values (< 0.001) to 0.

Parameters

wrapped – Given function.

Returns

Function’s return.

multilabel_capable(wrapped)[source]

Decorator to make a given function compatible with multilabel images.

Parameters

wrapped – Given function.

Returns

Function’s return.

threshold_predictions(predictions, thr=0.5)[source]

Threshold a soft (i.e. not binary) array of predictions given a threshold value, and return a binary array.

Parameters
  • predictions (ndarray or nibabel object) – Image to binarize.

  • thr (float) – Threshold value: voxels with a value below thr are set to 0, and to 1 otherwise.

Returns

ndarray or nibabel (same object as the input) containing only zeros or ones. Output type is int.

Return type

ndarray
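A small sketch with a plain numpy array (the ivadomed.postprocessing import path is assumed from the section name):

    import numpy as np
    from ivadomed.postprocessing import threshold_predictions

    soft_pred = np.asarray([[0.1, 0.4], [0.6, 0.9]])
    hard_pred = threshold_predictions(soft_pred, thr=0.5)
    # per the thresholding rule above, hard_pred is [[0, 0], [1, 1]]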

keep_largest_object(predictions)[source]

Keep the largest connected object from the input array (2D or 3D).

Parameters

predictions (ndarray or nibabel object) – Input segmentation. Image could be 2D or 3D.

Returns

ndarray or nibabel (same object as the input).

keep_largest_object_per_slice(predictions, axis=2)[source]

Keep the largest connected object for each 2D slice, along a specified axis.

Parameters
  • predictions (ndarray or nibabel object) – Input segmentation. Image could be 2D or 3D.

  • axis (int) – 2D slices are extracted along this axis.

Returns

ndarray or nibabel (same object as the input).

fill_holes(predictions, structure=(3, 3, 3))[source]

Fill holes in the predictions using a given structuring element. Note: This function only works for binary segmentation.

Parameters
  • predictions (ndarray or nibabel object) – Input binary segmentation. Image could be 2D or 3D.

  • structure (tuple of integers) – Structuring element, number of ints equals number of dimensions in the input array.

Returns

ndarray or nibabel (same object as the input). Output type is int.
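The postprocessing filters above can be chained; a minimal sketch on a synthetic 3D mask (module path ivadomed.postprocessing assumed):

    import numpy as np
    from ivadomed.postprocessing import fill_holes, keep_largest_object

    seg = np.zeros((32, 32, 32), dtype=int)
    seg[5:15, 5:15, 5:15] = 1      # large object
    seg[8:12, 8:12, 8:12] = 0      # hole inside the large object
    seg[25:27, 25:27, 25:27] = 1   # small spurious object

    cleaned = fill_holes(seg)               # closes the internal hole
    cleaned = keep_largest_object(cleaned)  # drops the small spurious object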

mask_predictions(predictions, mask_binary)[source]

Mask predictions using a binary mask: sets everything outside the mask to zero.

Parameters
  • predictions (ndarray or nibabel object) – Input binary segmentation. Image could be 2D or 3D.

  • mask_binary (ndarray) – Numpy array with the same shape as predictions, containing only zeros or ones.

Returns

ndarray or nibabel (same object as the input).

coordinate_from_heatmap(nifti_image, thresh=0.3)[source]

Retrieve coordinates of local maxima in a soft segmentation.

Parameters
  • nifti_image (nibabel object) – Nifti image of the soft segmentation.

  • thresh (float) – Relative threshold for local maxima, i.e., after normalizing the min and max between 0 and 1, respectively.

Returns

A list of computed coordinates found by local maximum. each element will be a list composed of [x, y, z]

Return type

list

label_file_from_coordinates(nifti_image, coord_list)[source]

Creates a nifti object with single-voxel labels. Each label has a value of 1. The nifti object has the same orientation as the input.

Parameters
  • nifti_image (nibabel object) – Image whose affine matrix will be used to generate a new image with labels.

  • coord_list (list) – List of coordinates. Each element is [x, y, z]. Orientation should be the same as the image.

Returns

A nifti object containing the single-voxel labels of value 1. The matrix will be the same size as nifti_image.

Return type

nib_pred
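A sketch combining the two functions above on a synthetic soft segmentation (import path assumed; the coordinates in the comment are illustrative):

    import nibabel as nib
    import numpy as np
    from ivadomed.postprocessing import coordinate_from_heatmap, label_file_from_coordinates

    data = np.zeros((32, 32, 32), dtype=np.float32)
    data[10, 10, 10] = 1.0                 # two isolated local maxima
    data[20, 20, 20] = 0.8
    nifti_image = nib.Nifti1Image(data, affine=np.eye(4))

    coord_list = coordinate_from_heatmap(nifti_image, thresh=0.3)    # e.g., [[10, 10, 10], [20, 20, 20]]
    label_img = label_file_from_coordinates(nifti_image, coord_list)
    nib.save(label_img, "labels.nii.gz")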

remove_small_objects(data, bin_structure, size_min)[source]

Removes all unconnected objects smaller than the minimum specified size.

Parameters
  • data (ndarray) – Input data.

  • bin_structure (ndarray) – Structuring element that defines feature connections.

  • size_min (int) – Minimal object size to keep in input data.

Returns

Array with the small objects removed.

Return type

ndarray

class Postprocessing(postprocessing_params, data_pred, dim_lst, filename_prefix)[source]

Bases: object

Postprocessing steps manager

Parameters
  • postprocessing_params (dict) – Indicates postprocessing steps (in the right order)

  • data_pred (ndarray) – Prediction from the model.

  • dim_lst (list) – Dimensions of a voxel in mm.

  • filename_prefix (str) – Path to prediction file without suffix.

Attributes
  • postprocessing_params (dict) – Indicates postprocessing steps (in the right order)

  • data_pred (ndarray) – Prediction from the model.

  • px (float) – Resolution (mm) along the first axis.

  • py (float) – Resolution (mm) along the second axis.

  • pz (float) – Resolution (mm) along the third axis.

  • filename_prefix (str) – Path to prediction file without suffix.

  • n_classes (int) – Number of classes.

  • bin_struct (ndarray) – Binary structure.

__init__(postprocessing_params, data_pred, dim_lst, filename_prefix)[source]
apply()[source]

Parse postprocessing parameters and apply postprocessing steps to data.

binarize_prediction(thr)[source]

Binarize output.

binarize_maxpooling()[source]

Binarize by setting to 1 the voxel having the max prediction across all classes.

uncertainty(thr, suffix)[source]

Removes the most uncertain predictions.

Parameters
  • thr (float) – Uncertainty threshold.

  • suffix (str) – Suffix of uncertainty filename.

remove_small(unit, thr)[source]

Remove small objects

Parameters
  • unit (str) – Indicates the units of the objects: “mm3” or “vox”

  • thr (int or list) – Minimal object size to keep in input data.

fill_holes()[source]

Fill holes in the predictions

keep_largest()[source]

Keep largest object in prediction

remove_noise(thr)[source]

Remove prediction values under the given threshold

Parameters

thr (float) – Threshold under which predictions are set to 0.

Testing API

Training API

Transformations API

Utils API

class Metavar(value)[source]

Bases: enum.Enum

This class is used to display intuitive input types via the metavar field of argparse.

cuda(input_var, cuda_available=True, non_blocking=False)[source]

Passes input_var to GPU.

Parameters
  • input_var (Tensor) – either a tensor or a list of tensors.

  • cuda_available (bool) – If False, then return identity

  • non_blocking (bool) –

Returns

Tensor
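A usage sketch (assuming the Utils API is importable as ivadomed.utils):

    import torch
    from ivadomed.utils import cuda

    batch = torch.rand(4, 1, 64, 64)
    # Moves the tensor to the GPU when available; returns it unchanged otherwise
    batch = cuda(batch, cuda_available=torch.cuda.is_available(), non_blocking=True)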

unstack_tensors(sample)[source]

Unstack tensors.

Parameters

sample (Tensor) –

Returns

list of Tensors.

Return type

list

generate_sha_256(context: dict, df, file_lst: List[str]) → None[source]

Generate sha256 hashes for the training files.

Parameters
  • context (dict) – configuration context.

  • df (pd.DataFrame) – Dataframe containing all BIDS image files indexed and their metadata.

  • file_lst (List[str]) – list of strings containing training files

save_onnx_model(model, inputs, model_path)[source]

Convert PyTorch model to ONNX model and save it as model_path.

Parameters
  • model (nn.Module) – PyTorch model.

  • inputs (Tensor) – Tensor, used to inform shape and axes.

  • model_path (str) – Output filename for the ONNX model.
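A minimal sketch with a toy model (the ivadomed.utils import path and the model are assumptions for illustration):

    import torch
    import torch.nn as nn
    from ivadomed.utils import save_onnx_model

    model = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Sigmoid())
    dummy_input = torch.rand(1, 1, 64, 64)            # informs the input shape and axes
    save_onnx_model(model, dummy_input, "model.onnx")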

define_device(gpu_id)[source]

Define the device used for the process of interest.

Parameters

gpu_id (int) – GPU ID.

Returns

True if cuda is available, along with the selected device.

Return type

Bool, device

display_selected_model_spec(params)[source]

Display in terminal the selected model and its parameters.

Parameters

params (dict) – Keys are param names and values are param values.

display_selected_transfoms(params, dataset_type)[source]

Display in terminal the selected transforms for a given dataset.

Parameters
  • params (dict) –

  • dataset_type (list) – e.g. [‘testing’] or [‘training’, ‘validation’]

plot_transformed_sample(before, after, list_title=None, fname_out='', cmap='jet')[source]

Utility tool to plot a sample before and after transform, for debugging.

Parameters
  • before (ndarray) – Sample before transform.

  • after (ndarray) – Sample after transform.

  • list_title (list of str) – Subtitles of before and after, respectively.

  • fname_out (str) – Output filename where the plot is saved if provided.

  • cmap (str) – Matplotlib colour map.

check_exe(name)[source]

Ensure that a program exists.

Parameters

name (str) – Name or path to program.

Returns

path of the program or None

Return type

str or None

exception ArgParseException[source]

Bases: Exception

get_arguments(parser, args)[source]

Get arguments from function input or command line.

Parameters
  • parser (argparse.ArgumentParser) – ArgumentParser object

  • args (list) – either a list of arguments or None. The list should be formatted like this: [“-d”, “SOME_ARG”, “–model”, “SOME_ARG”]

format_path_data(path_data)[source]
Parameters

path_data (list or str) – Either a list of paths, or just one path.

Returns

A list of paths

Return type

list
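For example (paths are illustrative; the ivadomed.utils import path is assumed), a single path is expected to be wrapped into a list, per the description above:

    from ivadomed.utils import format_path_data

    format_path_data("path/to/dataset")           # expected: ["path/to/dataset"]
    format_path_data(["path/one", "path/two"])    # already a list, returned as a list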

init_ivadomed()[source]

Initialize ivadomed for typical terminal usage.

Visualize API

Inference API

Mixup API

mixup(data, targets, alpha, debugging=False, ofolder=None)[source]

Compute the mixup data.

See also

Zhang, Hongyi, et al. “mixup: Beyond empirical risk minimization.” arXiv preprint arXiv:1710.09412 (2017).

Parameters
  • data (Tensor) – Input images.

  • targets (Tensor) – Input masks.

  • alpha (float) – MixUp parameter.

  • debugging (Bool) – If True, then samples of mixup are saved as png files.

  • ofolder (str) – If debugging, output folder where “mixup” folder is created and samples are saved.

Returns

Mixed image, Mixed mask.

Return type

Tensor, Tensor
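A usage sketch with random tensors (the ivadomed.mixup import path is an assumption based on this section’s name):

    import torch
    from ivadomed.mixup import mixup

    data = torch.rand(8, 1, 64, 64)                     # input images
    targets = (torch.rand(8, 1, 64, 64) > 0.5).float()  # input masks

    mixed_data, mixed_targets = mixup(data, targets, alpha=0.4)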

save_mixup_sample(ofolder, input_data, labeled_data, lambda_tensor)[source]

Save mixup samples as png files in a “mixup” folder.

Parameters
  • ofolder (str) – Output folder where “mixup” folder is created and samples are saved.

  • input_data (Tensor) – Input image.

  • labeled_data (Tensor) – Input masks.

  • lambda_tensor (Tensor) –

Uncertainty API

run_uncertainty(image_folder)[source]

Compute uncertainty from model prediction.

This function loops across the model predictions (nifti masks) and estimates the uncertainty from the Monte Carlo samples. Both voxel-wise and structure-wise uncertainty are estimated.

Parameters

image_folder (str) – Folder containing the Monte Carlo samples.

combine_predictions(fname_lst, fname_hard, fname_prob, thr=0.5)[source]

Combine predictions from Monte Carlo simulations.

Combine predictions from Monte Carlo simulations and save the results as:
  1. fname_prob, a soft segmentation obtained by averaging the Monte Carlo samples.

  2. fname_hard, a hard segmentation obtained by thresholding with thr.

Parameters
  • fname_lst (list of str) – List of the Monte Carlo samples.

  • fname_hard (str) – Filename for the output hard segmentation.

  • fname_prob (str) – Filename for the output soft segmentation.

  • thr (float) – Between 0 and 1. Used to threshold the soft segmentation and generate the hard segmentation.

voxelwise_uncertainty(fname_lst, fname_out, eps=1e-05)[source]

Estimate voxel-wise uncertainty.

Voxel-wise uncertainty is estimated as entropy over all N MC probability maps, and saved in fname_out.

Parameters
  • fname_lst (list of str) – List of the Monte Carlo samples.

  • fname_out (str) – Output filename.

  • eps (float) – Epsilon value to deal with np.log(0).

structurewise_uncertainty(fname_lst, fname_hard, fname_unc_vox, fname_out)[source]

Estimate structure-wise uncertainty.

Structure-wise uncertainty is estimated from the N MC probability maps (fname_lst) and saved in fname_out with the following suffixes:

  • ‘-cv.nii.gz’: coefficient of variation

  • ‘-iou.nii.gz’: intersection over union

  • ‘-avgUnc.nii.gz’: average voxel-wise uncertainty within the structure.

Parameters
  • fname_lst (list of str) – List of the Monte Carlo samples.

  • fname_hard (str) – Filename of the hard segmentation, which is used to compute the avgUnc by providing a mask of the structures.

  • fname_unc_vox (str) – Filename of the voxel-wise uncertainty, which is used to compute the avgUnc.

  • fname_out (str) – Output filename.

Maths API

rescale_values_array(arr, minv=0.0, maxv=1.0, dtype=<class 'numpy.float32'>)[source]

Rescale the values of numpy array arr to be from minv to maxv.

Parameters
  • arr (ndarray) – Array whose values will be rescaled.

  • minv (float) – Minimum value of the output array.

  • maxv (float) – Maximum value of the output array.

  • dtype (type) – Cast array to this type before performing the rescaling.

gaussian_kernel(kernlen=10)[source]

Create a 2D gaussian kernel with user-defined size.

Parameters

kernlen (int) – size of kernel

Returns

a 2D array of size (kernlen,kernlen)

Return type

ndarray

heatmap_generation(image, kernel_size)[source]

Generate a heatmap from an image containing a single-voxel label, using convolution with a Gaussian kernel.

Parameters
  • image (ndarray) – 2D array containing a single-voxel label.

  • kernel_size (int) – Size of the Gaussian kernel.

Returns

2D array heatmap matching the label.

Return type

ndarray
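A short sketch tying the two functions above together (the ivadomed.maths import path is assumed from the section name):

    import numpy as np
    from ivadomed.maths import gaussian_kernel, heatmap_generation

    kernel = gaussian_kernel(kernlen=10)       # 2D Gaussian kernel of shape (10, 10)

    label = np.zeros((64, 64), dtype=np.float32)
    label[32, 32] = 1                          # single-voxel label
    heatmap = heatmap_generation(label, kernel_size=10)   # smooth heatmap centred on the label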