Scripts

This section contains a collection of useful scripts for quality control during the training of models.

ivadomed_visualize_transforms

run_visualization(input, config, number, output, roi)

Utility function to visualize Data Augmentation transformations.

Data augmentation is a key part of the Deep Learning training scheme. This script aims to facilitate the fine-tuning of data augmentation parameters. To do so, it provides a step-by-step visualization of the transformations that are applied to the data.

This function applies a series of transformations (defined in a configuration file -c) to -n 2D slices randomly extracted from an input image (-i), and saves the resulting sample as a PNG file after each transform.
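
For reference, the transformation section of such a configuration file could look like the sketch below. This is only an illustration: the exact transform names and parameter values depend on your ivadomed version and use case.

{
    "transformation": {
        "Resample": {"wspace": 0.75, "hspace": 0.75},
        "CenterCrop": {"size": [128, 128]},
        "ElasticTransform": {"alpha_range": [28.0, 30.0], "sigma_range": [3.5, 4.5], "p": 0.1},
        "NormalizeInstance": {"applied_to": ["im"]}
    }
}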

For example:

ivadomed_visualize_transforms -i t2s.nii.gz -n 1 -c config.json -r t2s_seg.nii.gz

Provides a visualization of a series of three transformations on a randomly selected slice:

_images/transforms_im.png

And on a binary mask:

ivadomed_visualize_transforms -i t2s_gmseg.nii.gz -n 1 -c config.json -r t2s_seg.nii.gz

Gives:

_images/transforms_gt.png
Parameters:
  • input (string) – Image filename. Flag: --input, -i
  • config (string) – Configuration file filename. Flag: --config, -c
  • number (int) – Number of slices randomly extracted. Flag: --number, -n
  • output (string) – Folder path where the results are saved. Flag: --ofolder, -o
  • roi (string) – Filename of the region of interest. Only needed if ROICrop is part of the transformations. Flag: --roi, -r

ivadomed_convert_to_onnx

convert_pytorch_to_onnx(model, dimension, gpu=0)

Convert PyTorch model to ONNX.

The integration of Deep Learning models into the clinical routine requires CPU-optimized models. Exporting PyTorch models to the ONNX format and running inference with ONNX Runtime is a time- and memory-efficient way to answer this need.

This function converts a model from PyTorch to the ONNX format, with the -d flag indicating whether it is a 2D or 3D model.
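
As an illustration, the exported model can then be run with ONNX Runtime. A minimal sketch, assuming a 2D model exported as model.onnx (a hypothetical filename) with a (batch, channels, height, width) input:

import numpy as np
import onnxruntime as ort

# Load the exported model (model.onnx is a hypothetical filename)
session = ort.InferenceSession("model.onnx")

# Build a dummy input matching the model's expected shape; adjust to your model
input_name = session.get_inputs()[0].name
dummy_input = np.random.randn(1, 1, 128, 128).astype(np.float32)

# Run CPU inference and inspect the output shape
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)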

Parameters:
  • model (string) – Model filename. Flag: --model, -m.
  • dimension (int) – Indicates whether the model is 2D or 3D. Choice between 2 or 3. Flag: --dimension, -d
  • gpu (string) – GPU ID, if available. Flag: --gpu, -g

ivadomed_automate_training

automate_training(config, param, fixed_split, all_combin, n_iterations=1, run_test=False, all_logs=False, thr_increment=None)

Automate multiple training processes on multiple GPUs.

Hyperparameter optimization of models is tedious and time-consuming. This function automates this optimization across multiple GPUs. It runs trainings on the same training and validation datasets by combining a given set of parameters with a set of values for each of these parameters. Results are collected for each combination and reported in a dataframe to allow their comparison. The script efficiently allocates each training to one of the available GPUs.

Usage example:

ivadomed_automate_training -c config.json -p params.json -n n_iterations
Example of dataframe:
  log_directory training_parameters best_training_dice best_training_loss best_validation_dice best_validation_loss test_dice
0 testing_script-batch_size=2 {'batch_size': 2, 'loss': {'name': 'DiceLoss'}, 'training_time': {'num_epochs': 1, 'early_stopping_patience': 50, 'early_stopping_epsilon': 0.001}, 'scheduler': {'initial_lr': 0.001, 'lr_scheduler': {'name': 'CosineAnnealingLR', 'base_lr': 1e-05, 'max_lr': 0.01}}, 'balance_samples': False, 'mixup_alpha': None, 'transfer_learning': {'retrain_model': None, 'retrain_fraction': 1.0}} -0.002152157641830854 -0.002152157641830854 -0.0013434450065687997 -0.0013434450065687997 0.011467444120505346
1 testing_script-batch_size=4 {'batch_size': 4, 'loss': {'name': 'DiceLoss'}, 'training_time': {'num_epochs': 1, 'early_stopping_patience': 50, 'early_stopping_epsilon': 0.001}, 'scheduler': {'initial_lr': 0.001, 'lr_scheduler': {'name': 'CosineAnnealingLR', 'base_lr': 1e-05, 'max_lr': 0.01}}, 'balance_samples': False, 'mixup_alpha': None, 'transfer_learning': {'retrain_model': None, 'retrain_fraction': 1.0}} -0.0017151038045994937 -0.0017151038045994937 -0.000813339815067593 -0.000813339815067593 0.030501089324618737
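
To inspect the collected results programmatically, the dataframe can be loaded with pandas. A minimal sketch, assuming the results were saved as output.csv (a hypothetical filename); the column names match the example above:

import pandas as pd

# Load the dataframe produced by automate_training (hypothetical filename)
df = pd.read_csv("output.csv")

# Rank hyperparameter combinations by their best validation Dice score
ranked = df.sort_values(by="best_validation_dice", ascending=False)
print(ranked[["log_directory", "best_validation_dice", "test_dice"]])
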
Parameters:
  • config (string) – Configuration filename, which is used as a skeleton to configure the training. Some of its parameters (defined in the param file) are modified across experiments. Flag: --config, -c
  • param (string) –

    JSON file containing the parameter configurations to compare. The parameter "keys" of this file need to match the parameter "keys" of the config file. The parameter "values" are given as a list. Flag: --param, -p

    Example:

    {"default_model": {"depth": [2, 3, 4]}}
    
  • fixed_split (bool) – If True, all the experiments are run on the same training/validation/testing subdatasets. Flag: --fixed-split
  • all_combin (bool) – If True, all parameters combinations are run. Flag: --all-combin
  • n_iterations (int) – Controls the number of times each experiment (i.e., set of parameters) is run. Flag: --n-iteration, -n
  • run_test (bool) – If True, the trained model is also run on the testing subdataset. Flag: --run-test
  • all_logs (bool) – If True, all the log directories are kept for every iteration. Flag: --all-logs, -l
  • thr_increment (float) – A threshold analysis is performed at the end of the training using the trained model and the validation sub-dataset to find the optimal binarization threshold. The specified value indicates the increment between 0 and 1 used during the ROC analysis (e.g. 0.1). Flag: -t, --thr-increment

ivadomed_compare_models

compute_statistics(dataframe, n_iterations, run_test=True, csv_out='comparison_models.csv')

Compares the performance of models at inference time on a common testing dataset using paired t-tests.

It uses a dataframe generated by scripts/automate_training.py with the parameter --run-test (used to run the models on the testing dataset). It outputs dataframes that store the different statistics (average, standard deviation, and p-value between runs). All can be combined and stored in a CSV file.

Example of dataframe:
log_directory avg_best_training_dice avg_best_training_loss avg_best_validation_dice avg_best_validation_loss avg_test_dice std_best_training_dice std_best_training_loss std_best_validation_dice std_best_validation_loss std_test_dice p-value_testing_script-batch_size=2 p-value_testing_script-batch_size=4
testing_script-batch_size=2 -0.0019473766224109568 -0.0019473766224109568 -0.0024093631698178797 -0.0024093631698178797 0.0009537434430138293 0.0009893736332192554 0.0009893736332192554 3.545588614363517e-05 3.545588614363517e-05 0.0 1.0 0.030020368472776473
testing_script-batch_size=4 -0.0016124938847497106 -0.0016124938847497106 -0.001482845204009209 -0.001482845204009209 0.0009537434430138293 0.00011548220028372273 0.00011548220028372273 0.00022956790548947826 0.00022956790548947826 0.0 0.030020368472776473 1.0

Usage example:

ivadomed_compare_models -df results.csv -n 2 --run_test
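
For context, the paired t-test underlying this comparison can be sketched with scipy. This is only an illustration of the statistic, not the script's exact implementation; the per-iteration Dice scores below are made-up values:

from scipy.stats import ttest_rel

# Test Dice scores of two models across the same iterations (made-up values)
dice_model_a = [0.85, 0.87, 0.86]
dice_model_b = [0.82, 0.84, 0.83]

# Paired t-test on the per-iteration differences
t_stat, p_value = ttest_rel(dice_model_a, dice_model_b)
print(t_stat, p_value)
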
Parameters:
  • dataframe (pandas.DataFrame) – Dataframe of results generated by automate_training. Flag: --dataframe, -df
  • n_iterations (int) – Indicates the number of times each experiment (i.e., set of parameters) was run. Flag: --n_iteration, -n
  • run_test (bool) – Indicates whether the comparison is done on the performance on the testing subdataset (True) or on the training/validation subdatasets (False). Flag: --run_test
  • csv_out (string) – Output CSV name to store computed values (e.g., df.csv). Default value is comparison_models.csv. Flag -o, --output

ivadomed_prepare_dataset_vertebral_labeling

extract_mid_slice_and_convert_coordinates_to_heatmaps(path, suffix, aim=-1)

This function takes as input a path to a dataset and generates a set of images: (i) mid-sagittal image and (ii) heatmap of disc labels associated with the mid-sagittal image.

Example:

ivadomed_prepare_dataset_vertebral_labeling -p path/to/bids -s _T2w -a 0
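
For intuition, a disc label can be turned into a heatmap by placing a 2D Gaussian at the label coordinate. A minimal sketch of that idea (not the script's exact implementation; the image size, coordinate, and sigma are made-up values):

import numpy as np

def gaussian_heatmap(shape, center, sigma=10.0):
    """Build a 2D Gaussian heatmap of `shape` with its peak at `center`."""
    y, x = np.ogrid[:shape[0], :shape[1]]
    cy, cx = center
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

# 256x256 heatmap with a peak at a made-up disc coordinate
heatmap = gaussian_heatmap((256, 256), center=(120, 130))
print(heatmap.max(), heatmap.shape)
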
Parameters:
  • path (string) – Path to the BIDS dataset from which images will be generated. Flag: --path, -p
  • suffix (string) – Suffix of the image that will be processed (e.g., _T2w). Flag: --suffix, -s
  • aim (int) – If aim is not 0, retrieves only labels with value = aim; else creates a heatmap with all labels. Flag: --aim, -a
Returns:

None. Images are saved in the BIDS folder.

ivadomed_extract_small_dataset

extract_small_dataset(input, output, n=10, contrast_list=None, include_derivatives=True, seed=-1)

Extract small BIDS dataset from a larger BIDS dataset.

Example:

ivadomed_extract_small_dataset -i path/to/BIDS/dataset -o path/of/small/BIDS/dataset -n 10 -c T1w,T2w -d 0 -s 1234
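
The same extraction can be scripted from Python. A minimal sketch mirroring the command above, assuming the function is importable from the ivadomed scripts package (the module path is an assumption and may vary between versions):

# Module path is an assumption; check your ivadomed version
from ivadomed.scripts.extract_small_dataset import extract_small_dataset

extract_small_dataset(
    input="path/to/BIDS/dataset",
    output="path/of/small/BIDS/dataset",
    n=10,
    contrast_list=["T1w", "T2w"],
    include_derivatives=False,
    seed=1234,
)
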
Parameters:
  • input (str) – Input BIDS folder. Flag: --input, -i
  • output (str) – Output folder. Flag: --output, -o
  • n (int) – Number of subjects in the output folder. Flag: --number, -n
  • contrast_list (list) – List of image contrasts to include. If set to None, then all available contrasts are included. Flag: --contrasts, -c
  • include_derivatives (bool) – If True, derivatives/labels/ content is also copied; otherwise only the raw images are copied. Flag: --derivatives, -d
  • seed (int) – Set np.random.RandomState to ensure reproducibility: the same subjects will be selected if the function is run several times on the same dataset. If set to -1, each function run is independent. Flag: --seed, -s.

ivadomed_training_curve

run_plot_training_curves(input_folder, output_folder, multiple_training=False, y_lim_loss=None)

Utility function to plot the training curves.

This function uses the TensorBoard summaries generated during a training to plot, for each epoch:

  • the training against the validation loss
  • the metrics computed on the validation sub-dataset.

It can consider one log directory at a time, for example:

_images/plot_loss_single.png

… or multiple (using multiple_training=True). In that case, the solid line represents the mean value across the trainings, whereas the envelope represents the standard deviation:

_images/plot_loss_multiple.png

It is also possible to compare multiple trainings (or sets of trainings) by listing them in -i, separated by commas:

_images/plot_loss_mosaic.png
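
If you need the raw scalar values rather than the rendered plots, the event files can also be read directly with TensorBoard's Python API. A minimal sketch, assuming a single log directory; the path and the losses/train_loss tag name are assumptions, so list the available tags first:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point the accumulator at a log directory (hypothetical path) and load its events
ea = EventAccumulator("path/to/log_directory")
ea.Reload()

# List the available scalar tags, then read one (the tag name is an assumption)
print(ea.Tags()["scalars"])
for event in ea.Scalars("losses/train_loss"):
    print(event.step, event.value)
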
Parameters:
  • input_folder (str) – Log directory name. Flag: --input, -i. If using --multiple, this parameter indicates the suffix path of all log directories of interest. To compare trainings or sets of trainings (using --multiple) with subplots, list the paths separated by commas, e.g., path_log_dir1,path_log_dir2
  • output_folder (str) – Output folder. Flag: --output, -o.
  • multiple_training (bool) – Indicates whether multiple log directories are considered (True) or not (False). Flag: --multiple. All available folders with -i as prefix are considered. The plot represents the mean value (solid line) surrounded by the standard deviation (envelope).
  • y_lim_loss (list) – List of the lower and upper limits of the y-axis of the loss plot.

ivadomed_download_data

install_data(url, dest_folder, keep=False)

Download a data bundle from a URL and install it in the destination folder.

Usage example

ivadomed_download_data -d data_testing -o ivado_testing_data
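
The same download can be scripted from Python. A minimal sketch, assuming the function is importable from the ivadomed scripts package (the module path and the URL are assumptions; the CLI resolves bundle names to URLs internally):

# Module path is an assumption; check your ivadomed version
from ivadomed.scripts.download_data import install_data

# A direct URL is assumed here (hypothetical); the CLI maps bundle names to URLs
install_data(url="https://example.com/data_testing.zip", dest_folder="ivado_testing_data")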

Existing data bundles:

  • data_example_spinegeneric : 10 randomly picked subjects from Spine Generic. Used for tutorials and examples in ivadomed.
  • data_testing : Data used for integration/unit tests in ivadomed.
  • t2_tumor : Cord tumor segmentation model, trained on T2-weighted contrast.
  • t2star_sc : Spinal cord segmentation model, trained on T2-star contrast.
  • mice_uqueensland_gm : Gray matter segmentation model on mouse MRI. Data from the University of Queensland.
  • mice_uqueensland_sc : Cord segmentation model on mouse MRI. Data from the University of Queensland.
  • findcord_tumor : Cord localisation model, trained on T2-weighted images with tumor.
  • model_find_disc_t1 : Intervertebral disc detection model trained on T1-weighted images.
  • model_find_disc_t2 : Intervertebral disc detection model trained on T2-weighted images.

Note

The function tries to be smart about the data contents. Examples:

a. If the archive only contains a README.md, and the destination folder is ${dst}, ${dst}/README.md will be created. Note: an archive that does not contain a single root folder is commonly known as a "bomb" because it scatters files anywhere in the current working directory (see Tarbomb).

b. If the archive contains a ${dir}/README.md, and the destination folder is ${dst}, ${dst}/README.md will be created. Note: typically the package will be called ${basename}-${revision}.zip and contain a root folder named ${basename}-${revision}/ under which all the other files will be located. The right thing to do in this case is to take the files from there and install them in ${dst}.

  • Uses download_data() to retrieve the data.
  • Uses unzip() to extract the bundle.

Parameters:
  • url (string) – URL or sequence thereof (if mirrors). For this package there is a dictionary listing existing data bundles with their URLs. Type ivadomed_download_data -h to see the possible values. Flag -d
  • dest_folder (string) – Destination directory for the data (to be created). If not provided, the output folder will be named after the data bundle. Flag -o, --output
  • keep (bool) – Whether to keep existing data in the destination folder (if it exists). Flag -k, --keep