• Dragonfly Free Trial

    Advance your image processing results to new levels by harnessing the power of convolutional neural networks.


    Image courtesy of Christopher Bleck, Ph.D. (NIH)


Simple Toolchain for Upscaled and Distributed Training and Tuning of CNNs

A MICCAI 2021 (Strasbourg) Tutorial by Mike Marsh, Ph.D. and Nicolas Piché, Ph.D.

This half-day tutorial, a satellite event in conjunction with MICCAI 2021, uses lectures, demonstrations, and hands-on exercises to introduce participants to easy and high-productivity testing and deployment of convolutional neural networks for semantic segmentation and image enhancement with Dragonfly.

Exercises will be carried out on the Dragonfly software platform. Attention will be given to tools for the preparation and curation of training data, as well as to a flexible and extensive framework for image pre-processing. The behavior and parameters of recent and popular semantic segmentation CNN models will be reviewed, and participants will be taught how to quickly import, edit, and test such models, as well as how to reproduce such models de novo. Recent lessons in hyperparameter tuning will be discussed, and participants will be taught how to batch experimental training to cover the appropriate parameter space using cluster GPU resources. All exercises will be performed with GUI interfaces and, when appropriate, equivalent console-based interaction will be demonstrated.

Please share this invitation with colleagues who will attend MICCAI 2021.

Friday, October 1, 2021, from 9:00–13:00 (UTC)
NOTE: Take advantage of early fees by registering before September 3, 2021.

Register on the MICCAI 2021 Website


The tutorial will guide users through prototyping, parameter-tuning, evaluating, and deploying convolutional neural network models with Dragonfly for semantic image segmentation and image enhancement in medical imaging, as well as research imaging.

Dragonfly is a software platform for the interactive visualization and processing of 2D, 3D, 4D, and hyperspectral scientific images. It is widely adopted in academic research and has been free for non-commercial use since 2016. The tutorial will use Dragonfly to address all practical matters related to:

  • Image import/export (including DICOM)
  • Toolkits for image pre/post-processing and for image visualization
  • A toolkit (and wizard) for ground-truth curation of image segmentation
  • Loss functions, optimizers, model depth, architecture variants, training volume, data augmentation, epoch count, patch size, and batch size
  • Batch-processing scale-up with on-premises or Cloud GPU servers
  • Import of H5 models, plus design and editing of network models (activation-function tuning, etc.)
  • Semantic segmentation, multi-input, regression, and super-resolution models
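The training parameters named above (loss function, optimizer, model depth, patch size, batch size, epoch count, augmentation) can be pictured as a single configuration object. The sketch below is a plain-Python illustration of that parameter bundle; the class and field names are assumptions made for this summary, not Dragonfly's API:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingConfig:
    """Illustrative bundle of the tunable training parameters listed above."""
    loss_function: str = "categorical_crossentropy"
    optimizer: str = "adam"
    model_depth: int = 4          # e.g. number of U-Net encoder levels
    patch_size: int = 64          # side length of the 2D training patches
    batch_size: int = 16
    epoch_count: int = 50
    augmentations: list = field(default_factory=lambda: ["flip", "rotate"])

# Each experimental run overrides only the parameters under study.
config = TrainingConfig(patch_size=128, batch_size=8)
```

Grouping the knobs this way makes it easy to generate many runs that differ in one or two fields, which is exactly what hyperparameter sweeps need.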

Participants who complete this tutorial will know how to train and use deep learning models for semantic segmentation and other image enhancement of scientific images. They will understand the parameter-space associated with training and learn how to use batch-distributed training over compute resources to explore the parameters efficiently for performance optimization.
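To make the idea of covering a parameter space with batched training runs concrete, here is a stdlib-only sketch of grid enumeration and round-robin assignment to workers; the grid values and helper names are hypothetical and this is not Dragonfly's scheduler:

```python
from itertools import product

# Hypothetical parameter grid for a segmentation model.
grid = {
    "patch_size": [32, 64, 128],
    "batch_size": [8, 16],
    "loss": ["dice", "categorical_crossentropy"],
}

def enumerate_jobs(grid):
    """Expand the grid into one parameter dict per training run."""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]

def assign_to_workers(jobs, n_workers):
    """Round-robin the runs across n_workers GPU nodes."""
    return [jobs[i::n_workers] for i in range(n_workers)]

jobs = enumerate_jobs(grid)          # 3 * 2 * 2 = 12 runs
shards = assign_to_workers(jobs, 4)  # 4 shards of 3 runs each
```

Each shard can then be submitted as one batch job to a cluster or Cloud GPU node, so the whole grid runs in parallel.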

Deep learning is of broad interest in all advanced technology sectors, and in medical imaging in particular. A few years ago, deep learning expertise was limited to artificial-intelligence specialists with strong software skills. The software toolchain has since become more broadly available to those with less experience, but advanced use still demands considerable expertise. This tutorial will give both casual and expert users exposure to a free (for non-commercial use) toolkit that greatly simplifies deep learning optimization and scalability for medical imaging tasks.


The first unit (20 minutes) will be a lecture on the roles of deep learning in scientific image enhancement and semantic segmentation, followed by a discussion of the parameters associated with optimizing deep learning performance.

The second unit will be a demonstration and hands-on exercises covering the Dragonfly toolchain and workflows for image import and image processing, including discussion of:

  • Image import (DICOM and conventional file formats) from disk and from PACS repositories
  • 2D and 3D image visualization
  • Correlative framework for full spatial coordinate bookkeeping in multi-modal analyses
  • Manual and automatic tools for labeling tissue, organs, or other biomedically relevant regions of interest
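As background on the DICOM import step above: a conformant DICOM Part 10 file begins with a 128-byte preamble followed by the four magic bytes `DICM`. A minimal stdlib-only sniff test (illustrative only, not Dragonfly's importer) looks like:

```python
def looks_like_dicom(path):
    """Return True if the file carries the standard DICOM preamble and magic bytes."""
    with open(path, "rb") as f:
        preamble = f.read(128)   # 128-byte preamble; its content is unspecified
        magic = f.read(4)        # must be b"DICM" for a conformant Part 10 file
    return len(preamble) == 128 and magic == b"DICM"
```

Real importers go on to parse the data elements that follow the magic bytes, but this check is enough to distinguish DICOM files from conventional image formats during import.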

The third unit will be a demonstration of the Python console control and developer tools for fully interactive plugin development, along with advanced CNN tuning tools.

  • Programmer tools
    • Python console interactivity
    • Macro engine for encapsulation and distribution of recurring tasks
    • Image history logging for self-documentation of image processing protocols
    • Developer wizards for new feature development
  • CNN Tools
    • Designing and editing novel CNN architectures
    • Direct import of pre-existing Keras models
    • Parameter tuning for automatic data augmentation
    • Parameter tuning for validation and testing
    • Parameter tuning for training parameters (batch size, patch size, loss function, optimizer, etc.)
    • Parameter tuning for early stopping callbacks
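The early-stopping item above reduces to a patience counter on a monitored metric, as in Keras's `EarlyStopping` callback. The class below is a minimal stand-alone re-implementation of that idea, not Dragonfly code:

```python
class EarlyStopping:
    """Stop training when the monitored loss fails to improve for `patience` epochs."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss   # improvement: record it and reset the counter
            self.wait = 0
            return False
        self.wait += 1
        return self.wait >= self.patience

# Demo: with patience=2, the run below halts after two non-improving epochs.
stopper = EarlyStopping(patience=2)
history = []
for val_loss in [0.9, 0.7, 0.71, 0.72, 0.5]:
    history.append(val_loss)
    if stopper.should_stop(val_loss):
        break
```

Tuning `patience` and `min_delta` trades training time against the risk of stopping before the model has converged.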

The fourth unit will be a demonstration and hands-on exercise for the deep learning Segmentation Wizard workflows, initiated with limited training data, followed by iterations of:

  • Training
  • Making predictions
  • Correcting mistakes in those predictions
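This train / predict / correct cycle is generic enough to sketch in a few lines of Python. All the callables below are placeholders standing in for Wizard actions, not actual Dragonfly APIs:

```python
def refine_model(model, frames, accept, train, predict, correct, max_rounds=5):
    """Generic train / predict / correct loop behind wizard-style workflows."""
    for round_idx in range(max_rounds):
        model = train(model, frames)                        # 1. train on current labels
        predictions = [predict(model, f) for f in frames]   # 2. predict on each frame
        if all(accept(p) for p in predictions):             # stop once review passes
            return model, round_idx + 1
        frames = [correct(f, p) for f, p in zip(frames, predictions)]  # 3. fix mistakes
    return model, max_rounds

# Toy demonstration with stand-in callables: the "model" is just a counter
# that improves by one each training round and passes review at 3.
model, rounds = refine_model(
    model=0,
    frames=[1, 2],
    accept=lambda p: p >= 3,
    train=lambda m, fs: m + 1,
    predict=lambda m, f: m,
    correct=lambda f, p: f,
)
```

The point of the loop is that each round of human correction enlarges the effective training set, so the model typically needs only a few iterations to reach acceptable predictions.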


Anyone who trains convolutional neural network models for medical images and wants to increase their productivity will benefit from this tutorial.

Participants are expected to have some previous experience in deep learning, either as a practitioner or a developer, and should have some understanding of convolutional neural networks (objectives, the training-inference lifecycle, and computational frameworks such as TensorFlow or PyTorch). An understanding of loss functions and activation functions, along with working knowledge of the Python console, is preferred but not required.

Participants will be given account credentials to access Cloud computing services for use during the tutorial.


Dr Mike Marsh earned his Ph.D. in Structural and Computational Biology from Baylor College of Medicine. He has worked in imaging technology and solutions, with an emphasis on 3D visualization and analysis, for 20 years. His academic training was in the biological sciences, but he has since been tasked with developing expertise in other areas for collaboration and software support, including the materials sciences and geosciences. This expertise has led to contributed book chapters, review articles, and invited talks spanning these disciplines. His scientific imaging and image-processing experience covers X-ray CT, TEM, SEM, and SEM-related microanalysis. For the last five years, Dr Marsh has worked as the Dragonfly Product Manager at Object Research Systems.

Dr Nicolas Piché earned his Ph.D. at École Polytechnique de Montréal. He has more than 25 years of industry and academic leadership experience related to software design and scientific imaging. He was instrumental in the development of ORS's core imaging platforms and currently leads the research team making the deep learning toolchain accessible to non-experts across a variety of imaging applications. Prior to co-founding ORS, Dr Piché held senior positions in prominent North American and European technology companies and has authored papers in the fields of image processing, microanalysis, deep learning, and medical imaging.

Has Dragonfly Convinced You?

Start enjoying all of Dragonfly's included features with a free 30-day trial version.