Introduction
Allen Cell Segmenter - Machine Learning (Segmenter ML) is a plugin for the napari viewer and the companion of the Classic Segmenter plugin, allowing users to easily curate segmentation outputs and generate robust, accurate deep learning models. The Segmenter ML plugin is based on the iterative deep learning workflow of the Allen Cell & Structure Segmenter, a Python-based open-source toolkit developed at the Allen Institute for Cell Science for 3D segmentation of intracellular structures in fluorescence microscopy images.
In this user guide, you'll learn how to get started with and use the Segmenter ML plugin. The outputs of this plugin are ML segmentation models, trained on users’ data, that produce binary segmentation images of cellular structures of interest from raw 2D/3D microscopy images.
<some figures here>
Prerequisites
- Operating systems: Mac, Windows, and Linux
- NVIDIA GPU preferred but not required - ML model training and prediction take significantly longer on CPU-only computers (from seconds on GPU to several minutes on CPU)
- Raw & segmentation images stored locally on user’s computer
- Currently supported file types: .czi, .ome.tiff, .tiff
- Python 3.10 installed
- napari installed
Use Cases
- Use Case 1: Create a segmentation ML model for minor variances in a cellular structure’s common morphology that often require fine-tuning of segmentation parameters
- Users have segmented their raw images using the Classic Allen Cell & Structure Segmenter tool. However, due to inconsistencies in raw images, users often have to adjust the same recipe to get consistent results. A segmentation DL model would help circumvent this issue
- Use Case 2: Create a segmentation ML model that can identify multiple distinct morphologies for a given cellular structure
- If the cellular structure of interest has multiple morphologies, a single segmentation recipe cannot perform well on all known morphologies; instead, multiple recipes would be used, each targeting a certain morphology. Thus, the user has multiple segmentation results from the same raw image. A segmentation DL model can be trained on all morphologies to produce segmentations in a single step.
General workflow
< image diagram here (Curation, Training, Prediction)>
- Users should start by curating an image dataset from their existing pool of images (raw & segmentation pairs) to be used for training (“Curation”)
- Users then train a ML model using the curated dataset (“Training”)
- Once model training is done, the user can evaluate the segmentation model performance by applying it to raw images and comparing them to their segmentation ground truths
- If the user is satisfied, the segmentation ML model is ready to be used and applied to other data (“Prediction”)
- If not, the user can return to curation to include (1) more images or (2) better images, or (3) adjust training parameters and/or increase the training epochs/time.
Installation
Via the napari plugin manager:
- From the napari menu, navigate to the Plugins tab and select “Install/Uninstall Plugins…”
- In the “filter…” input field, type “allencell-ml-segmenter”
- Click “install”
Support/Help
At any point you can find help under the “Help” dropdown:
- Tutorial: links to this online webpage that we’ll keep up-to-date
- GitHub: links to the source code of the plugin, you can submit any technical issues you encounter here
- Forum: links to Image.sc forum with our specific tag for the plugin where you can ask us any questions regarding the use of the plugin - we are actively monitoring this forum section
- Website: links to our overview webpage of the Segmenter tool, including information about the original Python toolkit, our Classic Segmenter tool with lookup table of available segmentation recipes, relevant links etc.
- Experiment Home: here you can find out where you have set your home directory and can change it if needed
Running the plugin for the first time
- From the napari menu, navigate to the Plugins tab and select “Allen Cell Segmenter ML”
- A pop-up window will appear asking for a home directory to store the plugin’s related data
- Create a directory prior to selecting it
- This directory will store your DL models and curation datasheet
- This directory persists if you reinstall or update your Allen Cell ML Segmenter plugin
- This directory can be changed later if needed
Start a new model
In this mode, you have access to all three modules: Curation, Training, and Prediction
- Starting point: select the action you would like to perform:
- Options:
- “Start a new model”: a new directory will be created in the home directory to store your future model. In this mode, you have access to all three modules: Curation, Training, and Prediction
- “Select an existing model”: any model directory that was previously created by the plugin will be selectable from the dropdown. In this mode, currently you only have access to the Prediction module. This might change soon based on the plugin’s development progress.
- To get started, select the “Start a new model” option and name your new model
- Name your model after the data that you will be using to train the model; e.g. LaminB1-interphase_1
- Click “Apply”
- At any point, if you want to start over, you can close then reopen the plugin. However, if you are in the middle of the curation or training process and close the plugin, this early version of the plugin does not save your progress
- Currently, existing model directories can only be deleted outside of the plugin using the regular file explorer window
- Using Curation module
- Curation: sort through your existing pool of images to create a well-curated set to use for model training
- Prerequisites: set up your curation input
- Raw images and their corresponding segmentation ground truths must share matching file names
- Raw & segmentation images are stored in separate respective directories
- Raw & segmentation images can be different channels within the same image stack (Note: single-channel images process faster compared to multi-channel images)
- Seg 2 is optional
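The matching-name prerequisite above can be checked before starting curation. Below is a minimal sketch, assuming raw and segmentation files use identical names in their respective directories; the `unmatched_raw_images` helper is illustrative and not part of the plugin:

```python
from pathlib import Path

# File types listed in the Prerequisites section
SUPPORTED = {".czi", ".tiff", ".tif"}

def unmatched_raw_images(raw_dir, seg_dir):
    """Return names of raw images that have no same-named segmentation file."""
    raw = {p.stem for p in Path(raw_dir).iterdir() if p.suffix.lower() in SUPPORTED}
    seg = {p.stem for p in Path(seg_dir).iterdir() if p.suffix.lower() in SUPPORTED}
    return sorted(raw - seg)
```

An empty result means every raw image has a matching segmentation and curation can start.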
- Steps
- Curation start screen input (need updated screenshot)
- Make sure “Curation” tab is active
- Browse & select the directories for Raw and Seg 1
- Browse & select the directory for Seg 2 (Optional)
- Click “Start”
- Curation Main screen overview (need updated screenshot)
- Your images will be displayed in the viewport area one set at a time:
- Raw: image layer: grayscale
- Seg 1: label layer: orange
- Seg 2 (optional): label layer: teal
- The image set is selected “Yes” for use in training by default
- Select “No” if you do not want to use this image set in training a ML model
- The curation progress bar shows how far along you are
- Inspect the on-screen image set
- If you have 2 segmentation inputs, Seg 1 will be used by default as the accompanying segmentation when training; if you wish to use Seg 2 instead, select Seg 2 in the “base” segmentation dropdown
- If satisfied, click “Next” to proceed to the next image set
- Using an excluding mask (optional): exclude areas from being used in training
- Click “+Create” under Excluding Mask
- Your cursor automatically changes into the polygon tool
- Draw shape(s) to cover area(s) that you want to exclude
- Tips:
- double click to close the shape
- Use other tools available in napari Layer Control panel to manipulate the shape(s)
- Although the mask is drawn in 2D, the plugin treats it as if the shapes of the mask had been propagated in 3D through all the z slices
- If you no longer need an excluding mask, simply ignore or delete the “Excluding Mask” layer in the Layer List
- Click “Save” if you have created an Excluding Mask
- You can overwrite the saved mask by creating a new mask
- Using a merging mask (optional): merge areas of both segmentation inputs into a single segmentation to use with the raw image when training a model
- Merging replaces part of the “base” segmentation with the other segmentation: if you are practicing Use Case #2 <link above>, specific parts of the raw image might be segmented better by one recipe/algorithm than by the main segmentation recipe; you can combine these segmentations into one single segmentation
- Select your “base” segmentation: Seg 1 selected by default
- Click “+Create” under Merging Mask
- Your cursor automatically changes into the polygon tool
- Draw shape(s) to cover area(s) where you want to replace the “base” segmentation with the other segmentation
- Click “Save”
- At any point you can save your progress by clicking “Save Curation CSV”
- Continue until you reach the last image in your dataset, click “Finish”
- Note: closing the plugin or napari during the curation process will result in the loss of your progress unless a curation CSV has been saved
- Using Training module (need updated screenshot)
- Here you will use your previously curated dataset to train your model
- Prerequisites
- Saved or completed curation CSV (see step above)
- Steps
- Make sure the “Training” tab is active
- The saved/completed curation CSV directory (named “Data”) will be automatically loaded in the Training Image source
- Image channel will be automatically detected and channel 0 is selected by default
- Select Input Patch size - choose the approximate size of your cellular structure of interest
- Tips:
- Load a sample/representative image from your dataset
- Turn on napari’s scale bar: from the napari menu, navigate to the View tab, select “Scale Bar”, then make sure “Scale Bar Visible” is checked
- Use the scale bar to estimate the size of your cellular structure
- IMPORTANT: make sure the size values are multiples of 4
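Since the patch size values must be multiples of 4, a quick way to convert an estimated structure size (in pixels) into a valid value is sketched below. Rounding up, so the whole structure fits inside the patch, is an assumption on our part, not a plugin requirement:

```python
def patch_size(estimated_px: int) -> int:
    """Round an estimated structure size (pixels) up to the nearest multiple of 4."""
    # Ceiling division by 4, then scale back up
    return -(-estimated_px // 4) * 4

# e.g. a structure measured at ~30 px with the scale bar would use a patch size of 32
```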
- Select Model size: Model size defines the complexity of the model - smaller models train faster while larger models train slower but may learn complex relationships better
- Select Image dimension
- Input the number of Training epochs: you can start out with a smaller number to test how quickly your computer can process each step
- Timeout: optional; check this and set a time limit if you would like training to stop after a certain amount of time
- Computers with a GPU will train significantly faster than CPU-only computers
- Computers might need to be left on for an extended amount of time (e.g. overnight) for training; during that time, you can continue to use your computer for other tasks
- Click “Start training”
- A progress bar will appear
- The first training step/epoch takes the most time; the progress bar might stay at 0% for several minutes
- A loss value (the error between the model prediction and the ground truth) is also displayed during and upon completion of training
- Note: training will automatically stop early when the model can no longer be improved
- The plugin will notify you when the training is finished
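To decide whether training needs to run overnight, you can extrapolate from a short test run with a small number of epochs. This back-of-the-envelope helper is only a sketch and assumes the per-epoch time stays roughly constant:

```python
def estimated_training_hours(seconds_per_epoch: float, epochs: int) -> float:
    """Extrapolate total training time from one measured epoch."""
    return seconds_per_epoch * epochs / 3600

# e.g. if a test run shows ~90 s per epoch and you plan 400 epochs,
# estimated_training_hours(90, 400) gives 10.0 hours - an overnight job
```

Remember that early stopping or a timeout may end training sooner than this estimate.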
- Using Prediction module
- Prerequisites
- A complete trained model (see step above)
- Steps
- Make sure the “Prediction” tab is active
- You can choose to apply the segmentation model on:
- On-screen image(s)
- Load image(s) into the napari viewport first
- Select the option “On-screen image(s)”
- Select image(s) you want to apply the model to
- Select Image input channel: the plugin automatically detects the number of channels - channel 0 is selected by default
- Browse & select an output directory
- Click “Run”
- Segmentation result(s) will be displayed on screen
- Image(s) from a directory
- Select the option “Image(s) from a directory”
- Browse & select an input image directory
- Select Image input channel: the plugin automatically detects the number of channels - channel 0 is selected by default
- Browse & select an output directory
- Click “Run”
- A progress bar will appear, reflecting the number of images completed
- The plugin will notify you when the prediction output is complete, with an option to open the output directory
- You can load your segmentation prediction results into napari for viewing
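Loading the results for viewing can also be scripted. The sketch below assumes the output directory contains TIFF segmentations (the file pattern and helper names are ours, not the plugin's); it collects the prediction files and opens them in napari as label layers:

```python
from pathlib import Path

def collect_predictions(output_dir):
    """Return sorted .tiff/.tif prediction files from the output directory."""
    return sorted(p for p in Path(output_dir).iterdir()
                  if p.suffix.lower() in {".tiff", ".tif"})

def view_predictions(output_dir):
    """Open each prediction as a labels layer in a napari viewer."""
    import napari  # imported lazily so collect_predictions works without napari
    viewer = napari.Viewer()
    for p in collect_predictions(output_dir):
        viewer.open(str(p), layer_type="labels")
    napari.run()
```

You can also simply drag the files from the output directory into the napari window.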
- Troubleshooting
- FAQs
- Conclusion
Check back on our website (napari hub?) for plans of upcoming new features!
- Bonus content:
- Classic segmentation generation
- Simple tutorial on Classic workflow
- Workflow editor
- Add a 3D image in the viewer
- Select channel
- Select a workflow based on the thumbnail that most resembles your current image
- “Run all” steps
- Inspect image
- Fine-tune the parameters in each step and rerun the workflow
- Save the workflow
- Batch processing:
- Load the saved workflow
- Input directory of raw images
- Input directory for output segmentations
- Run batch and watch the progress bar
- A notification dialog will pop up once the batch is completed
- Open the output directory and drag the result segmentations into the viewer to review
- Tips:
- You can also open the raw image and overlay it with the segmentation to compare
- If more than one segmentation algorithm is used on the same raw image, rename your segmentations appropriately (e.g. seg1 (main segmentation), seg2 (secondary segmentation))
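When many files are involved, the renaming tip above can be automated. This is a minimal sketch; the `_seg1`/`_seg2` suffix scheme is just the hypothetical convention from the tip, so adapt it to your own naming:

```python
from pathlib import Path

def seg_name(raw_path, seg_index: int) -> str:
    """Build a segmentation filename like 'image_seg1.tiff' from a raw image path."""
    p = Path(raw_path)
    return f"{p.stem}_seg{seg_index}{p.suffix}"

# seg_name("LaminB1.tiff", 1) -> "LaminB1_seg1.tiff"  (main segmentation)
# seg_name("LaminB1.tiff", 2) -> "LaminB1_seg2.tiff"  (secondary segmentation)
```

Keeping a consistent suffix scheme makes it easy to point the Curation module's Seg 1 and Seg 2 inputs at separate directories later.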