
User Guide

Allen Cell Segmenter - Machine Learning
a napari plugin

Introduction

Allen Cell Segmenter - Machine Learning (Segmenter ML) is a plugin for the napari viewer and the companion of the Classic Segmenter plugin. It allows users to easily curate segmentation outputs and generate robust, accurate deep learning models. The Segmenter ML plugin is based on the iterative deep learning workflow of the Allen Cell & Structure Segmenter, a Python-based open-source toolkit developed at the Allen Institute for Cell Science for 3D segmentation of intracellular structures in fluorescence microscopy images.

In this user guide, you'll learn the basic concepts behind the plugin and how to get started using Segmenter ML.
The outputs of this plugin are ML segmentation models, trained on the user's own data, that produce binary segmentation images of cellular structures of interest from raw 2D/3D microscopy images.
<some figures here>

Prerequisites

  1. Operating systems: Mac, Windows, and Linux
  2. NVIDIA GPU preferred but not required - ML model training and prediction take significantly longer on CPU-only computers (from seconds on GPU to several minutes on CPU)
  3. Raw & segmentation images stored locally on the user's computer
  4. Currently supported file types: .czi, .ome.tiff, .tiff
  5. Python 3.10 installed
  6. napari installed
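File-format support can also be checked up front. A minimal helper (illustrative, not part of the plugin) that mirrors the extension list above:

```python
# Check whether a file matches the plugin's currently supported formats.
# The extension list comes from the prerequisites above; the helper name
# is purely illustrative, not a plugin API.
from pathlib import Path

SUPPORTED_EXTENSIONS = (".czi", ".ome.tiff", ".tiff")

def is_supported(path: str) -> bool:
    """Return True if the file name ends with a supported extension."""
    return Path(path).name.lower().endswith(SUPPORTED_EXTENSIONS)

print(is_supported("mitochondria-1_raw.ome.tiff"))  # True
print(is_supported("notes.txt"))                    # False
```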

Use Cases

  1. Use Case 1: Create a segmentation ML model for minor variances in a cellular structure's common morphology that often require fine-tuning of segmentation parameters
    1. Users have segmented their raw images using the Classic Allen Cell & Structure Segmenter tool. However, due to inconsistencies in the raw images, users often have to adjust the same recipe to get consistent results. A segmentation DL model helps circumvent this issue.
  2. Use Case 2: Create a segmentation ML model that can identify multiple distinct morphologies for a given cellular structure
    1. If the cellular structure of interest has multiple morphologies, a single segmentation recipe cannot perform well on all known morphologies; instead, multiple recipes are used, each targeting a certain morphology. The user thus ends up with multiple segmentation results from the same raw image. A segmentation DL model can be trained on all morphologies to produce segmentations in a single step.

General workflow

< image diagram here (Curation, Training, Prediction)>
  1. Users start by curating an image dataset from their existing pool of images (raw & segmentation pairs) to be used for training ("Curation")
  2. Users then train an ML model using the curated dataset ("Training")
  3. Once model training is done, users can evaluate the segmentation model's performance by applying it to raw images and comparing the results to their segmentation ground truths
  4. If satisfied, the segmentation ML model is ready to be applied to other data ("Prediction")
     If not, users can go back to curation to include (1) more images or (2) better images, or (3) adjust training parameters and/or increase the number of training epochs/time
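The iterative loop above can be sketched in pseudocode-style Python; the function names below are illustrative stand-ins for the plugin's Curation, Training, and Prediction modules, not real plugin APIs:

```python
# Sketch of the Curation -> Training -> evaluation loop described above.
# All names and the toy evaluation criterion are made up for illustration.

def curate(image_pairs):
    """Keep only the raw/segmentation pairs the user approved for training."""
    return [pair for pair in image_pairs if pair["approved"]]

def train(dataset, epochs):
    """Stand-in for model training: returns a 'model' record."""
    return {"trained_on": len(dataset), "epochs": epochs}

def evaluate(model):
    """Stand-in for comparing predictions against ground truths."""
    return model["trained_on"] >= 2  # toy acceptance criterion

pairs = [{"approved": True}, {"approved": False}, {"approved": True}]
model = train(curate(pairs), epochs=10)
if evaluate(model):
    print("model ready for Prediction")
else:
    print("go back to Curation or adjust training parameters")
```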

Installation

Via the napari plugin manager:
  1. From the napari menu, navigate to the Plugins tab and select “Install/Uninstall Plugins…”
  2. In the “filter…” input field, type “allencell-ml-segmenter” 
  3. Click “install”

Support/Help

At any point you can find help under the “Help” dropdown:
  1. Tutorial: links to this online user guide, which we keep up to date
  2. GitHub: links to the plugin’s source code; you can submit any technical issues you encounter here
  3. Forum: links to the Image.sc forum with our specific tag for the plugin, where you can ask us any questions about using the plugin - we actively monitor this forum section
  4. Website: links to our overview webpage for the Segmenter tool, including information about the original Python toolkit, our Classic Segmenter tool with a lookup table of available segmentation recipes, relevant links, etc.
  5. Experiment Home: shows where you have set your home directory; you can change it here if needed

Running the plugin for the first time

  1. From the napari menu, navigate to the Plugins tab and select “Allen Cell Segmenter ML”
  2. A pop-up window will appear asking for a home directory to store the plugin’s related data
    1. Create the directory prior to selecting it
    2. This directory will store your DL models and curation datasheet
    3. This directory persists if you reinstall or update the Allen Cell ML Segmenter plugin
    4. This directory can be changed later if needed

Start a new model

  1. Starting point: Select an action you would like to perform:
    1. Options:
      1. “Start a new model”: a new directory will be created in the home directory to store your future model. In this mode, you have access to all three modules: Curation, Training, and Prediction
      2. “Select an existing model”: any model directory previously created by the plugin is selectable from the dropdown. In this mode, you currently only have access to the Prediction module. This might change soon based on the plugin’s development progress.
    2. To get started, select the “Start a new model” option AND name your new model
      1. Name your model after the data you will use to train it, e.g. LaminB1-interphase_1
    3. Click “Apply”
    4. At any point, if you want to start over, you can close and then reopen the plugin. However, if you close the plugin in the middle of the curation or training process, this early version of the plugin does not save your progress
    5. Currently, existing model directories can only be deleted outside the plugin, using your regular file explorer

  1. Using the Curation module
    1. Curation: sorting through your existing pool of images to create a well-chosen set to use for model training
    2. Prerequisites: set up your curation input
      1. Raw images and their corresponding segmentation ground truths share corresponding names, e.g. “mitochondria-1_raw” and “mitochondria-1_seg”
      2. Raw & segmentation images are stored in separate respective directories
      3. Raw & segmentation images can be different channels within the same image stack (Note: single-channel images process faster than multi-channel images)
      4. Seg 2 is optional
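The shared-name convention above makes raw/segmentation pairs easy to match programmatically. A sketch of such matching (the `_raw`/`_seg` suffix split follows the example above and is an assumption for illustration; adapt it to your own naming scheme):

```python
# Pair raw images with their segmentation ground truths by shared base name,
# following the "mitochondria-1_raw" / "mitochondria-1_seg" convention above.
# The "_raw"/"_seg" suffixes are illustrative assumptions, not a plugin rule.
from pathlib import Path

def pair_by_name(raw_names, seg_names):
    seg_index = {Path(n).stem.replace("_seg", ""): n for n in seg_names}
    pairs = []
    for raw in raw_names:
        key = Path(raw).stem.replace("_raw", "")
        if key in seg_index:
            pairs.append((raw, seg_index[key]))
    return pairs

raw_dir = ["mitochondria-1_raw.tiff", "mitochondria-2_raw.tiff"]
seg_dir = ["mitochondria-1_seg.tiff"]
print(pair_by_name(raw_dir, seg_dir))
# [('mitochondria-1_raw.tiff', 'mitochondria-1_seg.tiff')]
```

Only images that have a ground-truth partner end up in a pair; unmatched raw images are simply skipped.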
    3. Steps
      1. Curation start screen input (need updated screenshot)
        1. Make sure the “Curation” tab is active
        2. Browse & select the directories for Raw and Seg 1
           The Image Channel dropdown will automatically detect the available channels and select channel 0 by default
        3. Browse & select the directory for Seg 2 (optional)
        4. Click “Start”
      2. Curation main screen overview (need updated screenshot)
        1. Your images will be displayed in the viewport area one set at a time:
          1. Raw: image layer, grayscale
          2. Seg 1: label layer, orange
          3. Seg 2 (optional): label layer, teal
        2. The image set is marked “Yes” for use in training by default
          1. Select “No” if you do not want to use this image set to train the ML model
        3. The curation progress bar shows how far along you are
        4. Inspect the on-screen image set
          1. If you have two segmentation inputs, Seg 1 will be used as the accompanying segmentation for training by default; if you wish to use Seg 2 instead, select Seg 2 in the dropdown for the “base” segmentation
          2. If satisfied, click “Next” to proceed to the next image set
      3. Using an excluding mask (optional): to exclude areas from being used in training
         Excluding certain areas of the images: if the segmentation(s) generally look good except for some minor areas, you can exclude those areas from being used to train the model
        1. Click “+Create” under Excluding Mask
        2. Your cursor automatically changes into the polygon tool
        3. Draw shape(s) to cover the area(s) that you want to exclude
           Tips:
          1. Double-click to close a shape
          2. Use the other tools available in napari’s Layer Controls panel to manipulate the shape(s)
          3. Although the mask is drawn in 2D, the plugin treats it as if the mask shapes had been propagated in 3D through all the z slices
        4. If you no longer need an excluding mask, simply ignore or delete the “Excluding Mask” layer in the Layer List
        5. Click “Save” if you have created an excluding mask
        6. You can overwrite the saved mask by creating a new mask
      4. Using a merging mask (optional): to merge areas of both segmentation inputs into a single segmentation to use with the raw image in training
        1. Merging replaces part of the “base” segmentation with the other segmentation: if you are following Use Case 2 <link above>, specific parts of the raw image might be segmented better by one recipe/algorithm than by the main segmentation recipe; you can combine these segmentations into one single segmentation
          1. Select your “base” segmentation: Seg 1 is selected by default
          2. Click “+Create” under Merging Mask
          3. Your cursor automatically changes into the polygon tool
          4. Draw shape(s) to cover the area(s) where you want to replace the “base” segmentation with the other segmentation
          5. Click “Save”
        2. At any point you can save your progress by clicking “Save Curation CSV”
           You can start training as soon as you have a saved curation CSV, or continue curating all the images in your dataset before starting training
        3. Continue until you reach the last image in your dataset, then click “Finish”
        4. Note: closing the plugin or napari during the curation process will result in the loss of your progress unless a curation CSV has been saved
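Conceptually, the two mask types act on a 3D (z, y, x) segmentation as sketched below with NumPy: a shape drawn in 2D is applied identically to every z slice, an excluding mask drops the covered voxels, and a merging mask swaps in the other segmentation there. This is an illustration of the behavior described above, not the plugin's internal code.

```python
# Illustrative NumPy sketch of excluding and merging masks on 3D data.
import numpy as np

seg1 = np.ones((4, 8, 8), dtype=np.uint8)      # "base" segmentation (z, y, x)
seg2 = np.full((4, 8, 8), 2, dtype=np.uint8)   # secondary segmentation

mask2d = np.zeros((8, 8), dtype=bool)
mask2d[2:5, 2:5] = True                        # region covered by the drawn shape
mask3d = np.broadcast_to(mask2d, seg1.shape)   # propagated through all z slices

excluded = np.where(mask3d, 0, seg1)           # excluding mask: drop masked voxels
merged = np.where(mask3d, seg2, seg1)          # merging mask: take seg2 in masked area

print(excluded[0, 3, 3], merged[0, 3, 3])      # 0 2
```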




  2. Using the Training module (need updated screenshot)
    1. Here you will use your previously curated dataset to train your model
    2. Prerequisites
      1. A saved or completed curation CSV (see the steps above)
    3. Steps
      1. Make sure the “Training” tab is active
      2. The saved/completed curation CSV directory (named “Data”) will be loaded automatically as the Training image source
      3. The image channel will be detected automatically, and channel 0 is selected by default
      4. Select the Input Patch size - choose the approximate size of your cellular structure of interest
         Tips:
          1. Load a sample/representative image from your dataset
          2. Turn on napari’s scale bar: from the napari menu, navigate to the View tab, select “Scale Bar”, then make sure “Scale Bar Visible” is checked
          3. Use the scale bar to estimate the size of your cellular structure
          4. IMPORTANT: make sure the size values are multiples of 4
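Since the patch dimensions must be multiples of 4, you can round an estimated structure size up accordingly. This is a hypothetical helper for doing the arithmetic, not a plugin function:

```python
def round_up_to_multiple_of_4(size: int) -> int:
    """Round a measured size (in pixels) up to the nearest multiple of 4."""
    return -(-size // 4) * 4  # ceiling division by 4, scaled back up

print(round_up_to_multiple_of_4(30))  # 32
print(round_up_to_multiple_of_4(32))  # 32
```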
      5. Select the Model size: model size defines the complexity of the model - smaller models train faster, while larger models train slower but may learn complex relationships better
      6. Select the Image dimension
      7. Input the number of Training epochs: you can start with a small number to test how quickly your computer processes each step
      8. Time out (optional): check this and add a time if you would like model training to stop after a certain amount of time
        1. Computers with a GPU will train significantly faster than CPU-only computers
        2. Computers might need to be left on for an extended amount of time (e.g. overnight) for training; during that time, you can continue to use your computer for other tasks
      9. Click “Start training”
        1. A progress bar will appear
        2. The first training step/epoch takes the most time; the progress bar might stay at 0% for several minutes
        3. A loss value (the error between the model prediction and the ground truth) is displayed during and upon completion of training
        4. Note: training stops automatically when the model can no longer be improved
      10. The plugin will notify you when training is finished
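The two stopping conditions above (the optional time-out and the automatic stop when the loss no longer improves) can be illustrated with a toy loop. All names, thresholds, and loss values here are made up for demonstration; this is not the plugin's training code:

```python
# Toy illustration of early stopping on a loss plateau and a time budget.
import time

def run_training(losses, patience=3, timeout_s=None):
    """Walk a sequence of per-epoch losses, stopping early if warranted."""
    start, best, since_best = time.monotonic(), float("inf"), 0
    for epoch, loss in enumerate(losses, start=1):
        if timeout_s is not None and time.monotonic() - start > timeout_s:
            return epoch - 1, "timed out"
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch, "no further improvement"
    return len(losses), "finished all epochs"

# The loss plateaus after epoch 3, so training stops before the list ends:
print(run_training([0.9, 0.5, 0.3, 0.31, 0.30, 0.32, 0.3]))
# (6, 'no further improvement')
```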
 
  3. Using the Prediction module
    1. Prerequisites
      1. A completely trained model (see the steps above)
    2. Steps
      1. Make sure the “Prediction” tab is active
      2. You can choose to apply the segmentation model to:
        1. On-screen image(s)
          1. Load the image(s) into the napari viewport first
          2. Select the option “On-screen image(s)”
          3. Select the image(s) you want to apply the model to
          4. Select the image input channel: the plugin automatically detects the number of channels - channel 0 is selected by default
          5. Browse & select an output directory
          6. Click “Run”
          7. Segmentation result(s) will be displayed on screen
 
        2. Image(s) from a directory
          1. Select the option “Image(s) from a directory”
          2. Browse & select an input image directory
          3. Select the image input channel: the plugin automatically detects the number of channels - channel 0 is selected by default
          4. Browse & select an output directory
          5. Click “Run”
          6. A progress bar will appear, reflecting the number of images completed
          7. The plugin will notify you when the prediction output is complete, with an option to open the output directory
          8. You can load your segmentation prediction results into napari for viewing
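For batch prediction it helps to know where outputs will land before clicking “Run”. Below is a sketch of one plausible input-to-output mapping; the `_pred` suffix and file naming are assumptions for illustration, so check the files the plugin actually writes to your output directory:

```python
# Sketch of mapping each raw input image to an output segmentation path.
# The "_pred.tiff" suffix is a hypothetical naming choice, not the plugin's.
from pathlib import Path

def plan_outputs(input_files, output_dir):
    """Return one output path per input image, in the chosen directory."""
    out = Path(output_dir)
    return [out / (Path(f).stem + "_pred.tiff") for f in input_files]

inputs = ["data/mitochondria-1_raw.tiff", "data/mitochondria-2_raw.tiff"]
for path in plan_outputs(inputs, "results"):
    print(path.name)
# mitochondria-1_raw_pred.tiff
# mitochondria-2_raw_pred.tiff
```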
  4. Troubleshooting
  5. FAQs

  6. Conclusion
Please leave any comments to help us improve both the plugin and the tutorials at <forum link>!
Check back on our website (napari hub?) for plans for upcoming new features!



  7. Bonus content:
    1. Classic segmentation generation
      1. Simple tutorial on the Classic workflow
        1. Workflow editor
          1. Add a 3D image to the viewer
          2. Select a channel
          3. Select a workflow based on the thumbnail that most resembles your current image
          4. “Run all” steps
          5. Inspect the image
          6. Fine-tune the parameters in each step and rerun the workflow
          7. Save the workflow
        2. Batch processing:
          1. Load the saved workflow
          2. Select the input directory of raw images
          3. Select the output directory for segmentations
          4. Run the batch and watch the progress bar
          5. A notification dialog will pop up once the batch is completed
          6. Open the output directory and drag the resulting segmentations into the viewer to review
        3. Tips:
          1. You can also open the raw image and overlay the segmentation to compare
          2. If more than one segmentation algorithm is used on the same raw image, rename your segmentations appropriately (e.g. seg1 (main segmentation), seg2 (secondary segmentation))


​
