Label-free determination of human stem cell organization
As part of the Allen Integrated Cell, we have developed and implemented a label-free prediction method for determining the location of 3D structures inside the cell, directly from transmitted light images.
Figure 1. Images from our collection of cells can be used to generate models that produce integrated images of 3D subcellular structures—all from an input brightfield image of a field of cells. Cell segmentations can then be applied to the predictions to determine structure localization for individual cells.
Various imaging methods are currently used to capture details of subcellular organization. These, however, present trade-offs:
fluorescence microscopy allows the visualization of specific structures, but is both expensive and time consuming
live cells can be damaged by the microscope's laser light
the number of fluorescently tagged structures that can be imaged simultaneously is limited by current technology
An integrated representation of cellular organization is therefore difficult to achieve with current methods. The Label-Free Determination model leverages the specificity of fluorescence microscopy while removing many of the restrictions on the number of simultaneous labels, offering biologists a potentially important tool for gaining insight into the integrated activities of subcellular structures.
The “discrete” label-free structure determination tool consists of a convolutional neural network (CNN)-based method (see Label-free imaging tool pipeline section below), employing a U-Net architecture to model the relationships between 3D transmitted light (brightfield) images and fluorescence images of several major subcellular structures (e.g. nuclear envelope, nucleoli, endoplasmic reticulum, mitochondria, etc.) tagged with fluorescent proteins (see What we do, cell methods, and microscopy methods pages for further detail). Given only spatially registered pairs of images, the tool can train a model to learn this relationship for the structure of interest, even from a relatively small training set (30 image pairs per structure). The resultant model can, in turn, be used to predict a 3D fluorescence image from a new transmitted light input.
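The training setup described above can be illustrated with a toy sketch. The real tool fits a 3D U-Net; here, purely for illustration, a single affine voxel mapping (predicted ≈ w·input + b) is fit by gradient descent to minimize the mean squared error against a synthetic "registered" fluorescence volume. The data and model are stand-ins, not the published method.

```python
import numpy as np

# Toy stand-in for paired-image training: learn parameters that minimize
# the MSE between a predicted and a ground-truth fluorescence volume.
# (The actual tool uses a 3D U-Net; this affine model is illustrative.)
rng = np.random.default_rng(0)

# Synthetic "spatially registered" pair: a transmitted-light z-stack x
# and a fluorescence z-stack y that depends on it (hypothetical data).
x = rng.random((8, 16, 16))                       # z, y, x voxels
y = 2.5 * x + 0.3 + 0.01 * rng.standard_normal(x.shape)

w, b = 0.0, 0.0                                   # model parameters
lr = 0.5
for _ in range(200):                              # gradient descent on MSE
    err = (w * x + b) - y
    w -= lr * 2.0 * np.mean(err * x)              # d(MSE)/dw
    b -= lr * 2.0 * np.mean(err)                  # d(MSE)/db

mse = float(np.mean(((w * x + b) - y) ** 2))
print(w, b, mse)
```

After training, the fitted parameters recover the synthetic mapping, and the residual MSE approaches the noise floor of the simulated fluorescence signal.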
Model predictions for a variety of subcellular structures can be combined, enabling multi-channel, integrated fluorescence imaging from a single transmitted light input.
Figure 2a. Given pairs of transmitted light and fluorescence images as input, the model is trained to minimize the mean squared error (MSE) between the ground-truth fluorescence image and the output of the model.
Figure 2b. Example of a 3D input transmitted light image, a ground-truth confocal DNA fluorescence image, and a tool prediction.
Figure 2c. Distributions of the image-wise correlation coefficient (r) between target and predicted test images from models trained on 30 3D images for the indicated subcellular structure, plotted as a box spanning the 25th, 50th, and 75th percentiles, with whiskers indicating the last data points within 1.5 interquartile ranges of the lower and upper quartiles. For a complete description of structure labels, see Publications. Black bars indicate the maximum correlation between the target fluorescence image and a theoretical, noise-free image (Cmax; details of the metric in Publications).
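The image-wise correlation coefficient summarized in Figure 2c is the Pearson r between a target and a predicted image, computed over all voxels. A minimal sketch (the function name is ours, not from the tool):

```python
import numpy as np

def image_correlation(target: np.ndarray, predicted: np.ndarray) -> float:
    """Pearson correlation between two images, computed voxel-wise."""
    t = target.ravel().astype(float)
    p = predicted.ravel().astype(float)
    t -= t.mean()                       # center both images
    p -= p.mean()
    return float(np.dot(t, p) / np.sqrt(np.dot(t, t) * np.dot(p, p)))

# A prediction that is an affine rescaling of the target scores r = 1,
# so r measures agreement in spatial pattern, not absolute intensity.
vol = np.arange(27.0).reshape(3, 3, 3)
r = image_correlation(vol, 2.0 * vol + 1.0)
print(r)
```

Because r is invariant to affine intensity rescaling, it compares the predicted spatial pattern to the target without penalizing differences in overall brightness.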
Figure 2d. Different models applied to the same input and combined to predict multiple imaging modes. Predicted localization of DNA (blue), cell membrane (red), nuclear envelope (cyan) and mitochondria (orange) of a sample taken at 5-minute intervals. The center z-slice is shown. A mitotic event, along with stereotypical reorganization of subcellular structures, is clearly observed.
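The multi-structure prediction in Figure 2d amounts to running several independently trained models on the same transmitted light input and stacking their outputs into channels. A minimal sketch, with hypothetical placeholder functions standing in for trained models:

```python
import numpy as np

# Hypothetical stand-ins for per-structure trained models; each maps the
# same transmitted-light volume to one predicted fluorescence channel.
def predict_dna(tl):          return tl * 0.9
def predict_membrane(tl):     return tl * 0.5
def predict_mitochondria(tl): return tl * 0.7

models = {
    "dna": predict_dna,
    "membrane": predict_membrane,
    "mitochondria": predict_mitochondria,
}

transmitted_light = np.zeros((8, 16, 16))    # one brightfield z-stack
# Stack per-structure predictions into a multi-channel image:
multichannel = np.stack([m(transmitted_light) for m in models.values()])
print(multichannel.shape)  # one channel per structure: (3, 8, 16, 16)
```

Because every channel is predicted from the same single input, the combined image is free of the simultaneous-label limits of conventional fluorescence imaging.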
Label-free determination over time
Figure 3. Notably, models trained on static images (from a single time point) can be used to predict 3D fluorescence time-lapse movies, given 3D transmitted light inputs over time (Figure 2, panel e). See Modeling Publications for additional information.
Label-free Data Download
Label-free Prediction Training Data
The images used to train models described above are provided here.
The label-free method has been applied to our entire publicly available collection of images, and the resulting predictions are provided here. Note: This data is intended for scientific exploration, and the accuracy and utility of the predictions must be assessed structure by structure in the context of the intended use case.