“The joy of discovery is certainly the liveliest that the mind of man can ever feel”
- Claude Bernard
Abstract: Examining specific sub-cellular structures while minimizing cell perturbation is important in the life sciences. Fluorescence labeling and imaging are widely used to introduce specificity, despite their perturbative and photo-toxic nature. With the advancement of deep learning, digital staining routines for label-free analysis have emerged as a replacement for fluorescence imaging. Nonetheless, digital staining of sub-cellular structures such as mitochondria remains sub-optimal, because models designed for computer vision are applied directly rather than being optimized for the nature of microscopy data. We propose a new loss function with multiple thresholding steps to promote more effective learning on microscopy data. With it, we demonstrate a deep-learning approach that translates label-free brightfield images of living cells into equivalent fluorescence images of mitochondria with an average structural similarity (SSIM) of 0.77, surpassing the state-of-the-art of 0.7 obtained with the L1 loss. Our results provide insightful examples of the unique opportunities generated by data-driven, deep-learning-enabled image translation.
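To illustrate the idea of a loss with multiple thresholding steps, here is a minimal NumPy sketch. All names, the choice of thresholds, the soft-sigmoid binarization, and the weighting are our assumptions for illustration, not the implementation used in the work; the intent is only to show how threshold-wise terms can emphasize sparse bright structures (such as mitochondria) on top of a plain L1 term.

```python
import numpy as np

def multi_threshold_loss(pred, target, thresholds=(0.2, 0.5, 0.8), alpha=0.5):
    """Hypothetical multi-threshold loss (illustrative only).

    Combines a pixel-wise L1 term with L1 terms computed on binarized
    versions of the images at several intensity thresholds, so that
    errors on sparse bright structures are not drowned out by the
    dark background that dominates fluorescence images.
    """
    # Plain pixel-wise L1 term over the whole image.
    l1 = np.mean(np.abs(pred - target))

    thr_terms = []
    for t in thresholds:
        # Soft threshold of the prediction via a steep sigmoid, which
        # would keep the term differentiable in a training framework.
        pred_t = 1.0 / (1.0 + np.exp(-(pred - t) / 0.05))
        # Hard threshold of the (fixed) target.
        target_t = (target > t).astype(float)
        thr_terms.append(np.mean(np.abs(pred_t - target_t)))

    # Weighted combination of the L1 term and the thresholded terms.
    return alpha * l1 + (1.0 - alpha) * np.mean(thr_terms)

# Toy usage: a bright 2x2 "structure" on a dark background.
target = np.zeros((4, 4))
target[1:3, 1:3] = 1.0
loss_match = multi_threshold_loss(target.copy(), target)   # near-perfect prediction
loss_inverted = multi_threshold_loss(1.0 - target, target)  # inverted prediction
```

A matching prediction yields a much smaller loss than an inverted one, which is the behavior any such structure-aware loss should exhibit.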
Meet The Research Team
VirtualStain is an ambitious project that could have a far-reaching impact on the way we analyse and interpret tissue and cell images. This large, collaborative effort, involving four departments from three different faculties, is part of UiT's Tematiske satsninger, a funding program intended to encourage innovative interdepartmental and interdisciplinary projects.
The Nanoscopy group grew out of a (formerly) small integrated optics group and was jump-started by an ERC grant in 2015. Our team is made up of physicists, computer scientists, engineers and biologists, with research topics ranging from advanced imaging of cells and tissues, through image processing, to the development of the emerging field of chip-based nanoscopy.