“The joy of discovery is certainly the liveliest that the mind of man can ever feel”
- Claude Bernard
Abstract: Scanning Acoustic Microscopy (SAM) uses high-frequency acoustic waves to generate non-ionizing, label-free images of the surface and internal structures of industrial objects and biological specimens. The resolution of SAM images is limited by several factors, such as the frequency of the excitation signal, the signal-to-noise ratio, and the pixel size. We propose a hypergraph-based image inpainting technique for SAM that fills in missing information to improve the resolution of the SAM image. We compared the performance of our technique with four other techniques based on generative adversarial networks (GANs): AOT-GAN, DeepFill v2, EdgeConnect, and DMFN. Our results show that the hypergraph image inpainting model achieves a state-of-the-art average SSIM of 0.82 with a PSNR of 27.96 for 4x image size enhancement over the raw SAM image. We emphasize the importance of hypergraph interpretability in bridging the gap between human and machine perception, particularly for robust image-recovery tools in acoustic scan imaging. We show that combining SAM with hypergraphs can yield more noise-robust explanations.
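The SSIM and PSNR figures quoted above compare the enhanced image against a reference. As a minimal illustration (not the paper's evaluation code), the sketch below computes PSNR and a simplified, single-window SSIM with plain NumPy; a per-window SSIM as implemented in standard imaging libraries would slide an 11x11 Gaussian window instead of using global statistics.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, test, max_val=255.0):
    """Simplified SSIM using global image statistics (no sliding window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Toy comparison: a synthetic "reference" scan vs. a noisy copy of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0, 255)
print(psnr(ref, noisy), global_ssim(ref, noisy))
```

Higher is better for both metrics: identical images give infinite PSNR and SSIM of 1.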
Abstract: Examining specific sub-cellular structures while minimizing cell perturbation is important in the life sciences. Fluorescence labeling and imaging is widely used to introduce specificity despite its perturbative and photo-toxic nature. With the advancement of deep learning, digital staining routines for label-free analysis have emerged as a replacement for fluorescence imaging. Nonetheless, digital staining of sub-cellular structures such as mitochondria remains sub-optimal, because models designed for general computer vision are applied directly instead of being optimized for the nature of microscopy data. We propose a new loss function with multiple thresholding steps to promote more effective learning on microscopy data. Through this, we demonstrate a deep learning approach that translates label-free brightfield images of living cells into equivalent fluorescence images of mitochondria with an average structural similarity of 0.77, surpassing the state-of-the-art of 0.7 achieved with an L1 loss. Our results provide insightful examples of the unique opportunities created by data-driven, deep-learning-enabled image translation.
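To give a flavor of what a thresholding-based loss looks like, here is a hedged NumPy sketch: a plain L1 term plus extra L1 terms restricted to pixels above several intensity thresholds in the target. The threshold values, weighting, and function name are illustrative assumptions, not the exact formulation from the paper.

```python
import numpy as np

def multi_threshold_loss(pred, target, thresholds=(0.2, 0.5, 0.8), alpha=0.5):
    """Illustrative multi-threshold loss (assumed form, not the paper's exact loss).

    Combines a global pixel-wise L1 term with L1 terms restricted to
    foreground masks binarized at several intensity thresholds, so that
    sparse bright structures (e.g. mitochondria) are not drowned out by
    the large dark background typical of fluorescence targets.
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    l1 = np.abs(pred - target).mean()
    mask_term = 0.0
    for t in thresholds:
        mask = target > t  # foreground at this threshold
        if mask.any():
            mask_term += np.abs(pred[mask] - target[mask]).mean()
    return l1 + alpha * mask_term / len(thresholds)

# Toy usage: a noisy prediction incurs a positive loss; a perfect one scores 0.
rng = np.random.default_rng(1)
target = rng.random((32, 32))
pred = np.clip(target + rng.normal(0.0, 0.1, target.shape), 0.0, 1.0)
print(multi_threshold_loss(pred, target))
```

In a real training loop the same expression would be written with a differentiable framework's tensor ops rather than NumPy.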
Image Inpainting With Hypergraphs for Resolution Improvement in Scanning Acoustic Microscopy. A. Somani, P. Banerjee, M. Rastogi, K. Agarwal, D.K. Prasad, A. Habib. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (pp. 3112-3121).
Analyzing Mitochondrial Morphology Through Simulation Supervised Learning. A.R. Punnakkal, G. Godtliebsen, A. Somani, S.A. Maldonado, Å.B. Birgisdottir, D.K. Prasad, A. Horsch, K. Agarwal. Journal of Visualized Experiments, In-press (2023).
Virtual labeling of mitochondria in living cells using correlative imaging and physics-guided deep learning. A. Somani, A.A. Sekh, I.S. Opstad, Å.B. Birgisdottir, T. Myrmel, B.S. Ahluwalia, A. Horsch, K. Agarwal, and D.K. Prasad. Biomedical Optics Express 13 (10), 2022.
Counterfactual explainable gastrointestinal and colonoscopy image segmentation. D. Singh, A. Somani, A. Horsch, D.K. Prasad. IEEE 19th International Symposium on Biomedical Imaging (ISBI) 2022.
T-MIS: Transparency Adaptation in Medical Image Segmentation. A. Somani, D. Singh, D.K. Prasad, A. Horsch. Nordic Machine Intelligence, 2021.
Digital Staining of Mitochondria in Label-Free Live-Cell Microscopy. A. Somani, A.A. Sekh, I.S. Opstad, Å.B. Birgisdottir, T. Myrmel, B.S. Ahluwalia, A. Horsch, K. Agarwal, and D.K. Prasad. Bildverarbeitung für die Medizin (BVM) Workshop, 2021.
Performance improvement in deep learning models for outdoor semantic segmentation for autonomous driving for unstructured environment. D. Singh, A. Somani, A. Horsch, D.K. Prasad. Nordic AI Young Researchers Symposium 2021.
Research Lab Associations
Bio-AI Lab, UiT Norway
I am one of the founding members and an active researcher of the Bio-AI Lab at the Department of Computer Science, UiT The Arctic University of Norway. Bio-AI is a highly collaborative data science lab located in Tromsø that conducts cutting-edge AI research and applies it to biology. We are interested in a broad range of concepts and ideas in artificial intelligence, drawn both from science and from society in general, combined with the exciting biological challenges that researchers at the department have been tackling together with partners from physics and biology. Our findings have been published widely.
N-CRiPT Lab, NUS Singapore
In February 2023, I joined the NUS Centre for Research in Privacy Technologies (N-CRiPT) at the School of Computing, National University of Singapore, as a Research Intern working on methods for interpretable deep learning models. N-CRiPT is a strategic capability research centre in privacy-preserving technologies established by the National University of Singapore. The Centre is funded by the National Research Foundation Singapore and administered by the Smart Systems Research Programme Office of the Info-communications Media Development Authority. Working 'Towards a privacy-aware Smart Nation', our goal is to develop privacy-preserving technologies that protect privacy at both the individual and organizational level in a holistic manner (with a focus on, but not limited to, unstructured data) along the whole data life cycle.
CLIMB, Beckman Institute, UIUC, USA
I was a visiting researcher, supported by an Erasmus+ mobility grant, at the Center for Label-free Imaging and Multiscale Biophotonics (CLIMB) at the Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, USA. Through its three Technology Research and Development Projects (TRDs), eight current Collaborative Projects, and eight Service Projects, CLIMB maintains a national and international network of researchers investigating cutting-edge optical imaging and sensing methods, along with novel computational imaging and AI/ML algorithms, to enable label-free optical imaging technologies for both clinical applications and basic biological discovery.
VirtualStain, UiT Norway
VirtualStain is an ambitious project that could have a far-reaching impact on the way we analyse and interpret tissue and cell images. This large collaborative effort, involving four departments from three different faculties, is part of UiT Tematiske satsninger (thematic priorities), a funding program intended to encourage innovative interdepartmental and interdisciplinary projects.
Nanoscopy Group, UiT Norway
Growing out of a formerly small integrated optics group, the Nanoscopy group was jump-started by an ERC grant in 2015. Our team is made up of physicists, computer scientists, engineers, and biologists, with research topics stretching from advanced imaging of cells and tissues, via image processing, to the development of the emerging field of chip-based nanoscopy.