Learning to see colours: generating biologically relevant fluorescent labels from bright-field images


Published: 2021-10-15

Formatted citation

Wieslander H, Gupta A, Bergman E, Hallström E, Harrison PJ. Learning to see colours: generating biologically relevant fluorescent labels from bright-field images. PLOS ONE 16(10) (2021). DOI: 10.1371/journal.pone.0258546

Abstract

Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images, various cellular features can be extracted. Together these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour-intensive, and toxic to the cells. Bright-field images lack these downsides, but they also lack clear contrast between the cellular components and are hence difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images would give the best of both worlds, but this is very challenging for cellular structures that are poorly visible in the bright-field images. To tackle this problem, deep learning models were explored to learn the mapping between bright-field and fluorescence images, enabling virtual staining for adipocyte cell images. The models were tailored to each imaging channel, paying particular attention to the challenges specific to each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former yielded better morphological and count features, and the latter more faithfully captured defects in the lipids, which are key indicators of successful nanoparticle uptake.
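The lipids-channel approach described above combines a pixel-wise reconstruction loss with an image-gradient term and an adversarial term. The sketch below shows how such a composite generator loss could be written in PyTorch. It is a minimal illustration under assumed conventions, not the paper's exact formulation: the Sobel gradient operator, the L1 pixel loss, the non-saturating GAN term, and the weights `w_pix`, `w_grad`, and `w_adv` are all illustrative choices.

```python
import torch
import torch.nn.functional as F

# Sobel kernels for horizontal and vertical image gradients.
SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def image_gradients(img: torch.Tensor) -> torch.Tensor:
    """Per-channel Sobel gradients of an (N, C, H, W) batch."""
    c = img.shape[1]
    kx = SOBEL_X.to(img).repeat(c, 1, 1, 1)  # depthwise filters
    ky = SOBEL_Y.to(img).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.cat([gx, gy], dim=1)

def generator_loss(fake, real, disc_logits_on_fake,
                   w_pix=1.0, w_grad=1.0, w_adv=0.01):
    """Pixel + gradient + adversarial terms for a virtual-staining generator."""
    pixel = F.l1_loss(fake, real)
    # Gradient term: encourages sharp edges, e.g. around defects in the lipids.
    grad = F.l1_loss(image_gradients(fake), image_gradients(real))
    # Non-saturating GAN term: push the discriminator towards "real" on fakes.
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    return w_pix * pixel + w_grad * grad + w_adv * adv

# Example with random tensors standing in for generated and target images.
fake = torch.rand(2, 1, 64, 64, requires_grad=True)
real = torch.rand(2, 1, 64, 64)
logits = torch.randn(2, 1, requires_grad=True)  # discriminator output on fakes
generator_loss(fake, real, logits).backward()
```

In a sketch like this, the gradient term penalizes blurred structure boundaries that a plain pixel loss tolerates, while the adversarial term discourages the generator from averaging over plausible outputs; the relative weights would need to be tuned for the data at hand.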