Learned SPARCOM: unfolded deep super-resolution microscopy

Gili Dardikman-Yoffe, Yonina C. Eldar

Research output: Contribution to journal › Article › peer-review

Abstract

The use of photo-activated fluorescent molecules to create long sequences of low emitter-density diffraction-limited images enables high-precision emitter localization, but at the cost of low temporal resolution. We suggest combining SPARCOM, a recent high-performing classical method, with model-based deep learning, using the algorithm unfolding approach, to design a compact neural network incorporating domain knowledge. Our results show that we can obtain super-resolution imaging from a small number of high emitter density frames without knowledge of the optical system and across different test sets using the proposed learned SPARCOM (LSPARCOM) network. We believe LSPARCOM can pave the way to interpretable, efficient live-cell imaging in many settings, and find broad use in single molecule localization microscopy of biological structures.
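The abstract's core idea, algorithm unfolding, turns a fixed number of iterations of a classical sparse-recovery algorithm into the layers of a neural network, whose per-layer parameters can then be learned. The sketch below illustrates the principle with plain ISTA (iterative shrinkage-thresholding) unrolled over a fixed layer count; it is a minimal illustration of unfolding in general, not the actual LSPARCOM architecture, and all names and parameter values here are assumptions for the demo.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 norm: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unfolded_ista(y, A, num_layers=10, step=None, thresh=0.1):
    """ISTA unrolled into a fixed number of 'layers'.

    In a learned (unfolded) network, `step` and `thresh` would be
    trainable per-layer parameters; here they are fixed constants.
    """
    if step is None:
        # Safe step size: inverse squared spectral norm of A.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        # Gradient step on the data-fidelity term, then sparsifying shrinkage.
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * thresh)
    return x

# Toy demo: recover a 3-sparse vector from noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = unfolded_ista(y, A, num_layers=200, thresh=0.05)
```

Training the per-layer thresholds and steps on example pairs (y, x_true) is what distinguishes a learned unfolded network from its classical counterpart; LSPARCOM applies this idea to the SPARCOM sparse-recovery formulation.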

Original language: English
Pages (from-to): 27736-27763
Number of pages: 28
Journal: Optics Express
Volume: 28
Issue number: 19
DOIs
State: Published - 14 Sep 2020
Externally published: Yes
