TY - GEN
T1 - On divergence approximations for unsupervised training of deep denoisers based on Stein's unbiased risk estimator
AU - Soltanayev, Shakarim
AU - Giryes, Raja
AU - Chun, Se Young
AU - Eldar, Yonina C.
N1 - Publisher Copyright:
© 2020 IEEE
PY - 2020/5
Y1 - 2020/5
AB - Recently, there have been several works on unsupervised learning for training deep-learning-based denoisers without clean images. Approaches based on Stein's unbiased risk estimator (SURE) have shown promising results for training Gaussian deep denoisers. However, their performance is sensitive to the hyper-parameter used to approximate the divergence term in the SURE expression. In this work, we briefly study the computational efficiency of Monte-Carlo (MC) divergence approximation compared to the recently available exact divergence computation using backpropagation. We then investigate the relationship between the smoothness of nonlinear activation functions in deep denoisers and the robustness of divergence term approximations. Lastly, we propose a new divergence term that contains no hyper-parameters. Both unsupervised training methods yield performance comparable to supervised training with ground truth for denoising on various datasets; while the former still requires a roughly tuned hyper-parameter, the latter removes the need to choose one.
KW - Deep learning
KW - Denoising
KW - Divergence term
KW - SURE
KW - Unsupervised training
UR - http://www.scopus.com/inward/record.url?scp=85091188637&partnerID=8YFLogxK
U2 - 10.1109/ICASSP40776.2020.9054593
DO - 10.1109/ICASSP40776.2020.9054593
M3 - Conference contribution
AN - SCOPUS:85091188637
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 3592
EP - 3596
BT - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Y2 - 4 May 2020 through 8 May 2020
ER -
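
For context on the abstract's central object, here is a minimal NumPy sketch of a Monte-Carlo SURE loss of the kind the paper studies. It uses the standard MC divergence estimate div f(y) ~ b.(f(y + eps*b) - f(y))/eps with b ~ N(0, I); eps is the hyper-parameter to which, per the abstract, SURE training is sensitive. The function name mc_sure_loss, the default eps value, and the identity-map sanity check are illustrative assumptions, not taken from the authors' code.

    import numpy as np

    def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
        # SURE for a Gaussian denoiser f applied to y = x + n, n ~ N(0, sigma^2 I):
        #   SURE = ||y - f(y)||^2 / n - sigma^2 + (2 sigma^2 / n) * div_y f(y)
        # The divergence is estimated with one Monte-Carlo probe b ~ N(0, I):
        #   div_y f(y) ~= b . (f(y + eps*b) - f(y)) / eps
        # eps is the hyper-parameter the abstract refers to (default assumed here).
        n = y.size
        f_y = denoiser(y)
        b = np.random.randn(*y.shape)
        div = np.sum(b * (denoiser(y + eps * b) - f_y)) / eps
        return np.sum((y - f_y) ** 2) / n - sigma ** 2 + (2.0 * sigma ** 2 / n) * div

    # Sanity check with the identity map, whose exact divergence is n:
    # the MC estimate returns ||b||^2 (mean n), so the loss is approximately sigma^2.
    y = np.random.randn(64, 64)
    print(mc_sure_loss(lambda z: z, y, sigma=0.1))

In a training loop, minimizing this quantity over the denoiser's parameters stands in for the (unavailable) mean-squared error against clean images, which is what allows the unsupervised training described in the abstract.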