TY - GEN
T1 - First Trimester Video Saliency Prediction Using cLSTMU-Net with Stochastic Augmentation
AU - Savochkina, Elizaveta
AU - Lee, Lok Hin
AU - Zhao, He
AU - Drukker, Lior
AU - Papageorghiou, Aris T.
AU - Noble, J. Alison
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In this paper we develop a multi-modal video analysis algorithm to predict where a sonographer should look next. Our approach uses video and expert knowledge, defined by gaze tracking data, which is acquired during routine first-trimester fetal ultrasound scanning. Specifically, we propose a spatio-temporal convolutional LSTM U-Net neural network (cLSTMU-Net) for video saliency prediction with stochastic augmentation. The architecture design consists of a U-Net based encoder-decoder network and a cLSTM to take into account temporal information. We compare the performance of the cLSTMU-Net alongside spatial-only architectures for the task of predicting gaze in first trimester ultrasound videos. Our study dataset consists of 115 clinically acquired first trimester US videos and a total of 45,666 video frames. We adopt a Random Augmentation (RA) strategy from a stochastic augmentation policy search to improve model performance and reduce over-fitting. The proposed cLSTMU-Net using a video clip of 6 frames outperforms the baseline approach on all saliency metrics: KLD, SIM, NSS and CC (2.08, 0.28, 4.53 and 0.42 versus 2.16, 0.27, 4.34 and 0.39).
AB - In this paper we develop a multi-modal video analysis algorithm to predict where a sonographer should look next. Our approach uses video and expert knowledge, defined by gaze tracking data, which is acquired during routine first-trimester fetal ultrasound scanning. Specifically, we propose a spatio-temporal convolutional LSTM U-Net neural network (cLSTMU-Net) for video saliency prediction with stochastic augmentation. The architecture design consists of a U-Net based encoder-decoder network and a cLSTM to take into account temporal information. We compare the performance of the cLSTMU-Net alongside spatial-only architectures for the task of predicting gaze in first trimester ultrasound videos. Our study dataset consists of 115 clinically acquired first trimester US videos and a total of 45,666 video frames. We adopt a Random Augmentation (RA) strategy from a stochastic augmentation policy search to improve model performance and reduce over-fitting. The proposed cLSTMU-Net using a video clip of 6 frames outperforms the baseline approach on all saliency metrics: KLD, SIM, NSS and CC (2.08, 0.28, 4.53 and 0.42 versus 2.16, 0.27, 4.34 and 0.39).
KW - Fetal ultrasound
KW - U-Net
KW - convolutional LSTM
KW - first trimester
KW - gaze tracking
KW - stochastic augmentation
KW - video saliency prediction
UR - http://www.scopus.com/inward/record.url?scp=85129604040&partnerID=8YFLogxK
U2 - 10.1109/ISBI52829.2022.9761585
DO - 10.1109/ISBI52829.2022.9761585
M3 - Conference contribution
C2 - 36643818
AN - SCOPUS:85129604040
T3 - Proceedings - International Symposium on Biomedical Imaging
BT - ISBI 2022 - Proceedings
PB - IEEE Computer Society
T2 - 19th IEEE International Symposium on Biomedical Imaging, ISBI 2022
Y2 - 28 March 2022 through 31 March 2022
ER -