TY - GEN
T1 - Learning Vector Quantized Shape Code for Amodal Blastomere Instance Segmentation
AU - Jang, Won Dong
AU - Wei, Donglai
AU - Zhang, Xingxuan
AU - Leahy, Brian
AU - Yang, Helen
AU - Tompkin, James
AU - Ben-Yosef, Dalit
AU - Needleman, Daniel
AU - Pfister, Hanspeter
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Blastomere instance segmentation is important for analyzing embryo abnormalities. To accurately measure the shapes and sizes of blastomeres, amodal segmentation is necessary. Amodal instance segmentation aims to recover an object's complete silhouette even when the object is not fully visible. For each detected object, previous methods directly regress the target mask from input features. However, images of an object under different amounts of occlusion should produce the same amodal mask, making the regression model harder to train. To alleviate this problem, we propose classifying input features into intermediate shape codes and recovering complete object shapes from them. First, we pre-train a Vector Quantized Variational Autoencoder (VQ-VAE) to learn these discrete shape codes from ground-truth amodal masks. Then, we incorporate the VQ-VAE model into the amodal instance segmentation pipeline with an additional refinement module. We also estimate an occlusion map and integrate the occlusion information with the backbone features, allowing our network to faithfully detect the bounding boxes of amodal objects. On an internal embryo cell image benchmark, the proposed method outperforms previous state-of-the-art methods. To demonstrate generalizability, we also report segmentation results on the public KINS natural image benchmark. Our method could enable accurate measurement of blastomeres in In Vitro Fertilization (IVF) clinics, potentially increasing the IVF success rate.
UR - http://www.scopus.com/inward/record.url?scp=85172091719&partnerID=8YFLogxK
DO - 10.1109/ISBI53787.2023.10230774
M3 - Conference contribution
AN - SCOPUS:85172091719
T3 - Proceedings - International Symposium on Biomedical Imaging
BT - 2023 IEEE International Symposium on Biomedical Imaging, ISBI 2023
PB - IEEE Computer Society
T2 - 20th IEEE International Symposium on Biomedical Imaging, ISBI 2023
Y2 - 18 April 2023 through 21 April 2023
ER -