TY - GEN
T1 - NASA NeMO-Net - A Neural Multimodal Observation and Training Network for Marine Ecosystem Mapping at Diverse Spatiotemporal Scales
AU - Chirayath, Ved
AU - Li, Alan
AU - Torres-Perez, Juan
AU - Segal-Rozenhaimer, Michal
AU - Van Den Bergh, Jarrett
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/9/26
Y1 - 2020/9/26
AB - We present NeMO-Net, the first open-source fully convolutional neural network (FCNN) and interactive learning and training software aimed at assessing the present and past dynamics of shallow marine systems through habitat mapping into 9 geomorphological and 22 biological classes. Shallow marine systems, particularly coral reefs, are under significant pressure from climate change, ocean acidification, and other anthropogenic stressors, leading to rapid, often devastating changes in these fragile and diverse ecosystems. Historically, remote sensing of shallow marine habitats has been limited to meter-scale imagery due to the optical effects of ocean wave distortion, refraction, and optical attenuation. NeMO-Net combines 3D cm-scale distortion-free imagery, captured using NASA's airborne FluidCam and fluid lensing remote sensing technology, with low-resolution airborne and spaceborne datasets of varying spatial resolutions, spectral spaces, calibrations, and temporal cadences in a supercomputer-based deep learning framework. NeMO-Net augments and improves the benthic habitat classification accuracy of low-resolution datasets across large geographic and temporal scales using high-resolution training data from FluidCam. NeMO-Net's FCNN uses ResNet and RefineNet to perform semantic segmentation and cloud masking of remote sensing imagery of shallow marine systems from drones, manned aircraft, and satellites, including FluidCam, WorldView, Planet, Sentinel, and Landsat. Deep Laplacian Pyramid Super-Resolution Networks (LapSRN) and Domain Adversarial Neural Networks (DANNs) are used to augment low-resolution imagery with high-resolution drone-based datasets and to recognize domain-invariant features across multiple instruments, achieving high classification accuracies and ameliorating inter-sensor spatial, spectral, and temporal heterogeneities. An online citizen science application allows users to provide interactive 2D and 3D training data for NeMO-Net, fully integrated within an active learning framework. Preliminary results from a test case in Fiji demonstrate 9-class classification accuracy exceeding 84%.
KW - Multimodal remote sensing
KW - coral reefs
KW - fluid lensing
KW - machine learning
KW - neural networks
UR - http://www.scopus.com/inward/record.url?scp=85101992804&partnerID=8YFLogxK
DO - 10.1109/IGARSS39084.2020.9323188
M3 - Conference contribution
AN - SCOPUS:85101992804
T3 - International Geoscience and Remote Sensing Symposium (IGARSS)
SP - 3633
EP - 3636
BT - 2020 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2020
Y2 - 26 September 2020 through 2 October 2020
ER -