Cloud detection algorithm for multi-modal satellite imagery using convolutional neural-networks (CNN)

Michal Segal-Rozenhaimer*, Alan Li, Kamalika Das, Ved Chirayath

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

121 Scopus citations

Abstract

Cloud detection algorithms are crucial in many remote-sensing applications to allow optimized processing of the acquired data without interference from cloud fields above the surfaces of interest (e.g., land, coral reefs). While this is a well-established area of research with numerous cloud detection methodologies, issues persist in detecting clouds over high-albedo surfaces (snow and sand), detecting cloud shadows, and transferring a given algorithm between observational platforms. Particularly for the latter, algorithms are often platform-specific, with rule-based tests and thresholds tied to particular instruments and applied corrections. Here, we present a convolutional neural network (CNN) algorithm for the detection of cloud and cloud shadow fields in multi-channel satellite imagery from WorldView-2 (WV-2) and Sentinel-2 (S-2), using their Red, Green, Blue, and Near-Infrared (RGB, NIR) channels. This algorithm is developed within the NASA NeMO-Net project, a multi-modal CNN for global coral reef classification that utilizes imagery from multiple remote sensing aircraft and satellites with heterogeneous spatial resolution and spectral coverage. Our cloud detection algorithm is novel in that it learns deep invariant features for cloud detection from both the spectral and the spatial information inherent in satellite imagery. The first part of our work presents the development of the CNN cloud and cloud shadow algorithm (trained using WV-2 data) and its application to WV-2 imagery (with a cloud detection accuracy of 89%) and to S-2 imagery (referred to as the augmented CNN). The second part presents a new CNN-based domain adaptation approach (a domain adversarial neural network) that allows for better adaptation between the two satellite platforms during the prediction step, without the need to train for each platform separately. Our augmented CNN algorithm yields better cloud prediction rates than the original S-2 cloud mask (81% versus 48%), although the clear-pixel prediction rate remains lower than that of S-2 (81% versus 91%). Nevertheless, the domain adaptation approach shows promise in transferring the knowledge gained from one trained domain (WV-2) to another (S-2), increasing the prediction accuracy for both clear and cloudy pixels compared to a network trained only on WV-2. As such, domain adaptation may offer a novel means of additional augmentation for our CNN-based cloud detection algorithm, increasing the robustness of predictions across multiple remote sensing platforms. The approach presented here may be further developed and optimized for a global, multi-modal (multi-channel and multi-platform) satellite cloud detection capability by utilizing a more global dataset.
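The domain-adversarial idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a small CNN feature extractor over 4-channel (RGB + NIR) patches, a label head for cloud / cloud-shadow / clear classes, and a domain head (WV-2 vs. S-2) fed through a gradient-reversal layer so that the learned features become platform-invariant. Patch size, layer widths, the class set, and the lambda weight below are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


class CloudDANN(nn.Module):
    """Patch-based cloud / cloud-shadow / clear classifier with a domain-adversarial branch."""

    def __init__(self, in_channels=4, n_classes=3, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        # Shared convolutional feature extractor over RGB + NIR patches.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Label head: cloud / cloud shadow / clear.
        self.label_head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_classes))
        # Domain head: source platform (e.g., WV-2) vs. target platform (e.g., S-2),
        # trained through the gradient-reversal layer.
        self.domain_head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        feat = self.features(x)
        class_logits = self.label_head(feat)
        domain_logits = self.domain_head(GradientReversal.apply(feat, self.lamb))
        return class_logits, domain_logits


# Usage: a batch of 4-channel (R, G, B, NIR) 32x32 patches.
model = CloudDANN()
patches = torch.randn(8, 4, 32, 32)
class_logits, domain_logits = model(patches)
```

In this sketch the label loss is computed only on labeled source-platform patches, while the domain loss uses patches from both platforms; minimizing both jointly pushes the shared features toward being indistinguishable across platforms, which is the intuition behind the domain adaptation step described above.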

Original language: English
Article number: 111446
Journal: Remote Sensing of Environment
Volume: 237
DOIs
State: Published - Feb 2020

Funding

Funders and funder numbers:

• National Institute of Advanced Industrial Science and Technology
• NASA Earth Science Technology Office: NNH16ZDA001N, NNH16ZDA001N-AIST

Keywords

• Cloud Shadows
• Clouds
• Convolutional Neural Networks
• Domain adaptation
• Remote Sensing
• Sentinel-2
• WorldView-2
