We present NeMO-Net, the first open-source fully convolutional neural network (FCNN) and interactive learning and training software aimed at assessing the present and past dynamics of shallow marine systems through habitat mapping into 9 geomorphological and 22 biological classes. Shallow marine systems, particularly coral reefs, are under significant stress from climate change, ocean acidification, and other anthropogenic pressures, which are driving rapid, often devastating changes in these fragile and diverse ecosystems. Historically, remote sensing of shallow marine habitats has been limited to meter-scale imagery due to the optical effects of ocean wave distortion, refraction, and optical attenuation. NeMO-Net combines 3D cm-scale distortion-free imagery, captured using NASA's airborne FluidCam and fluid lensing remote sensing technology, with lower-resolution airborne and spaceborne datasets of varying spatial resolution, spectral coverage, calibration, and temporal cadence in a supercomputer-based deep learning framework. NeMO-Net augments and improves the benthic habitat classification accuracy of low-resolution datasets across large geographic and temporal scales using high-resolution training data from FluidCam. NeMO-Net's FCNN uses ResNet and RefineNet to perform semantic segmentation and cloud masking of remote sensing imagery of shallow marine systems from drones, manned aircraft, and satellites, including FluidCam, WorldView, Planet, Sentinel, and Landsat. Deep Laplacian Pyramid Super-Resolution Networks (LapSRN) are used alongside Domain Adversarial Neural Networks (DANNs) to augment low-resolution imagery with high-resolution drone-based datasets and to recognize domain-invariant features across multiple instruments, achieving high classification accuracies and ameliorating inter-sensor spatial, spectral, and temporal heterogeneities.
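To illustrate the fully convolutional idea underlying the segmentation described above, the toy sketch below maps a multiband image to a per-pixel class map using two small convolutional layers. This is not NeMO-Net's actual ResNet/RefineNet architecture; the layer sizes, random weights, band count, and 9-class output are illustrative assumptions only.

```python
import numpy as np

def conv3x3(x, w, b):
    # Naive "same"-padded 3x3 convolution: x is (H, W, Cin), w is (3, 3, Cin, Cout).
    H, W, _ = x.shape
    Cout = w.shape[-1]
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, Cout))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]  # (3, 3, Cin) receptive field
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def toy_fcnn(image, n_classes=9):
    # A two-layer stand-in for a deep encoder-decoder: because every layer is
    # convolutional, the output keeps the input's spatial dimensions, giving a
    # dense (per-pixel) classification rather than a single image-level label.
    rng = np.random.default_rng(0)
    Cin = image.shape[-1]
    w1 = rng.normal(size=(3, 3, Cin, 8)) * 0.1          # toy feature extractor
    w2 = rng.normal(size=(3, 3, 8, n_classes)) * 0.1    # per-pixel classifier head
    h = np.maximum(conv3x3(image, w1, np.zeros(8)), 0)  # ReLU activation
    logits = conv3x3(h, w2, np.zeros(n_classes))        # (H, W, n_classes)
    return logits.argmax(axis=-1)                       # (H, W) class map

# Hypothetical 16x16 image with 4 spectral bands.
labels = toy_fcnn(np.random.rand(16, 16, 4))
print(labels.shape)  # (16, 16)
```

The essential point is that replacing fully connected layers with convolutions preserves spatial structure, so each output pixel receives one of the habitat class labels.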
An online citizen science application allows users to provide interactive 2D and 3D training data for NeMO-Net, fully integrated within an active learning framework. Preliminary results from a test case in Fiji demonstrate 9-class classification accuracy exceeding 84%.
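In an active learning framework like the one described above, the model prioritizes which examples human annotators should label next. A common heuristic for this (not necessarily NeMO-Net's exact selection rule) is entropy-based uncertainty sampling, sketched here with made-up class probabilities:

```python
import numpy as np

def entropy(probs):
    # Predictive entropy per sample; probs is (N, K) with rows summing to 1.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_labeling(probs, k):
    # Return indices of the k most uncertain predictions, i.e. the samples
    # most worth routing to citizen-science annotators.
    return np.argsort(-entropy(probs))[:k]

# Hypothetical model outputs for three samples over two classes.
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.7, 0.3]])
chosen = select_for_labeling(probs, 1)
print(chosen)  # [1] — the 50/50 prediction is the most uncertain
```

Labels gathered on the selected samples are then fed back into training, so annotation effort concentrates where the network is least confident.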