Underwater Cave Mapping using Stereo Vision

This project introduces a Cyber-Physical System (CPS) to the underwater cave exploration community, ensuring that there is no interference with standard diving operations. Underwater cave mapping is crucial for monitoring and tracking groundwater flows in karstic aquifers. In this project we construct volumetric maps of the cave system by deploying a stereo camera used in conjunction with a structured video light carried by the diver. These detailed 3-D representations of underwater caves will give researchers a better perspective on their size, structure, and connectivity, as well as insights into the hydrogeological processes that formed the caves.
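As an illustration of the stereo geometry underlying the volumetric mapping, the sketch below converts a disparity map to metric depth via the standard pinhole-stereo relation depth = f·B/d. The function name and the example focal length and baseline are illustrative assumptions, not values from this project:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth.

    Standard pinhole-stereo relation: depth = f * B / d.
    Pixels with non-positive disparity are marked invalid (inf).
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: 800 px focal length, 12 cm baseline, 4 px disparity -> 24 m
print(disparity_to_depth([[4.0]], 800.0, 0.12)[0, 0])
```

Triangulating such depth maps frame by frame, and registering them along the diver's trajectory, is what yields a volumetric model of the cave.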

Sonar Visual Inertial SLAM of Underwater Structures

This paper presents an extension to a state-of-the-art Visual-Inertial state estimation package (OKVIS) in order to accommodate data from an underwater acoustic sensor. Mapping underwater structures is important in several fields, such as marine archaeology, search and rescue, resource management, hydrogeology, and speleology. Collecting the data, however, is a challenging, dangerous, and exhausting task. The underwater domain presents unique challenges in the quality of the visual data available; as such, augmenting the exteroceptive sensing with acoustic range data results in improved reconstructions of the underwater structures. Experimental results from underwater wrecks, an underwater cave, and a submerged bus demonstrate the performance of our approach.
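A minimal sketch of how an acoustic range measurement could enter a tightly coupled estimator such as the extended OKVIS: the sonar return is compared against the range predicted from the current state estimate, and the whitened difference becomes one more residual in the optimization. The function signature and noise value below are illustrative assumptions, not the package's actual API:

```python
import numpy as np

def sonar_range_residual(p_WS, p_WP, measured_range, sigma=0.05):
    """Whitened residual for one acoustic range measurement.

    p_WS: sensor position in the world frame, shape (3,)
    p_WP: estimated 3-D point the sonar beam hits, shape (3,)
    measured_range: range returned by the sonar (metres)
    sigma: assumed standard deviation of the range noise (metres)
    """
    predicted = np.linalg.norm(np.asarray(p_WP, float) - np.asarray(p_WS, float))
    return (predicted - measured_range) / sigma

# A consistent measurement yields a near-zero residual:
print(sonar_range_residual([0, 0, 0], [3, 4, 0], 5.0))
```

Summing squared residuals of this form alongside the visual reprojection and inertial terms is what lets the acoustic data constrain scale and structure where the imagery alone is degraded.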

Underwater Cave Mapping Using Sonar, Visual, Inertial, and Depth Sensors

Underwater caves present a unique challenge when it comes to mapping. While they are an important environment, serving as a repository of fresh water, preserving evidence of geological processes, and often containing unique archaeological findings, they are also difficult to access and dangerous to navigate. Before venturing beyond the light zone with autonomous robots, it is crucial to ensure that localization and mapping capabilities have been developed and are adequately robust. In this paper we present a novel approach that combines acoustic, visual, inertial, and depth data to simultaneously track the trajectory of the sensor suite and map the floor, ceiling, and walls of the cave. More specifically, we utilize the artificial light in the scene to identify additional features on the boundaries (walls) of the cave. The popular open-source package OKVIS has been augmented with improved feature handling and the processing of range data. Experimental results from different Florida caves validate the quality of the proposed approach. Furthermore, a comparison between several compatible packages is presented, together with a discussion of the effectiveness of different feature choices.
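One simple way the artificial-light cue could be turned into candidate boundary features is to mark lit pixels that border unlit ones, tracing the edge of the video light's footprint on the cave walls. The sketch below is a simplified illustration of that idea under stated assumptions (a grayscale image normalized to [0, 1], a fixed threshold), not the paper's actual feature detector:

```python
import numpy as np

def lit_boundary_mask(gray, thresh=0.5):
    """Mark pixels on the edge of the region lit by the video light.

    gray: 2-D float image with values in [0, 1].
    A pixel is on the boundary if it is lit but has at least one
    unlit 4-neighbour (image-border pixels keep missing neighbours lit).
    """
    lit = gray > thresh
    interior = lit.copy()
    interior[1:, :] &= lit[:-1, :]   # neighbour above is lit
    interior[:-1, :] &= lit[1:, :]   # neighbour below is lit
    interior[:, 1:] &= lit[:, :-1]   # neighbour to the left is lit
    interior[:, :-1] &= lit[:, 1:]   # neighbour to the right is lit
    return lit & ~interior
```

Features sampled along such a boundary give the estimator constraints on the cave walls, complementing the floor and ceiling points seen directly in the imagery.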

Experimental Comparison of Open-Source Vision-Based State Estimation Algorithms

In robot autonomy, the joint estimation of state and map, popularly known as Simultaneous Localization and Mapping (SLAM), is an active research field. Many works have attempted to solve the SLAM problem, yet no method exists that can be called robust in the sense that it works in all environments and conditions. In this comparative study, we take recent and popular open-source SLAM systems and evaluate them on our datasets. These datasets were collected to represent different environments under different conditions, e.g., illumination, for multiple robots.
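Comparisons of this kind are commonly summarized with the Absolute Trajectory Error (ATE). A minimal sketch, assuming the estimated and ground-truth trajectories are already time-associated and expressed in the same frame:

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute Trajectory Error (RMSE) between two trajectories.

    est, gt: (N, 3) arrays of estimated and ground-truth positions,
    assumed already time-associated and aligned in the same frame.
    Returns the root-mean-square of the per-pose position errors.
    """
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    err = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

Reporting one such number per system and per dataset makes the cross-environment comparison concrete, though in practice a similarity alignment (e.g., Umeyama) is applied first when the estimator's frame or scale is unobservable.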