Abstract

Simultaneous Localization and Mapping (SLAM) is pivotal for autonomous robots navigating and operating in complex environments. Despite strides in SLAM technology, challenges persist in achieving accuracy and efficiency, especially in dynamic and resource-limited scenarios. This dissertation tackles three critical facets of the visual SLAM problem: environment simulation and sensor data processing, resource efficiency, and 3D object representation.

The first segment concentrates on simulating realistic environments and optimizing sensor data processing for mapping applications. It aims to enhance algorithm evaluation and refinement across diverse operational contexts, including UAV flights and challenging lighting conditions. The second part introduces a novel SLAM solution that addresses low-bandwidth and computational constraints by capitalizing on planar semantic maps. Finally, the dissertation proposes an advanced method for 3D shape generation that integrates deep learning systems with shape grammars to provide more accurate representations of common objects.

Contributions encompass a simulation framework tailored for mapping applications, a thermal sensor photometric correction model, an efficient RGB-D SLAM system emphasizing planar semantic mapping, and a fusion technique for enhanced 3D shape representation. These advancements collectively empower SLAM systems to perceive, navigate, and interact with spatial environments more effectively. They enable agents to generate and communicate compressed map information within resource constraints, fostering closer collaboration between humans and robots.
