To operate in human environments and to understand human commands, robots must be equipped with a knowledge representation that integrates both geometric and symbolic knowledge. In the literature, such a representation is referred to as a semantic map, which enables the robot to interpret user commands by grounding them in its sensory observations. However, even though a semantic map is key to enabling cognition and high-level reasoning, building one is a complex challenge due to the need to generalize to diverse scenarios. As a consequence, commonly used techniques do not always guarantee rich and accurate representations of the environment and of the objects therein. In this paper, we depart from previous approaches by attacking the problem of semantic mapping from a different perspective. While existing approaches mainly focus on generating a reliable map from sensory observations, often collected by a human user teleoperating the mobile platform, we argue that the process of semantic mapping starts at the data-gathering phase and is a combination of both perception and motion. To tackle these issues, we design a new family of approaches to semantic mapping that exploit both active vision and domain knowledge to improve overall mapping performance with respect to other map-exploration methodologies.
2021 European Conference on Mobile Robots (ECMR), 2021, pp. 1-8
S-AvE: Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots
Vincenzo Suriani, Sara Kaszuba, Sandeep R. Sabbella, Francesco Riccio, Daniele Nardi