Efficient human-machine interaction for cooperative learning of shared categories


Project Details

Project start:
01/2017
Project end:
12/2019
Total project budget:
214.500,00



Principal Investigator(s)



Tags

Artificial Intelligence



Research areas



DFG Areas



Official Statistics Areas


Abstract

Self-localization (SL) is a core technology for autonomous mobile robots that perform tasks in
extended spatial scenarios. This has raised great research interest in technologies that enable
sufficiently precise localization of a mobile device in indoor or outdoor environments. Techniques
that use satellite-based localization or distance-based sensors such as laser-range finders are
well established, but still suffer from drawbacks such as limited satellite availability or high hardware
effort and cost. Vision-based localization provides an interesting alternative, but remains a field of
intensive research due to the challenges of visual perception, such as occlusion, changing light
conditions, and 3D environment appearance changes.
Autonomously navigating robots are now becoming part of everyday life. The first affordable robots
were autonomous lawn mowers and indoor cleaning robots, which started as purely reactive
systems without much sensing or intelligence. However, their application domain is much simpler
than, for example, that of an autonomous car. A navigating cleaning robot can perform useful work
most of the time, and even occasional failure will not cause damage. Thus, extending the sensory
and cognitive abilities of small household robots is a domain where new concepts can become
useful in the near future. We consider as especially interesting those solutions that try to mimic the
performance and robustness of biological navigational processing.
Compared to current technical systems, most animals have an amazing capability of self-localization
and navigation within their natural habitat. Different sensory modalities such as vision, olfaction, audition,
and magnetic sensing play an important role in this ability. Some species, such as rodents [2], can robustly
navigate using vision alone, and the discovery of place cells [1] in the rat hippocampus was a
major breakthrough in the understanding of space representation in mammals. It has been shown
that the neural learning approach of slow feature analysis (SFA) [10] can provide a model for the
formation of place cells in the brain. Recently it was demonstrated that these methods are
robustly applicable in outdoor environments and can enable self-localization of a mobile robot
[12].
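The core of SFA [10] can be summarized in a few lines: whiten the input time series, then find the directions in which the whitened signal's time derivative has minimal variance, i.e., the outputs that vary most slowly over time. Below is a minimal sketch of linear SFA in Python, assuming only NumPy; the function name and the toy signal are illustrative and not taken from the project.

    import numpy as np

    def linear_sfa(x, n_features):
        """x: array of shape (T, D), a time series of D-dimensional inputs.
        Returns a (D, n_features) projection onto the slowest features."""
        # Center the data.
        x = x - x.mean(axis=0)
        # Whiten via eigendecomposition of the covariance matrix.
        cov = np.cov(x, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        keep = eigvals > 1e-10                       # drop near-singular directions
        whitener = eigvecs[:, keep] / np.sqrt(eigvals[keep])
        z = x @ whitener
        # Approximate the time derivative with finite differences.
        z_dot = np.diff(z, axis=0)
        # Slowest features = directions of minimal derivative variance.
        dcov = np.cov(z_dot, rowvar=False)
        _, d_eigvecs = np.linalg.eigh(dcov)          # ascending eigenvalues
        return whitener @ d_eigvecs[:, :n_features]

    # Toy example: recover a slow sine hidden in faster mixtures.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 1000)
    slow = np.sin(t)
    fast = np.sin(20 * t) + 0.1 * rng.standard_normal(t.size)
    x = np.column_stack([slow + fast, slow - 0.5 * fast, fast])
    w = linear_sfa(x, n_features=1)
    y = (x - x.mean(axis=0)) @ w    # slow signal, up to sign and scale

In the visual SL setting of [12], the inputs would be camera images rather than a toy signal, and SFA modules are typically stacked with nonlinear expansions; the resulting slow outputs vary smoothly with the robot's position, which is what makes them usable as a place representation.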
The goal of this research project is the further investigation and extension of the SFA-based visual
self-localization method in challenging outdoor environments, addressing problems such as short-time-scale
environment changes and seasonal variations. An SL system has to be either invariant to these
changes, adapt its environmental representation to them, or both. While past research has
focused on static scenes and on short-term environmental changes (light, weather, moved objects),
this project will focus on SL under long-term changes.
A small robot can only carry small loads, which means small and simple sensors, a small battery, and limited
computational power. However, cameras are small and cheap, and vision processing has become
increasingly powerful in recent years. Rodents, for example, manage to navigate based on
vision alone [2], which shifts the complexity from the sensor (eye, camera) to the processing of the visual
data. Thus, instead of using specialized dedicated sensors for different tasks, the goal of this project
is to use a single camera or stereo cameras for SL.


Last updated on 2021-01-18 at 13:14