|Funding body:||Consejería de Economía, Comercio e Innovación. Junta de Extremadura|
|Principal Investigator:||Bachiller Burgos, Pilar|
Vision provides humans with much of the information they receive from the outside world. Undoubtedly, a robot with this sensory capability can overcome greater challenges than one that lacks it. Traditionally, there has been a certain decoupling between studies on artificial vision and those related to robot control. In the first group, the main effort has focused on producing symbolic and/or geometric interpretations of the world from sensory information obtained through one or more views of a scene.
Although advances in this area are numerous, this isolated conception of visual perception has not produced the desired results in robotics. Moreover, a growing body of work shows that perception is closely tied to action and therefore should not be treated as a separate process. To establish appropriate links between visual perception and action control, two fundamental questions must be answered: how can the influence of actions be included in the perceptual process, and how can actions be regulated according to perception? Our main hypothesis is that both issues can be solved through attention. More specifically, the basis of our proposal is that attention acts as the connecting link between perception and action, making it possible both to drive the perceptual process according to actions and to modulate actions according to the perceptual results of attentional control.
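The coupling described above can be illustrated with a minimal sketch. All names here (`attend`, `perceive`, `modulate`) are hypothetical and merely show the idea: the intended action biases where attention looks, and what is perceived at the attended location in turn modulates the next action.

```python
# Hypothetical sketch of attention as the link between perception and action.

from dataclasses import dataclass

@dataclass
class Percept:
    region: str      # where attention was focused
    obstacle: bool   # what was perceived there

def attend(action: str) -> str:
    """Top-down influence: the intended action selects the region to attend."""
    return {"move_forward": "front", "turn_left": "left"}.get(action, "front")

def perceive(region: str, world: dict) -> Percept:
    """Sample the world only at the attended region."""
    return Percept(region, world.get(region, False))

def modulate(action: str, percept: Percept) -> str:
    """Feedback: an obstacle in the attended region changes the action."""
    return "turn_left" if percept.obstacle else action

# One perception-action cycle mediated by attention
world = {"front": True, "left": False}   # obstacle ahead, left side clear
action = "move_forward"
action = modulate(action, perceive(attend(action), world))
print(action)  # turn_left
```

The point of the sketch is that neither perception nor control calls the other directly: attention sits in between, selecting what to perceive based on the action and reshaping the action based on what was perceived.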
Building on this idea, we have successfully developed a control architecture that enables an autonomous mobile robot with stereo vision to solve navigation tasks. The main objective of this project is to extend the control architecture of our robots to give them the ability to know and recognize their environment. The new system is a visual SLAM (Simultaneous Localization And Mapping) process that will be integrated into the architecture as a new behavior, interacting effectively with the other behaviors to support overall control and to allow the robot to adapt to its environment.
This new behavior will preserve the attentional philosophy of the system, which will give it the capability to act in both active and passive modes. On one hand, when the environment is unknown, the SLAM behavior will act actively, guiding the robot's attention and actions to explore the environment and build an internal representation of it. On the other hand, in a familiar environment, the SLAM behavior will remain in a passive state while other behaviors act. In that situation, its processing activity will be devoted to keeping the robot localized and updating any changes produced in the environment, using the previously built representation and the information provided by the visual attention system. These two modes of self-localization allow the robot to adapt progressively to its environment without prior knowledge of it. Moreover, this adaptation emerges solely from the interaction between the robot and the environment. As a result, the robot will be able to act effectively in any setting without needing to be reprogrammed or explicitly adapted.
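The active/passive mode switch can be sketched as follows. This is a hypothetical toy, not the project's implementation: the class name, the landmark-dictionary map, and the 50% familiarity threshold are all assumptions made only to show the control flow.

```python
# Hypothetical sketch of the dual-mode SLAM behavior: active exploration
# when the environment is unknown, passive localization and map update
# when it is already familiar.

class SlamBehavior:
    def __init__(self):
        self.map = {}  # landmark id -> last observed position

    def familiar(self, observations: dict) -> bool:
        """Treat the environment as familiar if at least half of the
        currently observed landmarks are already in the map."""
        if not observations:
            return False
        known = sum(1 for lid in observations if lid in self.map)
        return known / len(observations) >= 0.5

    def step(self, observations: dict) -> str:
        if self.familiar(observations):
            # Passive mode: stay localized, fold any changes into the map
            self.map.update(observations)
            return "passive"
        # Active mode: guide attention and actions to explore and map
        self.map.update(observations)
        return "active"

slam = SlamBehavior()
print(slam.step({"door": (1.0, 2.0), "table": (3.0, 0.5)}))  # active (map empty)
print(slam.step({"door": (1.0, 2.1), "lamp": (0.0, 4.0)}))   # passive (door known)
```

In both modes the map is refined from the same observations; only the role of the behavior changes, which is what lets it drive exploration when needed yet stay out of the way of the other behaviors once the environment is known.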