Inner-Simulation for Epistemic Reasoning in Robotics (INSIGHT)
Funding Entity: Ministry of Science and Innovation, 2022 call for KNOWLEDGE GENERATION PROJECTS
Current autonomous machines lack a deep understanding of their environment, their missions, and the unforeseen situations that affect their performance. Although advances have been made in computer vision and machine learning, these capabilities are not sufficient to endow robots with a level of awareness comparable to that of the humans around them or the engineers who designed them. There is a critical lack of system-level awareness, which seriously limits their reliability, performance, and ability to inspire trust. The core of the problem cannot be solved by deep learning alone: what is needed is a cognitive architecture capable of reasoning and generating causal explanations about its own experience, especially in the face of perceptual anomalies.
Unlike approaches that aim for the robot to explain its own planned actions for human benefit (e.g., traceability or accountability), this project focuses on a deeper challenge: discovering new knowledge from explanations of unexpected perceptual experiences while the robot carries out tasks guided by epistemic goals. This process of knowledge generation—comparable to human scientific insight—relies on pre-existing explicit knowledge and an active control architecture. Its purpose is to transform those anomalies into reusable knowledge, strengthening the system's intelligence for future situations.
The central goal of the INSIGHT proposal is to achieve measurable progress toward robots with self-understanding and self-extending capabilities, that is, systems capable of recognizing their own knowledge gaps and generating new content to fill them. These capabilities will enable the robot to interpret perceptual events within a domain model and share those explanations with human interlocutors, enhancing human-robot collaboration from a more solid and cognitively grounded foundation.

Subproject 1
Inner simulation as a contrastive causal engine (INNER)
Reference: PID2022-137344OB-C31
The INNER subproject aims to integrate a physical simulator into the CORTEX cognitive architecture. To this end, a management agent will be designed to enable efficient integration, and an episodic memory, connected to the working memory, will be implemented.
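The text does not specify how the episodic memory and the working memory are connected; the Python sketch below illustrates one plausible arrangement, in which timestamped working-memory snapshots are stored for later re-simulation. All identifiers (Episode, EpisodicMemory, record, anomalies) are hypothetical, not CORTEX interfaces.

```python
# Minimal sketch of an episodic memory fed by working-memory snapshots.
# All identifiers are hypothetical; CORTEX's actual interfaces are not
# described in this text.
import time
from collections import deque
from dataclasses import dataclass


@dataclass
class Episode:
    """A timestamped snapshot of the working memory, tagged if anomalous."""
    timestamp: float
    snapshot: dict          # symbolic state copied from the working memory
    anomalous: bool = False


class EpisodicMemory:
    """Bounded store of past working-memory states for later re-simulation."""

    def __init__(self, capacity: int = 10_000):
        self._episodes = deque(maxlen=capacity)

    def record(self, working_memory_state: dict, anomalous: bool = False) -> None:
        self._episodes.append(
            Episode(time.time(), dict(working_memory_state), anomalous))

    def anomalies(self) -> list:
        """Episodes flagged as perceptual anomalies, candidates for replay
        inside the physical simulator."""
        return [e for e in self._episodes if e.anomalous]
```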
Algorithms will be developed to translate knowledge stored in the semantic memory into configurations of the simulator’s scene graph, and evaluation methods and metrics will be established to compare simulated results with real data from anomalous experiences.
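As an illustration of the translation step and of such a comparison metric, here is a minimal Python sketch. It assumes semantic knowledge is stored as (subject, relation, object) triples and that scene-graph nodes carry 3-D positions; the relations, the flat graph structure, and the mean-distance metric are illustrative assumptions, not the project's actual design.

```python
# Hypothetical sketch of the semantic-memory-to-scene-graph translation and
# a simple discrepancy metric; CORTEX's real representations are not given
# in this text.
import math


def facts_to_scene_graph(facts: list) -> dict:
    """Translate (subject, relation, object) triples into a flat scene graph.

    Only two illustrative relations are handled: 'is_a' sets a node's type,
    'at' sets its (x, y, z) position in the simulator's world frame.
    """
    graph = {}
    for subject, relation, value in facts:
        node = graph.setdefault(subject, {"type": None, "position": None})
        if relation == "is_a":
            node["type"] = value
        elif relation == "at":
            node["position"] = value
    return graph


def pose_discrepancy(simulated: dict, observed: dict) -> float:
    """Mean Euclidean distance between simulated and observed node positions,
    used here as a toy metric for flagging anomalous experiences."""
    shared = [n for n in simulated if n in observed]
    if not shared:
        return float("inf")
    total = sum(math.dist(simulated[n]["position"], observed[n]["position"])
                for n in shared)
    return total / len(shared)


facts = [("cup1", "is_a", "cup"), ("cup1", "at", (0.4, 0.1, 0.8))]
scene = facts_to_scene_graph(facts)
observed = {"cup1": {"type": "cup", "position": (0.4, 0.1, 0.3)}}
print(pose_discrepancy(scene, observed))   # -> 0.5: the cup is lower than simulated
```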
In addition, real-time synthesis of perceptual detectors from causal rules and synthetic data will be explored, incorporating them into the system's procedural learning.
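A toy example of what such a synthesis loop could look like: a hand-written causal rule labels synthetically sampled feature vectors, and a classifier fitted to them becomes a cheap runtime detector. The rule, the features, and the choice of a scikit-learn decision tree are assumptions for illustration only.

```python
# Toy sketch of synthesizing a perceptual detector from a causal rule plus
# synthetic data. The rule, the features, and the decision-tree classifier
# are illustrative assumptions, not the project's actual method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def causal_rule(support_overlap: float, tilt_deg: float) -> bool:
    """Hypothetical rule: an object falls if it barely overlaps its support
    surface or is strongly tilted."""
    return support_overlap < 0.2 or tilt_deg > 30.0


# Label synthetic samples drawn from the rule's input space.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.0, 1.0, 2000),    # support overlap
                     rng.uniform(0.0, 60.0, 2000)])  # tilt in degrees
y = np.array([causal_rule(o, t) for o, t in X])

# The fitted model becomes a cheap runtime "will it fall?" detector.
detector = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(detector.predict([[0.1, 5.0], [0.9, 10.0]]))   # -> [ True False]
```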
Coordinating PIs
Subproject 2
Explanation and learning from self-extension (ELSE)
Reference: PID2022-137344OB-C32
The ELSE subproject aims to extend the CORTEX architecture by incorporating cognitive agents capable of monitoring the development of the robot's missions and detecting inconsistencies that act as triggers for epistemic goals.
One of the main focuses is the implementation of an introspective inner speech capability, where different agents within CORTEX verbally interact to represent the evolution of the mission. This ability is then extended to a situated human-robot interaction environment, allowing the system to explain its decisions and reasoning in a way that is understandable to users, thereby improving transparency and human-robot collaboration.
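A minimal sketch of this monitoring-and-verbalization loop follows, assuming expected and observed mission states are flat dictionaries; MissionMonitor, EpistemicGoal, and the message format are hypothetical names, not CORTEX interfaces.

```python
# Minimal sketch of a monitoring agent that turns expectation/observation
# mismatches into inner-speech utterances and epistemic goals. All names
# are hypothetical; the text does not specify CORTEX's agent interfaces.
from dataclasses import dataclass


@dataclass
class EpistemicGoal:
    """A goal to explain a detected inconsistency."""
    about: str
    question: str


class MissionMonitor:
    def __init__(self):
        self.inner_speech = []   # verbal trace shared among agents
        self.goals = []

    def check(self, expected: dict, observed: dict) -> None:
        for key, exp_value in expected.items():
            obs_value = observed.get(key)
            if obs_value != exp_value:
                # Verbalize the mismatch so other agents (and, later,
                # human interlocutors) can follow the reasoning.
                self.inner_speech.append(
                    f"I expected {key} to be {exp_value}, "
                    f"but I observe {obs_value}.")
                self.goals.append(EpistemicGoal(
                    about=key,
                    question=f"Why is {key} {obs_value} instead of {exp_value}?"))


monitor = MissionMonitor()
monitor.check(expected={"door": "open"}, observed={"door": "closed"})
print(monitor.inner_speech[0])
```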
Principal Investigators
Subproject 3
Learning causal knowledge representations (LEAKER)
Reference: PID2022-137344OB-C33
The LEAKER subproject, led by UCLM, aims to extend the CORTEX architecture by incorporating a specialized agent that plays the role of semantic memory. This agent will be fully integrated with key system modules such as the working memory and the internal simulator, forming a coherent and functional cognitive structure.
It will also serve as a key tool for generating causal explanations in the face of unexpected events, enhancing the system's ability to reason and communicate its decisions in a comprehensible and transparent manner.
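One way to picture the agent's two roles described above, factual store and source of candidate causal explanations, is the following Python sketch; the causal-link table and all method names are illustrative assumptions.

```python
# Illustrative sketch of a semantic-memory agent exposing two interfaces:
# fact assertion for the working memory, and candidate causal explanations
# for unexpected events. All names are hypothetical.
class SemanticMemoryAgent:
    def __init__(self):
        # (cause, effect) pairs standing in for a learned causal model.
        self.causal_links = [
            ("floor_is_wet", "object_slides"),
            ("gripper_misaligned", "grasp_fails"),
        ]
        self.facts = set()

    def assert_fact(self, fact: tuple) -> None:
        """Called by the working memory when new knowledge is consolidated."""
        self.facts.add(fact)

    def explain(self, unexpected_event: str) -> list:
        """Return candidate causes to be tested in the internal simulator."""
        return [cause for cause, effect in self.causal_links
                if effect == unexpected_event]


memory = SemanticMemoryAgent()
print(memory.explain("grasp_fails"))   # -> ['gripper_misaligned']
```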
The subproject also includes validating the semantic memory against its adaptability to the selected use cases and its suitability for operating in real-world environments, through deployment on the Shadow robot.
Principal Investigators
