Why did I fail? A causal-based method to find explanations for robot failures

Robots that can explain their own failures are more trustworthy and transparent, and the explanations can help correct their behavior. A recent paper published on arXiv.org proposes a method for generating such explanations based on a causal model that gives robots a partial understanding of their environment.

Robotic grippers. Image credit: Ars Electronica, CC BY-NC-ND 2.0 via Flickr

The researchers use causal Bayesian networks to tackle the problem of knowledge acquisition and propose a new method for generating explanations of task failures based on the learned causal knowledge. The method compares the variable parametrization associated with the failed action against the closest parametrization that would have led to a successful execution.
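To make the search step concrete, here is a minimal sketch in Python of finding the closest successful parametrization via breadth-first search, as the paper describes. The `neighbors` function and the `predict_success` callback are hypothetical placeholders standing in for the discretization of the action variables and a success query against the learned causal model.

```python
# Minimal sketch (not the authors' implementation): BFS outward from the
# failed parametrization until the causal model predicts success.
from collections import deque

def closest_successful_parametrization(failed, neighbors, predict_success):
    """Return the parametrization closest to `failed` (in discrete steps)
    that the causal model predicts to succeed, or None if none exists."""
    frontier = deque([failed])
    visited = {failed}
    while frontier:
        state = frontier.popleft()
        if predict_success(state):
            return state
        for candidate in neighbors(state):
            if candidate not in visited:
                visited.add(candidate)
                frontier.append(candidate)
    return None

# Illustrative two-variable example: bins for x-offset and drop height.
def neighbors(state):
    x, h = state
    return [(x + dx, h + dh)
            for dx in (-1, 0, 1) for dh in (-1, 0, 1)
            if (dx, dh) != (0, 0)
            and 0 <= x + dx < 5 and 0 <= h + dh < 5]

predict_success = lambda s: s == (1, 0)  # stand-in for the model query
print(closest_successful_parametrization((2, 2), neighbors, predict_success))
```

Because breadth-first search expands states in order of distance from the failed parametrization, the first state predicted to succeed is also the closest one, which keeps the resulting contrastive explanation minimal.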

The researchers demonstrate how causal Bayesian networks can be learned from simulations, and real-world experiments confirm that the learned causal models transfer from simulation to reality without any retraining.
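As an illustration of the learning step, the following is a minimal sketch assuming the open-source pgmpy library. The toy trial data, the variable names (x_offset, drop_height, success), and the hand-specified edges are assumptions for demonstration, not the paper's actual model or dataset.

```python
# Sketch: fit a causal Bayesian network to simulated stacking trials
# and query the success probability of a given parametrization.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Toy simulated trials: discretized action parameters plus outcome.
data = pd.DataFrame({
    "x_offset":    [0, 1, 2, 0, 2, 1, 0, 2],
    "drop_height": [0, 0, 1, 2, 2, 1, 1, 2],
    "success":     [1, 1, 0, 0, 0, 1, 1, 0],
})

# Assumed causal structure: action parameters influence the outcome.
model = BayesianNetwork([("x_offset", "success"),
                         ("drop_height", "success")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Query the probability of success for one parametrization.
infer = VariableElimination(model)
print(infer.query(["success"], evidence={"x_offset": 1, "drop_height": 1}))
```

Success predictions of exactly this kind are what drive the breadth-first search sketched above.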

Robot failures in human-centered environments are inevitable. Therefore, the ability of robots to explain such failures is paramount for interacting with humans in order to increase trust and transparency. To achieve this skill, the main challenges addressed in this paper are I) acquiring enough data to learn a causal model of the environment and II) generating causal explanations based on that model. We address I) by learning a causal Bayesian network from simulation data. Concerning II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation is based on setting the failure state in contrast with the closest state that would have allowed for a successful execution, which is found through breadth-first search and is based on success predictions from the learned causal model. We assess the sim2real transferability of the causal model on a cube stacking scenario. Based on real-world experiments with two differently embodied robots, we achieve a sim2real accuracy of 70% without any retraining or fine-tuning. Our method thus allowed real robots to give failure explanations like, ‘the upper cube was dropped too high and too far to the right.’
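Once the closest successful state is found, the contrast between it and the failed state can be turned into a sentence like the one quoted above. The sketch below is purely illustrative; the variable names, direction phrases, and sentence template are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: phrase the failure as a contrast between the
# failed parametrization and the closest successful one.
PHRASES = {
    "drop height": ("too high", "too low"),
    "x-offset":    ("too far to the right", "too far to the left"),
}

def contrastive_explanation(failed, success, names):
    parts = []
    for name, f, s in zip(names, failed, success):
        if f != s:
            higher, lower = PHRASES[name]
            parts.append(f"the {name} was {higher if f > s else lower}")
    return "I failed because " + " and ".join(parts) + "."

print(contrastive_explanation(failed=(2, 2), success=(1, 0),
                              names=("x-offset", "drop height")))
# -> I failed because the x-offset was too far to the right
#    and the drop height was too high.
```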

Research Article: Diehl, M. and Ramirez-Amaro, K., “Why did I fail? A causal-based method to find explanations for robot failures”, 2022. Link: https://arxiv.org/abs/2204.04483

