AI for Explaining Decisions in Multi-Agent Environments


Explanations are necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. They are even more important when the AI system makes decisions in multi-agent environments, where humans do not know the system's goals, since these may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the preferences of the user and of the other agents, the environment settings, and properties such as fairness, envy, and privacy. We will discuss three cases of Explainable decisions in Multi-Agent Environments (xMASE): explanations for multi-agent reinforcement learning, explanations of advice in complex repeated decision-making environments, and explanations of preference- and constraint-driven optimization problems. For each case, we will present an algorithm for generating explanations and report on human experiments that demonstrate the benefits of the resulting explanations in increasing human satisfaction with the AI system.