Robust Explanations: The Case of Prime Implicants


This paper addresses the problem of quantifying the robustness level of explanations, defined as the maximum perturbation of a classifier's parameters under which a given explanation remains valid. We establish computational complexity results for linear models, which motivate the design of a log-linear time algorithm. The proposed notion is then used to analyze the robustness of various types of prime implicants, in particular the shortest and the most robust ones, leading to practical recommendations on the relevance of each type depending on the application scenario.
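To make the notion concrete, here is a minimal sketch (not the paper's algorithm) of one way such a robustness level could be computed for a linear classifier over binary features. All modeling choices below are assumptions for illustration: features take values in {0, 1}, the positive class is explained, only the weights (not the bias) are perturbed, and perturbations are bounded in L-infinity norm by eps. Under these assumptions the worst-case margin of a partial assignment is piecewise linear and nonincreasing in eps, so the largest valid eps can be found by sorting the breakpoints, which yields a log-linear running time.

```python
def robustness(w, b, fixed):
    """Largest eps (L-inf bound on weight perturbations) under which the
    partial assignment `fixed` remains an implicant of the positive class.

    w     : list of weights of the linear classifier sign(w . x + b)
    b     : bias (assumed unperturbed)
    fixed : dict {feature index: value in {0, 1}} fixed by the explanation

    Worst-case margin as a function of eps:
        m(eps) = b + sum_{i fixed to 1} (w_i - eps)
                   + sum_{i free} min(w_i - eps, 0)
    (features fixed to 0 contribute nothing; free features are chosen
    adversarially in {0, 1}).
    """
    m = b + sum(w[i] for i, v in fixed.items() if v == 1)
    free = [w[i] for i in range(len(w)) if i not in fixed]
    m += sum(f for f in free if f <= 0)
    if m <= 0:
        return 0.0  # the explanation is not even valid at eps = 0
    # Slope of m(eps) at eps = 0: each fixed-to-1 feature and each
    # nonpositive free weight loses value at rate 1 as eps grows.
    slope = -(sum(1 for v in fixed.values() if v == 1)
              + sum(1 for f in free if f <= 0))
    prev = 0.0
    # Positive free weights are the breakpoints: past eps = w_i, the
    # adversary starts exploiting feature i, steepening the slope.
    for bp in sorted(f for f in free if f > 0):
        if slope < 0 and m + slope * (bp - prev) <= 0:
            return prev + m / (-slope)  # margin reaches 0 in this segment
        m += slope * (bp - prev)
        prev = bp
        slope -= 1
    if slope < 0:
        return prev + m / (-slope)
    return float('inf')  # margin never reaches 0: explanation always valid


# Example: w = [2, -1, 0.5], b = -0.5, explanation fixes feature 0 to 1.
print(robustness([2, -1, 0.5], -0.5, {0: 1}))
```

The sort over the positive free weights dominates the cost, so the sketch runs in O(n log n), matching the log-linear bound claimed for the algorithm in the abstract.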