On January 14th, 2020, I woke up to a message on my phone: AAAI had tried to contact me. What could this mean?
This was completely unexpected: the AAAI 2004 paper on QuickXplain had won the 2020 AAAI Classic Paper Award. That paper! The one I had considered myself lucky to get accepted. Yes, it was not the best-written paper, but it had left an impact. And the constraint programming community celebrated the award. Eugene Freuder even gave it a shout-out when receiving the IJCAI Award for Research Excellence some months later.
The AAAI Classic Paper Award comes with a retrospective presentation at the AAAI conference. There are different ways to give such a presentation. For me, it was an occasion to show that my scientific capabilities were still there, and I did my best to put together an excellent academic talk, which I titled The QuickXplain Story. The audience included prominent guests such as Jon Doyle, Luis Lamb, Barry O’Sullivan, and others. Afterwards, Jon Doyle told me that I am too modest. That is certainly the best compliment you can give me.
Since then, four more years have passed. Explainable AI has become an extremely important topic. It is great that I could make progress on this topic for constraint solvers and logical reasoning systems and propose an elegant algorithm. Even if there are now variants of QuickXplain that are better in certain cases, they follow the same design choices: they are non-intrusive methods that use a solver as a black box to check the consistency of different subsets of constraints, they move constraints between foreground and background, and they decompose the explanation problem in one way or another. QuickXplain has indeed paved new ways for explanation generation, and its basic design choices remain valid.
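To make these design choices concrete, here is a minimal Python sketch of the divide-and-conquer scheme from the 2004 paper. It is an illustration, not a reference implementation: the `is_consistent` callback stands in for the black-box solver call, and the `(name, predicate)` representation of constraints is an assumption chosen for the toy example below.

```python
from typing import Callable, List, Sequence

# A constraint can be anything the solver understands;
# here it is a (name, predicate) pair for the toy example below.
Constraint = tuple

def quickxplain(background: List[Constraint],
                constraints: List[Constraint],
                is_consistent: Callable[[Sequence[Constraint]], bool]) -> List[Constraint]:
    """Return a preferred minimal conflict among `constraints` (ordered from
    most to least preferred), or [] if background + constraints is consistent."""
    if not constraints or is_consistent(background + constraints):
        return []
    return _qx(background, bool(background), constraints, is_consistent)

def _qx(background, has_delta, constraints, is_consistent):
    # If constraints were just moved into the background and it is already
    # inconsistent, nothing from this foreground is needed in the conflict.
    if has_delta and not is_consistent(background):
        return []
    if len(constraints) == 1:
        return list(constraints)
    k = len(constraints) // 2
    c1, c2 = constraints[:k], constraints[k:]
    # Move the first half into the background and explain the second half ...
    d2 = _qx(background + c1, bool(c1), c2, is_consistent)
    # ... then explain the first half against the partial conflict found so far.
    d1 = _qx(background + d2, bool(d2), c1, is_consistent)
    return d1 + d2
```

A toy "solver" suffices to see it work, say bounds on a single integer variable:

```python
def is_consistent(constraints, domain=range(10)):
    # Consistent iff some x in 0..9 satisfies all predicates.
    return any(all(pred(x) for _, pred in constraints) for x in domain)

constraints = [
    ("x >= 2", lambda x: x >= 2),
    ("x <= 8", lambda x: x <= 8),
    ("x > 8",  lambda x: x > 8),
    ("x < 3",  lambda x: x < 3),
]
conflict = quickxplain([], constraints, is_consistent)
print([name for name, _ in conflict])  # ['x <= 8', 'x > 8']
```

The decomposition is what makes the method practical: when a whole half can be moved into the background without losing inconsistency, it is skipped in one solver call, so small conflicts among many constraints are found with far fewer checks than testing constraints one by one.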
QuickXplain is also a good example of a principled approach with many benefits: it is simple and elegant, generic, effective, and meaningful. At a moment when researchers struggle to find meaningful explanations for deep learning networks (probably because these networks lack a meaningful structure) and prominent researchers are saying that explainable AI is in trouble, explanation detection has been solved for constraint solvers and logical reasoners. This does not mean that everything has been done, but the basic explanation mechanisms are there and will stay. ◉
This website expresses personal reflections on scientific topics with the purpose of contributing to discussions in scientific communities. As such, this content is not related to any organization, association, or employment.