Logic-based learning for Interpretable AI
Alessandra Russo
Learning interpretable programs from data is one of the main challenges of AI. Over the last two decades there has been growing interest in logic-based learning, a field of machine learning that aims to develop algorithms and systems for learning programs that explain labelled data in the context of given background knowledge. In contrast to black-box deep learning methods, logic-based learning systems learn programs that can be easily expressed in plain English and explained to human users, facilitating closer interaction between humans and machines. In this talk, I present recent advancements in the field of logic-based learning. I will introduce frameworks and algorithms for learning different classes of programs, ranging from definite programs under the least Herbrand model semantics to highly expressive non-monotonic programs under the answer set semantics. Such programs include normal rules, non-determinism, choice rules, and hard and weak constraints, which are particularly useful for modelling human preferences. I will briefly discuss the relationships between these frameworks and illustrate a range of real-world problems that have been addressed using these systems. I will then conclude with the open challenges that remain in this area.