Interpretable Machine Learning via Linear Temporal Logic

Authors

  • Simon Lutz
  • Daniel Neider

DOI:

https://doi.org/10.11576/dataninja-1176

Keywords:

Explainable AI, Learning of logic formulas, Linear Temporal Logic

Abstract

In recent years, deep neural networks have shown excellent performance, outperforming even human experts in various tasks. However, their inherent complexity and black-box nature often make it hard, if not impossible, to understand the decisions made by these models, hindering their practical application in high-stakes scenarios.

We propose a framework for learning formulas of Linear Temporal Logic (LTL) as inherently interpretable machine learning models. These models can be trained in both supervised and unsupervised settings. Furthermore, they extend easily to handle noisy data and to incorporate expert knowledge.
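To illustrate the idea of an LTL formula serving as an interpretable classifier, the following is a minimal sketch (not the authors' implementation): a recursive evaluator for LTL over finite traces, where a trace is a list of sets of atomic propositions and a learned formula separates positive from negative traces. The formula, trace contents, and tuple-based syntax are illustrative assumptions.

```python
# Sketch: an LTL formula as a binary classifier over finite traces.
# A trace is a list of sets; trace[t] holds the atomic propositions true at step t.
# Formulas are nested tuples, e.g. ("F", ("ap", "grant")) for "eventually grant".

def evaluate(formula, trace, t=0):
    """Evaluate an LTL formula on a finite trace, starting at position t."""
    op = formula[0]
    if op == "ap":    # atomic proposition
        return t < len(trace) and formula[1] in trace[t]
    if op == "not":
        return not evaluate(formula[1], trace, t)
    if op == "and":
        return evaluate(formula[1], trace, t) and evaluate(formula[2], trace, t)
    if op == "or":
        return evaluate(formula[1], trace, t) or evaluate(formula[2], trace, t)
    if op == "X":     # next: formula holds at the next step
        return t + 1 < len(trace) and evaluate(formula[1], trace, t + 1)
    if op == "F":     # finally: formula holds at some future step
        return any(evaluate(formula[1], trace, i) for i in range(t, len(trace)))
    if op == "G":     # globally: formula holds at every remaining step
        return all(evaluate(formula[1], trace, i) for i in range(t, len(trace)))
    if op == "U":     # until: second formula eventually holds, first holds until then
        return any(evaluate(formula[2], trace, j)
                   and all(evaluate(formula[1], trace, i) for i in range(t, j))
                   for j in range(t, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# Hypothetical example: G(request -> F grant), "every request is eventually granted",
# written with the implication expanded as (not request) or (F grant).
phi = ("G", ("or", ("not", ("ap", "request")), ("F", ("ap", "grant"))))
positive = [{"request"}, set(), {"grant"}, set()]   # the request is granted
negative = [{"request"}, set(), set(), set()]       # the request is never granted
print(evaluate(phi, positive))  # True
print(evaluate(phi, negative))  # False
```

Unlike a neural network, the classifier here is the formula itself, so a domain expert can read off exactly why a trace was accepted or rejected.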

Published

2024-10-11