Interpretable artificial intelligence and machine learning

In some applications, such as medicine and the law, having an AI/ML model that can be explained is as important as the result it arrives at.


Deep neural networks, particularly deep convolutional neural networks and their variants, have achieved great results in image classification, speech recognition and signal processing, and natural language processing, as well as in other domains.

However, the learned models are often large and complex, which makes them very hard to explain and interpret. In applications where accuracy is the primary focus, that is not a problem; in other cases, it is.

The value of explainable artificial intelligence (XAI)

In domains such as medicine and the law, understanding why and how a learned model derives its solutions is more important than, or at least as important as, its predictive accuracy.

In such areas, it is important that the workings of learned models can be interpreted and understood.

Our research into explainable AI and machine learning

The Group has been carrying out research in explainable AI (XAI) and machine learning that can derive interpretable models. Our main approaches include:

  • feature selection and construction
  • online learning of simplified models
  • designing interpretable components to learn interpretable models
  • post-hoc processing of learned complex models (see the sketch below).
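
As a rough illustration of the last approach, and not the Group's own method, the sketch below trains a small decision tree as a global surrogate that mimics the predictions of a more complex neural network, so the surrogate's rules can be read directly. The dataset, model choices, and depth limit are arbitrary assumptions made only for the example.

```python
# Hypothetical sketch of post-hoc interpretation via a global surrogate:
# fit an interpretable decision tree to the predictions of a "black-box"
# neural network, then inspect the tree's rules and its fidelity.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Complex, hard-to-interpret model.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X_train, y_train)

# Interpretable surrogate trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")

# Human-readable rules extracted from the surrogate.
print(export_text(surrogate,
                  feature_names=list(load_breast_cancer().feature_names)))
```

A surrogate like this only approximates the original model, so its fidelity should always be reported alongside any explanation drawn from it.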