This dissertation was written as part of the MSc in Data Science at the International Hellenic University.
Deep neural networks have achieved considerable success in tasks such as text classification and fault detection, yet there is growing concern over their black-box nature. This lack of interpretability undermines users' confidence in deep learning systems. Explainability for models in computer vision and natural language processing has received considerable attention in recent years. Time series, which are pervasive across domains, consist of temporal data whose variables change over time. Fault detection in chemical processes is a dynamic time series classification problem, and the interpretability of neural network models on time series is a relatively new research area with many open challenges.
This thesis applies the Integrated Gradients method to the best-performing trained Long Short-Term Memory (LSTM) model in a chemical-process fault detection task, using Mode 1 of the Tennessee Eastman Process (TEP) as the dataset, in order to explain the predictions made by the model.
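Integrated Gradients attributes a model's prediction to its input features by averaging gradients along a straight-line path from a baseline x' to the input x: IG_i(x) = (x_i - x'_i) * ∫₀¹ ∂F(x' + α(x - x'))/∂x_i dα. The sketch below illustrates this computation for a time-series LSTM classifier; it is a minimal illustration, not the thesis's exact setup. The PyTorch model architecture, the zero baseline, the two-class output, the 100-step window, and the use of 52 input variables (TEP's measured plus manipulated variables) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Minimal LSTM classifier for multivariate time series (illustrative stand-in)."""
    def __init__(self, n_features=52, hidden=64, n_classes=2):  # sizes are assumptions
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])         # classify from the last hidden state

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Riemann-sum approximation of Integrated Gradients.

    x        : (time, features) input window
    target   : index of the class whose score is attributed
    baseline : reference input x'; an all-zeros series if None (an assumption)
    """
    if baseline is None:
        baseline = torch.zeros_like(x)
    # Interpolate between baseline and input: x' + alpha * (x - x')
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1)
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    scores = model(path)[:, target]        # target-class score at each path point
    grads = torch.autograd.grad(scores.sum(), path)[0]
    avg_grad = grads.mean(dim=0)           # approximate the path integral
    return (x - baseline) * avg_grad       # (time, features) attributions

# Example: attribute a fault prediction for one 100-step window of 52 variables
model = LSTMClassifier()
window = torch.randn(100, 52)              # placeholder for a real TEP window
attributions = integrated_gradients(model, window, target=1)
print(attributions.shape)                  # torch.Size([100, 52])
```

The resulting attribution matrix assigns a score to every variable at every time step, so it can be inspected to see which process variables, and at which moments, drove the model's fault prediction.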