Interpretability of Time Series Forecasting Models  
Author: Aishwarya Ramakrishnan


Co-Author(s): Akshay Sadanandan; Lyzanne Erika Dsouza; Somesh Patel


Abstract: Interpretability is the degree to which a human can understand the cause of a decision. The higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made. This paper focuses on interpreting several time series models and explaining the predictions produced by the system. The four aspects of Prediction-Based Interpretability that our project addresses are: 1) Feature Impact, 2) Feature Effect, 3) Prediction Explanation, and 4) What-If Explainer. In this project, we followed a model-agnostic approach to implement interpretability for time series models, including several flavors of ARIMA and a few neural network models. We also created a custom explainer in Python for the FB Prophet time series algorithm and a corresponding translator for it in R. This work can benefit the AI (Artificial Intelligence) and ML (Machine Learning) industry by providing an approach to interpret time series models and identify bias in their forecasts. The approaches described in this paper implement interpretability for six forecasting algorithms, thereby helping to demystify the black-box nature of these models.
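As a rough illustration of the model-agnostic approach summarized above, the sketch below applies a kernel-based SHAP explainer to a simple lag-feature forecaster. It is a minimal example, not the authors' implementation: it assumes the numpy, scikit-learn, and shap packages, and helper names such as make_lag_features are illustrative only.

    # Minimal sketch (not the authors' code): model-agnostic SHAP attribution
    # for a lag-feature forecaster. Assumes numpy, scikit-learn, and shap.
    import numpy as np
    import shap
    from sklearn.linear_model import LinearRegression

    def make_lag_features(series, n_lags=3):
        # Each row holds the previous n_lags values; the target is the next value.
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return X, series[n_lags:]

    rng = np.random.default_rng(0)
    series = np.arange(200, dtype=float) + rng.normal(scale=5.0, size=200)  # toy series
    X, y = make_lag_features(series)
    model = LinearRegression().fit(X, y)

    # KernelExplainer only needs a predict function and background data, so the
    # same recipe carries over to ARIMA variants or neural network forecasters
    # once their inputs are expressed as a feature matrix.
    explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
    contributions = explainer.shap_values(X[-1:])  # attribute the latest forecast
    print(dict(zip(["lag_3", "lag_2", "lag_1"], contributions[0].round(3))))

The printed dictionary shows how much each lag feature pushed the most recent forecast up or down, which is the kind of per-prediction attribution the Feature Impact and Prediction Explanation aspects refer to.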


Keywords: Time Series Forecasting Models, Interpretability & Explainability, Neural Networks, Model Agnostic, Linear SHAP & LIME, Python to R Portability
Article #: RQD26-278

Proceedings of the 26th ISSAT International Conference on Reliability & Quality in Design
Virtual Event

August 5-7, 2021