Using weight matrices to build a polynomial representation to assist with interpretability and explainability of deep learning models

Date

2020

Authors

Nthabiseng, Selwane

Abstract

Deep Learning (DL) models use representation learning to discover the features needed for classification and detection. These learned representations present a problem: they are encoded in a way that cannot be explained using human intuition and known mathematics, so we cannot reason about or interpret the results of DL models. The knowledge learned by a neural network is stored internally in its weights. In this research project, we present a novel approach to building a polynomial representation of the learned function using the weight matrices. We were able to represent the learned function and also used convolution to map and visualise the behaviour of the model. A popular explanation method, Local Interpretable Model-agnostic Explanations (LIME), was also used to gain an understanding of predictions made by the feedforward neural network (FFNN).
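To illustrate the core idea of recovering a learned function directly from weight matrices, here is a minimal sketch (not the thesis's actual method): for a two-layer feedforward network with identity activations, composing the layers collapses the network into a single degree-1 polynomial f(x) = Ax + c, whose coefficients are explicit functions of the weights. All weights below are randomly generated stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a small feedforward network (illustrative only).
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def ffnn(x):
    # Two-layer network with identity activations, so the learned
    # function is exactly expressible in terms of its weight matrices.
    return W2 @ (W1 @ x + b1) + b2

# Collapse the layers into a single degree-1 polynomial  f(x) = A x + c.
A = W2 @ W1
c = W2 @ b1 + b2

x = rng.normal(size=3)
assert np.allclose(ffnn(x), A @ x + c)
```

With nonlinear activations the composition is no longer exactly linear, which is where richer polynomial representations of the learned function become necessary.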

Description

A research proposal submitted in partial fulfilment of the requirements for the degree of Master of Science in the field of e-Science in the School of Computer Science and Applied Mathematics, University of the Witwatersrand, 2020
