Explainable artificial intelligence for image classification via feature prediction and localization
LE3 .A278 2020
Silver, Daniel L.
Bachelor of Computer Science
Artificial neural networks (ANNs) are state-of-the-art systems for image classification tasks. Despite outperforming other machine learning tools, ANNs are unusable in some applications due to their lack of explanation of predictions. Currently, prominent explanation tools produce heatmaps which, in essence, show where the network is focusing. This thesis investigates an alternative method for adding explainability to image classification applications by taking a more human-like approach: identifying the features of the classified objects within the images. The theory is that this method will improve classification accuracy while also providing an explanation. Experiments using various multiple-task learners have shown promising results, indicating that it is possible to classify and localize objects and their features in a single model. Certain network architectures also prove capable of at least maintaining (and in certain cases improving) performance after consolidation. The top-performing system provides explanations that are useful for the average input example, and areas that require additional improvement are discussed.
The author retains copyright in this thesis. Any substantial copying or any other actions that exceed fair dealing or other exceptions in the Copyright Act require the permission of the author.