Investigator: Wynne Hsu.
Deep learning has achieved remarkable performance in many applications, but can we really trust the decisions of deep learning systems? The goal of this project is to build trust by providing decision justifications that explain how a model arrives at its decisions. We propose a framework called FLEX that generates justifications rationalizing a CNN's decision in terms of the features responsible for it. FLEX produces explanations that match the visual features the model uses in its prediction and offers insight into incorrect model decisions. We find that 70.5% of misclassified examples have explanations involving features common to both the true class and the predicted class. FLEX also enables automatic image annotation: 88% of the images in the CUB collection have at least one part correctly annotated.
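The abstract does not describe FLEX's internal mechanism. As a rough, hedged illustration of the general idea of tying a CNN's prediction back to the visual features that drive it, the sketch below uses a standard Grad-CAM-style attribution on an off-the-shelf ResNet-18; the model, target layer, and input are illustrative assumptions, and this is not the FLEX method itself.

```python
# Minimal Grad-CAM-style sketch (assumption: ResNet-18 from torchvision, layer4 as
# the target convolutional block, and a random tensor standing in for an image).
# It shows one common way to relate a CNN prediction to visual features; FLEX's
# actual procedure is not specified in this abstract.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out            # feature maps of the last conv block

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]      # gradient of the class score w.r.t. those maps

target_layer = model.layer4              # assumed target layer for illustration
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # placeholder input image tensor
scores = model(x)
pred = scores.argmax(dim=1).item()
scores[0, pred].backward()               # backpropagate the predicted-class score

# Weight each feature map by its average gradient, keep positive evidence only.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1))
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False).squeeze()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

print(pred, cam.shape)                   # predicted class index and a 224x224 saliency map
```

In practice, a saliency map like this can be compared against annotated object parts (e.g., in the CUB bird dataset) to check whether the highlighted regions correspond to nameable features, which is the kind of feature-level justification the project targets.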