Rationalizing Deep Learning Model Decisions

Investigator: Wynne Hsu.

Deep learning has achieved remarkable performance in many applications, but can we really trust the decisions of deep learning systems? The goal of this project is to build trust by providing decision justifications that explain how a model arrives at its decisions. We propose a framework called FLEX that generates justifications to rationalize the decision of a CNN in terms of the features responsible for that decision. FLEX produces explanations that match the visual features the model uses in its prediction and provides insight into incorrect decisions: we find that 70.5% of misclassified examples have explanations involving features common to the true class and the predicted class. FLEX also enables automatic image annotation, with 88% of images in the CUB dataset having at least one part correctly annotated.
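
The project summary above does not spell out FLEX's algorithm. As a rough, illustrative sketch of the general idea of attributing a CNN's prediction to visual features, the code below computes a Grad-CAM-style saliency map for the predicted class. The choice of model (ResNet-50), hooked layer, and image path are assumptions for demonstration; this is not the FLEX method itself.

```python
# Illustrative sketch only: a generic Grad-CAM-style attribution for a CNN,
# NOT the FLEX framework. Model, layer choice, and image path are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["feat"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    # Cache the gradients flowing back into the hooked layer.
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional block of ResNet-50.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "bird.jpg" is a hypothetical input image.
img = preprocess(Image.open("bird.jpg").convert("RGB")).unsqueeze(0)
logits = model(img)
pred = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the hooked layer.
model.zero_grad()
logits[0, pred].backward()

# Weight each feature map by its average gradient and combine into a saliency map.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

print("Predicted class:", pred, "saliency map shape:", tuple(cam.shape))
```

A saliency map of this kind highlights which image regions drove the prediction; FLEX goes further by tying the decision to named features (e.g., object parts), which is what enables the part-level annotations reported above.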
