MISC Talk: Samuel Carton

Date: Tue, 02/12/2019 - 11:30am

Ehrlicher Room, 3100 North Quad 

The Design and Evaluation of Algorithms for Explaining Text Classifiers

Abstract

The machine learning community has recently begun to recognize the need for interpretable predictive models. While predictive models can be trained to be very accurate, sometimes even more accurate on average than their human counterparts, they have a tendency to fail unexpectedly and are ill-equipped to deal with nuance and outliers. One of the biggest challenges in this area lies in defining what it means for an explanation to be effective in the first place, and then in designing algorithms optimized for that quality. In this talk I discuss two papers: the first presents an algorithm for explaining text classifier decisions by producing high-recall attention masks, and the second describes a crowdsourced experiment exploring the impact of this type of explanation on human performance in a model-assisted decision task.
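For readers unfamiliar with this style of explanation, the sketch below gives a rough, hypothetical sense of what a token-level mask over an input text can look like. It is not the algorithm discussed in the talk; the tokens, per-token scores, and threshold are invented purely for illustration.

    # Toy sketch (not the speaker's method): a token-level "mask" explanation
    # of a text classifier decision, where tokens with high relevance scores
    # are kept as the explanation. All values below are hypothetical.

    def mask_explanation(tokens, importance, threshold=0.5):
        """Select the tokens whose (hypothetical) relevance score clears a threshold."""
        return [tok for tok, score in zip(tokens, importance) if score >= threshold]

    tokens = ["the", "movie", "was", "surprisingly", "wonderful"]
    importance = [0.05, 0.20, 0.10, 0.70, 0.95]  # made-up per-token scores

    print(mask_explanation(tokens, importance))  # ['surprisingly', 'wonderful']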

Speaker Bio

Sam Carton is a PhD candidate in the School of Information, advised by Paul Resnick and Qiaozhu Mei. He received a BS in computer science from Northwestern University. Sam's current research focuses on explainable machine learning, where he is interested both in engineering new explanation methods and in understanding the human factors that determine which explanations are effective in real-world settings. His past work includes projects on tracking and visualizing the spread of rumors on social media, as well as predictive modeling of police misconduct. His professional experience includes an internship with Microsoft Research and the Data Science for Social Good Fellowship at the University of Chicago.

The Michigan Interactive and Social Computing research group connects researchers studying human-computer interaction, social computing, and computer-supported cooperative work across the University of Michigan.