IAR Seminar: Elijah Mayfield
Room 3330 North Quad
Explainable Group Decision-Making for Explainable Machine Learning
Ethical machine learning researchers have established that algorithmic decision-making encodes biases and preferences from training data into the features and weights used in future predictions. The high stakes of today’s algorithmic tools have led to a call for interpretability and accountability, demanding algorithms that can explain the reasons that led to an automated decision and assign responsibility for those judgments. In this work I go further, showing that trained models can make visible the complex human discourse processes that led to a dataset’s outcomes in the first place. Using Wikipedia’s Articles for Deletion as a case study, I show that we can describe how debates are won and lost online, and discuss how that impacts explainability research. I will also describe the 14-year historical corpus that we collected for this research, which we release for open use by the broader social computing community.
Elijah Mayfield is an Entrepreneur-in-Residence at Carnegie Mellon University. Previously, he was Vice President of New Technologies at Turnitin, managing machine learning and NLP research for educational products used by more than 30 million students globally. He joined Turnitin when it acquired LightSide Labs, which he founded and led as CEO with support from the Gates Foundation, the College Board, the US Department of Education, and others. Mayfield has coauthored more than 40 peer-reviewed publications on language technologies and human-computer interaction. His honors include a Siebel Scholarship, an IBM Ph.D. Fellowship, and a place on the Forbes 30 Under 30 in Education list.