Study on bias in learning analytics earns Brooks Best Full Research Paper Award at LAK conference
A paper co-authored by University of Michigan School of Information research assistant professor Christopher Brooks received the Best Full Research Paper Award at the International Conference on Learning Analytics & Knowledge (LAK) Conference in Tempe, Arizona. The award was announced on the final day of the conference, March 7, 2019.
The paper, “Evaluating the Fairness of Predictive Student Models Through Slicing Analysis,” describes a tool designed to test the bias in algorithms used to predict student success.
The goal of the paper, Brooks says, was to evaluate whether the algorithms used to predict student success in massive open online courses (MOOCs) were skewed by the gender makeup of the classes.
“We were able to find that some have more bias than others do,” says Brooks. “First we were able to show that different MOOCs tend to have different bias in gender representation inside of the MOOCs.”
STEM MOOCs, for instance, tended to have more men in them, while MOOCs about global affairs tended to have more women.
“What we found is as your population became more imbalanced, the bias in the algorithm increased for these MOOCs.”
The other co-authors on the study are Josh Gardner, an alumnus of UMSI and current PhD student at the University of Washington, and Ryan Baker from the University of Pennsylvania.
“A lot of machine learning algorithms are based on the input data you give,” Brooks says. “So if you give it 100 examples of something and just one example of something else, it will over-train to the 100 examples. It is something the community is grappling with broadly.”
This is not unique to machine learning, Brooks says, but there has not been a way of quantifying that bias for different groups of users in education.
“So even if we acknowledge the bias probably exists, nobody is reporting on that in their scientific papers when they build the next data-mining algorithm.”
The metric that Brooks and his fellow researchers have come up with, which they call ABROCA (Absolute Between-ROC Area), allows people to look at a population group and describe the bias “across all of the different parameterizations of that model, showing how accuracy changes between different groups.”
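The idea behind ABROCA is to measure the area between the ROC curves of two subgroups: if the model is equally accurate for both groups at every decision threshold, the curves coincide and the area is zero. A minimal sketch of that computation is below, assuming binary labels and a binary group attribute; the function name and signature are illustrative, not the authors' released implementation.

```python
# Sketch of the ABROCA idea: area between the ROC curves of two subgroups.
# Names and interface are illustrative, not the paper's published code.
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group):
    """Absolute between-ROC area for a binary group attribute.

    y_true  : true 0/1 outcomes (e.g. course completion)
    y_score : model-predicted probabilities
    group   : 0/1 subgroup membership (e.g. gender)
    """
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    grid = np.linspace(0.0, 1.0, 1001)  # common false-positive-rate grid
    tprs = []
    for g in (0, 1):
        mask = group == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        # put both subgroup ROC curves on the same FPR grid
        tprs.append(np.interp(grid, fpr, tpr))
    # integrate |TPR difference| over the FPR axis (trapezoidal rule)
    diff = np.abs(tprs[0] - tprs[1])
    return float(np.sum((diff[:-1] + diff[1:]) / 2) * (grid[1] - grid[0]))
```

When the model behaves identically for both subgroups the statistic is 0; the more the subgroup ROC curves diverge, the closer it gets to 1.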
“I think the unique part of what we have done is taken a very large data set of more than 4 million students, and we’ve been able to show trends across a large sample of extremely diverse courses.”
This work was supported in part under the Holistic Modeling of Education (HOME) project funded by the Michigan Institute for Data Science (MIDAS).
- Jessica Webster, UMSI PR specialist