HeartMob: A Response to Online Harassment

Recently, when a woman broke up with her boyfriend, he wanted revenge. So he created defamatory websites claiming she had been a prostitute and listing her personal and professional contact information, then sent the sites to 300 of her friends, family members and clients.

She spent more than four years working with local and federal law enforcement, and with Google, to have the sites removed.

Another internet user was frightened by what social media sites such as Facebook and Twitter told her did not violate their policies. “If [someone says], ‘You should shut up and keep your legs together, whore,’ that's not a violation, because they're not actually threatening me,” she said.

“It’s really complicated and frustrating, and it makes me not interested in using those platforms.”

Online harassment is now so common that nearly half of adult internet users have experienced it, according to Pew Research. In response, social media platform providers have taken formal measures to combat various forms of harassment.

That’s good news. The caveat: these measures rely on dated, inadequate approaches that leave certain groups more vulnerable to unacknowledged, and thus unrestricted, online harassment. By contrast, when users of a new online forum, HeartMob (iHeartMob.org), shared their stories and received support from other HeartMob users, they felt more validated, less isolated and less fearful.

These are the findings of a recent study conducted by the University of Michigan School of Information (UMSI) and Sassafras Tech Collective in Ann Arbor, Mich. “Classification and Its Consequences for Online Harassment: Design Insights from HeartMob” was published in the November 2017 issue of Proceedings of the ACM on Human-Computer Interaction.

Lead author of the study is Lindsay Blackwell, UMSI PhD candidate in information; co-authors are Jill Dimond, PhD, worker/owner of Sassafras and University of Michigan alumna; Sarita Schoenebeck, UMSI assistant professor of information; and Cliff Lampe, UMSI associate professor of information.

The study details how major companies’ “scripted responses” to online harassment fail to acknowledge individual experiences or the impacts of harassment, which include personal or professional disruptions, physical and emotional distress, and self-censorship or withdrawal.

Other recent efforts to combat online harassment rely on conventional techniques such as natural language processing and machine learning to identify offensive language online.

“But they fail to address structural power imbalances perpetuated by automated labeling and classification,” according to the study. The authors use intersectional feminist theory to better understand the limitations of current approaches to online harassment and to consider user-driven alternatives.

“There is no value-free approach to technology,” argues Blackwell. “Top-down definitions and policies typically privilege the experiences and concerns of socially dominant groups.”
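To make that critique concrete, here is a minimal sketch of what such a conventional, automated approach can look like. This is not the study’s code or any platform’s actual system; it is a hypothetical example using the scikit-learn library, with invented messages and labels, showing a simple text classifier that flags “offensive” language based only on word patterns chosen by whoever labeled the training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, invented for illustration: 1 = flagged as harassing, 0 = not.
messages = [
    "I will find you and hurt you",
    "you are worthless, shut up",
    "great post, thanks for sharing",
    "see you at the meeting tomorrow",
]
labels = [1, 1, 0, 0]

# A bag-of-words model: it learns only surface word patterns,
# as defined by whoever chose and labeled the training examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Whether a message like the one quoted earlier in this article gets flagged
# depends on those top-down labeling choices, not on the target's experience.
print(model.predict(["you should shut up and keep your legs together"]))
```

Because a model like this inherits whatever definitions its designers and labelers encode, it can overlook exactly the kinds of abuse described by the woman quoted above, which is the gap the researchers argue top-down classification leaves open.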

Online harassment in general has become a critical concern, and many experts fear it will only get worse. According to a 2017 Pew Research Center survey cited in the study, 41 percent of adult internet users have personally experienced online harassment, and 66 percent have witnessed someone being harassed online.

Some groups are more vulnerable to online harassment than others. For example, Pew numbers showed that 25 percent of young women surveyed had been sexually harassed online, compared to just 6 percent of all users.

Additionally, 59 percent of black internet users had experienced online harassment, and 25 percent of black respondents and 10 percent of Hispanic respondents had been targeted online because of their race, compared with only 3 percent of white respondents.

Among lesbian, gay and bisexual internet users, 38 percent had experienced “intimate partner digital abuse,” compared to only 10 percent of heterosexual users.

HeartMob, launched in January 2016, is a private online community designed to give harassment targets a space to document their experiences, to receive supportive messages from other users, and to get help reporting harassment to the platforms where it occurs. Users also can publicize their harassment experiences and the perceived motives behind them.

It was created by leaders of Hollaback!, an advocacy organization seeking to end harassment in public spaces. Today, HeartMob has more than 1,500 users. 

One participant said that HeartMob helped make being online “bearable.” Another said that it gave targets of online harassment “an opportunity to have other people sympathize with them.” 

Others even reported that HeartMob’s supportive framework helped keep harassment targets “from going off the deep end—from feeling alone in this.” The solitary nature of experiencing harassment prior to this support was best described by one participant: “I feel like we’re all in silos.”

While HeartMob is still evolving, the researchers say their study revealed key insights. First, simply having their experiences identified as online harassment provides targets with powerful validation.

Second, this labeling enables bystanders to “grasp the scope of this problem.” Third, for online spaces, visibly labeling harassment as unacceptable is critical for creating expectations around appropriate user behavior.

The bottom line, says co-author and HeartMob co-founder Dimond, is that “social media sites and other online platforms must keep power and social oppression in mind when creating harassment classification systems.

“Centering vulnerable users in the design and moderation of online platforms ultimately benefits everyone.” 

Posted on December 20, 2017