Belfer Fellowship with the Anti-Defamation League's Center for Technology & Society
Extremist groups are especially adept at hiding in plain sight, using language that differs only slightly from acceptable speech (e.g., “blame on both sides”) or thinly veiled phrases that mask nefarious intent (e.g., “preserve our culture”). The subtleties of white supremacist language in particular are not effectively captured by existing computational approaches to detecting and addressing it online. This project addresses that challenge by building a culturally sensitive adaptive language model (CALM) to detect white supremacist speech online. The model will draw on existing knowledge about the language of white supremacy from ADL’s Center on Extremism (COE) to build a baseline, and will then apply state-of-the-art computational techniques for modeling languages and their adaptations so that the model automatically learns how white supremacists adapt their speech to avoid detection.
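The two-stage design described above (a baseline model built from expert knowledge, followed by incremental updates as language shifts) can be sketched with a toy incremental classifier. This is a minimal illustration only, not the project's actual system: CALM would use far richer language models, and every phrase, label, and class name below is a synthetic placeholder rather than real ADL/COE data.

```python
# Toy sketch of "baseline, then adapt": an incremental naive Bayes
# classifier over word counts. All data below is synthetic placeholder
# text, not real annotations; this is NOT the project's actual model.
import math
from collections import defaultdict

class AdaptiveClassifier:
    def __init__(self):
        # Per-class word counts and class counts, updated incrementally.
        self.word_counts = {0: defaultdict(int), 1: defaultdict(int)}
        self.class_counts = {0: 0, 1: 0}

    def update(self, texts, labels):
        # Fold newly labeled examples into the running counts; calling
        # this again later "adapts" the model without retraining.
        for text, y in zip(texts, labels):
            self.class_counts[y] += 1
            for w in text.lower().split():
                self.word_counts[y][w] += 1

    def predict(self, text):
        # Laplace-smoothed naive Bayes log-score per class.
        total = sum(self.class_counts.values())
        vocab = len(set(self.word_counts[0]) | set(self.word_counts[1]))
        scores = {}
        for y in (0, 1):
            score = math.log((self.class_counts[y] + 1) / (total + 2))
            n = sum(self.word_counts[y].values())
            for w in text.lower().split():
                score += math.log(
                    (self.word_counts[y].get(w, 0) + 1) / (n + vocab))
            scores[y] = score
        return max(scores, key=scores.get)

# Stage 1: baseline from existing (placeholder) annotations;
# label 1 = flagged, 0 = benign.
clf = AdaptiveClassifier()
clf.update(["coded phrase alpha", "coded phrase beta",
            "ordinary comment one", "ordinary comment two"],
           [1, 1, 0, 0])

# Stage 2: adaptation, as speech shifts to evade the baseline.
clf.update(["coded wording gamma"], [1])

print(clf.predict("coded wording gamma"))    # -> 1 (flagged)
print(clf.predict("ordinary comment five"))  # -> 0 (benign)
```

The point of the sketch is only the shape of the pipeline: the same `update` call serves both the initial baseline and later adaptation, so newly observed variants can be absorbed without retraining from scratch.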
Students/GSRAs: Charlie Logan, Hayden Le, Olivia Shulman