“Leveraging LLM-assistants to mediate conflict in online discussions about divisive topics”
Political discourse is the soul of democracy, but misunderstanding and conflict can fester in divisive conversations. The widespread shift to online discourse exacerbates many of these problems and corrodes the capacity of diverse societies to cooperate in solving social problems. Scholars and civil society groups promote interventions that can make conversations less divisive or more productive, but scaling these efforts to online discourse is challenging. This talk will describe a large-scale experiment demonstrating how online conversations about divisive topics can be improved with foundation models. Specifically, my colleagues and I employed a fine-tuned large language model to make real-time, evidence-based recommendations about how to bridge social divides during online conversations between two people discussing gun control in an online forum. Respondents could accept these recommendations, ignore them, or revise their posts after seeing the suggested edits. Our results indicate this intervention improved reported conversation quality, promoted democratic reciprocity, and improved the tone of conversations, without systematically changing the content of the conversation or moving people's policy attitudes. Finally, I will describe a half-year replication of this experiment on a large social media platform and discuss other opportunities to employ foundation models to reduce conflict in online settings and study complex social systems.