What do we mean when we say language models are biased?
Please join the Digital Humanities Initiative @ the Franklin Humanities Institute (DHI@FHI) for a talk by Evan Donahue on AI and language learning.
This is a hybrid event. Register to attend in person or online at this link: https://duke.is/rt4mp
Over the last ten years, language models have revolutionized virtually every aspect of natural language processing. This rise has been accompanied by anxieties stemming from the discovery that these models can encode not only knowledge of language but also the racial, gendered, and other social biases contained in the texts on which they are trained. Much work has gone into detecting and eliminating these biases, but as this talk will suggest, many of our methods for studying them rest on interpretive assumptions that may themselves be worth investigating.
Evan Donahue is a postdoctoral researcher at Tokyo College, Institute for Advanced Study, at the University of Tokyo. His work focuses on the history of artificial intelligence and its implications for contemporary research. He is currently working on a book entitled /Android Linguistics: How Machines Do Things With Words/.