We seek a postdoc for research on human sources of algorithmic bias in face classification models. Our aim is to trace the effects of prejudice and ideology in human social perception through dataset creation, model performance, and end-user decision making, in order to understand the role of AI in propagating human biases.
Key qualifications:
- A PhD in psychology, communication science, data science, computer science, or a related field
- Strong quantitative skills, including proficiency in Python and R (and ideally also JavaScript)
- Experience with machine learning or computational modeling, particularly image classification models (e.g., of faces)
- Interest in research at the intersection of AI and society
The position is for at least one year and may begin between May 1 and September 1, 2025. Review of applications will begin April 15 and continue until the position is filled.