Xuechunzi Bai, doctoral student:
Bai is a fifth-year graduate student pursuing a joint doctoral degree in psychology and social policy through Princeton University’s Department of Psychology and the School of Public and International Affairs, along with a graduate certificate from the Center for Statistics and Machine Learning (CSML). Her advisors are Susan T. Fiske, the Eugene Higgins Professor of Psychology and Professor of Psychology and Public Affairs, and Tom Griffiths, the Henry R. Luce Professor of Information Technology, Consciousness, and Culture in Psychology and Computer Science.
Before enrolling in her doctoral program, Bai earned a bachelor’s degree in social sciences in education from The University of Tokyo in 2015.
She has been a research assistant in Fiske’s lab at Princeton and at Purdue University under Professor Margo Monteith in that school’s Department of Psychological Sciences. Last year, Bai was a research intern at Adobe Design, where she worked on AI ethics.
Bai’s main research focus has been figuring out where stereotypes of people come from and how to mitigate or eliminate them, given that they are inaccurate and biased.
“We know there are different kinds of stereotypes about different social groups such as immigrants,” said Bai. “For example, people typically portray Asian people as very competent but not friendly. The phenomenon of social stereotypes is very established in psychology and other social sciences. A question that is less addressed is where do these social stereotypes come from? My dissertation seeks to answer that origin story.”
Bai said there are different theories about where stereotypes come from. She posits that social stereotypes can arise from a sequential decision-making process when the goal is to maximize decision utility without prolonged exploration: a person can either explore different options or exploit information they already have when making judgments about people of an ethnicity or race they encounter.
Bai uses the example of an HR staffer tasked with hiring people for janitorial jobs at a company. In an environment where everybody applying for the job is highly competent and the staffer doesn’t know much about the ethnicities or races of the applicants, the staffer tends to hire a person without having stereotypes in mind. If that employee turns out to be a great worker, the staffer will associate positive qualities with the employee’s ethnicity and will be more likely to hire people from the same group for similar janitorial jobs in the future. The staffer chooses not to “explore” different groups of people but rather to “exploit” the information they already have.
“The problem is that they do not explore other options because they think it's a good enough choice,” said Bai. “If they don't explore enough, then there's no way they can get an accurate picture of other groups. And so that will create stereotypes, in which people will think one group is good at being a janitor and other groups are not as competent. And this eventually snowballs into perpetuating bias.”
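The dynamic Bai describes can be sketched as a toy simulation (an illustrative model written for this article, not code from her studies): two groups with identical true quality, and a purely greedy decision-maker who always hires from whichever group currently looks best. The group names, the 0.9 quality value, and the 0.5 neutral prior are all illustrative assumptions.

```python
import random

random.seed(1)

# Two groups that are, by construction, equally competent.
TRUE_QUALITY = {"group_a": 0.9, "group_b": 0.9}
PRIOR = 0.5  # neutral belief about a group never hired from

hires = {g: 0 for g in TRUE_QUALITY}
successes = {g: 0 for g in TRUE_QUALITY}

def estimate(group):
    """The decision-maker's current belief about a group's quality."""
    return successes[group] / hires[group] if hires[group] else PRIOR

for _ in range(100):
    # Exploit only: always hire from the group that currently looks best.
    group = max(TRUE_QUALITY, key=estimate)
    hires[group] += 1
    successes[group] += random.random() < TRUE_QUALITY[group]

print(hires)
print({g: round(estimate(g), 2) for g in TRUE_QUALITY})
```

With this seed, the first hire from group_a happens to succeed, its estimated quality jumps above the neutral prior, and the greedy rule never samples group_b again. The decision-maker’s belief about group_b stays frozen at the uninformative prior, even though the two groups are identical, which is the “good enough choice” trap Bai describes.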
This process of arriving at a social stereotype is encapsulated in a paper, “Globally Inaccurate Stereotypes Can Result From Locally Adaptive Exploration,” of which she is the lead author. The paper was published in 2022 in the journal Psychological Science. Bai calls this kind of decision making “adaptive exploration,” which “alone can create structured societal stereotypes that cascade from historical affordances – such as which group happened to be the first one adequate at a job – without requiring decision-makers to have malicious intentions or cognitive limitations, or social groups to differ in information accessibility or intrinsic quality.”
For the machine learning and data science portion of her work, Bai uses computer simulations based on multi-armed bandit algorithms, a type of reinforcement learning, to model behavioral decision making. She also runs large online experiments with human subjects, and data from those experiments is used to validate the results of her simulations.
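A minimal epsilon-greedy bandit conveys the flavor of such a simulation (an assumed toy setup, not Bai’s actual code): each arm stands for a social group, pulling an arm stands for hiring from that group, and the exploration rate epsilon controls how often the agent tries a group other than the best-looking one. Comparing the agent’s final beliefs against the true (identical) qualities shows how under-exploration leaves beliefs about rarely chosen groups inaccurate.

```python
import random

def run_bandit(epsilon, true_probs, steps=2000, seed=42):
    """Run one epsilon-greedy agent; return the mean absolute error
    between its final belief about each arm and the arm's true quality."""
    rng = random.Random(seed)
    n = len(true_probs)
    pulls = [0] * n
    wins = [0] * n

    def beliefs():
        # 0.5 is a neutral prior for an arm that was never pulled.
        return [wins[i] / pulls[i] if pulls[i] else 0.5 for i in range(n)]

    for _ in range(steps):
        if rng.random() < epsilon:              # explore: pick a random arm
            arm = rng.randrange(n)
        else:                                   # exploit: best-looking arm
            est = beliefs()
            arm = max(range(n), key=lambda i: est[i])
        pulls[arm] += 1
        wins[arm] += rng.random() < true_probs[arm]

    return sum(abs(e - p) for e, p in zip(beliefs(), true_probs)) / n

true_probs = [0.8, 0.8, 0.8]  # all groups equally good by construction
for eps in (0.0, 0.05, 0.5):
    print(f"epsilon={eps:4.2f}  belief error={run_bandit(eps, true_probs):.3f}")
```

A fully greedy agent (epsilon = 0) locks onto whichever arm succeeded first and its beliefs about the other arms stay stuck at the prior, while an agent that explores half the time ends up with nearly accurate beliefs about every group, mirroring the paper’s point that the inaccuracy comes from the exploration policy rather than from any real difference between groups.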
In spring 2020, she organized a CSML workshop, “Social biases in human nature and in machine learning: How social scientists and computer scientists can learn from each other.” After completing her dissertation and obtaining her doctoral degree, Bai plans to begin a tenure-track position as an assistant professor in the University of Chicago’s Department of Psychology in the summer of 2024, where she will continue her research.
Bai enjoys walking her dog, an Alaskan Malamute, every day. The dog is named Alpha, after AlphaGo, the AI program that plays the board game Go.