Bias-a-thon

The 2023 Bias-a-thon was held in November 2023. View information on winners below, and watch this page for details about the 2024 Bias-a-thon.

Sponsored by the Penn State Center for Socially Responsible AI (CSRAI), Bias-a-thon challenges participants to create prompts that expose biases and confront stereotypes perpetuated by current generative AI tools. Through this challenge and engagement with CSRAI, participants can collaborate with Penn State's brightest minds in AI and ethics, while also shedding light on the pitfalls and shortcomings of existing AI tools that will pave the way for a more inclusive and unbiased AI future. To learn more about bias in AI, check out the articles below and visit the AI Fairness & Bias page in the CSRAI Resource Hub.

Bias-a-thon logo

2023 Bias-a-thon

The 2023 Bias-a-thon was held Monday, November 13, to Thursday, November 16, 2023, via the event's Microsoft Teams channel and was open to all members of the Penn State community with an active @psu.edu email address.

Winners

Read about the 2023 Bias-a-thon Winners

  • Top Prize Overall ($1,000) - Mukund Srinath, doctoral student in informatics, College of Information Sciences and Technology

    Srinath prompted ChatGPT 3.5 to select one of two individuals who would be more likely to possess a certain trait — such as trustworthiness, financial success, or employability — based on a single piece of information — such as height, weight, complexion or facial structure. In each instance, ChatGPT 3.5 selected the individual who aligned more with traditional standards of beauty and success, according to Srinath.

  • First Place ($750) – Nargess Tahmasbi, associate professor of information sciences and technology, Penn State Hazleton

    Tahmasbi prompted Midjourney to create images that showed both a group of academics and a group of computer scientists winning awards at a conference. While the photo of non-specific academics showed somewhat limited diversity in age, gender and race, the four generated photos of computer scientists showed almost exclusively younger, white men with only one woman included among the 27 individuals featured across the four photos.

  • Second Place ($500) – Eunchae Jang, doctoral student in Mass Communications, Donald P. Bellisario College of Communications

    Jang created a scenario in ChatGPT 3.5 where an "engineer" and “secretary” are being harassed by a colleague. In its responses, ChatGPT assumed the engineer was a man and the secretary was a woman.

  • Third Place ($250) – Marjan Davoodi, doctoral student in Sociology and Social Data Analytics, College of the Liberal Arts

    Davoodi prompted DeepMind’s image generators to create an image representing Iran in 1950. The results highlighted head coverings as prominent features associated with Iranian women, even though they were not required by law during that time.

Bias Categories

  1. Socio-Cultural Bias
    1. Ethnicity and Race: Biases related to specific ethnic groups or races.
    2. Gender: Stereotypes and biases based on gender identities.
    3. Nationality: Prejudices associated with different national origins.
    4. Religion: Biases related to religious beliefs or practices.
  2. Contextual Bias
    1. Contextual Stereotyping: Biases arising from specific contexts or situations.
    2. Profession and Education: Biases related to professions or educational backgrounds.
    3. Socioeconomic Status: Biases associated with economic and social standing.
    4. Geographical Bias: Prejudices based on specific regions or locations.
  3. Language and Dialect Bias
    1. Language Proficiency: Biases related to fluency or proficiency in a particular language.
    2. Dialect and Accent: Stereotypes based on specific dialects or accents.
  4. Age-Related Bias
    1. Generational Bias: Biases associated with different age groups or generations.
    2. Ageism: Prejudices against individuals of a certain age.
  5. Cognitive and Physical Ability Bias
    1. Disability: Biases related to physical or cognitive disabilities.
    2. Mental Health: Stereotypes associated with mental health conditions.
  6. Historical Bias
    1. Historical Events: Biases originating from historical events and their interpretations.
    2. Colonialism: Biases rooted in colonial history and its effects on cultures and societies.
  7. Out-of-the-Box Bias
    1. This is a bias you've identified that does not fit into one of the six categories above.