AI Fairness & Bias

Research Publications

Dai, E., Zhao, T., Zhu, H., Xu, J., Guo, Z., Liu, H., Tang, J., & Wang, S. (2022). A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability (arXiv:2204.08570). arXiv.

Dergiades, T., Mavragani, E., & Pan, B. (2018). Google Trends and tourists’ arrivals: Emerging biases and proposed corrections. Tourism Management, 66, 108–120.  

Khademi, A., Lee, S., Foley, D., & Honavar, V. (2019). Fairness in algorithmic decision making: An excursion through the lens of causality. In The World Wide Web Conference (pp. 2907–2914).

Rahmattalabi, A., Vayanos, P., Fulginiti, A., Rice, E., Wilder, B., Yadav, A., & Tambe, M. (2019). Exploring Algorithmic Fairness in Robust Graph Covering Problems. Advances in Neural Information Processing Systems, 32. 

Venkit, P. N., & Wilson, S. (2021). Identification of Bias Against People with Disabilities in Sentiment Analysis and Toxicity Detection Models (arXiv:2111.13259). arXiv.

Xiao, T., Chen, Z., & Wang, S. (2022). Towards Bridging Algorithm and Theory for Unbiased Recommendation (arXiv:2206.03851). arXiv.

Xiao, T., & Wang, S. (2022). Towards Unbiased and Robust Causal Ranking for Recommender Systems. Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 1158–1167. 

Zhao, T., Dai, E., Shu, K., & Wang, S. (2022). Towards Fair Classifiers Without Sensitive Attributes: Exploring Biases in Related Features. Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 1433–1442.  

Ferrara, E. (2023). Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models.

Landers, R. N., & Behrend, T. S. (2023). Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models. American Psychologist, 78(1), 36–49.

Agarwal, A., Agarwal, H., & Agarwal, N. (2023). Fairness Score and process standardization: Framework for fairness certification in artificial intelligence systems. AI and Ethics, 3(1), 267–279.

Angerschmid, A., Zhou, J., Theuermann, K., Chen, F., & Holzinger, A. (2022). Fairness and Explanation in AI-Informed Decision Making. Machine Learning and Knowledge Extraction, 4(2), Article 2.

Fisher, S. L., & Howardson, G. N. (2022). Fairness of artificial intelligence in human resources—Held to a higher standard? Handbook of Research on Artificial Intelligence in Human Resource Management, 303–322.

Giovanola, B., & Tiribelli, S. (2022). Beyond bias and discrimination: Redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI & SOCIETY.

Hagendorff, T., Bossert, L. N., Tse, Y. F., & Singer, P. (2022). Speciesist bias in AI: How AI applications perpetuate discrimination and unfair outcomes against animals. AI and Ethics.

John-Mathews, J.-M., Cardon, D., & Balagué, C. (2022). From Reality to World. A Critical Perspective on AI Fairness. Journal of Business Ethics, 178(4), 945–959.

Lo, S. K., Liu, Y., Lu, Q., Wang, C., Xu, X., Paik, H.-Y., & Zhu, L. (2023). Toward Trustworthy AI: Blockchain-Based Architecture Design for Accountability and Fairness of Federated Learning Systems. IEEE Internet of Things Journal, 10(4), 3276–3284.

MacDonald, S., Steven, K., & Trzaskowski, M. (2022). Interpretable AI in Healthcare: Enhancing Fairness, Safety, and Trust. In M. Raz, T. C. Nguyen, & E. Loh (Eds.), Artificial Intelligence in Medicine: Applications, Limitations and Future Directions (pp. 241–258). Springer Nature.

Martinez-Martin, N., & Cho, M. K. (2022). Bridging the AI Chasm: Can EBM Address Representation and Fairness in Clinical Machine Learning? The American Journal of Bioethics, 22(5), 30–32.

Ruf, B., & Detyniecki, M. (2022). A Tool Bundle for AI Fairness in Practice. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1–3.

Schoeffer, J., De-Arteaga, M., & Kuehl, N. (2022). On the Relationship Between Explanations, Fairness Perceptions, and Decisions (arXiv:2204.13156). arXiv.

Shulner-Tal, A., Kuflik, T., & Kliger, D. (2022). Enhancing Fairness Perception – Towards Human-Centred AI and Personalized Explanations Understanding the Factors Influencing Laypeople’s Fairness Perceptions of Algorithmic Decisions. International Journal of Human–Computer Interaction, 0(0), 1–28.

Tilmes, N. (2022). Disability, fairness, and algorithmic bias in AI recruitment. Ethics and Information Technology, 24(2), 21.

Zhou, J., Chen, F., & Holzinger, A. (2022). Towards Explainability for AI Fairness. In A. Holzinger, R. Goebel, R. Fong, T. Moon, K.-R. Müller, & W. Samek (Eds.), XxAI - Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (pp. 375–386). Springer International Publishing.

Aimiuwu, E. E. (2022). Enhancing Social Justice: A Virtual Reality and Artificial Intelligence Model. International Journal of Technology in Education and Science, 6(1), 32–43.

Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387.

Baniecki, H., Kretowicz, W., Piatyszek, P., Wisniewski, J., & Biecek, P. (2021). dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python. Journal of Machine Learning Research, 22(214), 1–7.
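
As a rough illustration of the kind of group-fairness audit the dalex package above supports, the following is a minimal sketch; the toy hiring data, the column names, and the model choice are hypothetical, and dalex plus scikit-learn are assumed to be installed.

```python
# Minimal sketch of a dalex group-fairness check on hypothetical toy data.
import dalex as dx
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy hiring data: two numeric features, a binary outcome, and a protected attribute.
df = pd.DataFrame({
    "age":    [25, 47, 38, 52, 29, 41, 33, 58],
    "score":  [60, 85, 70, 90, 55, 80, 65, 95],
    "gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
    "hired":  [0, 1, 0, 1, 1, 1, 0, 1],
})
X, y = df[["age", "score"]], df["hired"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Wrap the fitted model in an Explainer, then compare parity metrics
# (e.g., TPR, PPV) across groups defined by the protected attribute.
explainer = dx.Explainer(model, X, y, label="hiring model")
fobject = explainer.model_fairness(protected=df["gender"], privileged="m")
fobject.fairness_check()  # prints which metric ratios fall outside the accepted range
```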

Belenguer, L. (2022). AI bias: Exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and Ethics.

Caton, S., & Haas, C. (2020). Fairness in Machine Learning: A Survey (arXiv:2010.04053). arXiv.

Chohlas-Wood, A., Nudell, J., Yao, K., Lin, Z. (Jerry), Nyarko, J., & Goel, S. (2021). Blind Justice: Algorithmically Masking Race in Charging Decisions. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 35–45.

Chouldechova, A., & Roth, A. (2020). A snapshot of the frontiers of fairness in machine learning. Communications of the ACM, 63(5), 82–89.

Dastin, J. (2022). Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women. In Ethics of Data and Analytics. Auerbach Publications.

Domnich, A., & Anbarjafari, G. (2021). Responsible AI: Gender bias assessment in emotion recognition (arXiv:2103.11436). arXiv.

Du, M., Yang, F., Zou, N., & Hu, X. (2021). Fairness in Deep Learning: A Computational Perspective. IEEE Intelligent Systems, 36(4), 25–34.

Ferrer, X., Nuenen, T. van, Such, J. M., Coté, M., & Criado, N. (2021). Bias and Discrimination in AI: A Cross-Disciplinary Perspective. IEEE Technology and Society Magazine, 40(2), 72–80.

Fu, R., Huang, Y., & Singh, P. V. (2020). Artificial Intelligence and Algorithmic Bias: Source, Detection, Mitigation, and Implications. In Pushing the Boundaries: Frontiers in Impactful OR/OM Research (pp. 39–63). INFORMS.

Gilbert, J. E. (2021). Equitable AI: Using AI to Achieve Diversity in Admissions. 26th International Conference on Intelligent User Interfaces, 1.

Givens, A. R., & Morris, M. R. (2020). Centering disability perspectives in algorithmic fairness, accountability, & transparency. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 684–684.

Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4).

Holstein, K., Wortman Vaughan, J., Daumé, H., Dudik, M., & Wallach, H. (2019). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–16.

Howard, A., & Borenstein, J. (2018). The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity. Science and Engineering Ethics, 24(5), 1521–1536.

Jacobs, A. Z., & Wallach, H. (2021). Measurement and Fairness. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 375–385.

Joo, J., & Kärkkäinen, K. (2020). Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation. Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia, 1–5.

Lack of Transparency and Potential Bias in Artificial Intelligence Data Sets and Algorithms: A Scoping Review. (n.d.). JAMA Dermatology. Retrieved May 24, 2022.

Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16.

Lee, M. K., Grgić-Hlača, N., Tschantz, M. C., Binns, R., Weller, A., Carney, M., & Inkpen, K. (2020). Human-Centered Approaches to Fair and Responsible AI. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1–8.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6), 1–35.

Noriega, M. (2020). The application of artificial intelligence in police interrogations: An analysis addressing the proposed effect AI has on racial and gender bias, cooperation, and false confessions. Futures, 117, 102510.

Noriega-Campero, A., Garcia-Bulle, B., Cantu, L. F., Bakker, M. A., Tejerina, L., & Pentland, A. (2020). Algorithmic targeting of social policies: Fairness, accuracy, and distributed governance. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 241–251.

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.-E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., … Staab, S. (2020). Bias in data-driven artificial intelligence systems—An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3), e1356.

Obermeyer, Z., & Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People. Proceedings of the Conference on Fairness, Accountability, and Transparency, 89–89.

Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: Implications for health systems. Journal of Global Health, 9(2), 020318.

Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing Bias in Artificial Intelligence in Health Care. JAMA, 322(24), 2377.

Pena, A., Serna, I., Morales, A., & Fierrez, J. (2020). Bias in Multimodal AI: Testbed for Fair Automatic Recruitment. 28–29.

Pessach, D., & Shmueli, E. (2021). Improving fairness of artificial intelligence algorithms in Privileged-Group Selection Bias data settings. Expert Systems with Applications, 185, 115667.

Roselli, D., Matthews, J., & Talagala, N. (2019). Managing Bias in AI. Companion Proceedings of The 2019 World Wide Web Conference, 539–544.

Sarraf, D., Vasiliu, V., Imberman, B., & Lindeman, B. (2021). Use of artificial intelligence for gender bias analysis in letters of recommendation for general surgery residency candidates. The American Journal of Surgery, 222(6), 1051–1059.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68.

Sharma, S., Henderson, J., & Ghosh, J. (2020). CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 166–172.

Sloan, R. H., & Warner, R. (2020). Beyond Bias: Artificial Intelligence and Social Justice. Virginia Journal of Law & Technology, 24(1), 1–32.

Srivastava, B., & Rossi, F. (2018). Towards Composable Bias Rating of AI Services. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 284–289.

Srivastava, M., Heidari, H., & Krause, A. (2019). Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2459–2468.

Sun, T., Gaut, A., Tang, S., Huang, Y., ElSherief, M., Zhao, J., Mirza, D., Belding, E., Chang, K.-W., & Wang, W. Y. (2019). Mitigating Gender Bias in Natural Language Processing: Literature Review (arXiv:1906.08976). arXiv.

Trewin, S., Basson, S., Muller, M., Branham, S., Treviranus, J., Gruen, D., Hebert, D., Lyckowski, N., & Manser, E. (2019). Considerations for AI fairness for people with disabilities. AI Matters, 5(3), 40–63.

Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–14.

Whittaker, M., Alper, M., Kaziunas, L., & Morris, M. R. (2019). Disability, Bias, and AI. AI Now Institute.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

Georghiou, A. (2020). AI: My Story; The Story AI Tells; Bias & Privacy. Life Betterment Through God, LLC. 

Osoba, O. A., & Welser, W., IV. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation.

Research Opportunities

Resources