Research Publications
Kumar, P. C., Cotter, K., & Cabrera, L. Y. (2024). Taking responsibility for meaning and mattering: An agential realist approach to generative AI and literacy. Reading Research Quarterly, 59(4), 570–578.
Workman, D., & Dancy, C. L. (2023). Identifying potential inlets of man in the artificial intelligence development process: Man and antiblackness in AI development. In Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing (pp. 348–353).
Inzlicht, M., Cameron, C. D., D’Cruz, J., & Bloom, P. (2024). In praise of empathic AI. Trends in Cognitive Sciences, 28(2), 89–91.
Atkins, A. A., Brown, M. S., & Dancy, C. L. (2021). Examining the Effects of Race on Human-AI Cooperation. In R. Thomson, M. N. Hussain, C. Dancy, & A. Pyke (Eds.), Social, Cultural, and Behavioral Modeling (Vol. 12720, pp. 279–288). Springer International Publishing.
Gomes, C., Dietterich, T., Barrett, C., Conrad, J., Dilkina, B., Ermon, S., Fang, F., Farnsworth, A., Fern, A., Fern, X., Fink, D., Fisher, D., Flecker, A., Freund, D., Fuller, A., Gregoire, J., Hopcroft, J., Kelling, S., Kolter, Z., Yadav, A., ... Zeeman, M. L. (2019). Computational sustainability: Computing for a better world and a sustainable future. Communications of the ACM, 62(9), 56–65.
Kou, Y., & Gui, X. (2020). Mediating community-AI interaction through situated explanation: The case of AI-led moderation. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–27.
Taylor, R. D. (2020). Quantum Artificial Intelligence: A “precautionary” U.S. approach? Telecommunications Policy, 44(6), 101909.
Tanprasert, T., Fels, S. S., Sinnamon, L., & Yoon, D. (2024, May). Debate Chatbots to Facilitate Critical Thinking on YouTube: Social Identity and Conversational Style Make a Difference. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1–24).
DeVrio, A., Eslami, M., & Holstein, K. (2024, June). Building, Shifting, & Employing Power: A Taxonomy of Responses From Below to Algorithmic Harm. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 1093–1106).
Zabel, S., & Otto, S. (2024, May). SustAInable: How Values in the Form of Individual Motivation Shape Algorithms’ Outcomes. An Example Promoting Ecological and Social Sustainability. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1–11).
Hu, J., El-Rashid, F., & Bertelsen, S. E. (2024, May). SHADE: Empowering Consumer Choice for Sustainable Fashion with AI and Digital Tooling. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–5).
Berney, M., Ouaazki, A., Macko, V., Kocher, B., & Holzer, A. (2024, May). Care-Based Eco-Feedback Augmented with Generative AI: Fostering Pro-Environmental Behavior through Emotional Attachment. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1–15).
Strubell, E., Ganesh, A., & McCallum, A. (2020). Energy and Policy Considerations for Modern Deep Learning Research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(9), Article 9.
Lacoste, A., Luccioni, A., Schmidt, V., & Dandres, T. (2019). Quantifying the Carbon Emissions of Machine Learning (arXiv:1910.09700). arXiv.
Wu, C.-J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., Chang, G., Aga, F., Huang, J., Bai, C., Gschwind, M., Gupta, A., Ott, M., Melnikov, A., Candido, S., Brooks, D., Chauhan, G., Lee, B., Lee, H.-H., … Hazelwood, K. (2022). Sustainable AI: Environmental Implications, Challenges and Opportunities. Proceedings of Machine Learning and Systems, 4, 795–813.
Ahmad, A., Waseem, M., Liang, P., Fehmideh, M., Aktar, M. S., & Mikkonen, T. (2023). Towards Human-Bot Collaborative Software Architecting with ChatGPT (arXiv:2302.14600). arXiv.
Gu, S., Kshirsagar, A., Du, Y., Chen, G., Peters, J., & Knoll, A. (2023). A Human-Centered Safe Robot Reinforcement Learning Framework with Interactive Behaviors (arXiv:2302.13137). arXiv.
Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y., Yue, J., & Wu, Y. (2023). How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection (arXiv:2301.07597). arXiv.
Rao, H., Leung, C., & Miao, C. (2023). Can ChatGPT Assess Human Personalities? A General Evaluation Framework (arXiv:2303.01248). arXiv.
Wang, F.-Y., Li, J., Qin, R., Zhu, J., Mo, H., & Hu, B. (2023). ChatGPT for Computational Social Systems: From Conversational Applications to Human-Oriented Operating Systems. IEEE Transactions on Computational Social Systems, 10(2), 414–425.
Baum, S. D. (2020). Social choice ethics in artificial intelligence. AI & SOCIETY, 35(1), 165–176.
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1).
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707.
Howard, A., & Borenstein, J. (2018). The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity. Science and Engineering Ethics, 24(5), 1521–1536.
Kurzweil, R. (2024). The Singularity Is Nearer: When We Merge with AI. Penguin Random House.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Publishing Group.
- https://ainowinstitute.org/spotlight/climate#footnote-list-5
- https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.anthropocenemagazine.org/2020/11/time-to-talk-about-carbon-footprint-artificial-intelligence/