Graduate Student Rump Session

2:30 pm - 4:00 pm
E202 Westgate and Virtual via Zoom

The Center for Socially Responsible AI will host a graduate student rump session, which will feature short talks by Penn State doctoral students. This event is free and open to the Penn State community.


About the Talks

"What is Safety? Corporate Discourse, Power, and the Politics of Generative AI"
Ankolika De, College of Information Sciences and Technology

This work critically examines how leading generative AI companies construct and communicate the notion of “safety” through public discourse, revealing how authority and legitimacy are discursively produced. Through critical discourse analyses of materials generated by OpenAI, GoogleAI, and Anthropic, it encourages scholars to consider the norms embedded within these discourses and how they may shape particular ideas and self-governing ideals.

"With Great Power Comes Great Responsibility: The Rise of Autonomous AI Agents and the Coordination Challenge"
Gonzalo Ballestero, College of the Liberal Arts

We experimentally study how generative AI agents coordinate when interacting with each other. We document systematic behavioral tendencies in large language models that limit their ability to coordinate effectively, raising concerns about the safety of multi-agent AI systems.

"From Algorithm to Affect: Regenerative Remix in the Algorithm in the Case of K-pop Playlist Activism"
Youngjoo Kim, College of Arts and Architecture

By exploring K-pop playlist activism as a form of algorithmic literacy and regenerative remix on streaming platforms such as YouTube Music and Spotify, this study reimagines fans’ algorithmic curation as an affective and political practice. Drawing on feltness and remix theory, I argue that these playlists function as pedagogical sites where sensing, resisting, and co-creating with technology foster critical awareness of algorithms in digital art education.

"Escalating Costs of Unfairness: Flexible Fairness via Flow"
Atasi Panda, College of Information Sciences and Technology (Indian Institute of Science/Visiting Scholar)

How do we fairly distribute items across platforms when different groups have competing needs? Traditional fairness models, such as Restricted Dominance and Minority Protection, employ hard bounds on group representation. We introduce a more flexible framework for platform assignment problems that uses convex cost functions to penalize imbalances, enforcing fairness as a soft constraint rather than a rigid one. The core challenge is finding assignments that minimize these costs while guaranteeing a minimum utility threshold, which requires carefully balancing platform-wide and group-specific convex penalties. Our framework balances utility, capacity, and fairness using linear programming and network flow techniques.

"Using Computational Cognitive Modeling to Understand Socio-Cultural Perspectives in Human-AI Interaction"
Swapnika Dulam, College of Engineering

This study explores how human-AI cooperation may be affected by the belief that the data used to train an AI system is racialized. A cognitive model of the task, built in ACT-R, was used to develop a cognitive-level, process-based explanation of the participants' perspectives.

"The Algorithmic Gaze: Teaching AI Bias through Visuality"
Ye Sul Park, College of Arts and Architecture

This presentation examines how generative AI systems visualize identity and culture through biased datasets and algorithmic processes. Drawing from personal experience, I share visual experiments with AI image generators and pedagogical strategies that employ semantic visual analysis to reveal how AI reproduces racialized, gendered, and cultural stereotypes.

About the Speakers

Ankolika De is a fourth-year Ph.D. candidate at Penn State’s College of Information Sciences and Technology studying how people engage with and adapt to platform technologies, including AI and social media. Her interdisciplinary research bridges HCI, media studies, and STS to inform more inclusive and accountable technology design and governance.
Gonzalo Ballestero is a Ph.D. candidate in Economics at Penn State University. His research centers on empirical market design, with a focus on AI and education. He has been recognized with the Young Researcher Award from the Argentine Association of Political Economy.
Youngjoo Kim is a Ph.D. student in Art Education at Penn State. Her research examines feminist remix studies, post-digital art education, and affective activism, focusing on how digital visual culture (specifically fan culture, AI-generated images, and sensory aesthetics) informs feminist pedagogies and socially engaged art practices.
Atasi Panda is a Ph.D. student at the Indian Institute of Science and is visiting Hadi Hosseini's lab in the College of Information Sciences and Technology at Penn State. Her research broadly focuses on fairness in algorithms.
Swapnika Dulam is a second-year Ph.D. student in Computer Science. Her research focuses on using computational cognitive modeling to aid in building socio-culturally competent AI systems.
Ye Sul Park (she/her) is a Ph.D. student in Art Education at Penn State. As an art educator, curator, and researcher, she is interested in the evolving relational dynamics between humans and nonhuman machines, as well as in the complex social and ethical dimensions of generative AI models and their educational implications.

About the Young Achievers Symposium

The Young Achievers Symposium highlights early career researchers in diverse fields of AI for social impact. The symposium series seeks to focus on emerging research, stimulate discussions, and initiate collaborations that can advance research in artificial intelligence for societal benefit. All events in the series are free and open to the public unless otherwise noted. Penn State students, postdoctoral scholars, and faculty with an interest in socially responsible AI applications are encouraged to attend.