The CSRAI Young Achievers Symposium series highlights early career researchers in diverse fields of AI for social impact. The symposium series seeks to focus on emerging research, stimulate discussions, and initiate collaborations that can advance research in artificial intelligence for societal benefit.
All events in the series are free and open to the public unless otherwise noted. Penn State students, postdoctoral scholars, and faculty with an interest in socially responsible AI applications are encouraged to attend.
Note: When watching recordings of past talks, closed captions can be enabled by selecting the [CC] button in the YouTube video player.
Past Events
Serena Wang, a doctoral student at the University of California, Berkeley, will deliver a talk as part of CSRAI's Young Achievers Symposium.
"Bridging Gaps Between Metrics and Goals to Improve Societal Impacts of Machine Learning"
The increasing sophistication and proliferation of machine learning (ML) across public and private sectors have been met with both excitement and apprehension: how do we study societal impacts in this new frontier? Key to understanding the societal impacts of ML is understanding the development and deployment of such systems, which is driven by numerical metrics such as offline performance on datasets and performance on A/B tests. Unfortunately, these metrics often don't capture all developer goals or eventual societal impacts, which makes auditing and improving these systems difficult for both developers and policymakers. In this talk, Wang will discuss three approaches to bridging the gap between metrics and goals. First, she will discuss technical gaps between theory and practice in Fair ML. Second, moving beyond Fair ML, she will present technical and qualitative work on expanding the design scope of ML problem formulation. Finally, she will preview ongoing work on understanding how metrics can interact with incentives in an ecosystem of agents.
About the Speaker
Serena Wang is a fifth-year Ph.D. student in computer science at the University of California, Berkeley. For the last six years, she has also worked part time (20%) at Google Research, where she is a member of the Discrete Algorithms Group. Her research focuses on understanding and improving the long-term societal impacts of machine learning by rethinking ML algorithms and their surrounding incentives and practices. She is particularly interested in the gaps that arise between quantitative metrics and qualitative goals in algorithmic systems, and she employs tools from optimization, statistics, and mechanism design. Serena is supported by the NSF Graduate Research Fellowship and the Apple Scholars in AI/ML Ph.D. fellowship.
Christine Herlihy, a doctoral student at the University of Maryland, College Park, will deliver a talk as part of CSRAI's Young Achievers Symposium.
"Incorporating Prosocial Constraints and Exploiting Problem Structure in Sequential Decision-Making"
Sequential decision-making tasks canonically feature an agent who must explore the environment and exploit the knowledge it gains to maximize total expected reward over some time horizon. However, when algorithms are used to make decisions and induce behaviors over time in high-stakes domains, it is often necessary to trade off reward maximization against competing objectives, such as individual or group fairness, cooperation, or risk mitigation. Additionally, when the decisions to be made are combinatorial, careful use of the structural information that characterizes or connects our decision points may facilitate the search for efficient solutions.
In this talk, Herlihy considers a constrained resource allocation task characterized by (1) the presence of multiple objectives and (2) the need to exploit different types of structure contained within the problem instances to ensure tractability and exploit externalities. She specifically considers the restless bandit setting, where a decision-maker must determine which subset of individuals (referred to as arms) should receive a beneficial intervention at each timestep, subject to a budget constraint. Each restless arm is formalized as a Markov decision process (MDP), and receipt of the intervention increases the probability of a favorable state transition at the next timestep, relative to non-receipt. Her group's core contributions are novel algorithms that address two limitations of Whittle index-based policies: (1) the lack of distributive fairness guarantees and (2) the inability to exploit externalities when resources are allocated within a community.
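For readers new to the restless bandit setting described above, the following toy sketch (not Herlihy's algorithm) simulates a budget-constrained intervention problem: each arm is a simplified two-state process whose chance of reaching the good state rises when it receives the intervention, and a greedy stand-in for a Whittle-index policy picks which arms to treat at each timestep. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy restless bandit: N arms, each a simplified two-state process (0 = bad, 1 = good).
# Intervening on an arm raises its chance of landing in the good state next step.
N, K, T = 20, 5, 50                                 # arms, per-step budget, horizon
p_passive = rng.uniform(0.2, 0.5, size=N)           # P(good next step | no intervention)
p_active = np.minimum(p_passive + 0.3, 0.95)        # P(good next step | intervention)

state = rng.integers(0, 2, size=N)
total_reward = 0.0

for t in range(T):
    # Greedy stand-in for a Whittle-index policy: rank arms by the gain from
    # intervening, restricted to arms currently in the bad state, and treat the top K.
    gain = np.where(state == 0, p_active - p_passive, 0.0)
    act = np.zeros(N, dtype=bool)
    act[np.argsort(-gain)[:K]] = True

    p_good = np.where(act, p_active, p_passive)      # budgeted interventions applied
    state = (rng.random(N) < p_good).astype(int)
    total_reward += state.sum()                      # reward = arms in the good state

print(f"average number of arms in the good state per step: {total_reward / T:.2f}")
```

A true Whittle-index policy would compute an index per arm from its full MDP and transition dynamics; the greedy rule above is only a stand-in to make the budgeted-allocation structure concrete.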
About the Speaker
Christine Herlihy is a Ph.D. student in computer science at the University of Maryland, College Park, where she is advised by John P. Dickerson. Her research interests include sequential decision-making under uncertainty (i.e., multi-armed bandits; reinforcement learning), algorithmic fairness, knowledge representation and reasoning, and health care. During her Ph.D., she has interned at Amazon Robotics, Microsoft Research, and Google Research. She earned her M.S. from Georgia Tech and completed her undergraduate studies at Georgetown University.
Lily Xu, a doctoral candidate at Harvard University, will deliver a talk as part of CSRAI's Young Achievers Symposium.
"Learning and Planning Under Uncertainty for Wildlife Conservation"
Wildlife poaching fuels the multi-billion dollar illegal wildlife trade and pushes countless species to the brink of extinction. To aid rangers in preventing poaching in protected areas around the world, we have developed PAWS, the Protection Assistant for Wildlife Security. We present technical advances in multi-armed bandits and robust sequential decision-making using reinforcement learning, with research questions that emerged from on-the-ground challenges. We also discuss bridging the gap between research and practice, presenting results from field deployment in Cambodia and large-scale deployment through integration with SMART, the leading software system for protected area management used by over 1,000 wildlife parks worldwide.
About the Speaker
Lily Xu is a Ph.D. student at Harvard University, where she develops AI techniques to address environmental planning challenges. Her work advances methods in machine learning and game theory for wildlife conservation, with a focus on preventing poaching. Xu co-organizes the Mechanism Design for Social Good (MD4SG) research initiative, and her research has been recognized with a best paper runner-up award at AAAI, the INFORMS Doing Good with Good OR award, and a Google Ph.D. Fellowship.
Chinasa T. Okolo, a doctoral candidate in the Department of Computer Science at Cornell University, will deliver a talk as part of CSRAI's Young Achievers Symposium.
"Navigating the Limits of AI Explainability: Designing for Novice Technology Users in Low-Resource Settings"
As researchers and technology companies rush to develop artificial intelligence (AI) applications that aid the health of marginalized communities, it is critical to consider the needs of the community health workers (CHWs) who will be increasingly expected to operate tools that incorporate these technologies. Okolo's previous work has shown that these users have low levels of AI knowledge, form incorrect mental models about how AI works, and at times may trust algorithmic decisions more than their own. This is concerning, given that AI applications targeting the work of CHWs are already in active development, and early deployments in low-resource health care settings have already reported failures that created additional workflow inefficiencies and inconvenienced patients. Explainable AI (XAI) can help avoid such pitfalls, but nearly all prior work has focused on users who live in relatively resource-rich settings (e.g., the U.S. and Europe) and who arguably have substantially more experience with digital technologies such as AI. Okolo's research develops XAI for people with low levels of formal education and technical literacy, with a focus on health care in low-resource domains. This work involves demoing interactive prototypes with CHWs to understand which aspects of model decision-making need to be explained and how they can be explained most effectively, with the goal of improving how current XAI methods serve novice technology users.
About the Speaker
Chinasa T. Okolo is a fifth-year doctoral candidate in the Department of Computer Science at Cornell University. Before coming to Cornell, she graduated from Pomona College with a degree in Computer Science. Her research interests include explainable AI, human-AI interaction, global health, and information and communication technologies for development (ICTD). Within these fields, she works on projects to understand how frontline health care workers in rural India perceive and value artificial intelligence and examines how explainability can be best leveraged in AI-enabled technologies deployed throughout the Global South.
Chun Kai Ling, a doctoral student at Carnegie Mellon University, will deliver a talk as part of CSRAI's Young Achievers Symposium.
"Towards Scalable Game Theoretic Approaches for Addressing Societal Challenges"
Game theory underpins many exciting breakthroughs, ranging from superhuman performance in video and board games to societal applications such as airport security and wildlife poaching prevention. However, realizing the full potential of game theory requires overcoming two obstacles: (i) reasoning about games whose parameters are not available upfront, and (ii) efficiently solving the large general-sum games seen in real-world applications. This talk will discuss three directions for addressing these challenges. First, Ling will introduce an end-to-end framework based on differentiable optimization that infers unknown game parameters using only samples of player actions in an equilibrium. Second, he will discuss how online subgame resolving, a widely used method in efficient zero-sum game solvers, can be generalized in a principled fashion to various general-sum equilibria, making it possible to solve games orders of magnitude larger than purely offline methods can handle. Third, he will show how to solve large general-sum games by learning the Enforceable Payoff Frontier (EPF), a generalization of state value that captures the set of joint future payoffs across players. Ling will show how to learn EPFs using appropriate extensions of Bellman backups, which allows some games too large to traverse to be solved while maintaining theoretical performance guarantees.
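As background for the kind of equilibrium computation these methods build on (and only that; Ling's work targets far larger general-sum games and unknown parameters), here is a minimal sketch of solving a small zero-sum matrix game with a linear program. The payoff matrix and library choice are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix for the row player in a zero-sum game (rock-paper-scissors-like).
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
m, n = A.shape

# Variables: x (row player's mixed strategy, length m) and v (game value).
# Maximize v  subject to  A^T x >= v * 1,  sum(x) = 1,  x >= 0.
c = np.zeros(m + 1)
c[-1] = -1.0                                           # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - (A^T x)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum(x) = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]              # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("equilibrium row strategy:", np.round(x, 3), " value:", round(v, 3))
```

The LP returns the row player's maximin strategy (uniform play, value 0 for this matrix); the scalability challenges the talk addresses arise precisely because such direct formulations do not scale to the large general-sum games found in practice.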
About the Speaker
Chun Kai Ling is a final-year doctoral candidate at Carnegie Mellon University, where he is co-advised by Professors Zico Kolter and Fei Fang. His research interest is in machine learning for noncooperative games, with a focus on inverse game theory and scalable solvers for large general-sum games. He is a recipient of the 2018 IJCAI Distinguished Paper Award. Prior to starting his Ph.D., he completed his undergraduate studies at the National University of Singapore and worked at DSO National Laboratories.
Umang Bhatt, a doctoral candidate in the Machine Learning Group at the University of Cambridge, will deliver a talk as part of CSRAI's Young Achievers Symposium.
"Challenges and Frontiers in Deploying Transparent Machine Learning"
Explainable machine learning offers the potential to provide stakeholders with insights into model behavior, yet there is little understanding of how organizations use these methods in practice. In this talk, Bhatt will discuss recent research exploring how organizations view and use explainability. He finds that most deployments are not for end-users but rather for machine learning engineers, who use explainability to debug the model. There is thus a gap between explainability in practice and the goal of external transparency since explanations are primarily serving internal stakeholders. Providing useful external explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, Bhatt reports findings from a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of and potential solutions for deploying explainable machine learning in the service of external transparency goals.
About the Speaker
Umang Bhatt is a Ph.D. candidate in the Machine Learning Group at the University of Cambridge. His expertise lies in human-machine collaboration and in trustworthy machine learning, spanning the fairness, robustness, and explainability of AI systems. He studies how to create AI systems that explain their predictions to stakeholders, leverage stakeholder expertise for better human-machine team performance, and interact with stakeholders to account for their goals and values. Currently, Umang is an Enrichment Student at The Alan Turing Institute and a Student Fellow at the Leverhulme Centre for the Future of Intelligence. Previously, he was a Fellow at the Mozilla Foundation and a Research Fellow at the Partnership on AI. He holds a B.S. and M.S. in Electrical and Computer Engineering from Carnegie Mellon University.
Su Lin Blodgett, a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal, will deliver a talk as part of CSRAI's Young Achievers Symposium.
“Towards Equitable Language Technologies”
Language technologies are now ubiquitous. Yet the benefits of these technologies do not accrue evenly to all people, and they can be harmful; they can reproduce stereotypes, prevent speakers of “non-standard” language varieties from participating fully in public discourse, and reinscribe historical patterns of linguistic discrimination. In this talk, Blodgett will take a tour through the rapidly emerging body of research examining bias and harm in language technologies. She will offer some perspective on the many challenges of this work, ranging from how we anticipate and measure language-related harms to how we grapple with the complexities of where and how language technologies are encountered, and with the institutions that produce them.
About the Speaker
Su Lin Blodgett is a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal. Her research focuses on the ethical and social implications of language technologies. She completed her Ph.D. in computer science at the University of Massachusetts Amherst, where she was supported by the NSF Graduate Research Fellowship.
Ryan Shi, a doctoral candidate in societal computing in the School of Computer Science at Carnegie Mellon University and founder of 98Connect, will deliver a talk as part of CSRAI's Young Achievers Symposium.
“From a Bag of Bagels to Bandit Data-Driven Optimization”
In this talk, Shi will start with his three-year collaboration with a large food rescue organization, in which his group developed a recommender system to selectively advertise available rescues to food rescue volunteers, improving the notification system's hit rate from 44% to 78%. Motivated by the pain points experienced in this and other projects, Shi proposes bandit data-driven optimization, a new learning paradigm that combines online bandit learning and offline predictive models to address the unique challenges that arise in machine learning projects for the public and nonprofit sectors.
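To make the combination of online bandit learning with an offline predictive model concrete, here is a toy sketch (not Shi's system or the full bandit data-driven optimization paradigm): an epsilon-greedy bandit chooses which volunteer to notify, warm-started with scores from a hypothetical offline model and refined by online claim/no-claim feedback. All numbers and names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy notification bandit: pick one volunteer to notify per rescue, blending a
# hypothetical offline model's scores (as pseudo-observations) with online feedback.
n_volunteers, n_rounds, eps = 50, 500, 0.1
true_accept = rng.uniform(0.05, 0.6, size=n_volunteers)            # unknown ground truth
offline_score = np.clip(true_accept + rng.normal(0, 0.15, n_volunteers), 0.0, 1.0)

pseudo = 5.0                                   # weight given to the offline predictions
counts = np.full(n_volunteers, pseudo)
estimates = offline_score.copy()

hits = 0
for t in range(n_rounds):
    if rng.random() < eps:
        arm = int(rng.integers(n_volunteers))            # explore
    else:
        arm = int(np.argmax(estimates))                  # exploit current estimate
    reward = float(rng.random() < true_accept[arm])      # did the volunteer claim it?
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running-mean update
    hits += reward

print(f"hit rate over {n_rounds} simulated notifications: {hits / n_rounds:.2f}")
```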
About the Speaker
Ryan Shi is a doctoral candidate in societal computing in the School of Computer Science at Carnegie Mellon University and founder of 98Connect, an organization that promotes open communication and sustainable collaboration between the technology and nonprofit worlds. He works with nonprofit organizations to address societal challenges in food security, wildlife conservation, and public health using AI. Some of his work has been adopted or is slated for field tests. He organized the AI for Social Good Symposia in 2020 and 2021, and he is the recipient of a Siebel Scholarship and a Carnegie Mellon Presidential Fellowship.
Praneeth Vepakomma, a doctoral student at MIT, will deliver a talk as part of CSRAI's Young Achievers Symposium.
"Differential Privacy for Measuring Nonlinear Correlations between Sensitive Data at Multiple Parties”
Vepakomma's work introduces a differentially private method for measuring nonlinear correlations between sensitive data hosted across two entities, along with utility guarantees for the private estimator. To the best of his group's knowledge, this is the first such private estimator of nonlinear correlation in a multi-party setup. The measure of nonlinear correlation they consider is distance correlation. Beyond exploratory data analysis, this work has direct applications to private feature screening, private independence testing, private k-sample tests, private multi-party causal inference, and private data synthesis.
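For orientation, the sketch below computes the (non-private) empirical distance correlation between two samples and then adds Laplace noise under a deliberately loose, assumed sensitivity bound. It only illustrates the quantity being privatized; it is not Vepakomma's estimator, mechanism, or utility analysis, and deriving a tight sensitivity bound is exactly the kind of analysis such work must provide.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two 1-D samples (Székely et al.)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                  # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double-centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)) if dvar_x * dvar_y > 0 else 0.0

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x ** 2 + 0.1 * rng.normal(size=500)      # nonlinear dependence, near-zero Pearson

dcor = distance_correlation(x, y)
# Crude differentially private release: Laplace noise with an assumed sensitivity
# bound of 1.0 (distance correlation lies in [0, 1]); this loose bound makes the
# noise large, which is why tighter, problem-specific analyses matter.
epsilon, sensitivity = 1.0, 1.0
private_dcor = dcor + rng.laplace(scale=sensitivity / epsilon)
print(f"dCor = {dcor:.3f}, noisy DP release = {private_dcor:.3f}")
```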
About the Speaker
Praneeth Vepakomma is a Ph.D. student at MIT. His research focuses on developing algorithms for distributed computation in statistics and machine learning under constraints of privacy, communication, and efficiency. He won a Meta (formerly Facebook) Ph.D. research fellowship and has been selected as a Social and Ethical Responsibilities of Computing Scholar by MIT's Schwarzman College of Computing. He won a Baidu Best Paper Award at the NeurIPS 2020 SpicyFL workshop for his work on FedML, and his work on NoPeek-Infer won the Mukh Best Paper Runner-Up Award at IEEE FG 2021. He was interviewed in the book Data Scientist: The Definitive Guide to Becoming a Data Scientist, and his work on Split Learning was featured in Technology Review. He previously worked as a scientist at Amazon, Motorola Solutions, and PublicEngines, interned at Apple and Corning, and worked at several startups that were eventually acquired. A small sampling of the problems he works on includes private independence testing and private k-sample testing in statistics, bridging privacy with social choice theory, private mechanisms for training and inference in ML, privately estimating nonlinear measures of statistical dependence between multiple parties, and split learning.
Kai Wang, a doctoral candidate at Harvard University, will deliver a talk as part of CSRAI's Young Achievers Symposium.
“Decision-focused Learning: Integrating Optimization Problems into Training Pipeline to Resolve Social Challenges”
This talk focuses on solving social challenges formulated as optimization problems with missing parameters. For example, wildlife conservation challenges are commonly modeled as game-theoretic problems between patrollers and poachers with unknown utility functions, and health service scheduling problems are formulated as resource allocation problems with unknown intervention effectiveness. A common way to address the missing information is to train a predictive model that predicts the missing parameters from domain-specific features; actionable decisions can then be obtained by solving the optimization problems with the predicted parameters. However, the predictive model is trained to maximize predictive accuracy rather than the performance of the chosen decisions, leading to a mismatch between the training and evaluation objectives. Wang's research addresses this mismatch by expressing optimization problems, including non-convex, multi-agent, and sequential problems, as differentiable layers that can be integrated into the training pipeline. This yields decision-focused learning, which trains the predictive model to directly optimize the performance of the proposed decisions. The talk concludes with experimental results on various social challenges that demonstrate the performance gains achieved by decision-focused learning.
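The following sketch illustrates the general idea in miniature (it is not Wang's formulation, which handles non-convex, multi-agent, and sequential problems): a predictor of item utilities is trained through a softmax relaxation of the downstream argmax decision, so the training loss is the realized decision value rather than predictive accuracy. The model, data, and hyperparameters are all invented for illustration.

```python
import torch

torch.manual_seed(0)

# Toy decision-focused learning: predict item utilities from features, pick one item
# per instance, and train the predictor to maximize the realized utility of that
# choice, using a softmax relaxation to make the argmax decision differentiable.
n_instances, n_items, n_feats, temp = 200, 30, 5, 0.1
W_true = torch.randn(n_feats)
X = torch.randn(n_instances, n_items, n_feats)
true_util = X @ W_true + 0.1 * torch.randn(n_instances, n_items)  # known only in training data

model = torch.nn.Linear(n_feats, 1, bias=False)
opt = torch.optim.Adam(model.parameters(), lr=0.05)

for epoch in range(200):
    pred = model(X).squeeze(-1)                        # predicted utilities
    soft_choice = torch.softmax(pred / temp, dim=-1)   # differentiable surrogate for argmax
    loss = -(soft_choice * true_util).sum(-1).mean()   # maximize realized decision value
    opt.zero_grad()
    loss.backward()
    opt.step()

# At deployment, make the hard decision with the trained predictor.
with torch.no_grad():
    chosen = model(X).squeeze(-1).argmax(dim=-1)
    value = true_util.gather(-1, chosen.unsqueeze(-1)).squeeze(-1).mean()
print(f"average realized utility of chosen items: {value.item():.2f}")
```

A two-stage approach would instead fit the predictor to minimize prediction error and only then optimize; decision-focused training differs in that the optimization step sits inside the training loss.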
About the Speaker
Kai Wang is a Ph.D. candidate in computer science at Harvard University, working with Professor Milind Tambe. Prior to his Ph.D., Kai graduated from National Taiwan University with a B.S. in Math and Electrical Engineering, and he won two silver medals at the International Mathematical Olympiad. Kai's work focuses on providing actionable decisions to solve wildlife conservation and health care challenges. Both domains involve multi-agent systems that require machine learning to address the uncertainty in the system and optimization to suggest actionable solutions. Kai identifies the problems that arise when machine learning and optimization are solved separately and proposes new techniques for integrating optimization problems into the machine learning pipeline to achieve decision-focused learning.
Ana-Andreea Stoica, a doctoral candidate at Columbia University, will deliver a talk as part of CSRAI's Young Achievers Symposium.
“Diversity and Inequality in Social Networks”
Online social networks often mirror inequality in real-world networks, arising from historical prejudice and economic or social factors. Such disparities are often picked up and amplified by algorithms that leverage social data to provide recommendations, diffuse information, or form groups. In this talk, Stoica gives an overview of her research on explanations for algorithmic bias in social networks, briefly describing her work on information diffusion, grouping, and general definitions of inequality. Using network models that reproduce the inequality seen in online networks, she characterizes the relationship between pre-existing bias and algorithms in creating inequality and discusses algorithmic solutions for mitigating bias.
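As a toy illustration of the kind of phenomenon Stoica studies (not her models or results): the sketch below builds a two-group network with homophily, seeds an independent-cascade spread at the highest-degree nodes, and measures how much of each group the information reaches. Group sizes, edge probabilities, and the spread probability are arbitrary choices.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Two-group network with homophily: within-group ties are far more likely than
# cross-group ties, and the majority group is larger, so its nodes have higher degree.
sizes = [800, 200]                              # majority, minority
P = [[0.010, 0.002], [0.002, 0.010]]
G = nx.stochastic_block_model(sizes, P, seed=0)
group = np.array([0] * sizes[0] + [1] * sizes[1])

# Seed the 10 highest-degree nodes, then simulate an independent cascade (p = 0.15):
# each newly activated node gets one chance to activate each inactive neighbor.
seeds = sorted(G.nodes, key=G.degree, reverse=True)[:10]
active, frontier, p = set(seeds), list(seeds), 0.15
while frontier:
    new = [v for u in frontier for v in G.neighbors(u)
           if v not in active and rng.random() < p]
    frontier = list(set(new))
    active |= set(frontier)

# Compare how much of each group the cascade reached.
for g, name in ((0, "majority"), (1, "minority")):
    members = np.where(group == g)[0]
    frac = np.mean([v in active for v in members])
    print(f"fraction of {name} group reached: {frac:.2f}")
```

Because degree-based seeding favors the denser majority group, the cascade in this toy setup tends to reach the minority group at a lower rate, which is the kind of disparity such analyses aim to measure and mitigate.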
About the Speaker
Ana-Andreea Stoica is a doctoral candidate at Columbia University. Her work focuses on mathematical models, data analysis, and inequality in social networks. From recommendation algorithms to the way information spreads in networks, Stoica is particularly interested in studying the effect of algorithms on people's sense of community and access to information and opportunities. She strives to integrate tools from mathematical models—from graph theory to opinion dynamics—with sociology to gain a deeper understanding of the ethics and implications of technology in our everyday lives. Ana grew up in Bucharest, Romania, and moved to the US for college, where she graduated from Princeton in 2016 with a bachelor's degree in Mathematics. Since 2019, she has been co-organizing the Mechanism Design for Social Good initiative.
Sherry Tongshuang Wu, a final-year doctoral candidate in computer science and engineering at the University of Washington, will deliver a talk as part of CSRAI's Young Achievers Symposium.
“Interactive AI Model Debugging and Correction”
Research in artificial intelligence has advanced at an incredible pace, to the point where it is making its way into our everyday lives, both explicitly and behind the scenes. However, beneath their impressive progress, many AI models hide deficiencies that amplify social biases or even cause fatal accidents. How do we identify, improve, and cope with imperfect models while still benefiting from their use? In this talk, Wu will discuss her work empowering humans to interact with AI models in order to debug and correct them. She will describe (1) how she helps experts run scalable and testable analyses on models in development, and (2) how she helps end users collaborate with deployed AI in a transparent and controllable way. In her final remarks, she will discuss her future research perspectives on building human-centered AI through data-centric approaches.
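As a flavor of what scalable, testable analyses of a model can look like, here is a tiny behavioral-test sketch with an invented stand-in classifier; it is not Wu's tooling, and both the model and the test template are hypothetical. The test checks that adding an irrelevant clause does not flip a sentiment prediction.

```python
# Minimal behavioral-test sketch with a hypothetical stand-in classifier.
def toy_sentiment(text: str) -> str:
    """Stand-in classifier: counts positive vs. negative cue words."""
    pos = sum(w in text.lower() for w in ("good", "great", "love"))
    neg = sum(w in text.lower() for w in ("bad", "terrible", "hate"))
    return "positive" if pos >= neg else "negative"

# Invariance test: appending a neutral clause should not change the prediction.
cases = ["The food was good.", "I love this film.", "The service was terrible."]
failures = [t for t in cases
            if toy_sentiment(t) != toy_sentiment(t + " I went there on a Tuesday.")]
print(f"{len(failures)} / {len(cases)} invariance failures")
```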
About the Speaker
Sherry Tongshuang Wu is a final-year doctoral candidate in computer science and engineering at the University of Washington, where she is advised by Jeffrey Heer and Dan Weld. She received her B.Eng. in computer science and engineering from the Hong Kong University of Science and Technology. Her research lies at the intersection of Human-Computer Interaction (HCI) and Natural Language Processing (NLP) and aims to empower humans to debug and correct AI models interactively, both when the model is under active development and after it is deployed for end users. Sherry has authored 19 papers in top-tier NLP, HCI, and visualization conferences and journals such as ACL, CHI, TOCHI, and TVCG, including a best paper award (top-1) and an honorable mention (top-3).