Recent News

'Cheat-a-thon' contest explores AI’s strengths and flaws in higher education
Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host a virtual "Cheat-a-thon" competition March 3-April 6. The event, open to faculty and students across the U.S., explores the use of generative AI in academic environments.

Q&A: Can AI be governed by an ‘equity by design’ framework?
Approaches to regulating artificial intelligence (AI), from creation to deployment and use in practice, vary internationally. In an article published Jan. 27 in the Duke Technology Law Review, Daryl Lim, CSRAI affiliate, associate dean for research and innovation at Penn State Dickinson Law, H. Laddie Montague Jr. Chair in Law and co-hire of the Penn State Institute for Computational and Data Sciences (ICDS), proposed an “equity by design” framework to better govern the technology and protect marginalized communities from potential harm. Lim spoke about AI governance and his proposed framework in the following Q&A on Penn State News.

Competition highlights generative AI’s power, pitfalls for medical diagnoses
Penn State's Center for Socially Responsible Artificial Intelligence announced the winners of its first-ever "Diagnose-a-thon." Prizes were awarded for both accurate and misleading health diagnoses generated by large language models.

Center for Socially Responsible AI awards seed funding to seven diverse projects
The Penn State Center for Socially Responsible Artificial Intelligence awarded more than $159,000 to seven interdisciplinary research projects across the University.

Media Mention: "Can We Trust AI? Safety, Ethics, and the Future of Technology"
In this episode of Agents of Tech, hosts Stephen Horn and Autria Godfrey talk with CSRAI director Shyam Sundar to explore the rapidly evolving world of artificial intelligence and ask the pressing questions: Can we trust AI? Is it safe? AI is becoming deeply embedded in every aspect of our lives, from healthcare to transportation, but how do we ensure it aligns with ethical principles and remains trustworthy?

Contest explores artificial intelligence’s strengths, flaws for medical diagnoses
Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Diagnose-a-thon,” a competition that aims to uncover the power and potential dangers of using generative AI for medical inquiries. The virtual event will take place Nov. 11-17 with top prizes of $1,000.

Showing AI users diversity in training data boosts perceived fairness and trust
Making an artificial intelligence system's training data available can promote the transparency and accountability of that system, according to Penn State researchers.

Center for Socially Responsible AI invites seed funding proposals
Penn State’s Center for Socially Responsible Artificial Intelligence invites short proposals for its annual seed funding program. Applications will be accepted through Nov. 1, with projects expected to start in spring 2024 and last up to two years.

Ask an expert: AI and disinformation in the 2024 presidential election
Penn State researchers discuss how to spot AI-generated election misinformation and what voters can do to protect themselves.

NIH grant supports developing voice assistant for persons living with dementia
Researchers from the College of Information Sciences and Technology, the Donald P. Bellisario College of Communications and the Ross and Carol Nese College of Nursing received a $432,198 grant from the National Institute on Aging to work on voice assistants to support dementia care.

User control of autoplay can alter awareness of online video ‘rabbit holes’
A new study by Penn State researchers suggests that giving users control over the autoplay feature can help them realize that they are going down a rabbit hole of extreme content. The work — which the researchers said has implications for responsibly designing online content viewing platforms and algorithms, as well as for helping users better recognize extreme content — is available online and will be published in the October issue of the International Journal of Human-Computer Studies.

Q&A: In ChatGPT we trust?
Combining artificial intelligence (AI) and online search engines may make AI more trustworthy and search results easier to use, according to Penn State researchers.

Competition highlights believable fake news created with generative AI tools
Penn State’s Center for Socially Responsible Artificial Intelligence announced the winners of its first-ever “Fake-a-thon.” The competition, held April 1-5, invited Penn Staters to use generative AI tools like ChatGPT to create fake news stories, which were then scrutinized for clues that they were fake during the second part of the two-part challenge.

Contest invites Penn Staters to write believable fake news with generative AI
Starting April Fools’ Day, Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Fake-a-thon,” a five-day competition to better understand the role of generative AI in the creation and detection of fake news. The event challenges participants to use generative AI to write believable fake news stories and is open to all members of the University community with a valid Penn State email address.

Media Mention: "How we can make AI less biased against disabled people"
AI has become omnipresent in nearly every industry, but its inherent biases often go unaddressed in meaningful ways. For people with disabilities, that can be a significant problem.
This article from Fast Company dives into the issue, citing recent research from CSRAI Student Affiliates Pranav Narayanan Venkit and Mukund Srinath, and CSRAI Affiliate and College of Information Sciences and Technology Assistant Professor Shomir Wilson.