Recent News
![Photo of Daryl Lim](/sites/default/files/styles/276_x_184_news_page/public/directory/Lim-Daryl.jpg?itok=8shcLCL0)
Q&A: Can AI be governed by an ‘equity by design’ framework?
Approaches to regulating artificial intelligence (AI), from its creation to its deployment and use in practice, vary internationally. Daryl Lim, CSRAI affiliate, associate dean for research and innovation at Penn State Dickinson Law, H. Laddie Montague Jr. Chair in Law and co-hire of the Penn State Institute for Computational and Data Sciences (ICDS), has proposed an “equity by design” framework to better govern the technology and protect marginalized communities from potential harm in an article published Jan. 27 in the Duke Technology Law Review. Lim discussed AI governance and his proposed framework in the following Q&A with Penn State News.
![Competition highlights generative AI’s power, pitfalls for medical diagnoses](/sites/default/files/styles/276_x_184_news_page/public/news/lead-image-diagnose-a-thon-psn-story.jpg?itok=DA2ZoJ0r)
Competition highlights generative AI’s power, pitfalls for medical diagnoses
Penn State's Center for Socially Responsible Artificial Intelligence announced the winners of its first-ever "Diagnose-a-thon." Prizes were awarded to participants who elicited accurate and misleading health diagnoses from large language models.
Center for Socially Responsible AI awards seed funding to seven diverse projects
The Penn State Center for Socially Responsible Artificial Intelligence awarded more than $159,000 to seven interdisciplinary research projects across the University.
![A person holding a phone that is scanning their face](/sites/default/files/styles/276_x_184_news_page/public/news/AI-face-scan.jpg?itok=LO_wiu52)
Media Mention: "Can We Trust AI? Safety, Ethics, and the Future of Technology"
In this episode of Agents of Tech, hosts Stephen Horn and Autria Godfrey speak with CSRAI director Shyam Sundar to explore the rapidly evolving world of artificial intelligence and ask the pressing questions: Can we trust AI? Is it safe? AI is becoming deeply embedded in every aspect of our lives, from healthcare to transportation, but how do we ensure it aligns with ethical principles and remains trustworthy?
![Contest explores artificial intelligence’s strengths, flaws for medical diagnoses](/sites/default/files/styles/276_x_184_news_page/public/news/health-ai-tech.jpg?itok=NLae5xIb)
Contest explores artificial intelligence’s strengths, flaws for medical diagnoses
Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Diagnose-a-thon,” a competition that aims to uncover the power and potential dangers of using generative AI for medical inquiries. The virtual event will take place Nov. 11-17 with top prizes of $1,000.
![Showing AI users diversity in training data boosts perceived fairness and trust](/sites/default/files/styles/276_x_184_news_page/public/news/sundar_hireme_modelcards_psn.jpg?itok=JIAVUia9)
Showing AI users diversity in training data boosts perceived fairness and trust
The availability of an artificial intelligence system's training data can promote transparency and accountability of that system, according to Penn State researchers.
![Center for Socially Responsible AI invites seed funding proposals](/sites/default/files/styles/276_x_184_news_page/public/news/ai-icons.jpg?itok=GdP6Ee7N)
Center for Socially Responsible AI invites seed funding proposals
Penn State’s Center for Socially Responsible Artificial Intelligence invites short proposals for its annual seed funding program. Applications will be accepted through Nov. 1, with projects expected to start in spring 2024 and last up to two years.
![Ask an expert: AI and disinformation in the 2024 presidential election](/sites/default/files/styles/276_x_184_news_page/public/news/gettyimages-1279660371.jpg?itok=xdfGgrp2)
Ask an expert: AI and disinformation in the 2024 presidential election
Penn State researchers discuss how to spot AI-generated election misinformation and what voters can do to protect themselves.
![NIH grant supports developing voice assistant for persons living with dementia](/sites/default/files/styles/276_x_184_news_page/public/news/penn-state-stone-wall.png?itok=74QkpsMS)
NIH grant supports developing voice assistant for persons living with dementia
Researchers from the College of Information Sciences and Technology, the Donald P. Bellisario College of Communications and the Ross and Carol Nese College of Nursing received a $432,198 grant from the National Institute on Aging to work on voice assistants to support dementia care.
![User control of autoplay can alter awareness of online video ‘rabbit holes’](/sites/default/files/styles/276_x_184_news_page/public/news/mealprep_4x3.jpg?itok=ku55a4o3)
User control of autoplay can alter awareness of online video ‘rabbit holes’
A new study by Penn State researchers suggests that giving users control over the autoplay interface feature can help them realize that they are going down a rabbit hole of extreme content. The work — which the researchers said has implications for responsibly designing online content viewing platforms and algorithms, as well as helping users better recognize extreme content — is available online and will be published in the October issue of the International Journal of Human-Computer Studies.
![Q&A: In ChatGPT we trust?](/sites/default/files/styles/276_x_184_news_page/public/news/gettyimages-1369712065.jpg?itok=lEyK52Gz)
Q&A: In ChatGPT we trust?
Combining artificial intelligence (AI) and online search engines may make AI more trustworthy and search results easier to use, according to Penn State researchers.
![Competition highlights believable fake news created with generative AI tools](/sites/default/files/styles/276_x_184_news_page/public/news/fake-news-keyboard_psn.jpg?itok=-VxnXxpL)
Competition highlights believable fake news created with generative AI tools
Penn State’s Center for Socially Responsible Artificial Intelligence announced the winners of its first-ever “Fake-a-thon.” The competition, held April 1-5, invited Penn Staters to use generative AI tools like ChatGPT to create fake news stories, which were then scrutinized for telltale clues during the two-part challenge.
![Contest invites Penn Staters to write believable fake news with generative AI](/sites/default/files/styles/276_x_184_news_page/public/news/fake-news-keyboard_psn.jpg?itok=-VxnXxpL)
Contest invites Penn Staters to write believable fake news with generative AI
Starting April Fools’ Day, Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Fake-a-thon,” a five-day competition to better understand the role of generative AI in the creation and detection of fake news. The event challenges participants to use generative AI to write believable fake news stories and is open to all members of the University community with a valid Penn State email address.
![A white handicapped icon against a green background](/sites/default/files/styles/276_x_184_news_page/public/news/AI-disabled-people.jpg?itok=yQh_v1EZ)
Media Mention: "How we can make AI less biased against disabled people"
AI has become omnipresent in nearly every industry, but its inherent biases often go unaddressed in meaningful ways. For people with disabilities, that can be a significant problem.
This article from Fast Company dives into the issue, citing recent research from CSRAI Student Affiliates Pranav Narayanan Venkit and Mukund Srinath, and CSRAI Affiliate and College of Information Sciences and Technology Assistant Professor Shomir Wilson.
![Decorative image](/sites/default/files/styles/276_x_184_news_page/public/news/AI%20and%20Big%20Data.jpg?itok=C2vusAaY)
Media Mention: "Pennsylvania's Path to Regulate Artificial Intelligence"
Pennsylvania has yet to pass any laws regulating artificial intelligence, but legislators are crafting resolutions they hope will help the state to enact informed legislation when the time comes.
In this article from Erie News Now, Daryl Lim, CSRAI affiliate and H. Laddie Montague Jr. Chair in Law at Dickinson Law, laid out several priorities a committee should consider when regulating AI.