Recent News
Media Mention: "Can We Trust AI? Safety, Ethics, and the Future of Technology"
In this episode of Agents of Tech, hosts Stephen Horn and Autria Godfrey speak with CSRAI director S. Shyam Sundar to explore the rapidly evolving world of artificial intelligence and ask the pressing questions: Can we trust AI? Is it safe? AI is becoming deeply embedded in every aspect of our lives, from healthcare to transportation, but how do we ensure it aligns with ethical principles and remains trustworthy?
Contest explores artificial intelligence’s strengths, flaws for medical diagnoses
Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Diagnose-a-thon,” a competition that aims to uncover the power and potential dangers of using generative AI for medical inquiries. The virtual event will take place Nov. 11-17 with top prizes of $1,000.
Showing AI users diversity in training data boosts perceived fairness and trust
Making an artificial intelligence system's training data available can promote that system's transparency and accountability, according to Penn State researchers.
Center for Socially Responsible AI invites seed funding proposals
Penn State’s Center for Socially Responsible Artificial Intelligence invites short proposals for its annual seed funding program. Applications will be accepted through Nov. 1, with projects expected to start in spring 2024 and last up to two years.
Ask an expert: AI and disinformation in the 2024 presidential election
Penn State researchers discuss how to spot AI-generated election misinformation and what voters can do to protect themselves.
NIH grant supports developing voice assistant for persons living with dementia
Researchers from the College of Information Sciences and Technology, the Donald P. Bellisario College of Communications and the Ross and Carol Nese College of Nursing received a $432,198 grant from the National Institute on Aging to work on voice assistants to support dementia care.
User control of autoplay can alter awareness of online video ‘rabbit holes’
A new study by Penn State researchers suggests that giving users control over the autoplay interface feature can help them realize when they are going down a rabbit hole of extreme content. The work — which the researchers said has implications for responsibly designing online content viewing platforms and algorithms, as well as for helping users better recognize extreme content — is available online and will be published in the October issue of the International Journal of Human-Computer Studies.
Q&A: In ChatGPT we trust?
Combining artificial intelligence (AI) and online search engines may make AI more trustworthy and search results easier to use, according to Penn State researchers.
Competition highlights believable fake news created with generative AI tools
Penn State’s Center for Socially Responsible Artificial Intelligence announced the winners of its first-ever “Fake-a-thon.” The competition, held April 1-5, invited Penn Staters to use generative AI tools like ChatGPT to create fake news stories, which were then scrutinized for telltale clues during the two-part challenge.
Contest invites Penn Staters to write believable fake news with generative AI
Starting April Fools’ Day, Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Fake-a-thon,” a five-day competition to better understand the role of generative AI in the creation and detection of fake news. The event challenges participants to use generative AI to write believable fake news stories and is open to all members of the University community with a valid Penn State email address.
Media Mention: "How we can make AI less biased against disabled people"
AI has become omnipresent in nearly every industry, but its inherent biases often go unaddressed in meaningful ways. For people with disabilities, that can be a significant problem.
This article from Fast Company dives into the issue, citing recent research from CSRAI Student Affiliates Pranav Narayanan Venkit and Mukund Srinath, and CSRAI Affiliate and College of Information Sciences and Technology Assistant Professor Shomir Wilson.
Media Mention: "Pennsylvania's Path to Regulate Artificial Intelligence"
Pennsylvania has yet to pass any laws regulating artificial intelligence, but legislators are crafting resolutions they hope will help the state to enact informed legislation when the time comes.
In this article from Erie News Now, Daryl Lim, CSRAI affiliate and H. Laddie Montague Jr. Chair in Law at Dickinson Law, laid out several priorities a committee should weigh when regulating AI.
Predictive model detects potential extremist propaganda on social media
The militant Islamic State group, or ISIS, lost its physical territory in 2019, but it remains an active force on social media, according to researchers from the Penn State College of Information Sciences and Technology, who set out to better understand the group’s online strategies. The research team includes CSRAI student affiliate Younes Karimi.
Media Mention: "Why You Might Want Alexa or Siri to Sound More Like You"
A recent research study suggests that consumers like and trust AI assistants more when the assistants remind them of themselves.
In this article from The Wall Street Journal, S. Shyam Sundar, CSRAI director and James P. Jimirro Professor of Media Effects who co-wrote the study, explains the potential persuasive effects voice assistants can have on users and what designers need to consider when creating them.
Center for Socially Responsible AI awards seed funding to five projects
The Penn State Center for Socially Responsible Artificial Intelligence has announced the results of its most recent seed funding competition. The center awarded over $105,000 to five interdisciplinary research projects that feature teams of researchers representing six colleges and campuses.