Recent News


Psychology Today: Chatbots Are a Valuable Tool—and a Moral Test for Us All

Originally written by CSRAI Affiliate Patrick Plaisance for Psychology Today.

“Generative AI” tools promise to provide us with many good things, but they also provide us with something else: a moral test of sorts.

By ignoring the lessons of our response to social media’s rise to dominance 20 years ago, we may well be on our way to failing the moral test posed by chatbots.

Read more...


New York Times: How Easy Is It to Fool A.I.-Detection Tools?

CSRAI Director S. Shyam Sundar shares his perspective in the New York Times on the fast-burgeoning crop of companies that are offering services to detect AI-generated images to help society separate fact from fiction.

Read more...

Center for Socially Responsible AI awards Big Ideas Grants to five projects

The Penn State Center for Socially Responsible Artificial Intelligence awarded more than $212,000 to advance five interdisciplinary research projects as part of its Big Ideas Grant program. Awarded projects feature researchers from six departments across four colleges and institutes.

Read more...

Transparent labeling of training data may boost trust in artificial intelligence

Showing users that visual data fed into artificial intelligence (AI) systems was labeled correctly might make people trust AI more, according to researchers. The findings may also pave the way to help scientists better measure the connection between labeling credibility, AI performance and trust.

Read more...

What is ChatGPT and what can it be used for?

Penn State artificial intelligence experts offer perspectives on some of the most frequently asked questions.

Read more...

IST assistant professor Shomir Wilson receives NSF CAREER Award

Shomir Wilson, assistant professor in the Penn State College of Information Sciences and Technology, is the recipient of a Faculty Early Career Development (CAREER) award from the National Science Foundation in recognition of his work, “Large-Scale Exploration and Interpretation of Consumer-Oriented Legal Documents.”

Read more...


Meet the Jailbreakers Hypnotizing ChatGPT Into Bomb-Building

If there’s one rule about rules, it’s that they’re bound to be broken. That goes for life, law, and, on a much more specific note, ChatGPT.

In fact, that rule may go doubly for ChatGPT. As the chatbot’s popularity has ballooned, so too has the uncontrollable urge to make OpenAI’s language model do things it shouldn’t — for example, telling you step-by-step how to build explosives.

In this story from Inverse, CSRAI director S. Shyam Sundar offers his perspective on who is responsible for ensuring a safe experience on ChatGPT.

Read more...


Regulating AI: 3 experts explain why it’s difficult to do and important to get right

Given the potential for widespread harm as technology companies roll out new AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it’s so important to get it right.

Read more...

Center now offering social responsibility consultations for AI researchers

Penn State’s Center for Socially Responsible Artificial Intelligence is now offering social responsibility consultations to University researchers engaged in AI-related research. 

Read more...

Center issues seed grant call for 'Big Ideas' in socially responsible AI

Penn State’s Center for Socially Responsible Artificial Intelligence is inviting short proposals for a special off-cycle round of "Big Idea Grant" seed funding, earmarked for early-stage concepts and research with transformational potential. Applications will be accepted through May 1, with projects expected to start on July 1 and last up to two years.

Read more...


CSRAI sponsoring workshop on AI for Social Good at AAAI Conference

The Center for Socially Responsible Artificial Intelligence (CSRAI) at Penn State is pleased to announce that it is sponsoring the International Workshop on AI for Social Good on February 14, 2023, during the prestigious AAAI Conference on Artificial Intelligence in Washington D.C.

Read more...

Center for Socially Responsible AI awards seed funding to 6 projects

The Penn State Center for Socially Responsible Artificial Intelligence recently announced the results of its third seed funding competition. The center awarded $145,000 to advance six interdisciplinary research projects that feature researchers from eight colleges and institutes.

Read more...


People open up more to smart speakers that listen actively

Adding random, short expressions of understanding in a conversation may turn smart speakers, such as Alexa and Siri, into robot therapists that allow people to open up more without violating their privacy, according to a team of researchers.

Read more...


The White House’s ‘AI Bill of Rights’ outlines five principles to make artificial intelligence safer, more transparent and less discriminatory

Christopher Dancy, Harold and Inge Marcus Industrial and Manufacturing Career Development Associate Professor in the Penn State College of Engineering, shares his perspective on the Blueprint for an AI Bill of Rights, which was recently released by the White House Office of Science and Technology Policy.

Read more...

AI language models show bias against people with disabilities, study finds

The algorithms that drive natural language processing technology — a type of artificial intelligence that allows machines to use text and spoken words in many different applications — often exhibit tendencies that could be offensive or prejudiced toward individuals with disabilities, according to researchers at the Penn State College of Information Sciences and Technology.

Read more...