Recent News

People open up more to smart speakers that listen actively

Adding random, short expressions of understanding in a conversation may turn smart speakers, such as Alexa and Siri, into robot therapists that allow people to open up more without violating their privacy, according to a team of researchers.

Read more...

The White House’s ‘AI Bill of Rights’ outlines five principles to make artificial intelligence safer, more transparent and less discriminatory

Christopher Dancy, Harold and Inge Marcus Industrial and Manufacturing Career Development Associate Professor in the Penn State College of Engineering, shares his perspective on the Blueprint for an AI Bill of Rights, which was recently released by the White House Office of Science and Technology Policy.

Read more...

AI language models show bias against people with disabilities, study finds

The algorithms that drive natural language processing technology — a type of artificial intelligence that allows machines to use text and spoken words in many different applications — often carry biases that can be offensive or prejudiced toward individuals with disabilities, according to researchers at the Penn State College of Information Sciences and Technology.

Read more...

Center for Socially Responsible AI accepting seed funding proposals

Penn State’s Center for Socially Responsible Artificial Intelligence is inviting short proposals for its annual seed funding program. Applications will be accepted through Nov. 1, with projects expected to start in spring 2023 and last for up to two years.

Read more...

People who distrust fellow humans show greater trust in artificial intelligence

A person’s distrust in humans predicts greater trust in artificial intelligence’s ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media.

Read more...

Students encouraged to apply for AI center affiliate status

Penn State’s Center for Socially Responsible Artificial Intelligence is accepting applications for its Student Affiliates Program. Graduate and undergraduate students from all Penn State campuses are eligible to apply. Applications will remain open indefinitely and can be submitted online.

Read more...

Users trust AI as much as humans for flagging problematic content

Social media users may trust artificial intelligence — AI — as much as human editors to flag hate speech and harmful content, according to researchers. 

Read more...

Researchers encouraged to apply for AI center affiliate status

Penn State faculty pursuing research, education and outreach related to socially responsible AI are encouraged to apply for affiliate status with the Center for Socially Responsible Artificial Intelligence.

Read more...

AI for Social Impact talks available for viewing online

Recordings of recent talks presented as part of the Center for Socially Responsible Artificial Intelligence's "AI for Social Impact Seminar Series" are now available for viewing online.

Read more...

Young Achievers Symposium talks available for viewing online

Recordings of recent talks presented as part of the Center for Socially Responsible Artificial Intelligence's "Young Achievers Symposium" are now available for viewing online.

Read more...

Suicide vulnerability index, machine learning model help predict counties’ risk

Penn State researchers have developed a machine learning-based model that uses their newly developed suicide vulnerability index to identify at-risk communities at the U.S. county level.

Read more...

Lecture on explainable machine learning to be presented April 12

Umang Bhatt, a doctoral candidate in the Machine Learning Group at the University of Cambridge, will present a free public lecture titled “Challenges and Frontiers in Deploying Transparent Machine Learning” at 4 p.m. Tuesday, April 12.

Read more...

Lecture on bias in language technologies to be presented April 5

Su Lin Blodgett, a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics group at Microsoft Research Montréal, will present a free public lecture titled “Towards Equitable Language Technologies” at 4 p.m. Tuesday, April 5. The lecture will be held live via Zoom. No registration is required.

Read more...

Tech activist Timnit Gebru to deliver distinguished lecture on ethical AI

Timnit Gebru, a widely respected leader in artificial intelligence ethics research who said she lost her job at Google for raising concerns about the company’s AI practices and workplace discrimination, will present a free public lecture titled “The Quest for Ethical Artificial Intelligence” at 2 p.m. on Tuesday, March 29, via Zoom. No registration is required.

Read more...

Lecture on leveraging machine learning in nonprofits to be presented March 15

Ryan Shi, a doctoral candidate in the School of Computer Science at Carnegie Mellon University, will present a free, public webinar titled “From a Bag of Bagels to Bandit Data-Driven Optimization” at 4 p.m. on Tuesday, March 15.

Read more...