News Release

Measuring trust in AI

Researchers find public trust in AI varies greatly depending on the application

Peer-Reviewed Publication

University of Tokyo

Image: An example chart showing a respondent’s ratings of the eight themes for each of the four ethical scenarios, each covering a different application of AI.

Credit: © 2021 Yokoyama et al.

Prompted by the increasing prominence of artificial intelligence (AI) in society, University of Tokyo researchers investigated public attitudes toward the ethics of AI. Their findings quantify how different demographics and ethical scenarios affect these attitudes. As part of this study, the team developed an octagonal visual metric, analogous to a rating system, which could be useful to AI researchers who wish to know how their work may be perceived by the public.

Many people feel the rapid development of technology often outpaces that of the social structures that implicitly guide and regulate it, such as law or ethics. AI in particular exemplifies this as it has become so pervasive in everyday life for so many, seemingly overnight. This proliferation, coupled with the relative complexity of AI compared to more familiar technology, can breed fear and mistrust of this key component of modern living. Knowing who distrusts AI, and in what ways, would be useful to developers and regulators of AI technology, but these kinds of questions are not easy to quantify.

Researchers at the University of Tokyo, led by Professor Hiromi Yokoyama from the Kavli Institute for the Physics and Mathematics of the Universe, set out to quantify public attitudes toward ethical issues around AI. Through analysis of surveys, the team sought to answer two questions in particular: how attitudes change depending on the scenario presented to a respondent, and how the demographics of the respondents themselves affect those attitudes.

Ethics cannot really be quantified, so to measure attitudes toward the ethics of AI, the team employed eight themes common to many AI applications that raise ethical questions: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These, which the group has termed “octagon measurements,” were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her team.

Survey respondents were given a series of four scenarios to judge according to these eight criteria. Each scenario looked at a different application of AI. They were: AI-generated art, customer service AI, autonomous weapons and crime prediction.
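The octagon metric lends itself naturally to a radar-style chart like the one pictured above, with the eight themes at the vertices and one polygon per scenario. As a rough illustration only, the following Python sketch plots such a chart with matplotlib; the theme and scenario labels come from the release, but the ratings and the 1–5 scale are invented for the example and do not reflect the study’s data.

```python
import numpy as np
import matplotlib.pyplot as plt

# The eight "octagon measurement" themes named in the study.
THEMES = [
    "Privacy", "Accountability", "Safety & security",
    "Transparency & explainability", "Fairness & non-discrimination",
    "Human control of technology", "Professional responsibility",
    "Promotion of human values",
]

# Hypothetical ratings (1-5 scale, assumed for illustration) for one
# respondent across the four scenarios described in the survey.
scenarios = {
    "AI-generated art":    [4, 3, 4, 3, 4, 4, 3, 4],
    "Customer service AI": [3, 4, 4, 3, 3, 4, 4, 3],
    "Autonomous weapons":  [2, 2, 1, 2, 2, 1, 2, 1],
    "Crime prediction":    [2, 3, 3, 2, 2, 3, 3, 2],
}

# Place the eight themes at the vertices of an octagon, then repeat the
# first angle so each plotted polygon closes on itself.
angles = np.linspace(0, 2 * np.pi, len(THEMES), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, ratings in scenarios.items():
    values = ratings + ratings[:1]  # close the polygon
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(THEMES, fontsize=7)
ax.set_ylim(0, 5)
ax.legend(loc="lower right", fontsize=7)
plt.tight_layout()
plt.show()
```

With real survey responses substituted for the placeholder numbers, a chart like this makes it easy to see at a glance which applications draw the most skepticism on which ethical themes.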

The survey respondents also gave the researchers information about themselves, such as age, gender, occupation and level of education, and answered an additional set of questions measuring their level of interest in science and technology. This information was essential for the researchers to see which characteristics of people corresponded to certain attitudes.

“Prior studies have shown that risk is perceived more negatively by women, older people, and those with more subject knowledge. I was expecting to see something different in this survey given how commonplace AI has become, but surprisingly we saw similar trends here,” said Yokoyama. “Something we saw that was expected, however, was how the different scenarios were perceived, with the idea of AI weapons being met with far more skepticism than the other three scenarios.”

The team hopes the results could lead to the creation of a sort of universal scale to measure and compare ethical issues around AI. This survey was limited to Japan, but the team has already begun gathering data in several other countries.

“With a universal scale, researchers, developers and regulators could better measure the acceptance of specific AI applications or impacts and act accordingly,” said Assistant Professor Tilman Hartwig. “One thing I discovered while developing the scenarios and questionnaire is that many topics within AI require significant explanation, more so than we realized. This goes to show there is a huge gap between perception and reality when it comes to AI.”

 

###

Journal article
Yuko Ikkatai, Tilman Hartwig, Naohiro Takanashi and Hiromi M. Yokoyama, “Octagon measurement: public attitudes toward AI ethics,” International Journal of Human-Computer Interaction

Funding
This work was supported by the World Premier International Research Center Initiative (WPI) and financially supported by KAKENHI Grant No. 20K14464, MEXT, Japan, and by the SECOM Science and Technology Foundation.

Research contact
Professor Hiromi M. Yokoyama
Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo Institutes for Advanced Study, The University of Tokyo
5-1-5 Kashiwanoha, Kashiwa, Chiba Prefecture 277-8583, Japan
Email: hiromi.yokoyama@ipmu.jp

Kavli Institute for the Physics and Mathematics of the Universe - https://www.ipmu.jp/en

Press Contact
Mr. Rohan Mehra
Division for Strategic Public Relations, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan
Email: press-releases.adm@gs.mail.u-tokyo.ac.jp

About the University of Tokyo
The University of Tokyo is Japan's leading university and one of the world's top research universities. The vast research output of some 6,000 researchers is published in the world's top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 4,000 international students. Find out more at www.u-tokyo.ac.jp/en/ or follow us on Twitter at @UTokyo_News_en.
