As excitement and investment in artificial intelligence grow, a number of surveys have sought to understand public views. There have been very few attempts, however, to understand the attitudes of AI researchers. Given the uncertainties around the opportunities and threats of AI technologies, the views of those closest to the technology are crucial. In summer 2024, a research team from University College London’s Department of Science and Technology Studies fielded a survey of AI researchers designed to understand their values, their visions for the future of AI, and what they thought about the role of public voices in AI. Our survey included questions that had been asked in representative UK public surveys, allowing us to map overlaps and gaps between the public’s and AI researchers’ views. We analysed the responses of 4,260 AI researchers, making this the largest survey of AI researchers to date. Our insights include the following:

- Researchers do not speak with one voice: they report diverse and divergent views about innovation and responsibilities in AI
- Researchers are more positive than members of the public about the benefits of AI
- Researchers and the public share concerns about disinformation, data use and cybercrime
- There is a sense of technological inevitability in AI research
- ‘Optimist’ and ‘pessimist’ researchers report different views on AI
- Researchers tend to have a ‘deficit model’ of the public
- Researchers want the public involved downstream, not upstream
- Researchers want AI to reflect human values but do not pay attention to social science research
- Researchers think it is more important for society to debate risks than benefits
- Researchers and the public disagree about who should be responsible for the safe use of AI
- Researchers want greater care for training data
- Researchers are less concerned than the public are when it comes to explaining AI outputs
- Researchers are concerned about who sets research agendas for AI

This project is part of the Public Voices in AI project under the UKRI Responsible AI programme. This report is the first publication from the UCL Centre for Responsible Innovation.
public participation, public good, ai, research policy