Time for Trust? Scale and relationality in understanding trust relations between people, technologies and institutions 

Part 4: Trusted systems?

The term we have chosen to capture the contemporary dynamics of trust is mediated trust. In a paper focused on the rise of blockchain as a technology of trust, Balázs Bodó has observed that ‘digital technologies shape how humans trust each other, and … in order to fulfill this task, they need to be trustworthy’ (Bodó, 2021, p. 2669). Focusing on the historical development of institutional trust, Bodó argues that the concept of mediated trust incorporates both the dimensions through which digital technologies promote trust (trust through technology) and the discursive frameworks through which those technologies are themselves evaluated (trust in technology).

From the perspective of science communication, Mike Schäfer has made the point that ‘trust in science is, to a considerable extent, the outcome of mediated communication’ (Schäfer, 2016, p. 1), since ‘public trust in science is, to a considerable degree, influenced by media representations of science, its protagonists and institutions … trust intermediaries “double” the configuration of trust … they are themselves potential objects whom the public may or may not trust’ (Schäfer, 2016, p. 3).

We have here two conceptions of trust and two conceptions of technology. The first concerns whether digital technologies themselves constitute the basis for trusted systems, as compared to whether the institutional frameworks that govern their development and deployment promote public trust in those technologies. The second concerns public representations of technologies. Is the vision behind such technologies positive or negative? Are the spokespeople associated with these technologies trusted and publicly credible in the wider community?

The second brings us to a wider politics of expertise (Eyal, 2019), which we saw play out very sharply during the COVID-19 pandemic, and which continues to resonate in public discourses around the development and use of artificial intelligence. Insofar as AI is seen as a set of technologies developed by ‘tech bros’ strongly motivated by private profit, and built on data acquired through contentious means, this perception will continue to be a barrier to the development of AI, and one that will promote a strong regulatory response.

Trust has emerged as a significant, if shifting, focus across the various AI reviews. It has been described as ‘a central driver for widespread acceptance of AI’ (Australian Government Department of Industry, Science and Resources, 2023, p. 4), while Michael Birtwistle of the Ada Lovelace Institute observed to the UK House of Commons Inquiry that, in order to realise the economic benefits of AI, ‘we need public trust; we need those technologies to be trustworthy, and that is worth investing regulatory capability in’ (House of Commons Science, Innovation and Technology Committee, 2023, p. 29).

In proposing a ‘pro-innovation’ approach to AI regulation in the UK, the Department for Science, Innovation and Technology and the Office for Artificial Intelligence observed:

Trust is a critical driver for AI adoption. If people do not trust AI, they will be reluctant to use it. Such reluctance can reduce demand for AI products and hinder innovation (Department for Science, Innovation & Technology and Office for Artificial Intelligence, 2023, p. 33).

There is a need for clarity in these policy discussions about what is meant by ‘trust in AI’. The term often defaults to the development of trustworthy AI systems, equated with meeting defined technical standards or being seen to engage in risk assessment and risk mitigation. Yet the question of trust in AI will inevitably be bound up with wider debates around trust in institutions, and around trust in both communicative processes and the corporate and government entities engaged with them. It can thus be seen as a subset of wider debates around the future of mediated trust.

References cited

Australian Government Department of Industry, Science and Resources. (2023). Safe and Responsible AI in Australia: Discussion Paper. Australian Government. https://consult.industry.gov.au/supporting-responsible-ai

Bodó, B. (2021). Mediated trust: A theoretical framework to address the trustworthiness of technological trust mediators. New Media & Society, 23(9), 2668–2690.

Department for Science, Innovation & Technology and Office for Artificial Intelligence. (2023). A pro-innovation approach to AI regulation. GOV.UK. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-3-an-innovative-and-iterative-approach

Eyal, G. (2019). The Crisis of Expertise. Polity.

House of Commons Science, Innovation and Technology Committee. (2023). The Governance of Artificial Intelligence: Interim Report (Ninth Report of Session 2022–23). House of Commons. https://committees.parliament.uk/work/6986/governance-of-artificial-intelligence-ai/

Schäfer, M. (2016). Mediated trust in science: Concept, measurement and perspectives for the ‘science of science communication’. Journal of Science Communication, 15(5), 1–7.

Back to Part 3.
