Time for Trust? Scale and relationality in understanding trust relations between people, technologies and institutions 

Part 4: Trusted systems?

The term we have chosen to use to capture the contemporary dynamics of trust is mediated trust. In a paper focused on the rise of Blockchain as a technology of trust, Balázs Bodó has observed that ‘digital technologies shape how humans trust each other, and … in order to fulfill this task, they need to be trustworthy’ (Bodó, 2021, p. 2669). Focusing on the historical development of institutional trust, Bodó argues that the concept of mediated trust incorporates both the dimensions through which digital technologies promote trust (trust through technology) and the discursive frameworks associated with trust in technology.

From the perspective of science communication, Mike Schäfer has made the point that ‘trust in science is, to a considerable extent, the outcome of mediated communication’ (Schäfer, 2016, p. 1), since ‘public trust in science is, to a considerable degree, influenced by media representations of science, its protagonists and institutions … trust intermediaries “double” the configuration of trust … they are themselves potential objects whom the public may or may not trust’ (Schäfer, 2016, p. 3).

We have here two conceptions of trust and two conceptions of technology. The first is the question of whether digital technologies constitute the basis for trusted systems, as compared to whether the institutional frameworks which govern their development and deployment promote public trust in the technologies. The second concerns public representations of technologies. Is the vision behind such technologies positive or negative? Are the spokespeople associated with these technologies trusted and publicly credible in the wider community?

The second brings us to a wider politics of expertise (Eyal, 2019), which we saw play out very sharply during the COVID-19 pandemic, and which continues to resonate in public discourses around the development and use of artificial intelligence. Insofar as AI is seen as a set of technologies developed by ‘tech bros’ strongly motivated by private profit, and built on data acquired by contentious means, this perception will continue to act as a barrier to the development of AI, and one which will promote a strong regulatory response.

Trust has emerged as a significant, if shifting, focus across the various AI reviews. It has been described as ‘a central driver for widespread acceptance of AI’ (Australian Government Department of Industry, Science and Resources, 2023, p. 4), while Michael Birtwistle of the Ada Lovelace Institute observed to the UK House of Commons inquiry that, in order to realise the economic benefits of AI, ‘we need public trust; we need those technologies to be trustworthy, and that is worth investing regulatory capability in’ (House of Commons Science, Innovation and Technology Committee, 2023, p. 29).

In proposing a ‘pro-innovation’ approach to AI regulation in the UK, the Department for Science, Innovation and Technology and the Office for Artificial Intelligence observed:

Trust is a critical driver for AI adoption. If people do not trust AI, they will be reluctant to use it. Such reluctance can reduce demand for AI products and hinder innovation (Department for Science, Innovation & Technology and Office for Artificial Intelligence, 2023, p. 33).

There is a need for clarity in these policy discussions about what is meant by ‘trust in AI’. The term often defaults to developing trustworthy AI systems, where trustworthiness equates to meeting defined technical standards or being seen to engage in risk assessment and risk mitigation. Yet the question of trust in AI will inevitably be bound up with wider debates around trust in institutions, and trust in both communicative processes and the corporate and government entities engaged with them. It can thus be seen as a subset of wider debates around the future of mediated trust.

References cited

Australian Government Department of Industry, Science and Resources. (2023). Safe and Responsible AI in Australia: Discussion Paper. Australian Government. https://consult.industry.gov.au/supporting-responsible-ai

Bodó, B. (2021). Mediated trust: A theoretical framework to address the trustworthiness of technological trust mediators. New Media & Society, 23(9), 2668–2690.

Department for Science, Innovation & Technology and Office for Artificial Intelligence. (2023). A pro-innovation approach to AI regulation. GOV.UK. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#part-3-an-innovative-and-iterative-approach

Eyal, G. (2019). The Crisis of Expertise. Polity.

House of Commons Science, Innovation and Technology Committee. (2023). The Governance of Artificial Intelligence: Interim Report (Ninth Report of Session 2022–23). House of Commons. https://committees.parliament.uk/work/6986/governance-of-artificial-intelligence-ai/

Schäfer, M. (2016). Mediated trust in science: Concept, measurement and perspectives for the ‘science of science communication’. Journal of Science Communication, 15(5), 1–7.

Back to Part 3.
