In this video interview, Professor Terry Flew explores the evolving concept of trust in the context of media, technology, and artificial intelligence. He highlights how AI challenges traditional notions of human-machine interaction, raising concerns about authenticity and reliability. Professor Flew discusses the shift from minimal government involvement in the internet to increased regulation due to monopolisation and social media harms. He emphasises the importance of governance frameworks, institutional oversight, and ethical considerations in AI development, including data bias and global capacity disparities. Ultimately, he calls for public-interest-driven regulation to ensure responsible technological advancement and trustworthy communication systems.
Transcript
“So the question of trust has had a long, if somewhat subterranean, history in debates about society, and the media and communications literature has not, for the most part, dealt with the question of trust. Yet this question of trust at a distance is fundamentally about how we communicate with one another, the technologies that we use for those purposes, and the institutions that we look to as guarantors of trust.
I think the biggest impact of AI is that we are never quite sure whether we are dealing with a human or a machine. Think, for instance, of when you deal with a telco or an airline, whatever it might be, and you’re dealing with a bot: it looks as though humans have produced it or are doing it.
And this whole question of what is real and what is fake has taken on a whole new impetus in the last few years, in particular as we deal with the morass of AI-generated text and AI-generated images. And the more complex these tasks become, the more difficult it is to simply rely on computer programs to get it right.
For at least two, possibly three, decades there was largely a view that the role of government was to stay off the internet. From about the mid-2010s, however, we see a growing backlash to that, particularly around concerns about the harmful impacts of social media. In that respect, we also see a growing monopolisation of the tech sector, with a few corporate giants largely displacing the open internet. And this has given us an environment where there’s a greater preparedness on the part of governments to engage more directly in policies related to the internet.
But of course, important here is the question of who oversees such frameworks. Do we rely on companies themselves, in a self-regulatory model, to do so? Well, there’s a lot more doubt about that now, I would say, particularly in light of how we’ve seen social media evolve over a 20-to-30-year period. A technology is never simply a thing or a device or an artefact; it comes packaged with ways of thinking about that technology.
We’ve been talking about things that we now understand to be artificial intelligence for almost two centuries, yet the technologies that enable this are only really starting to emerge in a serious way now. We are not simply relying on technologies to carry human communication; the technologies are themselves communicators in their own right.
Then there are the interests, or if you like, the materiality, of the technologies: who is developing them, what sort of data are they using, what sort of biases are there in that data? Is there evidence of discrimination through it? What about the global distribution of the capacity to do this?
And finally, the institutions: what will be the governance frameworks and regulatory organisations that will have oversight of this? What do we expect them to do? How will we be sure that they act in the public interest?”