Meet Louisa Shen, new Mediated Trust Post-Doctoral Associate

Louisa Shen recently joined the Mediated Trust team as a Post-Doctoral Associate for Trust and AI. She originally trained in literature and history in Auckland, NZ, before working as a technical communicator in the software sector. Her doctoral research, undertaken in Cambridge, UK, developed an extended history of electronic display technology from the 19th century to the present. Her research areas include material and phenomenological histories of electrical and computational engineering, examining their designs, uses, and effects in context. She was previously an early-career co-lead for the Integrated AI Network at the Australian National University, and has taught across telecommunications and computing history, and technology and visual culture.

Can you tell us a bit about your latest research/publication?

My most recent publication is ‘Not the Machine’s Fault: Taxonomising AI Failure as Computational (Mis)Use’, published in AI & Society in April 2025.

The paper argues that AI failures (controversial incidents) should be examined more carefully in order to understand the nature of these commonly reported events. Drawing on the history and philosophy of computing, it advances four categories of failure: (1) technically sound outputs inherent to connectionist programming; (2) machine-world misconfiguration; (3) motivational failure, where technology is deployed for illegitimate ends; and (4) epistemic failure of misapplication, where computing and AI are used to solve the wrong sets of social problems.

These categories allow us to parse more carefully when and why AI fails to live up to its promises. Not every instance of AI gone wrong is due to technical errors or inadequacies; in fact, many harmful AI incidents result from computing being used properly to achieve poor ends, or being ‘over-extended’ (used improperly) to solve difficult problems we do not yet understand well. Examples of these kinds of misuse include using AI to create software that enables exploitative labour practices, or using AI to try to predict child abuse risk.

What interests you the most about this area?

One of the questions obscured by much contemporary rhetoric about AI’s ‘revolutionary’ capabilities and seemingly universal applicability is how the technology is limited. For all its approximation of aspects of human intelligence (such as pattern recognition), current state-of-the-art AI still runs on modern binary computing, a technology that has not fundamentally changed since its wartime inception. Given that computers are machines limited in their form and function, it stands to reason that AI implementations do not overcome these ontological boundaries. By getting back to basics, and re-engaging with computers as electro-numerical machines deployed infrastructurally at scale in both R&D and business contexts, we can make more meaningful diagnoses of what isn’t working in our current attempts to use AI. Historicising technological deployment has been a significant part of my doctoral and postdoctoral work to date, and it provides a long-term perspective on this particular moment of AI commercialisation.

In terms of your work with the ARC Laureate Mediated Trust program, what are you planning to investigate?

Part of building truly trustworthy AI is creating a realistic understanding of what the technology can and can’t do, with a view to making sure we use our tools properly. AI is a sophisticated and powerful set of computational techniques, but it also has shortcomings that continue to create or amplify civic problems in deployment. As part of the ARC Laureate program on Mediated Trust, research that recognises not just the capabilities but also the limitations of AI will help inform evidence-based governance for appropriate, fit-for-purpose deployment across the public and private sectors. My research seeks to move beyond trustworthy AI as an umbrella term for responsible and ethical AI, and instead to focus on trustworthy AI as useful and reliable AI.

You can read the full research paper here.
