Summary
This seminar brought together a cross-disciplinary cohort of scholars to present papers on the veracity of the image and information in public life, and the relationship between visual understanding and societal trust. The event was held as part of the University of Sydney’s School of Art, Communication and English (SACE) 2025 Research Events Series, and included leading scholars in the field. Presentations included discussion of deepfakes, rising concerns about misinformation, declining trust in public institutions such as traditional media, and generative artificial intelligence (GenAI).
Format
The event followed a three-by-three format: two 90-minute sessions in the morning and one in the afternoon, each with three presentations (nine presentations in total).

Image, from left to right: Bruce Isaacs, Donna Brett, Martyn Jolly, Terry Flew, Mark Ledbury, Francesco Bailo, Andrea Carson
Presenters
- Terry Flew, Professor of Digital Communication and Culture, University of Sydney
- Andrea Carson, Professor of Political Communication, La Trobe University
- Martyn Jolly, Honorary Associate Professor, School of Art and Design, ANU
- Mark Ledbury, Professor of Art History and Visual Culture, University of Sydney
- Donna Brett, Associate Professor and Chair of Art History, University of Sydney
- Bruce Isaacs, Associate Professor in Film Studies, University of Sydney
- Dr Francesco Bailo, Lecturer in Data Analytics in the Social Sciences, University of Sydney
- Dr Olga Boichak, Senior Lecturer in Digital Cultures, University of Sydney
- Dr Anna Broinowski, Director of the Master of Film and Screen Arts, University of Sydney
Session 1

Prof Flew argued that contemporary concerns about misinformation are largely consumer-protection issues. Drawing on the theories of Jürgen Habermas, the paper examined the ‘system trust’ model of the German sociologist Niklas Luhmann, and in particular the way Luhmann brackets off personal trust from system trust.
Prof Carson presented on electoral misinformation, public trust and democracy, drawing on her survey of 7,000 Australians conducted during the 2025 federal election campaign. She outlined how the findings aligned with international studies highlighting the dangers of disinformation and the difficulty audiences face in judging the accuracy of information online.
Associate Prof Isaacs applied his work in film studies to an analysis of the final image of Alex Garland’s film Civil War. He posed the question of whether a digitally rendered image can carry the affect of pain, suffering, moral purpose and political intentionality.
Session 2

Prof Ledbury discussed the long history of fakes and what we can learn from early cases of all kinds in the material world of art to help equip ourselves for the flood of fake digital images.
Associate Prof Jolly delved into the history of spirit images using a collection of images from an 1870s photographic album held at the National Gallery of Australia. The album was compiled at a time when established beliefs were being tested on all fronts by science, technology, ideology and religion.
Dr Bailo drew on research (conducted with Milca Stillnovic and Jonathon Hutchinson for a New Media & Society article) which considers GenAI and the undersphere. Using r/StableDiffusion as an example, the talk argued that rigid ‘risk-based’ AI governance frameworks are inadequate for addressing the complex risks that emerge in the repurposing of images.
Session 3
Associate Prof Brett presented an analysis of disaster photographs in the news. Starting with the inaugural issue of the Illustrated London News from May 14, 1842, the paper highlighted how news images have been manipulated and modified to present an image that is both ‘true and untrue’.
Dr Boichak used coverage of large-scale ecological disasters on social media to examine the changing role of eyewitness testimony in public representations of disaster and the facilitation of international humanitarian involvement.
Dr Broinowski used examples from her research on deepfakes to argue that anyone with access to open-source AI software can create credible, often undetectable counterfeits of people and to explore the ethical, aesthetic and regulatory challenges of deepfaking celebrity.
Dr Agata Stepnik discussed digital ethnography.