Agenda – AI, Governance and Trust in Digital Societies
Timing | Session Title | Chair
09:00 – 09:15 | Welcome | Professor Terry Flew (USYD)
09:15 – 10:00 | Opening Keynote (Zoom): Professor Kate Crawford (USC Annenberg)

“Ground Truth & Generative AI”

We are living in a period of rapid acceleration for generative AI, where large language and text-to-image diffusion models are being deployed in a multitude of everyday contexts. From ChatGPT’s training set of hundreds of billions of words to LAION-5B’s corpus of almost 6 billion image-text pairs, these vast datasets – scraped from the internet and treated as “ground truth” – play a critical role in shaping the epistemic boundaries that govern machine learning models. Yet training data is beset with complex social, political, and epistemological challenges. What happens when data is stripped of context, meaning, and provenance? How does training data limit what and how machine learning systems interpret the world? And most importantly, what forms of power do these approaches enhance and enable? This lecture is an invitation to reflect on the epistemic foundations of generative AI, and to consider the wide-ranging impacts of the current generative turn.
10:00 – 10:30 | Keynote Address: AHRC Commissioner Lorraine Finlay

“Why Ethical AI Should Promote Human Rights”

AI promises to improve efficiency and deliver expeditious outcomes. However, when AI is integrated into government decision-making without safeguards, there can be adverse outcomes for all Australians. Listen to Human Rights Commissioner Lorraine Finlay speak on ethical AI and why it should aim to protect and promote human rights.
10:30 – 11:00 | Morning Tea
11:00 – 11:30 | In Conversation: Professor Johanna Weaver (ANU)
11:30 – 12:00 | Presentation: Dr Justine Humphry (USYD)
12:00 – 13:00 | Panel: AI, News, Media and Trust

Professor Terry Flew (USYD), “Mediated Trust and Artificial Intelligence”
In this short presentation I will outline the concept of mediated trust, and how it relates to what I term the ‘Three I’s’ of ideas, interests, and institutions engaged with digital technologies. I will argue that, in contrast with the early Internet, where ideas about digital freedom preceded corporate dominance, artificial intelligence (AI) is already dominated by a small number of powerful corporate interests. The critical question will be what powers governments seek on behalf of their citizens to regulate AI in the public interest, particularly in a post-globalised world characterised by competing tech nationalisms.

Professor Catharine Lumby (USYD), “The Fourth Estate and the Fallibility of the Human and the Technological”
This paper will explore the current anxieties that attend the use of AI in news and related reporting. I will draw on my own experience as a young journalist when computers were first introduced into the newsroom, and reflect on the relationship between human subjectivity and machine-based learning when it comes to building trust with an audience and the Fourth Estate’s aspirations to objectivity.

Dr James Meese (RMIT), “The Press as Platform”
This paper discusses the news media’s growing adoption of platform-like features and highlights the importance of trust during this transition. I go on to offer a short case study exploring the adoption and deployment of recommendation systems across the sector, and identify a series of practical challenges that are yet to be resolved.
13:00 – 14:00 | Lunch
14:00 – 15:00 | Panel: AI, Education Policy and the Future of Learning

Professor Kal Gulson (USYD), Associate Professor Kirsty Kitto (UTS), Associate Professor Roman Marchant (UTS), and Associate Professor Chika Anyanwu (USYD)
15:00 – 16:00 | Panel: Legal and Ethical Challenges of AI

Professor Lyria Bennett Moses (UNSW), “Governance of What? Regulation of Artificial Intelligence, Algorithms and Automation”
With artificial intelligence and ‘algorithms’ being rapidly applied across all sectors of society and the economy, there are understandably calls for increased regulation. But where to start? Many proposals for regulation focus too deeply on technical concepts (such as artificial intelligence, algorithms, automated processing, bots, big data, robots, autonomous systems) rather than the values we seek to protect. By starting with what we want to preserve (for example, fairness and accountability) rather than with how we regulate a set of technical practices (for example, ‘artificial intelligence’), we can achieve the same thing without the drawbacks of unnecessary technological specificity, and capture more of what is on the technological horizon.

Professor Kimberlee Weatherall (USYD), “The ‘Race’ to Regulate”
The discussion over “regulation of AI” is at an … interesting juncture, with Australia in the midst of a Commonwealth-government-led process aiming to identify what, if any, changes need to be made to our legal and regulatory system to ensure human and societal interests are protected in the race to extend the use of artificial intelligence and related technologies. Among the many interesting questions and challenges being raised in this discussion, Weatherall is thinking about two in particular. The first: what changes do emerging technologies and uses of AI and automation bring that require us to rethink not just the obvious areas of law (privacy, discrimination) but other areas as well? The second: if we are convinced that problems are emerging, how, within the practical constraints of Australia’s parliamentary democracy and regulatory system, do we address them? If we can’t get to perfect, what would be good enough?
16:00 – 16:15 | Close | Professor Terry Flew (USYD)