Summer Reflections on Australia’s Social Media Minimum Age Laws

Terry Flew

It is unusual to find yourself, as a digital media researcher in Australia, at the forefront of global policy debates. Given the talk of the three great Digital Empires – the US, EU and China – that set the global agenda, the scope for middle-sized powers to take a policy lead on digital technology would seem limited.

Also, in December-January, Australia traditionally goes into something of a summer snooze: Parliament ends for the year, universities close, media personalities take a “well-earned break” and wish us all a Merry Xmas and a Happy New Year, and cricket and tennis dominate our media and our public debates. The word for this summer was “Bazball”, the short-lived tactic adopted by the English cricket team that failed on Australia’s bouncy cricket pitches.

The passing by the Australian Federal Parliament of the Online Safety Act (Social Media Minimum Age) Amendment on 10 December 2024, and its implementation twelve months later, has changed that. A great deal of global attention has been given to what is wrongly referred to as Australia's social media ban for under-16s: it is, in fact, a restriction on holding accounts with ten designated social media platforms for those under the age of 16.

I have done 10 print and online media interviews, seven TV interviews, and six radio interviews. This has been with media in the US, Canada, France, UK, Japan, South Korea, China, Spain, Germany, Poland, UAE and Latvia, as well as Australia. It has included CNN, BBC, The Times, Al-Jazeera, Reuters and Le Monde. My University of Sydney Media and Communications colleagues Catherine Page Jeffrey, Justine Humphry, Mark Johnson, Tim Koskie and Brittany Fernandez have also done many interviews, as have other leading Australian digital media researchers, including Daniel Angus, Tama Leaver, Amanda Third, Lisa Given, Michael Dezuanni, Susan Grantham and Tanya Notley.

With France, Denmark and the UK all announcing plans to develop social media age restriction laws similar to Australia's, and with Norway, Poland, Malaysia and New Zealand – as well as a number of US states – having flagged intentions to adopt similar legislation, this media interest is unlikely to stop. What appeared in 2024 to be yet another of many policy consultation processes that academics, NGOs, civil society groups and universities were engaging in has become part of a significant worldwide change in social media governance.

In that light, it seems timely to audit where we have been so far with these debates, and some wider lessons for those working in the field, as researchers, activists, policymakers, industry participants, children’s rights campaigners, media professionals, and regulators. I have identified ten emerging characteristics of these debates.

  1. Age restrictions on social media accounts are popular. Research undertaken through my Mediated Trust ARC Laureate project has found that public support for social media age restrictions in Australia has consistently been at about 60-80%. My team and I found that support went up with more public debate, including criticism of such policies. Post-implementation polling in Australia in late 2025 found 65% support for age restrictions, with only 17% opposed. Figures appear to be similar in other countries. Public support in France has been in the 70-80% range, and a majority (67%) of those aged 11-17 there supported such a measure. In the UK, a December 2025 YouGov poll found 74% support for social media age restrictions, with clear majority support across all demographics: gender, political party, region, social class and level of education.
  1. There is a divide between public opinion and the views of many academic experts. While there is strong public support for social media age restrictions, most academics in the field are critical of them. In Australia, 140 academics and representatives of advocacy groups signed an Open Letter in October 2024 opposing such restrictions, on the grounds that they would impinge upon children's rights and that a 'ban' is too blunt an instrument to address risks effectively. An impasse effectively remains: academic opinion on the topic has not reshaped public opinion at all, and explanations of why there is majority public support sometimes lapse into conspiracy theory (i.e. that it is driven by the mainstream media for their own interests). One can, of course, be critical of public policy that caters to populism, but whether the alternative is an appeal to what Jürgen Habermas termed the lure of technocracy – government by experts unburdened by popular opinion or the preferences of democratically elected governments – is a moot point. There is a sense that simply forming committees of academic experts, or funding academic research, is not sufficient evidence that the government is “doing something” about concerns over young people and the impacts of social media on their mental health and well-being.
  1. ‘Actually existing social media’. There is a growing disjuncture between social media as a normative ideal and how corporate social media platforms actually work. The Open Letter signatories proposed that:

    ‘The online world is a place where children and young people access information, build social and technical skills, connect with family and friends, learn about the world around them and relax and play’.

    Well, maybe. Would we say that this is the world we see each day on X, to take one example? Moreover, the argument that the problems lie with bad actors rather than platform design has become increasingly less credible as leading platform companies wind back their commitments to content moderation. This discussion starts to resemble the challenge that faced socialists in Western liberal democracies (particularly the U.S.) in the 1970s and 1980s, who had to differentiate socialism as a normative ideal (egalitarianism, collective ownership, enhanced social provision, etc.) from ‘actually existing socialism’ as practised in the Soviet Union and other ‘people’s democracies’.
  1. Strange bedfellows: I received an email on 24 November 2025 from Solicitor Katherine Deves, seeking input as ‘an expert witness [to] identify a digital civics and practices/centrality of social media to political discourse (especially for young people) expert who could give evidence as to the under 16 social media ban.’ Katherine Deves will be familiar to Australian readers as the unsuccessful Liberal Party candidate for the Federal electorate of Warringah (on Sydney’s northern beaches), who ran on a platform opposing transgender people being involved in women’s sports, with the Liberals experiencing a 6.6% swing against them in the once-safe seat. Deves was acting on behalf of the Digital Freedom Project, founded by Libertarian Party MLC John Ruddick as ‘a group of Australians concerned about increasing government intervention in the digital space’. Most critics of Australia’s social media age restrictions are not libertarians in the commonly understood sense: indeed, many are calling for more, not less, government regulation of social media platforms for the protection of children. But there are some strange political bedfellows in these debates.
  1. State and civil society. Part of the reason why people whose politics are otherwise divergent are finding common ground on this and related issues (such as online hate speech laws) is the difficulty of defining the relationship between state and civil society that is sought in online environments. For libertarians, this is not complex: responsibility lies primarily with parents, secondarily with companies, and government involvement in matters related to communications is a last resort. But the message from critics of social media age restrictions is more muddled, given that they nonetheless want more government regulation of social media platforms. Greater support for digital media literacy is frequently invoked, but there is a question about how much this is a feature of relatively stable middle-class families, which is not where the primary risks associated with social media use lie. There are also calls for safety-by-design principles to be incorporated into platforms, but it is notable that the Office of the Australian eSafety Commissioner has been pursuing such measures with global tech companies for over a decade, and they are embedded in its regulatory approach. There are also arguments for a Digital Duty of Care for platform companies operating in Australia. All such measures can be complementary to social media age restrictions: it is not the case that if you do one, you cannot do the others.
  1. Age matters. Early in the Australian debate, the minimum age being considered for holding a social media account was 14; it became 16 after strong lobbying by State Premiers Peter Malinauskas (South Australia) and Chris Minns (New South Wales). Had the Australian minimum age been set at 14 rather than 16, it would have been broadly in line with the nominal policies of most social media platforms. Facebook and Instagram have had such a policy in place since 2007, but it has only rarely been enforced; had those age limits been enforced, the Australian legislation may never have been required. This raises the question of whether age matters when we discuss children and social media. Unless the rights of the child are held to admit no age distinctions at all, a case can surely be made for some content not being available to, say, 10-year-olds that could be appropriate for 15-year-olds. This has been the case for every other medium prior to social media – think of the age classifications applied to cinemas, for instance. There has been a reflex tendency to dismiss any claim that age matters to social media impacts, and it has proven disastrous when such arguments are put out into the field, where they contend with the perceptions and lived experiences of children, young people and parents.
  1. Interdisciplinarity (except for psychologists?). A striking feature of the Australian academic debate is the disciplinary polarisation it has produced between those who study social media and children from a humanist/cultural studies perspective (including educationalists) and those who approach the work from a social psychology perspective. The latter are engaged in debates about correlation and causation, and seek to track variables that may or may not link changes in the mental health and well-being of young people to smartphones and/or social media. By contrast, the cultural studies approach has tended to draw upon smaller-scale testimonies about the importance of social media to the lives of young people. My own hunch is that the best methodology would involve a mix of these two approaches. But this has proven difficult in practice, due in part to a culture of name-calling towards so-called ‘celebrity academics’ (NYU Professor Jonathan Haidt features prominently here). It is to be hoped that the Evaluation Panel established through the eSafety Commissioner, chaired by Jeff Hancock of the Stanford Social Media Lab, will be able to use interdisciplinary methods to progress this discussion.
  1. The failure of self-regulation. Social media age restrictions in Australia and elsewhere around the world have come about largely as a result of the failure of 30 years of industry self-regulation. In the wake of Section 230 of the Communications Decency Act, passed in the U.S. in 1996, digital platform companies have had considerable freedom to develop their own arrangements for content moderation and site management without government interference. The limits of this have been apparent for at least a decade, with the Cambridge Analytica scandal of 2017 an important inflection point in these debates. Indeed, when appearing before the U.S. Congress in 2018, Facebook CEO Mark Zuckerberg stated that ‘The real question, as the Internet becomes more vital in people’s lives, is what the right regulation is, not whether there should be regulation or not’. But even relatively advanced forms of self-regulation, such as the Oversight Board established by Meta in 2020, have failed to address the underlying problems, which brings the onus of public interest regulation back to nation-state governments, however imperfectly placed they are to regulate platform behaviour. In the absence of some form of global governance, nation-states remain the only truly accountable and legitimate regulatory agents for global platform companies, and the only agents able to impose sanctions and punishments for non-compliance.
  1. Beyond fining and public shaming. For about a decade, governments have responded to public scandals around social media companies in one of two ways. One has been to issue fines for non-compliance with laws: the European Union has taken the lead on this, but it has become more common around the world. The other has been to use forms of ‘public shaming’ as catalysts to action, promoting corporate social responsibility under principles associated with the social licence to operate. Two problems have become apparent with these approaches. The first is that fines have increasingly become part of the cost of doing business for the global tech giants. The second is that the underlying social, moral and ethical issues around unregulated social media platforms have proven too great to be addressed through ad hoc responses to public scandals. As Nick Couldry has observed in his 2025 book The Space of the World:

Between two and three decades ago, humanity made a huge mistake. The mistake was to delegate to businesses, whose overriding goal is profit and value extraction, the construction and management of the spaces where our social life unfolds. We handed over to business the design of our social world. This is something we should never have done (Couldry, 2025, p. 3).

Rather than concerns about the impacts of social media algorithms on young people being a ‘moral panic’ akin to worrying about punk rockers, comics or violent video games, they are at the heart of the contemporary construction of the social world. As Couldry notes, digital space has become social space, and ‘social media platforms, for many purposes, have become our world’ (p. 4). The design of such digital spaces was outsourced to private commercial corporations, so the ‘space of the world’ was designed to optimise sales, attention, traffic and advertising. With the benefit of hindsight over the last 30 years, Couldry concludes that ‘some tasks, like designing the conditions under which social life can be conducted, are just too important to be left to anyone to design, except perhaps those guided by public, indeed socially negotiated, values’ (p. 19).

  1. The Trump factor. The major U.S. tech companies have expressed various concerns about Australian digital platform regulations, including social media age restrictions, the News Media Bargaining Initiative, and proposed Australian content quotas for streaming platforms such as Netflix. They have lobbied the Trump Administration on the grounds that such laws may breach the US-Australia Free Trade Agreement by ‘unfairly discriminating’ against U.S. tech companies, and prominent US Congress members such as Jim Jordan have campaigned against the Australian eSafety Commissioner, Julie Inman Grant. The platform company Reddit is mounting a legal challenge to the Online Safety Act (Social Media Minimum Age) Amendment, which will be heard in the High Court of Australia in March 2026, alongside the action brought by the Digital Freedom Project. Should these cases prove successful, the result would be read internationally as showing that nation-state governments lack the legislative authority to regulate matters affecting U.S.-based global tech giants. As I have noted elsewhere, this would thwart the ambitions of those opponents of social media age restrictions who nonetheless want the Australian Federal government to implement stronger controls over digital platform companies, including quality standards for children’s content, controls on data extraction and algorithmic manipulation, and stronger privacy laws. It would be a bitter irony if nation-state regulations were struck down in favour of a global deregulatory agenda led by U.S. tech companies in coalition with the Trump administration.

