
AI Transparency: Why We Need Mandatory Labelling Before It's Too Late

31 January 2026 by Tricore Tech


The Tricore Tech team carries an unwavering passion for France: its culture, its rigour, its insistence on provenance. In France, you can't call it Champagne unless it comes from Champagne. You can't call it Roquefort unless it's matured in the caves of Roquefort-sur-Soulzon. French consumers demand to know authenticity, origin, truth. Centuries of experience protecting consumers through mandatory disclosure built that expectation. Sometimes it may appear excessive, but the principle holds.

So here is what puzzles us: why do we accept less transparency about AI-generated content than we demand about where our brie comes from? Just as GMO food labelling protects consumers' right to know what they're consuming, AI-generated content must carry clear disclosure. The question is not whether transparency should be mandated. It is whether we will act before uncertainty erodes education, before justice is distorted by unverifiable evidence, and before democratic debate is weakened by content no one can reliably trace or trust. Hesitation does not merely create victims: it corrodes the systems that rely on truth to function.

Regulating Truth in the Age of "Synthetic Media"

September 2025. The Federal Court in Australia ordered Anthony Rotondo to pay $343,500 in penalties for posting deepfake images of six high-profile Australian women on MrDeepFakes.com. One victim told the court she felt "horrified" and "completely without agency" despite knowing the images were fake. Justice Erin Longbottom described his actions as "serious, deliberate and sustained." Rotondo admitted he did it because it was "fun."

December 2024. France's National Assembly introduced Bill No. 675 - XVII Legislature, requiring social media users to explicitly label AI-generated or AI-altered images. This followed mounting pressure from a society watching deepfakes disrupt elections, spread disinformation, and destroy reputations. These aren't isolated incidents. They're symptoms of a crisis we're not addressing fast enough.

The Rotondo case represents Australia's first major enforcement action under the Online Safety Act for deepfake abuse. It sent a message. But the technology moves faster than legal precedent. By the time judges hand down penalties, thousands more deepfakes have been created, shared, monetised. France's legislative response emerged from practical necessity. When democratic processes get destabilised by AI-generated political propaganda, when citizens can't distinguish authentic campaign messages from fabricated ones, you either regulate or watch democracy erode. Both cases reveal the same truth: we can no longer trust what we see, hear, or read without verification. And that's not just a philosophical problem: it's an operational crisis affecting courts, elections, businesses, and individual lives.

Beyond Individual Cases: A Systemic Collapse


The Rotondo penalty and French legislation address symptoms. The disease runs deeper. Consider what happens when AI-generated content operates without disclosure requirements across different domains.

Digital evidence carries evidentiary weight because we've historically assumed authenticity. That assumption no longer holds. A recent NSW parliamentary research paper notes there's currently no foolproof way to classify images, videos, or audio as authentic or AI-generated. Legal experts warn this threatens fundamental principles of justice. Innocent people could be convicted based on fabricated evidence. Guilty parties might evade accountability by claiming genuine evidence is AI-generated. The implications extend beyond criminal proceedings into family law cases involving custody disputes, civil litigation where photographic evidence determines liability, workplace tribunals where video recordings supposedly document misconduct. Every domain relying on digital evidence now operates in an environment where authentication has become nearly impossible. Australian courts are developing authentication protocols, but these remain reactive. We're treating symptoms while the disease spreads.

January 2024. An audio deepfake falsely impersonated President Biden, telling voters not to go to the polls. While quickly debunked, the incident exposed democracy's vulnerability. When voters cannot distinguish authentic campaign messages from AI-generated propaganda, informed democratic choice becomes impossible. France experienced similar threats during recent elections. Their legislative response wasn't ideological: it was pragmatic recognition that democracies require verifiable information. When the foundation of informed consent erodes, everything built upon it collapses. Australia's federal election approaches. The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 criminalises sexually explicit deepfakes but doesn't address political manipulation through AI-generated content. We've protected individuals from intimate image abuse while leaving our democratic processes vulnerable to coordinated disinformation campaigns.

Early 2024 saw an employee of UK engineering firm Arup transfer $25 million to criminals after participating in what appeared to be a legitimate video conference with senior management. Every participant was an AI-generated deepfake. The sophistication required for this attack a year ago now exists in consumer-grade software. An 82-year-old American retiree, Steve Beauchamp, lost his entire $690,000 retirement fund to an AI deepfake investment scam featuring what appeared to be Elon Musk. He told The New York Times: "The picture of him, it was him." The visual authenticity overcame a lifetime of caution. The U.S. Financial Crimes Enforcement Network observed increasing suspicious activity reports from financial institutions describing suspected deepfake media use in fraud schemes throughout 2024. Australia's financial regulators issued similar warnings, but without mandatory AI content labelling, financial institutions and consumers operate without the basic tool needed to verify authenticity.

Musicians discover that AI-generated tracks using their vocal signatures appear on streaming platforms. Visual artists find their styles replicated through AI systems trained on their portfolios without compensation or attribution. Writers encounter AI-generated content mimicking their voice. France's High Council for Literary and Artistic Property (CSPLA) launched a mission in April 2024 addressing remuneration for cultural content used by AI systems. Their December 2024 amendments to intellectual property law require any artwork generated using AI tools to include the mention "work generated by AI" and credit the authors whose works inspired the generation. Australia lacks comparable protections. Consumers purchasing what they believe is authentic human creativity often fund AI prompt engineering instead. This doesn't just defraud consumers; it systematically undermines creative professionals' livelihoods.

The Internet Watch Foundation's July 2024 report revealed over 3,500 AI-generated criminal child sexual abuse images uploaded to a dark web forum. Australia's eSafety Commissioner took enforcement action against providers of AI "nudify" services creating deepfake sexualised images of Australian schoolchildren. In 2025, entire school communities across Australia faced turmoil when students discovered fake nude images of themselves or their peers circulating. These weren't leaked intimate images. They were synthetic creations requiring only one photograph and accessible AI tools. eSafety Commissioner Julie Inman Grant noted that nudify services attract approximately 100,000 Australian visitors per month. She described deepfake image-based abuse as the "fastest growing threat to women and girls online today," with 99% of deepfake pornography targeting females.

Where We Stand: The "Global Patchwork"


The regulatory landscape resembles a construction site. Some jurisdictions are building comprehensive frameworks. Others remain in planning stages. Many haven't broken ground.

The European Union's AI Act entered into force on August 1, 2024, with full enforcement expected by August 2026. Article 50 imposes disclosure requirements on AI-generated content. The framework operates on risk-based classifications, with stricter requirements for high-risk AI applications. France moved beyond EU minimums. Bill No. 675 - XVII Legislature requires explicit labelling of AI-generated social media images and mandates that platforms implement technical tools for AI content detection and verification. Their amendments to intellectual property law establish that transparency regarding AI-generated works carries paramount importance, not secondary consideration. The principle mirrors France's centuries-old appellation system: disclosure enables informed choice.

Without comprehensive federal legislation, U.S. states introduced over 1,080 AI-related bills in 2024 and 2025, enacting 186 laws. California passed 30 AI-related laws. The California AI Transparency Act (SB 942), effective January 1, 2026, mandates that AI systems publicly accessible within California with over 1 million monthly visitors implement measures disclosing when content has been generated or modified by AI. Penalties reach $5,000 per violation per day for non-compliance. Assembly Bill 2655 requires large online platforms to identify and block deepfakes related to elections and label certain content as inauthentic during specified periods before and after elections. Colorado enacted the Colorado AI Act, establishing risk-based frameworks requiring impact assessments and documentation for high-risk AI systems used in consequential decision-making. Michigan criminalised creating and distributing nonconsensual intimate AI deepfakes, with enhanced penalties for extortion, harassment, or profit motives. The pattern across U.S. states suggests mandatory disclosure requirements aren't theoretical ideals but practical governance responses to demonstrated harms.

Australia has taken what diplomats call a measured approach, which translates to "we're still working this out." The first tranche of privacy reforms passed in 2024 introduced transparency obligations around automated decision-making, effective December 2026. The Office of the Australian Information Commissioner released guidance in October 2024 on privacy considerations for businesses using AI products and for developers training generative AI models. The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 created offences around non-consensual transmission of sexually explicit material, with specific provisions addressing AI-generated deepfakes. Maximum penalties reach six years' imprisonment. The eSafety Commissioner has been proactive within its existing authority. The Rotondo case demonstrated enforcement capability. But enforcement actions address individual violations, not systemic challenges.

Australia still lacks comprehensive AI-specific legislation. The government's September 2024 proposal for mandatory guardrails for high-risk AI identified ten key areas including governance, transparency, human oversight, and challenge mechanisms. This proposal hasn't progressed into law. In December 2025, the government paused work on standalone AI legislation, instead relying on existing technology-neutral laws and sector regulators. This approach assumes current frameworks adequately address AI challenges. They don't.

While Australia hasn't mandated AI labelling through legislation, several initiatives are developing voluntary frameworks that deserve recognition. Perth-based AIUC Global has created the AI Usage Classification system, a nuanced framework providing five distinct classifications: AI-Free, Human-Led, Co-Created, AI-Led, and AI-Generated. Unlike binary disclosure approaches, AIUC recognises the spectrum of human involvement in AI-assisted work. The framework promotes transparency without imposing value judgments, enabling organisations to describe honestly how AI contributed to their deliverables while maintaining accountability.
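
To make that spectrum concrete, here is a minimal sketch, in Python, of how a five-tier classification like this might be captured in a content-management workflow. Only the five tier names come from the framework described above; the class and function names, and the one-line glosses, are our own hypothetical illustration, not part of AIUC Global's published materials.

from enum import Enum


class AIUsageClass(Enum):
    """The five AIUC tier names; the inline comments are our own gloss, not AIUC's definitions."""
    AI_FREE = "AI-Free"            # no AI involvement at any stage
    HUMAN_LED = "Human-Led"        # humans author the work; AI assists at the margins
    CO_CREATED = "Co-Created"      # substantive contributions from both human and AI
    AI_LED = "AI-Led"              # AI produces the draft; humans direct and review
    AI_GENERATED = "AI-Generated"  # produced by AI with minimal human involvement


def disclosure_line(classification: AIUsageClass) -> str:
    """Return a human-readable disclosure statement for a deliverable."""
    return f"AI usage classification: {classification.value}"


print(disclosure_line(AIUsageClass.CO_CREATED))  # AI usage classification: Co-Created

A structure like this makes the disclosure machine-readable as well as human-readable, which is what lets it sit alongside technical watermarking rather than replace it.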

AIUC Global's approach addresses a gap that purely technical solutions miss. The C2PA (Coalition for Content Provenance and Authenticity) watermarking tells you whether content has been modified. AIUC classification tells you how and to what extent. The framework includes a Code of Practice, Navigator platform for accessing classification badges, and a public licensee register enabling verification. Perth companies including Spot Solutions, Mentor it Forward, and others have adopted the framework, demonstrating local leadership in AI transparency.

Australia's Department of Industry released complementary guidance in December 2025 titled "Being Clear About AI-Generated Content," which references C2PA for technical watermarking. The C2PA standard, developed by a coalition including Adobe, Microsoft, the BBC, Intel, and over 300 organisations, provides specifications for embedding cryptographically signed "Content Credentials" into digital media. These credentials create tamper-evident records showing who created content, when, with what tools, and whether it has been edited or generated by AI.
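
The real C2PA specification binds its manifests to media using X.509 certificate chains and standardised assertion formats. The sketch below is only a conceptual illustration of the underlying idea, a tamper-evident record bound to a content hash, written against Python's standard library. The HMAC stands in for C2PA's certificate-based signatures, and every name in it is illustrative rather than part of any real SDK.

import hashlib
import hmac
import json
from datetime import datetime, timezone

# Stand-in for a signing key; real Content Credentials use certificate-based signatures.
SIGNING_KEY = b"demo-key-not-for-production"


def make_provenance_record(content: bytes, creator: str, tool: str, ai_generated: bool) -> dict:
    """Build a provenance record bound to the content's hash, then sign it."""
    record = {
        "creator": creator,
        "tool": tool,
        "created": datetime.now(timezone.utc).isoformat(),
        "ai_generated": ai_generated,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify(content: bytes, record: dict) -> bool:
    """Check that the record is untampered and still matches the content."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


image = b"...raw image bytes..."
credential = make_provenance_record(image, creator="Jane Doe", tool="ImageGen 2.0", ai_generated=True)
print(verify(image, credential))            # True: record intact, content unchanged
print(verify(b"edited bytes", credential))  # False: the content no longer matches the record

The point of the illustration is the design choice, not the code: because the record carries a hash of the content and is itself signed, any later edit to either the media or the label is detectable.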

The combination of AIUC's disclosure language and C2PA's technical watermarking offers comprehensive transparency. One provides the semantic framework for describing AI involvement. The other provides cryptographic verification. Together, they enable both meaningful disclosure and technical authentication. The problem is that voluntary adoption moves too slowly. Harms are already demonstrated. We don't need more evidence that deepfakes destroy lives, corrupt elections, and enable fraud. We need mandatory requirements built on these existing foundations.

The Australian Prudential Regulation Authority (APRA) and the Australian Securities and Investments Commission (ASIC) elevated AI governance to a strategic priority for 2025-26. ASIC's report "Beware the Gap: Governance Arrangements in the Face of AI Innovation" urged financial services licensees to ensure governance practices keep pace with AI adoption. Directors and senior executives face pressure from regulators under the Financial Accountability Regime while lacking clear compliance benchmarks specific to AI deployment. The Privacy and Other Legislation Amendment Bill 2024 introduces penalties up to fifty million dollars or thirty percent of annual revenue for serious privacy violations. Australia operates in regulatory limbo. Accountability expectations increase while guidance remains voluntary and fragmented.

What Must Change for Australian Organisations and Politics


We're past the point where voluntary frameworks suffice. The technology has outpaced gentlemen's agreements. Australia needs mandatory AI content labelling with meaningful enforcement.

Every piece of AI-generated or AI-modified content accessible in Australia must carry clear, visible disclosure. The labelling must be standardised and immediately recognisable. Not buried in metadata. Not hidden in terms of service. Visible, clear, unmistakable. California's approach provides a useful template. Systems with significant reach face daily penalties for non-compliance. This creates economic incentives for compliance while avoiding criminalisation of minor violations.

Australia should build upon existing frameworks. AIUC Global's five-tier classification system offers precisely the nuanced disclosure language needed. Combined with C2PA's cryptographic watermarking for technical verification, we have comprehensive foundations ready for mandatory implementation. The voluntary guidance released in December 2025 should become mandatory requirements by 2027. Industry has had time to experiment. Perth companies are already demonstrating leadership. Now we need legislative teeth behind what they've pioneered voluntarily.

The Rotondo case demonstrated that significant financial penalties can be imposed for malicious deepfake creation. We need to extend this principle to commercial and political contexts. Organisations that fail to disclose AI-generated content in commercial advertising should face fines proportional to their revenue. Political campaigns using undisclosed AI content in electoral materials should face penalties that actually deter the behaviour. Criminal penalties should apply when undisclosed AI content is used for fraud, defamation, or election interference. The existing Criminal Code provisions address sexually explicit deepfakes but leave other harmful uses unaddressed.

Social media platforms, content-sharing services, and online marketplaces operating in Australia must implement AI detection tools. Platforms should face liability for knowingly hosting undisclosed AI content used for harmful purposes after being notified. The eSafety Commissioner's removal notice powers provide foundation, but these need expansion to cover non-intimate AI content causing verifiable harm. Clear processes for users to challenge and verify content authenticity must be standard features. When authenticity determines whether content influences elections, affects court proceedings, or drives financial decisions, verification can't be an afterthought. The C2PA verification tools already exist and are freely available. Platforms should be required to integrate them.

Certain domains require stricter AI regulation beyond general labelling requirements. Courts should prohibit or strictly regulate AI-generated evidence without authentication protocols that judges can actually apply. Medical AI systems must maintain human oversight for diagnostic decisions. Employment AI tools should require impact assessments demonstrating they don't perpetuate discrimination. Financial institutions using AI for credit decisions, investment advice, or fraud detection need clear regulatory frameworks, not voluntary guidelines.

Public education campaigns explaining AI capabilities and risks need government funding and coordination. Judges, law enforcement, regulatory bodies require training to identify and handle AI-generated content. Digital literacy programs in schools should include AI authentication as core curriculum. Research into AI detection technologies deserves public investment. The arms race between AI generation and AI detection will determine whether authentication remains feasible. The Australian AI Safety Institute, announced in November 2025, should prioritise development of detection technologies and authentication protocols as core research areas.

International alignment with Australian adaptation is essential. Australia shouldn't reinvent frameworks when effective models exist. The EU AI Act provides comprehensive foundation. We should align with its principles while adapting implementation to Australian context. Perth's AIUC Global has already created disclosure language that works. Over 300 organisations globally participate in C2PA technical standards. Australia should formally endorse both AIUC classifications and C2PA watermarking as complementary foundations for mandatory labelling requirements. Sharing enforcement learnings with jurisdictions tackling similar challenges accelerates everyone's progress. The Rotondo case offers insights other countries can learn from. Australia should systematically document and share these experiences.

The objection is familiar, heard often in both France and Australia: regulation stifles innovation. History proves otherwise. Food safety regulations didn't destroy food industries: they built consumer trust that enabled those industries to flourish. Privacy laws didn't eliminate data-driven businesses: they established guardrails within which legitimate operators could thrive while bad actors faced consequences. A 2025 University of Melbourne and KPMG study found only 30% of Australians believe AI benefits outweigh risks. This trust deficit represents both challenge and opportunity. Strong transparency requirements can rebuild public confidence while creating competitive advantage for Australian businesses committed to ethical AI practices.

Australian policymakers must weigh the cost of regulation against the far greater cost of inaction on AI transparency. According to eSafety Commissioner data, 99% of deepfake pornography targets women. Nudify services attract 100,000 Australian visitors monthly. School communities face turmoil as students discover synthetic explicit images of themselves circulating. Financial fraud using deepfakes cost victims hundreds of millions globally in 2024. These aren't theoretical risks requiring more study. They're current harms demanding immediate response.

The technology exists to implement mandatory labelling. C2PA specifications are freely available. The regulatory models exist in jurisdictions that acted decisively. The public support exists among Australians who recognise current approaches aren't working. The only missing ingredient is political will to act before the next victim pays the price for our hesitation.

Transparency as Foundation


The Rotondo case, French legislation, European frameworks, U.S. state laws: all point to the same conclusion. AI transparency isn't a luxury feature for when we have time. It's a fundamental necessity for a functioning society. We demand to know what's in our food, what chemicals are in our products, what ingredients comprise our medicines. We have an equal right to know when content has been created or modified by artificial intelligence.

Australia has the opportunity to lead in the Asia-Pacific by establishing robust AI transparency frameworks that protect citizens while enabling innovation. The path forward exists. The need is urgent. Whether we'll act while there's still time to build trust rather than rebuild it after systematic failure: that's the question facing Australian leaders in 2026. The technology exists. The regulatory models exist. The only question is whether we'll implement them before the costs of delay become unbearable.

Trust does not slow progress: it secures it. Transparency makes trust possible. Without trust, progress ultimately collapses.


At Tricore Tech, we believe that responsible AI isn't a constraint on innovation: it's the foundation that makes innovation sustainable. Our team combines deep expertise in digital transformation, ethical governance, and human-centred design to help organisations implement AI with confidence and integrity.

We work with Australian businesses at every stage of their AI journey, from assessing governance readiness to building frameworks that meet today's standards and anticipate tomorrow's regulatory landscape.

If the challenges outlined in this article resonate with your organisation, we'd welcome the conversation.

Let's build AI you can trust.

Contact Us

Written by a human. Grammar polished by AI. Arguments and opinions entirely the author's own. Image generated by AI for Tricore Tech | Perth.
