A quiet transformation is reshaping the global insurance landscape. In the United States, major insurers including AIG, Great American and W.R. Berkley have asked regulators for permission to exclude AI-related exposures from corporate policies. Some have already introduced broad exclusions that could erode protection worth billions in potential claims. The message is unmistakable: the insurance industry is reconsidering its exposure to artificial intelligence risks.
For Australian businesses, this development carries profound implications. We've embraced AI as enthusiastically as our American counterparts, embedding machine learning into credit decisions, medical diagnostics and operational planning. Yet our insurance landscape differs markedly. We lack the litigation culture that drives much of US insurance pricing, but we also lack the regulatory clarity that European firms now navigate under the EU's AI Act. This leaves Australian enterprises in a peculiar middle ground where liability questions remain largely unanswered, and the protective mechanisms we've long relied upon may prove inadequate when tested.
How AI Transformed Operations Beneath Conscious Risk Assessment
Australian businesses have embedded artificial intelligence into operations with remarkable speed. Financial institutions deploy machine learning algorithms for credit assessments and fraud detection. Healthcare providers analyse medical imaging through AI-powered diagnostics. Manufacturing operations anticipate equipment failures using predictive maintenance systems. What began as experimental programmes barely three years ago now forms core operational infrastructure across industries.
This transformation occurred beneath conscious risk assessment. Previous technological shifts allowed insurance coverage to evolve alongside adoption. Artificial intelligence differs: liability crystallises faster than protective mechanisms can form. Companies that implemented AI to gain competitive advantage now discover they may have created uninsured exposures of uncertain magnitude.
Consider specific deployments observed in the Australian market. A logistics company uses AI to optimise delivery routes, creating algorithmic decision-making that affects driver employment conditions. A property developer employs machine learning for valuations, introducing questions of professional liability if the models produce inaccurate assessments. A retailer implements AI-driven inventory management that makes autonomous purchasing decisions. Each represents a different risk profile. Existing insurance frameworks struggle to address any of them comprehensively.
The Regulatory Paradox: Accountability Without Clarity
The Australian Prudential Regulation Authority and the Australian Securities and Investments Commission elevated AI governance to a strategic priority for 2025-26, signalling heightened supervisory scrutiny across the financial services sector. ASIC, through its report "Beware the Gap: Governance Arrangements in the Face of AI Innovation", urges financial services and credit licensees to ensure their governance practices keep pace with accelerating AI adoption.
Directors and senior executives face immediate pressure from this regulatory attention. The Financial Accountability Regime, which commenced for insurance companies in March 2025, extends the banking sector's executive responsibility framework. The FAR imposes strengthened accountability requirements. Executives potentially face income loss, sector disqualification and individual civil penalties for organisational contraventions involving AI governance failures.
Australia deliberately eschews prescriptive regulation, favouring voluntary frameworks instead. The National AI Centre's Guidance for AI Adoption, introducing six essential practices known as AI6, represents the primary government reference point for organisations using AI. This guidance remains voluntary, despite its comprehensive scope. The Australian government paused work on standalone AI-specific legislation in December 2025, instead relying on existing technology-neutral laws and sector regulators.
Directors operate under frameworks like FAR whilst lacking clear compliance benchmarks specific to AI deployment. Meanwhile, the Privacy Act's strengthened penalty regime, most recently extended by the Privacy and Other Legislation Amendment Act 2024, exposes businesses to penalties for serious privacy violations of up to the greatest of fifty million dollars, three times the benefit obtained, or thirty per cent of adjusted turnover. AI systems that process personal data for training, deployment or decision-making create substantial regulatory exposure.
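To make that ceiling concrete, the maximum penalty is simply the greatest of those three amounts. A minimal sketch in Python, with purely illustrative figures and no pretence of legal advice:

```python
def max_privacy_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Upper bound on civil penalties for serious privacy violations
    under the amended Privacy Act: the greatest of a fixed $50m,
    three times the benefit obtained, or 30% of adjusted turnover.
    A simplified illustration in AUD, not legal advice.
    """
    return max(50_000_000, 3 * benefit_obtained, 0.30 * adjusted_turnover)

# Illustrative only: a business with $500m adjusted turnover and no
# quantifiable benefit faces a theoretical cap of $150m.
print(max_privacy_penalty(benefit_obtained=0, adjusted_turnover=500_000_000))
```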
The Coverage Void: Insurance Ambiguity in Practice
Australian insurance policies remain largely devoid of explicit AI exclusions, creating an appearance of protection that proves misleading upon examination. Professional indemnity policies, designed to cover errors and omissions in professional services, contain ambiguities around algorithmic decision-making attribution. When an AI system produces advice or analysis causing client loss, determining whether claims fall within policy coverage requires untangling questions the policy language never anticipated.
Directors and officers liability policies face similar challenges. These policies typically require causal links between claims against directors or officers and wrongful acts committed in their capacity as company leaders. Autonomous AI decision-making scenarios complicate establishing whose conduct gave rise to claims. Do the acts belong to humans who deployed the AI, to the AI itself as an autonomous agent, or to the software provider who created the system?
Herbert Smith Freehills, in recent analysis of the Australian insurance landscape, identified specific complications affecting claims as AI technology evolves. When AI causes harm, it may be unclear whether responsibility falls on the company, the AI provider or another party. Most policies do not yet include AI-specific exclusions, and insurers will likely decide whether to price in or exclude these risks as they become better defined.
The concept of "silent AI" coverage mirrors the silent cyber problem that plagued insurance over the past decade, when insurers unknowingly covered cyber incidents under general policies never designed for such risks. A silent AI problem may now be emerging, with insurers inadvertently covering AI risks, including financial, operational, regulatory and reputational exposures arising from deployment and use.
Lockton Australia emphasises that regulatory scrutiny of AI is increasing, particularly regarding data privacy and consumer protection. Under the Privacy Act's strengthened penalty regime, businesses could face fines reaching fifty million dollars or more for serious privacy violations involving AI systems. This regulatory exposure exists independently of insurance coverage, creating scenarios where organisations confront penalties that no current policy contemplates covering.
Limited Market Solutions for Australian Businesses
Nascent affirmative AI coverage exists globally, yet Australian businesses face limited domestic options. Munich Re developed policies covering losses when AI models fail to perform as expected. Coalition, a cyber insurance provider, recently added an AI endorsement to its cyber policies. Armilla Insurance, underwritten by Lloyd's syndicates, offers warranties ensuring AI models perform as intended by developers.
These products represent efforts to recognise and insure AI exposures, potentially providing policyholders with clearer protection. They remain in early stages, with limited market penetration and uncertain scope. Mid-market Australian businesses face sparse options. Insurance brokers, according to recent industry analysis, have tended to reassure clients that existing policies suffice for AI unless gaps are apparent.
Our Approach: Integrated Governance From Inception
At Tricore Tech, we recognise that the insurance gap facing Australian businesses requires solutions beyond waiting for market products to emerge. We've built our approach on the premise that technology should connect people rather than isolate them, and that effective AI governance cannot be retrofitted onto existing deployments but must be architected from the beginning.
Our AI advisory services embed comprehensive AI governance, risk assessment and ethics frameworks aligned with Australian standards like AI6 directly into technology solutions. We combine expertise in development, AI systems, ERP integration, compliance and strategic thinking to bridge technology implementation with human connection. Grounded in rigorous ethical standards and Australian compliance frameworks, we demonstrate that innovation emerges when diverse perspectives unite to deploy technology responsibly.
This commitment to ethical AI governance addresses the concerns that regulators like ASIC and APRA have articulated. By integrating transparency, human oversight and commitment to values that protect people alongside efficiency gains, organisations can deploy AI's transformative power whilst mitigating the liability risks that concern insurers. Properly governed AI deployment can simultaneously advance business objectives and reduce organisational exposure.
What Australian Organisations Should Do Now
The convergence of regulatory scrutiny and insurance ambiguity demands proactive governance rather than reactive compliance. We recommend organisations begin by auditing current AI deployments comprehensively. Where has AI been embedded in business processes? Which functions rely on algorithmic decision-making? What data sources train these models? Who maintains oversight of AI system performance?
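As a minimal sketch of how such an inventory might be structured, assuming an organisation wants to record deployments programmatically (every field name here is illustrative rather than drawn from any standard):

```python
from dataclasses import dataclass, field

@dataclass
class AIDeployment:
    """One entry in an AI system register; the fields mirror the audit
    questions above. An illustrative structure, not a prescribed format."""
    name: str                      # e.g. "credit-scoring-v3"
    business_process: str          # where the AI is embedded
    decision_autonomy: str         # "advisory", "human-approved" or "autonomous"
    training_data_sources: list[str] = field(default_factory=list)
    oversight_owner: str = ""      # who monitors system performance
    last_reviewed: str = ""        # ISO date of last governance review

register = [
    AIDeployment(
        name="credit-scoring-v3",
        business_process="consumer lending decisions",
        decision_autonomy="human-approved",
        training_data_sources=["loan-history", "bureau-data"],
        oversight_owner="Head of Credit Risk",
        last_reviewed="2025-11-01",
    ),
]

# Flag entries with no named owner: each is an oversight gap to close.
gaps = [d.name for d in register if not d.oversight_owner]
```

Even a register this simple forces the questions above to be answered deployment by deployment, and gives insurers and regulators something concrete to examine.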
Following this audit, assess which existing policies might respond to AI-related claims. Review professional indemnity coverage for limitations on algorithmic advice or analysis. Examine directors and officers policies for language around autonomous decision-making attribution. Consider product liability frameworks in the context of AI-powered devices or services. Scrutinise cyber policies for both AI-related coverage and potential AI exclusions.
Human oversight protocols become critical both for operational integrity and for demonstrating reasonable care in potential liability scenarios. ASIC's guidance emphasises that AI-generated decisions should be reviewed by professionals to validate accuracy and reliability. This human-in-the-loop approach not only reduces error rates but also establishes evidence of reasonable governance should disputes arise.
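As a minimal illustration of that pattern, assuming a model that reports a confidence score alongside each decision (the threshold, field names and logging helper below are hypothetical, not drawn from ASIC's guidance):

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # confidence below this routes to a human reviewer

@dataclass
class Decision:
    outcome: str       # e.g. "approve" or "decline"
    confidence: float  # the model's own confidence estimate, 0.0 to 1.0

def log_for_audit(decision: Decision) -> None:
    # Persist the decision and its confidence so an oversight trail exists;
    # a real system would write to durable, tamper-evident storage.
    print(f"audit: {decision.outcome} @ {decision.confidence:.2f}")

def route_decision(decision: Decision) -> str:
    """Route low-confidence or adverse outcomes to human review. A sketch
    of the human-in-the-loop pattern, not a compliance-certified control."""
    log_for_audit(decision)
    if decision.confidence < REVIEW_THRESHOLD or decision.outcome == "decline":
        return "escalate_to_human_reviewer"
    return "auto_process"

# An adverse outcome is escalated even at high confidence.
print(route_decision(Decision(outcome="decline", confidence=0.97)))
```

Escalating every adverse outcome regardless of confidence reflects the underlying principle: the decisions most likely to be disputed deserve the strongest oversight trail.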
Board-level engagement with AI governance cannot be delegated entirely to technology functions. Directors require sufficient understanding of AI deployments to discharge their oversight responsibilities under frameworks like FAR. Regular board reporting on AI governance, including incident reviews and compliance assessments, establishes the documented oversight that regulators expect.
Why Delay Increases Exposure
The temptation to adopt a wait-and-see approach whilst insurance markets develop clearer products is understandable but misguided. Each month of delay represents additional AI deployment without adequate governance or insurance protection. Claims arising from current AI operations could materialise years into the future. A credit decision algorithm deployed today might generate discrimination claims in 2027.
Insurers are moving faster than organisations anticipate. Australian policies currently lack widespread AI exclusions, yet global insurers are introducing them at increasing rates. These exclusions, developed in offshore markets, often find their way into Australian policies through global insurance programmes and market precedents.
Australia's voluntary regulatory approach and emerging insurance landscape position businesses as primary risk bearers during this period. Treating this as merely a technical question or compliance checklist misreads the shift occurring. AI adoption without commensurate risk architecture doesn't represent innovation but exposure.
Organisations that integrate robust governance from inception, support initiatives prioritising ethical AI implementation and engage proactively with insurers about coverage needs position themselves to navigate this period successfully. The alternative is operating in an expanding liability gap where regulatory accountability increases whilst insurance protection contracts.
The gap between regulatory expectations and insurance protection won't close by itself. Business leaders must act now, before liability materialises and coverage vanishes.
At Tricore Tech, we help organisations navigate the AI insurance gap through comprehensive AI governance and advisory services. Our approach integrates risk assessment, ethics frameworks and compliance with Australian standards from the very beginning of your AI journey. Contact us to discuss how we can support your organisation's responsible AI deployment.
