Transparency and explainability in AI systems: global perspectives and the road to regulation in Australia

Ken Saurajen, Alex Horder and James Yuan
19 Nov 2025
13 minutes

As the global business landscape becomes increasingly AI-driven, AI systems are being deployed in a wide variety of contexts. From simple customer service chatbots powered by natural language processing models, to systems designed to detect and diagnose cancers trained using deep learning techniques on vast medical data sets, AI is incrementally integrating itself into our lives, being deployed at scale to enhance services, streamline business processes and lower operating costs.

While some use cases may not present a large degree of risk to the user or the organisation deploying them, this is not the case for all AI systems. Some systems designed to provide recommendations, generate predictions or make decisions may carry a real risk of harm, or at least have the potential to impact certain individuals, depending on their intended use cases. For example, automated decision-making (ADM) systems that help insurers decide an individual's eligibility for insurance coverage and the price at which that coverage is offered could have a material impact on an individual's financial profile. In another example, "predictive policing" systems used, among other things, to predict the likelihood of recidivism, could have a significant impact on a person depending on how the system's predictions are construed and relied upon by law enforcement.

Despite the potential economic upside of AI, if these systems behave in a way that is unexpected – if they exhibit bias or produce sub-optimal or even discriminatory outcomes – this could not only cause material detriment to individuals, but also to deploying organisations, who will in many cases be legally (and reputationally) liable for the behaviours of the AI systems they deploy. Accordingly, it is vital that AI systems are subject to stringent governance controls, including (and especially) human oversight.

A key factor in being able to govern AI systems properly, especially those described above, is the extent to which those systems are transparent and their outputs explainable. The greater the degree of transparency and explainability, the more effectively the deploying organisation will be able to monitor system behaviours, proactively mitigate risk, comply with applicable legal requirements and resolve incidents.

Here, we give you:

  • an overview of the concepts of transparency and explainability in AI systems;

  • the relevant legal and regulatory landscape in key jurisdictions;

  • the current treatment of these concepts by Australian lawmakers and regulators; and

  • the steps you can take to ensure your governance regimes effectively pursue and support the deployment of transparent and explainable AI.

What are transparency and explainability?

Transparency refers to the extent to which an AI system's architecture, training data, code and mode of operation are visible and open to interrogation. A transparent AI system enables developers and users to understand how the system thinks, including what logic and assumptions underpin the operation of the system and its underlying model(s) (for example, weightings within neural network architectures), how it was trained and how it uses its training and input data.

Meanwhile, explainability refers to the extent to which an AI system's outputs, decisions, predictions and recommendations, as well as the manner in which they are generated, can be explained in a contextually relevant way. For example, where an AI system flags a credit card transaction as potentially fraudulent, explainability means that the user is able to trace that output back to what caused it to be generated – in this example, it could have been because the transaction originated in a high-risk jurisdiction or exhibited other suspicious features.
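
By way of illustration only (and not as a statement of any particular developer's or deployer's method), the short Python sketch below shows one simple way a fraud flag could be traced back to the input features that drove it, assuming a hypothetical linear scoring model; the feature names, data and values are invented for the example.

```python
# Purely illustrative: a hypothetical fraud-flagging model whose outputs can be
# traced back to the input features that drove them. Feature names and data are
# invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["amount", "high_risk_jurisdiction", "hour_of_day", "txns_last_hour"]

# Synthetic stand-in for historical, labelled transaction data.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = LogisticRegression().fit(X, y)

def explain_flag(transaction):
    """Rank features by their contribution (coefficient x value) to the fraud score.

    For a linear model these contributions, plus the intercept, sum to the
    log-odds of the "fraudulent" prediction, so they form a faithful explanation.
    """
    contributions = model.coef_[0] * transaction
    return sorted(zip(feature_names, contributions), key=lambda kv: -abs(kv[1]))

flagged = np.array([1.8, 1.0, 0.2, 0.5])  # a transaction the model has flagged
for name, contribution in explain_flag(flagged):
    print(f"{name}: {contribution:+.2f}")
```

For more complex model architectures, purpose-built explainability tooling (for example, feature-attribution methods) would typically be needed to achieve a comparable level of traceability.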

Conversely, "Black Box" AI systems are, to varying degrees, opaque. They lack transparency and explainability, in some cases completely. This opacity can be caused by a number of factors, including:

  • the complexity and scale of the models on which an AI system is built, which can make it practically impossible (ie prohibitively difficult or expensive) to explain their inner workings;

  • insufficient information about model design, performance and training, making understanding model behaviour difficult; and

  • the use of closed-source code and proprietary architectures by developing organisations, which, where they cannot be accessed by those seeking to interrogate a system, will make that interrogation difficult.

Why transparency and explainability in AI are important

Where responsibility for any sort of decision-making, prediction or recommendation is being placed into the hands of an AI system, transparency and explainability are essential in ensuring that the system can be governed effectively, that its behaviours can be monitored and that any troubling behaviours, incidents, errors, biases or deficiencies in its operation or outputs can be identified and remedied proactively. This will ultimately assist in building trust and confidence in the AI system, its operation and the organisation deploying it – particularly where the operation of that system has the potential to affect that organisation's stakeholders.

Beyond these governance-related and reputational considerations, however, AI transparency and explainability are powerful mitigants of legal risk, not only in relation to new AI-specific laws that may expressly require some degree of transparency and/or explainability (a raft of which have been enacted in other jurisdictions in recent years), but also in relation to many existing legal and regulatory frameworks.

AI transparency requirements in key jurisdictions

Around the world, we are seeing regulators and lawmakers emphasise transparency and explainability as key tenets of effective AI governance and responsible deployment. Increasingly, this policy emphasis is being enshrined in laws and regulations. Below, we summarise the ways in which some other jurisdictions, particularly Australia's key trading partners, have sought to embed the key principles of transparency and explainability in their respective legislative frameworks.

European Union

The regulation of AI in the EU comprises an overarching, ex ante legislative regime in the Artificial Intelligence Act, as well as indirect regulation such as the EU General Data Protection Regulation (GDPR) and a 2024 revision to the EU Product Liability Directive (PLD 2024/2853).

Artificial Intelligence Act

The EU stands apart in terms of AI regulation, by virtue of the AI Act. The Act entered into force in 2024, but applies on a staggered timetable with a 24-month grace period. This means that certain obligations only apply from August 2026, with full effectiveness in 2027.

The Act is the first example of broad, preventatively focused legislation that regulates the development and deployment of AI according to the level of risk presented by a particular use case: use cases that pose "unacceptable risk" are prohibited; "high-risk" use cases are subject to stringent requirements due to their potential to pose serious risk to health, safety or fundamental rights; and "limited" and "minimal" risk use cases are subject to less onerous regulation, or none at all.

The Act contains transparency requirements in respect of high-risk use cases and "general purpose AI" (GPAI) models specifically. It also imposes general transparency rules for particular use cases irrespective of risk level. It follows, therefore, that for those requirements to be complied with, the relevant systems would also need to be sufficiently transparent and their outputs explainable. However, the Act does not actually contemplate a threshold test against which a system may be deemed sufficiently transparent, or its outputs sufficiently explainable (whether in consideration of the broader risk of the use case or otherwise) – this is presumably a test that will be determined based on particular facts and circumstances, and considered in due course through judicial interpretation as the Act is implemented and enforced.

  • For high-risk use cases, providers must supply clear information and instructions to the deployer about the capabilities, limitations, potential risks, mode of operation and other key features of the relevant system, and design and develop the system in a way that allows deployers to understand its function, capabilities and limitations, allows the system to be effectively monitored and overseen and allows its outputs to be correctly interpreted.

  • For GPAI models, developers must produce technical documentation covering the training, testing and evaluation processes that went into the development of the model, including a detailed summary of the training data used, and provide information to users to aid their understanding of the model's capabilities and limitations. Providers of GPAI models may also sign up to the EU's GPAI Code of Practice (published in July 2025) to demonstrate their compliance with the Act. The code is voluntary and intended to assist organisations in demonstrating compliance with GPAI-related obligations under the Act (albeit that signing the code does not create a presumption of compliance). It also requires that providers maintain a prescribed form of documentation containing core technical and operational information about their GPAI model(s) and ensure the accuracy, security and reliability of that documentation.

  • For certain other use cases (systems directly interacting with individuals, generating synthetic content or deep fakes, or performing emotion recognition or biometric categorisation), the Act mandates disclosure to users of the fact that they are interacting with an AI system, and of the AI-generated nature of system outputs. In the case of systems performing emotion recognition or biometric categorisation, deployers must also inform individuals about the operation of that system.

GDPR

Article 22 of the GDPR gives individuals ("data subjects") the right not to be subject to solely automated decisions that produce legal effects concerning them, or similarly significantly affect them, subject to certain exceptions – such as where the relevant decision is necessary for the performance of a contract between that individual and the party making the decision. Article 15(1) also gives data subjects the right to obtain (from the relevant data controller) information about the use of automated decision-making processes in connection with the processing of their personal data, including confirmation as to the existence of such processes, meaningful information about the decision logic involved, as well as the significance and envisaged consequences of such data processing for the relevant data subject.

Importantly, both the GDPR and the AI Act carry potential extra-territorial application. The GDPR will apply to non-EU organisations where they process the personal data of individuals in the EU in connection with offering them goods or services or monitoring their behaviour, and the AI Act will apply to non-EU organisations where they supply (for distribution or use) an AI system in the EU market, or make the outputs produced by an AI system available in the EU.

China

China's AI regulatory regime is one of the world's most stringent, comprising a combination of general law that regulates AI indirectly, as well as a network of laws and administrative measures targeting specific issues that are, to date, predominantly focused on disclosure, transparency and explainability. While sectoral regulators play a role in governing AI by creating rules applicable to their sector, regulation of AI is led centrally by the Cyberspace Administration of China (CAC).

Personal Information Protection Law (PIPL)

Article 24 of the PIPL requires that a "personal information handler" that uses personal information for ADM ensures the transparency of its decision-making. The handler may be required, in certain circumstances, to provide an explanation of the relevant decision logic. Compliance with this obligation would be difficult if the relevant ADM system lacked transparency and its outputs could therefore not be explained to the required standard.

Administrative Measures

The CAC has also issued various Administrative Measures, some of which relate to transparency and explainability; for these to be complied with, some degree of model transparency would be required. Relevantly:

  • the Administrative Provisions on Recommendation Algorithms in Internet Information Services (2022) (互联网信息服务算法推荐管理规定) require that internet platform providers disclose the principles, intended purposes and operational mechanisms of the algorithms underlying AI-driven internet search capability; and

  • the Administrative Measures for Generative Artificial Intelligence Services (2023) (生成式人工智能服务管理暂行办法) require service providers to disclose to the CAC their models' design, core logic and performance metrics.

United States of America

The United States regulates AI through both executive orders at the federal level and legislation at the state level. There is currently no single federal law specifically regulating AI.

At the federal level, recent years have seen a number of executive orders issued by the US Government:

  • President Biden's Executive Order 14110 of 30 October 2023 – Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, rescinded by the Trump administration this year, encouraged federal agencies to emphasise to their respective regulated populations their requirements and expectations relating to transparency and explainability, and also contained a number of bias-prevention requirements; and

  • President Trump's Executive Order 13859 of 11 February 2019 – Maintaining American Leadership in Artificial Intelligence, calls on federal agencies to enhance access to fully-traceable models and to ensure that inputs and outputs are traceable, while Executive Order 13960 of 8 December 2020 – Promoting the Use of Trustworthy AI, articulates various principles around disclosure of AI use by federal agencies to stakeholders and requires that the design, development, acquisition and use of AI, as well as inputs and outputs, be well documented and traceable.

Various states, such as New York, Illinois, California and Colorado, have also enacted their own AI legislation, with a common theme of transparency and explainability.

United Kingdom

Like many other jurisdictions, including Australia, the United Kingdom has no single and universally applicable AI legislation or regulation; rather, AI is regulated indirectly, via existing legal frameworks.

However, this year, the Artificial Intelligence (Regulation) Bill was re-introduced to Parliament (as a private member's bill) and is currently in its Second Reading. This bill, if enacted, would establish an AI authority to investigate the need for broader AI-minded legal and regulatory reform to close regulatory gaps and advise the Government on AI risk. While the proposed bill mandates transparency in a stakeholder disclosure sense, it does not, in its current form, contemplate model-level transparency or explainability requirements.

The previously proposed Data Protection and Digital Information Bill (DPDI Bill), introduced in 2023 (and which lapsed following the dissolution of the UK Parliament in 2024), would, if enacted, have imposed transparency requirements on high-impact AI systems.

Australia’s existing regulatory approach

Much like the United Kingdom, Australia currently regulates AI indirectly via its existing legal frameworks, non-binding "soft law" principles (such as those in the Australian Government's AI Ethics Principles (2019) and the draft Voluntary AI Safety Standard (2024)) and various sector-based regulations. However, Australia's existing frameworks are largely reactive as they relate to AI, focusing primarily on responsive remediation of AI-related harm, as opposed to imposing precautionary risk management obligations that regulate the prospective design and deployment of AI systems.

While Australia has not (yet) regulated AI development and deployment in a preventative sense like the EU, a large body of existing Australian law applies to AI, compliance with which would, to some degree, mandate a level of transparency and explainability in relevant AI systems. For instance:

  • The Privacy Act 1988 (Cth) mandates the "open and transparent management of personal information" and, by virtue of amendments made to the Act in 2024, will impose transparency requirements in relation to automated decisions (effective from December 2026) under which regulated entities must disclose, in their privacy policies, the types of automated decisions being made and the personal information used to inform them.

  • Australia's anti-discrimination and employment laws prohibit discrimination against a person on the basis of various protected attributes. Where an AI-driven decision is the subject of a discrimination claim, the organisation defending that claim will be required to adduce evidence to establish that the decision was not, in fact, discriminatory – this would be difficult or even impossible if the system that made that decision is not transparent, and the process for that decision having been made cannot be explained.

  • The Australian Consumer Law prohibits misleading or deceptive conduct, meaning that, to the extent AI is deployed in customer-facing contexts (for example, for automated sales, support or other customer engagement functions), a certain level of AI system transparency and explainability would be required in order to enable consistent monitoring of that system and of any AI-generated representations and communications. Procedures would also be required to ensure that customers have appropriate levels of transparency with respect to those system decisions which impact their rights as consumers.

  • Under the Corporations Act 2001 (Cth) (and at common law and in equity), directors have a duty to act with reasonable care and diligence in their decision-making. Historically, this test presupposes the exercise, by some human decision-maker, of a level of cognitive function which can be objectively assessed as reasonable or otherwise. It is conceivable that this duty could be breached, in the absence of proper risk management frameworks, by over-reliance on AI-based decisions or reasoning that is not re-validated or verified through a human lens. Further, the use of opaque or black-box AI systems may undermine the extent to which this duty can be discharged, due to a reduced ability to identify and remediate algorithmic hazards like model drift or bias. This can also frustrate the ability to develop effective and fit-for-purpose governance frameworks.

  • Sector-based regulations may also apply to the deployment and use of AI systems. For example, APRA's CPS 230 (Operational Risk Management), which commenced on 1 July 2025, requires regulated entities to implement robust operational risk management frameworks. In the AI context, this would likely extend to implementing safeguards, including those promoting transparency and explainability, to ensure that AI-related risks can be understood fully and managed adequately.

Australia’s regulatory horizon

In September 2024, in parallel with the release of the draft Voluntary AI Safety Standard, the Australian Government proposed ten mandatory guardrails for the development or deployment of high-risk AI systems. While currently subject to consultation, these guardrails, applicable to Australian organisations, would fill a gap in Australia's legal and regulatory treatment of AI systems, by imposing forward-looking and preventative risk management obligations, including in relation to transparency and explainability.

Mandatory guardrails

As they relate to transparency and explainability specifically, the mandatory guardrails would require organisations developing or deploying high-risk AI systems to:

  • establish and implement a risk management process to identify and mitigate risks;

  • test AI systems to evaluate model performance and monitor the system;

  • enable human control or intervention in an AI system to achieve meaningful human oversight;

  • inform end users regarding AI-enabled decisions relevant to them;

  • establish processes for people impacted by AI systems to challenge use of AI or AI-driven outcomes;

  • be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks, including by sharing data in relation to incidents and failures; and

  • maintain records to allow third parties to assess compliance with the guardrails, including records relating to the design, capabilities and limitations of the relevant AI systems.

What is high-risk AI?

The mandatory guardrails will only apply to high-risk AI systems and GPAI models, leaving lower-risk AI free from additional regulatory burden. To assess whether an AI system or GPAI model is high-risk, the Government has proposed two possible approaches:

  • where a use case is "known or foreseeable", assessing the risk of adverse impact across various metrics (such as impact on human rights, physical or mental health or safety, adverse legal effects, impact on the Australian economy, society, environment and rule of law), as well as the severity of that adverse impact; or

  • prescribing a list of specific high-risk use cases, much like the approach taken in the EU, which, if followed closely by Australia, could mean that use cases like biometric identification or emotion recognition systems, and systems used in conjunction with critical infrastructure, education, employment, health and safety and law enforcement, could be designated as high-risk.

Importantly, the Proposals Paper introducing the mandatory guardrails left the door open as to the outright prohibition of certain use cases (as per the EU approach in respect of use cases of "unacceptable risk"), but equally, did not advance any case to introduce such prohibition.

The mandatory guardrails remain, to date, at the proposal stage; however, it is likely they will continue to be advanced in some form. To this end, another challenging legal issue is how the Government might implement any new AI regulation, including and beyond the mandatory guardrails. The Proposals Paper canvassed various options in this regard, including an adaptation of existing legal and regulatory frameworks, the development of new frameworks or the introduction of a new, cross-economy AI Act. Naturally, the introduction of any new regulation would need to be balanced against the desire to promote and capture AI-driven productivity benefits, as outlined in a recent interim report by the Productivity Commission which cautioned against innovation-stifling regulation.

Adopting transparent and explainable AI

Global efforts to regulate AI technologies to date, both directly and indirectly, suggest that the concepts of transparency and explainability in AI systems are seen by regulators and lawmakers as key tenets of responsible AI deployment, and are, at least for high-risk use cases, likely to form the basis of legal requirements in Australia in due course.

Regardless of the extent to which transparency and explainability are legally-mandated in the future, taking steps to adopt explainable and transparent AI is a powerful mitigant of legal, regulatory and reputational risk. From a best practice perspective, deploying organisations should take proactive steps to ensure transparency and explainability are exhibited by the AI systems they use, especially if those systems pose a certain level of risk to users or other stakeholders. Much like AI governance more broadly, this will require a comprehensive governance approach, as well as a whole-of-organisation effort, across a variety of stakeholders – to this end, deploying organisations should:

  • update governance charters and frameworks to emphasise transparency and explainability in accordance with established best practice (for example, ISO, NIST, the Australian Government's AI Ethics Principles, the proposed Voluntary AI Safety Standard, the OECD AI Principles);

  • embed transparency and explainability requirements into AI procurement processes and ensure vendor due diligence covers these concepts;

  • where they oversee model design, ensure that the model being built is interpretable-by-design and conduct impact assessments in relation to the proposed model;

  • understand all laws and regulations applicable to the AI system(s) in question, stay across legal change and ensure that internal stakeholders are aware of the compliance burden attached to the use and management of those systems;

  • ensure that contracts for the procurement and deployment of AI systems adequately and appropriately apportion risk, and ensure that any representations or guarantees made by the relevant developer/vendor about model transparency, explainability and the extent to which they are willing to stand behind their system, are codified as warranties and indemnities (for example, where the developer/vendor has represented that it is compliant with ISO/IEC 42001 (AI Management Systems));

  • consistently monitor, audit and test AI models in use for model drift and other behavioural anomalies which could impact transparency and explainability (see the illustrative sketch following this list);

  • keep records relating to transparency-related algorithmic behaviours and performance metrics, including data sheets, results of algorithmic risk assessments or decision logs; and

  • provide stakeholder training to increase engagement and knowledge building in relation to responsible AI use.
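
As a purely illustrative sketch of the monitoring and record-keeping points above (the choice of metric, the 0.2 threshold and the log file name are assumptions for the example, not requirements of any framework discussed in this article), the Python snippet below compares a model's current output score distribution against a baseline captured at deployment, using the Population Stability Index, and appends the result to a simple audit log.

```python
# Illustrative sketch of a periodic drift check: compares the distribution of a
# model's output scores at review time against a baseline captured at deployment,
# using the Population Stability Index (PSI), and records the result for audit.
import json
import datetime
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two score distributions; higher values indicate greater drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log of zero in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

def record_drift_check(baseline_scores, current_scores, threshold=0.2,
                       log_path="drift_audit_log.jsonl"):
    """Run the drift check and append the outcome to an append-only audit log."""
    psi = population_stability_index(np.asarray(baseline_scores),
                                     np.asarray(current_scores))
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "metric": "population_stability_index",
        "value": round(psi, 4),
        "threshold": threshold,
        "drift_detected": psi > threshold,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["drift_detected"]
```

In practice, the choice of drift metric, threshold, review cadence and logging format would be driven by the organisation's risk management framework and any applicable regulatory record-keeping requirements.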

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.