The next GDPR? How the EU's newly proposed artificial intelligence regulation may affect Australian businesses

By Eleanor Dickens, Sophie Bradshaw
27 May 2021
A recent proposal by the European Commission for harmonised regulation of artificial intelligence may set the benchmark for AI regulation in Australia.

Recent years have seen rapid advances in the functions and capabilities of artificial intelligence (AI). With this has come a suite of associated risks, particularly concerning privacy, health and safety, and potential intrusions into human rights. The question for governments, regulators and policy makers is how to regulate those risks without stifling innovation and the potentially significant societal benefits that AI can deliver. As with any emerging technology, it can be difficult to strike this balance. The European Commission has taken an important first step with the release of its long-awaited proposal for harmonised regulation of AI in the EU (the EU Proposal). Released on 21 April 2021, the EU Proposal takes a risk-based approach to the regulation of AI and follows several years of public consultation, reports and papers released by the Commission. While it could be several years before the proposed regulations become law in the EU, the EU Proposal nevertheless serves as a compelling benchmark for proposed AI regulation around the world.

AI regulation proposed by the Commission

The EU Proposal essentially regulates the way in which AI systems may be placed on the market and used in the EU. It proposes classifying AI systems according to the level of risk they pose, ranging from "minimal risk" through to "high risk" and "unacceptable risk". (This would be in addition to the existing restrictions on automated decision-making and profiling contained in the EU General Data Protection Regulation.)

Notably, the EU Proposal prohibits some AI practices which are considered to be particularly harmful, such as:

  • any AI system which "deploys subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour" and which could result in harm;
  • AI systems which exploit vulnerabilities of a particular group of people; or
  • "real time" biometric identification systems for public use in law enforcement, unless a specified exception applies (eg. if it is used to identify a specified victim of crime).

The EU Proposal also identifies "high risk" AI systems which must comply with a suite of new requirements. Examples of "high risk" AI systems include those which are intended to be used:

  • as safety components in the "management and operation of road traffic and the supply of water, gas, heating and electricity";
  • for the purposes of assessing students for admission to, and assessment within, educational institutions; or
  • for the purposes of recruitment, or for making decisions on the promotion of employees and the termination of employment.

New requirements which "high risk" AI practices must comply with include that a risk management system be established and implemented, that the systems be developed with capabilities relating to the automatic recording of events, that the systems be sufficiently transparent, and that the systems be capable of achieving the required level of "accuracy, robustness and cybersecurity". These requirements are additional to the obligations of providers and users of high-risk AI systems.

The EU Proposal also introduces specific rules for AI systems which are intended to interact with people, including requirements that people be notified when they are interacting with an AI system.

Those who fail to comply with the EU Proposal may be liable for potentially substantial administrative fines, depending on the nature of the contravention. For non-compliance with the prohibition on certain AI practices, or non-compliance with the data and data governance requirements for high-risk AI systems, administrative fines may be up to €30,000,000 or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

Setting the standard

This is not the first time the Commission has led the way on technology regulation. When the General Data Protection Regulation (GDPR) commenced in 2018, it was swiftly lauded as the international gold standard for privacy and data protection law. Whether the EU Proposal will have a similar effect will likely depend on a number of unpredictable factors, chief among them the US Congress' willingness to implement similar reforms.

As with the GDPR, the EU Proposal also contemplates extra-territorial application. In its current formulation, the EU Proposal would directly affect Australian businesses that "place [AI systems] on the market" in the EU, or which use the "output produced by those systems" in the EU.

Impact on Australian AI regulation

Currently, there is no Australian legislation which specifically regulates AI; depending on the circumstances, its use may instead be regulated by a patchwork of existing laws. For example, businesses bound by the Privacy Act 1988 (Cth) that collect personal information in order to operate facial recognition software must handle that personal information in accordance with the Australian Privacy Principles.

In the case of administrative decisions made by public bodies, there are dozens of individual pieces of legislation which, in certain circumstances, permit those decisions to be made by, or with the assistance of, AI. For example, section 126H of the Customs Act 1901 (Cth) permits a computer to decide automatically whether to allow an individual through immigration or to flag the individual for further manual review. Australian courts have recognised, however, that administrative decisions are not legally binding unless made by a human being who reaches a decision "after a mental process and [outwardly expresses] the decision to reflect that conclusion" (Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79).

Only time will tell whether the EU Proposal will trigger a change in the regulation of AI internationally, including in Australia, which is presently considering the most effective way of regulating AI. Last year, the Australian Government published a discussion paper calling for submissions on its "AI Action Plan", which seeks to "coordinate government policy and national capability under a clear, common vision for AI in Australia". As with the EU Proposal, the AI Action Plan hinges on an ethical framework which contemplates the values to which organisations should have regard in designing AI software.

However, unlike the EU Proposal, which was developed by reference to the Charter of Fundamental Rights of the European Union, the Australian AI Action Plan contemplates a strictly "voluntary" and "aspirational" set of ethical guidelines.

It is difficult to predict exactly how the EU Proposal may ultimately affect Australian businesses. However, its potentially significant impact on the future direction of Australian AI regulation – and, consequently, on the future development and distribution of AI in Australia – means it should not be ignored.

Thanks to Nicole Steemson and Declan McInnes for their help with this article.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.