Employment risks, issues and opportunities in the rise of Generative Artificial Intelligence (AI)

Christy Miller
25 Oct 2023
Time to read: 4.5 minutes

Organisations should continue to monitor and manage the employment risks arising from use of Generative AI, including discriminatory decision-making, harassing or discriminatory conduct, and the employer liability that follows.

Generative Artificial Intelligence (AI) has been rapidly infiltrating the workforce and is being used in a variety of ways to improve efficiency and productivity. Conventional AI systems have historically been used to automate repetitive tasks and provide data-driven insights; now, with the advent of Generative AI and large language models such as ChatGPT, they can also draft content, generate answers and even suggest ideas.

More recent advancements have shown that some Generative AI models can completely automate processes through something called "AI Agency": a user gives the AI a goal, and the AI then generates self-directed instructions and executes the actions needed to carry out that goal.
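
By way of illustration only, the pattern looks something like the following minimal sketch (the function names, prompt wording and call_llm placeholder are our assumptions for illustration, not any particular vendor's product or API):

# A minimal, hypothetical sketch of the "AI Agency" pattern described above:
# the user supplies a goal, and the model is repeatedly asked to propose and
# act on its own next step. call_llm() is a placeholder, not a real API.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a Generative AI API here.
    return "DONE"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Steps taken so far: {history}\n"
            "Propose the single next action, or reply DONE if the goal is met."
        )
        action = call_llm(prompt)
        if action.strip().upper() == "DONE":
            break
        # In a real agent, the proposed action would be executed here
        # (for example, sending an email or calling another system),
        # often without a human reviewing the content first.
        history.append(action)
    return history

print(run_agent("summarise this week's sales figures and email the team"))

The point for employers is in the loop itself: content can be generated and acted on with no human checkpoint between the instruction and the outcome.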

In summary, there are a lot of different applications for Generative AI, and it is expanding and evolving rapidly (we even thought it could help write a draft of this article – although it needed some significant re-work, and it did miss some important nuances).

However, as with any technology, there are legal implications that must be taken into account. When thinking about the rapidly evolving ways Generative AI can be put to use both inside and outside the workplace, an issue to be continually considered and managed is the extent to which an employer may be responsible for an employee's use (or misuse) of it.

Vicarious liability, discrimination, and sexual harassment – what does the law require?

Vicarious liability is a legal principle that holds one party (in this case, the employer) responsible for the actions of another (the employee). The concept is often seen in negligence claims, where damages are sought from an employer for the negligence of its employee. However, it is also embedded in statute. For example, anti-discrimination legislation specifically makes a person or entity liable for its employee's or agent's conduct if the employee or agent contravenes anti-discrimination laws, and the Fair Work Act 2009 (Cth) (FWA) now specifically provides that an employer or principal will be vicariously liable for a contravention of the new sexual harassment prohibitions under the FWA if the conduct occurs in the course of the employee's or agent's employment or duties and the employer or principal has not taken all reasonable steps to prevent the contravention. This potential liability covers actions that occur during the course of employment, as well as actions that occur outside of work hours but in connection with the employee's employment – and the onus is on the employer or principal to prove it took all reasonable steps.

But that is not the end of it. Increasing regulation of the workplace, through a number of amendments to the Sex Discrimination Act 1984 (Cth) (SDA) and the FWA, has increased the likelihood of employers or principals being vicariously liable for an employee's or agent's conduct. Specifically:

  1. the expansion of applications and remedies available in the Fair Work Commission to address sexual harassment includes the capacity, since March 2023, to seek both an order prohibiting future conduct and a decision addressing past conduct and compensation from the employer and/or perpetrator;
  2. the SDA now requires employers to take positive steps, in the form of reasonable and proportionate measures, to eliminate, as far as possible, a number of behaviours in the workplace, including:
    1. sexual harassment;
    2. sex discrimination;
    3. sex-based harassment;
    4. certain acts of victimisation;
    5. conduct that amounts to subjecting a person to a hostile work environment on the ground of sex; and
  3. amendments to the FWA now expressly prohibit sexual harassment in connection with work, including towards current employees, prospective employees and contractors. The amendments also allow unions or multiple individuals to bring quasi-class-action proceedings in the Fair Work Commission for breaches of the new sexual harassment prohibitions under the FWA.

These amendments supplement existing Work Health and Safety laws, which already place a positive duty on employers to provide, as far as reasonably practicable, a safe working environment by eliminating hazards, including psychosocial hazards, in the workplace.

Finally, amendments to the Australian Human Rights Commission Act 1986 (Cth) also confer new powers on the Australian Human Rights Commission to monitor, assess and enforce compliance with the positive duties under the SDA as set out above, including by issuing compliance notices, enforcing them in the Federal Courts, and entering into enforceable undertakings with non-compliant employers.

Together, these changes place increased pressure on employers to monitor and control the actions of employees in the workplace, or potentially be found liable for those actions.

The risks of Generative AI in the workplace

In the context of Generative AI, the risks for employers, and their potential liability for any misuse of the technology by their employees or for the unintended impacts of its use, are real.

While some (but not all) Generative AI models have inbuilt protections that try to prevent inappropriate use, these tools still provide a very simple way to:

  1. generate content that is racist, sexist or otherwise discriminatory – because outputs are drawn from the internet at large – which may amplify bias and discrimination in our society;
  2. make almost anyone technically capable, for example by generating code that enables the mass or targeted distribution of content; and
  3. produce flawed results when used for work purposes, where the Generative AI system has been trained on biased or inappropriate data, which in turn produces biased outcomes or reinforces existing stereotypes.

So this is the perfect storm: content creation and the automation of communications are becoming ever easier (with a robot that will not apply the same filter a person would when considering whether a communication is appropriate), and this must be managed in an environment of increasing legal responsibility, where an employer now needs to take additional, proactive steps to guard against harassment and discrimination.

Practical steps for dealing with Generative AI in the workplace

The use of Generative AI in the workplace can bring many benefits, but it also comes with a raft of legal and ethical considerations. As employees experiment with the technology and push its boundaries, employers need the flexibility to respond, but they must also practically manage and set expectations for its use. So how do we manage these increasing obligations when the use of Generative AI for personal and work purposes will only continue to grow in our workplaces?

There are a few steps that employers can take to minimise the risks associated with using Generative AI in the workplace:

  1. Firstly, it is important to have clear policies and procedures addressing the use of Generative AI. These policies should clearly define the specific uses of Generative AI that are allowed, as well as any limitations or restrictions in place – particularly on the extent to which Generative AI can be relied on for the creation of work-related content. Additionally, employers should provide training to their employees on the proper use of Generative AI, as well as the legal and ethical considerations that come with using this technology.
  2. Where Generative AI is used as a business tool, businesses should conduct regular audits of such systems to ensure that they are being used in compliance with company policies and legal requirements. This can include analysing decision data to detect any patterns of discrimination or bias in decision-making (see the illustrative sketch after this list), as well as monitoring employee usage to ensure that Generative AI is being used for legitimate business purposes.
  3. Finally, employers should have a robust system for addressing any concerns or complaints related to the misuse of Generative AI. We see the potential for this technology to be used inappropriately, including to convey messages that offend, insult and potentially intimidate others. We can likely expect more of the defence, "It wasn't me, I didn't mean that, it was the robot…". So workplaces need to be prepared to address not only the complaints but the excuses. Most workplaces will already have grievance/complaint reporting systems in place, and these should already link to policies addressing discrimination and harassment in the workplace. These policies may need to be strengthened to remind individual employees that they will be held accountable for their conduct in circulating or perpetuating inappropriate content – whether they invented it or not.
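
By way of illustration of the kind of analysis referred to in step 2, an audit could start with something as simple as comparing the rate of favourable outcomes across groups in decision records produced or assisted by a Generative AI tool. The following is a hypothetical sketch only: the column names are assumptions, and the 80% benchmark reflects the commonly cited "four-fifths" rule of thumb, used here as a prompt for review rather than a legal test.

# Hypothetical sketch of a simple audit check: compare the rate of favourable
# outcomes (e.g. shortlisting) across groups in decision records.
# Record fields ("group", "selected") are assumptions for illustration.

from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for r in records:
        group = r["group"]          # the attribute being audited
        totals[group] += 1
        selected[group] += 1 if r["selected"] else 0
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    # Flags groups whose selection rate falls below 80% of the highest rate
    # (the "four-fifths" rule of thumb) -- a trigger for review, not a legal test.
    best = max(rates.values())
    return [g for g, rate in rates.items() if best and rate / best < threshold]

records = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
]
print(flag_disparity(selection_rates(records)))  # ['B'] in this toy example

Any group flagged by a check like this would warrant closer human review of how the tool is being used and the data it was trained on, rather than being treated as conclusive evidence of discrimination.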

By having such frameworks, an organisation can reduce its risk profile while still positioning itself to take full advantage of Generative AI in the workplace.

 

Special thanks to William Howe and Jeremy McCall-Horn for their assistance in writing this article.


Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.