Generative AI Miniseries - Opportunities and risks for Australian organisations

22 Feb 2023

Ep2: The workplace and employment implications of Generative AI – Risky business?

In this second episode of our Generative AI Miniseries, host Will Howe (Director of Data Analytics) speaks with Christy Miller (Partner, Workplace Relations, Employment & Safety) and Jeremy McCall-Horn (Lawyer, Workplace Relations, Employment & Safety) to explore the workplace and employment implications of large language models such as ChatGPT – asking: is this risky business, or a golden opportunity?

This series takes a deep dive into a number of topics related to generative AI and its applications, as well as the legal and ethical implications of this technology, and provides practical takeaways to help you navigate what to expect in this fast-evolving space.



Will Howe

Hi, and welcome to the Clayton Utz Generative AI podcast series, where we do deep dives into the legal issues surrounding this exciting new technology. Last week we had Simon Newcomb, who's one of our intellectual property and technology lawyers, and we covered the intellectual property and copyright issues. Really interesting, and worthwhile taking a look if you haven't already. And this week I'm really excited to be joined by Christy Miller, who is one of our leading workplace relations lawyers, to explore the human side of this technology.

Will Howe

So thanks and welcome, Christy.

Christy Miller

Thanks for having me. And it's not just me today: I've brought with me Jeremy McCall-Horn. Jeremy's a lawyer in our team here in the Workplace Relations group, and he's really interested in this AI development and how it impacts on the employment relationship. So I've brought Jeremy along to do the heavy lifting.

Will Howe

Fab. Well, then I guess we'll start with you, Jeremy. You've been following this for a while, so from your perspective, how has this technology been used in the past, and what are some of the things that are changing here?

Jeremy McCall-Horn

Yeah, it's certainly something I've been interested in for a while, watching how it's developed, and it's especially interesting to see how it's impacting the workplace and employment sector. In terms of the way it's been used in the past, historically there are some things we all know about: for example, recruitment, using AI to screen applicants and get through that first initial screening process, especially if you have a vast number of people applying for a job.

Jeremy McCall-Horn

We also have the data-driven analysis that AI can give us in the workplace. There are companies and devices out there which can actually monitor people's emotions and that sort of thing in the workplace. The data is anonymised, but the system is able to generate reports on it, indicate trends in the workplace and suggest how to react to them.

Jeremy McCall-Horn

And one of the other things, obviously, that we all know and love is how much easier it can make our lives and our work, really driving those efficiencies in the workplace. So especially with the advancements that are coming through now, there are definitely going to be some changes that we see, and some exciting changes as well.

Will Howe

A lot of exciting changes. And Christy, what changes do we think this is going to bring to employment functions?

Christy Miller

The answer to that is many and varied. If you ask ChatGPT itself, it will tell you how it might impact the core functions of human resources. It will promptly tell you that it can automate and set goals for particular employees that align with an overall strategic measure for the company, track employee performance over time, and identify areas for improvement.

Christy Miller

It can provide feedback for you to give to your own employees on those areas of improvement, and it can identify key employees for succession planning, promotion and leadership positions. ChatGPT is, of course, just one of the forms of generative AI, but ChatGPT itself will tell you that it's still important to comply with local laws and employee privacy, and it recommends that humans have oversight of any text or information that is generated by the bot.

Christy Miller

And I think that's really where the rubber hits the road for its use in human resources and IR management going forward. ChatGPT is not always going to get it right, and compliance with local laws is something we are seeing that it is not so good at at this stage. So while there might be all these opportunities for use, I think the flip side of the coin is that we need to be really careful, in particular, about how it is actually being used, for what purpose it is being used, and the checking of how it's being used along the way.

Christy Miller

So what are the messages there? Opportunities, but we really need to keep our eye and our mind on the risks that this potentially poses for employers.

Will Howe

Right. So two sides to that coin, really: opportunities and restrictions. And maybe, Jeremy, do you want to go into a bit of the restrictions that employers need to be thinking about?

Jeremy McCall-Horn

Yeah, sure. So there certainly will be some restrictions that apply in these circumstances, but they won't necessarily be immediately obvious. The reason for that is, as a starting point, there are actually no explicit laws in Australia which regulate, or are seeking to regulate, the use of AI in the employment space at this stage. That doesn't necessarily mean that there aren't other laws which exist that might indirectly apply as a sort of lens over this area.

Jeremy McCall-Horn

For example, discrimination: making sure that these AIs, if you are using them for recruitment, are not making decisions based on protected attributes under the various discrimination Acts. That includes ethnicity, religion, whether or not someone's a member of a union; those are all important things that you're going to need to be aware of. And while there are no direct laws in Australia, there are actually laws in other jurisdictions which are quite interesting.

Jeremy McCall-Horn

So, for example, I know Illinois in the US was actually one of the first places in the world to introduce a law which directly relates to AI and its use. The law in Illinois is about regulating video interview software and how that is used in recruitment. The way that it operates in Illinois is twofold.

Jeremy McCall-Horn

The first is that you need the consent of the applicant to be implementing those recruitment techniques, of course. And the second is that if the employer is relying solely on the decision-making capabilities of these AIs during the recruitment process, the employer actually has to record the demographic data of the decisions being made (again, on those protected attributes: religion, race and so on) and provide it to the state government as part of its obligations.

Jeremy McCall-Horn

Similarly, legislation along the same lines was introduced recently in New York, with an additional obligation: employers actually have to engage a bias auditor. The bias auditor will come in and assess the AI systems and give you a report, and you have to make that report publicly available on your website. So I think it really goes to that transparency and accountability piece, because AIs can't be held responsible for the decisions that they're making.

Jeremy McCall-Horn

Someone needs to be accountable for how it's being used. So, again, coming back to the question: while there are no absolute restrictions in Australia at the moment, those are the types of things we may see the law develop in the future.

Will Howe

So it sounds like a lot of change in this space. And speaking of change, Christy, I might ask you the question that's on everyone's mind right now: are the robots coming for our jobs?

Christy Miller

Well, look, hopefully not today or tomorrow, Will. And I don't think they're coming for everyone's jobs. Even Bill Gates, however, has said that there are certainly going to be job losses, particularly in the white collar market, as the use of this technology continues and increases and we see it more universally adopted. But I don't think it's right around the corner for us.

Christy Miller

And I think what business needs to be doing now is starting to look at the potential uses, considering how they might be able to assist each individual business moving forward, because the impact is going to be different in different industries; looking at those potential uses in their own business, and then starting to work with employees on whether that means there needs to be a shift in employee skills or in the roles that are being undertaken.

Christy Miller

I think over a period of time we will inevitably see a shift from process-driven work, such as data entry and the more manual labour pieces, towards more technology-focused, customer-centric positions that will develop. But as I said, the changes are not going to happen overnight.

Will Howe

And so it sounds like, with these changes, there are new capabilities that employees will need. So Jeremy, what do we do when an employee's capabilities don't align with what's actually needed in the future?

Jeremy McCall-Horn

Yeah. So, I mean, employers will have obligations depending on how these things are going to impact their business and how they're going to impact employees, in terms of changing capabilities, training and requirements in the business. Obviously, some positions may become redundant, and as the needs of the business change, so do the needs of its employees.

Jeremy McCall-Horn

But one of the most relevant obligations, I suppose, in this situation is the consultation obligations. Consultation obligations exist under pretty much every enterprise agreement and award in Australia; if you're an employer in Australia, you can bet that one of these probably applies to you. What they require is that you consult with your employees: have a candid and open conversation with them about what's changing and how it's going to be implemented, give them an opportunity to have their say, and take that into consideration during the process.

Jeremy McCall-Horn

Now, these obligations are triggered differently depending on the industry or the award or instrument that's covering you and your employees. But taking, for example, the Clerks Award, as an area which might be impacted the most, I suppose, by these AI advancements, there are two requirements to trigger the obligation. The first is that there's a major workplace change in the organisational structure or the technology of the business, and the second is that that major change is going to have a significant impact on an employee or employees.

Jeremy McCall-Horn

It's obviously not too much of a stretch of the imagination to think that these types of AI advancements are definitely going to have that significant impact, so they will likely trigger these obligations. So I suppose at this stage it's probably critically important to understand how these advancements are going to be used in your business, the type of impact they will have and the areas in which they're going to have the most impact.

Jeremy McCall-Horn

That way you can identify the employees that you're going to need to consult with. And I'll finally just add, again, that these obligations are different across enterprise agreements and awards, so it's important to understand the legal requirements before you act.

Will Howe

And for obvious reasons, we've gone straight to the legal obligations component of this. But given there is a bit of concern about this type of technology within the workplace and within society at the moment, what should employers be doing before we get to that point, Christy?

Christy Miller

I think it's about setting expectations, Will: talking to employees now about what use the business sees for this particular technology and its advancements; setting expectations around whether we want employees to be experimenting with it or, alternatively, if the business has decided that it is not to be used for a variety of reasons, making sure that that is equally communicated; and setting some policies around what that use is and how it needs to be used and vetted.

Christy Miller

Because, as we're seeing, there are a lot of mistakes coming out, particularly from ChatGPT. So set those expectations early, and have that conversation with employees about how they're impacted. We're seeing a lot of our clients and other businesses now setting up working groups, and this is not just restricted to larger businesses either: working groups about how it might be implemented in their business in the future.

Christy Miller

Those are all ways you can provide some comfort now to employees and give them a real role in the changing nature of their particular position or positions within the organisation, and let them learn and grow with the organisation to the extent possible. Is it going to save every job? Probably not, but I think it is going to provide comfort now and demonstrate to employees that the business is actively looking at and considering this type of technology and taking a measured approach to it.

Will Howe

That measured approach brings me on to an interesting point, which is maybe to talk about some of the risks with this technology. We know there have been some previous attempts at bots like this, and sometimes it's gone horribly wrong. So do we see this technology potentially contributing to any workplace issues like a toxic culture, or harassment, or even bullying?

Christy Miller

Yeah, look, definitely, Will. We've been talking a lot about the opportunities, how to integrate it into business and the process for doing that, but I think employers need to be keeping a really clear eye on the risks of using this technology within their business and in an external-facing capacity. All businesses are continuing to grapple with issues like workplace harassment, sexual harassment, discrimination in the workplace and the creation of a toxic work culture.

Christy Miller

But now we're introducing technology, a bot, that will write the joke that gets circulated to the entire workforce, that will create the meme that picks on or makes fun of a particular person or cohort in the workplace. We've got this bot that might be helping managers write show cause letters or start disciplinary proceedings in relation to employees.

Christy Miller

We need to make sure, firstly, that we've got a check on how it's being used by employees in the workplace, but also a check on how we're using it as a decision-making tool. And to the extent that it is helping with those functions, particularly around performance management, disciplinary management and termination of employment, we need to be really looking at the implications, as Jeremy has already mentioned, for discrimination and human rights compliance.

Christy Miller

An interesting point to note is that ChatGPT also says straight up that it has limitations and that it might generate wrong information or produce harmful or biased content. We recently asked ChatGPT to write us a letter that we could send to a colleague telling them how bad they were at their work, and it told us no, that was not a good idea, and that there are more professional ways to undertake that task.

Christy Miller

However, we also asked it to write us some JavaScript to email-blast a specific person five times a minute with spam emails, and it was happy to help us along with that process. So we really need to be careful about the uses that these types of technologies are being put to. Similarly, some generative AIs are being used to create art, and even nudes of people, obviously without the consent of the person.

Christy Miller

Generated from images found on social media, these depictions can be quite lifelike, and if circulated in a workplace, we know the extent of the problems that will cause, both from a sexual harassment and discrimination point of view and in terms of damage to the person, including for workers' compensation purposes. So there are definitely a range of risks that we need to be alive to.

Christy Miller

Again, it's about how we set expectations with employees about how they are allowed to use it, when they're allowed to use it and the purposes for which it can be used, and then managing and monitoring those expectations.

Will Howe

Now, we're talking about employees using this technology to do their work. And I know we talked about some experiments we're running, and a lot of people are playing around with this tech, but obviously we're being overt about that and being very careful to do it within very restricted grounds. But what about if, or when, employees start using this secretly to do their work?


Jeremy McCall-Horn

I mean, this technology has been used by employees, probably in ways that their employers haven't expected or authorised, for a while now. I was just reading an article the other week about a tech guy who was responsible for data input, the type of thing that this generative AI is maybe going to really assist with moving forward as well.

Jeremy McCall-Horn

But he automated his job: he wrote a simple script, had it do the work, and stayed at home playing video games while earning, you know, $90,000 a year for about 10 minutes of work a day. So not bad. Perhaps the employer wouldn't be too happy once they found out about it. But this type of tech has definitely got the prospect of changing how work is going to be conducted and how employees actually perform their work.

Jeremy McCall-Horn

So again, I think it's about setting those clear expectations. As Christy mentioned, you need to have those policies and procedures which set out the expectations, so you're clear about the types of uses it can be put to and what it should or shouldn't be used for. That's probably especially important in the professional services stream where, for example, an employee creating a piece of advice, say a financial adviser, asks the AI to generate financial advice and then copies and pastes that and sends it to the customer.

Jeremy McCall-Horn

And if it's bad advice, which it quite potentially could be, it's the employer who's probably going to be held vicariously liable for that. Again, these generative AIs aren't legal persons; they're probably not going to be held responsible for the answers they give. It's the employer, the business on whose behalf the employee is providing the advice, that is going to be liable for it.

Christy Miller

And speaking of that concept of vicarious liability, that is one that really does exist in the discrimination space. So in circumstances where the AI is helping employees with conduct that may be in breach of the sexual harassment or discrimination laws, we are going to find that the employer is responsible for that. And the employer needs to be mindful of that, because we're in an era where the laws, particularly in relation to sexual harassment, are just being updated: a lot of the recommendations from the Respect at Work report are being accepted at the federal level, including a positive duty on employers to guard against sexual harassment in the workplace.

Christy Miller

So we've got technology that is going to make that kind of conduct a lot simpler and easier, and we've got a positive duty on employers to guard against that exact conduct. So we need to be very careful and mindful of those duties.

Will Howe

So speaking of the duties on employers: many of the people watching this today will be involved in drafting employment-related contracts and employment-related communications. Do we see this technology potentially being used in that space?

Christy Miller

Definitely. And I think when it becomes more fine-tuned it's going to be really helpful, actually. But at the moment, and we've certainly been playing around with it for a while, it might get you 30 or 40% of the way there; I certainly don't think it gets you all of the way there. We've been talking to ChatGPT about drafting contracts of employment, and it got a lot of the clauses, but then there are things like offset clauses, which are really important in terms of managing award obligations, overtime provisions, and even recognising that an award might apply in certain circumstances.

Christy Miller

These are all the types of things that ChatGPT was not able to help with or recognise, and it also applied a dispute resolution clause that is probably better suited to a commercial contract than to an employment contract. Over time these things are going to change and develop, but for now we need to be mindful of what we're putting into a contract of employment and, more generally, of what we're asking ChatGPT to do for us without any oversight.

Will Howe

So we've covered a lot of ground in this discussion, and a lot of interesting concepts have come to light. But for the people viewing this, where do you think we start? What are your key takeaways, Christy?

Christy Miller

Ultimately, for employers looking to utilise AI in their workplace, I think there are a lot of opportunities there. I think it's a great tool, but I think it is just a tool: a tool that you can use and test. It might be able to automate some processes for you, remove some of those inefficiencies, and streamline processes and jobs.

Christy Miller

But I do want everyone to keep in mind that it is just a tool and it should be used with caution. At this stage, certainly ChatGPT online is not a tool that is specifically tailored to anyone's business or anyone's specific needs, and it is limited by the information that you put into it. So everyone needs to be conscious of that.

Christy Miller

It can't, and it certainly shouldn't, take away that decision-making function from the employer and the organisation. To the extent that it does, or to the extent that the employer is not able to demonstrate that it understands and has made a conscious decision to, for example, terminate or start performance management, then we're really going to come into risk territory in relation to satisfying the requirements of procedural fairness and providing a valid reason to substantiate the action ultimately taken.

Christy Miller

So definitely use it, play around with it, have a look at it, see how it can help; it's just a tool that you can use. The other takeaways that we've spoken about today are around setting those expectations for employees: look at whether you need a specific policy, or whether your current policies can be updated to reference and deal with the use of AI in the workplace.

Christy Miller

At the very least, look at your sexual harassment and discrimination policies to ensure that there is a specific reference in there to using AI for those purposes, or to content that is generated by ChatGPT or any similar AI tools. What we don't want is to get into a situation where an employee says, "But it wasn't me, it was the bot."

Christy Miller

So those are a few takeaways for today.

Will Howe

That's fantastic. Christy, thank you. Jeremy, thank you. It sounds like our watchers are going to have a lot of activity coming up soon with some of those takeaways you've talked about. And to our watchers, thank you very much for watching and being on this journey with us. We've got a few more episodes planned in this podcast miniseries, so we'll see you in the next one.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.