Generative AI Miniseries - Opportunities and risks for Australian organisations

05 Apr 2023

Ep4: Fighting fraudsters: How Generative AI is both enabling and preventing fraud in a game of cat and mouse.

In this fourth episode of our Generative AI Miniseries, host Will Howe (Director of Data Analytics) speaks with TJ Koekemoer (Director of Forensic Accounting and Investigations, FTS) and Ananya Roy (Special Counsel, Commercial Litigation) about how generative AI technologies like ChatGPT are enabling fraudsters and facilitating more intelligent frauds, but are also being used in the fight against fraud.

This series takes a deep dive into a number of topics related to generative AI and its applications, as well as the legal and ethical implications of this technology, and provides practical takeaways to help you navigate what to expect in this fast-evolving space.




Transcript

Will Howe, Director FTS

Hi, and welcome to the Clayton Utz generative AI vodcast series. I'm your host, Will Howe. I lead Clayton Utz's data analytics team, where we are building with generative AI technologies. Really excited about today's session: we are covering fraud, and what generative AI means for it, for good and for bad, and I'm pleased to be joined by two fantastic guests today. First is Ananya Roy. Ananya is a special counsel in our commercial litigation practice. She works across a variety of matters, is also a senior lawyer in our white-collar crime practice, and is currently working on civil fraud matters. Also joining me today is TJ. TJ is a director in our forensic practice, and he's a forensic accountant who has worked on fraud matters in Australia and all over the world. So, looking forward to the insights. Welcome, TJ and Ananya. And maybe, TJ, we'll start with you: where do generative AI and fraud intersect?

TJ Koekemoer, Director Forensics

Thank you. Well, yes, it's an interesting area. And what we see through history is that the crooks are always two steps ahead of those of us who are trying to catch them or prevent it. So, I think it's got two angles. The first one is how the crooks will use this technology to generate new fraud schemes, but also how they can enhance the existing schemes they have in place to increase their rate of return, from a financial perspective, so to speak. And what we've seen is that we are already using AI in a number of ways to detect fraud. Banks use it to detect fraudulent transactions through real-time monitoring. We've seen it being used to validate employee expenses, as an example, and we've also seen it being used to detect false invoices. What I think is going to happen is that companies won't be left with any choice but to adopt that technology, to make sure they stay as far ahead of the crooks as they can. So that's where I see it going.
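To make the real-time monitoring TJ mentions concrete, here is a minimal sketch of a per-customer spike rule: each incoming payment is scored against that customer's recent history, and anything far out of pattern is held for review. The thresholds, customer ID and amounts are illustrative assumptions, not any bank's actual system.

```python
from collections import defaultdict, deque

# Recent payment amounts per customer (bounded window, oldest dropped first).
history = defaultdict(lambda: deque(maxlen=50))

def score_transaction(customer_id, amount):
    """Return 'hold' if this amount is far outside the customer's recent range."""
    past = history[customer_id]
    decision = "allow"
    if len(past) >= 5 and amount > 3 * max(past):  # crude spike rule, illustrative
        decision = "hold"
    past.append(amount)
    return decision

# Simulated stream: the last payment is roughly 20x this customer's usual spend.
for amount in [40, 55, 38, 60, 45, 52, 1100]:
    print(amount, score_transaction("cust-1", amount))
```

Real systems use far richer features than a single spike rule, but the shape is the same: score each transaction as it arrives, so recovery action can start before funds are gone.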

Will Howe

And obviously this topic is really on the minds of a lot of people in corporate Australia at the moment, but maybe we could just cover a few fundamentals first. Ananya, what's actually the definition of fraud?

Ananya Roy

Thanks, Will. From a legal perspective, there are slightly different definitions or elements of fraud, depending on whether you're pursuing it from a criminal offence perspective or from a civil perspective. For the criminal offence of fraud, a person will have committed fraud if they have, by any deception or dishonesty, obtained property belonging to another person, obtained a financial advantage, or caused a financial disadvantage. From a civil claim perspective, fraud can be proved if there's a false representation that is made knowing that it's false, or recklessly as to whether it's false or true. So, the slight nuance in the definitions is that the criminal offence requires that element of dishonesty or deception. And the reason those general definitions are important is because, to the extent that we are prosecuting a fraud matter before the court, those are the elements through which a court will look at fraud. And you do get different types of fraud generally. So, we're going to focus today on corporate fraud rather than cyber fraud, which warrants a separate session, but those general areas of fraud all ultimately fall within the definitions I've covered. And given the significant developments in the generative AI space and what that means for fraud, as TJ's touched on, I think those definitions provide a very useful lens through which we can view fraud: the court's eyes.

Will Howe

Well, those are the definitions through the lens of the court. There's another term that we use though, TJ: the "fraud triangle". Can you tell us a bit about the fraud triangle?

TJ Koekemoer

The fraud triangle is a concept from the 1950s, and it's actually been used quite widely when we look at fraud. It was developed by Donald Cressey, and the model says that for fraud to occur, there must be three elements present: opportunity, rationalisation, and pressure or incentive. The model is quite useful when we want to detect or prevent fraud. When you think of behaviour and scenarios, looking at those three elements is quite useful to understand how we can minimise them to prevent or detect fraud, especially the opportunity side. From an opportunity perspective, that's the one area a company has most control over, so a fit-for-purpose control environment is the ideal you want in place. It's also sometimes interesting, when you look at that opportunity and the control environment, to be able to determine: is this a fraud, is it an error, or is it just a process that wasn't properly designed? So, it's a good way to look at instances of fraud to determine the issue behind them.

Will Howe

Well, maybe on that point, TJ. You talk about the opportunity and the control side; can we talk about the risk side a little bit? So, let's get right into it: where is the risk from this generative AI? Maybe, TJ?

TJ Koekemoer

So, from a corporate fraud perspective, you have the pressures of the current economic climate that we operate in. There's a lot of pressure on individuals: we see interest rates rising and pressure on personal finances. So that's the pressure side. From an opportunity perspective, coming out of the pandemic we've seen quite a disruption in control environments, and as people and companies try to normalise their operations, we see that as an increasing opportunity. From a cyber perspective, there's obviously an incentive for cyber criminals to increase the return they get from their efforts, so the incentive is there. And there's pressure and opportunity from a customer's perspective too, because companies are using electronic communication more and more, and that's just a good opportunity for cyber criminals to take advantage of. So, it's almost coming together as a perfect storm. Some of the things I think we are going to see with generative AI are false reviews and false product reviews that might change people's views, rightly or wrongly, on certain products, but also more corporate fraud. Think of share price manipulation: using GPT, as an example, to generate mass false information that can either increase or decrease the share price of a company. Criminals can use that to buy or sell shares and make a profit, and they can remove themselves from the actual financial instrument by buying, say, a linked or indexed product, which will be very hard to detect. So, you might see some of those activities happening through the use of this technology.

Ananya Roy

It raises the question of whether we'll need an expert on generative AI to come before a court and explain to a judge how the technology is being used by a crook or how it could have been used, and whether that can be proven from an evidentiary point of view.

Will Howe

Well, that is actually really interesting: generating all of this new information. And, you know, immediately, most of the world is now thinking about what that means in terms of good opportunities, and no doubt there are lots of positive opportunities for the world. But actually, to your point, there are some negative sides to that as well. And TJ, maybe, where do you see the opportunity for the crooks in this generative AI?

TJ Koekemoer

There's been a case recently where JP Morgan bought a company called Frank in the US for 175 million US dollars. Obviously, they did their due diligence, and through that process they wanted to understand how many customers Frank had. Frank said, we have 4 million customers. JP Morgan asked for support and evidence for that, and a database of 4 million customers was produced within about two weeks. In January 2023, just a few months ago, JP Morgan shut down that platform, because when they wanted to run an email campaign, they learned that most of those email addresses were fake. In the court documents that were filed, and linking back to Ananya's points, they allege that Frank's previous owner approached a data science professor who created what we call synthetic data. So, four million customer records were produced using the 293,000 existing customers that Frank had, with names, addresses, email addresses and all sorts of personal data produced as part of that. So, that is a perfect example of where that technology has been used, and synthetic data generation is a form of generative AI that's been used in this case.
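By way of illustration, here is a minimal sketch of how synthetic customer records of the kind alleged in the Frank matter might be fabricated from a small seed of genuine records. All names, fields and the email domain below are hypothetical; this is the general technique, not the method described in the court filings.

```python
import random
import string

# Tiny seed of real-looking records; in the alleged scheme the seed was
# roughly 293,000 genuine customers. Everything here is made up.
seed_customers = [
    {"first": "Alice", "last": "Nguyen", "city": "Sydney"},
    {"first": "Ben", "last": "Carter", "city": "Melbourne"},
    {"first": "Chloe", "last": "Singh", "city": "Brisbane"},
]

def synthetic_record(seed, rng):
    """Fabricate one plausible-looking customer by recombining real field values."""
    first = rng.choice(seed)["first"]
    last = rng.choice(seed)["last"]   # mixed across records, so rarely a real person
    city = rng.choice(seed)["city"]
    handle = first.lower() + last.lower() + "".join(rng.choices(string.digits, k=3))
    return {
        "name": f"{first} {last}",
        "city": city,
        "email": f"{handle}@example.com",  # syntactically valid but undeliverable
    }

rng = random.Random(42)
fabricated = [synthetic_record(seed_customers, rng) for _ in range(1_000)]
print(len(fabricated), fabricated[0])
```

Records like these can pass a casual eyeball check, but they fail the moment anyone actually emails them, which is how the transcript describes the scheme unravelling.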

Will Howe

That's fascinating, TJ, thanks. And Ananya, this will be really interesting for you. Obviously, one of the things that's also being talked about is employees actually using this to do their jobs. And with the rise of work from home, you know, the employees are out there, and are they actually typing out this information themselves, or is generative AI being used to do the job? And what's the legal footing if that were to start to happen?

Ananya Roy

Thanks, Will. That is a really interesting question, because it is one of the advantages of ChatGPT that it arguably could make our jobs easier. There are probably a couple of considerations we've touched on in our previous episodes. So, for example, there are copyright issues that come to mind; Christie and Jeremy flagged the importance of having very clear policies within the workplace about how you might use generative AI. But to me, the other very interesting question that arises from this is: what is the duty of care that we owe to our clients in this situation? In many professional services companies, we owe a duty of care to provide services with reasonable or due skill and care. And then the question becomes, well, how does that interact with, or how does that arise in, a context where we are now providing services possibly using generative AI? That may be a question a court has to determine one day. But from a practical perspective today, I would emphasise the importance of being transparent. So, if a customer or a client is expecting that something will be manually performed, and a company is proposing to do it using generative AI, what is the customer expecting there? Would they be alarmed if they found out that generative AI was used? And then, focusing more on the question you asked me, Will, which is about employees in particular: it does raise the scope for employee misconduct. Arguably, and different minds will differ, there could be instances where employees use ChatGPT to produce a deliverable or a work product and represent it as something they've done of their own accord, when ChatGPT or some other platform has been used. And then the legal question that arises is, well, has the employee done what they were supposed to do according to their job? That's going to be a question to be determined in due course.

Will Howe

Well, you know, that's really interesting. I mean, six months ago, if an employee was asked to draft a paper or create some documentation, the reasonable expectation was that computers can't do that, right? The employee's got to do it. In the last six months, we now have this brand new capability, and there's a question mark over what the reasonable expectation is. And so, like you say, no doubt this will get tested at some point. I also like your point about transparency. That brings me to maybe the next topic I want to talk about. There's all the doom and gloom, and the fraudsters are going to do all this awful stuff, but surely the good side is winning in this as well. TJ, how are we using this to actually detect fraud?

TJ Koekemoer

The exciting part of that is the fact that you and I are dealing with this on a daily basis, working with our lawyers to really take this technology and make it part of our everyday service offering to clients. The great advantage of generative AI and some of these technologies is the amount of data you can analyse and the speed at which you can do that. So, if you think from a detection perspective, you want to make sure you can detect a potential fraudulent transaction as close to real time as you can, because if there's a financial loss, that will increase your chances of actually recovering it. So, I think that's the key thing with this technology: it will make detection much more real time. If you think of current technologies where something similar is being used, which we spoke about earlier with banks in the financial crime sense, we've seen it being used in analysing voice. So, with a customer and a call centre person, for example, communicating with one another, it looks at the voice and the emotive tone used in that conversation to detect, for example, a customer that's in distress, and you can immediately take that call centre operator off the phones for 20 minutes, or whatever the time is, just for them to recompose themselves. The other one is internal audit, where traditionally we used a lot of sample-based testing; we only looked at maybe 10% of transactions as a sample. With this technology, you can really expand the coverage you get, so your likelihood of detecting something is a lot higher. What I think is going to happen is that companies are going to deploy this technology more and more as part of business as usual. So, you don't have to wait for an internal audit to come through once a year, or a special review, or a supplier that will come with the technology to do it periodically; I think it's going to be very much real time, where you can deploy this technology in future.
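TJ's contrast between 10% sampling and full-population testing can be sketched very simply: instead of sampling, run red-flag rules over every row of a ledger. The file name, column names and rules below are illustrative assumptions, not a real audit tool.

```python
import csv
from collections import Counter

def red_flags(rows):
    """Yield (rule, row) pairs for every transaction that trips a simple rule."""
    seen = Counter((r["vendor"], r["amount"]) for r in rows)
    for r in rows:
        if float(r["amount"]) % 100 == 0:            # suspiciously round figure
            yield ("round_amount", r)
        if seen[(r["vendor"], r["amount"])] > 1:     # possible duplicate invoice
            yield ("duplicate_vendor_amount", r)
        if r["approver"] == r["submitter"]:          # self-approval breaks controls
            yield ("self_approved", r)

# Assumed file and columns: id, vendor, amount, approver, submitter.
with open("ledger.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for rule, row in red_flags(rows):
    print(rule, row["id"], row["vendor"], row["amount"])
```

Because rules like these run over the whole population rather than a sample, the point TJ makes follows directly: coverage, and therefore the likelihood of detecting something, goes up.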

Will Howe

Well, that's an interesting point. Obviously, the detection technologies that are used today, or were used six months ago, are very much built around structured data, so mathematical types of processes. Natural language processing was always looked at as having potential, but historically it was really difficult. That's the flip side of generative AI: it has really dramatically improved our natural language understanding capability, so being able to roll that into your real-time detection can add a lot. But what about in the space of financial statement fraud? How does this apply to that space, TJ?

TJ Koekemoer

So, Will, we still often see companies in the news because of financial statement fraud, or companies that collapse as a result of financial statement fraud. The external audit profession is a very competitive market, and their margins are under pressure as well. As a result, they are almost being pushed to look at technologies that can make the audit process a lot more efficient and also help with their margins. So, your larger audit firms deploy a form of AI to test a company's standard transactions. They may look at financial ratios that could be at odds compared to the industry, for example, or at specific accounting transactions posted to a cost centre that are out of the norm, or a higher occurrence of reversals. That information helps the audit team really focus on the high-risk areas in the audit. The first issue with that is the whole black box thing: the users may not understand exactly what's taking place behind it, so they may not have the full context of those exceptions when they get reported. The other thing is over-reliance on the technology. There could be a risk that audit teams rely too much on the results coming out of that AI testing. You still need professional scepticism to understand the full context of the transaction before making an informed decision and assessment about it. And the interesting part will be whether this technology will be able to detect transactions created through something like synthetic data generation, where the fraudulent transactions are modelled on the company's actual accounting structure; time will tell. But I think what we will see is that companies will start to deploy this technology long before the external auditor comes to the door, and use it as a way to identify financial statement fraud themselves.
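The ratio screening TJ describes can be illustrated with a toy example: compare a company's financial ratio against peer benchmarks and flag large deviations. The peer figures and the 2-sigma threshold are assumptions for illustration only, not audit methodology.

```python
from statistics import mean, stdev

# Hypothetical gross margins reported by peer companies in the same industry.
peer_gross_margins = [0.31, 0.28, 0.33, 0.30, 0.29, 0.32, 0.27, 0.30]

def flag_ratio(name, value, peers, z_threshold=2.0):
    """Flag a ratio sitting more than z_threshold standard deviations from peers."""
    mu, sigma = mean(peers), stdev(peers)
    z = (value - mu) / sigma
    verdict = "FLAG" if abs(z) > z_threshold else "ok"
    print(f"{verdict} {name}: {value:.2f} is {z:+.1f} sigma from peer mean {mu:.2f}")

flag_ratio("gross_margin", 0.47, peer_gross_margins)  # far above peers: worth a look
flag_ratio("gross_margin", 0.30, peer_gross_margins)  # in line with peers
```

A flag here is only a pointer for the audit team, which is exactly TJ's caution about the black box: the number says something is unusual, but a human still needs the context to decide whether it's fraud, error or a badly designed process.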

Will Howe

One of the points you made in there, TJ, I really liked, which is the human side of this. It's important to remember that all of this is technology, but the fraudsters are ultimately human, and on the good side as well, we're humans, and we need to think about all of that too. So, it's a bit of both sides. And maybe, Ananya, to that point: how would we actually sum this up? What are your key takeaways in this space?

Ananya Roy

Yeah, I think there are three key takeaways that come to me from the discussion we've just had. The first is that we know there's a risk the crooks are going to be using generative AI to commit fraud, and that's something all organisations, I think, going forward will be more alive to, if they're not already. On the flip side of that, we also have companies using generative AI for good: we know the crooks are out there, and we can also use generative AI to detect and to prevent. I think it will up the ante for firms and organisations to raise the bar on how they're using generative AI. And the last takeaway is a little bit related to the human element you were just talking about, Will, but sits almost in the middle: there's what I would call almost an ethical line as to how you manage generative AI in an appropriate way. So, how will our employees use generative AI in a way that is appropriate and that clients will be comfortable with? That's something I think all industries will have to grapple with, coming to an understanding of what role generative AI has to play in the workplace.

Will Howe

That's great. Thank you, Ananya. Thank you, TJ. And to our watchers, thank you for being with us on this journey. We've got a few more episodes lined up in this miniseries. So, we'll see you in the next one.


Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.