Generative AI Miniseries - Opportunities and risks for Australian organisations

13 Jun 2023

Ep6: The Future is Now: Next-Gen Lawyers sound off on ChatGPT, ethics, and the future of Generative AI in law

In episode six of our Generative AI Miniseries, host Will Howe (Director of Data Analytics) speaks with some of our firm's lawyers and data technologists in the early years of their careers to hear the next generation's take on generative AI. Emina Besirevic (Lawyer, Commercial Litigation), Jeremy McCall-Horn (Lawyer, Workplace Relations, Employment & Safety), Tim Edstein (Lawyer, Banking & Financial Services) and Paul Tuohy (Consultant, Forensic & Technology Services) explore topics from regulation of AI to questions around ethics.

This series takes a deep dive into a number of topics related to generative AI and its applications, as well as the legal and ethical implications of this technology, and provides practical takeaways to help you navigate what to expect in this fast-evolving space.



Transcript

Will Howe (WH):

Hi everyone and welcome to the Clayton Utz Generative AI vodcast series. I'm your host, Will Howe. I lead Clayton Utz's data analytics capability, where we are building with generative AI technologies. In this series, we explore generative AI and how it impacts the legal sector.

This particular episode is around next generation views, and I'm really excited to have four fantastic colleagues of mine with me today to discuss different elements of AI. So, Tim Edstein: Tim is a lawyer in our banking practice. He does a lot of regulatory work and is really interested in the application of generative AI technology to the banking sector.

Emina Besirevic is a graduate lawyer who has a keen interest in generative AI in the law, and Emina has experience in intellectual property and technology and major projects in construction. We also have Jeremy McCall-Horn; Jeremy was a guest on our previous episode, where we talked about workplace relations issues. Jeremy, good to have you back. Thanks for being on board again.

And Paul Tuohy has a background in data science and artificial intelligence, and also a unique background in film and design. He's a consultant in the Forensic & Technology Services division and has a keen interest in this as well. So welcome, guys. Really excited to have you all for this episode, and we're going to cover some really interesting ground on governance of AI, on Australia's national interest, and on ethics.

And importantly, what does this mean for our career paths as lawyers? So maybe we start with governance. Emina, what do we see in terms of how we actually govern this stuff?

Emina Besirevic (EB):

Well, Will, if we start this analysis in the private sector, there are really two emerging approaches to how generative AI is being regulated in the space. On the one hand, you have companies such as OpenAI that are self-governing in the space through limited release strategies, monitored use of models, and controlled access to their commercial products like DALL-E. And on the other hand, you have other companies like Stability AI that really believe that these models should be openly released to democratise access. So Stability AI has, for instance, open-sourced its models, which allows developers to access the code and start using it with little to no controls.

And as for the public sector, there is still relatively little to no regulation in Australia to govern the evolving landscape of generative AI.

Australia seems to be approaching the regulation of AI through a soft-law, principles-based approach: the Department of Industry, Science, Energy and Resources has released a set of voluntary ethical principles, which businesses and governments can choose to adopt in the development, the design and the use of AI. However, this certainly isn't the case globally, and there are a few key developments that happened over the last year that I think we should touch on. For example, in December 2022, the Council of the European Union adopted its common position on the Artificial Intelligence Act, which categorises AI systems by the risk that they pose.

And then it imposes highly prescriptive requirements on the systems that are considered to be high risk. This legislation is largely extraterritorial, in the sense that it will apply to systems that are used in the EU, but also to foreign systems whose outputs go to the EU.

In the U.S., there have been developments in Congress on the American Data Privacy and Protection Act, and this legislation seeks to regulate algorithmic decision making, which is essentially a system that analyses large amounts of data to infer correlations.

One last example: earlier this year in China, the Cyberspace Administration enacted regulations on deep synthesis technology, which includes deepfakes, and the rules place significant restrictions on AI-generated media, requiring it to be identified as AI-generated through things such as watermarks.

So, in this context, it may just be an advantage for Australia to review the approaches that have been taken globally, see what works, see what might inadvertently stifle innovation, and be really selective in the kind of inspiration we take, I guess, from other states in designing our own regulation and approach to AI.

WH:

Sounds like there's a lot going on in different jurisdictions and, like you say, it may be prudent to actually see how this stuff plays out. So Tim, I know that you spend a lot of your time working with the banking sector and that this must be a priority for them. What do you see going on in the FS space?

Tim Edstein (TE):

I think in the FS space right now, especially as Emina has mentioned the two different types of models private sector players are taking on AI, with the second model being the democratisation of AI, I see that as a bigger threat to the banks rather than a benefit. If, let's say, anyone can just buy any AI tool and its capabilities, there's a huge potential for people writing, for example, malicious hacking code, or running a malicious call centre script pretending to be bank staff. If you then deploy other technologies, such as generated voice, where, for example, you have someone with an Australian accent calling into Australia, and the AI is writing an automated script that can help the scammer identify and respond to a client, it's very hard for that person to think, this is not my bank calling me. Whereas currently most people pick up a scam from someone with an accent from a different jurisdiction, or a text message where the spelling or grammar is incorrect, very low-level stuff.

And this stuff's getting cheaper; the cost of these AI tools is coming down. So on a cost-benefit analysis for a malicious organised crime group using this, it'll only get easier for them to deploy.

WH:

Yeah, that's a good point, Tim. As much as we see lots of great opportunities, unfortunately there are opportunities for the scammers in the technology as well, and so the banks will really need to think about how they stay ahead of that. We actually touch on this in an episode of this podcast on fraud. And now Paul, where do you see this governance piece really kicking off?

Paul Tuohy (PT):

Yeah, I think the major concern here is the guardrails that need to be put in place to prevent harm rather than just respond to it. And I think a good example to think about is comparing this to genetic research.

That's just something that's come to the top of my mind: we've taken seriously the risk of what that could potentially do to society and to culture, and we've taken measures to prevent it. So I think it's going to be a major thing to be considering, especially as we've been talking about how open this technology is. It's open source, so there are a lot of different actors that can get involved, and that has a lot of benefits as well. We're considering here at the firm how to implement a lot of these technologies locally.

But I think an important factor for governance is just the scope of this: how do we limit the application of the AI, especially when it comes to access to different data sources and databases? We just want to make sure that we're slowing the progress down a little, even though we want things to accelerate as quickly as possible.

WH:

Great insights, Paul, thanks. And Jeremy, what thoughts do you have on the governance piece?

Jeremy McCall-Horn (JMH):

Yeah. So, springing from what Paul was just mentioning on, you know, putting the brakes on this sort of thing.

I was reading an article earlier likening this to the new nuclear arms race almost, with people saying, you know, hold on, we really need to think about what we're doing, put the brakes on and take it slowly, because if we don't, we're going to have these malicious actors who can potentially use it for all these nefarious purposes.

And the way the technology works as well, obviously, as it gets better it sort of betters itself to the nth degree in a way. I mean, that's the fear with AI, isn't it, that it will just suddenly take off and we won't be able to control it. But I think there are definitely going to be some interesting developments over the next few years for sure.

Australia, as mentioned, has taken more of a soft approach so far.

But, for example, I saw an article the other day reporting that the Labor MP Julian Hill used a parliamentary speech to talk about AI and its potential impacts, and he doesn't sound too keen on it. So he wants to develop a white paper in this space, see what we can do from the regulatory perspective, and take it seriously, because I think, obviously, that's what we need to do.

WH:

Now, it's interesting to talk about the Australian government, and maybe that's a good opportunity to move on to our national strategic interest in this. And Jeremy, maybe we'll stick with you.

There's obviously a lot going on in the world, with big developments in the States and in China for example. What's the strategic and sovereign play for Australia in all of this?

JMH:

Yeah, so obviously, again, this sort of technology is going to change the way the world works, not only from a commercial perspective but from that sovereign national interest perspective as well: national security, defence, and all those fun things. Just to give some numbers around the scale of the impact AI is going to have on the world.

Global GDP is expected to grow by something like fifteen trillion dollars by twenty thirty. Productivity is going to increase by forty percent. The number of AI startups has itself increased by fourteen times, I think it was over the last two decades.

So it is a huge area in which, to maintain our sovereign interest, we're going to have to see government investment, development, and protections put in place.

The federal government did invest, I think it was thirty million dollars, to advance AI and machine learning.

There are projects from the CSIRO and Standards Australia to try and make sure that our national interests are part of the conversation at a global level, to make sure our cultural and national interests are being taken into account in the development of these things.

And another really interesting point, which is something that has come up and I'm sure will come up again in this podcast, is the data side and the privacy of citizens.

As Australian citizens, I think we tend to take our privacy quite seriously, and we're seeing that from the Attorney-General's Department at the moment with a review or report into the Privacy Act and how it might need to be revised, to make sure that, as a sovereign nation, our data is being protected as well.

WH:

Right, there's a lot to that point about our data flowing offshore. In some ways it's okay, but are there risks that need to be managed with that? Emina, I'd love to get your thoughts in this space as well.

EB:

Yeah, absolutely. So, just jumping off those figures that Jeremy raised, it really looks like Australia is investing quite a lot in this space. But I think it's important to remember that this is quite new, and up until now our involvement in the AI space has been quite opportunistic.

To put that in context: just last year, the Global AI Readiness Index, which is conducted by Oxford Insights, ranked Australia tenth in the world in terms of readiness to adopt AI and take advantage of AI in its governance and government-led initiatives.

And a big part of this is that Australia's local industry is less developed and competitive than in other areas of the world. Compare, for instance, other high-achieving economies around the world, like the U.S. with its National Artificial Intelligence Initiative, or Germany with its Artificial Intelligence Strategy, where the emphasis is really on investing in the development of world-class intellectual property through fundamental AI research rather than just translating what is already known. So I think that coordination and national priorities in AI can really help to drive economic impact in Australia and really should be a priority.

WH:

Thanks, Emina. And Tim, I know in the financial services and banking space there has always been a strong focus on our domestic capability. Do you see the same thing playing out in this AI space?

TE:

Yeah, definitely. Of the major banks, Westpac has recently announced that it will be deploying a ChatGPT-like solution into the bank.

And this type of solution will also drive, for example, easier work for the compliance teams, where there are tasks currently that are a bit more manual, with a lot of review or monitoring of, for example, transactions. AI would definitely be assisting that, at least for the moment. And I see that in Australia, a lot of technology investment has historically been driven by the big four banks.

And you can see that in Australia, where our banking system has led to instantaneous payments through things such as PayID, and now PayTo is going to be released. So that's just one example of tech investment that's really been driven by the banking sector rather than other sectors of the Australian economy. That's obviously my banking bias, but that's my view on how AI will be developed in Australia.

WH:

Thanks, Tim. Paul, what are your thoughts around Australia's national interests here?

PT:

Well, I know I did read that we've developed roughly, I think, four centres that are called AI and digital capability centres.

They've been created nationally so that we can help connect small and medium-sized businesses to them, to reskill and then figure out, I guess, new ways to provide these services with AI.

I think there's a lot of increasing activity; I mean, there's been a lot probably since two thousand and nineteen, and then from two thousand and twenty-two is probably where it really started to pick up.

You know, I think there needs to be a lot more funding going into this space. But it kind of reminds me that this mirrors the same fascination that we had with cryptocurrency and the blockchain only recently. It feels like that has all faded away, and then all of a sudden we've picked up this interest and taken AI on board. And, I mean, we've got to remind ourselves that AI is, I suppose, fundamentally a lot of algorithms and sort of black-box learning. It's been around for quite a while, so it's a question of okay, where exactly are we, I guess, developing these new skills.

WH:

That's a great insight, Paul, and obviously, as a data scientist, you rightfully call out that it's just algorithms behind the scenes, right? But there are implications of these algorithms. I'd love to touch maybe on the ethics point next, and obviously as we roll those algorithms out, it's really important that we do so in a safe way, in a sustainable way, and in a way that reflects the ethical priorities that Australian society would want.

What sort of considerations do you see us having around ethics and AI?

PT:

Well, we've touched upon data quite a few times, especially, you know, us as a sovereign nation wanting to protect our own data domestically, and we've had a lot of concerns and issues around that. And it's a catch too, because a lot of these AI models are built on the fact that the only way you can really improve them is by giving them more data.

And so it's like, okay. Where are we gonna get this data from? Are we able to sort of protect this data?

Which then comes into the question of the ethics of what data you can actually provide. Where can we find ownership of this data? Who owns this data? And in particular, are we able to copyright the input data itself, you know, if an input is not just training data but also an input like a prompt to generate an image or a piece of text? Are we able to consider that that itself can be protected?

And then furthermore, you know, is there the right to be able to copyright what that output is?

Especially when that output is, you know, potentially based on other people's work. It's certainly a question to be considering.

WH:

There's quite a bit to unpack in there. Maybe, Emina, I might go to you next to sort of have a bit of a think about the ethical points.

EB:

Yeah, absolutely, I agree with Paul that data is a big part of the debate on ethics. Taking a step back though, one thing that I've been interested in exploring lately, when we're looking at building ethical AI models, is thinking about how they're actually being developed and distinguishing between company standards and policies on AI, and community standards. It's really common practice right now for these really big tech companies to have their own ethical policies. So Microsoft, Intel and IBM all have really well-developed and published ethical policies on the development of their AI. But the question here becomes: are these kinds of company policies enough in the ethical space? In answering this, I think it's interesting to turn our minds back to twenty twenty, when Microsoft came up with a policy that it would not release its facial recognition technology and sell it on to police departments.

And so at the time, its policies were really well documented and they governed the use of this really powerful facial recognition AI.

However, in saying this, there were no community-wide standards, so you then had companies that were happy to sell on similar technologies, and this is exactly what Clearview did. It developed a similar technology and sold it on to over two thousand police agencies in the U.S. that went on to use the technology.

And I raise this because it's an example of what happens when there are company-specific policies, but no community-wide standards.

And what happens is, while company A might have really strong ethical underpinnings to its development of generative AI or otherwise, if company B doesn't, then society is left in a position where it's as if no one has them. And of course, the follow-up question to that is, okay, well, what are the key ethical concerns with generative AI specifically?

And I agree with my colleagues that the biggest one is bias in data. I think this problem isn't inherently linked with the technology itself, but because this technology feeds and depends on large amounts of data to learn, if the data it is provided is biased, that leads us into a big problem.

Another problem is also the accuracy of the data. For those of us who have used ChatGPT, it's really easy to assume that it's quite reliable and truthful in its answers, but dig a bit deeper and you'll find that that's not always the case. And data, of course, can be poisoned, it can be false, it can be duplicated. There are quite a lot of problems that can come out of data that isn't good.

But a positive to this angle is that toolkits that can detect and possibly mitigate the effects of this kind of data are already being imagined. So there are kind of two approaches to it: the first is reinforcement learning through human feedback, where humans are involved in responding to the data and feeding that back in. But the other view is a bit more optimistic, where we develop tools for the AI so that the AI does it itself.

So in summary, I think that we need to be really cognisant of these kinds of issues when developing the AI, so that we're not dealing with the fire at the end and just trying to put it out.
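To make the kind of automated data-quality check Emina describes a little more concrete, here is a minimal, illustrative sketch; the function names, fields and sample records are assumptions made for the example, not anything used or referenced in the episode.

```python
from collections import Counter

def duplicate_rate(records):
    """Fraction of records that are exact duplicates of an earlier record."""
    seen, dupes = set(), 0
    for record in records:
        key = tuple(sorted(record.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes / len(records) if records else 0.0

def label_balance(records, label_field="label"):
    """Share of each label value, to surface gross class imbalance."""
    counts = Counter(record[label_field] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

if __name__ == "__main__":
    training_data = [
        {"text": "loan approved", "label": "positive"},
        {"text": "loan approved", "label": "positive"},   # exact duplicate
        {"text": "application declined", "label": "negative"},
    ]
    print("duplicate rate:", duplicate_rate(training_data))
    print("label balance:", label_balance(training_data))
```

Flagging a high duplicate rate or a heavily skewed label balance before training is the sort of simple signal such toolkits build on; real ones add far more sophisticated poisoning and bias detection.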

WH:

And Emina, those are some very insightful thoughts on bias. Maybe Jeremy, I'd love your thoughts, because obviously in the workplace relations space there's a lot of work that goes into thinking around bias and how to make sure we remove it. What are your thoughts on the ethics of AI from a workplace angle, and how is that being handled?

JMH:

Yeah. So definitely agree with Paul and Emina.

The data and, you know, making sure that these systems are developed in such a way that they are going to act ethically are probably two of the biggest points.

I suppose another point to consider would possibly be transparency and explainability, and it sort of comes back to what Emina was saying about the development of these technologies: they need to be developed in a way where you can check them, where you can see how the decisions are being made from the data. It's only when you have that level of oversight that you're able to rely on the AI to make a decision, and when it does make a decision that a human picks up on as a bit strange, you're able to go back into the system and say, well, this is why the AI has made the decision that way, based on the data and the algorithm that is programmed into it.

And it's through that constant feedback loop that you're able to actually improve these things. This is something you see repeated throughout a lot of different ethics frameworks on AI around the world at the moment: the primary consideration is making sure that these systems act in the public good and for the benefit of humankind, holistically.

So by having that transparency piece in place, you can actually make sure that these things are being checked. And, you know, the accountability as well, I think.

It's going to be an interesting question where accountability and liability lie for the decisions that get made, especially by these automated decision makers.

You know, you can't really hold the AI itself liable. So who is liable? Is it the company that created and programmed the algorithm? Is it, for example, the employer that relied on the decision to terminate their employee, or is it the company that sold the data to the algorithm in the first place? So there are a lot of different thoughts about where that accountability might lie, but as far as our ethical obligations go, someone does need to be accountable at the end of the day, because that's how you make sure things are done properly and for the good of humanity.
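One way to picture the transparency and feedback loop Jeremy describes is a scoring model whose decision can be decomposed feature by feature. The sketch below is purely illustrative; the feature names, weights and threshold are invented, not drawn from any system discussed here.

```python
# A transparent scoring model: the decision is a weighted sum of named features,
# so each feature's contribution to a specific decision can be listed explicitly.
WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    """Return the decision, the total score and the per-feature contributions."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "decline"
    return decision, total, contributions

if __name__ == "__main__":
    decision, total, contributions = score_with_explanation(
        {"income": 3.0, "years_employed": 2.0, "missed_payments": 1.0}
    )
    print(decision, round(total, 2))
    # List contributions from most to least influential for this one decision.
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")
```

Because every contribution is visible, a reviewer can see exactly why an applicant was declined and feed a correction back into the weights, which is much harder with an opaque model.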

WH:

Your point on explainability is a really interesting one, and obviously, with some of this tech being so early in the game, it's a little bit difficult sometimes to get it to explain how it's making a decision. I know, Tim, in your space, the banking space, being able to explain how decisions were come to is extremely important. What are your thoughts about the ethics space, Tim?

TE:

So with the ethics space and AI, to go back to Emina's point about unconscious bias and, you know, the AI getting a feedback loop: we could use credit assessments as a good example, of who's a high credit risk and who's a low credit risk.

When I was doing a banking law course at university, I had a professor who always brought up that the biggest threat for the future in banking and credit and providing people services is that, with so much data out there, it's very easy for an organisation to figure out someone's sensitive personal information just by pure data alone. You can pick up on certain habits, whether they observe a certain religion, or you can look at transactional data to see spending and shopping habits, whether they gamble a lot, whether they buy healthy foods or unhealthy foods. And in our sector, a bank could theoretically discriminate against certain people by their ethnic origin or their cultural origin or their religious beliefs, just basing it off certain habits that someone has.

So an example would be if someone goes to church on a Sunday and they wear a Fitbit, and it geotracks that they go to the same location every Sunday. You can tell this person is of a certain religion, and the same goes for a person who attends any religious organisation or belongs to any religious group.

And it's a big problem if an AI intrinsically thinks, okay, a person of this certain demographic is a high credit risk. Even though this person may have a higher income for now, does the AI's bias go, we're not going to lend them this amount because people of this demographic have popped up before as a red marker for defaulting on credit? That's always going to be a big issue, I think, that's going to face the banks if they deploy this type of technology, particularly in the credit assessment space. And privacy is always going to become a bigger and bigger issue as more and more of this stuff gets collected, especially when a lot of people use banking apps on their phones, and a lot of them ask for people's location. When people click that accept button, the phone can theoretically track where people are going in their day-to-day activities.
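A simple check a bank could run against the kind of demographic bias Tim describes is to compare outcome rates across groups. This is a minimal sketch with invented group labels and figures, not a description of any bank's actual process.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: list of (group, approved) pairs. Returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += int(was_approved)
    return {group: approved[group] / totals[group] for group in totals}

def demographic_parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # (group label, was the credit application approved?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = approval_rates_by_group(sample)
    print("approval rates:", rates)
    print("parity gap:", round(demographic_parity_gap(rates), 2))
```

A large gap does not prove discrimination on its own, but it is the sort of red flag that would prompt a closer look at the features a credit model relies on.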

WH:

That's a really good point, Tim, on some of the changes that are going to be required. To your point on credit assessment, there's a lot of work put in by the banks to ensure that elements of bias are removed from that. So maybe, on the topic of changing skill sets, I'd love to touch on what's going to change for lawyers. As we're thinking about our careers in the legal industry, what do we need to prioritise, and how does this change things? And maybe, Tim, it'd be good to start off with yourself: what's your view on legal careers and how they may or may not change because of generative AI?

TE:

It will definitely change quite a lot, with generative AI becoming a different tool. It's like the invention of the calculator, or in the current finance space, someone using Excel: you have to learn the baseline maths skills to understand what the inputs are, and Excel or a calculator will give you the output, but you have to know how to use that tool as well. So learning to use ChatGPT or a generative AI tool will, in itself, be a skill. I think an easy example for us lawyers is that in law school they teach us how to do legal research: you have to learn how to construct proper queries and use the various sources like LexisNexis or Westlaw; AustLII is one that we've all been taught at university.

And currently, I see that ChatGPT does give an answer that it's very confident in, but there have been a few reports, including one where a University of Michigan law school used ChatGPT to answer questions on four different law exams, I think once in torts, once in evidence, and once in governance, I forgot the fourth one. It did pass the exams, but it consistently scored at the bottom of the scale and placed last in the rankings, and the commentary was that it gave very high-level detail and was very confident, but it didn't provide any nuance.

So that's the big caveat for anyone trying to use ChatGPT as an actual legal research tool. One benefit I do think the tool has at the moment is saving a lot of time. It's very good with its natural language capabilities: you can ask it to fix grammar and draft things, and that would generally save quite a lot of time.

And I think that's going to change the nature of our work as lawyers, where we're going to be thinking more about the strategic side, and more manual tasks, like writing out advice or reviewing documents, will more and more be handled by AI.
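For the grammar-fixing and drafting assistance Tim mentions, a small script along these lines is one plausible way to wire it up. It assumes the pre-1.0 `openai` Python client and an `OPENAI_API_KEY` environment variable; neither is referenced in the episode, and in practice a firm would need to consider confidentiality before sending any client text to an external service.

```python
import os
import openai  # legacy (pre-1.0) client interface

openai.api_key = os.environ["OPENAI_API_KEY"]

def fix_grammar(text):
    """Ask the model to correct grammar and spelling without changing the meaning."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Correct the grammar and spelling of the user's text. "
                        "Do not change its meaning or add new content."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep the output deterministic for editing tasks
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(fix_grammar("the clause are binding on both party's."))
```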

WH:

Thanks, Tim. That's a really interesting point about some of the changing skill sets. And Paul, from your perspective, as you rightfully call out, it's just algorithms, and as a data scientist you must see these changing skill sets a lot. Where do you see some of the changes for legal careers?

PT:

I think, jumping off what Tim was saying, this is making things easier and quicker. So just, you know, to do a summary, to create a document based on just a simple input: these language models are going to be really useful, almost the equivalent of what a search engine has been in the past.

So I think this is going to be a fantastic assistant for everybody, and I think that's probably the main thing I see as a great takeaway here: it's going to bring up the standard of what anyone can do. And that's the same as what happened, coming from my graphic and film background, with Canva, the Australian company, where all of a sudden everyone had access to creating documents that were beautiful. They didn't have to depend on clip art or any other sort of imagery; this was just something you could generate very quickly. You didn't have to have a background in it.

And the same with Squarespace: all of a sudden, you could just create a beautiful website. Here I am sounding like this is sponsored by Squarespace.

It would generate a beautiful website without you having to do anything, no coding or anything. And I feel like it's the same here. The only drawback, of course, is that it just means everything's going to become a little bit more standardised. I think we're going to have fewer differences in what we're potentially writing, and I think that's going to be fascinating to see in this space: how will we actually distinguish between two different people's writing styles if we're all just starting to use the same tool?

WH:

Lots of changes coming up for sure, Paul. And Emina, your thoughts on legal careers?

EB:

Yeah, to start with, just picking up on some common themes that were mentioned by Paul and Tim: generative AI is a tool at the end of the day, and I think it's important to remember that that's what it is. It might be a tool to tackle some of the more repetitive tasks that we face in the legal profession, but I don't think that it's going to be replacing lawyers anytime soon. It just means that lawyers might, without having to do those repetitive tasks, have the opportunity to turn their minds to creative and strategic legal problem solving a lot earlier in their careers.

And an interesting point that I heard at the World Economic Forum this year is that there was a study discussed which looked at nine hundred and fifty occupations.

And in that study, not a single occupation could be totally wiped out by the use of AI. In every single industry, human involvement is still very fundamental.

So I think this just means that we're not going to be replaced anytime soon, and it certainly won't happen to whole occupations at a time. But I think the interesting part is having to reimagine how tasks are handled by us, and restructuring work to handle the involvement of both AI and the elements that cannot be handled by AI and still require human involvement.

And just as a general point in this kind of discussion, I think it's a really exciting time to be starting a career in law. At this moment, lawyers are seeing so many disruptive technologies in the industry: AI, algorithmic governance, blockchain technology, fintech, big data. I think these are going to bring some of the fastest-moving regulatory challenges.

And if you are a young lawyer who has an interest in these technologies and some sort of knowledge about them, I think it's a really good opportunity to get involved and take an active role in shaping the legal landscape.

So, in short, I don't think that junior lawyers should be worried when they're looking at the legal landscape. I think it's just really important to be cognisant of what you as an individual are doing to leverage this AI, because it won't be AI that's going to take your job, but it might be people who can leverage and use it that will.

WH:

Very strong remarks, Emina. Thank you for that. Thank you, Emina; thank you, Tim; thank you, Jeremy; and thank you, Paul. I really appreciate having you on, and thank you to the folks watching and listening. We hope you enjoyed this one, and we will see you at the next one.

Transcript

Will Howe (WH):

Hi everyone and welcome to the Clayton Utz Genative AI vodcast series. I'm your host Will Howe. I lead Clayton Utz data analytics capability where we are building with generative AI technologies. In this series, we explore generative AI and how it impacts the legal sector.

This particular episode is around next generation views. And I'm really excited to have four fantastic colleagues of mine on, with me today to discuss different elements of AI. So, Tim Edstein, Tim is a lawyer in our banking practice. He does a lot of regulatory work and is really interested in the application of generative AI technology to the banking sector.

Emina Besirevic, she's a graduate lawyer who has a keen interest in generative AI in the law and Emina has experience in intellectual property and technology and major projects in construction. We also have Jeremy McCall-Horn, Jeremy was a guest on our previous episode, where we talked about the workplace relations issue. Jeremy, good to have you back. Thanks for being on board again.

And Paul Tuohy has a background in data science, artificial intelligence, and also his unique background in film and design. So, he's a consultant in the forensic technology services, division and has a keen interest in this as well. So welcome, guys. Really excited to have you all for this episode and we're going to cover some really interesting ground on governments of AI, on Australia's national interest, on ethics.

And importantly, what does this mean for our career path as a lawyer? So maybe we start with governance and now Emina, what do we see in terms of how do we actually govern this stuff?

Emina Besirevic (EB):

Well, Will, if we stop this analysis in the private private sector, there are really two emerging approaches to how generative AI is being regulated in the space. So, on the one hand, you have companies such as Open AI, that are self governing in the space through limited release strategies, monitored use of models, and controlled access to their commercial products like Dali E. And on the other hand, you have other companies like Stability AI that really believe that these models should be openly released to democratise access. So, stability AI has, for instance, open source its models, which allows developers to access the code, and start using it with little to no controls.

And as for the public sector, there is still relatively little to no regulation in Australia to govern the evolving landscape of generative AI.

Australia seems to be approaching the regulation of AI through soft law principle based approach, so the Department of Industry science, energy, and resources, has released a set of voluntary ethical principles, which businesses and governments can choose to adopt in the development, the design, and the use of AI. However, this certainly isn't the case globally, and there are a few key developments that happened over the last year that I think we should touch on. So, for example, in December 2022, the Council of the European Union adopted its common position on the Artificial Intelligence Act, which categorises AI by the risk that they pose.

And then they impose highly prescriptive requirements on the systems that they consider to be high risk. And this legislation is largely extra territorial, in the sense that they will apply to systems that are used both in the EU, but also foreign systems whose outputs go to the EU. In the U. S.

There has been developments in Congress to the American Data Privacy and Protection Act, And this legislation seeks to regulate algorithmic decision making, which is essentially a system that analyzes large amounts of data, to infer correlations.

One last example, earlier this year, China, in China, the cyber space administration enacted regulations on deep synthesis technology, which includes deepfakes, and the rules place significant restrictions on AI generated media, which requires them to be identified as AI generated through things such as watermarks.

So, in this context, it may just be an advantage for Australia to review the approach that has been taken globally, and then see what works, see what inadvertently might stifle innovation, and be really selective in the kind of In the kind of inspiration we take, I guess, from other states in designing our own regulation and approach to AI.

WH:

Sounds like there's a lot going on in different jurisdictions and like you say it may be prudent to actually see how this stuff plays out and so Tim, I know that you spend a lot of your time working with the banking sector and that this must be a priority for them. What do you see going on in the FS space?

Tim Edstein (TE):

I think in the FS space right now, especially as amino has mentioned about the two different type of models by private sector players, on AI, the second model being the democratization of AI, I see that as a bigger threat to the banks rather than a benefit as if let's say anyone can just buy any AI tool and capabilities, there's a huge potential of people writing malicious, for example, a malicious hacking code, or running a malicious call center script, pretending to be a bank staff that if you could deploy other technologies, such as generated voice, where it's, for example, you're having someone having an Australian accent calling into Australia, and the AI is just writing a script, an automated script that can help the scammer, you know, identify and respond to a client it's very hard for that client to think, that person to think, this is not my bank calling me, versus currently now most people pick up a scam from someone with axa from different jurisdiction, or if a text message, whether spelling or grammar is incorrect, like very lower level stuff.

And as this stuff's getting cheaper, and the cost of AI is getting cheaper at these tools.

It's a lot the cost benefit analysis for a malicious organized crime group using this it'll be easier for them to deploy.

WH:

Yeah, that's a good point, Tim, as much as we see lots of great opportunities, unfortunately there's opportunities for the scammers and technology as well too. And so the banks will really need to think about how they stay ahead of that and actually we touch on this in an episode of this podcast on fraud And now Paul, where do you see this governance piece really kicking off?

Paul Tuohy (PT):

Yeah, I think the major concerns here are these guardrails that have been put in place to prevent anything rather than to respond to anything. And I think a good example to sort of think about is is comparing this to genetic research.

That's just something that's just come top of my mind is that we've sort of taken the risk of what what that could potentially do to society and to culture, and we've we've taken measures to prevent that. So I think it's gonna be a major thing to to to be considering, especially, you know, that we've been talking about how of this technology is quite open. It's open source, so there's a lot of different actors that can get involved. And that has a lot of benefits as well. We're considering here a firm about how to implement a lot of these technologies locally.

But I think an important factor for governance is just where is this the scope of this? How do we how do we limit the application of the AI, especially when it comes to access to different data sources and databases. We just want to sort of make sure that we're all just slowing the progress down even though we want things to accelerate as quickly as possible.

WH:

Let's get insights, Paul. Thanks, and Jeremy, what thoughts do you have on the governance piece?

Jeremy McCall-Horn (JMH):

Yeah. So I've been touching well, springing from what Paul was just mentioning on, you know, putting the brakes on this sort of thing.

I was reading an article earlier about this is sort of like the new nuclear arms race almost, and people likening it to, you know, hold on, we really need to think about what we're doing and, you know, put the brakes and take us slowly because if we don't, you know, we're gonna have these malicious actors who can potentially use it for all these malicious nefarious purposes.

So I think and the way the technology works as well, obviously, you know, as it gets better, it's sort of to the nth degree sort of betters itself in a way as well. I mean, that's the fear with AI, isn't it that it will just suddenly take off and we won't be able to controll it. But I I think there's definitely gonna be some interesting developments over the next few years for sure.

Australia as a minor, I mean, as mentioned, has taken more of a soft approach so far.

But for example, you know, I saw an article the other day reporting that the labor MP, Julian Hill. He actually used his first parliamentary speech to talk about, you know, AI and the potential impacts, and he doesn't sound too keen on it. But it's, yeah, So he wants to develop a white paper into this space and see what we can do from the regulatory perspective and take seriously? Because I think, obviously, that's what we need to do.

WH:

Now, it's interesting to talk about Australian government and maybe actually that's a good opportunity to move on to our national strategic interest in this. And Jeremy, maybe we'll stick with you.

There's obviously a lot of going on in the world and big developments states and in China for example, what's the strategic and sovereign play for Australia in all of this?

JMH:

Yeah, so Obviously, again, this sort of technology is gonna change the way the world works and, you know, not only from a commercial perspective, but for that that sovereign national interest perspective as well from national security, defense, and all those fun things. Just some numbers, just to give some numbers around the scale of what AI is gonna impact the world with, you know.

Is expected the global GDP is going to grow by something like fifteen trillion dollars by twenty thirty. Productivity is gonna increase by forty percent The number of AI startups in themselves has increased by fourteen times. I think it was over the last two decades.

So it is a huge area which, you know, to maintain our our sovereign interest, we're going to have to see government, investment, development, and protections put in place.

The federal government did invest, I think it was thirty million dollars to advance AI machine learning.

There are projects from CSIRO and stands Australia to try and make sure that our national interests are part of the conversation at a global level to make sure our cultural and national interests being taken into account in the development of these things.

And another really interesting point in which is something that has come up and I'm sure will come up again in this podcast is the the data side and and the privacy of citizens.

As Australian citizens, we tend to I think we tend to take our privacy quite seriously, and we really, you know, we have what we're seeing from the attorney generals department at the moment and a review or report into the privacy act, and how that might need to be reviewed to make sure that as a sovereign nation, our data is being protected as well.

WH:

Right? There's a lot to that point. Our data flowing right offshore.

That okay in some ways it's okay, but is there risk that sort of need to be managed with that? And I mean I'd love to get your thoughts in this space as well.

EB:

Yeah, absolutely. So, just jumping off those figures that Jeremy raised. It really looks like Australia is investing quite a lot in this space. But I think it's important to remember that this is quite new. And up until now, our involvement in the AI space has been quite opportunistic.

So, to kind of couch that in what that means is that just last year, the global AI readiness index, which is conducted by Oxford Insights, ranked Australia tenth in the world in terms of readiness to adopt AI and the advantages of AI in its in its governance, and government led initiatives.

And a big part of this is that Australia's local industry is less developed and competitive than other areas of the world. And, for instance, in comparison to our other high achieving economies around the world, like the U. S. Where they have the national artificial intelligence initiative or through Germany, through their artificial intelligence strategy, where the emphasis in these countries is really on investing in the development of world class intellectual property through fundamental AI research rather than just translating what is already known. So I think that the coordination and national priorities in our in AI can really help to drive economic impact in Australia and really should be a priority.

WH:

Thanks, Emina. And Tim, I know in the financial services and banking space, has always been a rough focus on our domestic capability. Do you see the same thing playing out in this AI space?

TE: Yeah, definitely. Of the major banks, Westpac has recently announced that they will be deploying a CHAPGPT like solution into their banks.

And this type of solution will also drive, for example, an easier form for the compliance teams, where there's tar sets currently that's a bit more manual, where there's a lot of review or monitoring of, for example, transactions. AI would definitely be assisting that. That's based for the moment. And I see in Australia, a lot of technology investments has been historically been driven by the big four banks.

And that's, and as you can see in Australia, where our banking system has led up to instantaneous transaction payment terms such as PRD, and now page is going to be released. So that's just one example of just tech investment that's really been driven by the banking sector rather than just generally in other sectors in the Australian economy. And that's just obviously my banking bias towards it, but that's just my balance towards AI will be developed in Australia.

WH:

Thanks, Tim. Paul, what's your thoughts around Australia's national interests here?

PT:

Well, I know I did read that we've developed roughly, I think, four four centers that are called AI digital capability centers.

They've been they've been created nationally that we can help connect small and medium sized businesses to, you know, reskill up and and then figure out, I guess, new ways to provide these services with AI.

I I think there's a there's a lot of increasing like, I mean, recently, there's been a lot probably from since two thousand nineteen and then from two thousand twenty too probably where it really started to pick up.

You know, I I think there needs to be a lot more funding going into this space. But it it kinda reminds me that this sort of mirrors the same sort of fascination that we had with cryptocurrency and the blockchain only recently. And it feels like that fit like, that that is all faded away. And then all of a sudden, we've picked up this interest and taken on board with the AI. And, I mean, we got to remind ourselves that, like, AI is I suppose, you know, fundamentally a lot algorithms and and sort of black box learning. So it's been around for quite a while, and it's just sort of, okay, where exactly are we I guess developing these new skills, yeah.

WH:

That's a great insight, Paul, and obviously, as a data scientist, you rightfully call evidence, just an algorithm behind the scenes, right? But there's implications of these algorithms. I love to touch maybe on the ethics point next and obviously as we roll those algorithms out, it's really important in a safe way, in a sustainable way and in a way that reflects the ethical priorities that the Australian society would want.

What sort of considerations do you see us having around ethics and AI?

PT:

Well, we we would touch upon data quite a few times, especially, you know, us as a sovereign nation. We wanna protecting our own data, you know, domestically, and we've we've had a lot of concerns and issues around that. And and into It's a catch point too because a lot of these AI models are built on the fact that the only way you can really improve them is giving them more data.

And so it's like, okay. Where are we gonna get this data from? Are we able to sort of protect this data?

Which which then, you know, comes into the the question of what's the ethics of what data can you actually provide? Where can we find ownership of this data? Who owns this data? And in particular, you know, are we able to sort of copyright the import data itself, you you know, if if an import is not just sort of like training data, but also like an input like a prompt to generate an image or to generate a piece of text. Are we able to sort of consider that itself can be protected?

And then furthermore, you know, is there is the reach to be able to copyright what that output is?

Especially when that output is, you know, potentially based on other people's work. It's it's it's certainly a question to be considering.

WH:

There's there's quite a bit to unpack in there, maybe Emina might go to you next to sort of have a bit of a think about the ethical points.

EB:

Yeah, absolutely, I agree with Paul that data is a big part of the debate on ethics. Taking a step back though, one thing that I'm interested in exploring it lately is when we're looking at building ethical AI models is thinking about how they're actually being developed and distinguishing between company standards standards and policies on AI, and community standards. So, absolutely, it's really common practice right now for these really big tech companies. To have their own ethical policies. So, your Microsoft Intel and IBM, all have really well developed and publish ethical policies on the development of their their AI. But the question here becomes is are these kind of company policies enough in the ethical space? And in answering this, I think it's always interesting to turn our mind to back in twenty twenty when Microsoft came up with a policy that it would not release its facial recognition technology and sell it on to police departments.

And so at the time, their policies were really well documented and they govern the use of these really powerful facial recognition AI.

However, in saying this, there were also not community wide standards, so then you had companies that were happy to sell on similar technologies, and this is exactly what ClearView did. They developed a similar technology and soldered onto police agencies over two thousand in the U. S. That went on to use a technology.

And I raise this because it's an example of what happens when there are company specific policies, but no community wide standards.

And what happens is, while company A might have really strong ethical underpinnings to its development of generative AI, or otherwise, if company B doesn't, then society is left in a position where it's as if no one has it. And Of course, the follow-up question to that is, okay, well, what are the key ethical concerns with generative AI specifically?

And I agree with my colleagues that the biggest one is bias in data. And I think this problem isn't inherently linked with the technology.

Itself, but because this technology feeds and depends on large amounts of data to learn, if the data that they are provided is biased, that leads us into a big problem.

Another problem is also the accuracy of the data. So for those of us who have used ChatGPT, It's really easy to assume that it's quite reliable, and truthful in its answers, but dig a bit deeper, and you'll find that that's not always the case. And data, of course, can be poisoned, it can be false, it can be duplicated. There are quite a lot of problems that can come out of, data that isn't good.

But a positive to this angle is that toolkits that can detect and possibly mitigate the effects of this kind of data, are already being imagined. So there's kind of two approaches to it, and the first is reinforcement reinforcement learning through human feedback. Where humans are involved in responding to the data and feeding that back in. But the other view is also quite a bit more optimistic where we can develop tools that for the AI, where the AI does it itself.

So in summary, I think that we need to be really cognizant of these kinds of issues. When developing the AI, so that we're not dealing with the fire at the end and just trying to put it out.

WH:

And Emina, that's some very insightful thoughts on bias and maybe Jeremy, I'd love your thoughts because obviously in the workplace relation space, there's a lot of work that goes into thinking around bias and how to make sure that we remove that. What's your thoughts on sort of the ethics of AI from a workplace angle, how is that being handled?

JMH:

Yeah. So definitely agree with Paul and Emina.

The data and, you know, making sure that these systems are developed in such a way that they are going to act ethically, probably two of the biggest points.

I suppose another point though to consider would possibly be the the transparency and the explainability, and it sort of comes back to what Emina was saying with the development of these technologies as well in that they need to be developed in a way where you can check them, you can see how the decisions are being made from the data because it's only when you have that level of oversight that you're able to rely on the AI to make a decision, and when it does make a decision that hopefully, maybe a human will pick up on if it's a bit of a strange decision, you're able to go back into the system and say, well, This is why the AI has made the decision that way based on the data and its algorithm that is programmed into it.

And it's through that constant feedback loop that you're able to actually improve these things to make sure, and this is you know, you see this repeated throughout a lot of different ethics frameworks on AI around the world at the moment is that The the primary consideration is making sure that these systems act in the public good and for the benefit of humankind.

Holistically. So by having that transparency piece in place, you can actually make sure that these things are being checked and you know, the accountability as well, I think.

It's going to be an interesting question where accountability and liability lie for the decisions that get made, especially by these automated decision makers.

You know, you can't really hold the AI itself liable. So who is liable? Is it the company that created and programmed the algorithm? Is it, for example, the employer that's relied on the decision to terminate their employee, or is it the company that sold the data to the algorithm in the first place? There are a lot of different thoughts about where that accountability might lie, but as far as our ethical obligations go, someone does need to be accountable at the end of the day, because that's how you make sure things are done properly and for the good of humanity.
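As a loose illustration of the explainability Jeremy describes, one simple approach is to use a model whose individual decisions can be decomposed into per-feature contributions that a human can review. The sketch below does this for a logistic regression; the feature names, data and decision context are entirely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: columns might be years of service,
# performance score and absence days for an HR-style decision.
feature_names = ["years_of_service", "performance_score", "absence_days"]
X = np.array([[1, 60, 20], [8, 85, 2], [3, 70, 10], [10, 90, 1]])
y = np.array([0, 1, 0, 1])  # 1 = favourable outcome in this toy example

model = LogisticRegression().fit(X, y)

def explain_decision(model, x, names):
    """Return each feature's contribution (coefficient * value) to the score."""
    contributions = model.coef_[0] * x
    return sorted(zip(names, contributions), key=lambda p: abs(p[1]), reverse=True)

# Explain one individual decision so a human can review why it was made
candidate = np.array([2, 65, 15])
for name, contribution in explain_decision(model, candidate, feature_names):
    print(f"{name}: {contribution:+.2f}")
print("prediction:", model.predict(candidate.reshape(1, -1))[0])
```

Simple linear models trade predictive power for this kind of traceability; more complex systems typically need dedicated explanation tooling to achieve the oversight described above.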

WH:

Your point on explainability is a really interesting one. Obviously, with some of this tech we're early in the game, so it's a little bit difficult sometimes to get it to explain how it's making a decision. I know, Tim, in the banking space, being able to explain how decisions were arrived at is extremely important. What are your thoughts on the ethics space, Tim?

TE:

On the ethics space and AI, to go back to Emina's point about unconscious bias and giving the AI a feedback loop: we could use credit assessments as a good example, deciding who's a high credit risk and who's a low credit risk.

When I was doing a banking law course at university, I had a professor who always brought up that the biggest threat to the future of banking, credit and providing people services is that, with so much data out there, it's very easy for an organization to figure out someone's sensitive personal information from pure data alone. You can pick up on certain habits, whether that's their religion, or, if you look at transactional data, their spending and shopping habits: whether they gamble a lot, whether they buy healthy foods or unhealthy foods. And in our sector, a bank could theoretically discriminate against certain people by their ethnic origin, their cultural origin or their religious beliefs, just basing it off certain habits that someone has.

So, an example would be if someone goes to church on a Sunday and they wear a Fitbit, and it geotracks them going to the same location every week. You can tell this person is of a certain religion, and the same goes for a person attending any religious organization or belonging to any religious group.

And it's a big problem if an AI intrinsically thinks, okay, a person of this certain demographic is a high credit risk. Even though this person may have a higher income for now, does the AI's bias say, we're not going to lend them this amount, because people of this demographic have popped up before as a red marker for defaulting on credit? That's always going to be a big issue facing the banks if they deploy this type of technology, particularly in the credit assessment space. And privacy is going to become a bigger and bigger issue as more and more of this data gets collected, especially with a lot of people using banking apps on their phones that ask for their location; once people tap accept, the phone can theoretically track where they are going in their day-to-day activities.
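As a very rough sketch of how the concern Tim raises might be checked for in practice, the snippet below compares approval rates across groups derived from a potential proxy attribute. The dataset, column names and 80% threshold are all hypothetical, loosely echoing the "four-fifths" rule of thumb sometimes used in discrimination analysis; a real review would go well beyond this.

```python
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group, to surface possible proxy discrimination."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_flag(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag if the lowest group's approval rate falls below 80% of the highest."""
    return (rates.min() / rates.max()) < threshold

# Hypothetical loan decisions, with a group inferred from spending patterns
decisions = pd.DataFrame({
    "inferred_group": ["A", "A", "A", "B", "B", "B"],
    "approved":       [1,   1,   1,   1,   0,   0],
})

rates = approval_rate_by_group(decisions, "inferred_group", "approved")
print(rates)
print("potential disparate impact:", disparate_impact_flag(rates))
```

A flag like this doesn't prove discrimination; it simply tells a bank where to look more closely at how the model is using proxy features.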

WH:

That's a really good point, Tim, on some of the changes that are going to be required. On credit assessment, there's a lot of work put in by the banks to ensure that elements of bias are removed. And on the topic of changing skill sets, I'd love to touch on what's going to change for lawyers. As we think about our careers in the legal industry, what do we need to prioritize and how does this change things? Tim, it'd be good to start with you: what's your view on legal careers and how they may or may not change because of generative AI?

TE:

It will definitely change quite a lot, with generative AI becoming another tool. It's like the invention of the calculator, or Excel in the current finance space: you have to learn the baseline maths skills to understand what the inputs are, and Excel or a calculator will give you the output, but you have to know how to use that tool as well. So learning to use ChatGPT or a generative AI tool will, in itself, be a skill. An easy example for us lawyers is that in law school they teach us how to do legal research: you have to know how to construct queries and use the various sources like LexisNexis, Westlaw or AustLII, which we've all used at university.

And currently, ChatGPT does give answers that it's very confident in, but there have been a few reports, including one from a University of Michigan law school exercise where they used ChatGPT to answer questions on four different law exams; I think one was in torts, one in evidence and one in governance, and I forget the fourth. It did pass the exams, but it consistently scored at the bottom of the scale and placed last in the rankings, and the commentary was that it gave very high-level detail and was very confident, but it didn't provide any nuance.

That's the big caveat for anyone trying to use ChatGPT as an actual legal research tool. One benefit I do think the tool has at the moment is saving a lot of time. It's very good with its natural language capabilities: you can ask it to fix grammar and draft things, and that generally saves quite a lot of time.

And I think that's going to change the nature of our work, where we're going to be thinking more about the strategic side, and more manual tasks, like writing out advice or reviewing documents, will increasingly be handled by AI.

WH:

Thanks, Tim. That's a really interesting point about some of the changing skill sets. And Paul, from your perspective, as you rightfully call out, it's just algorithms. And for yourself as a data scientist, you must see some of these different changing skill sets a lot. Where do you see some of the changes for legal careers?

PT:

I think, to jump off what Tim was saying, this is making things easier and quicker. Just to do a summary, or to create a document based on a simple input, these language models are going to be really useful, almost the equivalent of what a search engine has been in the past.

So I think this is going to be a fantastic assistant for everybody, and that's probably the main takeaway here: it's going to bring up the standard of what anyone can do. Coming from my graphic and film background, it's the same as what happened with Canva, the Australian company, where all of a sudden everyone had access to creating documents that were beautiful. They didn't have to depend on clip art or any other sort of imagery; this was something you could generate very quickly, without having to have a background in it.

And the same with Squarespace as well: all of a sudden, you could just create a beautiful website. Here I am sounding like this is sponsored by Squarespace.

It would generate a beautiful website without you having to do anything, no coding or anything, and I feel like it's the same here. The only drawback, of course, is that it means everything's going to become a little bit more standardized. I think we're going to have fewer differences in how we're writing, and that's going to be fascinating to see in this space: how will we actually tell the difference between two different people's writing styles if we're all starting to use the same tool?

WH:

Lots of changes coming up for sure, Paul. And Emina, your thoughts on legal careers?

EB:

Yeah, to start with, just picking up on some common themes mentioned by Paul and Tim: generative AI is a tool at the end of the day, and I think it's important to remember that that's what it is. It might be a tool to tackle some of the more repetitive tasks that we face in the legal profession, but I don't think it's going to be replacing lawyers anytime soon. It just means that lawyers, without having to do those repetitive tasks, might have the opportunity to turn their minds to creative and strategic legal problem solving a lot earlier in their careers.

And an interesting point that I heard from the World Economic Forum this year is a study that was discussed, which looked at 950 occupations.

And in that study, not a single occupation could be totally wiped out by the use of AI. In every single industry, human involvement is still very fundamental.

So I think this just means that we're not going to be replaced anytime soon, and it certainly won't happen to whole occupations at a time. But the interesting part is having to reimagine how tasks are handled by us and restructure work to accommodate both AI and the elements that cannot be handled by AI and still require human involvement.

And just as a general point in this kind of discussion, I think it's a really exciting time to be starting a career in law. At this moment, lawyers are seeing so many disruptive technologies in the industry: AI, algorithmic governance, blockchain technology, fintech, big data. I think these are going to bring some of the fastest-moving regulatory challenges.

And if you are a young lawyer who has an interest in these technologies and some knowledge about them, I think it's a really good opportunity to get involved and take an active role in shaping the legal landscape.

So, in short, I don't think junior lawyers should be worried when they're looking at the legal landscape. I think it's just really important to be cognizant of what you as an individual are doing to leverage this AI, because it won't be AI that takes your job, but it might be the people who can leverage and use it that will.

WH:

Very strong remarks, Emina, thank you for that. Thank you, Emina, Tim, Jeremy and Paul; I really appreciated having you on. And thank you to the folks watching and listening; we hope you enjoyed this episode and we will see you at the next one.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.