Video: The AI Edge: Transforming Financial Crime Compliance Practices for Financial Institutions | Duration: 3376s | Summary: The AI Edge: Transforming Financial Crime Compliance Practices for Financial Institutions
Transcript for "The AI Edge: Transforming Financial Crime Compliance Practices for Financial Institutions":
Hello, everyone, and welcome. My name is Ruth, and I'll be your moderator for today's session. Here at the Institute of Financial Integrity, we're committed to empowering the professional community to protect the global financial system through cutting-edge training and education. Thank you all for joining, and we hope you gain valuable insights from today's discussion. Before we dive in, let me quickly go over a few housekeeping items. Please note today's webinar is being recorded, and we'll share the link with you afterwards so you can revisit it or share it with your colleagues. Please feel free to submit questions through the Q&A tab on your screen; we'll try to address as many as time permits during our Q&A segment at the end. And thank you to those who submitted questions ahead of time. We've worked some of them into our discussion, and we'll be sure to cover as many as possible. Over the next hour, we'll be delving into how AI is being applied to financial crime compliance within financial institutions. We can see AI transforming industries across the board, from health care to manufacturing to marketing, and financial services is no exception. In particular, the world of financial crime compliance is evolving at an unprecedented rate. Today, we'll explore how AI is reshaping the way institutions detect fraud, manage compliance risks, and meet regulatory expectations. But with these advancements come significant challenges, ethical, operational, and even cultural, that we must navigate very carefully. This conversation couldn't be more important as financial crime becomes more sophisticated. Criminals are leveraging the same AI tools we use to protect our institutions, and it's up to us to stay one step ahead. AI can provide powerful solutions, but only when implemented responsibly. Today's panel will dive into the nuances of this topic, looking at how AI can be a powerful ally, and, if we're not careful, a double-edged sword. We have a great lineup of speakers today who will cover AI advancements, regulatory expectations, and the challenges we face as we adopt these tools in financial crime compliance. So without further ado, let me introduce our panelists for today. With us today, we have Shannon Barnes. He's the chief product officer at IFI, where he leads technical innovation, strategy, and execution for our proprietary Dolphin platform and the Ask Fin generative AI compliance assistant. Shannon will be discussing how AI and related technologies such as machine learning and automation can enhance financial crime compliance, including the current state of implementation in the industry. Ajit Tharakin is the chief executive officer at Consilience, home of the first federated learning technology for financial crime detection. Ajit will explore the challenges and ethical considerations of AI implementations, including data privacy, and he'll introduce us to the machine learning technique of federated learning and how it addresses key challenges. And last but not least, we have Catherine Woods. She's an associate managing director at IFI and an industry expert in financial crime compliance and emerging technology. Catherine will examine the regulatory scrutiny on AI implementations, including a review of recent case studies featuring enforcement actions against noncompliant practices. Please feel free to click on the panelists tab if you'd like to explore more about our panelists today.
Now without further ado, let's dive into the first part of our discussion. Shannon, you've been working closely with AI technologies in this space, so can you please kick us off by setting the scene as to where these new technologies are at and how they're being implemented in the industry today? Certainly can, Ruth. Thanks for the introduction, and I'm excited to be part of such a great panel for this discussion. First off, I think it's helpful for us to take some time to set a baseline and explore some of the concepts we're going to be discussing today. AI, firstly, is an umbrella term. Generative AI, which we're all becoming more and more familiar with, sits at the intersection of machine learning and natural language processing, and that's an important distinction, because AI is not synonymous with generative AI. The language and the specificity with which we talk about these things actually matter. So when someone asks you, "Are you using AI?", embedded in that question is a broader ask: are you using generative AI? Are you using machine learning or natural language processing? Are you using deep learning, neural networks, large language models? Each has distinct characteristics, but they overlap, and all are subsets of AI. And AI is not new, which is the other thing. It may be the topic of conversation nowadays, but it's been around since the sixties, since the release of the first chatbot, ELIZA. ELIZA didn't use generative AI, though. Most institutions have been using machine learning in the financial crime space for decades, and a recent McKinsey survey found that 55% of respondents had machine learning deployed in some capacity within their institution. Generative AI, at the other end of the spectrum, is the new kid on the block. The underlying technology emerged around 2014, became more widely adopted in 2018, and went really mainstream in 2022. The 2018 development was the release of GPT-1, OpenAI's first model in that family, and 2022 brought the release of ChatGPT, built on the later GPT-3.5 model. We're certainly at a tipping point, because generative AI is everywhere. It's on our phones, it's powering most of the applications we interact with, and this is only the beginning. Generative AI's explosion really comes down to two things: significant enhancements in natural language processing, and advances in hardware in terms of compute. We hear the term GPU thrown around, and firms like NVIDIA are key drivers behind that. But if I wanted to simplify most of what I've just mentioned: what we've seen is a move from rules-based systems, think transaction alerting, think sanctions screening, "if you see this, do this," to a capability that is able to make these connections itself. Think of it this way: you have two data points, you want to fuse them together and understand the differences between them, and the model is doing a lot of that thought work itself. There are still human-written algorithms driving this, but it is a big change, because LLMs are really designed to make stuff up. One other point I'd make is that the rate of adoption with ChatGPT was the fastest of any product we've seen in history.
It went from a small group of users to 100 million users within a couple of months, which was completely unprecedented. The next point I'd make, beyond that broader background, is that AI is not a magic wand. It's immature, it's very error prone, and its greatest strength is its greatest weakness: it's generative, and it connects the dots, but to do that it has to make assumptions. The way of explaining that is that generative AI is nondeterministic. That makes life very interesting for engineers, product managers, and thought leaders in the space, because the outcomes can't be guaranteed. We want it to make stuff up between certain parameters of facts and verifiable data; however, that's not always the case, and it doesn't always get it right. The good news is the pace of improvement. The improvements we're seeing in generative AI are faster than anything we've ever seen. Within months of developing our own capabilities, leveraging market-leading technologies such as Microsoft's and OpenAI's ChatGPT, the models we were using had improved significantly. Beyond that, they'd also released four to five new models within a twelve-month span, which is incredible. So, closing out my introductory comments: the biggest strength is also the biggest weakness, assumptions and unpredictability. The most important considerations really come down to the data you're working with, the state and cleanliness of that data, the model you choose to use, and, by model, again, I'm referring to things like GPT-3.5, though you may have also heard of Meta's Llama, et cetera, and also the guardrails that you put in place. And I think that's a good lead-in to our next discussion. When I say guardrails, I'm referring to blind spots. Generative AI, and AI more generally, really does have challenges around fairness, reliability and safety, transparency, inclusiveness, accountability, and privacy. That has to be at the forefront of the minds of the people building these tools, deploying these tools, and confirming that they work for the use case they're employed against, because we don't want these tools reinforcing badness. Back to you, Ruth.
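To make concrete the rules-based "if you see this, do this" logic Shannon contrasts with generative approaches, here is a minimal, hypothetical sketch of a hand-written transaction-alerting rule. The thresholds, field names, and structuring pattern are invented for illustration, not any institution's actual configuration:

```python
from datetime import datetime, timedelta

def structuring_alert(deposits, threshold=10_000, window=timedelta(hours=24), min_count=3):
    """Raise an alert if several sub-threshold cash deposits cluster inside
    a rolling window and together exceed the reporting threshold."""
    small = sorted((d for d in deposits if d["amount"] < threshold),
                   key=lambda d: d["timestamp"])
    for i, first in enumerate(small):
        cluster = [d for d in small[i:]
                   if d["timestamp"] - first["timestamp"] <= window]
        if len(cluster) >= min_count and sum(d["amount"] for d in cluster) >= threshold:
            return True  # "if you see this, do this": raise an alert
    return False

# e.g. three $4,000 deposits within a day trips the rule:
t0 = datetime(2024, 1, 1, 9, 0)
txns = [{"amount": 4_000, "timestamp": t0 + timedelta(hours=h)} for h in (0, 5, 10)]
assert structuring_alert(txns)
```

Every condition here had to be written, tuned, and maintained by a human; the shift Shannon describes is toward models that learn such connections from the data itself.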
Thank you, Shannon. It's really helpful to lay that foundation and help us understand the broader picture of how AI is being integrated into financial crime compliance. Now that we've explored the opportunities of AI, of course, there are also challenges. So, Ajit, you've done extensive work on tackling the issues that arise when implementing AI, especially in terms of data privacy and security. Can you please share your thoughts on some of these key challenges and your solutions? Yeah, thank you, Ruth, and hello, everyone. So everyone's talking about AI models. To have an effective AI model, you need data to train it, and you need good training data. When it comes to regulated industries such as financial services, health care, et cetera, getting access to this data is extremely difficult, especially transaction data from banks. This is highly confidential. It's the bank's customers' transactions; it could be my transactions, or yours. Obviously, you don't want this floating around on the Internet. And to pause there: we're talking about data, and there are tons of data protection and privacy laws. If you look at the stats, some 160 countries have adopted comprehensive data protection laws, with another six adopting them just last year, and 137 UN member states have adopted comprehensive data protection and privacy laws. These laws apply to personal information held both electronically and physically, and they apply to both private and government bodies. In nearly every country, there is a data protection commission that oversees and enforces these laws, and most of them are legally independent. In addition, 30 more countries have data privacy bills pending right now. So the point I'm trying to make is that data is not only private and highly confidential; there are also significant laws and protections around it. The other thing to note is data breaches. In 2023, 80% of data breaches involved data stored in the cloud, and for the first time, the financial services industry overtook health care for the most data breaches. With this combination of regulations, data privacy laws, and the potential for data breaches, getting access to data becomes extremely difficult and challenging. So how do you train a model on data that's sitting at various financial institutions? If we were to go and ask these banks to give me all their transaction data so that I could put it into a single data pool to train my model, it's probably not going to go very well. So there is a concept in artificial intelligence called federated machine learning. The concept is: you keep the data where it is. Do not move the data. Rather, move the model to the data. The future, at least to me, is that we move AI models, but we don't move data. How does this work? We deploy the model to where the data is, within the bank's network, behind their firewalls, possibly in an air-gapped environment if that's needed, and the model runs locally, training on the data locally. The output of that is what you call a locally trained model. A locally trained model doesn't contain any bank data or customer data; it just contains the patterns and the weights that the model found while training on that dataset. You could do the same thing at the next bank and create a locally trained model there. And you can take these locally trained models out of the bank, because there's no PII and no bank data in them, and, through a process called federation, bring them together into a champion federated model. This would be the equivalent of having gotten the datasets from both banks into a single data pool and training the model there. So federated learning is a very innovative way to solve the problem of data privacy while still giving the model the training it requires to get smarter at detecting financial crime. Back to you, Ruth.
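As a rough illustration of the federation step Ajit describes, here is a minimal sketch of federated averaging in Python. It assumes each bank exports only its locally trained weights; the weighting scheme and the numbers are invented for the example, and production systems add secure aggregation and many rounds of training:

```python
import numpy as np

def federate(local_weights, sample_counts):
    """Average locally trained model weights into one 'champion' federated
    model, weighted by each bank's training-set size. Only weights cross
    the bank's firewall -- never transactions or PII."""
    total = sum(sample_counts)
    return sum((n / total) * np.asarray(w)
               for w, n in zip(local_weights, sample_counts))

# e.g. two banks each train the same model architecture locally and
# export only the resulting weight vectors:
bank_a = np.array([0.20, -1.10, 0.70])   # trained behind bank A's firewall
bank_b = np.array([0.40, -0.90, 0.50])   # trained behind bank B's firewall
champion = federate([bank_a, bank_b], sample_counts=[80_000, 20_000])
print(champion)  # the federated model, ready to redeploy to each bank
```

The champion model captures patterns learned from both datasets, which is the equivalent, as Ajit puts it, of pooling the data, without the data ever leaving either bank.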
Thank you, Ajit. It's really exciting to see viable solutions emerging to some of the challenges of deploying AI in financial crime compliance. Now, we've received a follow-up question for you from Mohammed: what are some examples of AI hallucinations in financial crime compliance? I don't know of any specifically, because the models that we work on are not gen AI, and hallucinations typically happen with generative AI models. The thing with gen AI is that it trains on certain data, but the AI does not know whether that data is actually true or false. It just sees data as data. So if the training data contains fake data, or data that's not true, the model can then output things that are untrue. In fact, there is a case where a personal injury lawyer filed a lawsuit in New York and used ChatGPT to create citations. The judge looked at it and found that a lot of what ChatGPT had generated was simply not true, didn't even exist, even though it included links to citations and to law that said it existed. So, depending on the dataset you're using and the way the gen AI interprets it, you can end up with cases where the model can't really distinguish between something that's true and something that's not. Thank you, Ajit. Shannon, would you like to share your thoughts on that question as well? Yeah, certainly. Some of those hallucination scenarios can be very funny, but obviously the consequences can be very serious. One of my favorites, early in the Gemini days, was where a user asked about crossing the English Channel but didn't say swimming; they asked how to cross the channel on foot, and the model responded with, essentially, bad directions giving a bad outcome. And there are definitely rules you can put in place. Taking a step back, I should mention that a lot of my most recent experience in this space is implementing generative AI, so Ajit and I come from two different worlds in this way; there's lots of crossover, but I think we can talk to these issues in a very complementary way. There are things that are really impressive in what generative AI does. For example, if I ask what the model's view is on a particular individual, I've been really impressed when it comes back and says, "Hey, I'm an AI model; what you're asking me is really outside of my jurisdiction, and I'm not going to provide a response or weigh in on that." That's really great. What's happening there is responsible AI on the part of the firms developing these models: they're training them on what is good, what is bad, and where not to go. But as Ajit was saying, it all comes down to the data they're trained on. I was reading a statistic the other day about the training of GPT-4o; they use the term petaflops to measure the scale of the compute, but it's essentially like running five supercomputers for about three months to train the model over the content they feed it. It will find these items, and then, as I mentioned in my introductory comments, it tries to draw connections between them and make assumptions about them, and it just gets things wrong sometimes. Sentiment analysis, for example, is something a lot of these models don't perform particularly well at. So it's all about iteration, testing, and implementation. You can do a lot from the system prompt, which is really what engineers are going to own, but you can also do a lot as a user interacting with these tools: say, "Here's an example of what good looks like. Here's an example of what bad looks like. Don't go off topic. Stay on this documentation."
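As a hypothetical illustration of the system-prompt guardrails and "good looks like / bad looks like" examples Shannon describes, here is a minimal sketch in an OpenAI-style chat format. The prompt wording, the model, and the examples are assumptions for illustration, not IFI's actual implementation:

```python
# The guardrails live in the system prompt; the "good/bad" examples are
# few-shot demonstrations preceding the real question.
messages = [
    {"role": "system", "content": (
        "You are a financial crime compliance assistant. "
        "Answer only from the documentation provided and cite the section. "
        "If the answer is not in the documentation, say so; never guess. "
        "Do not give opinions on specific individuals."
    )},
    # What good looks like:
    {"role": "user", "content": "What does the guidance say about PEP screening?"},
    {"role": "assistant", "content": "Per section 4.2 of the guidance provided, ..."},
    # The actual user question comes last:
    {"role": "user", "content": "Summarize the SAR filing thresholds in this document."},
]
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The few-shot turns steer tone and sourcing; the system prompt sets the boundaries the model should not cross.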
But really, at the end of the day, even with all those guardrails, all the prompt engineering, and the best responsible AI engineering practices in the world, I think we're still at a place with gen AI where it requires a human in the loop reviewing the outcomes. The big advance with generative AI is that it's doing the first, second, third, fourth, fifth iteration and saving you significant time in getting from a blank page to a well-advanced draft, but it definitely needs your review. Thank you, Shannon. Really great insights. It's clear that while AI holds promise, we need to be mindful of the risks it introduces. In fact, we're hearing from many clients and financial institutions that there's been slow progress in the deployment of AI projects across compliance functions, because these initiatives are held up in governance and ethics committees working to tackle these very issues. So let's move on to explore how regulators are responding to these innovations and challenges. Catherine, can you please share what you've seen as far as what regulators are expecting from institutions? How are institutions meeting these evolving standards? And have there been any enforcement actions related to AI adoption? Yes, I think absolutely the answer is yes. We're seeing some really clear regulatory statements of intention and even enforcement action. So all of the conversation we've just heard on this call, about the different considerations and how they need to be thought about, is absolutely spot on and the right thing to be doing. We saw a UK Financial Conduct Authority enforcement action earlier this year, and I'll talk through that. The headline is that a company called Amigo Loans relied heavily on a complex IT system with a high degree of automation, without adequate human oversight or understanding. Now, we recognize that automation is not AI; however, the findings and observations that the regulator, the FCA, made are directly informative to financial institutions and other businesses looking at AI, because if this is what your regulator is focusing on for less complex technologies like automation, they would certainly expect you to have it in place for AI as well. Amigo Loans provided guarantor loans, aimed at consumers who might not have been able to access finance from traditional lenders due to their personal circumstances or credit history, and their approach was that both the borrower and the guarantor needed to pass Amigo's affordability checks for a loan to be approved. What the FCA found was that Amigo's assessment of whether a customer could afford to borrow relied heavily on its IT system. It was a complex system with a high degree of automation, and there were design issues and insufficient controls, which meant the system processed loans which were potentially unaffordable for the customer. There was a direct effect of that: one in four guarantors had to step in at some point during a loan to assist a borrower who wasn't able to make their repayments or was having difficulty doing so. And although the system raised manual flags for review in some circumstances, staff didn't sufficiently consider or question the information provided by the customer.
Additional considerations, which I'm sure were in the FCA's mind, were that the company had not adequately considered regulatory requirements or internal and external review findings which identified weaknesses in its approach to affordability and creditworthiness. And in addition to its use of technology, Amigo hadn't maintained adequate records of its historic business processes, and it deleted emails of former staff; both of those factors hampered the FCA's investigation. The FCA found that a penalty of £72,900,000, which works out at about $94,000,000, would be appropriate, but they held back on imposing that fine because they wanted Amigo to pay redress to its customers, and imposing the fine would have caused serious financial hardship. So they held back on actually imposing the fine. So what can financial institutions learn from this case? This is really about automation without sufficient understanding and oversight, and regulators are focused on explainability of decisions. What this means is that humans are still responsible for their decisions, whether they make them directly or whether they delegate them to a system. It's a little bit like an aircraft: if you're the pilot and you decide to use autopilot, you're still responsible for the operation of that aircraft. A theme that comes through in the FCA's observations is also consumer fairness. In this case, the effect was that consumers and guarantors were placed in financial difficulty because of the failures in Amigo's systems and its management of those systems. And, a personal view: regulators have multiple different policy objectives that they're trying to fulfill. In the FCA's case, it wants to achieve consumer fairness, so if there are failures in automation, or potentially in future in AI, that cause unfairness to consumers, that's particularly going to attract their scrutiny. And of course, there are themes of not understanding regulatory requirements, inadequate record keeping, and those kinds of factors, which will definitely attract regulatory attention. Having had informal conversations with various colleagues from large global banks, what I'm hearing is that they are exploring and experimenting with AI, but they're still working their way through governance committees, ethical considerations, and those types of internal processes before they deploy it at scale. And I would see this as a really sensible approach. We've touched on some of the considerations around data and privacy, and how AI can, through hallucinations, misinform a user who isn't scrutinizing its outputs. The regulatory direction that comes through really clearly is that if you don't take responsible action with regard to your use of automation, machine learning, or AI, then regulators will step in and compel you to do so. Thanks, Catherine. We've received a question for you as well. This question is from Nala: this case study focuses on UK regulators; what's happening in other jurisdictions? Yeah, that's a great question. I would say that similar trends are being followed in other countries.
We have seen regulators issuing guidance in multiple different countries around the world, and I can give a couple of examples from the US and the European Union to demonstrate that the UK is very much in step with global regulatory trends. For example, in the United States, the Consumer Financial Protection Bureau has issued guidance about the legal requirements that lenders must meet when they're using artificial intelligence. What they're saying is that if a lender is taking adverse action against a consumer, they must give specific and accurate reasons, and that requirement remains in place even if the company is using complex algorithms or black-box credit models that make it difficult to identify those reasons. So, really emphasizing that businesses remain responsible for their decisions, whether they use technology or not. Also in the US, Treasury has explored opportunities and risks relating to the use of AI, including concerns about bias and discrimination and challenges with explainability. So we see the theme of explainability coming up from a number of regulators. To add a little more detail, explainability means the ability to understand a model's outputs and decisions, or how the model is establishing relationships based on its inputs. There are also proposed regulatory amendments in the United States which would require FinCEN to set out rules specifying the standards that institutions need to meet when testing innovative approaches, one of which is machine learning. So in the United States we're definitely seeing multiple regulators stepping into this space, and again thinking about bias, fairness to consumers, explainability, those kinds of considerations. In the European Union, the Artificial Intelligence Act was approved by the European Council earlier this year, in May, and it's due to come into force later this year. That Act bans some types of AI systems as creating an unacceptable risk; the banned applications include government-run social scoring systems of the kind used by some authoritarian regimes around the world. It then categorizes other applications of AI based on the risk levels they present. High-risk applications have specific legal requirements that apply to them, and those include systems used for automated processing of personal data to assess aspects of a person's life, like their economic situation. Evaluating what that means, it's likely that many applications of AI within banks, like those used for credit scoring, would meet the definition of high risk, with additional controls and specific legal requirements applying. So, Ruth, that's just touching on a couple of other examples, the EU and the US, and we've looked at the UK as well. Around the world, regulators are engaging in similar ways, wanting to provide direction, guidance, and in some cases supervision, to make sure that AI is being used responsibly. Thanks, Catherine. It's clear that while regulatory frameworks vary across jurisdictions, the core principles for implementing AI responsibly remain the same.
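As a hedged illustration of the explainability expectation Catherine describes, here is a minimal sketch showing how a simple linear credit-scoring model can yield the specific adverse-action reasons regulators such as the CFPB ask for. The features, weights, and decision threshold are invented for the example; real scoring models and reason-code standards are far more involved:

```python
# Invented weights for a toy linear scoring model; positive contributions
# help the applicant, negative ones count against them.
weights = {"debt_to_income": -2.1, "missed_payments": -1.4, "account_age_years": 0.6}
applicant = {"debt_to_income": 0.9, "missed_payments": 3, "account_age_years": 1.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

if score < 0:  # adverse action: the decision must come with specific reasons
    reasons = sorted(contributions, key=contributions.get)[:2]
    print(f"Declined (score {score:.2f}); principal reasons: {', '.join(reasons)}")
```

With a black-box model, these per-feature contributions are not directly readable, which is exactly why regulators press institutions to demonstrate explainability before relying on such systems.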
Shannon, we received another great question, from Lisa: what are some key things to consider in implementing AI in financial crime compliance, and how have you approached it? Oh, great question. Thanks, Ruth. Again, I come to this with my most recent experience being in the generative AI space, and maybe Ajit can speak to it more on the federated machine learning side. But from my patch, the key things we've learned through implementation start with data quality. I just talked about the challenges of getting access to data, but even when you have access, is the data clean, and is it in a format that can be leveraged by these technologies? If it is, you probably have a huge competitive advantage in your space. The paradox, though, is that in spending time getting your house in order, getting your data into a place where you can work with it, you're delaying exploration. And I think the cost of not getting involved now, of waiting a day, a week, a month, longer, is going to exceed the time you spend getting your data into shape, alongside starting to work with a strong partner from an engineering and implementation perspective who can guide you through that process. Obviously, model selection matters as well. I referred before to the pace of improvement and the fact that five new models were released in the twelve months since we started working with one, and that was just within one family of models, the OpenAI ChatGPT models. Gemini, Copilot, Meta's Llama, and all these other models are being released too. There's a huge proliferation of them, but they're not all created equal, and there's real complexity in the use cases for each and the considerations you need to be across in deploying them. For us, a key consideration was privacy and security. Ajit talked about that in terms of moving the model rather than the data and how that overcomes certain challenges. It's the same in our space: when we built Ask Fin, which is our generative AI financial crime assistant, the knowledge repository we have on the Dolphin platform, the resource center, was a key data asset for us, and we wanted to ensure there was safety and security around how that data was handled and that it remained key intellectual property. But we also wanted to make sure that the ways our users engage with it, the questions they ask, the responses they receive, respect their privacy as well. Hence, non-attribution. We work in the compliance space, so you may be asking questions that are sensitive; they may reveal gaps in knowledge within your institution. You don't want those things on the web, and you don't want them traced back to you. So it was very important for us to build with the biggest names in the tech space to have those securities and safeguards in place, though it did limit our options for implementation. So privacy and security of data was key, and similar to that were the guardrails to avoid badness, in terms of responsible AI. There's also a notion most people may not be familiar with in generative AI, although it's inherent in the capabilities: creativity. We all have generative AI on our phones now, and we're starting to play around with generating music or images. These are highly creative activities, and it's incredible that the same tool can do things like legal contract review but also image generation. The way it does that is through a setting called temperature, which runs on a zero-to-two spectrum in the way ChatGPT implements it.
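As an illustration of the temperature knob Shannon mentions, here is a small sketch using the OpenAI-style chat API; the model name and the prompts are assumptions for illustration, not IFI's configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt, temperature=0.0):
    """Temperature runs 0 to 2 on this API: near 0 for reproducible,
    reviewable compliance drafting; higher for creative exploration."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

draft = ask("Draft a review summary from these case notes: ...", temperature=0.0)
ideas = ask("Brainstorm unusual typologies worth considering here.", temperature=1.2)
```

The same model serves both calls; the temperature setting is one of the configuration choices that tailors it to the use case.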
There are an enormous number of knobs and ways to tune and configure how you deploy these things for the use case you have, so the how very much depends on the what and the who. In talking to my implementation partners, and obviously watching this space very closely, the range and diversity of how these technologies are being deployed is incredible. Attuned to that, you're building trust. You avoid the black box; you ensure transparency and accountability with your tools; you ensure sourcing and lineage, so that for any question that is asked, you can understand how and why the tool got to that answer, and you can interact with those sources. Two more points. User education: this one's been really interesting. The adoption of these technologies has been incredibly fast, but the level of comprehension of how they work has been much, much slower. So we've thought very creatively and expansively about the user experience and how we pay back a public good, which is helping people level up in their understanding of these tools and capabilities. How do you do things in the user interface that enhance the user experience, so that people learn the best ways to interact with these tools, but in a seamless way, without huge amounts of text in the interface, and so on. And finally, a cautionary comment: don't underestimate the lift from a testing and quality assurance perspective. Like the cautionary tales Catherine went through before, the responsibility really sits with the people who implement these tools, not the technology itself. If you're going to get it right, there's a lot of work to be done in terms of rolling up your sleeves. There are probably four things I'd mention in terms of how we've approached it, beyond those brutal learnings. For me, it's always been important to pair a really smart human with technology to reach a great outcome; some people refer to it as human-computer symbiosis. The approach we've taken is subject matter expert, plus a technical team of extremely good engineers, plus a product team thinking about how people engage with it. That trifecta, that combination of components, has been very successful for us. We have also tried red teaming. We have a dedicated development team, but we brought in an entirely separate and independent development team to look at things from directions we had not even considered. Anyone building tools will understand that, but anyone implementing them can take the same approach.
So you may have a particular group of users who are going to be using whatever tool you're deploying or capability you're thinking of building; think about bringing in people who approach that capability from another direction, whose mission, maybe, is to break it and identify badness so you can patch those things up. Two more: user feedback was critical for us. We're in an expanded beta right now, and we have a really diligent group of people kicking the tires on this and providing great feedback. That's helping drive our thinking around the user interface, the model we want, and how to minimize the hallucinations we were talking about before. And then continual review, testing, and improvement. A lot of people like to think that software has a beginning and an end: you finish, you deploy, and you're done. That's a huge fallacy. Software is a continual evolution and redevelopment process, particularly with AI, and particularly again with generative AI. We know we'll have a capability that we're comfortable with, that users are enjoying and delighted by, but we also know that within months, and all along the journey, we're going to need to reassess against other capabilities being released and continue to make improvements, because developments in the space are so quick. Thank you, Shannon. It's really helpful to hear your approach to practical AI implementation in compliance. Now let's move on to our Q&A session. I see we've received some great questions, so let's dive right in. Ajit, the first question is for you. Shantanu is asking: technology doesn't permeate uniformly, and there's a difference in the cost of implementing regulatory compliance for SMBs versus large firms. How can AI serve as an enabler of reducing the cost of compliance operations for small businesses while driving down costs for consumer finance businesses? Yeah, thank you, Ruth. There was a survey by NVIDIA this year, and one of the questions was: what are the biggest challenges in achieving your company's AI goals? The number one issue was data issues: privacy and jurisdictional access to data. The second-highest-ranking issue organizations had in rolling out AI was recruiting and training AI experts and data scientists. And the other large issue respondents had was data for training the model: do they have sufficient data to train a model? When you use something like federated learning, from day one you're getting the benefit of a model that's probably been trained previously. Typically today, when you start with a model and you're doing it by yourself, you have a blank model which now needs to be trained, and either you have that data locally within your organization or you don't, or you have some of it but not all of it. But with federated learning, these models are already pretrained at other organizations, other banks. So just starting off, you have a pretrained model, which obviously gets better as you train it locally at your bank and then use it for prediction. So there's a significant cost advantage to federated learning, because it takes care of those three issues I just mentioned. You're able to get varied datasets for training without compromising data privacy.
You don't have to go out and hire a whole lot of training people, because the model comes prebuilt and trained. Now, the other aspect of models is feature selection, and, to Shannon's point, it's not a one-time job. Features keep changing, because criminal activity keeps changing. I was talking to a bank that has a specific model to detect human trafficking at Super Bowl events, because it has a very specific feature set, and they know to deploy that model at that time of year because they're looking at different feature sets. The other part of lowering cost is the performance of these models, which from our tests is significant, both in reducing false positives and in building efficiency. In terms of effectiveness, it's 4x to 5x better than the rules-based systems out there. And as a side effect, it tends to find additional risk that you may not be aware of within your organization, because you probably weren't exposed to that risk before. For example, human trafficking in Canada tends to involve a lot of gift cards in the transactions; human trafficking in Mexico is cash. If you have a model that's trained on both patterns, you're going to pick it up. So not only is machine learning helping reduce the noise coming out of traditional transaction monitoring and rules-based systems, it's also being more effective. So it's definitely lowering cost in your AML operations area. Thank you, Ajit. Here's a related question: can you provide some examples of the force-multiplier effect that an AI workforce could provide in the FCC and sanctions space? For example, an AI fincrime assistant that can perform the work of 50 analysts in one hour for simple alert clearing, drafting of SAR narratives, and performing L1 sanctions screening. I don't have an example in sanctions screening, but I can give one in transaction monitoring, if that's okay. Sure. There's a large global bank with 400,000 alerts a month that are reviewed by their AML team. Consider that by using federated learning, we could reduce that 400,000 by, let's say, half or even more, 60% or 80%. You could then review the remaining alerts with the same team at a much quicker pace. I think they were taking almost 40 to 50 days to get through those 400,000 alerts with their team; that could be reduced to around 10 days with such a reduction in false positives. But it's also about being more effective: pointing you to the transactions that are really the suspect ones, versus somebody having to screen through a whole slew of transactions. So there's a general lowering of costs right there. Now, there are also general use cases for writing SAR narratives, and I think that's where hallucinations can happen too, so people have to be careful. It could totally create a false narrative on the SAR report, which, if a human doesn't look at it, can get filed and send people down the wrong path. But definitely, federated learning, machine learning, and gen AI are reducing the manual work. To Shannon's point, steps one, two, and three that people were doing can now be removed, and they can go straight to steps four and five in the process. By reducing noise, whether in sanctions screening or transaction monitoring, you're dealing with fewer, more meaningful alerts. So there's definitely a cost reduction by using AI.
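As a quick back-of-envelope check on the numbers Ajit quotes, here is the arithmetic, assuming review effort scales linearly with alert volume (an assumption made purely for illustration):

```python
alerts = 400_000            # monthly alerts reviewed by the AML team
review_days_before = 45     # midpoint of the "40 to 50 days" Ajit quotes

for reduction in (0.60, 0.80):
    remaining = alerts * (1 - reduction)
    review_days = review_days_before * (1 - reduction)
    print(f"{reduction:.0%} fewer false positives -> "
          f"{remaining:,.0f} alerts, ~{review_days:.0f} review days")
# 60% -> 160,000 alerts, ~18 days; 80% -> 80,000 alerts, ~9 days
```

An 80% false-positive reduction is what gets the cycle down to roughly the 10 days Ajit describes, with the same team.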
Thank you, Ajit. Actually, we've received so many questions, I'm just going to jump right into the next one. Shannon, here's the next question: are you suggesting that gen AI now has the capability to conduct the work that some analysts are currently doing when dispositioning alerts, e.g., an initial analysis of whether an alert is a false positive or a potential true match? Yeah, thanks, Ruth. It's a good question, and I think Ajit's comment before largely answered it, but where I would add to it is this: I'm a firm believer that AI won't take your job, but someone with AI will take your job. So there is a need to start becoming literate with these technologies, becoming comfortable with them, knowing what they're very good at, and knowing how you can complement them. There are things humans do that are going to be very difficult for AI or gen AI to replace, but it provides huge efficiency benefits, and it can take the first, second, third pass, as we've all commented on. This notion of reducing lift in each of our processes, and of efficiency, is core. So the learning for us as people is: what do we bring that is unique in contributing to the workplace we're in? A core element of that is soft skills, but other parts are things like reasoning, and even the discussion we're having on responsible AI and guardrails; those are things we know it's not particularly good at. So, review of the rough results these tools put out, plus the more human-centric thought processes that gen AI just can't compete with right now. Thanks, Shannon. The next question is from Brooke, for Catherine: what should compliance officers keep in mind when it comes to AI, and what actions should they take to ensure they stay on the right side of regulators? I think we've covered that to some extent already in the regulatory action section, but maybe what I can do is expand on it and talk a little more about the resources and skill sets an institution needs to make sure it's heading in the right direction. Leading in from some of the things Shannon said: AI is here, it's growing, and we need to embrace it. So as an institution, I would suggest that financial crime compliance teams, but also teams more broadly within an institution, need to have this on their radar and make it part of their resource planning and their response. I would agree with Shannon: I don't think AI is going to take our jobs, but those with more AI skills, the way I would put it, are going to be better positioned for the future employment market than those without them. And I know we've had a couple of other questions coming through, Ruth, about illicit or threat actors using artificial intelligence and how financial institutions need to respond, so I'm going to merge that into my response to Brooke's question as well. There have been a lot of positives with AI and automation, automation being the first point on that spectrum that runs through machine learning to more generative AI.
And I think if we draw a parallel with automation: once, when you wanted to open a bank account, you would have to take your ID and proof of address into a branch, and the bank teller would look at them. Maybe the teller was good at identifying indicators of fraud in a commonly presented document like a local driver's license, but how could they be familiar with fraud indicators for something like a passport from a country they didn't see very often? Now we have digital identification and verification: you hold up your ID document, you take a video selfie, and if you pass, your account is opened. Digital identification and verification can maintain hundreds, thousands, maybe millions of rules and indicators of fraud if it needs to. It's updated immediately. It doesn't forget. It doesn't get tired. It doesn't get distracted. And as frauds and other financial crimes get more sophisticated, so too must our methods. We need to find ways to use AI, from its simplest to its most sophisticated forms, to keep pace with those illicit actors. That's a longer way of saying that AI isn't necessarily going to take our jobs, but we do need to have AI skills, just as Shannon says, and as was implicitly mentioned in Ajit's presentation as well. So what are the implications for our career paths, and for financial institutions thinking about their resource requirements? What AI means is that repetitive tasks that might previously have been entry level are more likely to be performed by AI in one of its forms. We've heard how transaction monitoring, processing of alerts, and even drafting of SARs are increasingly tasks that AI would perform. That means an institution may need fewer junior analysts to do those more basic tasks. The skill level for staff might start at a slightly higher level; they might need to be a little more advanced in their knowledge, skills, and abilities, but having those new skills and abilities will make candidates much more attractive, because it's still a really scarce talent pool. Existing staff also offer real value. We need to adapt, but staff with experience have a depth of knowledge of existing methods, of typologies, of less frequent exceptions and edge cases, and we know how critical good data and good experience are in training AI models. So that's a way of saying that I think AI is really positive for financial integrity. An institution needs to be considering regulatory requirements, but also its resource requirements: a mixture of newer staff with really strong AI skills and experienced staff with their depth of knowledge, perhaps adapting to learn more about AI. It's that blend, and that is what we must do to address the threats that new and emerging illicit actors present. Thank you, Catherine. Shannon, we've received a question you may be able to answer. Matthew's asking: will compliance professionals need to have some knowledge of AI coding or training in order to stay in the compliance field? Great question. I definitely think they're going to need knowledge of AI itself, and to be able to make those distinctions between what is AI versus generative AI and machine learning.
Those points that I made in my introduction come down to raising your literacy: your ability to talk about AI and your ability to understand its strengths and weaknesses. That gets to what Catherine and I were talking about in the last question. On the second component of the question, the coding piece: I don't think you need to understand it down to a coding level. Understanding AI conceptually, and being able to talk about it, is what matters; you probably don't need to get down to the level of understanding models unless you're working in product or engineering areas. As for the coding aspect, I would say no. There are language models and generative AI capabilities out there that can actually write code now. I spent a lot of time in previous roles learning how to write integrations and how to write Python and PySpark transformations; that sort of learning is now much less important, particularly if you're in an analytical role within the financial crime space. I don't think you need to be getting down to the coding level. Thank you, Shannon. Well, that brings us to the end of today's webinar. A big thank you to Shannon, Ajit, and Catherine for your valuable contributions and for sharing your expertise with us today. And, of course, thank you to all of our attendees for joining us and submitting your questions. Please look out for our next webinar in November, which will focus on export controls. Expect a follow-up email shortly with the recording of today's session as well as some additional resources for further reading. We invite you to stay connected with us for updates via LinkedIn and our websites. Hope you all have a great day, and we look forward to connecting with you again soon.