ACFCS Contributor Report: A Compliance Approach to Mitigating AI Risks to Personal Privacy – the quest to balance data accuracy, integrity, security, sourcing

The skinny:

  • The issue of how AI data is being gathered, stored, secured, sold and used by third-party AI technology firms has moved beyond the technology’s widely touted helpfulness as the framework behind digital virtual assistants and the eccentric genius behind text-prompted generative artwork.
  • Some of these new requirements, particularly in a just-released executive order, have direct import for fincrime teams and the AI-infused algorithms they use to analyze compliance alerts – the tech-heavy foundation of modern frontline crime fighting. In short: AI-vendor risk assessments.
  • These include fresh questions tied to whether these AI technologies are being used to further improve a company’s own proprietary AI systems or whether they are being shared and used as a testbed by other technology vendors – all while ensuring privacy is not encroached upon by the ravenous, at times unpredictable AI juggernaut.

By Shayna Grife
June 18, 2024
Law student, Benjamin N. Cardozo School of Law

With editing and content additions by Brian Monroe, VP of Content, ACFCS

“Alexa, lights on! Alexa, set my alarm for eight a.m. Alexa, what’s the weather?” Alexa and other voice-activated AI assistants make our lives more convenient and efficient but when, if ever, do they turn off?

In fact, they are often passively – and continually – recording and logging their surroundings, and while everyone has the option to change their privacy settings and review the recordings, few of us actually do.

Alexa, however, is just one in a growing pantheon of AI-powered virtual personal assistants, alongside competing technologies such as Google Assistant, Siri, Bixby and Cortana.

All of these technologies, in some form or another, use AI to gorge themselves on all available information – and then either use that data to fine-tune their own digital brains or sell it to the highest bidder in what can feel like a back-alley data broker black market.

Beyond the classic tech titans, AI is also now more accessible to the masses through AI engines, including ChatGPT, Claude, Bing AI, Copilot and others.

The interplay between AI available to criminals and some companies failing to safeguard data – and the explosion in recent years of cyber-enabled fraud, synthetic identities and organized fraud rings working together to share stolen data – has created new challenges to police the AI information universe.

A chaotic, unstable realm where, somehow, privacy and security must co-exist and regulation needs to weave through innovation – rather than stifle it.

But the issue of how AI data is being gathered, stored, secured, sold and used by third-party AI technology firms has moved beyond the technology’s widely touted helpfulness as the framework behind digital virtual assistants and the eccentric genius behind text-prompted generative artwork.

Updated regulations in the United States and Europe have also recently brought new scrutiny, rules and requirements related to how these AI systems are being used.

These include fresh questions tied to whether these AI technologies are being used to further improve a company’s own proprietary AI systems or whether they are being shared and used as a testbed by other technology vendors – all while ensuring privacy is not encroached upon by the ravenous, at times unpredictable AI juggernaut.

Some of these new requirements, particularly in a just-released executive order, have direct import for fincrime teams and the AI-infused algorithms they use to analyze compliance alerts – the tech-heavy foundation of modern frontline crime fighting.

In short: internal and external AI data risk assessments.

In the very near future, financial institution anti-money laundering (AML), fraud-fighting and sanctions compliance teams will need to create and execute a new AI-vendor risk assessment methodology to ensure these third-party allies haven’t tainted their data streams with information and data sets purloined in nebulous or nefarious ways.

The double-edged sword of AI: the virtuous versus villainous virtual arms race

For the past decade, banks large and small – either through homegrown initiatives or expensive vendor partnerships – have turned to AI to confront an increasingly complex world of domestic and international transaction volumes.

Financial institutions of all stripes have come to rely on machine learning and data-sharing consortiums to analyze alerts of aberrant activity across banking groups and jurisdictions to predict the current and even future risks and inclinations of threat actor groups – and preserve and amplify sparse human decision-making resources.

These actions have come with some stunning results.

The last decade has seen AML and fraud teams trim false positives to a fraction of what they were and sharpen collective decision-making to close cases and submit suspicious activity reports (SARs) that are better, faster, cheaper, deeper and more relevant to the government agencies reviewing them.

Banks have also used AI for more mundane tasks.

For instance, they are using natural language processing to better understand common customer issues, needs and challenges when customers engage customer service on the phone, online or through human and chatbot channels.

But this AI revolution for fincrime compliance teams is also a double-edged sword.

While there are undoubtedly more opportunities for banks to secure, tame and wield data in and out of the bank with AI, there are now also more chances for bad guys to use these AI powers for ill.

Organized criminal groups are actively using AI to create living, breathing and talking deepfakes and synthetic identities to assault banks anew with automated, off-the-shelf scam kits culled from stolen data on the light and dark webs.

Unfortunately, while criminal groups face no restrictions when it comes to how they can use AI – or the origin of the underlying training datasets – many corporates in the coming years will have to practice much more restraint.

While we will get into more detail on this subject later in the report, the Biden Administration’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” gives a preview of the new guardrails expected of government and private sector groups when it comes to employing AI.

To read the full executive order released October 30, click here.

Here is a snapshot of Section 8(a), “Protecting Consumers, Patients, Passengers, and Students,” which is expected to touch financial institution AML and fraud-fighting teams, viewed from several perspectives.

From the perspective of federal regulators, like the U.S. Treasury’s Office of the Comptroller of the Currency (OCC):

  • AI exam rules incoming: Independent regulatory agencies are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy.
  • Expanding current rules: [These agencies should also consider addressing] other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI.

From the perspective of banks and their technology vendors using AI:

  • Risk ranking AI vendors: This includes clarifying the responsibility of regulated entities [the EO specifically mentions financial services firms and fraud risks] to conduct due diligence on and monitor any third-party AI services they use.
  • No more black box: [These independent regulatory agencies should also consider] emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models. [This section could potentially include not just banks, but their technology vendors and partners.]

What could this mean in practical terms for the risk-averse world of banking and fincrime compliance programs writ large?

Potentially taking on another risk ranking universe apart from customers, regions, financial crime, credit and the like.

Banks may soon have to risk rank their own data and AI storehouses and then extend these risk ranking efforts to the AI algorithms and data used by banking and data sharing consortium partners.

Taking this idea a step further, banks may also have to take an even deeper look under the data hood of the technology vendors underpinning their risk management, risk ranking and AML and fraud transaction monitoring platforms.
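
What might such an AI-vendor risk ranking look like in practice? Below is a minimal sketch in Python, assuming a handful of invented due-diligence factors, weights and thresholds purely for illustration – any real methodology would be tailored to an institution’s own risk appetite and regulatory expectations.

from dataclasses import dataclass

@dataclass
class AIVendorAssessment:
    """Hypothetical due-diligence answers for a third-party AI vendor."""
    vendor: str
    data_lineage_documented: bool         # can the vendor show where training data came from?
    consent_for_training_data: bool       # was the data collected with user knowledge/consent?
    pii_anonymized: bool                  # is PII masked or removed before model training?
    model_explainable: bool               # can alert decisions be explained to examiners?
    data_shared_with_third_parties: bool  # is bank data reused to train other clients' models?

def risk_rating(a: AIVendorAssessment) -> str:
    """Toy weighted score: the more safeguards missing, the higher the risk tier."""
    score = 0
    score += 0 if a.data_lineage_documented else 2
    score += 0 if a.consent_for_training_data else 3
    score += 0 if a.pii_anonymized else 3
    score += 0 if a.model_explainable else 2
    score += 2 if a.data_shared_with_third_parties else 0
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

vendor = AIVendorAssessment(
    vendor="ExampleMonitoringCo",  # hypothetical vendor name
    data_lineage_documented=True,
    consent_for_training_data=False,
    pii_anonymized=True,
    model_explainable=True,
    data_shared_with_third_parties=True,
)
print(vendor.vendor, risk_rating(vendor))  # ExampleMonitoringCo medium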

The end goal is to have data that is clean, accurate, secure, trustworthy, and has a verifiable lineage and origin point that can be explained to government watchdogs in multiple jurisdictions.

Ideally, these data cataloging and shepherding duties would be done alongside ensuring that the data can also be shared, sifted and analyzed – without exposing the underlying personally identifiable information (PII) – by using privacy-enhancing technologies.

In short, the game has changed from “does the AI model work, and how does it work?” to “what is the legality and sensitivity of the underlying data that made it work in the first place?”

Taking this idea to its logical conclusion, bank compliance teams may need to calibrate whether there is a risk that the data they are using was captured without the knowledge of the human users and potentially unwitting trainers who formed the foundation of the AI system.

Moreover, is that data secure and anonymized when shared and remixed for new applications, companies and initiatives?  

The give and take of AI: Maybe more taking than giving?

Remember we mentioned how people love these AI assistants and AI story creators, coding cheats and text-to-image generators for what they provide – in all, analytical powerhouses giving humans new virtual wings to fly toward the sun?

Don’t forget the unspoken inverse of that relationship.

They give – typically instantly springing to attention at the mere mention of their name – but they also take, recording not just a question and answer, but potentially everything and everyone in a given space.

These assistants can capture perfect recordings of your voice and, for companies using cameras as part of their security, log biometric details such as unique fingerprint and iris patterns.

But they can also take too much – and not know when to quit. And some companies are keen to take advantage of that.

In ironies highlighted in filings by the U.S. Federal Trade Commission (FTC), some companies touting security did not properly keep their own data houses secure – leading to infiltration by internal and external forces, such as employees spying on and harassing customers.

In some cases, companies readily – and greedily – gobbled up the all-you-can-eat data buffet from unsuspecting customers to tune their algorithms, not considering or caring what could happen if that information fell into the wrong hands, according to an FTC public statement released in June 2023.  

“What you say in your home, what you do in your home. It doesn’t get more private than that,” the FTC noted.

But, Amazon and Ring used this “highly private data – voice recordings collected by Amazon’s Alexa voice assistant and videos collected by Ring’s internet-connected home security cameras – to train their algorithms while giving short shrift to customers’ privacy,” according to two recent FTC complaints.

These matters contain “important lessons for companies using AI, biometric data, and other sensitive information,” according to the FTC.

The regulatory agency added that these are the first such high-profile actions against household name firms since the agency announced its new Biometric Policy Statement in May 2023, which highlighted the importance of safeguarding such unique and coveted information.

Notably, the FTC’s definition of “biometric information” includes photographs of a person’s face, genetic information, and “characteristic movements or gestures,” which are data types “not commonly encompassed within legal definitions of ‘biometric’ data under other bodies of law,” according to attorneys at law firm Perkins Coie.

The genesis of the FTC statement was a “proliferation of biometric information technologies,” along with “new and increasing risks associated with the collection and use of biometric information,” according to a client alert released in June 2023.

The regulator directly addressed broad concerns that biometric data, amplified by AI, could be used to produce “counterfeit videos or voice recordings” – so-called deepfakes – and “potential unauthorized use of biometric data to access devices by malicious actors.”

That suite of companies using AI and prized biometric data includes banks, AML, fraud and cyber teams – and their technology vendors, an industry that itself has surged into the tens of billions of dollars.

The FTC believes that AI and privacy should not be diametrically opposed to one another, but instead need to “work hand-in-hand. In this age of AI, developers want more and more data – oftentimes, no matter its source.”

The U.S. commerce regulator warned anew that firms tinkering with AI need to “be careful when collecting or keeping consumer data,” as missteps could be a violation of Section 5 of the FTC Act, referred to as the “unfairness standard.”

That section declares unlawful “unfair methods of competition in or affecting commerce” as well as “unfair or deceptive acts or practices in or affecting commerce” – the latter being the basis of the unfairness standard.

With that as a refreshed frame of reference, the FTC stated that when it comes to operations using and sharing data for AI purposes, the regulator “doesn’t look just at AI’s potential benefits, but also at the costs to consumers.”

According to the FTC complaints, Amazon and Ring “failed that test.”

What went wrong?

The FTC alleged in Ring’s case that lax data access practices “enabled spying and harassment,” while it believed that Amazon’s permanent retention of voice data and shoddy deletion practices “exposed consumers’ voice recordings to the risk of unnecessary employee access.”

The message for businesses – including banks and their technology partners – is clear, according to the agency.

“The FTC will hold companies accountable for how they obtain, retain, and use the consumer data that powers their algorithms,” according to the public statement. “As the Commissioners put it in their joint statement in the Alexa matter, machine learning is not a license to break the law.”

The illusion of choice: AI could be improving its ability to think, choose without your consent

The new regulatory focus on AI, data harvesting, data brokers and consumer awareness of how data is being captured means that, while people chose to buy their phones, many choices about their sensitive information are being made without their approval.

You chose to have an Alexa, so does that mean you consented to the recordings? Maybe.
Should you be concerned that AI is collecting your data? Absolutely.

The real question is not whether your personal information is being collected, but rather how it is being managed and shared by those who do collect it.

Over the past decade, privacy laws have expanded from basic guidelines as to how to retain information to extensive directives that define personal information and provide guidance for controllers and processors on how to handle and retain data responsibly.

Prior to the emergence and rapid expansion of artificial intelligence, the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) set the standard for companies attempting to safeguard personal information.

With the explosion of AI, companies whose products, services, and customers may be impacted, though, could need more guidance on how to account for the privacy and security risks that come tethered to a thinking machine – a technology without ethical qualms, fear of enforcement or feelings of guilt.

It is challenging to regulate what we don't fully understand and at this point in time, we cannot accurately predict the consequences that society will endure, including those associated with personal data breaches, from the rapid growth of AI.

In response to this issue, the Biden Administration, as we briefly noted earlier in this report, has issued an Executive Order underscoring the urgent need for companies to protect personal information from AI.

The order also argued for a more aggressive approach to the adoption of Privacy-Enhancing Technologies (PETs) in order to mitigate the risk of newer technologies and make it easier to adapt to future advancements.

To read a fact sheet on the “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” click here.

Here are some snapshots in the executive order that could be relevant to fincrime compliance teams and AML and sanctions vendor technology companies:

As AI flourishes, an explosion of exposure, exploitation

  • The Biden administration is urging the FTC and other independent agencies to issue guidance documents regarding AI with the intention of clarifying the responsibilities and obligations of companies that choose to adopt different AI programs ranging from ChatGPT to Copilot.
  • Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires. Artificial Intelligence’s capabilities in these areas can increase the risk that personal data could be exploited and exposed.  


PET-ting the savage beast: tech tactics to protect, preserve privacy

  • To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, is secure and mitigates privacy and confidentiality risks.  
  • Agencies shall use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risks — including the chilling of First Amendment rights — that result from the improper collection and use of people’s data.

What is “privacy-enhancing technology” and what are some examples?

  • The term “privacy-enhancing technology” means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security and confidentiality.  
  • These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy and synthetic-data-generation tools. This is also sometimes referred to as “privacy-preserving technology.”

The new AI order specifically calls out risks, requirements for banks

The Federal Government, according to the executive order, will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy and other harms from AI.  

Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights.

New federal rules, regulatory expectations coming for banks to counter AI scams?

  • Fincrime compliance teams should in particular pay close attention to Section 8 of the executive order on AI, where the administration details its expectations in “Protecting Consumers, Patients, Passengers, and Students.”  
  • Independent regulatory agencies are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability and to consider new rulemaking to address any gaps.

The new risk-ranking regime: updated CDD expectations for AML technology vendors?

  • These federal regulators should also consider emphasizing or clarifying where existing regulations and guidance apply to AI, according to the order.
  • The future may see agencies engaging in clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use.
  • Banking, consumer finance and other regulatory agencies could also emphasize or clarify requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.

Could new U.S. AI rules take away crystal ball gazing of future-facing, predictive analytics?

The Biden administration is also concerned about how AI touches the “criminal justice system,” including how criminal groups are using it to scam, defraud and impersonate people and dupe corporations with fake identities.
 
In less than a year, government agencies will submit a report to the president addressing areas that could help or hurt fincrime fighters – potentially a bit of both.

The report will offer guidance related to several areas of illicit activity, including:

Policing predictive analytics

  • Crime forecasting and predictive policing, including the ingestion of historical crime data into AI systems to predict high-density “hot spots” – a highly touted feature of some AI-driven AML vendor systems (a toy sketch of the concept follows this list).
  • The executive order defines “crime forecasting” as the use of analytical techniques to attempt to predict future crimes or crime-related information.  
  • It can include machine-generated predictions that use algorithms to analyze large volumes of data, as well as other forecasts that are generated without machines and based on statistics, such as historical crime statistics.
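
To make the “hot spot” concept concrete, here is a toy Python sketch that bins made-up historical incident coordinates into a grid and flags the densest cells. It is a deliberate simplification of the forecasting the order describes, not a depiction of any particular vendor’s engine.

import numpy as np

rng = np.random.default_rng(7)

# Made-up historical incident coordinates (e.g., fraud reports by location)
cluster = np.column_stack([rng.normal(3.0, 0.5, 300),   # dense cluster near (3, 7)
                           rng.normal(7.0, 0.5, 300)])
background = rng.uniform(0, 10, size=(100, 2))          # scattered background noise
incidents = np.vstack([cluster, background])

# Bin incidents into a 10x10 grid and flag the densest cells as "hot spots"
counts, xedges, yedges = np.histogram2d(incidents[:, 0], incidents[:, 1],
                                        bins=10, range=[[0, 10], [0, 10]])
threshold = np.percentile(counts, 95)
for ix, iy in np.argwhere(counts >= threshold):
    print(f"hot spot: x in [{xedges[ix]:.0f}, {xedges[ix + 1]:.0f}), "
          f"y in [{yedges[iy]:.0f}, {yedges[iy + 1]:.0f}), incidents={int(counts[ix, iy])}")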

Ripples in the AI watermark

  • To fight AI deepfake images, videos and audio, the order is asking government agencies to examine how they can create “watermarking” for anything that has been wholly or in part created with or altered by AI (a toy illustration follows this list).
  • The term “watermarking,” according to the order, means the act of embedding information, which is typically difficult to remove, into outputs created by AI for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.
  • The examples in the order include outputs such as photos, videos, audio clips, or text.
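
As a rough illustration of the embedding idea – not the technical standards the order contemplates – the toy Python sketch below hides a short provenance tag in the least significant bits of a synthetic image array and reads it back out.

import numpy as np

def embed_watermark(image: np.ndarray, tag: str) -> np.ndarray:
    """Toy least-significant-bit watermark: hide a short ASCII tag in pixel LSBs."""
    bits = np.unpackbits(np.frombuffer(tag.encode("ascii"), dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy, so the original stays untouched
    if bits.size > flat.size:
        raise ValueError("image too small for tag")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite each pixel's lowest bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, tag_length: int) -> str:
    bits = (image.flatten()[:tag_length * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("ascii")

# Pretend this array is an AI-generated grayscale image
image = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image, "AI-GEN:v1")        # hypothetical provenance tag
print(extract_watermark(marked, len("AI-GEN:v1")))  # AI-GEN:v1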

Putting the ‘official’ back in government documents, videos, voices

  • The order is also exhorting government agencies to “Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.”
  • This would be the other side of the coin from the above initiative to watermark AI-enhanced content by creating a way to ensure the authenticity of actual government or private-sector messages, images and videos.
  • Currently, the order is tasking the Department of Commerce with developing guidance for “content authentication and watermarking to clearly label AI-generated content.”
  • Federal agencies will “use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world,” according to the executive order.

Apart from protecting the country’s citizens and businesses from AI scams, the government also must better protect itself.

The order is also calling for the establishment of an “advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software,” which would further build on current efforts, including the Biden-Harris Administration’s ongoing AI Cyber Challenge.

Together, these efforts “will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.”

The multi-dimensional directives are also designed to diminish or deflect the potential for organized criminal groups, rogue nation states and other global power competition foes, like Russia and China, to use the same AI technology to puncture national security’s virtual vaults.

PET detective: Could it be possible to use anonymized – not sanitized – data to fight financial crime?

As banks review the Biden administration executive order on AI, they must also do some predictive analytics about what these new rules and directives could look like – and ensure they can still have an effective, data-driven countercrime compliance program when all the dust settles.
 
Transitioning a compliance program over to privacy by design and the use of PETs is a daunting and expensive task, but it will pay off in the long run with less liability, fewer fines and lawsuits, and benefits such as an increase in sales.

AI is not the end of innovation but the beginning and with each new advancement comes a myriad of privacy concerns.  

Implementing privacy measures through technology and integrating these control measures into procedures will achieve a higher level of compliance that is more adaptable to change.

Currently, PETs encompass technological tools and techniques designed to enhance procedural protection of private information.

For instance, PETs such as differential privacy, synthetic data, and homomorphic encryption can be utilized to anonymize data on a large scale, allowing companies to benefit from the products of AI without assuming all the liability that can accompany a data breach.

How does this work in practice? What do PET systems look like? What are some names and how do they interact with AI?

Let’s take a look.

Differential privacy

“Differential privacy (DP) is a way to preserve the privacy of individuals in a dataset while preserving the overall usefulness of such a dataset,” according to a Dec. 2022 story in TowardsDataScience.

“Ideally, someone shouldn’t be able to tell the difference between one dataset and a parallel one with a single point removed. To do this, randomized algorithms are used to add noise to the data.”
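
Here is a minimal sketch of that idea in Python, using the Laplace mechanism on a simple counting query over made-up transaction amounts; the epsilon value and data are illustrative only.

import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise with scale 1/epsilon satisfies
    epsilon-differential privacy for this query.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: a noisy count of transactions above a reporting threshold
transactions = [120.0, 45.5, 980.0, 10_500.0, 75.0, 15_000.0]
print(dp_count(transactions, lambda amount: amount > 10_000, epsilon=0.5))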

Synthetic data

“Synthetic data, generated by sophisticated algorithms, closely mimics real-world scenarios,” according to a Nov. 2023 story in Medium, touting that “Generative AI and Synthetic Data will make our internet safer.”

“This allows cybersecurity professionals to create realistic testing environments without exposing actual sensitive data,” the piece posits. “Through this, the efficacy of security measures can be thoroughly evaluated without the inherent risks associated with using genuine datasets. Generative AI can revolutionize the way cybersecurity protocols are tested.”
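
A bare-bones illustration of the idea in Python: fit simple summary statistics on a handful of made-up “real” amounts, then draw synthetic values that mimic the distribution without copying any individual record. Production synthetic-data tools are far more sophisticated, but the principle is the same.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" transaction amounts (stand-in for sensitive records)
real_amounts = np.array([120.0, 45.5, 980.0, 10_500.0, 75.0, 15_000.0, 230.0, 60.0])

# Fit summary statistics on the log scale, since amounts are heavily skewed
mu, sigma = np.log(real_amounts).mean(), np.log(real_amounts).std()

# Draw synthetic amounts that resemble the distribution, not the records themselves
synthetic_amounts = np.exp(rng.normal(mu, sigma, size=1000))
print(synthetic_amounts[:5].round(2))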

Homomorphic encryption

Homomorphic encryption (HE) is a cryptographic scheme that allows analysts to perform calculations on encrypted data without decrypting it, according to several published reports, including TechTarget, Inpher and John Cook Consulting.

HE converts data into ciphertext that can be analyzed and worked with as if it were still in its original, unencrypted form.
 
This enables analysts, scientists and their advanced systems to encrypt data, perform calculations on it without decrypting it, and then receive the resulting ciphertext back without compromising sensitive or restricted data assets, according to published reports.
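
The toy Python sketch below shows the core idea with a Paillier-style additively homomorphic scheme: two values are encrypted, the ciphertexts are combined, and the decrypted result equals their sum. The tiny primes are for illustration only and are nowhere near secure – real deployments use vetted cryptographic libraries and large keys.

import math
import secrets

p, q = 61, 53                        # toy primes; real keys use primes of 1024+ bits
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Paillier encryption with g = n + 1: c = (1 + n)^m * r^n mod n^2."""
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return (pow(1 + n, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the underlying plaintexts
c1, c2 = encrypt(120), encrypt(455)
total = decrypt((c1 * c2) % n_sq)    # computed without ever decrypting c1 or c2
print(total)                         # 575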

Beyond U.S., other global efforts underway to scrutinize, categorize AI risks, rewards

These multi-syllabic, jargon-heavy words to describe PETs and other AI considerations might sound like a foreign language right now.
 
But they are very likely to become the en vogue vernacular for companies the world over – particularly those with a duty to monitor, analyze, risk score customers and report aberrant activity to the government.

The United States is not alone in its efforts to be proactive in regard to how to govern artificial intelligence software.

The European Union released the Artificial Intelligence Act (EU AI Act), which puts forth a regulatory framework to make the use of AI more transparent and traceable.

The act classifies different AI-enabled activities based on the risk they pose to society and specifies the measures and precautions companies should take to mitigate or avoid those risks.

For example, AI being used in healthcare for diagnostic or predictive purposes is classified as high risk and needs to be registered in the EU database and continually assessed.

In comparison, general-purpose generative AI such as ChatGPT may pose systemic risk and must follow the transparency requirements set forth in the act.

As compliance experts work to understand what liability and risks AI is subjecting us to, companies must proactively reevaluate their compliance protocols, rethink their approach to risk assessments, and explore updated privacy measures.

As companies eagerly adopt different AI technologies, they face an increased risk of financial crimes including fraud and money laundering – issues squarely on the minds of top officials in the U.S. and Europe as detailed in their dueling directives.

In the hopes of better protecting against financial crime, companies should be incorporating AI-specific risk assessments, but to do so they need to understand the types of financial crimes they are facing.

The use of AI can protect against financial crimes such as fraud and money laundering because its ability to review both structured and unstructured data makes it more effective at catching laundering schemes and detecting fraud.

The issue is that with the advancement of new AI technologies come new ways to defraud and scam.

AI has the power to create deepfakes of bank staff or consumers to create elaborate schemes with the intent to defraud a broad array of financial sector institutions, including banks, credit unions, money services businesses and crypto exchanges.

It opens up an entire new realm of scam attack angles.

Instead of limiting scams to the funds available in an account, deepfakes can be used to open new accounts, take out loans and engage in transactions as if the fraudster were the consumer themselves.

Banks and financial institutions have several forms of due diligence in place to identify the beneficial owner of an account that in some cases can fall short when it comes to the depths of AI – particularly when the onboarding is fully digital and online.  

AI can pull data from any source it has access to, which in most cases is limitless.

Criminals can create such extensive synthetic identity scams that traditional compliance measures are inadequate to keep up with the changes that come with AI, putting more pressure on fincrime compliance and fraud teams to see how such technologies can supercharge their defensive efforts.

The importance of pairing innovation with regulation, power with responsibility

While some financial crime teams might be – yet again – pulling their hair out as they are tasked to adjust, adapt and overcome another tectonic shift in the sector, they would be only hurting themselves by putting their communal head in the sand when it comes to AI.  

In short order, AI has emerged as a highly valuable tool, offering the potential to significantly enhance efficiency and effectiveness in practically all industries, from business to government and more – the implications and ramifications of which are only limited by imagination.

It feels like almost overnight, AI programs including ChatGPT, Claude, Midjourney and others have empowered individuals to be able to calculate more quickly, create art and music beyond their direct skill levels with the right prompts and even research and write like a journalist or novelist – creating full stories and books in seconds.

Embracing AI is not just a step toward innovation – it's an essential stride in the pursuit of progress and success in the corporate world.

Technological innovations like AI can change the way we conduct business in every aspect imaginable, but with them come endless debates on the boundaries of privacy law and civil liberties.

Where would the world be without the development of Facebook, but also where would we be if, say for example, Facebook was not regulated?

Many would say Facebook needs to be significantly more regulated than it is now, but those who work more intimately in the field of technology believe the current regulations are impractical and unreasonable.

The best course of action to bridge the gap between what we want to implement, and what we actually can implement, would be to become familiar with the framework for risk assessments published by the National Institute of Standards and Technology (NIST).

In essence, it would be wise for individuals, software and hardware creators and corporates and banks, in particular, to incorporate NIST’s guidance on AI risk management into enhanced education programs and procedural privacy protection.

Compliance teams should take the most immediate action in updating their risk assessments, tailoring their training and education through the lens of NIST recommendations, and adopting PETs.

Framing the problem: How do you risk assess the technology…you use to reduce risk? 

Fincrime compliance professionals are no stranger to the world of risk assessments.

They routinely risk assess customers through a sliding scale of risk before they even conduct a transaction at the institution.

The risk assessment process typically grades the customer on a variety of factors – the region where they operate, the potential to be tied to fraud, money laundering or corruption – and then arrives at a number and a ranking of low, medium or high.

Those efforts sensitize, for instance, a bank’s automated transaction monitoring system to alert more quickly when riskier accounts run amok – and both the risk assessments and monitoring at many institutions rely in bulk on AI.
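
A simplified sketch of that pipeline in Python – with invented factors, weights and thresholds purely for illustration – shows how a factor-based score can map to a tier that, in turn, sets how sensitively monitoring alerts on an account.

FACTOR_WEIGHTS = {
    "high_risk_jurisdiction": 3,
    "cash_intensive_business": 2,
    "politically_exposed_person": 3,
    "prior_fraud_or_aml_alerts": 2,
    "opaque_ownership_structure": 2,
}

def risk_tier(factors: dict) -> tuple:
    """Sum the weights of the factors that apply, then map the score to a tier."""
    score = sum(weight for name, weight in FACTOR_WEIGHTS.items() if factors.get(name))
    if score >= 6:
        return score, "high"
    if score >= 3:
        return score, "medium"
    return score, "low"

def monitoring_threshold(tier: str) -> float:
    """Riskier tiers get lower alerting thresholds (more sensitive monitoring)."""
    return {"low": 10_000.0, "medium": 5_000.0, "high": 2_000.0}[tier]

score, tier = risk_tier({
    "high_risk_jurisdiction": True,
    "cash_intensive_business": True,
})
print(score, tier, monitoring_threshold(tier))  # 5 medium 5000.0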

That creates a seeming dichotomy: How do you use AI risk assessments…to risk assess the AI you are using…to reduce the risk of financial crime?

AI and risk assessments are the newest Catch-22. Even so, jurisdictions like the U.S. and EU, along with many tech-focused public-private sector groups, believe AI-driven risk assessments can be effective in predicting future risks based on historical data.

As well, AI’s socio-technical analyses can identify trends and vulnerabilities for a company better and more efficiently than an individual can.

However, the risk assessment must also consider the inherent risks associated with the use of the AI technology in the first place.

AI operates effectively when it has access to a broad spectrum of data, but this exposes the company to additional risks, potentially increasing the likelihood of a data breach or cyberattack.

Not surprisingly, the answer on how to untangle the Celtic knot of AI risk assessments comes from a group with historically big brains tackling weighty and complicated challenges: The National Institute of Standards and Technology (NIST).

NIST is a non-regulatory agency that is part of the U.S. Department of Commerce.

As part of its mandate, it develops cybersecurity best practices, guidelines and standards, and offers a bevy of practical resources and guidance that have found their way into federal statutes, executive orders and other U.S. policy initiatives.

The group has also turned its considerable processing power toward guidance on AI.

In January 2023, after 18 months of public- and private-sector analysis involving hundreds of stakeholders, the group released a framework to address the risks accompanying AI.

The seminal work outlines what top minds believe are the four core functions a company must diligently scrutinize at the different stages of the AI-adoption, use, review and retune cycle.

Here are compressed and summarized snapshots of those core functions. They are:

Govern – The politics of the matrix

GOVERN is a cross-cutting function that is infused throughout AI risk management and enables the other functions of the process.

  • Aspects of GOVERN, especially those related to compliance or evaluation, should be integrated into each of the other functions.
  • Attention to governance is a continual and intrinsic requirement for effective AI risk management over an AI system’s lifespan and the organization’s hierarchy of AI risks, uses and data security exposure points.

Map – Directions on the AI interstate

The MAP function establishes the context to frame risks related to an AI system.

  • The AI lifecycle consists of many interdependent activities involving a diverse set of actors.
  • In practice, AI actors in charge of one part of the process often do not have full visibility or control over other parts and their associated contexts. The interdependencies between these activities, and among the relevant AI actors, can make it difficult to reliably anticipate impacts of AI systems.
  • For example, early decisions in identifying purposes and objectives of an AI system can alter its behavior and capabilities, and the dynamics of deployment setting (such as end users or impacted individuals) can shape the impacts of AI system decisions.
  • As a result, the best intentions within one dimension of the AI lifecycle can be undermined via interactions with decisions and conditions in other, later activities.

Measure – Building trust, making sure it does the right thing

The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.

  • It uses knowledge relevant to AI risks identified in the MAP function and informs the MANAGE function. AI systems should be tested before their deployment and regularly while in operation.
  • AI risk measurements include documenting aspects of systems’ functionality and trustworthiness.

Manage – Bringing it all together, to keep it together

The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the GOVERN function. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.

  • Contextual information gleaned from expert consultation and input from relevant AI actors – established in GOVERN and carried out in MAP – is utilized in this function to decrease the likelihood of system failures and negative impacts.
  • Systematic documentation practices established in GOVERN and utilized in MAP and MEASURE bolster AI risk management efforts and increase transparency and accountability.
  • Processes for assessing emergent risks are in place, along with mechanisms for continual improvement.
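
One minimal way to translate the four functions into something trackable is sketched below in Python, with invented checklist items – not NIST’s official categories or subcategories – attached to each function for a given AI system.

from dataclasses import dataclass, field

@dataclass
class AIRiskProfile:
    """Hypothetical per-system tracker loosely organized around the four
    NIST AI RMF functions; the checklist items are illustrative inventions."""
    system: str
    govern: dict = field(default_factory=lambda: {
        "accountability_assigned": False, "policies_documented": False})
    map: dict = field(default_factory=lambda: {
        "context_and_purpose_defined": False, "impacted_parties_identified": False})
    measure: dict = field(default_factory=lambda: {
        "pre_deployment_testing": False, "ongoing_monitoring_metrics": False})
    manage: dict = field(default_factory=lambda: {
        "risk_treatment_plan": False, "incident_response_plan": False})

    def open_items(self) -> list:
        """List every checklist item, across all four functions, not yet complete."""
        return [f"{fn}: {item}"
                for fn in ("govern", "map", "measure", "manage")
                for item, done in getattr(self, fn).items() if not done]

profile = AIRiskProfile(system="transaction-monitoring-model")  # hypothetical system name
profile.govern["accountability_assigned"] = True
print(profile.open_items())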

The after-action report – balancing risks, resources, reviews and results

After completing the MANAGE function, plans for prioritizing risk and regular monitoring and improvement will be in place, according to NIST, but you still must go further.

Framework users will at that point have enhanced capacity to manage the risks of deployed AI systems and to allocate risk management resources based on assessed and prioritized risks.

But all of this must live in an atmosphere of continuous improvement, the group stated.

“It is incumbent on Framework users to continue to apply the MANAGE function to deployed AI systems as methods, contexts, risks, and needs or expectations from relevant AI actors evolve over time.”

Caring about the legal boundaries of oversight in humans and AI: The ‘Caremark test’

The NIST framework is the most current resource available to balance the preservation of democratic values with the capitalist technological advancements of AI.

It is, in effect, the risk assessment equivalent of what the legal field refers to as the “Caremark” liability test.

What do we mean by this?

In essence, a company that adopts the standards and guidance set out by NIST will have a strong case to combat what are called Caremark equivalent claims of lack of oversight.

Here is why AI-driven fincrime compliance teams need to “care” about Caremark.

Caremark harkens back nearly three decades to the seminal case “In re Caremark International Inc. Derivative Litigation,” where the Delaware Court of Chancery set forth a test to determine whether a director failed to “exercise reasonable oversight,” according to a September report by lawyers at Latham & Watkins.

“The court concluded that a director could be held liable for such claims only where a plaintiff could establish a director’s ‘lack of good faith as evidenced by sustained or systematic failure’ of oversight. These director oversight claims became broadly known as ‘Caremark claims.’”

While no company can shield itself entirely from risk, a company that meets the NIST standard can likely avoid a great deal of liability – which is the goal it should be striving for.

The best way to mitigate risk is to build a team that understands what they are up against.

This is easier said than done as it can be quite challenging for an in-house legal team to manage its day-to-day tasks, such as contract review and negotiations, while also staying up to date on pending legislation and new executive orders.

To make training and education an effective control for AI programs, a company must have a well-resourced team that operates independently and has reporting mechanisms in place to foster a safe environment for learning and growth within the team.

Accountability structures that outline the roles and responsibilities of team members as well as AI risk management training tailored to the compliance team would be effective to ensure consistent compliance and a thorough understanding of the risks.

Legal, technological, technical issues tied to AI set to collide with AML program duties

While these legal concepts, technological standards and technical guidelines might seem a daunting new chapter in the dynamic world of actual and perceived risk prescience, they are no doubt on a collision course with the modern day fincrime compliance function.

In the ever-evolving landscape of AI and privacy, striking the right balance between innovation and safeguarding personal information is imperative to avoiding litigation and liability while staying current with the changes in technology.

As AI becomes more integrated in our usual course of business, the problems it brings will become more apparent and guidance on how to manage it will become clearer.

By then AI will be commonplace and a new technology will be making us question everything all over again.

Just as criminals are historically “early adopters” of any tip, trick or tech tactic to outsmart bank controls and law enforcement, it would serve compliance champions well to be on the front edge of this gathering AI data risk assessment storm.

That would be preferable to playing a game of catchup with a machine mind that doesn’t stop, doesn’t eat, doesn’t sleep and has only one goal: to capture and analyze as much data as it can – wherever and however it can get it.

About the author

Shayna is a third-year law student (3L) graduating from the Benjamin N. Cardozo School of Law in the spring of 2024.

During her time at Cardozo, she has served as a staff editor on the Arts and Entertainment Journal and wrote a note analyzing the legal implications of NFTs in the live events industry.

She has an interest in technology law, more specifically privacy law and emerging technologies such as AI.

She has interned across several fintech and technology companies working on a variety of matters from the incorporation of the GDPR into a company's internal privacy program to the analysis of recent SEC decisions and their effects on fintech companies.

Currently, she is doing further research into the use of AI programs in evidentiary matters and the newest releases by NIST and NAS.
