
What Does AI in Finance Look Like?

Last updated: April 11, 2025

Advances in artificial intelligence (AI) capabilities and accessibility mean that it is no longer an experimental technology to be adopted at some future point, but one that can now be deployed at scale.

The rise of generative AI, in particular, has seen banks step up adoption, mindful that if they don't get there first, their competitors will. Around three-quarters of banks use AI today, and a further 10% say they expect to do so over the next three years, according to the Bank of England. Earlier this year, Lloyds Banking Group appointed a Head of Responsible AI, while NatWest has become the first UK bank to collaborate with OpenAI.

The reasons for adoption are clear: implemented well, AI could bring down the cost-to-serve while enhancing customer experiences. It paves the way for smarter chatbots, automated investments, and rapid ‘always-on’ hyper-personalised services—all of which can reduce costly customer churn.

AI and related technologies like machine learning are ideal for data-rich industries like banking since they can make sense of vast amounts of information, in multiple formats, with speed and precision. They're particularly powerful in areas like fraud and financial crime detection, where uncovering once-hidden patterns is key to foiling perpetrators.
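As a rough illustration of the kind of pattern-finding involved, the sketch below uses scikit-learn's IsolationForest to flag transactions that sit outside normal behaviour. The features, figures and threshold are invented for the example, not drawn from any real bank's data.

```python
# Minimal sketch: flagging unusual transactions with an unsupervised model.
# All features and figures here are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transactions: [amount_gbp, hour_of_day]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),  # typical amounts
    rng.integers(8, 22, size=1000),                 # daytime activity
])
suspicious = np.array([[9500.0, 3], [7200.0, 4]])   # large, late-night
transactions = np.vstack([normal, suspicious])

# Assume roughly 0.5% of traffic is anomalous; the model isolates outliers.
model = IsolationForest(contamination=0.005, random_state=0)
model.fit(transactions)

flags = model.predict(transactions)                 # -1 = anomaly, 1 = normal
print("Flagged rows:", np.where(flags == -1)[0])
```

In practice, flags like these would feed a human review queue rather than block payments automatically.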

See our FCA Handbook Training Package

Barriers to adopting AI in finance

For all the interest AI has generated in finance, and beyond, there are big regulatory and ethical concerns, too – particularly when it comes to data protection and privacy.

Generative AI models are trained on large amounts of data, so feeding them sensitive commercial or customer information risks making it publicly available.

Another major concern is bias in decision-making, which could unfairly disadvantage or discriminate against some demographics. Some groups, including renters and non-White ethnicities, are already more likely to be declined credit – and AI is only going to exacerbate this if models are trained on biased data.
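Before deploying a credit model, firms can test for exactly this kind of skew. Below is a minimal sketch of such a check, using invented approval figures and the 'four-fifths' rule of thumb for disparate impact.

```python
# Minimal sketch: comparing approval rates across groups.
# Groups and figures are invented purely for illustration.
approvals = {
    "homeowner": {"applications": 800, "approved": 520},
    "renter":    {"applications": 600, "approved": 270},
}

rates = {g: d["approved"] / d["applications"] for g, d in approvals.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} approved")

# Rule of thumb: flag if a group's approval rate falls below 80% of the
# highest group's rate (the 'four-fifths' rule used in fairness testing).
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact against {group}")
```

Here renters are approved at 45% against homeowners' 65%, which falls below the four-fifths threshold and would warrant investigation before the model went live.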

One of the big challenges for banks is the pace of adoption. While they must scrutinise AI tools and their application, moving too slowly risks both falling short of customers' changing expectations and pushing employees to use unapproved tools.

What does the regulator say?

The Financial Conduct Authority (FCA) has explored the potential and risks of AI in depth, including inviting views from the industry. Along with the Information Commissioner’s Office (ICO), it recently set out its support for ‘AI, innovation and growth’, acknowledging that firms need ‘regulatory clarity and certainty around the use of these and other technologies in ways that support responsible innovation and create benefits for the public.’

The current and previous UK governments have signalled their pro-innovation approach to AI – indeed, the Action Plan launched at the start of 2025 sets out a strategy to make the UK an ‘AI superpower’. Upcoming legislation is designed to ‘provide regulatory certainty to help kickstart growth and protect UK citizens and assets’ – in other words, the government wants to encourage responsible innovation.

Artificial intelligence in finance: Current regulations

In terms of existing legislation, the principles of data protection set out in the UK GDPR can be applied to AI too, including ‘Lawfulness, fairness and transparency’, ‘Integrity and confidentiality (security)’ and ‘Accountability’.

Companies that operate in the EU are bound by the EU Artificial Intelligence Act, which came into force in 2024 and sets out key requirements around data governance, AI model risk, fairness and cybersecurity.

Whatever regulations are introduced in the coming years, the FCA’s principles should be the starting point for AI use right now. In particular, firms must ensure that AI-led decisions are in customers’ interests, can be trusted, and are aimed at achieving good outcomes.

Find out more about the regulatory landscape in our blog, Responsible AI Explained: Innovation & Accountability.

AI as a force for good

We can already see examples of what ‘good’ could look like in practice.

A study from the University of Bath found that tweaks to AI algorithms could, in fact, mitigate the bias women face from lenders who use AI, while still improving the company’s profits and reputation. Leading banks have developed their own ethical frameworks and codes of conduct – recognising the critical importance of human oversight in automated decisions.
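One well-known family of such tweaks reweights training examples so that group membership and outcome look statistically independent, along the lines of Kamiran and Calders' reweighing method. A minimal sketch on an invented lending dataset:

```python
# Minimal sketch of reweighing: up-weight under-represented (group, label)
# combinations so a model can't learn the historical skew. Data is invented.
from collections import Counter

# (group, label) pairs from a hypothetical historical lending dataset;
# label 1 = approved, 0 = declined.
data = [("f", 1)] * 120 + [("f", 0)] * 180 + [("m", 1)] * 240 + [("m", 0)] * 160

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

# weight = P(group) * P(label) / P(group, label): examples of approved women
# get weight > 1 here, compensating for their scarcity in the history.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
for (group, label), w in sorted(weights.items()):
    print(f"group={group} label={label} weight={w:.2f}")
```

These weights would then be passed to the model's training routine (for example via a sample_weight argument), nudging decisions back towards parity without discarding any data.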

Healthy workplace cultures are also central to responsible AI innovation.

This means having both the processes in place to ensure accountability, and the knowledge and skills to recognise bad behaviours. Regular, targeted employee compliance training – not just in AI but also in areas such as data protection, cybersecurity, combating bias and discrimination, and whistleblowing – can help protect the organisation from fines and reputational damage.

Model training

One overlooked area where AI could be extremely powerful is compliance training itself. According to our 2025 Annual Benchmarking Report, fewer than half of compliance, learning and development, and training professionals (43%) use AI to support regulatory compliance. This drops to just 10% for financial crime monitoring, and the same again for staff awareness and training.

Their caution is understandable, especially in the case of financial crime, where the risk of false positives and negatives could damage customer experience and trust.

However, in areas like staff training, AI is a highly efficient and effective way to proactively identify skills and learning gaps and deliver personalised, targeted training. It can also equip employees with tools to quickly access the information they need from approved policies, resources and courses.
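As a rough illustration of what gap-targeted training could look like, the sketch below maps assessment scores to recommended modules; the topics, module names and pass mark are all hypothetical, not a description of any particular product.

```python
# Minimal sketch: recommending modules where an employee scored below a
# pass mark. Topics, module names and threshold are hypothetical.
PASS_MARK = 0.8
MODULES = {
    "data_protection": "UK GDPR Essentials",
    "cybersecurity":   "Phishing & Information Security",
    "financial_crime": "Anti-Money Laundering Refresher",
}

def recommend(scores: dict[str, float]) -> list[str]:
    """Return modules to retake, weakest topic first."""
    gaps = [(topic, s) for topic, s in scores.items() if s < PASS_MARK]
    return [MODULES[topic] for topic, _ in sorted(gaps, key=lambda t: t[1])]

print(recommend({"data_protection": 0.65,
                 "cybersecurity": 0.9,
                 "financial_crime": 0.7}))
# -> ['UK GDPR Essentials', 'Anti-Money Laundering Refresher']
```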

By choosing a training provider that itself prioritises responsible AI, using models that reduce the risk of bias and inaccuracy, firms can both simplify and enhance compliance processes. This ensures your workforce has the right competencies to get the most out of the technology, while minimising the risks to both customers and the organisation.

Explore our FCA Compliance Library

Want to learn more about FCA Compliance?

We have created an SMCR roadmap to help you navigate the compliance landscape, supported by a comprehensive library of FCA Courses. Take a look at how our AI-powered digital assistant (Aida) can assist staff by answering compliance questions with accurate, policy-aligned guidance on company rules and relevant legislation.

Related articles

Ask Artificial Intelligence: The Need for Reliability
Despite its capabilities, AI lacks original thought and has been known to produce errors. We explore why trust and reliability must be at the heart of AI's future.

Responsible AI Explained: Innovation & Accountability
Artificial intelligence (AI) is transforming industries worldwide. We explore the principles of responsible AI in the UK and key steps to use AI ethically.

Top 10 FCA Compliance Priorities in 2025
The FCA continues to evolve its regulatory approach. We discuss the major compliance issues facing financial services and what firms need to focus on.