How can wealth managers balance AI innovation with cybersecurity?
19th February 2026 – As artificial intelligence (AI) becomes embedded across wealth management, firms are under increasing pressure to balance innovation with resilience, particularly amid a cyber threat landscape that continues to grow in scale and sophistication.
Cybersecurity has emerged as a “top priority” for UK organisations, with the vast majority (82 percent) planning to increase their budgets for cybersecurity over the coming 12 months, according to the KPMG Global Tech Report 2026. Of this number, 57 percent expect to raise their spending by more than ten percent.
Nevertheless, AI remains the largest area of investment. Over 90 percent of respondents agreed that, by the end of 2026, AI will have evolved from an “efficiency enabler” to a “revenue-driving innovation”, signalling a shift away from early-stage applications and trials, and towards tangible value and measurable returns on investment.
However, as AI becomes more sophisticated, its risks become harder to avoid.
Georgios Lekkas, innovation director at wealthtech Objectway, observed that “concerns over accuracy, algorithmic bias, and the murky, ‘black-box’ nature of AI decision-making” continue to permeate the sector.
Practical obstacles – including opaque models, potential data leaks, and reliance on third-party vendors – remain “particularly pertinent” to wealth managers, given the use of “sensitive client data and bespoke services.”
“These risks are exacerbated by the use of unauthorised AI tools, which can lead to blind spots regarding data leakages, compliance breaches, and inconsistent client outcomes,” Mr Lekkas said.
Yet these risks do not lie solely with the firms.
AI is increasingly being used to engineer highly convincing ‘social’ attacks, such as deepfake-enabled fraud and personalised phishing scams. In such situations, it is often the client – rather than the firm – that is directly targeted.
“The weakest point of any organisation is its people,” noted Ben Kumar, head of strategy for wealth, public policy and investment at 7IM.
This vulnerability is primarily due to limited awareness or experience when it comes to technology. “Individuals do not have the same capacity to ‘screen out’ scams like firms do,” Mr Kumar added, pointing out that it is “far easier to convince a client that you are a firm than it is to convince a firm that you are a client.”
This challenge is compounded by the growing sophistication of AI, and technology more broadly. Mr Kumar explained that attackers are increasingly adept at using data to ‘triangulate’ information between a business and its people.
“Even details shared unconsciously, like those posted on LinkedIn, can be used to create a picture, which gives attackers greater credibility,” he said.
This form of profiling enables fraudsters to “convincingly portray themselves as a wealth or investment manager,” despite the personal relationships that typically exist between high net worth clients and their advisers.
“As relationships become more easily mimicked, it becomes harder for clients to distinguish between what is real and what is fake,” Mr Kumar claimed.
Despite the growing severity of these threats, the industry’s response appears to be subdued.
Regulations such as the Digital Operational Resilience Act (DORA), the EU AI Act, and emerging AI governance standards promote stronger controls, secure development, and vendor oversight, all of which help mitigate AI-enabled attacks.
Similarly, both the Bank of England and the Financial Conduct Authority (FCA) now incorporate cybersecurity into their stress testing and operational resilience frameworks. Yet despite clear regulatory recognition that AI amplifies cyber risk, AI-specific cyber stress tests remain absent.
Progress is also uneven amongst firms. While larger banks are investing heavily in advanced AI-enabled defence tools, many smaller wealth managers and mid-sized firms are lagging behind.
“As AI continually transforms both attack methods and defensive possibilities, regulatory and risk frameworks must evolve in parallel to ensure adequate protection against emerging threats,” urged Mr Lekkas.
He highlighted the need for “dynamic, adaptive governance” to keep pace with AI’s rapid evolution.
“Success will depend on selecting the right use cases, reinforcing robust data governance, and upholding the highest standards of security and regulatory compliance,” he said.
“As wealth management firms navigate this evolution, the balance between innovation, regulatory compliance and trust will be key to shaping the future of financial services.”