While artificial intelligence was nothing new, ChatGPT changed AI’s accessibility, utility and appeal globally. It also quickly brought AI into the financial advice domain. It’s been reported that clients are now using AI to scrutinise the advice they receive, which on one hand can prompt new and interesting questions, but on the other highlights that technology can lack the judgment and nuance a human adviser offers.
To date, many of the advances we’ve seen have been on the productivity front, reinforcing that AI can make the business of advice more efficient but doesn’t replace human judgment. Practices have replaced legacy technology with AI-driven systems, and employed AI to take notes, communicate with clients and prepare Statements of Advice. Our 2025 advice efficiency survey showed that 43 per cent of practices are actively using AI within their advice journey, predominantly for personalisation (53 per cent), data analysis (28 per cent) and compliance (15 per cent).
Unsurprisingly, these applications are expected to expand rapidly in the coming year. As we mark ChatGPT’s third birthday and consider how it’s changed the advice landscape, I put together some reflections on what to watch next, based on insights from my network of 30,000 global intelliflo users. While much of what AI delivers advisers is revolutionary in positive ways, my experience with advisers in the UK has shown there are also areas to watch as the technology rapidly develops.
As the adviser, you are responsible for your client’s data, which can often be extremely sensitive. You’re the gatekeeper of information about your client’s wealth snapshot and their broader family dynamics. The vast majority of advisers take this responsibility seriously.
However, if you’re letting that data out the front door through a broad market AI solution, there may be consequences that haven’t yet fully played out. Public models frequently learn from user inputs, store data in ways that advisers cannot audit, or operate in jurisdictions outside regulatory protection. There’s the potential for data to be weaponised, exposing the client to risk and the practice to reputational and legal damage.
To reduce these risks, it’s critical advisers and practices use trusted, gated AI applications to ensure their clients’ data is protected. We haven’t yet seen the legal precedents of sensitive advice data being leaked or compromised, but it’s becoming a bigger conversation in the UK market. Simply put, it’s not a risk worth taking.
Small errors may have big consequences
Globally, millions of people are now using AI as part of their advice process, whether through an adviser or on their own personal finance journey. The elephant in the room is: what if AI gets it wrong? Given AI is still in its infancy, we are yet to see these consequences play out, but as advisers know, a small calculation error can significantly affect retirement outcomes.
Firms in the UK have expressed concerns that AI could come up with the wrong solutions and that’s a fair worry. They’re also concerned that AI doesn’t give the same answer every time. These insights contributed to our decision to steer away from a black box model in our new Advice Assistant and instead create a rules-based engine, where advisers can configure their own house rules.
Nonetheless, I believe the question of accuracy will be magnified in 2026. Given advisers’ experience and rigour in producing defensible advice, the much greater risk of error relates to people relying on financial influencers on TikTok or a bot to build and protect their personal finances. While younger generations have high financial literacy, without a highly trained adviser they are unlikely to apply the same level of scrutiny to advice they receive online. It underscores the urgency of regulated advisers remaining visible, accessible and technology-enabled, giving end consumers the option to receive trustworthy advice.
For advisers, the question of potential mistakes also reinforces the importance of trusting your experience and expertise, using well-validated AI tools and cross-checking AI’s insights.
The compliance and regulatory question mark
The finfluencer-driven, mass-market advice explosion also raises a bigger question about how the compliance and regulatory goal posts will shift. Whose responsibility is it to protect consumers from (potentially) bad advice when the gates are open? I believe it’s partly a societal and partly a governmental responsibility.
In the UK, our regulator – the Financial Conduct Authority (FCA) – is very open to AI innovation, but of course, our advisers have to show a record of defensible advice, much like in the Australian market. As AI develops further, explainability will become a more central concern. Best practice will hinge on advisers harnessing their data and being able to produce an auditable trail of how advice was produced, for both their clients and regulators.
Again, this all comes back to having technology partners that are trusted and have proven their reliability, which isn’t always easy given the pace of change.
In 2026, AI will continue to deliver enormous benefits to advisers in Australia, but as it advances, it will also reinforce how essential human-based advice is.