When the AFS regulatory regime started in 2004, the advanced algorithms, natural language processing and behavioural finance techniques that power digital advice were still the stuff of science fiction. Face-to-face or paper-based advice was the order of the day.
Replicating complex human interactions in a digital environment is not as easy as it seems at first glance. And in some areas, more detailed regulatory guidance would greatly assist digital advisers.
Advice is sequential
Digital advice is generally provided sequentially, topic by topic – debt reduction, asset protection, superannuation, wealth creation and so on. The client chooses the topics on which they take advice and controls the order in which they obtain it and the priority with which they implement it.
Digital advisers can suggest, but they can’t compel clients to make the right choices. For example, Edgar has $25K in credit card debt, but decides that he wants to make a co-contribution to super. A good human adviser would say this is a bad idea and explain why. So must a digital adviser. But a human adviser with the client in front of them has greater capacity to influence the client.
How can digital advisers manage this risk?
By carefully calibrating their advice engines to detect when clients’ choices may be unsuitable and to provide appropriate warnings and information. Warnings of varying strength may be required, depending on the extent of the risk involved – and the degree of personalisation of the advice.
Not surprisingly, this makes digital advisers a little nervous!
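To make the idea concrete, here is a minimal sketch of how graded warnings might be calibrated. The risk scores, thresholds and warning tiers are illustrative assumptions, not any adviser’s actual rules:

```python
# Hypothetical sketch of graded warnings. Risk scores, thresholds and
# tiers are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ClientChoice:
    topic: str
    risk_score: float        # 0.0 (benign) .. 1.0 (clearly unsuitable)
    advice_is_personal: bool

def warning_for(choice: ClientChoice):
    """Return a warning tier scaled to risk and degree of personalisation."""
    # Personal advice attracts a stricter threshold than general advice.
    threshold = 0.3 if choice.advice_is_personal else 0.5
    if choice.risk_score < threshold:
        return None          # no warning needed
    if choice.risk_score < 0.7:
        return "soft"        # informational nudge
    return "strong"          # prominent, must-acknowledge warning

# Edgar: super co-contribution while carrying $25K of credit card debt
edgar = ClientChoice("super co-contribution", risk_score=0.8, advice_is_personal=True)
print(warning_for(edgar))  # strong
```

The point of the two thresholds is the one made above: the more personalised the advice, the lower the bar for intervening with a warning.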
Client determines quantity and quality of information
The quality of digital advice is entirely dependent on the accuracy of the information provided by the client; digital advisers have limited ability to verify information.
This creates challenges for personal digital advisers who might seek to make use of the safe harbour – they can only collect information through the technology interface. They have limited ability to probe or challenge clients, as a human adviser can when discharging the safe harbour requirement to make reasonable enquiries to obtain complete and accurate information.
Digital advice engines will need inbuilt parameters to detect information that is incomplete or likely to be inaccurate. A ‘triage’ mechanism may be required to protect the client (and the digital adviser) from advice that may not be in their best interests; the appropriate safeguards will depend on the extent of the risk.
Determining when and how to apply these constraints requires detailed risk analysis and sophisticated computing capability.
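A triage check of this kind might look something like the following sketch. The field names and plausibility bounds are purely illustrative assumptions:

```python
# Illustrative triage sketch: field names and plausibility bounds are
# assumptions, not a real adviser's validation rules.
def triage(profile: dict) -> list:
    """Flag missing or implausible client inputs before advice is generated."""
    flags = []
    required = ("age", "income", "total_debt")
    for field in required:
        if profile.get(field) is None:
            flags.append(f"missing: {field}")
    age = profile.get("age")
    if age is not None and not 16 <= age <= 110:
        flags.append("implausible: age")
    income = profile.get("income")
    if income is not None and income < 0:
        flags.append("implausible: income")
    return flags

profile = {"age": 212, "income": 85_000}   # mistyped age, debt not supplied
print(triage(profile))  # ['missing: total_debt', 'implausible: age']
```

Any flag raised could then trigger the appropriate safeguard – a follow-up question, a warning, or a decision not to advise at all.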
Playing or seeking advice?
Gamification is an appealing feature of many digital advice engines. Users are encouraged to play, to vary their inputs in order to explore potential outcomes before deciding on a course of action.
The trouble is, digital advice engines can’t know when client information is accurate and complete, or even whether the client has finished playing and wants real advice. How should they approach the obligation to provide Statements of Advice (SoAs) when personal advice is given?
All the alternatives are problematic.
For example, if an SoA is provided every time the client changes an input, the client will have multiple SoAs. Can the real SoA please stand up? Providing an SoA at the end of each session would be artificial – the advice engine can’t know this is the client’s final position. The session could have been interrupted.
Perhaps digital clients could be required to confirm that their information is complete before receiving advice? But this ‘friction’ would considerably reduce the effectiveness of gamification.
There doesn’t appear to be any easy answer; specific regulatory guidance on this unique situation would be most helpful.
When general meets personal
Digital advice comes in a variety of forms; some are more suitable for general advice than others.
For example, standalone portfolio construction advice can readily be provided as general advice – even if the client nominates their risk appetite. Portfolio monitoring can also be general advice when limited to alerting the client to deviations from a model portfolio and recommending appropriate changes to rebalance the portfolio.
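The rebalancing case can be sketched as follows. The model weights and drift tolerance are assumed for illustration:

```python
# Sketch of a general-advice portfolio monitor: model weights and the
# drift tolerance are illustrative assumptions.
def rebalance_alerts(holdings: dict, model: dict, tolerance: float = 0.05) -> dict:
    """Compare current weights to a model portfolio and suggest a trade
    for any asset class drifting beyond the tolerance."""
    total = sum(holdings.values())
    trades = {}
    for asset, target_weight in model.items():
        weight = holdings.get(asset, 0) / total
        drift = weight - target_weight
        if abs(drift) > tolerance:
            trades[asset] = round(-drift * total, 2)  # buy (+) or sell (-) amount
    return trades

holdings = {"equities": 70_000, "bonds": 20_000, "cash": 10_000}
model = {"equities": 0.60, "bonds": 0.30, "cash": 0.10}
print(rebalance_alerts(holdings, model))
# {'equities': -10000.0, 'bonds': 10000.0}
```

Because the alert is driven entirely by deviation from a model portfolio, not by the client’s personal circumstances, it can plausibly remain general advice.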
Topic-specific strategic advice can readily be provided as general advice. But strategic guidance covering a number of topics can quite quickly become personal advice, even when based on relatively small amounts of information about the client.
The words used to deliver the advice can help. But they’re not a universal solution, because personal advice can’t be transformed into general advice just by saying so. If the user’s personal circumstances have been taken into account in formulating the advice (or the user reasonably believes that they have), the advice is personal.
Managing the grey space in the middle is difficult for digital advisers. Under the current regime, there’s a rigid cut-off. Advice is either general – in which case no best interests duty applies – or personal, in which case the advice must be reasonably likely to achieve the client’s objectives. Advice engines need to be able to determine which so they know which disclosure to provide. Human advisers face this too, but they are better placed to know when to qualify their advice so that the client is aware they shouldn’t rely on it.
Perhaps more flexibility is required for the grey space. For example, digital advisers could include a confidence rating of the likelihood that the advice will be suitable – see crystalknows.com for a great example of this. An alternative to the rigid general/personal cut-off would be most welcome.
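One rough way such a rating could be computed, for illustration only – the completeness measure and discount factor are assumptions, not a proposed standard:

```python
# Hypothetical confidence rating: the scoring formula is an illustrative
# assumption, not a regulatory proposal.
def suitability_confidence(fields_provided: int, fields_total: int,
                           flags: int) -> float:
    """Rough confidence that advice will suit the client: scales with the
    completeness of the profile, discounted for each consistency flag raised."""
    completeness = fields_provided / fields_total
    penalty = 0.9 ** flags  # each unresolved flag erodes confidence
    return round(completeness * penalty, 2)

print(suitability_confidence(fields_provided=8, fields_total=10, flags=1))  # 0.72
```

A score like this could be disclosed alongside the advice, giving the client a sense of how far it can be relied on instead of a binary general/personal label.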
SoAs for class of product advice
Standalone strategic advice doesn’t involve product recommendations. At best, it provides class of product advice, e.g. ‘Edgar, your life insurance is inadequate’ or ‘Janelle, you should make a co-contribution to super’.
Many human advisers believe that no SoA is required if no specific product is recommended. But that’s not the case. Perhaps it should be? Is the obligation to provide a full SoA for class of product advice regulatory overkill – and not just in the digital environment?
These are just some of the challenges faced by digital advisers. It’s not surprising that they’re finding it tricky to navigate the existing regulatory framework; indeed, it’s tough enough for humans.
One thing is for sure. Building a digital advice engine is not for the faint-hearted; it’s a lot more complex than providing a bunch of calculators.
If you have any concerns about any of these issues, please contact us.
Author: Claire Wivell Plater
This blog is the first in a two part series on Regulatory Challenges for Digital Advisers. Part 2 looks at a number of operational challenges faced when delivering advice electronically. The blogs also appeared on Fintech Business.