Abstract

Unlike the many services already transformed by artificial intelligence (AI), the financial advice sector remains committed to a human interface. This is surprising, as an AI-powered financial advisor (a robo-advisor) can offer personalised financial advice at much lower cost than traditional human advice. This is particularly important for those who need, but cannot afford or access, traditional financial advice. Robo-advice is easily accessible, available on demand, and pools all relevant information to find and implement an optimal financial plan. In a perfectly competitive market for financial advice, robo-advice should prevail. Unfortunately, this market is imperfect: asymmetric information causes generalised advice aversion, with a disproportionate lack of trust in robo-advice. Initial distrust makes advice clients reluctant to use, or switch to, robo-advice. This paper investigates the ethical concerns specific to robo-advice that underpin this lack of trust. We propose a regulatory framework addressing these concerns to ensure robo-advice can be an ethical resource for good, resolving the increasing complexity of financial decision-making. Fit-for-purpose regulation augments initial trust in robo-advice and supports advice clients in discriminating between high-trust and low-trust robo-advisors. Aspiring robo-advisors must clear four licensing gateways to qualify for an AI Robo-Advice License (AIRAL). Licensed robo-advisors should then be monitored for ethical compliance. Using a balanced scorecard for ethical performance generates an ethics rating. This gateways-and-ratings methodology builds trust in the robo-advisory market through improved transparency, reduced information asymmetry, and lower risk of adverse selection.