Banner: Artificial Intelligence and Retail Investing: Use Cases and Experimental Research

Executive Summary

The recent growth in the adoption of artificial intelligence (AI) and its applications offers new opportunities, but also potential new risks, for retail investors. Securities regulators are therefore working to understand, prioritize, and address potential harms to investors while continuing to foster innovation.

The research findings presented in this report were developed by the Ontario Securities Commission (OSC) in collaboration with the Behavioural Insights Team as part of the OSC's evidence-based approach to regulatory and educational initiatives. Our findings come from two research streams. We first conducted a literature review and an environmental scan of investing platforms to understand the key use cases of investor-facing AI systems. We then used the results of that research to design and implement a behavioural science experiment to determine how the source of an investment suggestion — AI, human, or a blend of the two — impacts the extent to which investors follow that suggestion.

Based on the literature review and environmental scan conducted in our first research stream, we identified three broad use cases of AI specific to retail investors:

  • Decision Support: AI systems that provide recommendations or advice to guide retail investor investment decisions.[1]
  • Automation: AI systems that automate portfolio and/or fund (e.g., ETF) management for retail investors.
  • Scams and Fraud: AI systems that facilitate scams and fraud targeting retail investors, as well as scams capitalizing on the "buzz" of AI.

Across these use cases, we identified several key benefits and risks associated with the adoption and use of AI systems by retail investors, including the following.

Benefits:

  • Cost Reduction: AI systems can reduce the cost of personalized advice and portfolio management, creating considerable value for retail investors.[2]
  • Access to Advice: More sophisticated and appropriately regulated AI systems can expand access to financial advice for retail investors, particularly those who cannot access advice through traditional channels.
  • Improved Decision-Making: AI tools can be developed to guide investor decision-making in key areas such as portfolio diversification and risk management, as well as to help investors identify financial scams.[3]
  • Improved Performance: Existing research has shown that AI systems can predict earnings changes more accurately and generate more profitable trading strategies than human analysts.[4]

Risks:

  • Bias: AI models are generally subject to the biases and assumptions of the humans who develop them. As such, they may amplify unfair outcomes, even where this is not the system's intended function.
  • Herding: The concentration of AI tools among a few providers may induce herding behaviour, convergence of investment strategies, and chain reactions that exacerbate volatility during market shocks.
  • Data Quality: If an AI model relies on poor-quality data, its outputs (whether advice, recommendations, or otherwise) will also be of poor quality.
  • Governance and Ethics: The "black box" nature of AI systems and limits on data privacy and transparency raise concerns about clearly determining who is accountable when AI systems produce negative outcomes for investors.

Our second research stream consisted of an online randomized controlled trial (RCT). We tested the extent to which Canadians followed a suggestion on how to invest a hypothetical $20,000 across three asset types: equities, fixed income, and cash. We varied the source of the investment suggestion: a human financial services provider, an AI-powered investing tool, or a human financial services provider using an AI tool (i.e., a "blended" approach). We also varied whether the suggested asset allocation was sound or unsound, to determine whether Canadians could discern the quality of suggestions depending on their source. Table 1 presents the different variations of investment suggestions we tested.

Table 1: Investment Suggestions

          Human            AI[5]            Blended
Sound     Sound human      Sound AI         Sound blended
Unsound   Unsound human    Unsound AI       Unsound blended

 

In this experiment, we found that people who received an investment suggestion from a human using an AI tool (i.e., a "blended" advisor) adhered most closely to that suggestion. The deviation of their investment allocation was 9% lower than that of people who received the suggestion from a human source, and 6% lower than that of people who received it from an AI tool. However, these results must be interpreted with caution. Although there were average differences in how the groups invested their funds, these differences were not large enough to meet our stringent statistical thresholds (i.e., they were not statistically significant). Accordingly, we cannot conclude that these results reflect a real effect, as they could be due to chance. In other words, the results may be valid, but we cannot be certain without replications of the experiment.

With this in mind, the results of our experiment have several key implications. Our data help fill an important gap in the research, as much of the existing work has compared differences in trust in financial advice from AI tools or human providers only. Our experiment goes beyond stated trust by focusing on behaviour (albeit in a simulated environment) in response to investment suggestions from various sources. In addition, the "blended" condition allowed us to develop an initial understanding of how investors respond to suggestions from a potential future state of investment advice: a "blended" source. Finally, our data suggest that Canadians trust AI-generated investment suggestions, as we observed no material difference in adherence between the human and AI conditions. This underscores the ongoing need to ensure that AI systems providing investment advice and recommendations are based on unbiased, high-quality data and ultimately improve outcomes for retail investors.

 


[1] In Canada, regulations do not permit firms to use AI to provide advice or recommendations without human oversight; this use case was observed in other jurisdictions.

[2] Banerjee, P. (2024, June 2). AI outperforms humans in financial analysis, but its true value lies in improving investor behaviour. The Globe and Mail. https://www.theglobeandmail.com/investing/personal-finance/household-finances/article-ai-outperforms-humans-in-financial-analysis-but-its-true-value-lies-in/

[3] Ibid.

[4] Kim, A., Muhn, M., & Nikolaev, V. V. (2024). Financial Statement Analysis with Large Language Models. Chicago Booth Research Paper Forthcoming, Fama-Miller Working Paper. http://dx.doi.org/10.2139/ssrn.4835311

[5] The regulatory landscape in Canada does not permit providing recommendations to investors without human oversight, regardless of the technology used. Our experiment was designed to provide an indication of investor behaviour in response to investment suggestions from different sources with this regulatory context in mind.

Introduction

There has been a significant increase in the scale and breadth of artificial intelligence (AI) systems in recent years, including within the retail investing space. While these technologies hold promise for retail investors, regulators internationally are alert to the risks they pose to investor outcomes. In this context, the Ontario Securities Commission (OSC) collaborated with the Behavioural Insights Team (BIT) to provide a research-based overview of:

  • The current use cases of AI within the context of retail investing – and any associated benefits and risks for retail investors.
  • The effects of AI systems on investor attitudes, behaviours, and decision-making.

To address these areas, we implemented a mixed-methods research approach with two research streams:

  1. A literature review and environmental scan of AI systems in Canada and abroad to identify current retail investor-facing use cases of AI.
  2. A behavioural science experiment to determine how the source of an investment suggestion — AI, human, or a blend of the two — impacts the extent to which investors follow that suggestion.

Our report is structured as follows. We first present use cases of AI in retail investing that we have identified. We then present the methodology and results of our behavioural science experiment.

Use Cases

An artificial intelligence (AI) system “…is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[6] The massive growth of available data and computing power has provided ideal conditions to foster advancements in the use of AI across various industries, especially within the financial sector.[7]

AI systems have begun to proliferate in the securities industry with certain applications targeted to retail investors. If responsibly implemented, these applications have the potential to benefit retail investors. For example, they could reduce the cost of personalized advice and portfolio management. However, the use of AI within the retail investing space also brings new risks and uncertainties, including systemic implications:

  • Explainability: AI models are often described as “black boxes” because the process by which they reach decisions is unclear.[8]
  • Data Quality: AI systems are only as good as the data upon which they are based. If systems are based on corrupted, biased, incomplete, or otherwise poor data, investor protection could be compromised.
  • Bias: AI models are generally subject to the biases and assumptions of the humans who developed them.[9] As such, they may accelerate or heighten unfair outcomes, even where this is not the algorithm’s intended function.[10]
  • Herding: The concentration of AI tools among a few providers may induce herding behaviour, convergence of investment strategies, and chain reactions that exacerbate volatility during shocks.[11] In other words, if markets are driven by similar AI models, volatility could increase dramatically to the point of financial system contagions.[12]
  • Market Competition: Large firms with big budgets and greater technological capabilities are generally at a greater advantage than smaller firms in developing AI tools – which could reduce the competitive landscape.
  • Principal-Agent Risks: AI applications developed and used by firms to advise or provide other support to retail investors could be developed to prioritize the interests of the firm rather than their clients. This potential risk is exacerbated by the high complexity and low explainability of AI tools. In the US, the SEC has recently proposed rules to address this risk.[13]
  • Scalability: Due to the scalability of AI technologies and the potential for platforms that leverage this technology to reach a broad audience at rapid speed, any harm resulting from the use of this technology could affect investors on a broader scale than previously possible.[14]
  • Governance: The rapid development of AI systems may result in poorly defined accountability within organizations. Organizations should have clear roles and responsibilities and a well-defined risk appetite related to the development of AI capabilities.[15]
  • Ethics: Like any technology, AI can be manipulated to cause harm. Organizations should maintain transparency, both internally and externally, through disclosure on how they ensure high ethical standards for the development and usage of their AI systems.[16]

In this report, we outline three areas where AI is being used within the retail investing space in certain jurisdictions (Canada, the United States, the EU, and the UK): decision support, automation, and scams and fraud.[17]

Decision Support

We classify decision support as AI applications that provide recommendations or advice to guide investment decisions.[18] This includes applications that provide advice directly to retail investors and those that help individual registrants provide advice to their retail investor clients.[19] Decision support may relate to individual securities transactions or overall investment strategy / portfolio management. Our behavioural science experiment (below) explores this use case in the context of investment allocation decisions.

Platforms for self-directed retail investors have started offering “AI analysts” as an add-on feature to support investor decision-making. For example, a US-based firm has partnered with a US-based fintech platform to leverage AI in analyzing large datasets to provide insight into a range of global assets for users. These applications appear to be intended to provide self-directed investors with relevant insights, information, and data to inform their investment decisions.

Standalone AI tools have also been developed to directly support investors. For example, one US-based platform allows investors to enter details of their financial status, such as their debt, real estate, and investment accounts, to receive advice on whether their investments match their financial goals and risk tolerance. A new ChatGPT plug-in by the same company allows investors to converse with an AI-powered chatbot that can make similar suggestions simply from a pasted copy of their investing statements. Another US-based company operates as a standalone website providing investors with AI-driven tools for identifying patterns and trends in the stock market. The company's first product was a website featuring AI tools to help retail investors gauge how well their portfolio was diversified.

Automation

We define automation as AI applications that automate portfolio and/or fund (e.g., ETF) management for retail investors. Unlike decision support, these AI applications require minimal user input, making investment decisions for investors instead of providing advice and letting the investor decide. There are three key types of AI applications that automate decisions: robo-advisor platforms using AI, AI-driven funds (e.g., ETFs), and standalone AI platforms offering portfolio management.

Robo-advisors have been using algorithms to automate investing for Canadian retail investors since 2014. In Canada, securities regulators require human oversight over investment decisions generated by algorithms.[20] Other countries, including the United States, the United Kingdom, and Australia, appear to permit similar robo-advising platforms to manage client funds with little or no involvement from a human advisor.[21] Within these other markets, there is an emerging trend of robo-advisors using AI. For example, one US-based platform is reportedly using AI to automatically rebalance portfolios and perform tax-loss harvesting for users.

AI-powered exchange-traded funds (ETFs) use AI to identify patterns and trends in the market to identify investment opportunities and manage risk. For example, the US-based WIZ Bull-Rider Bear-Fighter Index was described as using AI to analyze market conditions and automatically shift holdings from “momentum leaders” in bull markets to “defensive holdings” during bear markets.[22] The fund has since been liquidated.[23] Other fund examples include Amplify AI Powered Equity ETF (AIEQ), VanEck Social Sentiment ETF (BUZZ), WisdomTree International AI Enhanced Value Fund (AIVI), and Qraft AI-Enhanced U.S. Large Cap Momentum ETF (AMOM).[24]

Finally, some standalone AI platforms offer automated portfolio management. For example, a US-based platform claimed to use AI and human insight to anticipate market movements and automatically manage, rebalance, and trade different account holdings for self-directed investors.

Scams and Fraud

AI systems can also be used to enhance scams and fraud targeting retail investors, as well as generate scams capitalizing on the “buzz” of AI.

AI is “turbocharging” a wide range of existing fraud and scams. In the past two years, there has been nearly a ten-fold increase in the amount of money lost to investment-related scams reported to the Canadian Anti-Fraud Centre (an increase from $33 million in 2020 to $305 million in 2022).[25] One factor contributing to this increase is that scammers are using AI to produce fraudulent materials more quickly and increase the reach and effectiveness of written scams. Large language models (LLMs) increase scam incidence in three ways. First, they lower the barrier to entry by reducing the amount of time and effort required to conduct the scam. Second, LLMs increase the sophistication of the generated materials as typical errors such as poor grammar and typographical errors are much less frequent.[26] Finally, through “hyper-personalization,” LLMs can improve the persuasiveness of communications. For example, scammers may use AI to replicate email styles of known associates (e.g., family).[27] Beyond applications in email or other written formats, AI has also been used to generate “deepfakes” that deceive investors by impersonating key messengers. A deepfake is a video or voice clip that digitally manipulates someone’s likeness.[28] Deepfake scams have replicated the faces of celebrities, loved ones in distress, government officials, or fictitious CEOs to steal money or personal information from investors.[29],[30] Deepfakes can also be used to bypass voice biometric security systems needed to access investment accounts by cloning investors’ voices.[31] In the future, we may even see instances of deepfakes of investors’ own faces to access investment accounts that use face biometrics.[32],[33]

While many fraudsters use AI to enhance scams, other fraudsters are simply capitalizing on the hype of AI to falsely promise high investment returns. For example, YieldTrust.ai illegally solicited investments on an application that claimed to use “quantum AI” to generate unrealistically high profits. The platform claimed that new investors could expect to earn returns of up to 2.2% per day.[34] These scams tend to advertise “quantum AI” and use social media and influencers to generate hype around their product. For example, the Canadian Securities Administrators issued a 2022 alert for a company called ‘QuantumAI’, flagging that it is not registered in Ontario to engage in the business of trading securities.[35]

 


[6] OECD. (2023). Updates to the OECD’s definition of an AI system explained. https://oecd.ai/en/wonk/ai-system-definition-update

[7] European Securities and Markets Authority. (2023). Artificial Intelligence in EU Securities Markets. https://www.esma.europa.eu/sites/default/files/library/ESMA50-164-6247-AI_in_securities_markets.pdf

[8] Wall, L. D. (2018). Some financial regulatory implications of artificial intelligence. Journal of Economics and Business, 100, 55-63.

[9] Waschuk, G., & Hamilton, S. (2022). AI in the Canadian Financial Services Industry. https://www.mccarthy.ca/en/insights/blogs/techlex/ai-canadian-financial-services-industry

[10] European Securities and Markets Authority. (2023). Artificial Intelligence in EU Securities Markets. https://www.esma.europa.eu/sites/default/files/library/ESMA50-164-6247-AI_in_securities_markets.pdf

[11] Ibid.

[12] Financial Times. (2023). Gary Gensler urges regulators to tame AI risks to financial stability. https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac

[13] Proposed Rule, Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, Exchange Act Release No. 97990, Advisers Act Release No. 6353, File No. S7-12-23 (July 26, 2023) (“Data Analytics Proposal”). https://www.sec.gov/files/rules/proposed/2023/34-97990.pdf

[14] Ibid.

[15] Office of the Superintendent of Financial Institutions (2023). Financial Industry Forum on Artificial Intelligence: A Canadian Perspective on Responsible AI. https://www.osfi-bsif.gc.ca/en/about-osfi/reports-publications/financial-industry-forum-artificial-intelligence-canadian-perspective-responsible-ai

[16] Ibid.

[17] We exclude use cases which do not have unique characteristics or implications specific to retail investing (e.g., chat bots).

[18] In Canada, regulations do not permit firms to provide advice or recommendations without human oversight; this use case was observed in the other jurisdictions.

[19] Individual registrants include financial advisors, investment advisors, and other individuals providing investment advice without any AI assistance.

[20] CSA Staff Notice 31-342 - Guidance for Portfolio Managers Regarding Online Advice. https://www.osc.ca/en/securities-law/instruments-rules-policies/3/31-342/csa-staff-notice-31-342-guidance-portfolio-managers-regarding-online-advice

[21] Ibid.

[22] WIZ. (2023, September 30). Merlyn.AI Bull-Rider Bear-Fighter ETF. https://alphaarchitect.com/wp-content/uploads/compliance/etf/factsheets/WIZ_Factsheet.pdf

[23] Merlyn AI Bull-Rider Bear-Fighter ETF. Bloomberg. https://www.bloomberg.com/quote/WIZ:US

[24] Royal, James. (2024, May 6). 4 AI-powered ETFs: Pros and cons of AI stockpicking funds. Bankrate. https://www.bankrate.com/investing/ai-powered-etfs-pros-cons/

[25] Berkow, J. (2023, September 7). Securities regulators ramp up use of investor alerts to flag concerns. The Globe and Mail. https://www.theglobeandmail.com/business/article-canadian-securities-regulators-investor-alerts/

[26] Fowler, B. (2023, February 16). It’s Scary Easy to Use ChatGPT to Write Phishing Emails. CNET. https://www.cnet.com/tech/services-and-software/its-scary-easy-to-use-chatgpt-to-write-phishing-emails/

[27] Wawanesa Insurance. (2023, July 6). New Scams with AI & Modern Technology. Wawanesa Insurance. https://www.wawanesa.com/us/blog/new-scams-with-ai-modern-technology

[28] Chang, E. (2023, March 24). Fraudster’s New Trick Uses AI Voice Cloning to Scam People. The Street. https://www.thestreet.com/technology/fraudsters-new-trick-uses-ai-voice-cloning-to-scam-people

[29] Choudhary, A. (2023, June 23). AI: The Next Frontier for Fraudsters. ACFE Insights. https://www.acfeinsights.com/acfe-insights/2023/6/23/ai-the-next-frontier-for-fraudstersnbsp

[30] Department of Financial Protection & Innovation. (2023, May 24). AI Investment Scams are Here, and You’re the Target! Official website of the State of California. https://dfpi.ca.gov/2023/04/18/ai-investment-scams/#:~:text=The%20DFPI%20has%20recently%20noticed,to%2Dbe%2Dtrue%20profits.

[31] Telephone Services. TD. https://www.td.com/ca/products-services/investing/td-direct-investing/trading-platforms/voice-print-system-privacy-policy.jsp

[32] Global Times. (2023, June 26). China’s legislature to enhance law enforcement against ‘deepfake’ scam. Global Times. https://www.globaltimes.cn/page/202306/1293172.shtml?utm_source=newsletter&utm_medium=email&utm_campaign=B2B+Newsletter+-+July+2023+-+1

[33] Kalaydin, P. & Kereibayev, O. (2023, August 4). Bypassing Facial Recognition - How to Detect Deepfakes and Other Fraud. The Sumsuber. https://sumsub.com/blog/learn-how-fraudsters-can-bypass-your-facial-biometrics/

[34] Texas State Securities Board. (2023, April 4). State Regulators Stop Fraudulent Artificial Intelligence Investment Scheme. Texas State Securities Board. https://www.ssb.texas.gov/news-publications/state-regulators-stop-fraudulent-artificial-intelligence-investment-scheme

[35] Canadian Securities Administrators. (2022, May 20). Quantum AI aka QuantumAI. Investor Alerts. https://www.securities-administrators.ca/investor-alerts/quantum-ai-aka-quantumai/

Experimental Research

This section describes the methodology and findings of an experiment testing how the source of investment suggestions — AI, human, or a blend of the two (‘blended’) — impacts adherence to that suggestion. We also tested whether any differences in adherence depend on the soundness of the suggestion. The experiment was conducted online with a panel of Canadian adults in a simulated trading environment.
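The 3 (source) × 2 (soundness) factorial design described above can be sketched as a simple random-assignment routine. This is an illustrative sketch only: the condition names follow Table 1 of the report, but the assignment mechanics, the `assign_condition` helper, and the seed are hypothetical assumptions, not the study's actual implementation.

```python
import random

# Hypothetical sketch of the 3 (source) x 2 (soundness) design.
# Condition labels mirror Table 1; everything else is illustrative.
SOURCES = ["human", "AI", "blended"]
SOUNDNESS = ["sound", "unsound"]

# The six experimental cells (e.g., ("blended", "unsound")).
CONDITIONS = [(src, qual) for src in SOURCES for qual in SOUNDNESS]

def assign_condition(rng: random.Random) -> tuple:
    """Randomly assign a participant to one of the six cells."""
    return rng.choice(CONDITIONS)

rng = random.Random(42)  # hypothetical seed for reproducibility
sample = [assign_condition(rng) for _ in range(6000)]
# With equal-probability assignment, each cell receives
# roughly one-sixth of participants.
```

Equal-probability assignment across all six cells is what allows the main effects of source and soundness, and their interaction, to be estimated without confounding.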

We conducted the experiment in two waves. As described further in the results section, in the first wave of the experiment, the recommended cash allocation in the “unsound” condition was objectively very high (20%) but it was also the level that participants naturally gravitated toward, regardless of the suggestion they received. This meant that adherence overall was higher in the “unsound” condition, which in turn limited our ability to assess the interaction effects between soundness of the advice and source of that advice.

To address this issue, we collected a second wave of data approximately three months later. In this second wave, we kept the suggested cash allocation consistent between the sound and unsound conditions and varied the equity and fixed income suggestions to reflect the soundness of the suggestion. We also collected data for a “control” group that did not receive any suggestion. We did this to understand how participants would allocate their funds in the absence of an investment suggestion, testing our hypothesis about the underlying preference for a larger cash allocation.
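One way to operationalize "adherence" in a design like this is the absolute deviation between a participant's chosen allocation and the suggested allocation, summed across the three asset classes (a smaller deviation means closer adherence). The metric below and the allocation figures in the example are illustrative assumptions, not the study's actual measure or conditions.

```python
# Hypothetical adherence measure: total absolute deviation (in percentage
# points) between a participant's allocation and the suggestion, across
# equities, fixed income, and cash. Figures below are illustrative only.

def allocation_deviation(chosen: dict, suggested: dict) -> float:
    """Sum of absolute deviations from the suggested allocation."""
    assets = ("equities", "fixed_income", "cash")
    return sum(abs(chosen[a] - suggested[a]) for a in assets)

suggested = {"equities": 60.0, "fixed_income": 30.0, "cash": 10.0}
chosen = {"equities": 50.0, "fixed_income": 30.0, "cash": 20.0}

# A participant who shifts 10 points from equities to cash
# deviates by 20 points in total.
print(allocation_deviation(chosen, suggested))  # 20.0
```

Under a metric like this, a participant gravitating toward a high cash allocation would mechanically appear more adherent in any condition whose suggestion already included a high cash share, which is the wave-one confound described above.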

Conclusion

The applications of artificial intelligence within the securities industry have grown rapidly, with many targeted to retail investors. AI systems offer clear potential to both benefit and harm retail investors. The potential harm to investors is made more pronounced by the scalability of these applications (i.e., the potential to reach a broad audience at rapid speed). The goal of this research was to support stakeholders in understanding and responding to the rapid acceleration of the use of AI within the retail investing space.

The research was conducted in two phases. First, we conducted desk research. This phase included a scan and synthesis of relevant behavioural science literature, examining investors’ attitudes, perceptions, and behaviours related to AI. We also conducted a review of investing platforms and investor-facing AI tools to understand how AI is being used on investing platforms. Our desk research revealed that the existing evidence base on how these tools impact investors is limited, and may not be highly generalizable in the future, given how recent the rapid evolution of AI has been.

From our desk research phase, we identified three broad use cases of AI within the retail investing space:

  • Decision Support: AI systems that provide recommendations or advice to guide retail investor investment decisions.
  • Automation: AI systems that automate portfolio and/or fund (e.g., ETF) management for retail investors.
  • Scams and Fraud: AI systems that either facilitate or mitigate scams and fraud targeting retail investors, as well as scams capitalizing on the “buzz” of AI.

In the second phase of this research, we built an online investment simulation to test investors’ uptake of investment suggestions provided by a human, AI tool, or combination of the two (i.e., a human using an AI support tool). The experiment generated several important insights. We found that participants adhered to an investment suggestion most closely when it was provided by a blend of human and AI sources. However, this difference did not meet our more stringent statistical thresholds, and as such, we cannot be certain that it is not due to chance. That said, this finding adds to existing research on AI and retail investing, as the current literature does not examine “blended” advice. Importantly, we found no discernible difference in adherence to investment suggestions provided by a human or AI tool, indicating that Canadian investors may not have a clear aversion to receiving investment advice from an AI system.

Our research also identified several risks associated with the use of AI within the retail investing space that could lead AI tools to provide investors with advice that is not relevant, appropriate, or accurate. Like human suggestions, AI and blended sources of suggestions had a material impact on the asset allocation decisions of participants, even when that advice was unsound. This underlines the ongoing need to understand the provision of investment recommendations from AI systems, especially given the observed openness among Canadians to AI-informed advice. In particular, there is a need to ensure that algorithms are based on high quality data, that factors contributing to bias are proactively addressed, and that these applications prioritize the best interests of investors rather than the firms who develop them.

Regulators are already proposing approaches to address these risks. For example, in the US, the SEC proposed a new rule that would require investment firms to eliminate or neutralize the effect of any conflict of interest resulting from the use of predictive data analytics and AI that places the interests of the firm ahead of the interests of investors.[48] More broadly, industry regulators and stakeholders should seek to leverage data collected by investing platforms and investor-facing AI tools to investigate the extent to which these tools are resulting in positive or negative outcomes for investors.

The results of our experiment suggest one other critical policy and education implication: investors appear to have a strong bias toward holding more cash in their investment portfolios than most financial experts recommend. This finding suggests that excessive cash allocations should be a focus of educational efforts, potentially broadened to include appropriate risk-taking when investing.

This research also underscores several positive impacts that AI tools may have on investor behaviours. For example, tools could be created to support financial inclusion through increased access to more affordable investment advice. We also see a significant opportunity for AI to be used by stakeholders to improve the detection of fraud and scams.

AI presents a range of potential benefits and risks for retail investors. As this technology continues to advance in capabilities and applications, more research will be needed to support capital markets stakeholders in better understanding the implications for retail investors. This report provides important findings and insights for stakeholders in an increasingly complex environment.

 


[48] U.S. Securities and Exchange Commission. (2023). SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers. Press Release. https://www.sec.gov/news/press-release/2023-140

Authors

Ontario Securities Commission:

Patrick Di Fonzo
Senior Advisor, Behavioural Insights 
[email protected]

Matthew Kan
Senior Advisor, Behavioural Insights 
[email protected]

Marian Passmore
Senior Legal Counsel, Investor Office
[email protected]

Meera Paleja 
Program Head, Behavioural Insights
[email protected]

Kevin Fine
Senior Vice President, Thought Leadership
[email protected]

Behavioural Insights Team (BIT):

Laura Callender
Senior Advisor 
[email protected]

Riona Carriaga
Associate Advisor
[email protected]

Sasha Tregebov
Director
[email protected]

Appendices