
Artificial Intelligence and Retail Investing: Scams and Effective Countermeasures

Executive Summary

There has been a significant increase in the scale and breadth of artificial intelligence (AI) applications in retail investing. While these technologies hold promise for retail investors, they also pose novel risks, in particular the risk of AI increasing investor susceptibility to scams. The terms scam and fraud are often used interchangeably; however, for the purposes of this report, we define them as follows:

  1. Scams: Deceptive schemes intended to manipulate individuals into willingly providing information and/or money.
  2. Frauds: Deceptive schemes to gain unauthorized access to personal information and/or money without the targets’ knowledge or consent. Fraud is also a broader legal term that covers intentional dishonest activity, including scams.

The development and deployment of AI systems in capital markets raises important regulatory questions. The OSC is taking a holistic approach to evaluating the impact of AI systems on capital markets. This includes understanding how market participants benefit from the use of AI systems and the risks associated with their use. It also includes analyzing how their use affects different market participants, whether investors, marketplaces, advisors, dealers, or investment funds. We hope our work in identifying scams and providing mitigation techniques will add to our growing body of publications relating to AI system deployment, which includes:

  • Artificial Intelligence in Capital Markets – Exploring Use Cases in Ontario (October 10, 2023) [1]
  • AI and Retail Investing (September 11, 2024)

This research was conducted by the OSC’s Research and Behavioural Insights Team with the assistance of the consultancy Behavioural Insights Team (BIT) Canada. Our research is structured into two components:

  1. A literature and environmental scan to understand current trends in AI-enabled online scams, and a review of system- and individual-level mitigation strategies for retail investor protection.
  2. A behavioural science experiment to assess the effectiveness of two types of mitigation strategies in reducing susceptibility to AI-enhanced investment scams. This experiment also sought to assess whether AI technologies are increasing investor susceptibility to scams.

The literature and environmental scan revealed that malicious actors are exploiting AI capabilities to more effectively deceive investors, orchestrate fraudulent schemes, and manipulate markets, posing significant risks to investor protection and the integrity of capital markets. Generative AI technologies are “turbocharging” common investment scams by increasing their reach, efficiency, and effectiveness. New types of scams are also being developed that were impossible without AI (e.g., deepfakes and voice cloning) or that exploit the promise of AI through false claims of ‘AI-enhanced’ investment opportunities. Together, these enhanced and new types of scams are creating an investment landscape where scams are more pervasive and damaging, as well as harder to detect.

To combat these heightened risks, we explored proven and promising strategies to mitigate the harms associated with AI-enhanced or AI-related investment scams. We identified two sets of mitigations: system-level mitigations, which limit the risk of scams across all (or a large pool of) investors, and individual-level mitigations, which help empower or support individual investors in detecting and avoiding scams. At the individual level, we found promise in innovative mitigation strategies more commonly used to address political misinformation, such as “inoculation” interventions.

BIT Canada and the OSC’s Research and Behavioural Insights Team conducted an online randomized controlled trial (RCT) to test the efficacy of promising mitigation strategies, as well as to better substantiate the harm associated with scammers’ use of generative AI. In this experiment, over 2,000 Canadian participants invested a hypothetical $10,000 across six investment opportunities in a simulated social media environment. The investment opportunities promoted ETFs, cryptocurrencies, and investment advisory services (e.g., robo-advising or AI-backed trading algorithms), and included a combination of legitimate investment opportunities, conventional scams, and AI-enhanced scams. We then observed how participants allocated their funds across the investment opportunities. Some participants were exposed to one of two mitigation techniques:

  1. Inoculation: a technique that provides high-level guidance on scam awareness prior to exposure to the investment opportunities; and,
  2. A simulated web-browser plug-in that flagged potentially “high-risk” opportunities (a sketch of how such flagging might work follows this list).
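
The plug-in in our experiment was simulated rather than functional, but a minimal sketch can make the flagging mechanic concrete. The Python sketch below scores a post against a list of scam-associated phrases; all phrases, weights, and the threshold are illustrative assumptions, not the logic used in the study.

# Minimal sketch of a rule-based scam flagger, similar in spirit to the
# simulated plug-in tested in the experiment. The phrases, weights, and
# threshold below are assumptions for illustration only.

RISK_SIGNALS = {
    "guaranteed returns": 3,
    "risk-free": 3,
    "act now": 2,
    "limited spots": 2,
    "10x your money": 3,
    "ai-powered algorithm": 1,
}

HIGH_RISK_THRESHOLD = 4  # assumed cut-off for attaching a "high-risk" label

def risk_score(post_text: str) -> int:
    """Sum the weights of all risk signals present in a post."""
    text = post_text.lower()
    return sum(weight for phrase, weight in RISK_SIGNALS.items() if phrase in text)

def flag_if_high_risk(post_text: str) -> str | None:
    """Return a warning label when a post crosses the risk threshold."""
    if risk_score(post_text) >= HIGH_RISK_THRESHOLD:
        return "High-risk: this post matches common investment-scam patterns."
    return None

print(flag_if_high_risk("Our AI-powered algorithm offers guaranteed returns. Act now!"))

A production tool would likely pair simple heuristics like these with trained classifiers and human review; the point here is only to illustrate how in-context flagging can work.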

We found that:

  • AI-enhanced scams pose significantly more risk to investors compared to conventional scams. Participants invested 22% more in AI-enhanced scams than in conventional scams. This finding suggests that using widely available generative AI tools to enhance fraudulent materials can make scams much more compelling.
  • The “Inoculation” technique and web-browser plug-ins can significantly reduce the magnitude of harm posed by AI-enhanced scams. Both mitigation strategies we tested were effective at reducing susceptibility to AI-enabled scams, as measured through invested dollars. The “inoculation” strategy reduced investment in fraudulent opportunities by 10%, while the web-browser plug-in reduced investment by 31%.

Based on our findings from the experiment and the preceding literature and environmental scan, we conclude that:

  • Widely available generative AI tools can easily enhance fraudulent materials for illegitimate investment opportunities—and that these AI enhancements can increase the appeal of these opportunities.
  • System-level mitigations, complemented by individual-level mitigations, are needed for retail investor protection against AI-related scams.
  • Individual-level mitigations such as the “inoculation” technique and web-browser plug-ins can be effective tools for reducing retail investors’ susceptibility to AI-enhanced scams.
     

[1] https://oscinnovation.ca/resources/Report-20231010-artificial-intelligence-in-capital-markets.pdf

Introduction

The rapid escalation in the scale and application of artificial intelligence (AI) has created a critical challenge for protecting retail investors against investment scams. To promote retail investor protection, we must understand how AI is enabling and generating investment scams, how investors are responding to these threats, and which mitigation strategies are effective. Consequently, we examined:

  1. The use of artificial intelligence to conduct financial scams and other fraudulent activities, including:
    • How scammers use AI to increase the efficacy of their financial scams;
    • How AI distorts information and promotes disinformation and/or misinformation;
    • How effectively people distinguish accurate information from AI-generated disinformation and/or misinformation; and,
    • How the promise of AI products and services is used to scam and defraud retail investors.
  2. The mitigation techniques that can be used to inhibit financial scams and other fraudulent activities that use AI at the system level and individual level.

Our report uses a mixed-methods research approach to explore each of these key areas:

  1. A literature and environmental scan to understand current trends in AI-enabled online scams, and a review of system- and individual-level mitigation strategies to protect consumers. This included a review of 50 publications and “grey” literature sources (e.g., reports, white papers, proceedings, and papers by government agencies and private companies) and 28 media sources. This scan yielded two prominent trends in AI-enabled scams: (1) using generative AI to ‘turbocharge’ existing scams; and (2) selling the promise of ‘AI-enhanced’ investment opportunities. The scan also summarized current system- and individual-level mitigation techniques.
  2. A behavioural science experiment to assess the effectiveness of two types of mitigation strategies in reducing susceptibility to AI-enhanced investment scams. This experiment also sought to quantify and confirm that AI technologies are increasing investor susceptibility to scams.

Desk Research

Experimental Research

To further our research, we conducted an experiment to examine 1) whether AI-enhanced scams are more harmful to retail investors than conventional scams, and 2) whether mitigation strategies can reduce the adverse effects of AI-enhanced scams by improving investors' ability to detect and avoid them.
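
To make the design concrete, the sketch below (in Python) shows the core comparison such an experiment supports: mean dollars, out of the hypothetical $10,000, allocated to fraudulent opportunities in each treatment arm. The records and arm labels are invented for illustration and will not reproduce the study’s actual figures.

# Hypothetical sketch of the experiment's core comparison. Each record is
# (treatment arm, dollars allocated to fraudulent opportunities out of
# $10,000); the values are invented for illustration.
from statistics import mean

allocations = [
    ("control", 5200), ("control", 4800),
    ("inoculation", 4600), ("inoculation", 4400),
    ("plugin", 3500), ("plugin", 3300),
]

def arm_mean(arm: str) -> float:
    """Average dollars allocated to fraudulent opportunities in one arm."""
    return mean(d for a, d in allocations if a == arm)

baseline = arm_mean("control")
for arm in ("inoculation", "plugin"):
    m = arm_mean(arm)
    drop_pp = (baseline - m) / 10_000 * 100     # absolute drop, in percentage points
    drop_rel = (baseline - m) / baseline * 100  # decrease relative to control
    print(f"{arm}: mean ${m:,.0f} ({drop_pp:.0f} pp lower; {drop_rel:.0f}% decrease)")

The actual study also varied the opportunities shown (legitimate, conventional scam, AI-enhanced scam), allowing the same comparison to be run separately by scam type.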

Conclusion

The use of AI in the retail investing space is rapidly expanding. While AI as a technology is neither inherently good nor bad from an investor protection perspective, the use of AI could bring new threats to investor welfare when applied to scams. Malicious actors are exploiting the advanced capabilities of AI to manipulate markets, deceive investors, and orchestrate fraudulent schemes—posing significant risks to the integrity of financial markets. This concern is further amplified when considering the findings from our previous report on Artificial Intelligence and Retail Investing[115], which noted that retail investors adhered to advice from AI advisors similarly to human advisors. If retail investors trust AI advice as much as they do human advice, then poor, misleading, and/or manipulative AI advice could present substantial retail investor protection concerns.

This research report was designed to assess the current level of risk associated with AI-enabled scams and to determine a responsive, evidence-based path forward for investor protection. Our research was conducted in two phases. First, we conducted desk research, which revealed the various ways malicious actors can exploit AI capabilities to more effectively deceive investors. Generative AI technologies are “turbocharging” common investment scams by increasing their reach, efficiency, and effectiveness. New scams are also being developed that were impossible without AI (e.g., deepfakes and voice cloning) or that exploit the promise of AI through false claims of ‘AI-enhanced’ investment opportunities. Together, these enhanced and new types of investment scams are creating an investment landscape where scams are more pervasive, harder to detect, and potentially more damaging.

We also explored evidence-based strategies to mitigate the harms associated with AI-enhanced or AI-related investment scams. We explored two sets of mitigations: system-level mitigations, which are designed to limit the risk of scams across all (or a large pool of) investors, and individual-level mitigations, which are designed to empower or support individual investors in detecting and avoiding scams. Drawing from research in various online contexts, including work targeting misinformation and disinformation, we identified specific measures tailored for AI-enhanced scams, as well as broader strategies applicable to this domain and others.

In the second phase of our research, we built an online investment simulation to empirically test investors’ susceptibility to fraudulent investment opportunities and the effectiveness of mitigation strategies designed to protect investors from these harms. The experiment generated critical, novel, and policy-relevant insights:

  • AI-enhanced scams pose significantly more risk to investors compared to conventional scams. Participants invested 22% more in AI-enhanced scams than in conventional scams. This finding suggests that using widely available generative AI tools to enhance materials can make scams much more compelling. These findings reinforce the critical and escalating threat that the availability of generative AI tools poses to investors.
  • Mitigations can reduce the magnitude of harm posed by AI-enhanced scams. In particular, a web-browser plug-in that flags potential scams could be quite effective. Both mitigation strategies we tested reduced susceptibility to AI-enabled scams. The “inoculation” strategy reduced the amount invested in fraudulent opportunities by 5 percentage points (a 10% decrease), while the web-browser plug-in reduced investments by 17 percentage points (a 31% decrease). A back-of-envelope note on how these absolute and relative figures relate follows below.

    These results suggest that relevant, clear educational materials provided before people review investment opportunities can reduce the magnitude of harm posed by (AI-enhanced) investment scams. This inoculation technique could be implemented as an advertisement within social media platforms, such as Instagram or X.

    We also present significant empirical and theoretical support for the development of a browser or app-based, AI-driven scam detection tool. Beyond labelling potential scams in situ, this type of messaging could be used within education materials and within advertisements in response to certain search results. For example, these types of warnings could appear as Google search ads when users search for investments that have already been identified as scams.
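
A note on the figures above: the percentage-point reductions are absolute drops in the share of funds allocated to fraudulent opportunities, while the percentage decreases are relative to the control group’s allocation. As a back-of-envelope check (an inference from the reported figures, not a number drawn directly from the study data), the two are related by

\[ \text{relative decrease} = \frac{\Delta_{\text{pp}}}{\text{baseline}_{\text{pp}}} \quad\Rightarrow\quad \frac{5\ \text{pp}}{0.10} = 50\ \text{pp} \quad\text{and}\quad \frac{17\ \text{pp}}{0.31} \approx 55\ \text{pp}, \]

implying that control-group participants allocated roughly half of their funds to fraudulent opportunities.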

[115] Ontario Securities Commission (2024), Artificial Intelligence and Retail Investing: Use Cases and Experimental Research.

Authors

Ontario Securities Commission:

Matthew Kan
Senior Advisor, Behavioural Insights 
[email protected]

Patrick Di Fonzo
Senior Advisor, Behavioural Insights 
[email protected]

Meera Paleja 
Program Head, Behavioural Insights
[email protected]

Kevin Fine
Senior Vice President, Thought Leadership
[email protected]

Behavioural Insights Team (BIT):

Amna Raza
Senior Advisor 
[email protected]

Riona Carriaga
Associate Advisor
[email protected]

Sasha Tregebov
Director
[email protected]