
Artificial Intelligence and Retail Investing: Use Cases and Experimental Research

Executive Summary

The recent increase in the scale and applications of artificial intelligence (AI) presents a range of new possibilities and potential risks to retail investors. As such, securities regulators are striving to understand, prioritize, and address potential investor harms, while continuing to foster innovation.

The research findings presented in this report were developed by the Ontario Securities Commission (OSC) in collaboration with the Behavioural Insights Team (BIT) as part of the OSC’s evidence-based approach to regulatory and educational initiatives. Our findings stem from two research streams. We conducted a literature review and environmental scan of investing platforms to understand the prominent use cases of AI systems that are retail investor-facing. We then used the findings from this research to inform the design and implementation of a behavioural science experiment to determine how the source of an investment suggestion – AI, human, or a blend of the two – impacts the extent to which investors follow that suggestion.

Based on the literature review and environmental scan conducted in our first research stream, we identified three broad use cases of AI specific to retail investors:

  • Decision Support: AI systems that provide recommendations or advice to guide retail investor investment decisions.[1]
  • Automation: AI systems that automate portfolio and/or fund (e.g., ETF) management for retail investors.
  • Scams and Fraud: AI systems that facilitate scams and fraud targeting retail investors, as well as frauds capitalizing on the “buzz” of AI.

Within these use cases, we identified several key benefits and risks associated with the adoption and usage of AI systems by retail investors, including the following.

Benefits:

  • Reduced Cost: AI systems can reduce the cost of personalized advice and portfolio management, thereby creating considerable value for retail investors.[2]
  • Access to Advice: More sophisticated and properly regulated AI systems can provide increased access to financial advice for retail investors, particularly those who cannot access advice through traditional channels.
  • Improved Decision Making: AI tools can be developed to guide investor decision-making in key areas such as portfolio diversification and risk management, as well as to assist investors in identifying financial scams.[3]
  • Enhanced Performance: Existing research has shown that AI systems can make more accurate predictions of earnings changes and generate more profitable trading strategies compared to human analysts.[4]

Risks:

  • Bias: AI models are generally subject to the biases and assumptions of the humans who develop them. As such, they may heighten unfair outcomes, even where this is not the system’s intended function.
  • Herding: The concentration of AI tools among a few providers may induce herding behaviour, convergence of investment strategies, and chain reactions that exacerbate volatility during market shocks.
  • Data Quality: If an AI model is built on poor data quality, then the outputs, whether advice, recommendations, or otherwise, will be of poor quality as well.
  • Governance and Ethics: The ‘black box’ nature of AI systems and limitations around data privacy and transparency create concerns around clear accountability in cases where AI systems produce adverse outcomes for investors.

Our second research stream consisted of implementing an online, randomized controlled trial (RCT). We tested how closely Canadians followed a suggestion for how to invest a hypothetical $20,000 across three types of assets: equities, fixed income, and cash. We varied who provided the investment suggestion: a human financial services provider, an AI investment tool, or a human financial services provider using an AI tool (i.e., ‘blended’ approach). We also varied whether the suggested asset allocation was sound or unsound to see whether Canadians could discern the quality of the suggestion depending on who was delivering it. Table 1 outlines the different variations of investment suggestions we tested.

Table 1: Investment Suggestions
          Human           AI[5]           Blended
Sound     Sound Human     Sound AI        Sound Blended
Unsound   Unsound Human   Unsound AI      Unsound Blended

 

In this experiment, we found that people who received the investment suggestion from a human using an AI tool (i.e., a “blended” advisor) followed the suggestion most closely. Their investment allocation deviated 9% less than those who received the suggestion from the human source, and 6% less than those who received the suggestion from an AI tool. However, these findings should be interpreted with caution. Although there were mean differences in how the groups allocated their funds, these differences were not large enough to meet our stringent statistical criteria (i.e., they were not statistically significant). As a result, it is unclear whether these findings reflect a real effect, as they may be due to chance. In other words, the effect may be real, but we cannot be certain without further replications of the experiment.
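The report does not specify the exact adherence metric used in the analysis. As a minimal sketch, assuming adherence is measured as the total absolute deviation between a participant’s allocation and the suggested allocation (a common choice for this kind of outcome), it could be computed as follows; the asset-class names and numbers below are hypothetical:

```python
def allocation_deviation(allocated, suggested):
    """Total absolute percentage-point deviation between a participant's
    allocation and the suggested allocation, summed across asset classes.
    Lower values indicate closer adherence to the suggestion."""
    return sum(abs(allocated[asset] - suggested[asset]) for asset in suggested)

# Hypothetical suggestion: 60% equities, 30% fixed income, 10% cash
suggested = {"equities": 60, "fixed_income": 30, "cash": 10}
# A participant who shifts 10 points from equities into cash
participant = {"equities": 50, "fixed_income": 30, "cash": 20}
print(allocation_deviation(participant, suggested))  # prints 20
```

Under this metric, a participant who follows the suggestion exactly scores 0, and comparisons such as “deviated 9% less” can be made by comparing group means of this quantity.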

With this in mind, the findings from our experiment present several key implications. Our data helps fill an important gap in the research, as much of the existing work has only compared differences in trust in financial advice from AI tools versus human providers. Our experiment goes beyond stated trust by focusing on behaviour (albeit in a simulated environment) in response to investment suggestions from various sources. Furthermore, the addition of the ‘blended’ condition allowed us to develop an initial understanding of how investors respond to suggestions from a potential future state of investment advice – a ‘blended’ source. Finally, our data suggests that Canadians are trusting of investment suggestions generated by AI systems, as we did not observe any material difference in adherence between our human and AI conditions. This underlines the ongoing need to ensure that AI systems providing investment advice and recommendations are based on unbiased, high-quality data, and ultimately enhance the retail investor experience.

 


[1] In Canada, regulations forbid firms from using AI to provide advice or recommendations without human oversight; this use case was observed in the other jurisdictions.

[2] Banerjee, P. (2024, June 2). AI outperforms humans in financial analysis, but its true value lies in improving investor behavior. The Globe and Mail. https://www.theglobeandmail.com/investing/personal-finance/household-finances/article-ai-outperforms-humans-in-financial-analysis-but-its-true-value-lies-in/

[3] Ibid.

[4] Kim, A., Muhn, M., & Nikolaev, V. V. (2024). Financial statement analysis with large language models. Chicago Booth Research Paper Forthcoming, Fama-Miller Working Paper. http://dx.doi.org/10.2139/ssrn.4835311

[5] The regulatory landscape in Canada does not permit recommendations to be provided to investors without human oversight, regardless of what technology is used. Our experiment is intended to provide an indication of investor behaviour when faced with investment suggestions from different sources, with this regulatory backdrop in mind.

Introduction

There has been a significant increase in the scale and breadth of artificial intelligence (AI) systems in recent years, including within the retail investing space. While these technologies hold promise for retail investors, regulators internationally are alert to the risks they pose to investor outcomes. In this context, the Ontario Securities Commission (OSC) collaborated with the Behavioural Insights Team (BIT) to provide a research-based overview of:

  • The current use cases of AI within the context of retail investing – and any associated benefits and risks for retail investors.
  • The effects of AI systems on investor attitudes, behaviours, and decision-making.

To address these areas, we implemented a mixed-methods research approach with two research streams:

  1. A literature review and environmental scan of AI systems in Canada and abroad to identify the current retail investor-facing use cases of AI.
  2. A behavioural science experiment to determine how the source of an investment suggestion — AI, human, or a blend of the two — impacts the extent to which investors follow that suggestion.

Our report is structured as follows. We first present use cases of AI in retail investing that we have identified. We then present the methodology and results of our behavioural science experiment.

Use Cases

An artificial intelligence (AI) system “…is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[6] The massive growth of available data and computing power has provided ideal conditions to foster advancements in the use of AI across various industries, especially within the financial sector.[7]

AI systems have begun to proliferate in the securities industry with certain applications targeted to retail investors. If responsibly implemented, these applications have the potential to benefit retail investors. For example, they could reduce the cost of personalized advice and portfolio management. However, the use of AI within the retail investing space also brings new risks and uncertainties, including systemic implications:

  • Explainability: AI models are often described as “black boxes” because the process by which they reach decisions is unclear.[8]
  • Data Quality: AI systems are only as good as the data upon which they are based. If systems are based on corrupted, biased, incomplete, or otherwise poor data, investor protection could be compromised.
  • Bias: AI models are generally subject to the biases and assumptions of the humans who developed them.[9] As such, they may accelerate or heighten unfair outcomes, even where this is not the algorithm’s intended function.[10]
  • Herding: The concentration of AI tools among a few providers may induce herding behaviour, convergence of investment strategies, and chain reactions that exacerbate volatility during shocks.[11] In other words, if markets are driven by similar AI models, volatility could increase dramatically to the point of financial system contagions.[12]
  • Market Competition: Large firms with big budgets and greater technological capabilities are generally at a greater advantage than smaller firms in developing AI tools – which could reduce the competitive landscape.
  • Principal-Agent Risks: AI applications developed and used by firms to advise or provide other support to retail investors could be developed to prioritize the interests of the firm rather than their clients. This potential risk is exacerbated by the high complexity and low explainability of AI tools. In the US, the SEC has recently proposed rules to address this risk.[13]
  • Scalability: Due to the scalability of AI technologies and the potential for platforms that leverage this technology to reach a broad audience at rapid speed, any harm resulting from the use of this technology could affect investors on a broader scale than previously possible.[14]
  • Governance: The rapid development of AI systems may result in poorly defined accountability within organizations. Organizations should have clear roles and responsibilities and a well-defined risk appetite related to the development of AI capabilities.[15]
  • Ethics: Like any technology, AI can be manipulated to cause harm. Organizations should maintain transparency, both internally and externally, through disclosure on how they ensure high ethical standards for the development and usage of their AI systems.[16]

In this report, we outline three areas where AI is being used within the retail investing space in certain jurisdictions (Canada, the United States, the EU, and the UK): decision support, automation, and scams and fraud.[17]

Decision Support

We classify decision support as AI applications that provide recommendations or advice to guide investment decisions.[18] This includes applications that provide advice directly to retail investors and those that help individual registrants provide advice to their retail investor clients.[19] Decision support may relate to individual securities transactions or overall investment strategy / portfolio management. Our behavioural science experiment (below) explores this use case in the context of investment allocation decisions.

Platforms for self-directed retail investors have started offering “AI analysts” as an add-on feature to support investor decision-making. For example, a US-based firm has partnered with a US-based fintech platform to leverage AI in analyzing large datasets to provide insight into a range of global assets for users. These applications appear to be intended to provide self-directed investors with relevant insights, information, and data to inform their investment decisions.

Standalone AI tools have also been developed to directly support investors. For example, one US-based platform allows investors to enter the details of their financial status such as their debt, real estate, and investment accounts to receive advice on whether their investments match their financial goals and risk tolerance. A ChatGPT plug-in by the same company allows investors to have conversations with an AI-powered chatbot that can make similar suggestions simply by reading a copied-and-pasted version of one’s investing statements. Another US-based company operates as a standalone website to provide investors with AI-driven tools for identifying patterns and trends in the stock market. The company’s first product was a website which featured AI tools to help retail investors gauge how well their portfolio was diversified.

Automation

We define automation as AI applications that automate portfolio and/or fund (e.g., ETF) management for retail investors. Unlike decision support, these AI applications require minimal user input, making investment decisions for investors instead of providing advice and letting the investor decide. There are three key types of AI applications that automate decisions: robo-advisor platforms using AI, AI-driven funds (e.g., ETFs), and standalone AI platforms offering portfolio management.

Robo-advisors have been using algorithms to automate investing for Canadian retail investors since 2014. In Canada, securities regulators require human oversight over investment decisions generated by algorithms.[20] Other countries, including the United States, the United Kingdom, and Australia, appear to permit similar robo-advising platforms to manage client funds with little or no involvement from a human advisor.[21] Within these other markets, there is an emerging trend of robo-advisors using AI. For example, one US-based platform is reportedly using AI to automatically rebalance portfolios and perform tax-loss harvesting for users.
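The report does not describe any provider’s proprietary algorithm. As a simplified, hypothetical sketch of the general idea, automatic rebalancing is often implemented as a drift check: flag any asset whose current weight has moved more than a threshold away from its target weight, then trade back toward the targets. The portfolio values and thresholds below are illustrative only:

```python
def needs_rebalance(holdings, targets, threshold=0.05):
    """Return the assets whose current portfolio weight drifts from the
    target weight by more than `threshold` (e.g., 5 percentage points)."""
    total = sum(holdings.values())
    return [asset for asset, value in holdings.items()
            if abs(value / total - targets[asset]) > threshold]

# Hypothetical portfolio: equities have drifted well above a 60% target
holdings = {"equities": 7000, "fixed_income": 2500, "cash": 500}
targets = {"equities": 0.60, "fixed_income": 0.35, "cash": 0.05}
print(needs_rebalance(holdings, targets))  # prints ['equities', 'fixed_income']
```

Real systems layer considerations such as tax-loss harvesting, trading costs, and regulatory oversight on top of this basic drift logic.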

AI-powered exchange-traded funds (ETFs) use AI to identify patterns and trends in the market, spot investment opportunities, and manage risk. For example, the US-based WIZ Bull-Rider Bear-Fighter Index was described as using AI to analyze market conditions and automatically shift holdings from “momentum leaders” in bull markets to “defensive holdings” during bear markets.[22] The fund has since been liquidated.[23] Other fund examples include Amplify AI Powered Equity ETF (AIEQ), VanEck Social Sentiment ETF (BUZZ), WisdomTree International AI Enhanced Value Fund (AIVI), and Qraft AI-Enhanced U.S. Large Cap Momentum ETF (AMOM).[24]

Finally, some standalone AI platforms offer automated portfolio management. For example, a US-based platform claimed to use AI and human insight to anticipate market movements and automatically manage, rebalance, and trade different account holdings for self-directed investors.

Scams and Fraud

AI systems can also be used to enhance scams and fraud targeting retail investors, as well as generate scams capitalizing on the “buzz” of AI.

AI is “turbocharging” a wide range of existing fraud and scams. In the past two years, there has been nearly a ten-fold increase in the amount of money lost to investment-related scams reported to the Canadian Anti-Fraud Centre (an increase from $33 million in 2020 to $305 million in 2022).[25] One factor contributing to this increase is that scammers are using AI to produce fraudulent materials more quickly and increase the reach and effectiveness of written scams. Large language models (LLMs) increase scam incidence in three ways. First, they lower the barrier to entry by reducing the amount of time and effort required to conduct the scam. Second, LLMs increase the sophistication of the generated materials as typical errors such as poor grammar and typographical errors are much less frequent.[26] Finally, through “hyper-personalization,” LLMs can improve the persuasiveness of communications. For example, scammers may use AI to replicate email styles of known associates (e.g., family).[27]

Beyond applications in email or other written formats, AI has also been used to generate “deepfakes” that deceive investors by impersonating key messengers. A deepfake is a video or voice clip that digitally manipulates someone’s likeness.[28] Deepfake scams have replicated the faces of celebrities, loved ones in distress, government officials, or fictitious CEOs to steal money or personal information from investors.[29],[30] Deepfakes can also be used to bypass voice biometric security systems needed to access investment accounts by cloning investors’ voices.[31] In the future, we may even see instances of deepfakes of investors’ own faces to access investment accounts that use face biometrics.[32],[33]

While many fraudsters use AI to enhance scams, other fraudsters are simply capitalizing on the hype of AI to falsely promise high investment returns. For example, YieldTrust.ai illegally solicited investments on an application that claimed to use “quantum AI” to generate unrealistically high profits. The platform claimed that new investors could expect to earn returns of up to 2.2% per day.[34] These scams tend to advertise “quantum AI” and use social media and influencers to generate hype around their product. For example, the Canadian Securities Administrators issued a 2022 alert for a company called ‘QuantumAI’, flagging that it was not registered in Ontario to engage in the business of trading securities.[35]

 


[6] OECD. (2023). Updates to the OECD’s definition of an AI system explained. https://oecd.ai/en/wonk/ai-system-definition-update

[7] European Securities and Markets Authority. (2023). Artificial Intelligence in EU Securities Markets. https://www.esma.europa.eu/sites/default/files/library/ESMA50-164-6247-AI_in_securities_markets.pdf

[8] Wall, L. D. (2018). Some financial regulatory implications of artificial intelligence. Journal of Economics and Business, 100, 55-63.

[9] Waschuk, G., & Hamilton, S. (2022). AI in the Canadian Financial Services Industry. https://www.mccarthy.ca/en/insights/blogs/techlex/ai-canadian-financial-services-industry

[10] European Securities and Markets Authority. (2023). Artificial Intelligence in EU Securities Markets. https://www.esma.europa.eu/sites/default/files/library/ESMA50-164-6247-AI_in_securities_markets.pdf

[11] Ibid.

[12] Financial Times. (2023). Gary Gensler urges regulators to tame AI risks to financial stability. https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac

[13] Proposed Rule, Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, Exchange Act Release No. 97990, Advisers Act Release No. 6353, File No. S7-12-23 (July 26, 2023) (“Data Analytics Proposal”). https://www.sec.gov/files/rules/proposed/2023/34-97990.pdf

[14] Ibid.

[15] Office of the Superintendent of Financial Institutions (2023). Financial Industry Forum on Artificial Intelligence: A Canadian Perspective on Responsible AI. https://www.osfi-bsif.gc.ca/en/about-osfi/reports-publications/financial-industry-forum-artificial-intelligence-canadian-perspective-responsible-ai

[16] Ibid.

[17] We exclude use cases which do not have unique characteristics or implications specific to retail investing (e.g., chat bots).

[18] In Canada, regulations do not permit firms to provide advice or recommendations without human oversight; this use case was observed in the other jurisdictions.

[19] Individual registrants include financial advisors, investment advisors, and other individuals providing investment advice without any AI assistance.

[20] CSA Staff Notice 31-342 - Guidance for Portfolio Managers Regarding Online Advice. https://www.osc.ca/en/securities-law/instruments-rules-policies/3/31-342/csa-staff-notice-31-342-guidance-portfolio-managers-regarding-online-advice

[21] Ibid.

[22] WIZ. (2023, September 30). Merlyn.AI Bull-Rider Bear-Fighter ETF. https://alphaarchitect.com/wp-content/uploads/compliance/etf/factsheets/WIZ_Factsheet.pdf

[23] Merlyn AI Bull-Rider Bear-Fighter ETF. Bloomberg. https://www.bloomberg.com/quote/WIZ:US

[24] Royal, James. (2024, May 6). 4 AI-powered ETFs: Pros and cons of AI stockpicking funds. Bankrate. https://www.bankrate.com/investing/ai-powered-etfs-pros-cons/

[25] Berkow, J. (2023, September 7). Securities regulators ramp up use of investor alerts to flag concerns. The Globe and Mail. https://www.theglobeandmail.com/business/article-canadian-securities-regulators-investor-alerts/

[26] Fowler, B. (2023, February 16). It’s Scary Easy to Use ChatGPT to Write Phishing Emails. CNET. https://www.cnet.com/tech/services-and-software/its-scary-easy-to-use-chatgpt-to-write-phishing-emails/

[27] Wawanesa Insurance. (2023, July 6). New Scams with AI & Modern Technology. Wawanesa Insurance. https://www.wawanesa.com/us/blog/new-scams-with-ai-modern-technology

[28] Chang, E. (2023, March 24). Fraudster’s New Trick Uses AI Voice Cloning to Scam People. The Street. https://www.thestreet.com/technology/fraudsters-new-trick-uses-ai-voice-cloning-to-scam-people

[29] Choudhary, A. (2023, June 23). AI: The Next Frontier for Fraudsters. ACFE Insights. https://www.acfeinsights.com/acfe-insights/2023/6/23/ai-the-next-frontier-for-fraudstersnbsp

[30] Department of Financial Protection & Innovation. (2023, May 24). AI Investment Scams are Here, and You’re the Target! Official website of the State of California. https://dfpi.ca.gov/2023/04/18/ai-investment-scams/#:~:text=The%20DFPI%20has%20recently%20noticed,to%2Dbe%2Dtrue%20profits.

[31] Telephone Services. TD. https://www.td.com/ca/products-services/investing/td-direct-investing/trading-platforms/voice-print-system-privacy-policy.jsp

[32] Global Times. (2023, June 26). China’s legislature to enhance law enforcement against ‘deepfake’ scam. Global Times. https://www.globaltimes.cn/page/202306/1293172.shtml?utm_source=newsletter&utm_medium=email&utm_campaign=B2B+Newsletter+-+July+2023+-+1

[33] Kalaydin, P. & Kereibayev, O. (2023, August 4). Bypassing Facial Recognition - How to Detect Deepfakes and Other Fraud. The Sumsuber. https://sumsub.com/blog/learn-how-fraudsters-can-bypass-your-facial-biometrics/

[34] Texas State Securities Board. (2023, April 4). State Regulators Stop Fraudulent Artificial Intelligence Investment Scheme. Texas State Securities Board. https://www.ssb.texas.gov/news-publications/state-regulators-stop-fraudulent-artificial-intelligence-investment-scheme

[35] Canadian Securities Administrators. (2022, May 20). Quantum AI aka QuantumAI. Investor Alerts. https://www.securities-administrators.ca/investor-alerts/quantum-ai-aka-quantumai/

Experimental Research

This section describes the methodology and findings of an experiment testing how the source of investment suggestions — AI, human, or a blend of the two (‘blended’) — impacts adherence to that suggestion. We also tested whether any differences in adherence depend on the soundness of the suggestion. The experiment was conducted online with a panel of Canadian adults in a simulated trading environment.

We conducted the experiment in two waves. As described further in the results section, in the first wave of the experiment, the recommended cash allocation in the “unsound” condition was objectively very high (20%), but it was also the level that participants naturally gravitated toward, regardless of the suggestion they received. This meant that adherence overall was higher in the “unsound” condition, which in turn limited our ability to assess the interaction effects between soundness of the advice and source of that advice.

To address this issue, we collected a second wave of data approximately three months later. In this second wave, we kept the suggested cash allocation consistent between the sound and unsound conditions and varied the equity and fixed income suggestions to reflect the soundness of the suggestion. We also collected data for a “control” group that did not receive any suggestion. We did this to understand how participants would allocate their funds in the absence of an investment suggestion, testing our hypothesis about the underlying preference for a larger cash allocation.
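The wave-2 design described above can be summarized as a 2 (soundness) × 3 (source) factorial design plus a no-suggestion control group. As an illustrative sketch only (the condition labels and assignment mechanism below are our own, not the study’s), random assignment to these seven conditions might be represented as:

```python
import random

# Seven conditions: 2 (soundness) x 3 (source) factorial cells,
# plus a control group that receives no suggestion.
SOURCES = ["human", "ai", "blended"]
SOUNDNESS = ["sound", "unsound"]
CONDITIONS = [(s, src) for s in SOUNDNESS for src in SOURCES]
CONDITIONS.append(("none", "control"))

def assign_condition(participant_id, seed="wave2"):
    """Deterministically assign a participant to one of the seven
    conditions, seeded so the assignment is reproducible."""
    rng = random.Random(f"{seed}-{participant_id}")
    return rng.choice(CONDITIONS)
```

Seeding per participant keeps assignment reproducible for auditing; in practice, online panel platforms typically handle randomization and balance checks themselves.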

Conclusion

The applications of artificial intelligence within the securities industry have grown rapidly, with many targeted to retail investors. AI systems offer clear potential to both benefit and harm retail investors. The potential harm to investors is made more pronounced by the scalability of these applications (i.e., the potential to reach a broad audience at rapid speed). The goal of this research was to support stakeholders in understanding and responding to the rapid acceleration of the use of AI within the retail investing space.

The research was conducted in two phases. First, we conducted desk research. This phase included a scan and synthesis of relevant behavioural science literature, examining investors’ attitudes, perceptions, and behaviours related to AI. We also conducted a review of investing platforms and investor-facing AI tools to understand how AI is being used on investing platforms. Our desk research revealed that the existing evidence base on how these tools impact investors is limited, and may not be highly generalizable in the future, given how recent the rapid evolution of AI has been.

From our desk research phase, we identified three broad use cases of AI within the retail investing space:

  • Decision Support: AI systems that provide recommendations or advice to guide retail investor investment decisions.
  • Automation: AI systems that automate portfolio and/or fund (e.g., ETF) management for retail investors.
  • Scams and Fraud: AI systems that either facilitate or mitigate scams and fraud targeting retail investors, as well as scams capitalizing on the “buzz” of AI.

In the second phase of this research, we built an online investment simulation to test investors’ uptake of investment suggestions provided by a human, AI tool, or combination of the two (i.e., a human using an AI support tool). The experiment generated several important insights. We found that participants adhered to an investment suggestion most closely when it was provided by a blend of human and AI sources. However, this difference did not meet our more stringent statistical thresholds, and as such, we cannot be certain that it is not due to chance. That said, this finding adds to existing research on AI and retail investing, as the current literature does not examine “blended” advice. Importantly, we found no discernible difference in adherence to investment suggestions provided by a human or AI tool, indicating that Canadian investors may not have a clear aversion to receiving investment advice from an AI system.

Our research also identified several risks associated with the use of AI within the retail investing space that could lead AI tools to provide investors with advice that is not relevant, appropriate, or accurate. Like suggestions from human sources, suggestions from AI and blended sources had a material impact on the asset allocation decisions of participants, even when the advice was unsound. This underlines the ongoing need to understand the provision of investment recommendations from AI systems, especially given the observed openness among Canadians to AI-informed advice. In particular, there is a need to ensure that algorithms are based on high quality data, that factors contributing to bias are proactively addressed, and that these applications prioritize the best interests of investors rather than the firms who develop them.

Regulators are already proposing approaches to address these risks. For example, in the US, the SEC proposed a new rule that would require investment firms to eliminate or neutralize the effect of any conflict of interest resulting from the use of predictive data analytics and AI that places the interests of the firm ahead of the interests of investors.[48] More broadly, industry regulators and stakeholders should seek to leverage data collected by investing platforms and investor-facing AI tools to investigate the extent to which these tools are resulting in positive or negative outcomes for investors.

The results of our experiment suggest one other critical policy and education implication: investors appear to have a strong bias toward holding more cash in their investment portfolios than most financial experts recommend. This finding suggests that excessive cash allocations should be a focus of educational efforts, with this focus potentially broadened to appropriate risk-taking when investing.

This research also underscores several positive impacts that AI tools may have on investor behaviours. For example, tools could be created to support financial inclusion through increased access to more affordable investment advice. We also see a significant opportunity for AI to be used by stakeholders to improve the detection of fraud and scams.

AI presents a range of potential benefits and risks for retail investors. As this technology continues to advance in capabilities and applications, more research will be needed to support capital markets stakeholders in better understanding the implications for retail investors. This report provides important findings and insights for stakeholders in an increasingly complex environment.

 


[48] U.S. Securities and Exchange Commission. (2023). SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers. Press Release. https://www.sec.gov/news/press-release/2023-140

Authors

Ontario Securities Commission:

Patrick Di Fonzo
Senior Advisor, Behavioural Insights 
[email protected]

Matthew Kan
Senior Advisor, Behavioural Insights 
[email protected]

Marian Passmore
Senior Legal Counsel, Investor Office
[email protected]

Meera Paleja 
Program Head, Behavioural Insights
[email protected]

Kevin Fine
Senior Vice President, Thought Leadership
[email protected]

Behavioural Insights Team (BIT):

Laura Callender
Senior Advisor 
[email protected]

Riona Carriaga
Associate Advisor
[email protected]

Sasha Tregebov
Director
[email protected]

Appendices