Wall Street Research and ChatGPT: Firms Face Legal Risks Over Transparency and Client Relationships

Applications of AI


Regulators want to know how ChatGPT and AI will change the way Wall Street investment firms generate market analysis and other reports.

One area of Wall Street ripe for disruption by artificial intelligence (AI) is investment research, which churns out a large volume of reports every day from many analysts. But as Wall Street investment banks and other financial services firms consider applying ChatGPT and other AI applications to research content, they are likely to find themselves in a murky area where the technology appears to be running ahead of the law, and they would do well to pause and consider some thorny legal risks.

There seems to be no doubt that AI will cause upheaval among U.S. investment banks and brokerage firms. In a recent report, Goldman Sachs estimated that 35% of jobs in business and financial operations are at risk from so-called generative artificial intelligence, which can produce novel, human-like output rather than simply describing or interpreting existing information. ChatGPT is, in fact, a generative AI product developed in OpenAI's lab.

Goldman Sachs’ analysis did not delve into the specific impact of AI on investment research, but Joseph Briggs, one of the report’s authors, said that equity research, at least on an employment-weighted basis, is “a little more highly exposed.”

ChatGPT and Fedspeak

There are many questions about the extent to which AI applications can replace human input and analysis, but new academic research suggests that ChatGPT can perform certain Wall Street tasks much like an experienced analyst.

A new study from the Federal Reserve Bank of Richmond used the Generative Pre-trained Transformer (GPT) model to analyze the technical language the Federal Reserve uses to communicate its monetary policy decisions. Wall Street experts whose job it is to predict future monetary policy decisions, also known as Fed watchers, apply a combination of technical and interpretive skills in reading the often opaque and ambiguous language that Fed officials use in communicating with the public.

The GPT model “shows strong performance in classifying Fedspeak sentences, especially when fine-tuned,” the analysis said, adding: “Despite its impressive performance, GPT-3 is not foolproof. It can still misclassify text and miss nuances that a human evaluator with domain expertise could catch.”
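To make the task concrete, here is a minimal, hypothetical sketch of how a Fedspeak sentence-classification task might be framed as a few-shot prompt for a generative model. This is not code from the Richmond Fed study; the stance labels, example sentences, and function names are illustrative assumptions.

```python
# Hypothetical sketch: framing Fedspeak classification as a few-shot prompt.
# The labels and example sentences below are illustrative, not the study's.

LABELS = ["dovish", "mostly dovish", "neutral", "mostly hawkish", "hawkish"]

# A few labeled examples the model sees before the sentence to classify.
FEW_SHOT_EXAMPLES = [
    ("Further gradual increases in the target range will be appropriate.",
     "hawkish"),
    ("The Committee will be patient as it determines future adjustments.",
     "neutral"),
    ("Risks to the outlook warrant lower rates for an extended period.",
     "dovish"),
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot classification prompt for a generative model."""
    lines = [
        "Classify each Federal Reserve sentence by monetary-policy stance.",
        f"Allowed labels: {', '.join(LABELS)}.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}")
        lines.append(f"Stance: {label}")
        lines.append("")
    # The model is asked to complete the final "Stance:" line.
    lines.append(f"Sentence: {sentence}")
    lines.append("Stance:")
    return "\n".join(lines)

prompt = build_prompt(
    "Inflation remains elevated and further tightening may be needed.")
print(prompt)
```

In practice, a prompt like this would be sent to a hosted model's completion endpoint, and the study's fine-tuned variant would instead train the model directly on labeled Fedspeak sentences rather than relying on in-context examples.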

Fed watchers, of course, are also known to err in judging future monetary policy decisions. The question that arises is how ChatGPT and similar technologies can be applied to more nuanced Wall Street tasks, such as corporate earnings forecasts and more fundamental industry research.

The Law Lags AI Innovation

How should investment banks and other investment firms approach the use of ChatGPT in their research activities and communications with clients? Carefully.

Mary Jane Wilson-Bilik, a partner at the Washington, D.C. law firm Eversheds Sutherland, said that while there is much discussion about AI, “there are relatively few laws specific to AI and ChatGPT.”

That’s not to say regulation isn’t on its way, though. In late April, four U.S. federal agencies released a joint statement warning of a “growing threat” from rapidly proliferating artificial intelligence applications, citing a range of potential harms. The agencies called on companies to actively monitor their use of AI technology, including ChatGPT and other “rapidly evolving automated systems.”

The Securities and Exchange Commission has indicated plans to issue a proposed rule on decentralized finance tools this year, but it is unclear whether the proposal will require firms to disclose whether AI or ChatGPT was used in providing advice and reports to customers.

Given the regulatory void around rules specific to Wall Street research, Wilson-Bilik cautioned firms over how they use and disclose AI and ChatGPT in their research products. “There is not yet a legal requirement to tell clients that AI was used to create reports and analyses, but that would be best practice,” she said. “Some companies have taken great care to add language to their online privacy policies regarding their potential use of AI.”

While clients currently do not have a legal “right” to know whether AI has been used to produce research reports, “there could be risks if clients are misled or deceived as to how AI is being used,” Wilson-Bilik explained. “It would be a problem under anti-fraud laws if a company uses AI in a misleading or deceptive manner, for example, by implying that the results are human-generated when they are hybrid or mostly AI-generated.”

Legal experts also warn that AI tools should be checked for accuracy and bias, as a lack of robust guardrails can result in regulatory action and lawsuits.

Opinions expressed are those of the author. They do not reflect Reuters News’ commitment to integrity, independence and freedom from bias under its Trust Principles. The Thomson Reuters Institute is owned by Thomson Reuters and operates independently of Reuters News.

Henry Engler

Henry Engler is North American Regulatory Intelligence Editor, based in New York. He joined Thomson Reuters after ten years in the financial industry, where he served in executive and management-consulting roles overseeing compliance-related and other projects, including Dodd-Frank swaps reporting requirements, TRACE reporting, data requirements, tax and accounting matters, AML systems, and employee transaction monitoring. Firms he has worked for include IBM Global Business Services, Morgan Stanley and RBS Capital Markets. Before those positions, Engler, an economist by training, was a financial journalist and business strategy officer at Reuters. He edited a book on European Monetary Union and the future of banking.


