How AI Could Trigger the Next Financial Crisis



The potential of artificial intelligence has long hovered at the edge of the cultural horizon. For decades, the promises and pitfalls of AI have been explored in science fiction films and literature, serving as fascinating thought experiments on how digital intelligence might help or hurt humanity.

But over the past two years, the usefulness of AI in the real world has grown exponentially. Generative AI, which creates text, audio, images, and other media in response to text input, is growing in popularity.

Text-to-image AI is being used to win art contests. The Beatles will soon release their final song, using artificial intelligence to piece together old audio. "Deepfake" videos can now convincingly imitate the likenesses and voices of celebrities. And the disruptions brought by OpenAI's ChatGPT and other large language models (LLMs) are too numerous to list. OpenAI claims that GPT-4 can score above the 89th percentile on the bar exam and on the reading and math portions of the SAT. Companies have already incorporated LLMs into their day-to-day operations, applying these new systems to areas such as marketing, customer service, and website creation and design.

Given that AI has made so much progress in such a short time, it is worth considering what risks this technology poses to one of the most fundamental and critical parts of modern society: the financial system.

Could AI trigger the next financial crisis? The SEC chair thinks so

In May, SEC Chair Gary Gensler openly voiced concern about AI's potential to create a crisis, warning that many fintech apps are built on top of one base level, the so-called generative AI level, and that this concentration could contribute to future financial crises. He went on to call the technology a potential systemic risk.

“Gensler’s concerns about the ‘one base level’ of AI are perfectly justified, not just in the financial industry, but across society,” said Richard Gardner, CEO of financial technology firm Modulus.

"From a financial point of view, if one AI operator stands out from the rest and becomes the industry standard most companies use, then a compromise of that system, or a flaw specific to that program, could ultimately have far-reaching consequences, creating a domino-effect catastrophe," Gardner said.

Other AI risks to financial stability

Of course, it would reflect a lack of imagination to limit the potential for AI-induced crises to a single root cause. Even if Gensler's "base level" concerns are successfully avoided, other credible threats loom, such as:

The end of encryption. Encryption is the cornerstone of modern digital security and commerce, allowing users to conduct online banking, e-commerce, messaging, and many other everyday activities that most people take for granted safely and securely. AI could eventually play a role in breaking encryption and permanently undermining trust in the internet, though such a feat may also require other technologies, such as quantum computers.

Fake news and deepfakes. AI's audio generation capabilities are already good enough to create convincing imitations of artists like Kanye West and The Weeknd. Political use of the technology is still in its early stages: in April, Republicans released an ad attacking President Joe Biden that used entirely AI-generated imagery. The ability to create and disseminate fake media about world leaders and market-moving events is now within reach.

Malicious actors have used this playbook before, without the help of AI. On April 23, 2013, the Associated Press Twitter account was hacked and used to tweet the "news" that two explosions had rocked the White House and injured then-President Barack Obama. The market briefly crashed, but quickly recovered once the tweet was proven false.

Geopolitical fake news isn't the only thing that could trigger a selloff. Nefarious actors could use AI-generated content and social media to flood the market with fake negative articles and rumors about individual companies.

Massive phishing attempts. Technologies like ChatGPT are already good at tasks such as drafting boilerplate marketing emails, so bad actors can use AI to craft ever more convincing "phishing" emails: spam that impersonates a financial institution and asks for the victim's sensitive personal information and passwords. This could have devastating financial consequences for no small number of people and breed distrust among large sections of the population.

Errant AI trading systems. "Like any computerized trading system, AI-driven trading algorithms can be susceptible to errors and bugs that, if not properly identified and controlled, could theoretically snowball into flash crashes and erroneous trading decisions," says Michael Ashley Schulman, partner and chief investment officer at Running Point Capital Advisors, a multifamily asset management firm in El Segundo, California.
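The snowball dynamic Schulman describes can be sketched with a toy model (an illustration only, not a depiction of any real trading system): when many identical algorithms all sell in response to the same downward price move, a small dip can amplify itself into a crash.

```python
# Toy model of an algorithmic feedback loop (illustrative only):
# identical sell-on-dip algorithms react to the same signal at once,
# so selling pressure scales with how many copies are running.

def simulate(price: float, shock: float, n_algos: int, steps: int) -> list[float]:
    """Return the price path after an initial downward `shock`
    when `n_algos` copies of the same sell-on-dip rule react to it."""
    prices = [price, price - shock]
    for _ in range(steps):
        last_move = prices[-1] - prices[-2]
        if last_move < 0:
            # Each algorithm sells a fixed fraction of the last move;
            # with enough copies the combined reaction exceeds the
            # move that triggered it, and the decline compounds.
            prices.append(prices[-1] - 0.1 * n_algos * abs(last_move))
        else:
            prices.append(prices[-1])
    return prices

few = simulate(100.0, 0.5, n_algos=2, steps=10)
many = simulate(100.0, 0.5, n_algos=15, steps=10)
print(f"2 algos:  {few[-1]:.2f}")   # dip is absorbed, price stabilizes
print(f"15 algos: {many[-1]:.2f}")  # dip snowballs into a crash
```

With two algorithms the reaction to each move is smaller than the move itself, so the selloff dies out; with fifteen, each reaction is larger than the move that triggered it, and the same half-point shock cascades.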

Vulnerabilities in systemically important firms. If a major investment bank like JPMorgan Chase (NYSE: JPM) or Goldman Sachs Group (GS) were disrupted by AI trading technology and lost hundreds of billions of dollars, the impact could quickly ripple through the rest of the economy.

Existing Uses of AI in Stock Markets

Nightmare scenarios aside, it’s important to remember that “artificial intelligence” is a broad term that covers a lot and has already been widely adopted on Wall Street.

"Many participants in public equity markets already use highly complex systems that rely on AI and machine learning models to develop trading strategies," said Vince Lynch, founder and CEO of IV.AI, adding that the market is dominated by incredibly sophisticated, data-driven hedge funds.

In other words, the trading capabilities unlocked by AI have been around for some time, and the results at some firms that have adopted this technology have been genuinely impressive.

Famously, the Medallion Fund at mathematician Jim Simons' hedge fund Renaissance Technologies uses quantitative techniques that leverage massive data sets and rule-based algorithmic trading, and it has routinely dominated the market. Between 1988 and 2018, the Medallion Fund's average annual return was 63.3%.
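To get a feel for what a 63.3% average annual return implies, a back-of-the-envelope compounding calculation helps (a simplification that ignores fees, taxes, and the fund's real-world capacity limits):

```python
# Back-of-the-envelope compounding: what a 63.3% average annual
# return would imply over a 30-year span (1988-2018), ignoring
# fees, taxes, and capacity constraints.

def compound(initial: float, annual_return: float, years: int) -> float:
    """Grow `initial` at `annual_return` (e.g. 0.633 for 63.3%) for `years` years."""
    return initial * (1 + annual_return) ** years

final = compound(1_000, 0.633, 30)  # $1,000 left to compound for 30 years
print(f"${final:,.0f}")
```

Compounded naively, $1,000 at that rate grows into the billions over three decades, which is why such returns cannot scale to unlimited capital.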

Bridgewater Associates, the world’s largest hedge fund, also uses complex algorithmic rules as the basis for many investment decisions.

Outside of hedge funds, some robo-advisors also use AI to make asset allocation decisions, and a few funds, such as ETFMG's AI-Powered Equity ETF (AIEQ) and the Qraft AI-Pilot US Large Cap Dynamic Beta and Income ETF (AIDB), currently use AI to make their investment decisions.

What can be done to address the risks?

AI has gone through various iterations and has been part of market dynamics for some time, and whatever its other effects, it has not yet destroyed the financial system. That said, automated trading programs have posed significant risks to markets before: both the 1987 crash and the 2010 flash crash were partly caused by such programs. But it is not the trading algorithms themselves that keep Gensler up at night; it is the other ways AI could contribute to crises, some of them fundamentally unpredictable.

Gardner said the best way to proactively address these risks is through regulatory guidelines, and that if Congress and other regulators do not act, it will very likely fall to the SEC to create strict guidelines on the use of AI in the financial sector.

Of course, limiting the scope of supervision to the financial sector risks missing the forest for the trees. "A robust risk management framework and stress testing procedures should be put in place to assess the potential systemic impact of AI technology," Schulman said, noting that he worries about errors in widely shared AI models propagating across firms.

While these approaches may ultimately reduce risk, developing and implementing a regulatory framework in such a rapidly changing industry could prove a daunting task. But at least one regulator is already thinking about it.


