While tech professionals grapple with the increasingly real threat posed by artificial intelligence (AI), portfolio management businesses are quietly contemplating their own survival plans.
OpenAI’s launch of conversational chatbot ChatGPT last November has left the investment community questioning its own future.
Goldman Sachs estimates that 300 million jobs could be lost over the next decade, but it also predicts that a combination of significant labor cost savings and new job creation raises the likelihood of a productivity boom.
Cleantech investor Paul Sandhu, founder of Prometheus Corporation and former head of multi-asset quant at BNP Paribas in Hong Kong, told AsianInvestor: “What is happening in AI right now is going to fundamentally change the way we do most things.”
To date, the use of AI in fund management has largely been related to machine learning (ML) applications that improve the effectiveness of algorithms in the investment process, or AI techniques that process big data for investment insights.
Early adopters
Dutch pension fund manager APG and Chinese insurance group Ping An, for example, were early to introduce AI to collect and analyze data in pursuit of their environmental, social and governance (ESG) investment goals.
“As machine learning methodologies rapidly become mainstream, the industry’s need expands from seeking proof that AI and big data work to seeking action plans that can support corporate strategies,” according to the CFA Institute.
The institute has produced a handbook aimed at providing tools for investors and policymakers to assess AI and big data techniques and incorporate them into best practice.
Asset managers are already collecting big data and incorporating it into their investment processes. In particular, as ESG-oriented strategies become more mainstream, managers are looking for ways to assess the ESG-related activities of the companies they invest in and to monitor progress towards ESG targets.
Global asset manager Robeco, for instance, is concerned that civil rights and freedoms, including privacy, are being threatened by surveillance in socially sensitive situations such as applying for a job or housing.
“Safety and accountability often lag behind real technological innovation, and our engagement with companies aims to encourage them to catch up,” the company said in an online post.
Alignment issues
The downside of AI is that it can manipulate markets, tamper with data and code malicious programs in ways that humans cannot detect. Tellingly, it is the people at the forefront of bringing AI to the world who state most clearly that it is a threat.
OpenAI’s own safety and governance personnel are candid about the risks.
In an academic paper on the alignment problem, OpenAI governance researcher Richard Ngo and fellow computer scientists Lawrence Chan and Sören Mindermann argued that if AI systems are trained in ways similar to today’s most capable models, they could learn internally represented goals that receive higher rewards and generalize beyond their training distribution, and pursue those goals using power-seeking strategies.
“The deployment of misaligned AI could irreversibly undermine human control over the world,” they wrote.
Pause for thought
Hence the open letter, signed by Elon Musk and computer scientists from Apple, Amazon, Google and many other tech giants, calling for a pause in the development of powerful AI systems.
“Powerful AI systems should only be developed if we are confident that their effects are positive and the risks are manageable,” the letter said.
No such guarantees can be made at this time, but not everyone believes the threats outweigh the opportunities.
“I am actually very excited. I can understand why people say we need to pause, but it’s basically not possible,” said Sandhu. The biggest risk for investors, he said, is that fund managers will lose their ability to innovate.
“Regulating and constraining money management the way we do prevents a lot of errors from happening, but it really limits innovation.
“This AI knows nothing about any constraints, and has no experience other than the data it collects. That’s where I think things get a little hairy.
“Although AI can perform management tasks much better, it is on the innovation side that the real risk of humans becoming obsolete lies.”
© Haymarket Media Limited. All rights reserved.