AI Lens in the Securities Market

By Sandeep Parekh

In one of the first steps taken by Indian regulators to address artificial intelligence (AI), last month the Securities and Exchange Board of India (SEBI) issued a consultation paper seeking feedback on proposals to regulate the use of AI/machine learning (ML) in the securities market.

As defined in the consultation paper, AI refers to technology that enables machines to “simulate human decision-making to solve problems.” ML is a subset of AI and refers to the automated learning of rules from data to analyse related data and perform tasks.

Currently, SEBI requires market infrastructure institutions such as stock exchanges, clearing corporations and depositories, as well as other intermediaries such as mutual funds, to report the AI/ML systems they have adopted, giving the regulator insight into their use cases.

Use cases and governance principles

SEBI notes that AI/ML is used for a variety of purposes. For example, stock exchanges leverage AI for sophisticated surveillance and pattern recognition, while brokers deploy it for product recommendations and algorithmic order execution. AI is also used for customer support.

Based on who creates them, AI/ML systems can be divided into two categories: those built in-house and those sourced from a third party. It is also important to remember that AI/ML systems can be integrated with existing systems. Furthermore, AI capabilities are expanding rapidly, allowing models to make increasingly accurate financial forecasts, generate model portfolios and, perhaps before long, even act as fund managers.

In a forward-looking approach, SEBI's consultation paper proposes guidelines anchored in five core principles: a model governance framework, investor protection, testing mechanisms, fairness and bias, and data/cybersecurity.

Importantly, SEBI proposes that services provided by third parties be deemed to be provided by the intermediary engaging them, making the intermediary liable for any violations of securities laws. The paper also extends the applicability of investor grievance mechanisms to AI/ML systems.

Towards a safer and smarter framework

SEBI has proposed a “regulatory-lite framework” that seeks to distinguish AI/ML systems that affect clients from systems used for internal business operations. Even if a system is outsourced, the intermediary remains responsible for it. The real challenge for intermediaries is building the sophisticated internal teams, robust audit trails and technical capability needed to manage such systems. In this context, it is worth considering whether SEBI should revisit this approach and borrow a leaf from its own playbook.

In February, the regulator introduced a revised framework for safer participation by retail investors in algo trading. Considering the several entities offering algo strategies to customers and the resulting risks, SEBI decided to introduce a new class of regulated entities: the algo provider. Although not directly regulated by SEBI, algo providers must act as agents of stock brokers and be registered and empanelled with the stock exchanges.

A similar approach could be evaluated for AI/ML systems through a new class of entities (AI providers). Although SEBI need not regulate such entities directly, this could improve monitoring and understanding of the evolving AI industry and its nexus with the securities market. Furthermore, liability could be directed to the person or entity actually responsible, especially where the intermediary played no role in the violation. The alternative invites cascading litigation, with investors suing intermediaries, who in turn attempt to recover losses from third-party vendors (AI providers). While it has been proposed that the investor grievance mechanism be extended to AI/ML systems, introducing a new class of semi-regulated players into the securities market could have a positive impact by fostering growth in a transparent and accountable way, with appropriate oversight.

SEBI's proposal includes testing requirements both at launch and on an ongoing basis to ensure that AI/ML systems function as expected. Here, a key reform that could drive growth is to allow players access to a regulatory sandbox framework in which to test their products and systems. This would increase scrutiny of such systems and allow SEBI to work with emerging players in the AI industry. It would also yield important data points and support the development of best practices as the technology evolves. Such a framework would help SEBI become a proactive regulator rather than one that merely reacts to technological developments, and would be a first step towards transforming SEBI into an authority whose regulatory framework lays the foundation for further innovation and progress. In this way, SEBI can become an enabler of new technologies rather than an obstacle to them.

The paper also highlights the potential dangers of AI. The regulator explicitly flags the threat of generative AI being used for market manipulation through deepfakes and misinformation, as well as the risk of the industry relying too heavily on a few dominant AI providers. The identification of concentration risk is particularly salient, as unregulated technology providers risk becoming systemic chokepoints in the industry. Furthermore, with only a handful of foundation models available, everyone may end up using the same AI models trained on the same data, and the risk of synthetic data loops emerges, which can give rise to correlated behaviour and herding.

There is much to praise in SEBI's proposal for a principles-based, regulatory-lite framework, reflecting its intention to accommodate the technological innovation that will shape financial markets in the future. At the same time, there are steps it can take to design a regulatory framework that is ahead of the curve and supports growth and innovation, rather than merely regulating them.

Co-authored with Parker Caria, senior associate, and Varun Matlani, associate, at Finsec Law Advisors.

The writer is Managing Partner, Finsec Law Advisors.

Disclaimer: The views expressed are personal and do not reflect the official position or policy of FinancialExpress.com. Reproducing this content without permission is prohibited.
