The regulator has also proposed that a “regulatory light” framework be adopted for uses of AI/ML in the securities market other than business operations that may directly affect customers.
The proposed “guiding principles aim to optimize profits and to minimize the potential risks associated with integrating AI/ML-based applications into the securities market to protect investors, market integrity and financial stability,” Sebi said in a consultation paper.
Currently, market participants use AI/ML primarily for advisory and support services, risk management, client identification and monitoring, pattern recognition, internal compliance, and cybersecurity.
“While AI/ML has the potential to improve productivity, efficiency and outcomes, it is also important to manage these systems responsibly as they create or amplify certain risks that can affect financial market efficiency and negatively affect investors,” Sebi said.
Sebi therefore proposed high-level principles to guide market participants in implementing reasonable procedures and control systems for the oversight and governance of AI/ML applications and tools. The guidelines were drafted by a working group constituted by Sebi after studying existing AI/ML guidance in India and worldwide.
As part of the proposal, the Working Group recommended that market participants using AI/ML models should have internal teams with the appropriate skills and experience to monitor the performance, efficacy and security of deployed algorithms throughout their lifecycle, and to ensure the auditability and interpretability of such models.
Additionally, teams need to establish exception and error handling procedures related to AI/ML-based systems.
They must also establish a fallback plan in case an AI/ML-based application fails owing to technical issues or unexpected disruptions.
Sebi noted that market participants' use of AI/ML models in business operations, such as selecting trading algorithms, managing assets or portfolios, and providing advisory and support services, could have a direct impact on their customers.
Market participants must properly test and monitor AI/ML-based models to continuously validate their results. It has also been proposed that AI/ML models be tested in an environment segregated from the live environment before deployment, to verify that they behave as expected, including in stressed market conditions.
Market participants must also maintain appropriate documentation for all models and store input and output data for at least five years.
“As AI/ML systems rely on data collection and processing, market participants must have a clear policy on data security, cybersecurity and data privacy regarding the use of AI/ML-based models,” Sebi said, adding that information regarding technical glitches and data breaches must be communicated to IT and other relevant authorities.
The Securities and Exchange Board of India (Sebi) is seeking public comments on the proposals until July 11.
