Artificial intelligence and machine learning: avoiding the pitfalls

The potential benefits of artificial intelligence (AI) and machine learning (ML) for small and medium-sized businesses are undeniable, but their rapid adoption also raises legal concerns. Bias in AI algorithms, data privacy violations, and questions of intellectual property ownership all create potential pitfalls.

To overcome these challenges, companies can take proactive steps such as vetting potential AI vendors, setting standards for data collection, implementing robust data security measures, and defining and enforcing protocols for all AI- and ML-powered processes and systems. These steps can help companies protect their legal interests, maintain public trust, and fulfill their corporate responsibilities while leveraging the power of AI and ML.

Rapid evolution of AI and ML

AI and ML have grown exponentially in the past few years, rapidly redefining the technological landscape and how people interact with one another. ChatGPT has already become as pervasive in business and personal use as the smartphone, a technology that transformed how people work and live. According to Statista, the AI market is expected to exceed $184 billion in 2024, up $50 billion from 2023, and the U.S. market is expected to exceed $826 billion by 2030.

This massive market shift has companies considering potential use cases for generative AI and ML in their products, services, and operations. But alongside this enthusiasm there are also real concerns about potential privacy violations from the personal data that may be used to train these new algorithms, and the need for controls to protect that data and proprietary information.

Setting parameters

As companies introduce AI and ML into their business plans, they can avoid these potential pitfalls by focusing on a few key issues, starting with AI and ML bias. Bias occurs when skewed assumptions or unrepresentative data creep into algorithm programming, an issue that often arises when certain demographic data dominates the training set. Amazon, for example, abandoned an AI-based recruiting tool in 2018 after finding that the algorithm, in development for four years, favored male candidates over female candidates.
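
Although this article's focus is legal, the underlying check is straightforward to illustrate. The following is a minimal sketch in Python, using pandas and entirely hypothetical column names (gender, hired), of how a team might surface this kind of bias in historical training data by comparing selection rates across groups; real audits are far more rigorous.

```python
import pandas as pd

# Hypothetical historical hiring data; column names are illustrative only.
df = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "F", "M", "F", "M", "M", "F"],
    "hired":  [1,   1,   0,   0,   1,   1,   0,   1,   0,   0],
})

# Selection rate (fraction hired) for each group.
rates = df.groupby("gender")["hired"].mean()
print(rates)

# Disparate impact ratio: unprivileged group's rate / privileged group's rate.
# The "four-fifths rule" used in U.S. employment contexts flags ratios below 0.8.
ratio = rates["F"] / rates["M"]
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: training data may encode hiring bias against this group.")
```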

In May 2020, the ACLU, the ACLU of Illinois, and the law firm Edelson PC sued facial surveillance company Clearview AI for violating the privacy rights of Illinois residents under the Illinois Biometric Information Privacy Act (BIPA). Clearview AI had used facial recognition software to “scrape,” or extract, over 10 billion facial images from online photos, and had planned to sell its technology to private companies before the lawsuit was filed. In 2022, the parties reached a settlement that permanently prohibits Clearview AI from making its facial photo database available, whether free or for profit, to most businesses and other private entities nationwide.

Nathan Freed Wessler, deputy director of the ACLU's Speech, Privacy & Technology Project, said the settlement “demonstrates that strong privacy laws can provide real protections against misuse. Clearview can no longer treat people's unique biometric identifiers as an unlimited source of profit. Other companies should take note, and other states should follow Illinois's lead and enact strong biometric privacy laws.”

It's difficult to avoid inadvertently building bias into algorithms when creating them. Still, companies can remain vigilant when processing data, for example by applying rapid evaluation methods (REMs) and using tools specifically designed to detect bias in algorithms, such as IBM watsonx.ai. Tarun Chopra, vice president of product management for data and AI software at IBM, said of watsonx.ai: “Data is at the heart of every AI use case, but it's even more important to access and process the data needed to get the best results from your AI models. Enterprises need to bring computing power and AI models to where enterprise data is created, processed and consumed to support AI use cases, including both traditional AI and machine learning (ML) workloads and generative AI.”
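
watsonx.ai is a commercial platform, and its internals aren't shown here. As a generic illustration of the same idea, the sketch below uses scikit-learn on synthetic data (the features and the binary sensitive attribute are invented for the example) to compare a trained model's favorable-prediction rates across groups, one of the simplest bias signals such tools report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two features plus a binary sensitive attribute (illustrative only).
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)  # 0 = group A, 1 = group B
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([X, group]), y)
preds = model.predict(np.column_stack([X, group]))

# Compare the rate of favorable predictions for each group.
for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"Group {g}: favorable prediction rate = {rate:.2f}")

# A large gap between these rates is one signal that the model's outputs
# are biased and warrant deeper review before deployment.
```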

Advance intellectual property agreements

In addition to bias concerns, companies can also focus on managing ownership rights. This includes securing copyrights and having clear contracts in place to avoid legal disputes. If a company is willing to be transparent about its methods, it is not difficult to implement contracts that specify how AI or ML systems are trained on data and that define opt-in or opt-out processes.
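
One way such contract terms translate into engineering practice is a consent flag enforced in the data pipeline itself. The sketch below is a minimal, assumed design rather than a standard mechanism; the Record type and training_consent field are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    text: str
    training_consent: bool  # Captured via the contract's opt-in process.

def training_corpus(records: list[Record]) -> list[str]:
    """Return only the data that users have consented to for model training."""
    return [r.text for r in records if r.training_consent]

records = [
    Record("u1", "support ticket text...", training_consent=True),
    Record("u2", "chat transcript...", training_consent=False),  # opted out
]
print(training_corpus(records))  # only u1's text is eligible
```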

It is also beneficial for organizations to be aware of legal standards already in place regarding regulations and compliance. For example, in the healthcare sector, there is a compliance checklist for the Health Insurance Portability and Accountability Act (HIPAA) that requires companies to appoint a HIPAA Privacy Officer “responsible for developing, implementing, and enforcing HIPAA-compliant policies.” Such policies also ensure that data transmitted electronically is encrypted or anonymized by replacing key identifiers with other values.
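
The identifier-replacement step described above is often implemented as pseudonymization. Below is a minimal sketch, assuming a keyed hash (HMAC-SHA256) and hypothetical field names; it is an illustration, not a HIPAA-certified de-identification procedure.

```python
import hashlib
import hmac

# Secret key held separately from the data (e.g., in a key management service).
PSEUDONYM_KEY = b"example-secret-key"  # placeholder; never hard-code in production

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-123456", "diagnosis_code": "E11.9"}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)  # the same patient always maps to the same token, but the MRN is hidden
```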

While the United States does not yet have comprehensive legislation directly regulating AI, rules under the California Consumer Privacy Act (CCPA) and the EU's General Data Protection Regulation (GDPR) already limit how companies can use and share data. Additionally, the proposed Algorithmic Accountability Act of 2023 would require companies to assess the impact of the AI systems they use and sell, and would provide new transparency about how and when such systems are used.

The road to the future

As AI and ML technologies expand, increased regulation will inevitably be part of the future landscape, especially regarding the data used to train these systems. Companies will likely see rules under which simply scraping and processing information from the internet no longer goes unpenalized. Companies should also expect additional regulation around the ethics of AI and ML, requiring validation before these tools can be used, along with necessary certifications and contracts containing disclaimers. Additionally, mechanisms should be in place to ensure human oversight when governing these systems.
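
One common pattern for the human-oversight mechanisms mentioned above is to auto-approve only high-confidence model decisions and route the rest to a person. The sketch below is a hypothetical illustration; the threshold and review queue are assumptions, not a prescribed mechanism.

```python
from queue import Queue

REVIEW_THRESHOLD = 0.85  # illustrative; set per use case and risk tolerance
human_review_queue: Queue = Queue()

def decide(case_id: str, model_score: float) -> str:
    """Auto-approve only confident decisions; escalate the rest to a human."""
    if model_score >= REVIEW_THRESHOLD:
        return "auto-approved"
    human_review_queue.put(case_id)  # a person makes the final call
    return "pending human review"

print(decide("case-001", 0.97))  # auto-approved
print(decide("case-002", 0.60))  # pending human review
```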

To avoid legal pitfalls when implementing AI and ML tools, companies must remain vigilant about complying with data privacy laws and regulations. It is also important to understand that this is a rapidly evolving field, with new rules and regulations constantly emerging. Companies can pursue appropriate legal strategies, such as investing in algorithmic bias detection and ensuring robust licensing and copyright terms in contracts when acquiring data. Strong protocols will enable organizations leveraging these powerful tools to ensure the safety and validity of training models as the technology advances.


