Generative AI has captured the public's attention and promises to transform the way we live and work. However, the technology comes with important cybersecurity and privacy considerations for organizations. This alert details four of the most important considerations and outlines steps to address them.
- Growing cyber threats. Cyber attackers may use generative AI to further their schemes. The potential uses are wide-ranging: creating malware that exploits previously unknown (or "zero-day") vulnerabilities, building malicious websites that look legitimate, personalizing phishing emails, generating deepfake content, and overwhelming security systems. Additionally, an organization's own AI models can themselves become targets of abuse.
Mitigating this risk. Organizations must focus more than ever on cybersecurity. Start with the basics: developing and testing incident response plans, conducting risk assessments that account for these emerging threats, and making cybersecurity an enterprise-wide priority. As part of this preparatory work, organizations should consider identifying and taking action against fraudulent domains masquerading as legitimate ones (see the sketch below). For the security of your own AI models, focus on regularly patching third-party models, fixing bugs in internal models, and training employees on acceptable use.
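For illustration only, the sketch below shows one lightweight way a security team might screen newly observed domains for lookalikes of an organization's legitimate domains using string similarity. The domain names, threshold, and data sources are hypothetical assumptions; real brand-protection programs typically rely on certificate-transparency monitoring and dedicated services rather than a script like this.

```python
# Hypothetical sketch: flag observed domains that closely resemble an
# organization's legitimate domains (a common sign of lookalike or
# typosquatted sites). All names and thresholds here are illustrative.
from difflib import SequenceMatcher

LEGITIMATE_DOMAINS = ["examplecorp.com"]   # placeholder for your real domains
SIMILARITY_THRESHOLD = 0.85                # illustrative cutoff, tune as needed

def flag_lookalikes(observed_domains):
    """Return (observed, legitimate, score) tuples for suspiciously similar names."""
    flagged = []
    for domain in observed_domains:
        for legit in LEGITIMATE_DOMAINS:
            score = SequenceMatcher(None, domain.lower(), legit).ratio()
            if domain.lower() != legit and score >= SIMILARITY_THRESHOLD:
                flagged.append((domain, legit, round(score, 2)))
    return flagged

if __name__ == "__main__":
    # In practice, candidate domains might come from certificate-transparency
    # logs or newly-registered-domain feeds.
    print(flag_lookalikes(["examp1ecorp.com", "unrelated.org", "examplecorp.co"]))
```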
- General privacy compliance. States pass new privacy laws almost every month. These laws regulate, among other things, how consumer data is collected, processed, and shared, and they confer new rights on consumers. They frequently impose disclosure and consent requirements, opt-out rights, and contractual obligations. Generative AI affects compliance with these laws, particularly where the tools process consumer personal information for automated decision-making or where information fed into the tools may be "sold" or "shared" within the meaning of state privacy laws. Additionally, some regulators, such as the National Labor Relations Board, have begun to issue guidance on the use of AI tools for workplace surveillance, and New York City has passed a law restricting the use of automated decision-making tools in hiring and promotions unless companies take certain anti-bias measures, such as bias audits.
Mitigating this risk. Addressing this risk requires understanding the tools involved, the underlying data and its sources, the applicable laws, and the potential impact of the tools on consumers. Compliance may begin with a review of the terms of use and other agreements of the companies whose AI products may touch an organization's data. Additional compliance steps may include preparing notice and consent mechanisms, conducting risk assessments and testing, honoring opt-out rights, maintaining proper records, developing means to review and override the tools' decisions, and implementing appropriate contractual terms with vendors and service providers. If an AI product touches employee data, or if your company is subject to industry-specific regulation (healthcare, government contracting, financial services, etc.), consider whether regulatory guidance applies and check your practices against it.
- Avoiding blind spots. Businesses should also consider the possibility that their vendors are using AI tools. Contracts often obligate the company itself to provide necessary notices and secure consumer consent for a vendor's processing of personal information the company provides. If a vendor has not disclosed its use of AI tools, the company may not appreciate the full extent of that obligation or its potential liability to the individuals whose personal information the vendor is processing.
Mitigating this risk. Before entering into a contract, companies should understand the types of processing and tools the vendor uses and any opt-out rights that have already been exercised (e.g., opting out of having data used to improve the vendor's AI tools). As a first step, vendor due diligence could include questions about the vendor's use of AI tools to process the company's personal information, alongside other data-processing questions.
- Avoiding deceptive trade practices. The Federal Trade Commission, state attorneys general, and plaintiffs' attorneys are focused on pursuing allegations of deceptive trade practices, particularly alleged gaps between an organization's privacy policy and its actual privacy practices. In a recent example, the FTC alleged that online counseling service BetterHelp shared sensitive health information with third-party advertising platforms in violation of the company's privacy policy. Companies that use generative AI to process data in ways inconsistent with their privacy policies and other public statements may face enforcement actions and litigation.
Mitigating this risk. Take a multidisciplinary approach when exploring and implementing generative AI, working with stakeholders across the organization to understand current practices and tools, reduce risk, and increase transparency. A possible first step is to inventory known uses of generative AI tools within the organization, review the associated agreements and terms of use, and compare those findings against the company's actual privacy policy disclosures.
Generative AI presents both opportunities and risks for organizations, and cybersecurity and privacy risks are among the most salient. By identifying and addressing the risks above, organizations can take full advantage of this compelling technology.
