The Ethics of Generative AI: Our 8 Biggest Concerns

AI For Business


Like other forms of AI, generative AI raises ethical issues around data privacy, security, policy, and the workforce. Generative AI technologies can also create a new set of business risks, including misinformation, plagiarism, piracy, and harmful content. Lack of transparency and the potential for worker layoffs are additional issues that companies need to address.

These risks require a holistic approach, including a well-defined strategy, good governance, and a commitment to responsible AI, said Tad Roselund, managing director and senior partner at consulting firm BCG. A corporate culture that embraces the ethics of generative AI should consider eight key issues.

1. Distribution of Harmful Content

AI systems can automatically create content based on human text prompts. “These systems can greatly improve productivity, but they can also be used to do harm, intentionally or unintentionally,” said Bret Greenstein, a partner in cloud and digital analytics insights at professional services consultancy PwC. For example, an AI-generated email sent on behalf of a company could inadvertently contain offensive language or issue harmful guidance to employees. Greenstein advised that generative AI should be used to augment humans and processes, not replace them, ensuring that content meets a company’s ethical expectations and supports its brand values.

2. Copyright and Legal Exposure

Common generative AI tools are trained on large image and text databases from multiple sources, including the internet. When these tools create images or generate lines of code, the source of that data can be obscure, which is a problem for banks dealing with financial transactions and for pharmaceutical companies that rely on complex molecular formulas for drugs. Reputational and financial risks also grow when one company’s products are built on another company’s intellectual property. “Companies should look to validate the output from their models,” Roselund advised, at least until legal precedent clarifies IP and copyright issues.

Companies are scrambling to maximize the benefits of today’s generative AI while grappling with its unique ethical issues.

3. Breach of data privacy

Generative AI large language models (LLMs) are trained on datasets that may contain personally identifiable information (PII) about individuals. Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, said this data can sometimes be surfaced with a simple text prompt. And compared with traditional search engines, it can be harder for consumers to find that information and request its removal. Companies building or fine-tuning LLMs should ensure that PII is not embedded in the language models and that it can be easily removed in compliance with privacy laws.
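One common mitigation is to scrub PII from text before it ever enters a training or fine-tuning corpus. The sketch below is a minimal, illustrative example using regular expressions; real pipelines rely on much more robust detection (for example, NER-based tools), and the patterns and placeholder names here are assumptions for demonstration only.

```python
import re

# Illustrative PII patterns only; production systems need far more
# coverage (names, addresses, account numbers, locale variants, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace matched PII with typed placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders such as `[EMAIL]` preserve the sentence structure the model learns from while keeping the underlying identifier out of the corpus.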

4. Disclosure of Confidential Information

Generative AI is democratizing AI capabilities and making them more accessible. That combination of democratization and accessibility could lead to medical researchers inadvertently disclosing sensitive patient information, or to consumer brands unwittingly exposing product strategy to third parties, Gupta said. The consequences of such unintended incidents can irreversibly damage patient and customer confidence and carry legal repercussions. Roselund recommended that companies establish clear guidelines, governance, and effective communication from the top down, emphasizing shared responsibility for protecting sensitive information, protected data, and IP.

5. Amplification of existing biases

Generative AI can amplify existing biases. For example, bias can be present in the data used to train LLMs, outside the control of the companies that use these language models for their particular applications. Greenstein said it is important for companies working on AI to have diverse leaders and subject matter experts who can help identify unconscious bias in data and models.


6. Employee Roles and Morale

AI can perform many of the day-to-day tasks that knowledge workers do, such as writing, coding, content creation, summarization, and analysis, Greenstein said. Worker displacement and turnover have continued since the introduction of the first AI and automation tools, but the pace has accelerated with innovations in generative AI technology. “The very future of work is changing,” Greenstein added, “and the most ethical companies are investing in that change.”

Ethical responses include investing to prepare specific parts of the workforce for the new roles created by generative AI applications. For example, companies should help their employees develop generative AI skills such as prompt engineering. “This not only minimizes the negative impact, but also prepares the company for growth,” said Nick Kramer, vice president of applied solutions at consultancy SSA & Company.

7. Data provenance

Generative AI systems consume vast amounts of data that may be poorly managed, of questionable origin, used without consent, or biased. Those inaccuracies can be further amplified by social media influencers and by the AI systems themselves.

“The accuracy of a generative AI system depends on the corpus of data it uses and its provenance,” explained Scott Zoldi, chief analytics officer at credit scoring service FICO. ChatGPT mines data from the internet, much of it of poor quality, which shows up as basic accuracy problems in its answers to questions it does not know the answer to, Zoldi said. FICO, he added, has used generative AI for more than a decade to simulate edge cases when training fraud detection algorithms. The generated data is always labeled as synthetic, so Zoldi’s team knows where it is allowed to be used. “We treat it as walled-in data for testing and simulation purposes only,” he said. “Synthetic data produced by generative AI does not inform future models. We hold on to this generative asset and do not allow it to be published.”
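The walled-in discipline Zoldi describes can be reduced to a simple invariant: every record carries a provenance tag, and a filter guarantees synthetic rows never reach production training. The sketch below is a hedged illustration of that idea, not FICO's implementation; the `Record` type, tag values, and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """A data record tagged with its provenance ("real" or "synthetic")."""
    payload: dict
    provenance: str = "real"

def training_corpus(records):
    """Only provenance-verified real data may inform future models."""
    return [r for r in records if r.provenance == "real"]

def simulation_corpus(records):
    """Synthetic data stays walled in, for testing and simulation only."""
    return [r for r in records if r.provenance == "synthetic"]
```

Keeping the tag on the record itself, rather than in a separate registry, means the guarantee travels with the data wherever it is copied.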

8. Lack of explainability and interpretability

Many generative AI systems group facts together probabilistically. This goes back to how the AI learned to correlate data elements, Zoldi explained. But those details are not always revealed when using applications such as ChatGPT, which calls the reliability of the output into question.

Analysts expect causal explanations for outcomes when they investigate generative AI, but machine learning models and generative AI look for correlation, not causation. “That’s where we humans need to insist on the interpretability of the model, on why the model arrived at its answer,” Zoldi said. “And don’t just take the results at face value; really understand whether the answer is a plausible explanation.”

Until that level of confidence is achieved, generative AI systems should not be relied upon to provide answers that can have a large impact on lives and livelihoods.


