Microsoft, Google and Meta all acknowledge the risks posed by AI

Google CEO Sundar Pichai (left) and Meta CEO Mark Zuckerberg (right) are among the tech executives who are enthusiastic about AI.
Justin Sullivan (via Getty Images), Alex Wong (via Getty Images)

  • Big tech companies are finally acknowledging the risks that consumers have been complaining about for months.
  • SEC filings reveal that Meta, Microsoft and others are concerned about problems that AI could pose.
  • Misinformation and harmful content are some of the “risk factors” associated with generative AI.

Artificial intelligence seems to be everywhere these days, but some of the tech industry's largest companies are finally starting to recognize the risks it poses.

AI has become the biggest topic of discussion in the tech industry since OpenAI made ChatGPT publicly available in November 2022. Since then, companies such as Google, Meta, and Microsoft have invested heavily in AI efforts.

Big tech companies have been vocal about their plans to join the AI arms race, but more recently they have been quietly grappling with how the technology could actually hurt their businesses.

Google's parent company Alphabet said in its 2023 annual report that its AI products and services “may give rise to ethical, technical, legal, regulatory and other challenges that may adversely affect our brand and demand.”

Similarly, Meta, Microsoft, and Oracle have flagged AI-related concerns in their SEC filings, usually in the “Risk Factors” section, Bloomberg reported.

Microsoft said its generative AI capabilities “may be subject to unanticipated security threats from sophisticated adversaries.”

“The development and deployment of AI involves significant risks, and there can be no assurance that our use of AI will improve our products or services or benefit our business, including its efficiency or profitability,” Meta's 2023 annual report said.

The company went on to list several ways that generative AI could be bad news for users and expose the company to lawsuits, including misinformation (especially during elections), harmful content, intellectual property infringement, and data privacy.

The public, meanwhile, has its own concerns: that AI will make some jobs obsolete, that large language models are trained on personal data, and that the technology will accelerate the spread of misinformation.

Then, on June 4, a group of current and former OpenAI employees signed an open letter demanding that the company do more to mitigate the risks of AI and protect employees who raise safety concerns.

Those risks range from “the further entrenchment of existing inequalities, to manipulation and misinformation, to the potential extinction of humanity due to loss of control over autonomous AI systems,” the letter said.

Meta, Google, and Microsoft did not immediately respond to Business Insider's requests for comment.
