Busy Week In AI: Major Companies Come Together On AI Safeguards, ChatGPT Releases Custom Instructions, Microsoft Releases Azure OpenAI Transparency Note – New Technology


In the rapidly evolving AI space, the final days of this week
brought significant developments even faster than usual. Seven AI
companies agreed to voluntary guidelines covering AI safety and
security, and ChatGPT rolled out a custom instructions tool to
streamline usage. Relatedly, Microsoft issued a transparency note
for the Azure OpenAI service. The week also saw announcements of a
number of generative AI commercial ventures, which are beyond the
scope of this particular post.

AI Voluntary Guidelines

The White House announced that it had secured voluntary commitments from seven major AI
companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft,
and OpenAI) focusing on what the White House termed as “three
principles that must be fundamental to the future of AI –
safety, security, and trust.” According to the announcement,
the Voluntary AI Commitments (the “commitments”) are
“consistent with existing laws and regulations,” reflect
current safety practices and are intended to remain in effect until
regulations covering AI safeguards are enacted. Two important
observations: (1) there is no enforcement mechanism in the
commitments (though it’s possible that the FTC could
investigate a false claim of compliance with the commitments
under its authority over unfair or deceptive acts or practices);
and (2) the commitments apply only to generative AI models that are
more powerful than the current industry models (i.e., more powerful
than GPT-4, DALL-E 2, Claude 2, PaLM 2, and Titan).

The commitments include statements on the following
principles:

  • Safety: The companies committed to internal and
    external security testing of AI systems before their release
    (including related to misuse, societal risks and national security
    concerns). The companies also committed to sharing information
    regarding safety risks, dangerous capabilities and attempts to
    circumvent safeguards.

  • Security: The companies agreed to invest in
    cybersecurity and insider threat safeguards that protect secure AI
    training processes from bad actors, including establishing
    incentive programs (e.g., bug bounties) to uncover undiscovered
    vulnerabilities.

  • Trust: The companies agreed to develop technical
    processes that label AI-generated audio or visual content and
    develop tools that allow others to determine if a piece of content
    was created with their system. One example of such a label might be
    a watermarked AI-generated photo (note: DALL-E generated images
    contain a watermark in the bottom right corner; however, such
    watermarks can be removed relatively easily by users). Another
    commitment requires companies to publish reports for all new
    “significant model public releases within scope” that
    include safety evaluations and limitations on performance and
    intended uses (note: there appears to be no requirement to publish
    information about a model’s training data). Finally, under the
    commitments, the companies would prioritize research on social
    risks posed by AI and support research on solving the consequential
    challenges of our age, such as climate change, cancer detection and
    cybersecurity challenges.
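The labeling commitment is directionally similar to content-provenance schemes (for example, watermarks or signed manifests attached to generated media). As a purely illustrative sketch of the concept, not any company's actual mechanism, a provider could register a cryptographic fingerprint of each generated item and let third parties check content against that registry (all function and model names below are hypothetical):

```python
import hashlib

def register_content(manifest: dict, content: bytes, model: str) -> str:
    """Record a SHA-256 fingerprint of AI-generated content in a provenance manifest."""
    digest = hashlib.sha256(content).hexdigest()
    manifest[digest] = {"model": model, "ai_generated": True}
    return digest

def was_ai_generated(manifest: dict, content: bytes) -> bool:
    """Check whether a piece of content was previously registered as AI-generated."""
    return hashlib.sha256(content).hexdigest() in manifest

manifest = {}
register_content(manifest, b"a generated image, serialized", model="image-model-x")
print(was_ai_generated(manifest, b"a generated image, serialized"))  # True
print(was_ai_generated(manifest, b"a human-made photo"))             # False
```

Note that a registry like this only identifies exact copies; real provenance and watermarking systems aim to survive edits such as cropping or re-encoding, which is considerably harder.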

ChatGPT Custom Instructions

ChatGPT introduced a beta feature that allows users to save custom
instructions or preferences to steer ChatGPT output or to help
improve the performance of ChatGPT plugins. For example, a software
developer could save a preference for a particular programming
language so that, by default, the platform returns output in that
language. Regarding privacy concerns, OpenAI states that it may use
users’ custom instructions to improve model performance, but that
users can disable this sharing through data control settings.
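Conceptually, a saved custom instruction behaves like a standing system message prepended to every conversation. The real feature is configured in ChatGPT's settings UI rather than in code, but a developer approximating it against a chat-style API might do something like the following (the instruction text and helper function are hypothetical):

```python
# Hypothetical sketch: emulate custom instructions by prepending a
# standing system message to every request in a chat-style API payload.
CUSTOM_INSTRUCTIONS = "I am a software developer. Always answer with Python examples."

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the saved instructions to each request as a system message."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Show me how to reverse a list.")
print(messages[0]["role"])  # system
```

The design point is that the user states the preference once, and it silently shapes every subsequent response instead of being repeated in each prompt.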

Transparency Note for Azure OpenAI Service

Microsoft released a transparency note for its Azure OpenAI service
that describes the basics of the Azure OpenAI models for businesses
implementing enterprise-grade functions. The note lists some
intended uses (e.g., chat and conversation creation, writing
assistance, code generation, and sentiment analysis) and some
considerations for customers when choosing a use case (i.e., which
uses might be ill-suited for the service, such as avoiding use of
the models in high-stakes scenarios or for open-ended,
unconstrained content generation). The note also lists best
practices for improving model outputs, such as human review of
outputs and measurement of model quality and fairness, and it
offers some reminders about the technical limitations of the system. All in
all, this is the type of information that should inform business
decisions on AI implementation and which potential risks should be
prioritized during contract negotiations with the provider.

* * *

The White House’s interest in generative AI and the flurry
of activity by major generative AI platform providers are not
surprising given the extensive attention focused on this
technology. While there has been some discussion of the benefits
the technology affords, other activity in the area, such as
pending litigations, ongoing labor actions, congressional
discussions, and international developments, has brought extensive
attention to the risks the technology presents.

Despite recent testimony in Congress, it seems logical that some
of these developments by the platform providers are intended to
forestall legislation on the topic. Whether a divided Congress can
come together to enact meaningful regulation in this area remains
to be seen.


The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
