- Adobe recently published a list of do’s and don’ts for using generative AI apps in-house.
- Employees are prohibited from using personal email accounts or corporate credit cards to sign up for such apps.
- Adobe has set up an internal working group called AI@Adobe to guide the adoption of AI apps inside the company.
Adobe employees cannot use personal email accounts or corporate credit cards when signing up for AI tools such as ChatGPT.
Employees must also never allow AI apps to use their data for machine-learning training, and must never share personal or non-public Adobe data, including financial information.
These restrictions are part of Adobe’s internal guidelines for using generative AI tools in the workplace, according to a company-wide email from Chief Information Officer Cindy Stoddard, a copy of which was obtained by Insider.
In the June 20 email, Stoddard wrote: “We encourage responsible and ethical consideration of generative AI technology within our company, so that our employees can learn its capabilities and explore how it impacts the way we all work.” She added: “As an employee, it is your responsibility to protect Adobe and our customers’ data and not use generative AI in a manner that harms or puts Adobe’s business, customers, or employees at risk.”
The email is notable because Adobe is one of the largest software providers on the internet and is increasingly offering its own generative AI products to customers. The company is embracing the new technology externally while adding an extra layer of caution to its internal use of other generative AI tools.
Other tech companies have adopted similar approaches. Amazon has warned employees to use ChatGPT carefully at work, with some managers receiving real-time alerts when the app is used on an employee’s work computer. Apple, Alphabet, and Samsung have also restricted employee use of generative AI tools.
An Adobe spokeswoman declined to comment on Stoddard’s email.
Adobe has no intention of outright banning AI tools in the workplace. In her internal email, Stoddard said generative AI “can enhance, not replace, human creativity.” While encouraging the use of third-party AI services such as ChatGPT and Microsoft’s Azure OpenAI API, she also shared the following list of “do’s” and “don’ts” for employees:
Do’s
- Find approved software in Adobe’s internal Workspace Store and understand related data classification and handling guidelines.
- If you are building a product integration, review the applicable Adobe policies and guidelines and contact Adobe Legal.
- Opt out of having your data used for machine learning training, if the setting is available.
- Validate the accuracy of your output and apply your expertise and judgment as to how and whether to use the results in your work.
- If you accidentally put Adobe internal, restricted, or sensitive data into an unauthorized third-party app or service, contact Adobe Security.
Don’ts
- When exploring generative AI apps or services that are not approved, do not disclose or upload any personal or non-public Adobe data in prompts or queries.
- Do not use your personal email account to sign up for or log into work-related AI tools.
- Do not purchase work-related software or services with a corporate credit card, or expense such purchases made with a personal credit card.
- Do not accept and use AI output as is; review it and revise where necessary.
- Do not violate the terms of use of generative AI technologies.
New internal AI working group
Stoddard wrote that Adobe recently launched an internal AI working group called AI@Adobe to help “internally adopt gen AI for employee productivity and creative tasks.” The new working group will “consider and understand use cases and define principles and guidelines for experimentation and use of these technologies,” she wrote.
In addition, Stoddard shared examples of the data classifications employees should follow when using ChatGPT-type AI apps. The four data types are listed below.
1) Restricted data (high business impact): regulatory-protected data, critical financial data, intellectual property, passwords, credentials
“Don’t: Ask large language model (LLM) implementations such as ChatGPT to summarize sets of sensitive financial or customer data by providing them as inputs in prompts or other open text fields.”
2) Sensitive data (moderate business impact): need-to-know data such as people-related data (salaries, benefits), source code, customer files, product roadmaps, and Adobe financial information
“Don’t: Ask an LLM to review source code for specific bugs.”
3) Internal data (moderate business impact; note: unclassified data defaults to internal): operations planning, collaboration, internal communications, and support-center articles
“Don’t: Ask an LLM to compose emails based on internal information taken from meeting notes.”
4) Public data: publicly available information
“Do: Ask an LLM how to solve common technical problems, or ask it to draft a business email without your name or other identifying information.
Don’t: Use LLM output in a business project without verifying its accuracy.”
Do you work at Adobe? Got a tip?
Contact reporter Eugene Kim via the encrypted messaging apps Signal or Telegram (+1-650-942-3061) or email (ekim@insider.com). Use a non-work device when reaching out. Check out Insider’s source guide for more tips on sharing information securely.
