“For those who are going to use generative AI tools like ChatGPT, the ethical concerns presented are not insurmountable, but practitioners should give them serious consideration and consult with their clients before using them.”
The use of artificial intelligence (AI) has taken a central role in popular culture, thanks to significant advances in tools such as ChatGPT. Of course, these powerful new AI tools pose real challenges for businesses of all types and sizes. Notably, a Samsung employee shared sensitive information with ChatGPT while using the chatbot at work. Samsung then decided to restrict the use of generative AI tools on company-owned devices and on any device with access to its internal network. Concerned about the loss of sensitive information, Apple likewise restricted employees from using ChatGPT and other external AI tools.
The actual or potential loss of confidential information is a critical issue for technology companies, but it should also be a top concern for any attorney, who has an ethical obligation to keep client information confidential.
Confidentiality concerns that arise when using generative AI (a specific kind of AI that can generate different types of content in response to prompts) should be well understood and acknowledged. For example, do you know whether an AI tool will use the information you provide for the purpose of training its AI models?
AI won’t keep your secrets
According to OpenAI, information sent through the OpenAI API will not be used to train OpenAI models or improve OpenAI’s service offerings. However, data submitted through non-API consumer services such as ChatGPT can still be used to improve its models. So when information is sent through ChatGPT, the AI can use that information to inform itself and to answer other users’ questions. This almost certainly means that the information is no longer a trade secret. And if such information is shared by an attorney or patent practitioner who is ethically required to keep it confidential, doing so falls far short of one of the most basic ethical requirements.
Recall that Rule 1.6(a) of the American Bar Association’s (ABA) Model Rules of Professional Conduct prohibits attorneys from revealing “information relating to the representation of a client unless the client gives informed consent.” The USPTO Rules of Professional Conduct similarly prohibit patent practitioners from revealing “information relating to the representation of a client unless the client gives informed consent.” See 37 CFR 11.106(a). In addition, both Rule 1.6 and Rule 11.106 require practitioners to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Of course, the USPTO rules contain a caveat not present in the ABA Model Rules, requiring practitioners to disclose to the Office such information as is necessary to comply with applicable duty of disclosure provisions. See 37 CFR 11.106(c).
The “What Ifs”
Whether and how Rule 11.106(c) can be complied with when using generative AI tools remains an open question. For example, if you rely on information from ChatGPT, how can you know the source of that information? If you don’t know where the information provided by ChatGPT came from, it is impossible to know what you are incorporating into your patent disclosure. If you have ever struggled with incorporation by reference under 37 CFR 1.57 (e.g., not truly knowing what is being incorporated or contradicted), you know that blindly incorporating material provided by generative AI tools like ChatGPT can have devastating consequences. Has the information provided been lifted from your competitors, and are you now including that information in your patent application? If any of that information is relied upon in a claim, what does that mean for future property rights? Could a competitor prove that the content you included was derived from information ChatGPT holds in its corpus? Could the information ChatGPT has in its corpus, whatever its source, lurk unidentified in your claims, only for contributors to later be found to be co-inventors through a successful petition or lawsuit, and end up sharing ownership?
Of course, even considering the confidentiality risks, the loss of trade secret rights, and the unknown provenance of information that come with using generative AI tools like ChatGPT, they may still be very attractive tools. When looking for ways to accurately, completely, and creatively describe innovations, generative AI tools can accelerate information and data retrieval, and can even provide text on some aspects of the innovation, on the prior art you present in the background, or on context that helps the reader appreciate the benefits brought about by the innovation. And in a world where both the Federal Circuit and the Supreme Court continually demand more disclosure in patent applications, while clients demand more work from patent professionals at lower cost, everyone is looking for ways to cut corners without compromising quality. From a risk-reward perspective, then, the use of generative AI tools may simply be too beneficial to ignore.
For those who are going to use generative AI tools like ChatGPT, the ethical concerns presented are not insurmountable, but practitioners should give them serious consideration and consult with their clients before using them.
Informed Consent and Competence
ABA Rule 1.4 requires attorneys to “promptly inform the client of any decision or circumstance with respect to which the client’s informed consent” is required. See Rule 1.4(a)(1). Attorneys are also required to “explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.” See Rule 1.4(b). Attorneys must additionally “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” See Rule 1.4(a)(2). The USPTO rules (37 CFR 11.104) mirror the ABA rules.
Understanding how generative AI solutions collect, store, and use the information provided to them is a prerequisite for making an informed assessment of issues such as confidentiality and, ultimately, for communicating with clients to obtain fully informed consent. For example, if a particular AI solution is used to create part of the description of a patent application, will that AI internalize the information provided, incorporate it into its corpus, learn from it, and draw from it when responding to future users? As already mentioned, ChatGPT may or may not use that information, depending on how it is submitted. Knowledge of the specific terms and conditions governing an AI tool is necessary both to reasonably consult with the client and to explain the issues to the extent needed for the client to make an informed decision on whether to permit the use of generative AI tools. See ABA Rule 1.4 and USPTO Rule 11.104.
And perhaps at the most basic level, the rules of professional conduct require competence. “A lawyer shall provide competent representation to a client,” says ABA Rule 1.1, which USPTO Rule 11.101 follows. Under the USPTO rule, competence requires the legal, scientific, and technical knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. ABA Rule 1.1 is similar, except that, being a general provision, it does not require scientific and technical knowledge.
While it may seem silly, or even banal, to emphasize the requirement that practitioners demonstrate a requisite level of competence, it is worth remembering that generative AI tools like ChatGPT, however impressive, are neither foolproof nor perfect. For example, a conversation with ChatGPT turned to the topic of the Patent Trial and Appeal Board (PTAB). ChatGPT referred to its judges as administrative law judges, which is incorrect: the judges who make up the PTAB are administrative patent judges (APJs), not ALJs. On the surface this may seem a small difference, but as the conversation progressed, this fundamental misunderstanding led ChatGPT into further errors, such as concluding that seven years of legal experience is required to be hired as a PTAB judge. In fact, dozens of APJs with less than five years of legal experience are employed by the USPTO as PTAB judges. And when asked where this misinformation came from, ChatGPT refused to answer the question. Still, its answers are delivered with authority and confidence, and someone wholly unfamiliar with the subject could easily succumb to the misinformation being provided.
The duty of competent representation will almost certainly demand more of attorneys and patent practitioners than generative AI tools like ChatGPT can currently provide. This does not mean such tools are useless, assuming the hurdles around confidentiality and client informed consent are resolved, but blind reliance on ChatGPT by qualified professionals is almost certainly insufficient to reach the level of competence expected by ethics authorities. In other words, if things go terribly wrong, merely asserting that the information provided by ChatGPT was believed to be trustworthy would likely not meet the expected standard of competence; independent verification and broader research remain prerequisites.
Finally, whatever decisions practitioners and firms make regarding appropriate protocols for evaluating the use of generative AI, communicating with clients to obtain informed consent, and verifying information, it is important to remember that practitioners have a duty to supervise all those whose work they direct. The same applies to non-attorneys and non-practitioners employed or engaged to facilitate the representation of clients. See ABA Rules 5.1 and 5.2, and USPTO Rules 11.501 et seq.
Image Source: Deposit Photos
Image ID: 651971872
Author: Primakov
