The doctor (bot) will chat with you now

AI Basics


With the rise of ChatGPT, organizations are looking for ways to leverage this technology while ensuring that its associated risks are mitigated.

Put simply

ChatGPT has exploded in popularity and threatens to revolutionize the way we work. GPT-4 not only passed the US bar exam, but also reportedly passed the US medical licensing exam with flying colors.

Given this sophistication, the healthcare industry could (in theory) leverage ChatGPT’s capabilities to streamline processes and improve service. However, many challenges related to AI and machine learning remain. While enjoying the benefits of GPT-4, organizations may be able to protect themselves from many of its key legal and commercial risks in the following ways:

  • Developing ChatGPT usage policies and risk assessment frameworks, and providing staff training;
  • Including appropriate liability and warranty clauses in contracts with service providers that use ChatGPT; and
  • Including appropriate exclusion of liability provisions in contracts with customers (where the customer is not an individual).

ChatGPT risks

Getting it right

First and foremost, ChatGPT can give you wrong answers, for any of the following reasons:

  • hallucinations: There are several well-documented examples of AI systems simply making up plausible-seeming information or making false statements. This is called “hallucination”. Perhaps most famously, Google’s own AI system (Bard) generated a factual error in its first public demo.[1]
  • bias: ChatGPT draws its information from sources that include many user-generated content sites, so its answers may reflect inappropriate biases and conclusions. In this way, wrong, harmful, or biased answers can be generated.
  • timing: ChatGPT is currently unaware of events after September 2021, so it may make simple inference errors or fail to take more recent information into account.

So there is a real risk that ChatGPT (or equivalent chatbots) will generate wrong answers. From a medical point of view, this can lead to very serious consequences if not managed carefully.

Content control

Secondly, ChatGPT is not a secure channel for information that providers do not want made public. The ChatGPT Terms of Use provide that a user’s “content” (i.e., inputs and responses) can be used as OpenAI (ChatGPT’s owner) deems appropriate. Additionally, OpenAI has reported a bug through which ChatGPT could be “persuaded” by certain inputs to leak information that should not be made public.

Therefore, the use of ChatGPT carries risks that organizations operating in the healthcare sector need to manage. These include:

  • privacy: Content may be subject to the Privacy Act 1988 (Cth), which raises potential privacy issues such as unauthorized cross-border data transfers and unauthorized disclosure of sensitive health information.
  • confidentiality: Content may breach confidentiality obligations, such as those outlined in the Australian Medical Association Code of Conduct.
  • intellectual property: Content may infringe the intellectual property rights of other parties.
  • other: AI responses lack empathy, fail to consider the wider context, and may not comply with applicable medical professional standards.

Liability: blame the bot

Another problem arises because the ChatGPT terms state:

  • NEITHER OPENAI NOR ITS AFFILIATES WILL BE LIABLE FOR ANY DAMAGES, EVEN IF OPENAI HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
  • OPENAI DOES NOT WARRANT THAT THE SERVICES WILL BE UNINTERRUPTED, ACCURATE OR ERROR-FREE, OR THAT THE CONTENT WILL BE SECURE, NOT LOST OR MODIFIED.

Therefore, there is a real risk that you cannot hold OpenAI contractually liable if ChatGPT generates an incorrect answer. This compounds all of the risks mentioned above.

De-risk AI systems

To mitigate the above risks, organizations may want to consider:

  • Usage Policy: Implement an AI usage policy that stipulates proper practices for internal use of AI systems. For example, staff might be permitted to use ChatGPT to create initial issue lists, draft various documents, or conduct basic research (all subject to careful review), but not to prepare reports, medical instructions, or advice.
  • Terms and Conditions: Include clear obligations in all contracts with service providers:
    • to restrict or control their use of AI systems; and
    • to comply with the organization’s ChatGPT usage policy and to promptly notify the organization of any inadvertent disclosure.
  • records: Maintain accurate and up-to-date records of the AI systems and versions in use, and impose similar record-keeping obligations on service providers that use AI systems.
  • training: Provide detailed, mandatory training to staff members (and service providers) who may use AI, to ensure they understand the benefits and limitations of the system.
  • responsibility: Understand, and prepare for, the fact that the organization (not ChatGPT) retains responsibility for advisory services. Where information is obtained from a service provider, the relevant contract must clearly state that the service provider is responsible for inaccurate information.
  • warranties: Include clear warranties covering AI-generated responses in service provider contracts, including assurances that service providers will check AI system responses and accept responsibility for errors in those responses.
  • risk assessment: Before using ChatGPT, conduct a risk assessment to ensure your organization understands the pros, cons, and risks of current ChatGPT capabilities and use cases. A risk assessment is also required where ChatGPT is used by a service provider.




