Risks and responsibilities to consider


As companies increasingly use artificial intelligence (AI) tools and chatbots to support day-to-day operations, it’s important to remain alert to the business and reputational risks that AI-generated responses can create.

Teams implementing AI into their operations must recognize that automation does not equal accuracy. AI tools can produce false, exaggerated, or biased information, especially when given misleading or poorly structured prompts. One inaccurate or inappropriate response generated by AI can erode customer trust, damage your reputation, and even expose your business to legal and compliance risks.

Understand your responsibilities when using AI tools

Companies are ultimately responsible for any AI-generated response that gives rise to a claim, for example where the response is inaccurate, misleading, defamatory, or infringes someone else’s copyright.

At the same time, companies have little control over user-generated prompts, which can significantly impact the quality and tone of responses generated by AI tools.

In practice, much depends on the tools used and the terms and conditions governing their use. Most providers of publicly available generative AI platforms, such as ChatGPT, disclaim responsibility for the output their tools produce. This means companies are unlikely to have effective recourse against their suppliers if something goes wrong.

On the customer-facing side, companies may seek to protect themselves by adding disclaimers to AI-generated responses. While this may help deflect complaints from dissatisfied or misled customers, it provides limited protection against claims from third parties.

For example:

  • Companies remain liable if an AI-generated response defames a third party and is published publicly (e.g. on a website or social media).
  • If an AI tool uses its training data to create images or other content that reproduces copyrighted material, the original copyright owner may be able to file a claim for infringement.
  • If an AI tool describes a product in a misleading way, consumers may have remedies under the Australian Consumer Law (ACL) if the product does not match the AI-generated description.

Such claims can have a significant financial impact and threaten the future of your business. For example, under the ACL, penalties for misleading or deceptive conduct can reach $50 million or 30% of a company’s annual turnover, whichever is greater.

Content displayed on public-facing websites and social channels is not immune from liability just because it was created by an AI tool. Even the best-written disclaimer cannot provide complete protection.

Important considerations when using AI tools

Companies currently using or planning to deploy AI chatbots or similar tools should consider the following questions to minimize potential risks.

  • How much do you know about your supplier? – Check whether the supplier has experience in your industry or market. If the supplier is based overseas, check what support is available to you during your business hours.
  • Do you understand how the tool works? – Train your staff on how the tools work and test the output from the algorithms.
  • What data is the tool trained on and where does that data come from? – Many large-scale public AI generation tools rely on vast datasets that can contain copyrighted, biased, controversial, or unreliable information.
  • What data will be input into the tool? – Understand how the supplier uses information from your business and customers, especially personal and sensitive data, and ensure it is consistent with user expectations.
  • What decisions does this tool help with? – Clearly define the tasks and decisions that your AI tool will support, and keep humans in the loop for highly sensitive or important decisions.
  • How transparent is the process? – Consider how easily the tool’s reasoning can be traced and justified from input to output.
  • Are users aware that they will be working with an AI product? – Inform staff and customers when they are interacting with an AI tool, and provide a clear process for questioning or contesting its output.
  • Who is ultimately responsible for the output generated by the AI? – Establish clear policies or guidelines on who is responsible and accountable for the tool’s responses, and keep proper records to support accountability if an issue arises.

While many of these risks may seem low for simple tools such as chatbots, AI products can create significant problems for businesses when their output goes wrong. By addressing the questions above, you can identify potential problems early and reduce the risk of adverse outcomes.

How to protect your business

Companies considering implementing AI tools should conduct a thorough risk assessment to minimize the risk of misuse. This assessment should consider the tool’s capabilities, the dataset on which it was trained, and how the company controls and manages access to the results it produces.

It is also important to have clear policies and processes in place to deal with claims arising from misleading or inaccurate output. If AI-generated content on a website defames an individual or infringes a copyright, claiming “the AI did it” is not a complete defence. Companies should therefore carefully consider how their chosen tools will work and who is ultimately responsible for their output.

If you have any questions about the current use of AI in your business, or need help conducting a risk assessment before implementing an AI tool, please contact us.

Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular person or entity. Although we strive to provide accurate and timely information, we cannot guarantee that the information in this article is accurate at the time you receive it or that it will remain accurate in the future.
