Microsoft has been pushing its AI services to its user base, especially with the launch of Copilot+ PCs, but it seems even the company itself doesn’t fully stand behind its creation. According to Microsoft Copilot’s terms of service, updated last October, AI Large Language Models (LLMs) are designed for entertainment purposes only, and users should not rely on them for critical advice. This may be a boilerplate disclaimer, but it’s quite ironic considering the company wants you to use Copilot for business purposes and is integrating it into Windows 11.
“Copilot is for entertainment purposes only. It may make mistakes and may not work as intended,” the document says. “Do not rely on Copilot for important advice. Use Copilot at your own risk.” This is not unique to Copilot; other AI LLMs carry similar disclaimers. For example, xAI states that “Artificial intelligence is rapidly evolving and is probabilistic in nature, so in some cases it may a) produce output that contains ‘hallucinations’, b) be offensive, c) not accurately reflect real-life people, places, or facts, or d) otherwise be inappropriate or unsuitable for its intended purpose.”
Generative AI is a useful tool that can certainly increase productivity, but it remains a tool that takes no responsibility for the mistakes it makes. For this reason, anyone using it should treat its output with skepticism and take care to double-check its results. However, even when people recognize the limitations of current AI technologies, humans are susceptible to automation bias: we tend to favor machine-generated results and ignore data that contradicts them. AI could make this phenomenon even worse, especially since it can produce results that appear plausible, or even true, at first glance.
Follow Tom’s Hardware on Google News, or add us as a preferred source, to get the latest news, analysis, and reviews in your feed.
