How unmanaged AI deployments put businesses at risk | Trend Micro (MX)

AI News









Written by Josiah Hagen, Vladimir Kropotov, Robert McArdle, Fyodor Yarochkin

AI systems, including large language models (LLMs), are taking on a larger role in business processes, from content generation to customer interaction. However, while AI responses can sound objective and authoritative, our research shows that they are not inherently trustworthy and require proper verification.

AI is neither neutral nor infallible. LLMs reflect the data they are trained on, including its gaps, biases, and outdated information. As a result, AI systems can:

  1. reflect cultural, social, or political biases
  2. produce inconsistent or contradictory output
  3. make mistakes with complete confidence

If organizations perceive AI output as trustworthy by default, technical limitations and biases can turn into enterprise risks. This study tests how AI biases and failures manifest in real-world use and investigates how they can negatively impact companies.

From the limits of AI to business risks

We ran thousands of iterative experiments on approximately 100 AI models using a dataset of over 800 intentionally provocative questions. In total, we analyzed over 60 million input tokens and over 500 million output tokens.
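An evaluation at this scale reduces to a loop over models and prompts with per-run token accounting. The sketch below is a minimal, hypothetical harness; the `stub_model` function and crude whitespace token counting stand in for real model APIs and tokenizers, which are not specified here:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a real model API call; an actual harness
# would query live models instead.
def stub_model(model_name: str, prompt: str) -> str:
    return f"[{model_name}] answer to: {prompt}"

@dataclass
class EvalResult:
    model: str
    prompt: str
    response: str
    input_tokens: int
    output_tokens: int

def run_harness(models, prompts, query=stub_model):
    """Run every test prompt against every model and log token usage."""
    results = []
    for model in models:
        for prompt in prompts:
            response = query(model, prompt)
            results.append(EvalResult(
                model=model,
                prompt=prompt,
                response=response,
                input_tokens=len(prompt.split()),    # crude whitespace tokenizer
                output_tokens=len(response.split()),
            ))
    return results

results = run_harness(["model-a", "model-b"], ["Q1?", "Q2?"])
```

Scaling the same loop to roughly 100 models and 800+ questions, with repeated iterations per pair, is what drives the token totals into the tens and hundreds of millions.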

Our testing highlights limitations of AI that can lead to potential operational, reputational, and financial enterprise risks.

1. Not separating relevant and irrelevant information

AI models often struggle to distinguish relevant from irrelevant details. Most of the models we tested produced distorted or inaccurate output when prompts contained extraneous information; only 43% of the models answered correctly.

Business risk
This limitation can be exploited to manipulate results, leading to incorrect financial calculations, data misclassification, or flawed automated decision-making.
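One way to probe this failure mode is to pair each question with a version padded with irrelevant facts and check whether the answer changes. Below is a minimal sketch under that assumption; the `query` callable and the distractor sentences are hypothetical stand-ins for a real model call and a real test set:

```python
import re

# Hypothetical irrelevant facts injected ahead of the real question
DISTRACTORS = [
    "My cat's name is Mittens.",
    "The Eiffel Tower is in Paris.",
]

def build_prompts(question: str):
    """Pair the clean question with a version padded with irrelevant facts."""
    padded = " ".join(DISTRACTORS) + " " + question
    return {"clean": question, "distracted": padded}

def extract_number(answer: str):
    """Pull the first integer out of a model's free-text answer."""
    m = re.search(r"-?\d+", answer)
    return int(m.group()) if m else None

def is_robust(query, question: str) -> bool:
    """A model passes if distractors do not change its numeric answer."""
    prompts = build_prompts(question)
    clean = extract_number(query(prompts["clean"]))
    distracted = extract_number(query(prompts["distracted"]))
    return clean is not None and clean == distracted
```

A model that ignores the padding passes; one whose arithmetic shifts because of the extra sentences fails, which is the behavior most tested models showed.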

2. Limited cultural, social and religious awareness

AI models trained in one region may produce output that is inconsistent with cultural or religious norms in other regions. This is especially dangerous for global organizations deploying AI at scale.

Business risk
Getting it wrong can create a public backlash, alienate your customer base, violate local regulations, and cause lasting reputational damage.

3. Limited awareness of political context

AI models are often unaware of political timelines, legitimacy, or authority, especially when time-sensitive or region-specific contexts are required.

Business risk
Inaccurate or misleading political output can lead to legal exposure, compliance violations, or reputational damage, especially if AI-generated content is published in an organization’s name.

4. Overly friendly model behavior

As the user repeats or reframes the question, the AI model tends to gradually adjust its responses to appear more helpful, even at the expense of accuracy.

Business risk
This behavior can be exploited in financial, legal, and government contexts, where repeated prompts lead the model to increasingly favorable but inaccurate answers, with real-world consequences.
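This drift under repeated pushback can be measured by re-asking the same question with an escalating challenge and comparing later answers to the first. A minimal sketch, where `caving_model` is a hypothetical sycophantic model used only to illustrate the failure:

```python
def measure_drift(query, question: str, rounds: int = 3):
    """Re-ask the same question with pushback and record the answers."""
    history = []
    answers = []
    prompt = question
    for _ in range(rounds):
        answer = query(prompt, history)
        answers.append(answer)
        history.append((prompt, answer))
        # Escalating pushback appended on each subsequent round
        prompt = f"Are you sure? I believe the answer is different. {question}"
    # A model 'drifts' if any later answer diverges from its first answer.
    drifted = any(a != answers[0] for a in answers[1:])
    return answers, drifted

# Hypothetical sycophantic model: gives in as soon as it is challenged
def caving_model(prompt, history):
    return "No" if history else "Yes"

answers, drifted = measure_drift(caving_model, "Is the refund valid?")
```

A well-calibrated model should return the same answer across rounds unless the user supplies genuinely new information; flipping purely in response to pushback is the exploitable behavior described above.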

5. Limited awareness of what is “current”

Many AI models operate based on outdated or inconsistent assumptions about current facts, even when real-time data tools are available.

Business risk
Organizations that rely on AI for pricing, currency conversion, market analysis, or decision support run the risk of operational errors and loss of reliability when outdated information is presented as current.
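One simple guard is to treat any model answer that lacks a recent, explicit timestamp as unverified. The sketch below assumes answers carry an "as of YYYY-MM-DD" stamp; that convention, the 90-day cutoff, and the exchange-rate example are illustrative assumptions, not prescriptions from the research:

```python
import re
from datetime import date

def stale_as_of(answer: str, today: date, max_age_days: int = 90) -> bool:
    """Flag answers whose 'as of YYYY-MM-DD' stamp is older than the cutoff."""
    m = re.search(r"as of (\d{4})-(\d{2})-(\d{2})", answer)
    if not m:
        return True  # no timestamp at all: treat as unverified
    y, mo, d = map(int, m.groups())
    return (today - date(y, mo, d)).days > max_age_days

# Hypothetical currency-conversion answers checked against a fixed "today"
assert stale_as_of("1 USD = 17.2 MXN as of 2023-01-15", date(2024, 6, 1))
assert not stale_as_of("1 USD = 17.2 MXN as of 2024-05-20", date(2024, 6, 1))
```

A stale or missing stamp should route the query to a verified real-time data source rather than letting the model's training-time snapshot pass as current.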

6. Misperception of geographical location

Some models attempt to infer the location of a user or system despite the lack of reliable or relevant data, producing details that are convincing but completely fabricated.

Business risk
Using AI output for geolocation, compliance, and personalization without verified inputs can introduce errors that undermine trust and violate regulatory expectations.

Sector-wide impact

Unchecked AI adoption will not affect all stakeholders equally, but it carries significant implications across sectors.

Companies

For organizations, the output generated by AI can convey positions that the company does not support. Global companies in particular need to ensure that their AI outputs are compatible with diverse cultures, languages, and religions.

Governments

The output of AI used by government agencies can influence public messaging and policy. Because messages issued by government agencies are generally considered official, unvetted AI integration can have significant social and political consequences if its output is biased or misaligned with current policies, local culture, or traditions.

Individuals

As AI systems become part of everyday life, over-reliance on them exposes users to privacy, cognitive, and social risks: users may accept responses uncritically, share sensitive personal information without fully understanding the underlying policies of these systems, or receive inappropriate responses.

Responsible AI adoption

Our analysis reveals examples of AI bias in the context of geography, geofencing, data sovereignty, and censorship dynamics, all of which influence the behavior and output of AI models. This study questions common assumptions about LLM capabilities and highlights the risks of relying uncritically on these models.

Ensuring transparency and accountability for AI technology is essential. There is no doubt that AI is a key driver of business innovation, but to fully exploit its potential, it must be deployed alongside thorough validation and up-front risk assessment.

The full report, “Unmanaged AI Deployments and Risks to Enterprises: Assessing Geographical Bias, Geofencing, Data Sovereignty, and Censorship in LLM Models,” provides detailed examples of our findings, analysis of real-world responses from various models, and further recommendations for mitigating AI bias risks.

Also included is an executive brief highlighting the report’s key findings, themes, and implications for organizations implementing AI.


