Contractual services are typically accompanied by terms of use provisions that form a framework of mutual rights, responsibilities, and restrictions governing a customer’s use of the provider’s AI systems, services, intellectual property, and other assets. Typically, the framework is expressed within the contract documents, and the provider implements the underlying technical and procedural mechanisms necessary to enforce it.
Although terms of service are not a new idea and apply across most industries, the explosive growth of AI has brought the importance of AI platform limits into focus. Consider the highly publicized dispute in early 2026 between AI provider Anthropic and the U.S. Department of War (DoW). Anthropic refused to allow the DoW to use its Claude AI platform for domestic mass surveillance or the development of autonomous weapons, while the Pentagon demanded unrestricted use for “defensive operations,” broadly defined. The disagreement ended the partnership between Anthropic and the DoW, which has since established partnerships with more flexible providers, such as OpenAI.
Anthropic’s stance and its aftermath will provide public relations and business students with an important leadership case study to discuss for years to come. But this high-profile dispute highlights a far more pressing challenge for AI technology providers concerned about ethical and social safety issues associated with AI platforms.
AI usage limits
Operators of services, such as AI models and platforms, can impose terms of use and restrictions on how those offerings are used. Businesses that adopt AI services typically accept acceptable use policies or other formal agreements. Such agreements tend to focus on what users cannot do, or the exclusions, because that is usually the shorter list.
So what are the limits of AI platforms? The usage policies of major AI platform providers typically include broad provisions such as the following:
- Restricted content. Although the concept of “bad” content encompasses a wide range of topics, users are generally prohibited from using AI platforms to create content that is harmful, hateful, illegal, or sexually explicit.
- Intentionally false information. You may not use the AI platform or services to create content that is fraudulent, deceptive, intentionally false, or designed to mislead.
- Illegal or malicious activity. You may not use AI systems to perform or support illegal activity, such as generating code that exploits vulnerabilities in software or hardware, hacking into services or systems, or creating malware.
- Causing harm. The classic prohibition bars the use of AI to cause physical harm, such as supporting the development of autonomous weapons. Such prohibitions are at the heart of business and social ethics.
- Data privacy or security breaches. Because AI systems cannot easily distinguish between personal, private, and public information, users are responsible for keeping sensitive data, personally identifiable information, and proprietary business information, such as software code, out of their inputs. AI systems commonly use new data to train and improve their models, so the information you input today can quickly surface in someone else’s output tomorrow.
- Violation of intellectual property rights. AI systems cannot distinguish between public information and intellectual property owned and protected by law. AI users may not input content that is trademarked, copyrighted, or protected by other intellectual property rights. Additionally, AI systems may not be used to create content that violates these legal intellectual property protections.
- Reengineering of AI services. This restriction prevents you from using an AI service to create a competing AI service or platform, such as by benchmarking the system’s performance or reverse engineering the service to facilitate the creation or improvement of a competitor.
- Tampering with AI services. AI service providers implement guardrails to monitor and enforce the limits contained in usage policies. Any attempt to tamper with, disable, or circumvent these measures is considered a direct violation of the policy.
- Regulatory restrictions. Usage policies may include broad language related to regulatory issues and remind users to maintain compliance with regulations or laws based on their location or jurisdiction, such as where data sovereignty applies.

Implementing acceptable limits for AI usage
There are several ways to implement restrictions on the use of AI platforms and systems, including through terms and conditions, policy restrictions, and technical infrastructure and tools.
Terms and conditions
Contracts are a classic mechanism for establishing a legally binding framework of understanding between providers and users. The contract outlines how the AI will use data, especially proprietary data; how AI output can and cannot be used; and who is responsible for AI errors. Contracts therefore form the front line of AI usage terms and typically include the following provisions:
- Purpose. The intended purpose of the AI platform or service.
- Usage restrictions. These are often explicit, intentional prohibitions on the use of AI services, such as restrictions on data inputs, security circumvention, tampering, and competitive use.
- Output ownership and usage restrictions. These terms define ownership of AI-generated output and typically impose strict restrictions on the sale, publication, and other use of content that includes third-party IP.
- Training limitations. Mutually accepted limitations often prevent providers from using client inputs, prompts, or resulting outputs to train or refine AI models.
- Human review requirements. Contracts can specify that some use cases require human review and approval to ensure accuracy and manage bias. Common human-in-the-loop (HITL) requirements cover medical, legal, and employment use cases.
- Liability and indemnification. These terms define who is responsible for AI output that is harmful, discriminatory, or inaccurate. Providers often try to shift liability to users, but users can negotiate to hold providers accountable under certain circumstances.
- Security and compliance. Such terms are common, and negotiable, where users need to ensure that the AI provider complies with applicable data privacy, security, and sovereignty regulations.
Policies
Contracts and policies overlap considerably and are often used together as part of an acceptable use framework for AI. While a contract is a legally binding, enforceable agreement between parties, a policy provides a more flexible set of guidelines, rules, or procedures.
For example, a contract may require compliance with a HITL policy, while the actual HITL requirements are embodied in a separate policy document. This approach typically benefits AI providers: a provider can change a policy unilaterally at any time, whereas a contract change requires both parties to renegotiate and accept the new terms.
Common policies, which can be implemented as separate guidelines in conjunction with contracts, often include the following provisions (see the sketch after this list):
- Restrictions on data processing. As AI capabilities and the legal landscape evolve, keeping the actual list of data processing restrictions in a separate policy document allows providers to change them at any time.
- Restricted uses. Similarly, a contract may prohibit certain uses in general terms, but the specific list of restricted uses remains in the policy document, allowing providers to add or remove restrictions as AI capabilities and usage evolve.
- HITL requirements. These policies list AI use cases that require human review and acceptance of AI output.
- Approved tools. AI providers may only allow a limited set of tools to interact with their AI systems. This policy lists tools and often prohibits the use of free or public tools that may pose security or governance risks.
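Because these policy documents live outside the contract, they are often treated as machine-readable artifacts that enforcement systems can load at runtime. The following minimal Python sketch shows one way that could work; the JSON schema, field names, and category labels are illustrative assumptions, not any specific provider’s format.

```python
import json

# Hypothetical policy document, maintained separately from the contract so the
# provider can revise it unilaterally. All field names and category labels here
# are assumptions for illustration only.
POLICY_JSON = """
{
  "restricted_uses": ["malware-generation", "mass-surveillance", "autonomous-weapons"],
  "hitl_required": ["medical-advice", "legal-advice", "employment-screening"],
  "approved_tools": ["acme-sdk", "acme-cli"]
}
"""

policy = json.loads(POLICY_JSON)

def check_request(use_case: str, tool: str) -> str:
    """Return 'deny', 'review', or 'allow' for a request under the current policy."""
    if use_case in policy["restricted_uses"]:
        return "deny"    # prohibited outright by the restricted-uses list
    if tool not in policy["approved_tools"]:
        return "deny"    # unapproved client tools pose security or governance risks
    if use_case in policy["hitl_required"]:
        return "review"  # route to human-in-the-loop review before release
    return "allow"

print(check_request("legal-advice", "acme-sdk"))   # review
print(check_request("chat", "random-plugin"))      # deny
print(check_request("chat", "acme-cli"))           # allow
```

Keeping the lists in data rather than code mirrors the contractual design: the provider can revise the policy file at any time without touching the contract or redeploying the enforcement logic.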
Technical elements
AI providers enforce the terms of the contract and its underlying policies through infrastructure design and implementation. Common technical elements of AI enforcement include the following; a minimal sketch combining them appears after the list:
- Access and authentication tools. AI providers use role-based access controls to limit access to AI based on job function or account status, and authentication systems verify users’ identities. Access and authorization tools prevent unauthorized users from accessing AI systems. Other tools detect and prevent access from unauthorized applications, such as certain browsers.
- Content monitoring. Various filters are used to monitor the inputs and outputs of the AI. Input monitoring limits queries to prevent unacceptable prompts, and output monitoring prevents harmful or inappropriate results from an AI system. Other tools, such as anomaly detection, check for signs of malicious or anomalous use.
- Privacy and data protection tools. AI providers use data anonymization and redaction tools to protect sensitive data. Anonymization tools alter sensitive data, such as a user’s name or date of birth, while redaction tools remove sensitive data entirely. These tools help prevent sensitive data from reaching the AI models.
- AI governance tools. Additional monitoring can log all types of AI activity, such as prompted queries and file exchanges, allowing AI providers to support governance by capturing and recording risky activity, or by demonstrating that no discernible risks occurred.
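To make these elements concrete, here is a minimal Python sketch that strings together role-based access control, input and output filtering, PII redaction, and audit logging for a single request. Every name, role, and pattern is an illustrative assumption, and the keyword regexes stand in for the trained classifiers a production system would use.

```python
from dataclasses import dataclass
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-governance")  # governance/audit trail

# Role-based access control: roles mapped to permitted capabilities (assumed roles).
RBAC = {"analyst": {"chat"}, "developer": {"chat", "code"}}

# Toy input/output filters; real systems use trained classifiers, not keywords.
BLOCKED = re.compile(r"\b(malware|exploit)\b", re.IGNORECASE)
PII = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")      # e.g., a U.S. SSN pattern

@dataclass
class Request:
    user: str
    role: str
    capability: str
    prompt: str

def fake_model(prompt: str) -> str:
    """Stand-in for the actual model call."""
    return f"Model response to: {prompt}"

def handle(req: Request) -> str:
    # 1. Access control: verify the user's role permits this capability.
    if req.capability not in RBAC.get(req.role, set()):
        audit_log.info("denied access: user=%s", req.user)
        return "Access denied."
    # 2. Input monitoring: reject prompts that trip the content filter.
    if BLOCKED.search(req.prompt):
        audit_log.info("blocked prompt: user=%s", req.user)
        return "Prompt violates the usage policy."
    # 3. Redaction: strip PII before the prompt reaches the model.
    clean_prompt = PII.sub("[REDACTED]", req.prompt)
    # 4. Call the model and apply the same filter to its output.
    output = fake_model(clean_prompt)
    if BLOCKED.search(output):
        audit_log.info("suppressed output: user=%s", req.user)
        return "Response withheld by output filter."
    # 5. Governance: record the exchange for later review.
    audit_log.info("ok: user=%s capability=%s", req.user, req.capability)
    return output

print(handle(Request("alice", "analyst", "chat", "My SSN is 123-45-6789")))
```

Steps 1 through 5 correspond to the list above: access control, input monitoring, redaction, output monitoring, and governance logging, respectively.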

Limits of AI usage restrictions
AI providers rely on acceptable use policies to govern user interactions with AI services, but these limits can be difficult to apply or enforce objectively. Practical and conceptual limitations of AI usage restrictions include the following:
- Definition of risk. AI Providers may impose additional monitoring, review, or safeguards on uses that are deemed to be high risk. However, risk is often subjective and constantly changing. Subjective risk perceptions can lead to over-restrictions or forgoing additional safeguards.
- The reality of bias. Bias exists in data, no matter how many policies and laws are designed to prevent it, and even unintentional biases are reflected in AI output. AI providers cannot be held responsible for bias as long as they make all reasonable efforts to mitigate it, and users of AI platforms cannot expect providers to detect or correct flaws in the users’ own data.
- Limitations of monitoring. Monitoring requires extensive resources and staff skills, which AI providers may lack. Additionally, responses to user violations can be slowed down by human delays and reviews.
- Prompting and ambiguity. Filters are useful, but they’re not perfect. Skilled prompt engineers can craft complex or carefully worded prompts that elicit prohibited responses without directly tripping existing filters.
- Government exemptions in AI restrictions. AI providers typically grant governments exemptions for “any lawful purpose” related to national security, defense, or intelligence. In effect, the provider lets governments use its AI systems for purposes that would get business users banned from the platform. Such use is typically internal, such as within certain government agencies.
TechTarget’s Senior Technology Editor, Stephen J. Bigelow, has more than 30 years of technical writing experience in the PC and technology industries.
