Your AI usage policy is solving the wrong problem

A company with tens of thousands of software engineers noticed that adoption of its new AI-powered tools was well below 50% and wanted to know why. It turned out that the problem wasn't the technology itself. What was holding the company back was a mindset that equated the use of AI with cheating. Engineers who used these tools were perceived as less skilled than their colleagues, even when their work performance was the same. Naturally, most engineers chose not to risk their reputations and kept working in traditional ways.

This self-defeating attitude is not limited to one company; it pervades the business world. Organizations are holding themselves back because they import negative ideas about AI from contexts where those ideas are meaningful into business contexts where they are not. The result is a harmful mix of bias, unhelpful policies, and a fundamental misunderstanding of what actually matters in business. Moving forward requires setting aside these confusions and embracing a simpler principle: artificial intelligence should be treated like any other powerful business tool.

In this article, I'll share what I've learned over the past six months as we revised our company's AI usage policy, drawing on research and insights from our internal working group (Paul Scade, Pranay Sanklecha, and Rian Hoque).


Faisal Haque’s books, podcasts, and his company provide leaders with the framework and platform to align purpose, people, processes, and technology so they can turn disruption into meaningful, lasting progress.


Context confusion

In education, it is entirely natural to have doubts about generative AI. School and university assessments exist for the specific purpose of demonstrating that students have mastered the skills and knowledge they are being taught. Prompting ChatGPT and handing in the essay it generates defeats the purpose of writing essays in the first place.

When it comes to creative work such as fiction and painting, there are legitimate philosophical debates about whether works produced by AI can have creative authenticity and artistic value, along with difficult questions about where the boundaries lie when AI tools are used to assist.

Such issues, however, are almost entirely irrelevant to business operations. In business, success is measured by results. Does your marketing copy persuade customers to buy, yes or no? Does your report clarify a complex issue for your stakeholders? Does your presentation persuade the board to approve your proposal? The only metrics that matter here are accuracy, consistency, and effectiveness, not the content's origin story.

Importing principles that govern the legitimate use of AI in other fields into discussions of AI use in business undermines our ability to make the most of this powerful technology.

The disclosure distraction

Public discussion of AI often focuses on the dangers of letting generative AI output loose in the public sphere. From "dead internet" theories to debates over whether labeling AI output on social media should be a legal requirement, policymakers and commentators are rightly concerned that malicious uses of AI are permeating and undermining public discourse.

These concerns have made disclosure rules a central part of many companies' AI use policies. But here's the problem: while these worries are entirely legitimate when it comes to AI agents shaping debate on social and political issues, importing those suspicions into a business context is damaging.

Research consistently shows that disclosed AI use triggers negative bias within companies, even when that use is explicitly encouraged and the output quality is identical to human-generated content. The study mentioned at the beginning of this article found that even when the AI tool in question was known to improve productivity and its use was encouraged by the employer, internal reviewers rated the same work as less competent if they were told AI had been used to produce it. Similarly, a meta-analysis of 13 experiments published this year identified a consistent loss of trust in people who disclosed their use of AI. Even respondents who felt positive about using AI themselves tended to be more distrustful of colleagues who used it.

This kind of irrational bias has a chilling effect on innovative uses of AI within companies. Disclosure mandates for AI tools reflect institutional immaturity and fear-based policymaking. They treat AI as a contaminant, creating stigma around practices as uncontroversial as using spellcheck or design templates, or having a communications team draft statements for the CEO to sign off on.

Companies that fixate on disclosure cannot see the forest for the trees. They are so worried about process that they neglect what actually matters: the quality of the output.

Ownership requirements

The solution to both context confusion and the disclosure distraction is simple: treat AI as a perfectly normal (albeit powerful) technological tool, and insist that the humans who use it take full ownership of everything it produces.

This shift in thinking cuts through the muddle that plagues current AI policy. When we stop treating AI as an oddity that requires a special label and start treating it like any other business tool, the path forward becomes clearer. You wouldn't disclose that you used Excel to build your budget or PowerPoint to design your presentation. It's not the tools that matter; it's whether you stand behind the work.

But here's the important part: treating artificial intelligence as a normal technology does not mean you can play fast and loose with it. Quite the opposite. When we set aside concepts that are irrelevant in a business context, such as creative authenticity and "cheating," we are left with something more fundamental: accountability. If AI is just another tool in your kit, you own its output completely, whether you like it or not.

Any mistakes, any deficiencies, any rule violations belong to the person who puts the content out into the world. If the AI plagiarizes and you use that text, you plagiarized. If the AI gets a fact wrong and you share it, that factual error is yours. If the AI generates generic, weak, unpersuasive language and you choose to use it, the poor communication is yours. Customers, regulators, and stakeholders will not accept "the AI did it" as an excuse.

Given this reality, rigorous verification, editing, and fact-checking become non-negotiable elements of AI-enabled workflows. A leading consulting firm recently learned this lesson when it submitted an AI-generated, error-filled report to the Australian government. Those mistakes slipped through because the humans in the chain of responsibility treated the AI output as a finished product rather than as raw material requiring human oversight and ownership. The firm could not shift the blame to its tools. The embarrassment, reputational damage, and strain on client relationships were all on its shoulders.

Taking ownership does not simply mean accepting responsibility for errors. It also means recognizing that once you review, edit, and approve AI-assisted work, it is no longer "AI output" but human output created with AI assistance. This mature approach takes us beyond disclosure theater toward true accountability.

Make a difference: Own your use of AI

Here are four steps businesses can take to move past context confusion and put ownership at the center of their AI policies.

1. Replace disclosure requirements with ownership verification. Stop asking "Did you use AI?" and start requiring clear accountability statements: "I take full responsibility for this content and have verified its accuracy." Every piece of work, no matter how it was made, needs a human explicitly standing behind it.

2. Establish quality standards that focus on results. Define success criteria that ignore how the work was created. Is it accurate? Does it work? Does it meet the business objective? Create validation workflows and fact-checking protocols that apply equally to all content. If something falls short, the conversation should be about improving the output, not about which tools were used.

3. Normalize AI use through practice, not policy. Share internal stories of teams using AI to drive great results. Celebrate business outcomes such as faster delivery, higher quality, and breakthrough insights without fussing over methodology. Make AI proficiency a skill as valued as Excel expertise or presentation design, rather than something that requires special permission or disclosure.

4. Train for ownership, not just usage. Develop training that focuses not only on prompting techniques but on verification, fact-checking, and quality assessment. Teach employees to treat AI output not as a finished product but as raw material that requires expertise to shape and validate. Include modules on identifying AI hallucinations, verifying claims, and maintaining your brand voice.

The companies that prosper next year will not be the ones that unintentionally discourage AI use through the stigma of disclosure policies. They will be the ones that truly understand AI as a powerful tool for achieving business outcomes. While your competitors tie themselves in knots over process documentation and disclosure theater, you can leapfrog them with a simple principle: own your output, no matter how you create it. The question that separates winners from losers is not "Did you use AI?" but "Is this good?" If you're still asking the first question, you're already behind.

