Compliance and competency: Building an AI-enabled government workforce

Artificial intelligence is rapidly spreading throughout the federal government. Nearly 90% of government agencies already use it or plan to do so. But adoption alone doesn't mean readiness. According to the same study, the skills gap remains one of the biggest challenges government agencies face when implementing AI.

This is not a criticism. In fact, this is one of the biggest opportunities for federal agencies today.

Compliance was never intended to be the goal

Mandatory annual training, cybersecurity awareness, ethics briefings, and acceptable use policies are necessary and important. They establish the foundation every government agency needs. But that is exactly what compliance is: the floor, not the ceiling.

I've led everything from tactical training with government teams to federal contracts across a range of public and private sector organizations. What I consistently saw was not a lack of commitment or talent, but a gulf between knowing the policy and being able to perform under pressure. Those are two different skills, and only one of them shows up in a training completion report.

The leaders who performed best in high-pressure situations were those who prepared so thoroughly that nothing felt new when it mattered most. Likewise, the best teams didn't panic when problems arose. They smiled.

Those leaders made themselves uncomfortable in training, constantly testing their limits, until adversity felt like second nature. Those teams wanted to be together. They were confident in their ability to make the most of their time, trust each other, and execute when the moment came.

This same discipline is exactly what government agencies need to build capacity for right now.

Building capacity requires discipline

Agencies that perform well under pressure are disciplined not only in their policies but also in how they conduct their work. When a team is lean and stretched thin, foundational practices slip. Reactive becomes the default, and execution grows inconsistent. That was true before AI, and it's even more true now.

A 2023 Government Accountability Office audit found that 15 of the 23 agencies reviewed had incomplete or inaccurate inventories of AI use cases. What looks like a technology problem is actually a discipline problem. You can't scale what you can't track, and you can't track what isn't standardized.

After many years of working within and alongside government organizations, I've developed a framework that separates agencies that perform from those that stagnate.

Own decisions. Every high-pressure environment shares one common point of failure: no one knows who owns the call. AI speeds up decision-making, but unclear authority negates that speed. Owning decisions means defining in advance who decides what, and when to escalate. Once that is established, teams stop waiting for permission and start moving with purpose.

Enforce standards. Standards that aren't enforced are just suggestions. Agencies can deploy the best AI tools available, but without consistent expectations for how those tools are used and how success is measured, results will vary and progress will stall. The Department of Defense's Responsible AI framework gets this right: it defines who owns outcomes throughout the AI lifecycle, from development to deployment, ensuring accountability as AI scales across the organization. That is what enforcing standards at scale looks like.

Multiply capacity. Government leaders are being asked to accomplish more with less. The answer is not to work harder in isolation, but to develop the people around you so the whole team can shoulder the load. When capability is built and distributed, organizations become more resilient. When expertise sits with just one or two people, a single unexpected problem can bring everything to a halt. AI doesn't change that equation; only intentional development of people does.

Rehearse under pressure. Most training takes place in calm, controlled settings. Most missions don't. GAO is moving in the right direction by tying AI training to specific use cases, equipping employees with the skills to act both efficiently and responsibly. But access to training is just the beginning. Scenario-based practice, real decision points, and simulated pressure build the kind of muscle memory that holds when things get tough. Rehearsal is how teams earn confidence in their own ability.

Move the mission. All of the above is meaningless if it doesn't lead to progress. The goal of competency training is to develop employees who perform consistently, adapt quickly, and accomplish the mission no matter what the environment throws at them. Most generative AI pilots fail not because of the technology, but because they are never fully operationalized.

Measure what matters. Completion rates show who finished the training, not who can perform when it counts. Better metrics include how quickly teams make decisions under pressure, whether they escalate issues when they should, whether behavior changes after training or reverts within 30 days, and whether leaders reinforce standards daily or let them slide. These tell you whether your employees are ready or merely compliant.

AI will not be the last change that demands something new from government workers. The agencies that keep delivering on their missions won't necessarily be the ones that act first. They will be the ones that invest in how their people think, decide, and perform, and that build the discipline to sustain it over time.

Compliance is what got us here. Competence is what moves us forward.

Ray Resendez is the Senior Vice President of Federal Solutions at ELB Learning.

Copyright © 2026 Federal News Network. Unauthorized reproduction is prohibited.




