Garfield County adopts first policy governing use of artificial intelligence by employees



Garfield County Commissioners on Monday unanimously approved the county’s first policy regulating how government employees use artificial intelligence (AI).

The policy was first drafted in August and was added to the county commission’s consent agenda this week. It establishes standards for all county employees, as well as contractors, third-party vendors, consultants, and volunteers who use AI tools on behalf of the county. It covers activities such as predictive analytics, automation tools, chatbots, and generative AI programs such as ChatGPT and Microsoft 365 Copilot.

“The ultimate goal is to benefit the community and contribute to improved services and process improvements,” Garfield County IT Manager Gary Noffsinger said during an Oct. 15 work session with county commissioners. “While we are still in the early stages as a county, we look forward to a phased approach to policy, experimentation, and learning.”



According to the county’s IT department, the benefits of AI include faster response times, more efficient service delivery, improved infrastructure management, and advanced data analytics. Automating routine tasks has the potential to reduce the administrative burden on staff while also saving the county money.

County officials say AI tools are already integrated into software that county employees use regularly, such as applications from Microsoft and Adobe, and the county attorney’s office and assessor’s office are experimenting with AI for legal research and data analysis. Other departments also plan to start pilot projects in 2026.



But the county’s IT department also warned of AI risks, including algorithmic bias, job losses, inaccurate information, over-reliance on automation, and potential data privacy violations. The newly adopted policy aims to reduce these risks while enabling employees to use AI to optimize their work.

“Sector leaders have a responsibility to support the use of (generative AI) at the right scale to deliver the most important public benefits,” the policy states. “This starts by focusing on the specific needs and challenges faced by the sector and the communities it serves, and ensuring that AI projects are problem-driven rather than technology-driven.”

Key points

The policy’s six key points outline general rules for the use of AI in county operations.

  • AI should be used only as a support tool.
  • Sensitive data must not be entered into public AI tools.
  • Use of AI must be disclosed.
  • AI-generated content must be reviewed before use.
  • All AI use must comply with the law and county guidelines.
  • Employees must consult the IT department and their department head before using new AI tools.

Employees can use AI tools to support services, boost productivity, and aid decision-making. AI is not intended to “substitute human judgment or circumvent established procedures,” the policy states.

The policy also emphasizes transparency. Employees who use AI must notify their manager and may not enter sensitive information into public AI tools without permission from their department head. When AI is used for public or sensitive work, the policy states, the use must be made public, affected individuals must be notified, and the tool must be recorded in the county’s AI inventory.

AI-generated content must be reviewed, edited, fact-checked, and verified for fairness, accuracy, and comprehensiveness before use. The policy notes that AI tools can reflect bias in training data and are not necessarily accurate. For example, generative AI models such as ChatGPT can “hallucinate” or fabricate information.

Employees are expressly prohibited from creating deepfakes (fake images or recordings) or fabricated findings, and from relying on AI for legal or regulatory analysis. Even when AI tools are used, employees remain responsible for all content they use or share, the policy states.

Data privacy and risk level

The policy prohibits county employees from entering sensitive data, such as medical records, legal files, and protected resident information, into public AI systems. All uses of AI must comply with applicable laws, including privacy laws and public records laws.

It also sorts AI uses into low-, medium-, and high-risk categories. Low-risk uses include drafting internal emails, writing meeting summaries, and writing code. Medium- and high-risk uses include publishing content, creating interview questions and recruitment materials, contributing to safety and regulatory documents, and summarizing policy data.

The policy also establishes the county’s data classifications, which range from Level 1 (public data, including internal memos and public websites) to Level 3 (sensitive or confidential data, including protected health information, passwords, and federal tax information).

The policy specifies which data levels may be entered into each approved AI tool. For example, it states that data at Level 2 or lower can be used in ChatGPT Business or Enterprise, while protected health information can only be used in tools covered by a business associate agreement, such as Copilot Chat.

Copilot Chat is already available to county employees as a “safe option for most generative AI tasks,” the policy states. According to the policy, the use of AI tools that have not been purchased or vetted by the county is strongly discouraged.

Content input or generated by AI may qualify as public records under the Colorado Open Records Act (CORA). Therefore, the County will retain Copilot Chat content for 30 days and ChatGPT Enterprise content for 90 days.

“Although content generated by AI may appear authoritative and polished, it can be inaccurate, biased, or misleading. Use of GenAI may also increase the risk of privacy violations, unauthorized data sharing, and cybersecurity threats,” the policy states. “Furthermore, over-reliance on GenAI for decisions that impact the rights or safety of the public could reduce transparency, weaken accountability, undermine trust in government, and may violate Colorado law.”

The IT department enforces this policy through regular audits and monitoring, and violations may result in disciplinary action. The policy will be updated regularly to align with state, federal and industry standards, including the National Institute of Standards and Technology’s AI Risk Management Framework, the policy states.

“It’s like every invention that’s ever come out. There’s a lot of good that can come out of it, but when evil people control it for their own means, people are enslaved,” Commissioner Mike Samson said on Oct. 15.

“The potential is incredible, but be careful if it’s used in the wrong way,” he added. “Humanity will never be the same.”
