Small purchases, big risks: using shadow AI in government

Applications of AI


Powerful AI tools are now widely available, many of them free or low cost. This makes it easier for more people to use AI, but it also means they can skip normal government safety checks, such as those performed by the central IT department. As a result, the risks expand and become difficult to control.

A recent EY survey found that 51% of public sector employees use AI tools every day. In the same survey, 59% of state and local government respondents said their institutions made AI tools available, compared with 72% at the federal level. However, adoption comes with its own set of issues, and even where sanctioned tools are available, that does not eliminate the use of "shadow AI."

  • First issue: low-cost AI tools sidestep procurement. Many generative AI purchases resemble microtransactions: $20 a month here, $30 a month there, and suddenly the new tool flies under traditional budget-approval thresholds. In some state governments that threshold sits around US$5,000, so a director buying generative AI for a small team never appears on the procurement radar. Without digging deep into the finer details, California's sourcing policy, for example, allows IT purchases between US$100 and US$4,999 without full review, as do other states, including Pennsylvania and New York.
  • Second issue: painful government processes. Employees often turn to AI tools to work more efficiently, sidestepping strict IT rules, slow purchasing, and long security reviews as they try to deliver the services citizens rely on. But government systems hold large amounts of sensitive data, which makes unauthorized AI use particularly risky. These unofficial tools lack the monitoring, alerting, and reporting features of approved tools, making potential threats difficult to track and manage.
  • Third issue: embedded (and unavoidable) generative AI. When AI is seamlessly integrated into everyday software, often designed to feel like a personal app, it blurs the line for employees between approved and unauthorized use. Many government workers may not realize that using AI features such as grammar checkers and report editors can expose sensitive data to unvetted third-party services. These features often bypass governance policies, and even unintentional use can lead to serious data breaches, especially in high-risk environments such as government.

And of course, shadow AI also creates new risks. On the cyber side: 1) data breaches; 2) data exposure; 3) data sovereignty (remember DeepSeek?). And those are just some of the cyber issues. Governance issues include: 1) violation of regulatory requirements; 2) operational problems from fragmented tool adoption; 3) ethics and bias.

Security and technology leaders should mitigate these risks as much as possible while still enabling the use of generative AI. We recommend the following steps:

  1. Increase visibility as much as possible. Discover AI usage across your environment using CASB, DLP, EDR, and NAC tools. Use these tools to monitor, analyze, and, most importantly, report trends to peer leaders. Use blocking sparingly, if at all: if you remember past shadow IT lessons, you know that blocking drives usage underground, and you lose insight into what is going on.
  2. Inventory AI applications. Using the data from the tools above, work across departments to discover where AI is being used and what it is being used for.
  3. Adjust the review process. Create a lightweight review process that accelerates approval for small purchases, and develop a faster, easier third-party security review process for employees and contractors.
  4. Establish a clear policy. Include use cases, approved tools, examples, and prompts. Use these policies to do more than clarify what is approved; they should also teach people how to use the technology.
  5. Train the workforce on what is permitted and why. Explain to your team why the policies exist and what the associated risks are, and use these sessions to show how to get the most out of the tools: walk through configuration features, example prompts, and success stories.
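The discovery step above can be sketched in code. This is a minimal, illustrative example of turning network or proxy log data into a per-department usage report; the log schema, the domain watchlist, and the department field are all assumptions for illustration, not the output format of any particular CASB or DLP product.

```python
# Minimal sketch: flag traffic to known generative AI services in proxy-style
# log records and summarize usage by department, so trends can be reported
# to peer leaders rather than silently blocked.
from collections import Counter

# Hypothetical watchlist of generative AI service domains (extend as needed).
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_ai_usage(log_records):
    """Count hits to watchlisted AI domains, grouped by department."""
    usage = Counter()
    for record in log_records:
        host = record.get("host", "").lower()
        if host in AI_DOMAINS:
            usage[record.get("department", "unknown")] += 1
    return dict(usage)

# Illustrative log records (real deployments would stream these from a
# proxy, CASB, or DNS log source).
logs = [
    {"host": "chat.openai.com", "department": "Finance"},
    {"host": "claude.ai", "department": "Finance"},
    {"host": "intranet.example.gov", "department": "HR"},
]
print(summarize_ai_usage(logs))  # {'Finance': 2}
```

A report like this gives leaders visibility into where shadow AI is concentrated, which is the prerequisite for the inventory and policy steps that follow.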

Enabling safe AI use can deliver better results for everyone involved. This is a great opportunity for government security and technology leaders to encourage innovation in both technology and process.

The original story is here.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTREND. Image credit: iStockphoto/Vladimir Zapren



