From adoption to disruption: experts share viewpoints on driving impact from government AI




95% of private sector organisations are getting little return on AI, despite billions of dollars of investment, according to a recent study. This raises the question: are governments realising value from their use of AI? And how can they work to secure benefits throughout the organisation?

Artificial intelligence has been around in one form or another for decades. But it wasn’t until the proliferation of new AI tools in recent years that governments – and wider society – began to embrace it in earnest, heralding an era in which technological innovation is slated to drive transformation not known since the industrial revolution.

But, to what extent is AI already driving value for the government organisations adopting it? Are most still in an experimental phase focused on specific and narrow use cases, or is it driving true and positive impact on a wider scale?

The impact of AI in government has not yet been probed as thoroughly as in the private sector, where a study by MIT revealed that despite US$30bn–US$40bn in enterprise investment in generative AI, 95% of organisations are getting zero return.

So are governments in a similar ‘high adoption, low disruption’ position with their implementation of AI?

AI impact in government will grow steadily

Cam Linke, chief executive of the Alberta Machine Intelligence Institute (Amii) – one of Canada’s three national AI institutes – believes what’s currently happening with AI adoption in government is a ‘let a thousand flowers bloom’ type scenario, whereby ideas, experimentation and innovation will, in time, lead to organisation-wide results.

“When it’s a technology that people are learning about, you’re naturally going to get pockets of people trying things out and seeing what works and what doesn’t, and experimentation is good… The 95% [statistic cited in the MIT study] can feel like a large percentage but I think that’s part of exploring your process and trying to figure out what can be moved now and what can be moved later.”

Laura Gilbert is former chief analyst and director of data science at the UK’s 10 Downing Street, now senior director of AI at the Tony Blair Institute for Global Change. She says the question of value and impact is situational – often heavily dependent on the organisation and on how value is assessed – but there are pockets of work delivering real benefits, both in cost savings and in citizens’ experience of public services.

Real government AI applications show impacts

There are examples of AI projects that have driven hundreds of millions of pounds in cost savings, she says, adding that the value to the taxpayer of AI-facilitated fraud detection systems in governments around the world is “very high indeed”.

Some impressive AI applications have emerged in government organisations that operate with aspects of private-sector dynamics, such as Italy’s Poste Italiane. The post and parcel services provider is Italy’s largest service distribution network, with 35 million customers and €586bn (US$680bn) in total financial assets. The use of AI helped it to reduce the fraud ratio on its e-Money services by 50% within three months, at a time when worldwide fraud growth averaged 90%.

In terms of public services, Gilbert cites instances where AI has driven case resolution improvements of 60%, the UK Department for Work and Pensions’ Whitemail Insights and Vulnerability Scanner, and Estonia’s AI assistant – which helps citizens access public services – as “real examples of AI systems being implemented and delivered really well that I think have impacted the public”.

And AI can also help to safeguard social benefits, such as by improving food assistance programme integrity. In the US, for example, AI-facilitated monitoring and reporting has been shown to reduce payment errors, increase operational efficiency and help ensure support reaches the right households, with obvious benefits for citizens and taxpayers.

Overall, “some governments are doing really quite well and others are not seeing a great deal of return,” Gilbert says. “I would like to see a lot more drive to widen that out. At the moment, it’s specific exemplars or specific use cases, often where the people involved are particularly ambitious, particularly innovative, and really care about the service that they’re driving. But it’s in no way across the board and nowhere near as far [ahead] as it should be.”

The risk of answering the wrong questions

In terms of where value isn’t being realised, Tom Sabo, principal solutions architect at SAS, has a theory about why: “Exploring why adoption would be high but disruption low, there’s so much to unpack around that,” he says, “but one of my thoughts is, are we asking too much of our AI?”

He explains that people often look to AI to answer problems without really understanding “the nuts and bolts of what AI is actually capable of”.

He offers an example: “I’m surprised when people ask straight analytical questions of large language models (LLMs) rather than asking them to summarise text or draw connections, because that’s what traditional LLMs are good at.”

Many AI applications produce small incremental gains

AI is commonly used for tasks like summarisation, which offer a relatively small return on investment unless one considers the cumulative impact of many small incremental gains across an entire organisation, alongside other small productivity enhancements. Applications with higher demonstrated impact – and more readily measurable results – include fraud and cybersecurity detection, disaster prediction, procurement reform, and the streamlining of planning processes.
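
The cumulative effect of small gains can be made concrete with some back-of-the-envelope arithmetic. The figures below – minutes saved per person, workforce size, working days and hourly staff cost – are invented for illustration, not taken from the article:

```python
# Hypothetical illustration: a small per-person time saving compounds
# across a large workforce. Every figure here is an assumption.

MINUTES_SAVED_PER_DAY = 10      # e.g. faster summarisation of documents
STAFF = 50_000                  # size of a large government department
WORKING_DAYS = 220              # working days per year
HOURLY_COST = 30.0              # fully loaded staff cost per hour

# Total hours freed up across the department in a year
hours_saved = MINUTES_SAVED_PER_DAY / 60 * STAFF * WORKING_DAYS

# Notional annual value of that time at the assumed hourly cost
annual_value = hours_saved * HOURLY_COST

print(f"Hours saved per year: {hours_saved:,.0f}")
print(f"Notional annual value: {annual_value:,.0f}")
```

Ten minutes a day looks negligible per person, but at this scale it amounts to well over a million staff-hours a year – the kind of thinly spread benefit that, as discussed later, is hard to capture in a conventional ROI calculation.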

“We should be doing these things that cost a very large amount of money much better,” Gilbert says, adding that AI also has huge potential in the evaluation of programme delivery – saving large sums of money by highlighting when programmes aren’t working before further investment is thrown at them.

The promise of answering the right questions

Whether the AI tool in question aims to solve a relatively small problem or a big one, it needs to be able to answer the right question if it is to drive impact and value. Get it wrong, and governments can waste a huge amount of taxpayers’ money on an AI implementation that doesn’t deliver.

So, how can the right questions be identified? They are rooted in the organisation’s mission, or in desired outcomes that may be specified in legislation. The equivalent scenario in the private sector is ensuring the AI application ties to business goals.

Gilbert stresses that the pressure some civil and public servants feel to “go and put AI into everything” risks reinforcing processes that don’t work and sending them down an AI rabbit hole that diverts from the intended outcome and potentially much bigger savings.

People can have difficulty understanding the opportunity costs of digital transformation and the most effective investments, Gilbert says, explaining that replacing an ineffective service entirely – and consequently “providing something people really need as opposed to something they don’t” – is a much better investment than moderately improving an existing service through digitalisation. 

“When we think of it from a ‘let’s digitise and automate everything’ approach, we’re really missing the point,” she emphasises.

The key role of careful planning

For the full value of AI to be exploited, Gilbert’s message is that it must be very carefully thought through.

Once the problem has been identified clearly and AI is put forward as a credible solution, a department benefits from running a proof of concept before it puts forward a business case for investment.

Prioritisation is important too. In Canada, the judging criteria devised for a G7 AI hackathon are being used by the federal government to develop intake process criteria that help it prioritise AI deployments.

If working with an external partner, a government should ensure that the partner has a proven track record. This helps ensure the right solution is deployed in the right areas and delivers the right outcomes.

In Canada, for example, the government has compiled a list of AI vendors whose capabilities have been verified by a panel of experts.

And of course, the success of any bought-in AI solution comes down, in part, to good procurement.

Gilbert says governments waste hundreds of millions because the procurement team doesn’t really understand the technology or the problem space. She advocates integrating technical expertise into the process.

“Every time you’re spending money on an external provider at the procurement stage, at the point at which you put out your request for solutions and your bid framework, you should have a technologist actually running or very closely advising on that,” she says.

Measuring AI’s impact in the public sector

In the procurement of AI, measures of success need to be established up front – before a tool or system is onboarded – to provide objective criteria that help ensure projects that aren’t working are nipped in the bud before more money is wasted, and, conversely, that projects delivering value are scaled up and out.

This is not an easy task. The Government of Canada is working to develop a robust framework for evaluating whether an AI project is delivering what was intended and for measuring its impact.

As Kara Beckles, executive director of privacy and responsible data in the Office of the Chief Information Officer, Treasury Board of Canada Secretariat, highlights, with many different types of AI being deployed for different purposes, there is no one-size-fits-all formula. An added complication is that many AI use cases coming through offer a brand new capability, so there is no base for comparison.

The government has a directive on automated decision-making that is accompanied by an algorithmic impact assessment, and it is in the process of drawing up a guide for departments on assessing AI – helping them to measure return on investment as well as other impacts associated with the adoption.  

As Beckles points out, however, there are some AI uses that it may not be useful to measure. “That finding of the MIT study, logically, as an economist, that makes sense to me because you get many, many benefits but spread very thinly across the organisation. Capturing benefits such as enabling public servants to spend less time on mundane tasks is extremely difficult.”

Economics may offer a pathway to measuring AI’s impact in public sector applications. Time-series comparisons of gross outputs to gross inputs could be planned before an AI deployment; a sustained increase in the ratio of outputs to inputs would indicate improved productivity, suggesting a successful use of AI.
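
A minimal sketch of that output/input comparison follows. The quarterly figures are invented for illustration; in practice “outputs” and “inputs” would be whatever gross measures the department already tracks (cases resolved, staff-hours, budget):

```python
# Sketch of the gross-outputs-to-gross-inputs productivity ratio
# described above. All data points are hypothetical.

def productivity_ratio(outputs: float, inputs: float) -> float:
    """Gross outputs divided by gross inputs for one period."""
    return outputs / inputs

# Quarterly series: two quarters before AI deployment, two after.
quarters = ["Q1", "Q2", "Q3 (AI)", "Q4 (AI)"]
outputs = [1000, 1010, 1080, 1150]   # e.g. cases resolved
inputs = [500, 505, 510, 512]        # e.g. staff-hours (thousands)

ratios = [productivity_ratio(o, i) for o, i in zip(outputs, inputs)]
for quarter, ratio in zip(quarters, ratios):
    print(f"{quarter}: {ratio:.3f}")
```

A sustained rise in the ratio after deployment is the signal of improved productivity; a flat or falling ratio would suggest the AI investment is not paying off. The key design point is that the baseline quarters must be recorded before the rollout, or there is nothing to compare against.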

Engage all stakeholders and take culture into account

Another consideration – and one that is absolutely key to harnessing the potential of AI applications of any kind – is to get buy-in from the staff who are going to be using it.

As Beckles says: “Culture eats strategy for breakfast. You can have the best intentions but if you don’t bring the people along with you, you really aren’t going to get very far at all.”

She, Sabo, Gilbert and Linke agree that for any type of AI adopted in government to be effective, there must be a close working relationship between the technical experts and those with their ‘boots on the ground’ who understand the pain points and what’s going to be workable in practice. Separate these two things and you run the risk of complicating the lives of civil servants and citizens rather than making them easier.

“When I work directly with folks on the ground, who have some kind of discretion over how things go, they start buying into technology because they can see, concretely, how it will improve their day-to-day working life,” Sabo says.

“They are also the ones who can identify where the process could be improved and determine some of the potential pitfalls on the fly. They might say ‘hey, we would use this, but it really hasn’t been designed smoothly for us because you weren’t working closely enough with us to understand our workflow’.”

On the other hand, if staff find it works for them “they are the ones getting stakeholder buy-in directly from the field” by communicating the benefits to colleagues.

Training and upskilling in AI and digital literacy can be helpful here, not least to enable frontline delivery teams to understand the ‘art of the possible’. This enables them to recognise where AI could help them and to communicate their needs to decision-makers.

Integrating AI with existing workflows

When driving transformation through AI, smooth integration with existing workflows might be the easiest way of introducing a tool that works, and that employees can get on board with.

This ties in with the theme of incremental change and continuous improvement covered in the SAS-sponsored report Reimagining the future of public sector productivity. It says that taking a ‘build, test, learn’ approach, making small incremental changes and getting constant feedback can help ensure projects are working and that they continue to take advantage of the latest technologies, rather than becoming outdated over time.

The report says that large-scale IT overhauls “have a mixed record” and that governments “should not underestimate the power of incremental reform”.

Governments are truly at the starting point of using AI. Those that can identify where AI will drive the most impact, evaluate the outcomes to iterate for ongoing improvement, and gain buy-in at all levels, will be the governments whose citizens and employees reap the rewards.
