
AI transcription tools could allow social workers to spend more time with the people they care for. But if their output is not properly reviewed, it can lead to serious consequences. Imogen Parker of the Ada Lovelace Institute argues that for these tools to realise their transformative potential, they need strong regulation, testing and guidance.
Last week, we published new research on the use of AI transcription tools in social work. Transcription is considered one of the most promising and potentially transformative applications of AI, and these tools are being rapidly deployed in critical frontline settings where high-stakes conversations take place, such as social care and healthcare.
For this research, we worked with social workers and those involved in technology procurement and evaluation across 17 local authorities to explore how transcription tools are used and evaluated in practice, amplifying in particular the experiences and voices of frontline workers themselves.
Social work is a particularly important use case. It is a high-stakes domain, as social workers make consequential decisions about vulnerable people. It is also an area of early adoption: since the arrival of more powerful and accurate tools built on foundation models, deployment has been rapid, with one AI transcription tool taken up by 85 local authorities in the past year. And it is an area where transcription could be a game-changer. Social workers are responsible for extensive case recording and documentation, with the result that they spend only about 20% of their working week in direct contact with the people in their care.
So what did we find?
AI transcription has real potential to help the public sector
Late last year, I asked: is transcription the use case that demonstrates the transformative power of AI?
Our new research shows how AI, when effective and reliable, can quickly become highly valued by frontline professionals. Although ours was a small sample, the social workers we spoke to were overwhelmingly enthusiastic about AI transcription. This contrasts with previous research, which has found that data-driven technologies in the public sector can conflict with aspects of professional expertise and judgment.
Many social workers reported that transcription tools “freed up” their time and allowed them to focus on the meaningful, interpersonal core of their practice. No one comes to social work for the paperwork. Many were happy to treat these tools and their outputs as enhancing, rather than disrupting or competing with, existing professional practice.
However, in these early stages of implementation and testing, social workers have considerable flexibility and discretion over how and when to use transcription tools, so actual usage varies widely. For example, some choose not to use AI transcription with certain families and individuals, or in certain situations. Others use AI note-takers “on the fly” while travelling between appointments, or for internal meetings. Some use only the transcription function, while others use the summarisation feature as well.
Most social workers we spoke to felt that, so far, the benefits of time savings flow directly to them, and that they can decide how to use that saved time. One team leader suggested this could change in future, for example through service targets and mandates (such as increased caseloads or shorter appointments). Such changes could shift social workers’ views on whether AI enhances, rather than automates, professional work.
Social workers are expected to manage risks that arise “upstream”
Social workers retain full responsibility for the outputs of AI transcription tools, and the amount of time they spend checking transcripts and summaries for accuracy varies widely: some spend a few minutes reviewing a transcript, while others spend hours. It is unclear how much oversight these outputs require, and how well the current “human-in-the-loop” approach to review works in practice.
There are two important implications. First, the “time savings” from transcription tools must not simply be absorbed as service efficiencies: social workers’ time for this additional review work needs to be protected. Cost-benefit analyses should reflect how social workers actually use these tools safely, including the need for extra review time.
Second, current “human-in-the-loop” review processes need further evaluation. Social workers told us about significant inaccuracies they had found in documents, from “gibberish” in place of names to false reports of suicidal ideation. Failure to catch these AI “hallucinations” could have serious consequences for people receiving care, professional consequences for social workers, and potential legal consequences if flawed evidence feeds into formal court proceedings and decision-making.
Given that there is no “upstream” regulation in the UK of the underlying models that AI transcription tools rely on, frontline professionals may be taking on risks that they cannot necessarily assess or address. For example, low-level but persistent gender bias in summaries may be harder for social workers to detect than outright inaccurate content.
Read more: Sophisticated cities: How local governments are leveraging AI to tackle their most pressing problems
Structural gaps need to be addressed
Given the rapid pace of adoption, it is welcome that more research is focusing on evaluating these tools. However, we found that resourcing pressures are prompting local authorities to move quickly through light-touch pilots and, in some cases, simplified procurement processes, with knock-on effects on the quality and scope of evidence collection and evaluation.
Some dedicated providers co-create pilots and evaluations with the local authorities that use their tools, but the “shadow use” of general-purpose tools with AI transcription capabilities is unlikely to be subject to this kind of scrutiny.
This lack of an evidence base, and of independent, systematic evaluation of different AI transcription tools, means that public servants and policymakers still do not properly understand how these tools may affect social work, including the potential risks to people receiving care.
Local authorities and procurement teams lack detailed insight into how different tools compare on key aspects of performance, such as bias and accuracy.
And without clear sector-specific guidance from regulators and professional bodies, it remains unclear to social workers and those receiving care what lawful and appropriate use should look like.
This weak approach to managing and mitigating potential risks leaves sector-wide adoption on fragile foundations. Without proper regulation, testing and guidance, a single high-profile transcription error or incident could cause an entire sector to retreat from the technology.
Human note-taking is not perfect either. Despite the potential for errors, transcription tools used responsibly can produce output that is as good as, or better than, a social worker’s own extensive notes. Indeed, if AI transcription were shown to be safe, fair and lawful, the social workers we spoke to would welcome it as a way to improve their practice.
That is why we need structures like the new What Works Centre for Public Sector AI to assess tools in detail, weigh risks against opportunities, and provide balanced guidance, rather than leaving it to local authorities and individual social workers to use under-vetted tools responsibly.
Read more: License to build: Understanding how people think about the use of AI in the public sector
To learn more about AI in government and the public sector, visit the Global Government Forum’s dedicated AI hub.
