AI Skills for Life and Work: Rapid Evidence Review

Applications of AI


This report was authored by Prof Rob Procter, Warwick University and Alan Turing Institute for Data Science and AI

This research was supported by the Department for Science, Innovation and Technology (DSIT) and the R&D Science and Analysis Programme at the Department for Culture, Media and Sport (DCMS). It was developed and produced according to the research team’s hypotheses and methods between November 2023 and March 2025. Any primary research, subsequent findings or recommendations do not represent UK Government views or policy.

1. Executive summary

As AI-based products and services become increasingly embedded not only in people’s work but also in their everyday lives, the imperative to equip the UK population with AI skills for life and work grows steadily more acute. The aim must be to understand AI not only in a technical and applied sense, but also to grasp the broader implications of using AI, whether at work or in everyday life.

However, this aim cannot be realised without a high level of digital literacy. Hence, it should be a concern that a significant proportion of the UK population has only partial Essential Digital Skills (EDS) for life and work. Levels of attainment are influenced by a range of demographic factors, including gender, age, education, income and region, and these will need to be addressed if existing regional economic inequalities are not to be reinforced as the adoption of AI by businesses increases.

The UK’s focus to date has largely been on increasing the supply of AI skills for work through investment in tertiary education. While this is important, the UK could also increase efforts to introduce AI skills into education curricula at primary and secondary levels.

At the time of writing, a taxonomy of AI skills for work is under review in the UK, but its development will require a better understanding of the role of STEM skills. It may not be possible to meet the future demand for AI skills for work purely by increasing take-up of STEM subjects at GCSE and A Level. In other words, to provide an adequate grounding for AI skills, it will be necessary to raise the level of STEM skills underpinning all subjects, not just those that are STEM-focused.

Significant gaps between the supply and demand for AI skills for work have been evident for several years in the UK and globally, and demand is expected to rise dramatically over the next five years. Businesses and public sector organisations may find it increasingly difficult to recruit and then hold on to employees. If this gap is to be closed, government may need to expand its support for the AI skills pipeline at secondary and tertiary levels and invest in the provision of lifelong learning opportunities.

As AI technologies continue to advance and find new applications, education programmes will need to be alert to the impact on the kinds of AI skills required to exploit them. Skills in understanding how businesses can employ AI strategically, while ensuring AI-based products and services are compliant with ethics regulations, are likely to be in high demand. With the average skills lifespan now under three years and likely to fall further, employers will need to take increasing responsibility for creating a learning culture that will enable employees to adapt and upskill, now and in the foreseeable future.

2. Introduction and overview

This rapid evidence review aims to explore the main literature on AI skills for work and life. The specific research questions it is intended to address are as follows:

  • What AI-relevant skills are needed for life and work?
  • To what extent does the UK have or lack these skills in the labour force?
  • What can the UK learn from international counterparts about AI skills?

The review begins by setting out the methodology (Section 3) and context of the demand for AI skills given the rapid uptake of AI within the workplace and the increasing penetration of AI into people’s everyday lives (Section 4).

The review continues by defining AI (Section 5) and the key concepts of digital skills (Section 6) and of AI skills and AI literacy (Section 7). It then draws on evidence from the UK and internationally of how AI skills frameworks for life (Section 8) have been used to translate AI skills into educational curricula. Progress in defining AI skills frameworks for work is then reviewed in Section 9. Sections 10 and 11 examine the evidence of where attainment gaps exist in UK citizens’ digital skills and AI skills for life (Section 11.1) and for work (Section 11.2). As the evidence for the latter is necessarily limited at this time, the review uses data on skills gaps and shortages as recorded by employer surveys and job postings, summarising the various approaches (e.g., survey questions) that have been used, as well as evidence on the proliferation of AI in life and work contexts. Section 12 reflects on the implications of the rapid pace of AI innovation for the future demand for AI skills.

The current state of AI education in the UK, in the context of predicted levels of need and specific skill gaps, is identified, and the range of strategies for AI education available to government to address gaps in AI skills for life (Section 13.1) and AI skills for work (Section 13.2) is then reviewed.

Finally, the summary reflects on responses to the above research questions and the issues that need to be addressed to meet the challenges identified (Section 14). This is followed by appendices (A-D) providing further details on AI literacy skills, the state of digital skills in the UK, AI personas and a skills framework for businesses.

3. Methodology

The methodology followed for this rapid evidence review was as follows. The researcher was first given several reports considered useful by DCMS, DSIT and Ipsos MORI. A Google Scholar search for academic publications with “AI skills”, “AI literacy” or “AI competence” in the title returned longlists of 46, 205 and 10 articles respectively from 2020 onwards, compared to 56, 209 and 11 articles from 2015 onwards. A literature review by Ng et al. (2022b) found that the term “AI literacy” was first mentioned in 2016 and that significant growth in the number of research articles on AI literacy only occurred from 2020 onwards, a result subsequently confirmed by Casal-Otero et al. (2023). Based on this, a start date of 2020 was chosen, giving an initial longlist of 261 articles. Each article was evaluated for the relevance, quality and recency of its research. For example, articles whose focus was on applications of AI in education were discarded. This reduced the longlist to a shortlist of 150, to which were added a further 20 documents from the grey literature (working papers and government documents, including those provided by the Department for Science, Innovation and Technology and other project partners). A further round of screening, in which articles were ranked in terms of their importance and relevant technical literature on AI was examined in more depth, produced a list of 176 articles that were included in this final version of the rapid evidence review.
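The screening arithmetic described above can be sketched as follows; this is a hypothetical reconstruction of the counts reported in the text, not the research team’s actual tooling.

```python
# Hypothetical reconstruction of the screening counts reported above;
# the variable names and structure are illustrative, not the authors' tooling.
search_hits = {"AI skills": 46, "AI literacy": 205, "AI competence": 10}

longlist = sum(search_hits.values())      # titles matched from 2020 onwards
assert longlist == 261

shortlist = 150                           # after relevance/quality/recency screening
grey_literature = 20                      # working papers and government documents
combined = shortlist + grey_literature    # 170 documents entering the final round

# The final round both dropped lower-ranked articles and added items found by
# drilling into the technical AI literature, for a net change of +6 documents.
final_count = 176
net_change = final_count - combined
print(longlist, combined, net_change)     # prints: 261 170 6
```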

4. Context

In the past ten years, the conviction that AI will be the key to a successful economy of the future has reached the point where it is now taken for granted (Hall and Pesenti, 2017). For businesses, AI has become the key enabling technology for supply chain management, customer services, manufacturing, financial services, insurance, healthcare (including diagnostics and drug discovery) and platform labour, and it is expected to have a radical impact on the market for a wide range of jobs, both manual and white collar (Frank et al., 2019). It is widely predicted that scarcely any sector of the economy will emerge unchanged as AI becomes steadily more embedded into business practices, and similar developments are gradually becoming evident in the public sector (Wirtz et al., 2018). The penetration of AI into everyday life is reflected in its rapid integration into a diverse range of consumer products and services that people already take for granted, including Internet search, recommender systems, help lines, the smart home and social media.

This rapid pace of AI development and adoption is already making its presence felt in surveys on the market for AI skills, both in the UK and globally (e.g., Lightcast, 2023). This has spurred governments and businesses to identify strategies that can tackle the growing gap between supply and demand for AI skills, which threatens to jeopardise the predicted economic and social benefits of AI (Sorbe et al., 2019). The skills people will need in order to use AI in their everyday lives are also now increasingly recognised as important, and this is reflected in the rapidly rising number of academic articles on AI literacy and reports published annually on strategies governments should adopt in order to prepare citizens for the coming wave of AI-based products and services that they will encounter as they go about their daily lives.

5. Defining AI

A major challenge in developing strategies to equip the UK population with essential AI skills is the wide range of definitions for AI that have evolved over the past 60 years (Collins et al., 2021).

One approach has been to define AI in terms of its underlying technologies and techniques (Hall and Pesenti 2017):

An umbrella term to cover a set of complementary techniques that have developed from statistics, computer science and cognitive psychology. While recognising distinctions between specific technologies and terms (e.g., artificial intelligence vs. machine learning, machine learning vs. deep learning), it is useful to see these technologies as a group, when considering how to support development and use of them.

Since then, alternative definitions have been proposed that progressively define AI in terms of the functional capabilities of AI systems and their applications:

A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. (OECD, 2023)

Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. (Irish National Skills Council, reported in EGFSN, 2022)

Machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks. (National AI Strategy, 2021)

Yet other definitions of AI list AI technologies along with their functional capabilities (Calvino and Fontanelli 2023):

Artificial intelligence refers to systems that use technologies such as: text mining, computer vision, speech recognition, natural language generation, machine learning, deep learning to gather and/or use data to predict, recommend or decide, with varying levels of autonomy, the best action to achieve specific goals.

Some definitions of AI are accompanied by examples of AI systems (Calvino and Fontanelli, 2023). For example:

  • chatbots and business virtual assistants based on natural language processing;

  • face recognition systems based on computer vision

  • machine translation software;

  • data analysis based on machine learning, etc.;

  • autonomous robots for warehouse automation or production assembly works;

  • autonomous drones for production surveillance or parcel handling, etc.

It is evident from the above that no single definition of AI will serve all purposes equally well. A definition framed in terms of technologies will be understood by a technical audience but is subject to change as those technologies evolve and new ones emerge. The public at large are more likely to grasp the significance of AI when it is defined in terms of its functional capabilities and how these measure up against capabilities people recognise as previously distinctly human. For this reason, we will follow the definition below.

AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem-solving, and even exercising creativity. (McKinsey and Company, 2023)

6. Digital literacy, competence and skills frameworks

Prior to the emergence of commercially viable AI-based products and services, academic research, educational curricula and government policy focused on identifying the competencies and skills citizens need to make productive and safe use of digital technologies and their applications, including email, search and online services, along with the education curricula and programmes designed to deliver them (Bravo et al., 2021; Tinmaz et al., 2023). In the research literature, skills are associated with the ability to complete specific tasks, whereas competencies group skills that are relevant to performing a specific role (Hoskins and Fredriksson, 2008), with competences and skills linked together through an overarching framework (Falloon, 2020). Self-evidently, any attempt to define AI competences and skills must be founded on broadly agreed definitions of digital competencies and skills (for a review, see Pangrazio et al., 2020).

Figure 1: Essential Digital Skills Framework (Department for Education, 2018).

The Department for Education published its essential digital skills (EDS) framework (Figure 1) in 2018 (Department for Education, 2018). Although EDS uses different terminology (i.e., ‘skill’ instead of ‘competence’) and does not make explicit reference to digital literacy, the components of the inner circle in Figure 1, together with the outer circle (Be Safe, Legal and Confident Online), broadly correspond to definitions of digital literacy competencies, such as that proposed by Carretero et al. (2017). As a competence, Be Safe, Legal and Confident Online is distinctive in that it includes the knowledge and skills not only to use digital tools but also to use them appropriately and avoid exposing oneself – or others – to risks.

The EDS framework itself is an update of the Basic Digital Skills framework (2015), which was arrived at through a consultation with businesses, government departments, local authorities and others, although the exact process followed is not documented. This created a distinction between digital skills for life and for work and identified five distinct essential digital competencies and associated skills for life and work beyond a very basic ‘Foundation Level’. The EDS was updated in 2019, following further consultations with providers, industry, digital inclusion organisations, awarding organisations, local authorities and others (Department for Education, 2019a). In this latest version, digital competencies and skills for life and work are defined in terms of a total of 26 and 20 tasks respectively. An individual must be able to perform at least one task from each category to achieve overall competence in life or work EDS (Lloyds and FutureDotNow, 2023):

  • Communicating: 6 life tasks and 3 additional work tasks
  • Handling information and content: 5 life tasks and 2 additional work tasks
  • Transacting: 4 life tasks and 2 additional work tasks
  • Problem solving: 2 life tasks and 4 additional work tasks
  • Being safe and legal online: 9 common life and work tasks

See Appendix A for a full list of life and work tasks.
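The category totals above can be checked with a short sketch; the dictionary below is simply an illustrative encoding of the task counts quoted from Lloyds and FutureDotNow (2023).

```python
# A minimal sketch, assuming only the task counts quoted above, checking that
# the five categories sum to the 26 life tasks and 20 work tasks stated in
# the text (Lloyds and FutureDotNow, 2023).
eds_tasks = {
    "Communicating":                    {"life": 6, "extra_work": 3},
    "Handling information and content": {"life": 5, "extra_work": 2},
    "Transacting":                      {"life": 4, "extra_work": 2},
    "Problem solving":                  {"life": 2, "extra_work": 4},
    "Being safe and legal online":      {"life": 9, "extra_work": 0},  # common to life and work
}

life_total = sum(c["life"] for c in eds_tasks.values())

# Work EDS reuses the nine safety tasks and adds the category-specific extras.
work_total = (sum(c["extra_work"] for c in eds_tasks.values())
              + eds_tasks["Being safe and legal online"]["life"])

assert life_total == 26
assert work_total == 20
```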

While EDS for life and work use the same taxonomy, they differ in terms of the kinds of tasks a person should be able to carry out successfully. As shown above, with the exception of Being safe and legal online, whose tasks are common to both life and work EDS, EDS for work adds several additional tasks to each category (Lloyds and FutureDotNow, 2023).

Summary and key takeaways

Digital literacy frameworks are designed to provide the conceptual underpinnings to educational curricula and programmes, defining the relationships between competencies to perform particular roles and the skills required to complete the tasks associated with those roles.

In the UK, the Essential Digital Skills framework defines the key competencies, skills and tasks for the use of digital tools in life and work. Notably, the framework stresses the importance of not only having skills and knowing how to apply these tools but also knowing when – or when not – to apply them and why. As will become clear in the following section, this competence assumes even greater importance for AI literacy.

7. Defining AI literacy, competence and skills

In this section, we examine how AI literacy competencies and skills have been defined in the research literature. The first thing to note is that these definitions follow the example of the literature on digital literacy and skills. Hence, AI literacy is broadly defined as the competencies to understand, evaluate and use AI systems (Long and Magerko, 2020), while AI skills refer to the technical abilities linked to those competencies. AI literacy is defined in the literature as an extension of digital literacy (Su and Ng, 2023; Ng et al., 2021a; Ng et al., 2021b), with additional competencies (Long and Magerko, 2020) and an increased emphasis on knowing when and how to apply AI technologies (Shiohira, 2021). While being digitally literate is acknowledged to be an essential foundation for AI literacy, Schuetz and Venkatesh (2020) argue that the latter requires a conceptual reframing because AI systems are adaptive and context aware, and so ‘break’ the assumptions that people have about digital technologies (Information Systems (IS) in Table 1 below) and their interactions with them (Schuetz and Venkatesh, 2020, p. 464):

CCS [cognitive computer systems] are no longer simple tools and users are no longer simple users. Rather, CCS and users form complex systems in which artifacts use users to achieve their objectives.

This review will follow the distinction made in the literature between competence and skills. Further, in the case of AI skills for work, it will distinguish between the skills required to create AI-based systems and the skills to understand when and how to apply them within a work context. However, as Section 8 will make clear, current definitions of AI competencies and skills are neither as detailed nor as mature as those for digital competencies and skills.

Table 1: Broken and revised assumptions about Information Systems (IS) (Schuetz and Venkatesh, 2020).

Broken IS assumption | Revised IS assumption
Humans are users | Bilateral human-AI relationship
The developer defines the inputs | AI is aware of the environment
IT artifact use leads to consistent outcomes | AI can be functionally inconsistent
The way the tool derives its outcomes is comprehensible and can be verified | AI can be functionally not transparent
There is an artificial interface | Humans can be unaware of their AI use

Searches revealed several definitions of AI literacy, together with examples of how these frameworks have been unpacked into distinct sets of competencies and skills (e.g., Wong et al., 2020). Among the most frequently cited definitions of AI literacy is that by Long and Magerko (2020):

A set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.

Following a literature review of a set of 150 articles, Long and Magerko (op. cit.) then specify a framework for AI literacy consisting of a set of 17 competencies (see Appendix C), which they group into five overarching themes that include procedural (i.e., knowing how to use the device) and declarative (i.e., knowing how the device works) knowledge (Kieras and Bovair, 1984). These are listed below, annotated with brief explanations:

  1. What is AI? (competencies 1-4): e.g., knowing differences between AI and other digital technologies.
  2. What can AI do? (competencies 5-6): e.g., knowing what these differences mean for how AI can be used, its strengths and weaknesses.
  3. How does AI work? (competencies 7-15): e.g., understanding the principal technical elements of AI.
  4. How should AI be used? (competency 16): e.g., understanding the ethical issues raised by use of AI.
  5. How do people perceive AI? (competency 17): e.g., understanding common misconceptions about AI; making sense of AI; trustworthiness of AI.

From the above it is evident that, in terms of competencies, this framework is heavily weighted with respect to the first three themes. What is AI? emphasises that while all AI technologies are digital, not all digital technologies are AI and that it is important for people to be able to distinguish between products and services that employ AI and those that do not and to understand the nature of the differences if they are to apply AI effectively.

What can AI do? aims to provide people with the knowledge required to identify the kinds of problems that AI is best equipped to solve. How does AI work? reveals the importance of declarative knowledge of AI when deciding which type of AI is best suited to the problem and how then to apply it effectively. The remaining themes are represented by just one competency each. How should AI be used? focuses on understanding the risks that the use of AI may raise.

How do people perceive AI? aims to explore people’s experiences of AI, including the challenges of making sense of how AI products and services behave and trusting the responses of what have been likened to ‘black boxes’, where the relationships between inputs and outputs are opaque. Long and Magerko (op. cit.) explain that their choice of Programmability as a key competence for this theme reflects how children tend initially to personify AI. They argue that learning to program is an important step towards correcting this misapprehension. They also observe that this is but one of several related misconceptions about AI, to which students without a background in computer science are particularly susceptible, including assumptions that AI systems think like humans.

Ng et al. (2021a; 2021b; 2022a) conducted a review of 30 articles on AI literacy published between 2016 and 2021. They concluded that while variations in definitions of AI literacy exist, all “support the notion that everyone, especially K-12 children, acquire basic AI knowledge and abilities, enhance motivation and career interest, as well as use AI-enabled technology” and are consistent with an AI literacy framework containing four distinct competencies (Table 2). These are: Know and understand AI; Use AI; Evaluate and create AI; AI ethics.

Ng et al.’s framework is an adaptation of Bloom’s six-level taxonomy of cognitive levels (Bloom, 1956). Successive levels of Bloom’s taxonomy require higher reasoning competencies and skills, and each level must be mastered before the next one can be reached. Figure 2 (Ng et al., 2021a) shows how this AI literacy framework maps onto Bloom’s taxonomy. Know and understand AI is mapped to Know and Understand; Use AI is mapped to Apply; Evaluate and create AI is mapped to Analyse, Evaluate and Create. AI ethics has no direct equivalent level in Bloom’s taxonomy; however, it is arguably a desirable skill in the context of Create.
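As an illustration, the mapping described above can be encoded as a simple lookup; the encoding below is mine, not Ng et al.’s, and the helper reflects the rule that each Bloom level must be mastered before the next.

```python
# Illustrative encoding (not from Ng et al., 2021a) of the mapping between
# the four AI literacy competencies and the Bloom levels named in the text.
BLOOM_LEVELS = ["Know", "Understand", "Apply", "Analyse", "Evaluate", "Create"]

AI_LITERACY_TO_BLOOM = {
    "Know and understand AI": ["Know", "Understand"],
    "Use AI":                 ["Apply"],
    "Evaluate and create AI": ["Analyse", "Evaluate", "Create"],
    "AI ethics":              ["Create"],  # no direct equivalent; arguably sits with Create
}

def prerequisite_levels(competence: str) -> list[str]:
    """All Bloom levels up to and including the competence's highest mapped
    level, since mastery in Bloom's taxonomy is cumulative."""
    highest = max(BLOOM_LEVELS.index(level)
                  for level in AI_LITERACY_TO_BLOOM[competence])
    return BLOOM_LEVELS[: highest + 1]

print(prerequisite_levels("Use AI"))  # prints: ['Know', 'Understand', 'Apply']
```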

Based on the definitions of each AI literacy competence in Table 2, Know and understand AI corresponds to the EDS Foundation Level, while Use AI corresponds to the five EDS competencies for life and for work. However, in recognition of the power and complexity of AI technologies compared to other digital technologies, the AI competencies clearly place an emphasis on mastery of a degree of conceptual knowledge and higher-level reasoning skills that are not recognised as relevant to digital literacy and so are absent from the EDS framework. Unlike the EDS framework, however, the AI literacy frameworks lack specific examples of the skills and tasks associated with each competence that would help clarify what this conceptual knowledge and these reasoning skills should cover.

Evaluate and create AI and AI ethics have no equivalents within the EDS framework, emphasising the importance of higher order reasoning skills for AI. They are also arguably where an assumption of there being a common taxonomy for AI life and work competencies, distinguished solely by skills that a competent user would be expected to be able to perform, may begin to break down.

Above Analyse in Ng et al.’s adaptation of Bloom’s taxonomy (Figure 2), AI competencies and skills – including ethics – arguably become progressively more technical such that these may only be relevant within a work context.

Table 2: A framework for AI literacy (Ng et al., 2022a).

AI literacy | Definitions | Sample studies
Know and understand AI | Know the basic functions of AI and how to use AI applications | Kandlhofer et al. (2016), Robinson et al. (2020)
Use AI | Applying AI knowledge, concepts and applications in different scenarios | Druga et al. (2019), Julie et al. (2020), Vazhayil et al. (2019)
Evaluate and create AI | Higher-order thinking skills (e.g., evaluate, appraise, predict, design) with AI applications | Druga et al. (2019), How and Hung (2019)
AI ethics | Human-centred considerations (e.g., fairness, accountability, transparency, ethics, safety) | Chai et al. (2020), Druga et al. (2019)

Figure 2: Bloom’s taxonomy and AI literacy (Ng et al., 2021a).

Table 3: Mapping AI literacy frameworks.

Long and Magerko (2020) | Ng et al. (2021a; 2021b) | Casal-Otero et al. (2023)
What is AI? | Know AI | Learning about AI
How does AI work? | Understand AI | Learning about how AI works
What can AI do? | Use AI | Learning about how AI works
How should AI be used? | Evaluate and create AI | Learning for life with AI
How do people perceive AI? | AI ethics | Learning for life with AI

As part of a systematic review of approaches to AI literacy, Casal-Otero et al. (2023) identified three distinct types of competencies as the basis for a thorough and robust understanding of AI:

  1. Learning about AI: distinguishing between systems that use AI and those that do not.
  2. Learning about how AI works: competencies that enable effective interaction with AI.
  3. Learning for life with AI: critically evaluating AI and its applications.

Following a close examination, mappings between the competencies in these three AI literacy frameworks were identified (Table 3). In the following sections, we will examine the progress that has been made to-date in elaborating AI literacy frameworks in terms of AI competencies, representative skills and educational curricula.

7.1. Data literacy

As can be seen in Section 6, the EDS framework includes competencies and skills for the Handling of Information and Content and Being Safe, Legal and Confident Online. Perhaps because it is assumed that AI literacy builds upon digital literacy, AI literacy frameworks have nothing specific to add to these particular competencies, which broadly attempt to define what is expected of users in their handling of personal data. Some argue that these competencies are no longer adequate to equip citizens with the skills they need to live well in contemporary societies (Gebre, 2022, p. 1083):

The world we live in has become increasingly data intensive in that not only the nature and volume of data has increased but also the effect of data on the life of citizens has changed drastically in the last two decades or so.

Empowering people to use AI systems well in life and work calls for them not only to make sense of the outputs of these systems but to have a better understanding of their inputs, i.e., the sources of data that drive them (Stamboliev, 2023). From this perspective, AI literacy frameworks need to include competencies that go beyond those defined in digital literacy frameworks, such as EDS, which has led some to re-frame these as data literacy (Miao and Shiohira, 2022, p. 11):

The ability to understand how AI collects, cleans, manipulates, and analyses data…

It is reasonable to conclude that data literacy will become increasingly significant and so will need to be addressed if AI literacy frameworks are to remain fit for purpose (Ghodoosi et al., 2023). This is supported by the results of The General Public Survey (cB), which found that respondents were most concerned about managing their safety and privacy when using AI in life but least confident about being able to do so.

Miao and Shiohira (op. cit.) make the place of data literacy within an AI literacy competencies and skills framework explicit, as an element of what they refer to as AI foundations. In other aspects, Miao and Shiohira’s AI literacy framework matches those summarised above, albeit with some minor differences in terminology.

Summary and key takeaways

Research into AI literacy frameworks has recently accelerated, leading to a number being proposed within the period covered by this RER. Each of the frameworks documented here is the result of a systematic literature review. Hence, it is not surprising that a close reading reveals these frameworks to be broadly compatible with one another.

All these AI literacy frameworks differ in significant ways from digital literacy frameworks, especially in identifying competencies absent from digital competencies and skills frameworks. These additional competencies stress: the importance for users of AI to be able to distinguish between AI and non-AI-based products and services; the possession of both conceptual knowledge and higher-level reasoning skills; the capacity to critically evaluate AI, including an understanding of ethical issues that may be raised by the application of AI.

Finally, as noted in Section 12, AI competence and skills frameworks will need to be regularly reviewed in the light of further advances in AI and as understanding grows of the risks they may pose in both life and work. Arguably, the failure to acknowledge the significance of data literacy is one example that already needs addressing.

8. AI competencies and skills for life

The objective of defining AI competencies and skills sets is to identify desired learning outcomes that can be used to design educational curricula and programmes in AI literacy.

According to Bentley et al. (n.d.), who examined policy documents from six countries (China, the US, Singapore, Sweden, Australia and Canada) to learn from international best practice in developing AI literacy competencies and skills, there is an emerging set of competencies and skill sets covering AI development and deployment. Nevertheless, Bentley et al. (n.d.) also note that specific competencies and skill sets will depend on the sector(s) where AI-based products and services are being targeted.

In their analysis of the research literature, Casal-Otero et al. (2023) identified four broad categories of educational goals and activities that have been proposed for delivering learning experiences focused on understanding AI to K-12 students (Figure 3).

  1. Learning to recognise artifacts using AI: creating computer-based simulations of human-like behaviours, experimenting with social robots and programming AI-based conversational agents.
  2. Learning about how AI works: building intelligent devices to solve real-world problems.
  3. Learning tools for AI: introduction to age-appropriate programming languages, integration of AI models into games, use of data visualisation and gamification techniques.
  4. Learning for life with AI: exploring how robots can be used in society, programming conversational agents, learning to recognise fake media content.

Figure 3: Categories for AI learning in K-12 (adapted from Casal-Otero et al., 2023).

Under proposals for implementing AI learning at the K-12 level, Casal-Otero et al. (2023) identified six areas that need to be addressed:

  • AI literacy curriculum design: at what level, i.e., primary or secondary, it is introduced.
  • AI as a subject in K-12 education: as a specific subject or as an extracurricular activity.
  • Student perspectives on AI literacy: adapting teaching to student knowledge and attitudes.
  • Teacher training in AI: training schemes and opinions on curriculum development.
  • AI literacy support measures: provision of resources and repositories of materials.
  • Gender diversity in AI literacy: approaches designed to increase participation of girls.

Ng et al. (2022b) found that up to 2021, articles on AI literacy in primary and secondary education programmes were almost all published in the United States, with only two published in the United Kingdom. Su et al. (2022) conducted a literature review of AI literacy education in early childhood within the Asia-Pacific region (China, Hong Kong, Japan, Singapore, Korea). The aim was to derive

a set of implications for innovative pedagogical designs in terms of educational standards, curriculum designs, formal/informal education, student learning outcomes, teacher professional development and learning progressions to recommend how governments, researchers and educators could build a widely accepted and age-appropriate AI curriculum for all K-12 learners.

They observed that the assessment of AI literacy competence is not yet widely used and the tools for this have yet to be rigorously validated. Among their recommendations were:

  • Teachers require rigorous training.

  • More teaching resources are needed, especially to support self-paced learning.

  • Gamification should be used to increase student motivation.

  • AI-based activities should be available in many different forms.

Yue et al. (2022) observed that the focus of academic research into AI literacy in countries such as the United Kingdom has shifted from tertiary education to secondary and primary in recent years. A systematic review by Rigley et al. (2023) of AI skills policies in Australia, Canada, China, Singapore, Sweden, the United Kingdom and the United States found that policies fall into two groups: 1) a broad approach, which aims to upskill and educate all citizens at all levels, represented by the United States and Singapore; and 2) a narrow approach, which aims to educate a smaller group of experts with advanced AI skills, represented by China, Sweden and Canada. They observed a correlation between a high AI readiness index (see Section 13, summary) and the adoption of a broad approach; the UK’s AI readiness ranking would imply that it is following the broad approach. They also observe that in the UK the emphasis is on formal education over “practical skills and work-based methods” (op. cit., p. 8) as the key to developing “talent pipelines”.

No studies have been found that focused specifically on AI literacy education programmes in the United Kingdom at primary and secondary levels. However, several initiatives were identified, including:

  • Data Education for Schools, which is part of the Edinburgh and South-East Scotland City Region Deal Data Skills Gateway, funded by Scottish and UK Government.

  • Data Education in Colleges. This is another stream of the Data Skills Gateway, which has launched a National Progression Award (NPA) and Professional Data Award (PDA) in Data Science for 5th and 6th year pupils in Scotland.

One initiative with a secondary education focus that is mentioned in the UK’s National AI Strategy (Department for Culture, Media and Sport, 2021a) is the National Centre for Computing Education (NCCE) and the courses and resources it offers for teachers of computer science. Beyond this, however, the UK’s National AI Strategy focuses on interventions targeted at tertiary levels and post-secondary programmes such as skills ‘bootcamps’. In a similar vein, a report by the UK AI Council published in the same year (UK AI Council, 2021, p. 18) observed:

To ensure a steady supply of skilled entrants to the workforce, and to send the right signals to young people making further academic choices, the UK should commit to scaling up its programmes significantly at all levels of education and maintain this commitment for at least a decade.

It also described (op. cit., p. 18) its vision as being

For everyone to be able to live confidently with AI, and for those who go on to work with it and to build it to do so with the very best foundation… The UK needs to set itself challenging but realistic goals to ensure that every child leaves school with a basic sense of how AI works. This is not just about understanding the basics of coding or quantitative reasoning, or to describe the mathematical concepts; nor is it simply about ethics. It is about knowing enough to be a conscious and confident user of AI-related products; to know what questions to ask, what risks to look out for, what ethical and societal implications might arise, and what kinds of opportunities AI might provide… For now, the government should focus energy on the most effective existing curriculum enrichment initiatives, not as alternatives to fundamental changes to the curriculum, but because they provide opportunities to accelerate change in the short term.

Summary and key takeaways

Educational curricula designed for delivering AI literacy competencies and skills for life at primary and secondary levels are beginning to be defined in several countries that are among the leaders in AI technology development. In contrast, the focus in the UK to-date has largely been on AI literacy competencies and skills at the tertiary education level.

While it is important that the tertiary level is not neglected, at the time of this evidence review the UK had yet to come forward with specific recommendations for enriching curricula at this level in subjects other than computer science. More importantly, if it is to keep up with the progress made by its international peers, the UK should increase its efforts to develop education curricula in AI literacy and skills at primary and secondary levels.

The UK could benefit from the efforts of its international peers in defining AI literacy competencies and skills frameworks and focus its attention on defining in greater detail the kinds of competencies and skills involved, together with representative activities and tasks around which educational curricula and programmes can then be devised.

9. AI competencies and skills for work

In 2024, the Department for Science, Innovation and Technology published a report that defines an AI competency framework for work for public consultation (Department for Science, Innovation and Technology, 2024). The framework identifies the knowledge, skills, personal aptitudes and behaviours that individuals require to navigate practical challenges and exhibit competency within their profession and in life. The framework targets the following audiences:

  • AI Citizens: members of the public who may be customers of organisations making use of artificial intelligence. The framework defines the level of understanding of AI expected of citizens with respect to its capabilities, opportunities and risks. In doing so, it aims to provide AI Citizens with a pragmatic outlook on the use of artificial intelligence.

  • AI Workers: employees not working primarily in ‘data’ or ‘AI’, but whose roles may be impacted by these technologies. The framework supports AI workers to identify the opportunities AI provides in terms of efficiency and productivity in their roles.

  • AI Professionals: employees whose core responsibilities concern the use of ‘data’ and ‘AI’. These include data analysts, Machine Learning engineers, and data ethicists. The framework defines the cross-cutting competencies required to work effectively in multidisciplinary teams, and to collaborate effectively across their organisation.

  • AI Leaders: those holding senior responsibility for the procurement and/or governance of artificial intelligence solutions. The framework aims to support them to foresee the implications of emerging technologies, including impacts on their workforce, and to oversee the responsible and safe introduction of AI in settings of organisational complexity and uncertainty.

The AI Citizens Persona is intended to be interpreted as AI skills for life (Section 7), so it is worthwhile examining how the framework report defines this persona (Department for Science, Innovation and Technology, 2023, pp. 8-9):

  1. Every AI citizen should be fully conversant with the foundational data skills set out in the Essential Digital Skills Framework.
  2. They will be able to engage meaningfully and critically with the role of AI in life and livelihood.
  3. They will be aware of the ways in which AI technology is used within their daily life.
  4. They will be aware of the opportunities as well as risks of AI and its underpinning technologies, and the need for security of personal data.
  5. They will be critical consumers of artificial intelligence and possess an awareness of the capabilities of these technologies as well as a pragmatic outlook on their utility.

It is not entirely clear from the draft report how the AI Citizens Persona maps onto the three AI skills for life identified by Casal-Otero et al. (2023) (this report, Section 6) and the related AI skills framework (Section 7), as it gives few details of how its five elements would translate into educational curricula. However, allowing for a degree of interpretation, we may reasonably assume that element 2 above maps onto Learning about how AI works; elements 3 and 4 map onto Learning for life with AI; and element 5 maps onto Learning about AI and Learning about how AI works.

The framework is composed of five dimensions, each representing a cluster of competencies and behaviours.

  • Dimension A: Privacy and Stewardship. This area concerns the security and protection of data, including the design, creation, storage, distribution and associated risks. These practical data controls would align with legal, regulatory and ethical considerations.

  • Dimension B: Specification, acquisition, engineering, architecture, storage and curation. This area concerns the collection, secure storage, manipulation, and curation of data. Competencies in this area also relate to the application of data management and analytical techniques. For example, this includes competencies around handling situations arising from the (mis)use of sensitive data.

  • Dimension C: Problem definition and communication. This area concerns the ability to identify and clearly define a problem, to understand the role artificial intelligence can play in potential solutions, and to be able to communicate this knowledge effectively to a variety of audiences. 

  • Dimension D: Problem solving, analysis, modelling, visualisation. This area concerns the knowledge of and ability to apply a range of mathematical, statistical and computing tools and methods to define and analyse a problem and present solutions. 

  • Dimension E: Evaluation and Reflection. It is important that all professionals working within the field of data science and artificial intelligence have a clear understanding of the ethics that underpin their work and take responsibility for the assurance of the models they build.

Dimensions A to D reflect the skills required in the “key phases of the AI project lifecycle” (Department for Science, Innovation and Technology, 2023, p. 12).

Appendix D shows the skills level mapping between the five dimensions and the AI Worker, Professional and Leader personas. According to the draft report (Department for Science, Innovation and Technology, 2023, p. 12), the work personas have been defined in recognition of

The varied nature of roles which have the opportunity to incorporate AI and intends to be flexible to accommodate these. Individual roles vary not only in terms of technical and organisational complexity. They vary in terms of the level of responsibility and accountability, the level of authority to make decisions, and the extent of the impact across the organisation… Personas are intended to be flexible to accommodate these variations. Consequently, an individual role will experience differing areas of specialisation, and differing emphasis on key competencies.

The first significant difference between the EDS digital work skills and AI competency frameworks is that the latter has three distinct levels: Workers, Professionals and Leaders, of which the first appears to map most closely onto digital skills for work. There are, however, some important distinctions. In particular, the skills expected of an AI Worker are significantly more advanced in that they require at least an awareness of, and in some cases a working knowledge of, all five dimensions of the AI skill set.

Consistent with the observation earlier about AI literacy frameworks for life, each of these work personas requires a greater level of knowledge (both conceptual and technical) and skills than the EDS digital skills framework. An examination of the nature of this knowledge and these skills (e.g., those defined under Dimension D) reveals that they are most commonly acquired through taking STEM subjects. This emphasises that while the AI skills framework is undoubtedly broader than STEM, elements of the latter will be an essential foundation for the former (Gehlhaus et al., 2021). What is not clear, however, is whether encouraging the take-up of traditional STEM subjects (Maths and Computer Science, in particular) at a significantly larger scale is the right way to provide the skills that AI Workers would require.

The latest available data (2019) on STEM subject take-up in the UK (House of Commons Science and Technology Committee, 2023a) reveals 550,000 students taking Maths at GCSE and 78,000 at A-level, and 77,000 taking Computing at GCSE and 10,000 at A-level. The evidence summarised in Section 10 suggests, however, that such numbers are unlikely to be sufficient to meet the demand for AI Worker persona skills. Whereas the proportions of females and males taking Maths at GCSE and A-level were reasonably balanced (49:51 and 39:61 respectively), in Computing a much bigger difference was reported (22:78 and 13:87 respectively). A subsequent report by the same committee (2023b) reported some progress towards a greater gender balance at A-level in STEM subjects overall, but evidently more needs to be done.

The second difference is that the AI competency framework has two personas, AI Professional and AI Leader, that have no equivalents in the EDS digital skills framework. It may have been assumed that the demand for employees in these roles with EDS digital skills is being adequately met by existing education programmes. In any case, quite apart from the gap between demand and supply of these AI skills (Section 10), the inclusion of these two personas in the AI competency framework is a timely recognition of the challenges businesses face from the unpredictable impact of AI, both in terms of the range of business opportunities AI presents – the understanding of which calls for the engagement of both AI Professionals (i.e., developers) and AI Leaders (i.e., senior management) – and of the potential risks to consumers of AI-based products and services if ethical governance is not built into corporate market strategies and business models from the very beginning.

Third, it is not clear that the AI Workers persona acknowledges the emergence of the ‘gig economy’ and a new category of employee who must learn to meet the demands of algorithmic management if they are to thrive, or what the implications would then be for AI work skills (Attwell, 2020; Taneri, 2020; Ramos, 2022; Bhatnagar and Gijjar, 2024). Estimates of the size of the UK gig economy vary, from 1.4% of total employment according to the Labour Force Survey (2022) to 14.7% according to TUC research (2021), which also found that it had tripled in size in five years; it is thus in the process of becoming a significant AI worker category. Algorithmic management is not making an impact in the gig economy alone, however: increasingly it is finding its way into more traditional forms of work and workplaces (Jarrahi et al., 2021; Wood, 2021). Recent studies suggest that there may be health and wellbeing implications for employees working under algorithmic management (Kinowska and Sienkiewicz, 2023; Vignola et al., 2023), of which both employers and employees need to be aware and have the knowledge and understanding to avoid.

Finally, in terms of the next steps in developing the AI competency framework, plans are already afoot to begin, as with the EDS framework, a period of public consultation that will focus on developing “…sector-specific case studies and resources and a full skills framework” (op. cit., p. 25). This is important, as buy-in from business will be essential to any ambition of upskilling the UK workforce, and the specifics of which AI skills are important may vary from business sector to business sector (Bentley et al., n.d.).

Summary and key takeaways

A taxonomy of AI skills for work is now, at the time of this review, in the process of being defined in the UK under the AI Competency framework. However, further work is needed to validate the high-level categories if it is to serve as a foundation for defining curricula for secondary and higher education. In this regard, understanding the role of STEM skills will be particularly important.

However, if the future demand for AI worker persona skills cannot be satisfied by raising the numbers of students taking STEM subjects, then alternative approaches will need to be identified. One would be to introduce a relevant subset of STEM subject matter (mainly maths and computer science) into other subjects at both secondary and tertiary levels. More also needs to be done to improve the gender balance in computer science, in particular.

Finally, there is a lack of recognition of the skills people may require if they are to avoid adverse effects of working under algorithmic management and this too needs to be addressed.

10. Digital skills gaps

As noted above, digital skills are a necessary – though not sufficient – foundation for the educational programmes needed to develop AI literacy in the UK. It is therefore instructive to review the evidence of attainment by the UK population with respect to the Essential Digital Skills (EDS) framework (see Section 5 for an overview). Lloyds and Ipsos-MORI (2022) conducted telephone interviews with a representative sample of 4,099 UK citizens aged 18 or over to measure competence at performing the tasks defined in the EDS framework.

In the case of EDS Life skills, Table 4 shows the percentages of people with each of the five skills. In the case of EDS Work skills, 22% were classified as being without EDS Work skills (< 5 skills), of which 8% had zero EDS Work skills and 14% had partial EDS Work skills (1-4 skills). Of those surveyed who were in work, 5% did not have EDS Life skills and 18% did not have EDS Work skills.

Table 4: Percentages of people deemed to have the five specific skills categories under EDS Life skills (Lloyds and Ipsos MORI, 2022)

EDS Life skill                      Percentage with the skill
Communication                       94%
Handling information and content    93%
Transacting                         92%
Problem solving                     90%
Being safe and legal online         94%

Lloyds and FutureDotNow (2023) reported that nearly 60% of the UK labour force is unable to do all twenty tasks listed under the EDS Work skills framework (see Appendix B); only 41% have the full skill set and 8% have none. However, they also reported that 27% were ‘on the cusp’ of being able to do all twenty tasks (i.e., could do between 17 and 19 of them). Lloyds and FutureDotNow highlighted several areas of particular concern, including lack of competence in collaboration tools and in online safety. They observe that digital skills are a key enabler for career prospects, and that socio-economic groups currently underperforming in digital skills are “at risk of a double disadvantage” (Lloyds and FutureDotNow, 2023, p. 4).

Figure 4 shows the proportions of the UK labour force that can do all twenty-six EDS Life tasks, broken down by key demographics, showing the influence of age, education, working status, impairment and personal income. By comparison, ethnicity and gender have little influence.

Figure 4: Influence of demographic factors on proportions of people who can do all 26 EDS Life tasks (Lloyds and FutureDotNow, 2023)

Figure 5 shows national and regional levels of competence in foundation level EDS tasks. This reveals a decline between 2020 and 2021 in England, Northern Ireland and all but one English region, in some cases of several percentage points, with only the East Midlands and Scotland registering an increase.

Figure 5: Proportion of adults 18+ with foundation level EDS Life tasks by nation and region (Lloyds and Ipsos-MORI, 2022).

Figure 6 summarises the influence of demographic factors on the proportions of people who can do all 20 EDS Work tasks (Lloyds and FutureDotNow, 2023). This provides a glimpse of the influence of industry, with a 43 percentage point difference between Construction and Media and Advertising. As with EDS Life tasks, age, education, working status, impairment and personal income are key factors, while ethnicity and gender have little influence. Additional factors recorded include region, which reveals a similar pattern to that observed for EDS Life skills, and size of organisation, which shows that employees of smaller organisations are at a significant disadvantage.

Figure 6: Influence of demographic factors on EDS Work Skills competence (Lloyds and FutureDotNow, 2023).

Figure 7 shows the regional variations in EDS Work Skills, revealing that the North East, Yorkshire and Humber, South West and West Midlands have the lowest proportions of EDS Work Skills. Importantly, no UK region or nation exceeds 50% of the work force being able to do all EDS Work tasks.

Figure 7: Influence of region on proportions of people who can do all 20 EDS Work tasks (Lloyds and FutureDotNow, 2023).

A subsequent survey by Lloyds in 2024 of EDS skills gaps found that 52% of working age adults cannot perform all twenty work tasks in the EDS framework, suggesting that some progress has been made in the 12 months since the 2023 survey (FutureDotNow, 2025). Groups with the largest gaps include part-time workers (65%), older workers (63%) and those with an impairment (62%). The survey also found some ‘unexpected gaps’, including young people (48%) and tech sector workers (20%).

Summary and key takeaways

A population with high levels of attainment in EDS Life and Work skills will be essential if the UK is to succeed in exploiting the personal and economic benefits that AI promises to deliver.

However, recent national surveys show that a significant proportion of the UK population have only partial EDS Life and Work Skills. The figures for the proportion of the population that have the skills to do all EDS essential work tasks could be a particular concern for meeting GDP growth and productivity targets.

These surveys also reveal that levels of attainment in both are influenced by a range of demographic factors, including age, education, income, region, etc. These will need to be addressed if, for example, existing regional economic inequalities are not to be reinforced as the pace of AI adoption by businesses grows.

11. AI skills gaps

11.1. Key gaps in AI skills for life

The evidence on current levels of AI skills for life in the UK is necessarily limited at this time, but it is reasonable to assume that they are at a relatively low base compared to EDS for Life and will have similar socio-demographic and regional profiles. However, some recent surveys can help to make up for a lack of in-depth data. Wave 2 of a survey on Public Attitudes to Data and AI (PADAI), conducted in July 2022 by the Centre for Data Ethics and Innovation (CDEI), reported that 11% of adults had never heard of AI, 10% stated that they could explain what AI is in detail, and 46% could provide a partial explanation (Figure 8). An ONS Opinions and Lifestyle Survey (OPN) on public awareness, opinions and expectations about AI, conducted a year later in May 2023, reported that 8% of adults had never heard of AI, 19% stated they could explain what AI is in detail, and 53% could provide a partial explanation (ONS, 2023a). However, Wave 3 of PADAI, collected in August-September 2023, reveals only a marginal improvement on Wave 2, with 12% claiming to be able to explain what AI is in detail and 54% claiming to be able to give a partial explanation.

The ONS survey also revealed insights into levels of attainment with respect to aspects of the AI literacy framework proposed by Long and Magerko (2020):

  • 17% of adults reported that they can often or always recognise when they are using AI.
  • 50% of adults reported that they can some of the time or occasionally recognise when they are using AI.
  • 33% of adults reported that they can hardly ever or never recognise when they are using AI.

Figure 8: Opinions and Lifestyle Survey from the Office for National Statistics and Public Attitudes to Data and AI from the Office for AI (ONS, 2023a).

Men (21%), adults aged 16-29 (31%) and adults in mixed or multiple ethnic groups were more likely to report they can always or often recognise when they are using AI. In contrast, adults aged 70 years and over (55%) and those without a degree (39%) reported they can hardly ever or never recognise when they are using AI.

An OPN survey on understanding AI uptake and sentiment among people and businesses in the UK, undertaken in 2023 (ONS, 2023b), reported on levels of daily use of AI (Figure 9). Unsurprisingly, people over 70 years of age report the highest levels of non-use (68%). At 38%, the most commonly reported example of an AI-based product or service was chatbots. Some reservations about these figures should be noted, however, given the difficulties people report in recognising when they are using AI.

Figure 9: Percentage of UK adults reporting daily use of AI by age (ONS, 2023b)

For users of AI-based products and services, one way of framing the AI skills for life gaps that AI literacy programmes will need to address is ‘trust’ in AI. Trust will have a significant impact not only on citizens’ acceptance of AI-based products and services (Bentley, n.d.), but also on their willingness to provide data for commercial and public sector AI-based products and services. The former may slow the growth of the AI-based economy, and the latter may have a significant impact on advancing research in fields such as health (Moulds and Horton, 2023), one of a number of areas where it is hoped that AI will significantly improve productivity. Trust in AI is now an issue because AI techniques have advanced in ways that have led the behaviour of AI systems to become increasingly ‘black boxed’. This makes it difficult – if not impossible – for users to understand their behaviour, and this erodes trust. Consequently, much effort is now being devoted to ‘opening up’ the black box (Du et al., 2019) by developing techniques that aim to provide explanations of how AI-based systems behave that are interpretable by their users.
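The intuition behind such explanation techniques can be illustrated with a deliberately transparent model. For a linear scoring model, each input feature’s contribution to a decision can be read off directly; post-hoc explainability methods aim to produce comparable per-feature attributions for opaque models. The sketch below is illustrative only: the feature names, weights and applicant values are invented, and real explanation methods are considerably more sophisticated.

```python
# Illustrative sketch: for a transparent linear model, an 'explanation' can
# simply be the per-feature contribution to the score. Post-hoc explanation
# techniques for black-box models aim to approximate attributions like these.
# Feature names, weights and values are invented for illustration.

WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_at_address": 0.2}
BIAS = 1.0

def score(applicant: dict) -> float:
    """Linear score: bias plus weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contributions to the score, largest magnitude first."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return dict(sorted(contribs.items(), key=lambda kv: -abs(kv[1])))

applicant = {"income": 3.0, "existing_debt": 2.0, "years_at_address": 5.0}
# The contributions sum (with the bias) to the overall score, so a user can
# see which features pushed the decision up or down and by how much.
```

The AI literacy question is then whether a lay user can make sense of an output like this in context: what the features mean, why one dominates, and what to do if the explanation seems wrong.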

One challenge for AI skills education will be to provide people with the capacity to make sense of these explanations in the contexts in which they are encountered (Procter et al., 2023). It should be noted that the kinds of skills people will need will depend on factors such as the pace at which explainability techniques are able to advance and what steps, if any, government takes to impose standards (e.g., through regulations) for explainability on businesses building AI-based products and services.

A second and perhaps even more important trust-related issue concerns public confidence that businesses use the data the public generates only in ways that are consistent with their expectations of data privacy and security. These concerns were highlighted by the announcement in November 2023 of the award of a contract to an American company to run the NHS data platform (Armstrong, 2023). A nationally representative survey of UK attitudes towards adoption of the Internet of Things has also revealed similar issues around trust in how data may be used (Cannizzaro et al., 2020).

Summary and key takeaways

Unsurprisingly, perhaps, the level of understanding of AI within the UK population is at present low. Surveys show that only a relatively small proportion believes they can explain what AI is in detail, while around 50% believe they can provide a partial explanation. Whether these figures add up to a reliable picture of people’s understanding is unclear, however, as the surveys did not test people’s beliefs; objective measures could be significantly lower.

Of equal concern is that only 17% of people surveyed reported that they can always or often recognise when they are using AI-based products and services. This needs to be addressed, as it is likely to impact on people’s capacity to use AI-based products and services effectively and appropriately, and may also leave them open to harm from criminals seeking to lure them into disclosing sensitive financial information (see Section 11) or from those intent on spreading disinformation.

Finally, while data privacy is a dimension of the AI Work personas, the skills that would enable people to make informed choices about the data they share with businesses are absent from the EDS skills for life tasks list, which makes the case for including data literacy in AI literacy frameworks.

11.2. Key gaps in AI skills for work

According to the Department for Science, Innovation and Technology (2023), AI skills for work can be categorised into three AI personas: Professionals, Leaders and Workers (see Section 8). AI Professionals will possess the skills to design, develop, deploy and maintain AI-based systems and represent the most technically proficient people. These skills are most likely to be acquired through higher education and/or professional training and/or work-based upskilling programmes.

Labour market surveys of the demand for AI work skills are helpful for understanding where the key gaps are emerging in AI Professional skills. However, it can be difficult to compare surveys and draw robust conclusions due to variations in job titles, both between different surveys and over time. The results reported in this section are the latest available at the time of writing. For more recent data on UK employer demand for AI skills, the reader should consult Job Vacancy Analysis (WP4) and AI skills: Employer Survey Findings (WPC).

Figure 10 shows the top ten postings globally for AI Professional skills in 2019 according to Burning Glass Technologies (Alekseeva et al., 2021).

Figure 10: Top ten postings of jobs requiring AI skills in 2019 (Alekseeva et al., 2021).

Figure 11 shows how the share of postings for jobs that map onto the AI Professionals (i.e., data engineer, machine learning engineer) and AI Leader (i.e., product manager, business analyst) personas have changed in the period 2015-2021 (Lightcast, 2023). This points to significant growth since 2015 in demand for the types of skills associated with these two categories relative to those associated with programming, for example.

Figure 11: Emerging job titles 2015 vs 2021 (Lightcast, 2023).

Figure 12: Top AI skills 2012 vs 2021 (Lightcast, 2023).

Figure 12 shows the breakdown of changes in demand for skills in the AI Professionals category in more detail (Lightcast, 2023). This shows how the demand for skills in both established (i.e., machine learning) and emerging (i.e., deep learning) AI technologies increased significantly over the period 2012-2021. Lightcast data on job postings for the period 2021-2023 categorises AI jobs as ‘AI Experts’ (i.e., jobs demanding the most technical skills), ‘AI Specialists’ (i.e., jobs involving the application of AI, such as data analysts) and ‘AI Implementers’ (i.e., business consultants, project managers). Inter alia, the data reveals the ratio of AI Expert, AI Specialist and AI Implementer postings to be 1:2:4, that the top posting for ‘AI Experts’ was data scientist (20%), and that the top skills were Python (68%), Data Science (64%) and Machine Learning (63%).

However, as the most recent Lightcast data only covers the first year following the emergence of – and subsequent surge in commercial interest in – generative pre-trained transformer (GPT) large language models (LLMs), it is likely that the demand for natural language processing skills is now significantly higher than it was in 2021. This conclusion is supported by LinkedIn data from 2023, which reveals that in the period August 2021-July 2023, the number of job posts mentioning AI or generative AI (of which LLMs are an example) more than doubled. Evidence such as this points to the market for AI work skills (and those relating to AI Professionals in particular) being especially volatile.

Lightcast (op. cit.) also found that of the UK’s nine regions and three nations, demand for AI skills is currently highest in Greater London (2.2% of all job postings), followed by Northern Ireland (1.0%) and the South-East (0.9%). This regional pattern is broadly repeated in the Lightcast data for 2021-2023. Regarding UK AI skills gaps, Fearns et al. (2023) reported findings from a DCMS/DSIT funded study, which found there were “potentially at least 178,000 unfilled data specialist roles.” (op. cit., p. 1)

The World Economic Forum Future of Jobs survey (2020) used LinkedIn data to rank employers’ expectations for 2025 of 15 AI skills, ordered by their expectations of where the key gaps will be. The skills most closely associated with the AI Professional Persona (AI, NLP, signal processing, data science, cloud computing, data storage and scientific computing) occupied the top seven positions (Table 6).

LinkedIn has recently predicted that the impact of AI on workplace skills will be such that “the skills needed will have changed by at least 65%” by 2030. The World Economic Forum Jobs Report (2023) found that companies expect that 60% of their workforces will require re-training in skills to utilise AI and big data, and that this ranked third in their training priorities. Manca (2023) used online job postings in the US, UK, Canada, New Zealand and Australia to reveal the occupations for which AI skills are most relevant and how quickly demand for AI skills is diffusing across labour markets. He reported that “AI skills are relevant in a variety of occupations such as computer scientists, directors of information technology and data scientists.”

It is clear from the above that evidence is accumulating for growing demand for skills that map not only onto the AI Professional persona but also onto the AI Worker and AI Leader personas. In the case of the former, according to the World Economic Forum, this signifies the emergence of a “new division of labour between humans, machines and algorithms” (op. cit. p. 29). In the case of the latter, it arguably signifies recognition by businesses that AI is a strategic technology, whose adoption is critical for maintaining competitiveness and which demands AI skills on the part of senior management.

Table 6: Employer expectations of AI skills and gaps globally (World Economic Forum, 2020).

Skills Needed for Data and AI Jobs Skill Gap (1 = gap; 0 = no gap)
Artificial Intelligence 0.90
Natural Language Processing 0.89
Signal Processing 0.85
Data Science 0.81
Cloud Computing 0.73
Data Storage Technologies 0.59
Scientific Computing 0.59
Development Tools 0.27
Computer Networking 0.22
Management Consulting 0.15
Information Management 0.07
Product Marketing 0.00
Digital Marketing 0.00
Advertising 0.00
Customer Experience 0.00
Software Development Life Cycle (SDLC) 0.00

If the labour market for AI skills is tight, organisations will find it difficult to recruit – and SMEs especially may find it impossible – as scarcity of skills drives up salaries. The solution must be to retrain and upskill employees (Mutebi and McAlary, 2021).

Employees with data science and analytic skills are in high demand, so many employers are looking to cultivate existing talent through long-term training and development. (PwC, 2018).

Organisations may find it equally difficult to retain skilled employees unless they can provide incentives to stay, which may act as a disincentive to offer employees training in AI, especially in the case of SMEs (Dabhi et al., 2021). One way to counter this risk is for organisations to develop internal career pathways (Williams and Procter, 1998) that would enable AI workers to acquire the skills that will enable them to progress to AI professional roles and AI professionals to progress to AI leader roles. However, this can be more challenging for SMEs. Recent reviews confirm that organisations will need to adapt structurally and culturally if they are to meet the challenges of retaining AI skills. The report by PwC (op. cit.) stressed the importance of organisations focusing now and in the future on people and culture (op. cit., p.63):

At the same time, as adoption of AI accelerates, skills like creativity, leadership, and emotional intelligence will continue to be at a premium. It’s important to prepare for a hybrid workforce in which AI and human beings work side-by-side. The challenge isn’t just ensuring you have the right systems in place, but judging what role your people will play in this new model. People will need to be responsible for determining the strategic application of AI and providing challenge and oversight to decisions.

This analysis is reinforced by findings from the World Economic Forum (2020, 2023), which documented the top 15 skills for 2025 as identified by employers (Table 7) and observed that twelve of these skills are not technical but ‘transversal’, reflecting an expansion of demand for AI skills that map onto the AI Leaders persona.

Table 7: Top 15 skills for 2025 identified by employers globally (World Economic Forum, 2020).

Top 15 Skills for 2025 Rank
Analytical thinking and innovation 1
Active learning and learning strategies 2
Complex problem-solving 3
Critical thinking and analysis 4
Creativity, originality and initiative 5
Leadership and social influence 6
Technology use, monitoring and control 7
Technology design and programming 8
Resilience, stress tolerance and flexibility 9
Reasoning, problem-solving and ideation 10
Emotional intelligence 11
Troubleshooting and user experience 12
Service orientation 13
Systems analysis and evaluation 14
Persuasion and negotiation 15

Recent World Economic Forum surveys also report expectations of increasing adoption of AI in most sectors of the economy (World Economic Forum, 2020; World Economic Forum, 2023). As a consequence, the demand for AI Workers Persona skills is being felt within a wider set of occupations (Manca, 2023), the nature of these skills is changing and ‘technology literacy’ is reported to be the fastest growing core skill. Technological literacy is defined as the ability to use, manage, evaluate, and understand technology (International Technology Education Association, 2006) and is therefore shaped by the technologies that are organisationally significant in any given period. Currently, this would certainly include digital literacy but the extent to which it implies AI literacy as well is not clear.

For the UK specifically, organisations surveyed on expected growth in key roles for business transformation in the next 5 years (World Economic Forum, 2023) reported as follows: AI and machine learning specialists 42%; data analysts and scientists 35%; and business development professionals 20%. Analytical thinking – “a higher order thinking skill, as the ability to differentiate between existing facts and opinions by analyzing their strengths and weaknesses, to analyse data, and to develop thinking capacity and use information effectively by reasoning” (Amer, 2005) – continues to be the top skill.

What is evident from these reports is that the range of skills required to deliver an AI project is becoming more diverse (Kelnar and Kostadinov, 2020):

In addition to technical skills, increasingly AI practitioners must have: domain knowledge, to interpret data appropriately and provide relevant recommendations; engineering experience, to develop solutions that work in the real world as well as the laboratory; commercial experience, to develop and manage AI teams.

Evidence submitted to the House of Lords Select Committee on AI by the Royal Academy of Engineering stated (Select Committee on AI, 2018):

There was a skills gap for people who can work with an AI system but are not AI experts. These people understand the potential of the technology and its limitations and can see how it might be used in business but are not in a position to advance the state of the art.

Henke et al. (2018) have argued for the importance of the ‘analytics translator’ for successful AI projects. There are clear parallels between the definition of the analytics translator and the AI Leader persona:

Translators are neither data architects nor data engineers. They’re not even necessarily dedicated analytics professionals, and they don’t possess deep technical expertise in programming or modelling. Instead, translators play a critical role in bridging the technical expertise of data engineers and data scientists with the operational expertise of marketing, supply chain, manufacturing, risk, and other frontline managers. In their role, translators help ensure that the deep insights generated through sophisticated analytics translate into impact at scale in an organization.

These results not only endorse earlier findings (Bradshaw et al., 2018) but suggest that, as the adoption of AI-based products and services grows, the demand for translational or transversal skills will accelerate and is in urgent need of being addressed. In addition to enabling businesses to apply AI strategically, a particularly important role the analytics translator might play is overseeing procedures for mitigating the risks that AI-based systems may introduce if ethical issues are ignored. Examples include models built using biased datasets that reproduce historic discrimination against ethnic minorities, as in predictive policing (Lartey, 2016), or models applied inappropriately, as in facial recognition systems used to track people’s movements (Ferguson, 2020). Organisations that plan to exploit AI are beginning to recognise the importance of anticipating these risks and acting to minimise them, which means that AI Professionals and AI Leaders must have a detailed understanding of AI ethics and safety issues. Businesses are beginning to realise the importance of paying attention to the ethics of AI if the AI sector is to reach its potential (Perspective Economics, 2023, p. 48):

The ethics issue is the biggest blocker to wide-scale adoption. Most stories in the press are negative. It is a comms issue as well as a framework issue – people need to understand the benefits.

A key element of these skills will be applying, and demonstrating compliance with, the growing range of regulations and standards on AI ethics and safety that have come, or will soon come, into force, such as the EU AI Act (EU AI Act, 2023). There are significant gaps between the supply of and demand for these skills, which include: a) understanding which regulations apply in a particular sector and application context; and b) taking the appropriate practical measures to comply with them. These gaps are even more severe than those for other AI skills (Procter and Guy, 2023) and are likely to grow in the next five years as “AI ethics education has not yet fully taken root in the computing curriculum.” (Borenstein and Howard, 2021). Finally, there will be demand for people in policymaking roles with the necessary skills to draft, review and revise AI regulations (Rismani and Mood, 2023).

It is also essential that AI Professionals have the skills to ensure that: a) AI-based products and services are secure against threats such as adversarial attacks (Ren et al., 2020); b) the whole development process is secure against malicious attacks; and c) the infrastructure supporting the process (data and compute) is not vulnerable to cyber-attacks that could compromise models or exfiltrate key resources, such as data and the models themselves (Sabir et al., 2021).

In past waves of IT-driven business innovation, recurrent staff shortages have led to high levels of occupational mobility. In response, employers have been forced to offer attractive salaries and working conditions to attract and retain scarce staff. Regarding retention, previous studies suggest that in-house career opportunities may become an important resource and, through the operation of strong internal labour markets, may be a crucial means by which expert labour is motivated and retained (Williams and Procter, 1998).

Regarding the AI Workers persona, evidence for skills gaps can be found in several recent reports. A 2023 report by Skillsoft found that employers rated the difficulty of hiring staff with analytics, big data and data science skills as second only to cybersecurity. In 2023 the World Economic Forum reported that training in AI and big data ranked third among business training priorities for the next 5 years (World Economic Forum, 2023). Business concerns about training appear to be matched by those of their employees. A study by Lloyds (2023) surveyed employees’ expectations of the skills they will need in the future. It found that, overall, 40% of the UK labour force aged 18+ are considering learning new digital skills, with the figures skewed by gender (45% of males vs 35% of females) and age (57% of those aged 18-24 vs just 27% of those aged 55-64+). Of this 40%, 27% are considering learning data analysis skills (e.g., data science, data visualisation, statistical software, machine learning, AI) or productivity software skills (e.g., Microsoft Office, SAP, Oracle). The figures for males and females are consistent with studies that have revealed a significant gender gap among AI Professionals. Figure 13 shows the results of a LinkedIn study from 2019 (LinkedIn, 2019).

Closing this gap as part of Equality, Diversity and Inclusion (EDI) efforts is critical for the creation of more diverse AI teams (Fosch-Villaronga and Poulsen, 2022), which, in turn, is argued to be essential for reducing the risk that AI-based products and services will introduce (whether unintentionally or by design) biases that reproduce – or even increase – existing socio-economic and ethnic inequalities (Cachat-Rosset and Klarsfield, 2023; Shiohira, 2021). A recent UK example of the harms that biased data may lead to was the use of statistical algorithms to predict student grades during the COVID-19 pandemic, in which students in historically underperforming schools and demographics were penalised (Smith, 2020).

Figure 13: Gender gap among UK AI professionals compared with EU average (LinkedIn, 2019).

Summary and key takeaways

Significant gaps in the availability of people with AI Worker, Professional and Leader skill sets have been evident for several years in the UK and globally, and demand for each is expected to rise dramatically over the next five years. One consequence is that businesses and public sector organisations may find it increasingly difficult to recruit and then hold on to their employees.

Assuming that predictions of the growth in demand are reliable, it is unlikely that demand will be satisfied by the efforts already in place in the UK to prime the tertiary AI skills pipeline (Section 12). These will need to be complemented by employers themselves investing more heavily in employee training and paying attention to the provision of career pathways for AI workers and professionals.

The available evidence suggests that the gender gap remains a significant concern and will need to be closely monitored, along with changes in other AI skills gaps.

Education programmes, especially those at the tertiary level, will also need to be alert to the increasing diversity in the range of AI skills businesses will require as AI technologies continue to advance rapidly, as regulatory measures, such as the EU AI Act, begin to take effect, and as businesses increase their efforts to improve EDI within their workforces. In particular, translational skills for understanding how businesses can employ AI strategically, while ensuring AI-based products and services are compliant with ethics regulations, will be in high demand. These skills are, as yet, only poorly defined and action should be taken to address this as soon as possible.

12. Pace of AI innovation and its implications for AI skills

AI is a rapidly advancing set of technologies, with increasingly powerful techniques and tools being announced on a regular basis. This has several implications for the demand for AI skills and for organisational responses to it.

Reports in the past five years have been unanimous that, globally, the gap between demand and supply of AI Professionals is significant, and the most likely impact of the growing pace of innovation is that this gap will increase. In 2019, cloud-based AI company Peltarion surveyed UK and Nordic firms about the impact of the AI skills shortage. It reported that 83 percent of the AI decision-makers surveyed said a deep learning skills shortage was hampering business productivity and competitiveness, 49 percent said AI projects had been delayed due to the gap, while 44 percent said the shortage was preventing further investment in the technology. A more recent report based on survey data has confirmed skills gaps across all sectors in the UK, and “the lack of a suitably robust skills pipeline and limited AI related knowledge and capability across all levels of organisation structures.” (Perspective Economics, 2023, p. 44). The lack of access to talent is limiting the rate at which firms can grow. The report also notes that the “competition for talent” arising from the AI skills gap is especially challenging for SMEs.

People occupying many of the roles within the AI work-related personas sketched above will need to keep abreast of advances in AI technologies. For AI Professionals, these may include new types of AI models and new tools to support stages of the AI development pipeline, such as data wrangling (Petricek et al., 2022) and AutoML (Gijsbers et al., 2022), as well as advances in compute infrastructure, such as specialised hardware (e.g., Google Tensor processor) and cloud architectures (e.g., Amazon EC2).

AI will also become accessible to less specialised developers over time. Development environments for new technologies tend towards higher levels of abstraction over time (few developers program in assembly language today). AI is following this pattern. (Kelnar and Kostadinov, 2022)

More ambitiously, techniques such as automated machine learning (AutoML) aim to automate the creation of AI development pipelines and accelerate the development process (Gijsbers et al., 2022). It is important that AutoML systems have ‘guardrails’ in place and that users have the skills to make sure they are effective in ensuring that key AI model requirements (e.g., fairness) are not violated (Sarkar et al., 2023).
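The guardrail idea can be illustrated with a minimal sketch in Python. It is not based on any real AutoML library: `evaluate`, `automl_search` and the capacity/disparity numbers are all hypothetical stand-ins, chosen only to show how a guardrail can veto the most accurate candidate configuration if it violates a fairness-style constraint.

```python
# Illustrative sketch only: an AutoML-style search over candidate model
# configurations, with a 'guardrail' that rejects any candidate whose
# (toy) group disparity exceeds a threshold, however accurate it is.

def evaluate(config):
    """Toy stand-in for training and scoring a model pipeline."""
    # Pretend accuracy improves with model capacity, but so does the
    # disparity in error rates between demographic groups.
    accuracy = 0.70 + 0.05 * config["capacity"]
    disparity = 0.02 * config["capacity"]
    return accuracy, disparity

def automl_search(configs, max_disparity=0.07):
    """Return the most accurate configuration that passes the guardrail."""
    best = None
    for config in configs:
        accuracy, disparity = evaluate(config)
        if disparity > max_disparity:   # guardrail: fairness requirement
            continue                    # reject, regardless of accuracy
        if best is None or accuracy > best[1]:
            best = (config, accuracy)
    return best

candidates = [{"capacity": c} for c in range(1, 6)]
winner, score = automl_search(candidates)
print(winner, round(score, 2))
```

With these toy numbers, the highest-capacity candidates are the most accurate but fail the disparity guardrail, so the search settles on a lower-capacity configuration; the skill Sarkar et al. point to is knowing how to set and audit such thresholds.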

While some of the tasks outlined above may become automated through the application of AI technologies (Wang et al., 2019), it is more likely that organisations will find themselves having to provide training support in ways that those with heavy investment in digital technologies have so far not found necessary (Almansour, 2023). This may be particularly challenging as advances in AI techniques rapidly become commodified and widely available for use, as for example, in the case of ChatGPT (McKelvey and Hunt, 2023). This commodification of AI represents a step change from previous generations of digital innovation, where commodification took place at a slower pace, which gave organisations more time to adapt, to identify and then put in place measures to meet their skill requirements.

A particularly noteworthy case of recent AI innovation is generative AI, which is based on ‘foundation models’ that are trained on extensive, diverse and unlabelled data (Rawte et al., 2023) and can produce realistic outputs in the form of text, images and video. Generative AI is distinct from previous generations of AI in one critically important way: it is general purpose and can be used in many different application domains with only modest additional effort in fine-tuning (Gozalo-Brizuela and Garrido-Merchan, 2023). The likely consequences are twofold. First, not only will demand for AI skills increase, but new types of AI skills will also be required. Second, the so-called ‘reinstatement effect’ (Lanamaki et al., 2024) will come into play, whereby AI creates new jobs by generating demand for new products and services. The latter may not directly require new AI skills but, instead, the re-training of those whose jobs disappear because of the adoption of AI. Hence, it will be critically important for organisations of all kinds to develop and adopt a learning culture:

The average skill has a lifespan of just 2.5 years, and we expect to see 40% of core skills change over the next five years.

One of the best-known examples of foundation models is the generative large language model (LLM), which is capable of creating realistic text and became widely available as recently as late 2022. LLMs can perform a range of tasks associated with the manipulation of qualitative data, such as text extraction, clustering and summarisation – tasks of critical importance in every sector of the economy, public administration and research, and which have hitherto been the preserve of human skillsets. At present, opinions differ as to the impact of generative AI on jobs (Department for Education, 2023b; Gmyrek et al., 2023), including which sectors are more likely to experience job automation (e.g., administrative and clerical roles) and which job augmentation (e.g., professional and business services) following the adoption of generative AI (PwC, 2021).

Generative LLMs can be wrapped within a user-friendly, conversational interface or chatbot (e.g., ChatGPT), and these are increasingly finding their way into a range of products and services for use not only as part of business processes but also by the general public (Alan Turing Institute, 2023). Current examples of LLM-based chatbot-style tools include internet search, where Google and Microsoft Bing now offer LLM-generated summaries of their search results, and customer service support.

LLM-based chatbots provide a good example of where AI can be applied to augment existing jobs. However, their ease of use disguises risks to which the unwary user might fall foul. One of these is the tendency of LLMs to ‘hallucinate’, i.e., to “generate content that is not based on factual or accurate information” (Rawte et al., 2023). Hallucinations are a risk that businesses should take seriously and take steps to reduce their exposure to. Hence, despite – or perhaps because of the fact that – chatbots are designed to make their use simple and straightforward, evidence is growing that employees will need new skills if they are to use them well and avoid some of the more basic errors (Federiakin et al., 2024). ‘Prompt literacy’ is defined as “the ability and skill to generate precise input for generative AI, interpret the output, and modify prompts to achieve desired results” (Hwang, 2023; Hwang et al., 2023). Beyond the workplace, prompt literacy is already seen as a key skill students will need to progress through educational systems that are now beginning to exploit AI to accelerate learning (Gattupali et al., 2023).

‘Prompt engineering’ is the process of structuring words in ways that can be interpreted and understood by a generative AI model (Whiting and Phelps, 2023; White et al., 2023). In a recent report for Ipsos-MORI, Legg et al. (2023) stated:

Creating quality prompts is an art that requires substantive domain knowledge, as well as understanding the nature of questions alongside knowledge of the different AI platforms. Ipsos believes that the combination of prompt engineering with domain knowledge, high-quality data, and AI models trained on research frameworks will birth a new scientific approach: Iterative Sciences.

Prompt engineering therefore requires a more technical skill set than prompt literacy, as well as domain knowledge. It will involve, for example, being able to define prompt patterns that are matched to specific processes within a business that would be made available – and perhaps necessary – for employees to use (White et al., 2023). This suggests that a new category of prompt engineer may begin to feature in AI job postings from 2024 (Xu, 2024).
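One way to see this division of labour is that the prompt engineer defines reusable prompt patterns which prompt-literate employees then fill in for specific tasks. The sketch below is purely illustrative: the pattern text, field names and `build_prompt` helper are hypothetical, not drawn from any real product or from the sources cited above.

```python
# Illustrative sketch only: a reusable prompt 'pattern' defined by a prompt
# engineer, to be filled in by employees for a specific business process.
# The pattern wording and field names are hypothetical.

from string import Template

SUMMARISE_PATTERN = Template(
    "You are an assistant for the $team team.\n"
    "Summarise the following customer message in at most $max_words words, "
    "flagging any complaint that may need escalation.\n"
    "Message: $message"
)

REQUIRED_FIELDS = {"team", "max_words", "message"}

def build_prompt(**fields):
    """Fill the pattern, refusing to emit a prompt with missing fields."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return SUMMARISE_PATTERN.substitute(**fields)

prompt = build_prompt(team="billing", max_words=50,
                      message="I was charged twice this month.")
print(prompt)
```

A pattern like this could be published internally alongside guidance on interpreting and verifying the model’s output – the prompt-literacy side of the division.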

Another risk posed by these advances in AI is that they may be weaponised by bad actors in increasingly sophisticated cyberattacks (Renaud et al., 2023) and disinformation campaigns. The threat disinformation poses to public safety was highlighted during the COVID-19 pandemic (Balakrishnan et al., 2022). The past five years have also seen the emergence of ‘deepfakes’ – hyper-realistic video, audio and images (Veeriah, 2021; Shahzad and Khan, 2022; Mustak et al., 2023). Deepfakes are rated as one of the biggest security threats faced by businesses and individuals (Mai et al., 2023) and by democratic societies (Colomina et al., 2021). A challenge of critical importance for AI literacy programmes will be to educate citizens in cybersecurity best practices (Knijnenburg et al., 2023) and equip them with the skills to critically assess information online (Tan, 2022; Relmasira et al., 2023; Tiernan et al., 2023; Walker et al., 2023). Some recently published evaluations by the UK AI Security Institute (AISI) reveal some of the risks posed by the current generation of LLMs. Finally, LLMs are themselves tempting targets for bad actors who may attempt to find and exploit their vulnerabilities. Businesses that provide generative LLM-based products and services, and those that choose to take these technologies in-house and build their own tools, will need to ensure that they have access to the cybersecurity skills required to defend them against such threats (Marulli et al., 2024; McIntosh et al., 2024).

When considering scenarios for future AI literacy programmes, it will be important to consider four distinct factors.

First, the pace of innovation in AI demands that AI skills frameworks be updated on a regular basis if they are to remain fit for purpose for directing the continuing development of educational curricula and programmes. Some researchers are already arguing for the need to define competencies and skills specific to ‘Generative AI literacy’ (Annapureddy et al., 2024).

Second, forecasting the demand for AI skills will be difficult as patterns of adoption of AI in the short term may be quite different from those in the longer term as existing AI technologies mature and new innovations appear (Keep, 2021).

Third, and related, is the arrival and rapid uptake of generative AI, for which evidence of its potential impact on AI skills is rapidly accumulating (Sofia et al., 2023). At the same time, this has re-ignited debates about the prospects for the imminent arrival of Artificial General Intelligence (AGI) (Bubeck et al., 2023) and its implications for jobs and warnings about AGI as an existential threat (Mclean et al., 2023). Regarding the former, Bubeck et al. (op. cit.) observed that if and when AGI arrives, it

will challenge the traditional notions and assumptions about the relative expertise of humans and machines in many roles, ranging across numerous professional and scholarly fields.

In this view, jobs that currently are expected to involve human-AI collaboration may become fully automated, although the extent to which this happens is likely to depend on public acceptance of removing humans from the decision-making loop and whether this becomes subject to regulation. Nevertheless, this should serve as a reminder that a more general challenge for educational policies and programmes is not only to help employees to adapt their skills for a future where their jobs are significantly changed by AI, but also to support employees whose jobs may be eliminated (Hunt et al., 2022) to transition into alternative occupations, including ones that may be created by the adoption of AI (Johnson et al., 2020; Wang et al., 2023).

Fourth is the prospect of future government responses to AI technologies already available; examples include regulations compelling social media platforms to moderate content posted on them, or mandating that providers of AI-based systems put in place ‘guardrails’ that give users effective protection from exposure to risks (Gasser and Mayer-Schonberger, 2023; Menz et al., 2023; Smith et al., 2023).

Summary and key takeaways

The rapid pace of innovation in AI technologies shows no signs of slowing down and is likely to increase the demand for AI skills, both in terms of sheer numbers and in terms of demand for new kinds of skills, such as prompt literacy and prompt engineering.

As generative AI evolves and matures, this is likely to significantly increase the rate of diffusion of AI products and services throughout the commercial and the public sectors. Evidence for the impact of this can be found in the growth in the availability of commodified AI products and services, which is likely to give employers even less time to adapt their AI training programmes if they wish to remain competitive.

These demands can only be met if government and employers are able to scale up their training efforts. Government will need to expand its support for the AI skills pipeline at secondary and tertiary levels and invest in the provision of lifelong learning opportunities, while ensuring that demand for new skills is reflected in updated AI skills frameworks.

Finally, with an average skill lifespan of 2.5 years, increasing responsibility must inevitably fall on employers to create a learning culture that will ensure employees can adapt and upskill themselves.

13. AI education policy recommendations

13.1. AI skills for life

In a report following the 2020 International forum on AI and the Futures of Education, Miao and Holmes (2021a, p. 6) observed:

Countries are increasingly realizing the importance of developing AI Literacy among school students. For this, AI needs to be included in school curricula as an integral component of digital literacies, and alongside existing core competencies such as language and mathematics. AI Literacy also needs to be developed by all young people and citizens through lifelong learning programmes.

Ng et al. (2022b) conducted a systematic review of 49 articles on the development of conceptual frameworks for AI literacy from 2000 to 2020, the subsequent emergence of national AI strategic plans and studies of the effectiveness of educational programmes then launched to implement them. They observe that:

Especially for beginning learners, AI concepts can be difficult for primary and secondary students, as well as non-computer science students, to grasp because of knowledge disconnects between the AI concepts and their daily experiences.

Subsequently, ‘age-appropriate’ pedagogies that lower these barriers have appeared as a way of addressing this problem.

Casal-Otero et al. (2023) conducted a systematic review of 179 articles on AI literacy programmes, of which nearly one third were written by US authors and just two by UK authors. They concluded (Casal-Otero et al. 2023, p. 13):

Firstly, AI literacy should be based on an interdisciplinary and competency-based approach and integrated into the school curriculum… Secondly, AI literacy should be leveraged to extend and enhance learning in curricular subjects. As a final point, AI literacy must prioritize the competency of teachers and their active participation in the co-design of didactic proposals, together with pedagogues and AI experts.

Su et al. (2022) provided a set of recommendations on how governments, researchers and educators could build a widely accepted and age-appropriate AI curriculum for all K-12 learners in the Asia-Pacific region. Regarding curricula design for AI literacy, Casal-Otero et al. (2023, p. 13) argue:

There is no need to include a new AI subject in the curriculum, but rather to build on the competencies and content of disciplinary subjects and then integrate AI literacy into those subjects.

A 2019 Royal Society report (Royal Society, 2019) echoed this view:

Citizens of the future will need to be comfortable with the application of data science to societally pressing questions. This calls for data science skills to be thoroughly integrated into the school curriculum too. Data science and engineering are growing fast and broadening in scope. No longer just the preserve of highly technical STEM- or finance-orientated roles in London, data science increasingly pervades modern business, scientific endeavour and public affairs.

Miao and Holmes (op. cit., p. 5) emphasised the importance of citizens understanding how to control AI and to ensure it is used for the common good:

In particular, it is essential that humans are protected from becoming victims of AI tools, and that AI is used to augment and amplify human capacities, not to replace them. This begins in education. Once the key human and technical AI competencies have been identified, school systems need to ensure that all students are well prepared for a world in which AI is ubiquitous…

Miao and Holmes (op. cit., p. 6) concluded that if this is to be achieved, then AI literacy must strive for a balance between human-oriented and technology-oriented competencies:

The human-oriented competencies centre on the past, present, and possible futures of AI, the uniqueness of humans, the ethics of AI and its social impact, together with data justice and regulation. Technology-oriented competencies, on the other hand, centre on AI techniques, technologies and their applications, and include the advanced AI knowledge and skills needed to create, manipulate, implement, and interpret AI. Accordingly, the teaching of AI Literacy needs to adopt both a subject-specific and an interdisciplinary approach.

Miao and Holmes (op. cit.) argued that the ambition of AI education programmes should be AI literacy for all, and that it should be integral to digital literacy alongside core STEM competencies, as the latter are also integral to AI literacy. However, if gaps in digital literacy and STEM attainment persist this will not be achieved, and AI may even exacerbate these inequalities (Miao et al., 2021b). In a systematic review of fifteen English language publications on computing education for K-12 students, Martins and Gresse von Wangenheim (2023) found that students from low socio-economic status backgrounds lack opportunities to learn about computing, a situation that, if not addressed, will impact on their opportunities to acquire AI literacy skills.

Efforts to close digital literacy and STEM attainment gaps between males and females and between various socio-demographic groups and regions will need to be significantly increased if the current gaps and imbalances are not to be carried over into AI literacy. The lack of success so far in closing educational attainment gaps between students from more and less privileged backgrounds must therefore be a concern (Farquharson et al., 2022). Fearns et al. (2023) noted the reservations of the Lords Science and Technology Committee’s 2022 inquiry that the UK Government’s initiatives might not be enough to address the UK’s STEM skills shortage. Others argue that closing digital literacy and STEM gaps may not, in itself, guarantee that they will not re-appear in AI literacy (Cachat-Rosset and Klarsfeld, 2023, p. 21):

If AI literacy begins to be a subject of research and recommendations… such research and recommendations should also explore the technological distance and literacy from diverse economic and cultural contexts.

Miao and Holmes (op. cit.) also emphasised that the pace of advances in AI makes the provision of lifelong learning programmes for both young people after they exit formal education programmes and for older citizens very important. While the focus of AI literacy programmes has tended to be on younger people, older people must also be catered for (Loos and Ivan, 2023), perhaps more so given they are less digitally literate than younger people (Moore and Hancock, 2022). Not least of the challenges will be ensuring that curricula and learning opportunities match the rapid pace of innovation in AI technologies (Bacalja et al., 2022) and the myriad ways in which people are likely to encounter AI in their everyday lives now and in the future. Many online platforms offer free or affordable courses on AI basics, ethical considerations, and more technical skills. Lists of courses for those seeking a career change, for example, can be found at sites such as careerfoundry.com.

Bentley et al. (n.d.) found that the six countries they studied (China, US, Singapore, Sweden, Australia, Canada) are concerned about the AI skills gap and AI skills shortages, but so far few are specifically targeting continuing education or professional programmes. They concluded by recommending (op. cit., p. 75):

developing AI curricula in schools at all levels to foster lifelong learning, critical thinking, creativity, and emotional intelligence to navigate an AI-dominated world.

Keep (2022, p. 12) emphasises that the success of lifelong learning programmes is founded on people’s very earliest education experiences:

The importance of initial education in laying the foundations for subsequent lifelong learning (LLL) cannot be overstated. Unless initial schooling (up to upper secondary level) imparts basic skills such as literacy, numeracy and digital literacy, as well as an appetite for further learning, adult learning strategies and the re- and upskilling of the workforce will be founded on shifting sands.

Summary and key takeaways

There is broad agreement among educational researchers globally of the importance of introducing AI literacy skills into primary and secondary education, not as its own subject, but through an interdisciplinary approach that involves integration into existing subjects.

Maximising the benefits and minimising the risks of AI means that educational curricula must be designed in ways that will deliver on the goal of AI literacy for all. It is therefore especially important that key AI concepts and subject matter are presented in age-appropriate ways. For this, ensuring the competency of teachers in AI and involving them in curriculum development will be vital. For the UK, one of the challenges will be to close existing digital literacy and STEM gaps and the evidence suggests that existing initiatives may not be enough.

Finally, with advances in AI set to continue for the foreseeable future, agile curriculum development and greater investment in lifelong learning programmes will be essential if citizens are to remain capable of using AI products and services effectively and safely.

13.2. AI skills for work

Laupichler et al. (2022) carried out a review of thirty research articles published between 2016 and 2022 on AI literacy in global higher and adult education. They identified several teaching formats and pedagogical structures, including the so-called ‘flipped classroom’ where students read materials on AI outside of class and then work on projects or discuss their findings in class. Their conclusions are that more research is needed to define AI literacy in higher and adult education, what content should be taught to non-experts and identified the need for further refinement of concepts and materials. Further, none of the thirty studies documented initiatives that reached across disciplinary boundaries.

Southworth et al. (2023) discuss pathways to address these gaps and integrate AI across the higher education curriculum in the USA, where “… all students are provided with a suite of AI opportunities and are encouraged to engage.” (op. cit., p. 1) The model they propose (Figure 14) is based on Ng et al.’s (2021b) framework, and includes a summary of the skills associated with each category:

  • Know and understand AI, including knowledge of algorithms, the use of data in training and its limitations, including the potential for biases.
  • Use AI, including the ability to use AI to solve problems, which may involve coding skills and working with large datasets.
  • Evaluate and create AI, including the ability to assess the quality and reliability of AI systems, and to design and build systems that are ethical and fair.
  • AI ethics, including understanding the moral and ethical implications of AI, fairness, transparency, accountability and the potential impact on individuals and society.

It also features the addition of a fifth category, Enabling AI, for courses that “support AI through related knowledge and skill development…” (op. cit., p. 6).

Figure 14: AI course literacy categories linked to competencies (Southworth et al., 2023)

The gap between supply and demand for the skills possessed by AI professionals is well documented, but while much remains to be done, it has triggered a significant response in the UK higher education sector. For example, the UCAS website lists 487 undergraduate and 183 postgraduate courses that feature AI for entry in 2024-25. Of the former, 10 are part-time/flexible and one is a distance learning course. Of the latter, 80 are part-time/flexible and 8 are distance learning courses. Similar numbers of courses in data science are available, but it is very likely that many of these are included in the figures for AI courses. Nevertheless, these figures indicate an increase of ~50% in both undergraduate and postgraduate provision since 2020. In Scotland, the Data Lab has supported over 1,000 students in developing skills in data science and AI since 2015.

In the UK, UKRI created 16 Centres for Doctoral Training (CDTs) in Artificial Intelligence in 2019, resulting in the training of 1,000 PhD students, and this was boosted in October 2023 by the announcement of £118 million to fund CDTs and scholarships. Recognition of the current lack of diversity in the AI sector has led to the UK government offering, in 2023-25, scholarships worth £10K for conversion courses to students from underrepresented groups. In Scotland, £1.3m has been invested in PhDs in data science and AI, co-funded with industry. Whether these initiatives will be sufficient to meet demand for AI skills is unclear. The UK government’s own reviews suggest that they will not, a conclusion backed up by a report by SAS (2022), a multi-national developer of data analytics and AI software, which warned that the UK is “sleepwalking” into a major AI skills crisis.

An additional concern expressed by employers is that the content of an academic degree course is not appropriate to industry needs. A 2019 review of AI education in Northern Ireland found that both multi-nationals and local SMEs were concerned that many current academic courses in AI focused on the production of ‘AI researchers’ rather than ‘AI engineers’ (Guy and Procter, 2019).

One alternative to which employers can turn is the numerous courses now offered by commercial providers, many of which are accessible online and targeted at employers and employees seeking to reskill, upskill and establish an AI talent pool from within. According to a survey of 2,500 adults by Lloyds (2023), two thirds of UK employees find the workplace the easiest place to learn. However, as noted later, UK employers’ overall investment in training has been falling for over two decades. With only 7% of UK employees reported as having received training in AI in 2023, it does not seem that training in AI skills is bucking this trend. Should employers and employees decide to take this option, the rapid growth in courses makes quality assurance and adherence to standards very important. In the UK, professional bodies, such as the BCS, have taken steps to address this need by offering lists of accredited courses.

To help meet this need, Hall and Pesenti (2017) recommended that universities should develop “credit-bearing AI MOOCs (Massive Open Online Courses) and online continuing professional development courses leading to MScs for people with STEM qualifications.” According to a 2019 Royal Society report (Royal Society, 2019):

Options such as MOOCs should be considered as a vehicle for developing skills ranging from informed users through to expert data engineers.

In the period since, an increasing number of UK universities have responded by offering a selection of online courses at various levels, including postgraduate, as well as shorter courses, targeted at training in a range of AI skills.

In November 2023, the UK government announced guidance:

to help employers boost their employees’ understanding of AI so they can use it safely in their day-to-day role, by setting out the key knowledge, skills and behaviours they should have in order to reap the benefits of AI safely…

Expectations of the breadth of AI impact were reflected in a 2019 British Computer Society report that recommended a broad range of humanities and social science courses should have a significant quantitative component. It suggested courses such as a BA in Social Sciences with Quantitative Methods (British Computer Society, 2019):

Our conclusion combines that idea with the work being done by the Q-Step pilot programme, which was developed as a strategic response to the shortage of quantitatively skilled social science graduates.

The report went on to say that MSc courses should (British Computer Society, 2019):

Develop and maintain ethical and professional AI standards against which MSc graduates can evidence they have gained the appropriate level of skills required to contribute to the design, development, deployment, management and maintenance of AI products and services.

In his study of AI labour markets in Australia, Canada, New Zealand, the United Kingdom and the United States, Manca (2023, p. 13) observed:

As AI becomes increasingly more mainstream in productive processes and across labour market demands, countries will need to put additional efforts to supply effective training opportunities to individuals to benefit from the gains that this new technology can bring. Particular attention should be paid to spur the development of high-level cognitive skills, complementary to further adoption of AI in jobs.

The remarks about the impact of digital skills gaps on AI skills for life hold equally well for AI skills for work. Lloyds and FutureDotNow (op. cit.) argue that a key part of any strategy to address the digital deficit will be for businesses to take collective action (see Table 8) to address its “hidden middle”, i.e., the significant proportion of the UK adult population who lack EDS foundation skills (Section 9).

Table 8: Five actions businesses should take to address EDS Work skills gaps (Lloyds and FutureDotNow, 2023).

De-risk your strategy

In many organisations, digital is driving the business strategy. But data suggests there will be people in your organisation and customer base who are new to the digital basics; they are often hidden in plain sight. Do you know your hidden middle? Is this recognised as a risk to your business strategy?

• Identify the key stakeholders and share the headline data in this report. This could be with HR, Technology, Digital, Customer Service or Business Transformation teams.
• Digital confidence and capability are fundamental for performance, productivity and transformation. Create awareness of the skills and confidence gap and help teams to acknowledge and mitigate this risk.

Quantify your skills gap

Do you know which skills are lacking in your organisation, and where the gap is most pronounced? Creating a baseline is a good place to start and helps you measure future impact.

• Use the demographic data within this report, such as sector, region and organisation size, to group the likely skills gaps in your organisation. Use this to identify the groups most likely to benefit from support.
• Want an even more granular view? Run a short survey of your workforce, based on the digital tasks most lacking (or all twenty work tasks), to find where gaps are most prevalent and where there are pockets of expertise. There may be digital champions who can lend a hand to their peers.

Target the top ten

These are the tasks most commonly missing across the UK labour force. Productivity skills are number one, and four of the top ten relate to Being Safe and Legal Online. Acting on this intelligence is a significant opportunity to reduce business risk while helping individuals build safer digital practices.

• Prioritise cybersecurity training to mitigate risks around data, viruses and other online threats.
• Look for ways to embed learning opportunities into existing processes so you and your teams know the risks and are confident and protected when navigating the digital world.
• Encourage your teams to experiment with productivity tools such as Trello, Microsoft Planner or Slack. Who has natural skills here? How can you support them to share with others?

Design inclusively

As with the level of skills, the nature of individual learning styles and support needs will differ. Create varied learning opportunities that reflect the diversity of your workforce.

• Consider the demographics of your teams to identify the skills they are likely missing. What worries or concerns may hold people back? Aim to build confidence as well as capability. Engage your people in the design of your programmes.
• Don’t settle on a single learning pathway. People learn in different ways, so providing a mixture of online, self-paced learning, in-person workshops and peer-to-peer coaching will make upskilling accessible for many more people.

Be a part of the movement

FutureDotNow is a coalition of industry leaders focused on closing the digital skills gap for working age adults. It is free to join and there are no fees.

• Join the coalition to learn from each other’s experiences. Access resources such as the FutureDotNow Playbook and Digital Skills Directory, and collaborate across a diverse community that’s addressing this issue at scale and at pace.
• For more information visit futuredotnow.uk.

First is to recognise and raise awareness within businesses of the skills gap. Second is to identify where the gaps are most pronounced. Third is to target the ‘top ten’ EDS work tasks that are most commonly missing. Fourth is to engage employees in learning programmes and design them for inclusivity and diversity. Fifth is to join an organisation, like FutureDotNow, that will help employers learn from each other’s experiences.

The UK government has created a number of employer-led organisations that aim to fill a similar role to that occupied by FutureDotNow. These include the Digital Skills Council (DSC), Local Skills Improvement Partnerships (LSIPs) and Local Digital Skills Partnerships (LDSPs), with eight of the latter having been in operation in England since 2019. Keep (2022), however, describes the resources that have been committed to LDSPs as “relatively modest” (op. cit., p. 12) and despite a favourable evaluation (Department for Culture, Media and Sport, 2021b), there are currently no plans to extend this scheme to other UK regions.

The extent to which UK government AI skills policy relies on employers’ investment in training has been questioned, given that this investment has been in decline since the early 2000s (Mutebi and McAlary, 2021). It has been estimated that employer-provided training hours fell by 60% between 1997 and 2017 (Green and Henseke, 2019). Furthermore, the government’s Employer Skills Survey (ESS) of 2022 found that only 60% of employers had provided employee training in the preceding 12 months, a decrease from 66% in 2017.

As documented above, the UK government has launched a number of initiatives over the past five years or so designed to equip citizens with the digital and AI skills they will need for life and work, one important example being the AI Skills for Business Framework, whose key features are summarised in Section 8. However, while these individual initiatives are welcome, some observers argue that a broader, over-arching skills strategy is needed if the government is to achieve its goals (Keep, 2022).

Summary and key takeaways

Numerous initiatives to embed AI literacy skills across subjects in higher education are underway globally. As with primary and secondary education, the approach advocated is one that embraces interdisciplinarity. However, research suggests that more work is needed to refine both concepts being taught and the materials available to deliver them.

The UK, where the focus has mainly been on increasing the supply of AI professionals, has seen a significant expansion of courses in data science and AI, along with major investment in Centres for Doctoral Training, but doubts continue to be expressed about whether this will be enough to close the skills gaps. In addition, employers have expressed reservations about whether the content of these courses addresses industry needs.

A solution to these twin problems is for employers to expand their efforts to upskill their employees. Both universities and commercial education providers are responding to the need for courses that are sufficiently flexible to be compatible with both employers’ and employees’ needs and can be delivered online.

The UK government has been active in encouraging employers to be more pro-active in employee upskilling in AI, but recent evidence raises doubts about whether employers yet fully recognise its importance.

14. Conclusions

To conclude, we now reflect on what has been learnt in respect of the specific research questions that this review has set out to address.

14.1. What AI-relevant skills are needed for life and work?

As AI-based products and services become increasingly embedded in people’s work and everyday lives, the imperative for developing a broad-based, multidimensional skill set is becoming steadily more acute. Some of the skills that will be needed for work must involve a deeper technical understanding of AI, including fostering awareness of the ethical and social aspects of AI technologies, while others must be focused on developing employees’ and citizens’ competence as users of AI-based tools and services. The aim should be not just understanding AI in a technical and applied sense but understanding the broader implications of using AI, whether that be in work or in everyday life.

Future advances in AI will have a significant impact on the demand for, and kinds of, AI skills for life and work. A report by the Department for Education (2019b) found that education providers recommended digital skills should be reviewed every three years. In the light of recent evidence of the growing pace of AI innovation, however, this recommendation might need to be re-examined.

The research literature on AI literacy has grown significantly in the past five years and there are now well-thought-out definitions of AI literacy in life and work. However, the kinds of skills that will be required need to be set out in more detail before AI educational curricula and delivery programmes can be defined.

14.2. To what extent does the UK have or lack these skills in the labour force?

In common with its international counterparts, the UK is experiencing a significant gap between supply and demand for AI skills in the labour force. Some sense of where the UK stands globally in terms of AI work skills can be found in Oxford Insights’ annual Government AI Readiness Index (GAIRI) and the Global AI Index (GAII). GAIRI provides an estimate of how prepared each country’s government is for implementing AI in public service delivery and consists of three pillars: Government, Technology Sector, and Data and Infrastructure. Of these, the Technology Sector pillar includes human capital, i.e., the skills of people working in the sector. The latest index, published in 2023, shows the UK in third place, narrowly ahead of Finland and Canada but some way behind the United States and Singapore. GAII, which consists of three pillars: Investment, Innovation and Implementation, provides a ranking of national AI capacity and shows the UK in fourth place in 2023.

While there is good evidence that UK AI education programmes at the tertiary level are responding to the growing demand for AI professional skills, demand is unlikely to be satisfied unless employers also make greater efforts to upskill their workforces. Investing effort in fleshing out and evolving the AI persona framework by working with stakeholders to identify specific skills will be important if tertiary level programmes are to remain relevant as AI continues to diffuse through the UK’s commercial and public sectors.

The research literature is currently dominated by studies of AI curricula and programmes in North America and Asia, and the evidence points to the UK lagging behind North America and countries in South-East Asia and the Asia Pacific at primary and secondary levels. This is going to impact on levels of attainment in both AI life and work skills, but how quickly such deficits can be addressed is unclear. Action will likely be needed beyond launching new AI literacy programmes. According to a Lloyds report from 2021, about 21 percent of the UK population lack the EDS required for work and life (Lloyds, 2021). There must be a concern that if patterns of underachievement in UK digital literacy are not tackled then it is inevitable that these will be reproduced in patterns of underachievement in AI literacy. The contribution of organisations such as FutureDotNow, a coalition of industry leaders dedicated to raising digital skills in working age adults in England and Wales, is going to be critical to any efforts to avoid this happening. In this respect, FutureDotNow has recently published its plan of action for 2025 (FutureDotNow, 2025).

14.3. What can the UK learn from international counterparts about AI skills?

There are undoubtedly useful lessons for the UK in the progress made by international counterparts in devising frameworks for AI skills for life and the educational curricula that then deliver these skills at primary and secondary levels. Within these curricula, the acquisition of conceptual knowledge and higher-level reasoning skills in AI, together with an understanding of ethical issues, is seen to be of particular importance and is what distinguishes AI literacy from digital literacy. With regard to tertiary level AI skills provision, the evidence is that, to date, the UK’s investment in MSc and PhD programmes in data science and AI has matched the efforts made by its international counterparts. More effort may be needed to enrich curricula in other subjects at this level because, regardless of the careers participants subsequently choose to follow, it is essential that they be equipped with the skills they will need to use AI tools and services productively, ethically and safely.

Studies have also emphasised that teachers will require rigorous training, support for professional development and well-designed resources if they are to be capable of delivering the needed AI literacy curricula. However, international approaches to both teacher training and curriculum design vary widely, and so more UK-focused research is still required if policymakers are to be able to identify and address UK-specific needs of both teachers and students. For example, evidence suggests that significant socio-demographic and regional disparities exist in UK levels of attainment in digital and STEM skills, and these must be tackled if the UK’s AI literacy goals are to be achievable.

Finally, given the rapid pace of AI innovation, provisions for lifelong learning for life and work will be critical if people are to maintain or upgrade their skills after they exit formal education programmes. For the unemployed and senior citizens, the government must determine how it can encourage the provision and take-up of adult education programmes to support lifelong learning. For those in employment, the way forward appears more straightforward, as it would appear to be in employers’ interests to provide their employees with access to courses and there are now many to choose from. However, UK employers’ track record of investing in training suggests the potential need for government action. Offering incentives to employers may be part of the solution, but also important will be taking measures that will ensure the quality of the courses available and so make them more attractive to both employers and employees.

Here, again, there are lessons to be learnt from how international peers are looking to build partnerships between employers and course providers, though this should consider the distinctive features of the UK educational ecosystem that already operates at this level. This means that professional organisations, such as the British Computer Society (BCS) and Royal Academy of Engineering (RAE), could have a role to play in defining and policing standards that training courses should meet if participants are to gain recognised, professional certification. However, because AI skills cross existing professional boundaries, this will require collaboration between these organisations. One example of where this is already taking place is the Alliance for Data Science Professionals, whose member organisations include the BCS, RAE, Royal Statistical Society, Royal Society, Alan Turing Institute for Data Science and AI and Institute of Mathematics.

15. Bibliography

Alan Turing Institute. (2023). Response to the Large Language Models Inquiry: Call for Evidence. Available at: https://www.turing.ac.uk/sites/default/files/2023-09/ati_response_to_large_language_models_inquiry.pdf

Alekseeva, L., Azar, J., Gine, M., Samila, S., and Taska, B. (2021). The demand for AI skills in the labor market. Labour Economics, 71, 102002.

Almansour, M. (2023). Artificial intelligence and resource optimization: A study of Fintech start-ups. Resources Policy, 80, 103250.

Amer, A. (2005). Analytical thinking. Cairo University, Center for Advancement of Postgraduate Studies and Research in Engineering Sciences.

Armstrong, S. (2023). Palantir gets £480m contract to run NHS data platform. BMJ. https://www.bmj.com/content/383/bmj.p2752

Annapureddy, R., Fornaroli, A., and Gatica-Perez, D. (2024). Generative AI literacy: Twelve defining competencies. Digital Government: Research and Practice.

Attwell, G., Bekiaridis, G., Deitmer, L., Perini, M., Roppertz, S., and Tütlys, V. (2020). Artificial intelligence in policies, processes and practices of vocational education and training. ITB Research Reports 71.

Bacalja, A., Beavis, C., and O’Brien, A. (2022). Shifting landscapes of digital literacy. The Australian Journal of Language and Literacy, 45(2), 253-263.

Balakrishnan, V., Ng, W. Z., Soo, M. C., Han, G. J., and Lee, C. J. (2022). Infodemic and fake news–A comprehensive overview of its global magnitude during the COVID-19 pandemic in 2021: A scoping review. International Journal of Disaster Risk Reduction, 78, 103144.

Bhatnagar, A., and Gajjar, D. (2024). Policy implications of artificial intelligence (AI). POSTnote 708. Available at: https://post.parliament.uk/research-briefings/post-pn-0708.

Bentley, C., Krook, J., Rigley, E., Cuptor, A., and Saini, S. (n.d.). Closing the AI skills gap in the UK: Policy for an AI Ecosystems Approach. UKRI Autonomous Systems Hub, Southampton, King’s College London.

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Cognitive domain.

Borenstein, J., and Howard, A. (2021). Emerging challenges in AI and the need for AI ethics education. AI and Ethics, 1, 61-65.

Bradshaw, A., Tang, C. M., and Panchal, J. H. (2018). Skills and talents for big data analytics. Advanced Science Letters.

Bravo, M. C. M., Chalezquer, C. S., and Serrano-Puche, J. (2021). Meta-framework of digital literacy: A comparative analysis of 21st-century skills frameworks. Revista Latina de Comunicacion Social, (79), 76-109.

British Computer Society. (2019). Scaling up the ethical Artificial Intelligence MSc Pipeline. Available at: https://www.bcs.org/media/3047/ethical-ai.pdf

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., … and Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.

Cachat-Rosset, G., and Klarsfeld, A. (2023). Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines. Applied Artificial Intelligence, 37(1), 2176618.

Cannizzaro, S., Procter, R., Ma, S., and Maple, C. (2020). Trust in the Smart Home: Findings from a nationally representative survey in the UK. PLOS ONE, 15(5).

Calvino, F., and Fontanelli, L. (2023). A portrait of AI adopters across countries: Firm characteristics, assets’ complementarities and productivity. OECD Science, Technology and Industry Working Papers.

Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., and Barro, S. (2023). AI literacy in K-12: a systematic literature review. International Journal of STEM Education, 10(1), 29.

Carretero, S., Vuorikari, R., and Punie, Y. (2017). DigComp 2.1: The Digital Competence Framework for Citizens with eight proficiency levels and examples of use (No. JRC106281). Joint Research Centre.

Cebulla, A., Szpak, Z., and Knight, G. (2023). Preparing to work with artificial intelligence: assessing WHS when using AI in the workplace. International Journal of Workplace Health Management, 16(4), 294-312.

Chen, P., Wu, L., and Wang, L. (2023). AI fairness in data management and analytics: A review on challenges, methodologies and applications. Applied Sciences, 13(18), 10258.

Collins, C., Dennehy, D., Conboy, K., and Mikalef, P. (2021). Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management, 60, 102383.

Colomina, C., Margalef, H. S., Youngs, R., and Jones, K. (2021). The impact of disinformation on democratic processes and human rights in the world. Brussels: European Parliament.

Dabhi, K., Crick, C., Douglas, J., McHugh, S., Zatterin, G., Donaldson, S., Procter, R., and Woods, R. (2021). Understanding the UK AI labour market: 2020. Ipsos MORI/DCMS. Available at: https://www.gov.uk/government/publications/understanding-the-uk-ai-labour-market-2020

Department for Culture, Media and Sport (2021a). National AI Strategy. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf

Department for Culture, Media and Sport (2021b). Evaluation of the Local Digital Skills Partnerships. AMION Consulting. Available at: https://www.gov.uk/government/publications/local-digital-skills-partnerships-evaluation

Department for Education (2019a). Essential digital skills framework. Available at: https://assets.publishing.service.gov.uk/media/5b9246d4e5274a4236952309/Essential_digital_skills_framework.pdf

Department for Education (2019b). Improving adult basic digital skills: Government consultation response. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/796173/Improving_adult_basic_digital_skills_-_government_consultation_response.pdf

Department for Education (2023a). The impact of AI on UK jobs and training. Available at: https://assets.publishing.service.gov.uk/media/656856b8cc1ec500138eef49/Gov.UK_Impact_of_AI_on_UK_Jobs_and_Training.pdf

Department for Education (2023b). The Impact of AI on UK Jobs and Training. Available at: https://assets.publishing.service.gov.uk/media/656856b8cc1ec500138eef49/Gov.UK_Impact_of_AI_on_UK_Jobs_and_Training.pdf

Department for Science, Innovation and Technology (2021). National AI Strategy. Available at: https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version

Du, M., Liu, N., and Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68-77.

EU AI Act (2023). Available at https://artificialintelligenceact.eu/the-act/

Falloon, G. (2020). From digital literacy to digital competence: the teacher digital competency (TDC) framework. Educational Technology Research and Development, 68(5), 2449-2472.

Farquharson, C., McNally, S., and Tahir, I. (2022). Educational inequalities, IFS Deaton Review of Inequalities, London: Institute for Fiscal Studies.

Fearns, J., Harriss, L., and Lally., C. (2023). Data science skills in the UK Workforce. POSTnote 697. Available at: https://researchbriefings.files.parliament.uk/documents/POST-PN-0697/POST-PN-0697.pdf

Federiakin, D., Molerov, D., Zlatkin-Troitschanskaia, O., and Maur, A. (2024, November). Prompt engineering as a new 21st century skill. In Frontiers in Education (Vol. 9, p. 1366434). Frontiers Media SA.

Ferguson, A. G. (2020). Facial recognition and the fourth amendment. Minn. L. Rev., 105, 1105.

Department for Science, Innovation and Technology. (2023). AI Skills for Business Competency Framework: Draft framework for public consultation. Alan Turing Institute for Data Science and AI. Available at: https://iuk.ktn-uk.org/wp-content/uploads/2023/11/Final_BridgeAI_Framework.pdf

Fosch-Villaronga, E., and Poulsen, A. (2022). Diversity and inclusion in artificial intelligence. Law and Artificial Intelligence: Regulating AI and Applying AI in Legal Practice, 109-134.

Frank, M. R., Autor, D., Bessen, J. E., Brynjolfsson, E., Cebrian, M., Deming, D. J., … and Wang, D. (2019). Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531-6539.

FutureDotNow (2025). Routes to 20 million: Progress in 2024 and the 2025 plan for action. Available at: https://futuredotnow.uk/routes-to-20m/

Gasser, U., and Mayer-Schönberger, V. (2024). Guardrails: Guiding Human Decisions in the Age of AI. Princeton University Press.

Gattupalli, S., Maloy, R. W., and Edwards, S. A. (2023). Prompt literacy: A pivotal educational skill in the age of AI.

Gebre, E. (2022). Conceptions and perspectives of data literacy in secondary education. British Journal of Educational Technology, 53(5), 1080-1095.

Gehlhaus, D., Koslosky, L., Goode, K., and Perkins, C. (2021). U.S. AI Workforce: Policy Recommendations. Centre for Security and Emerging Technology. Available at: https://cset.georgetown.edu/publication/u-s-ai-workforce-policy-recommendations/

Ghodoosi, B., West, T., Li, Q., Torrisi-Steele, G., and Dey, S. (2023). A systematic literature review of data literacy education. Journal of Business and Finance Librarianship, 28(2), 112-127.

Gijsbers, P., Bueno, M. L., Coors, S., LeDell, E., Poirier, S., Thomas, J., … and Vanschoren, J. (2022). AMLB: An AutoML benchmark. arXiv preprint arXiv:2207.12560.

Gmyrek, P., Berg, J., and Bescond, D. (2023). Generative AI and jobs: A global analysis of potential effects on job quantity and quality. International Labour Organization, Working Paper 96. Available at: https://www.ilo.org/wcmsp5/groups/public/---dgreports/---inst/documents/publication/wcms_890761.pdf

Gozalo-Brizuela, R., and Garrido-Merchan, E. C. (2023). ChatGPT is not all you need. A State of the Art Review of large Generative AI models. arXiv preprint arXiv:2301.04655.

Green, F., and Henseke, G. (2019). Training Trends in Britain. Centre for Research on Learning and Life Chances Research paper 22, London: University College London, Institute of Education.

Guy, K., and Procter, R. (2019). Artificial Intelligence Research in Northern Ireland and the Potential for a Regional Centre of Excellence. Matrix Northern Ireland Science Industry Panel. Available at: https://matrixni.org/wp-content/uploads/2019/06/Artificial-Intelligence-Research-in-Northern-Ireland.pdf

Hall, W., and Pesenti, J. (2017). Growing the artificial intelligence industry in the UK. DSIT/DCMS/BEIS. Available at: https://www.gov.uk/government/publications/growing-the-artificial-intelligence-industry-in-the-uk

Henke, N., Levine, J., and McInerney, P. (2018). Analytics translator: The new must-have role. Harvard Business Review.

HM Government (2021). National AI Strategy. Available at: https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version

Hoskins, B., and Fredriksson, U. (2008). Learning to learn: What is it and can it be measured?. European Commission JRC.

House of Commons Science and Technology Committee. (2023a). Diversity and inclusion in STEM. Available at: https://committees.parliament.uk/publications/34531/documents/190060/default/

House of Commons Science and Technology Committee. (2023b). Diversity and inclusion in STEM: Government Response to the Committee’s Fifth Report. Available at: https://committees.parliament.uk/work/1639/diversity-and-inclusion-in-stem/publications/

Hunt, W., Sarkar, S., and Warhurst, C. (2022). Measuring the impact of AI on jobs at the organization level: Lessons from a survey of UK business leaders. Research Policy, 51(2), 104425.

Hwang, Y. (2023). The emergence of generative AI and PROMPT literacy: Focusing on the use of ChatGPT and DALL-E for English education. Journal of the Korea English Education Society, 22(2), 263-288.

Hwang, Y., Lee, J. H., and Shin, D. (2023). What is prompt literacy? An exploratory study of language learners’ development of new literacy skill using generative AI. arXiv preprint arXiv:2311.05373.

International Technology Education Association (ITEA). (2006). Technological literacy for all: A rationale and structure for the study of technology (2nd ed.) Reston, VA: International Technology Education Association.

Jarrahi, M. H., Newlands, G., Lee, M. K., Wolf, C. T., Kinder, E., and Sutherland, W. (2021). Algorithmic management in a work context. Big Data and Society, 8(2), 20539517211020332.

Johnson, B. A., Coggburn, J. D., and Llorens, J. J. (2022). Artificial Intelligence and Public Human Resource Management: Questions for Research and Practice. Public Personnel Management, 51(4), 538-562.

Keep, E. (2021). Initial Thoughts on Policy Issues for the Future of Work, Digital Futures of Work Research Programme Working Paper 3. Available at https://digitalfuturesofwork.com/wp-content/uploads/2021/02/WP-03_Policy-for-future-of-worker.pdf

Keep, E. (2022). English approaches to digital skills policy – some reflections on current directions and developments. Digital Futures of Work Research Programme, Working Paper 8.

Kelnar, D., and Kostadinov, A. (2020). The State of AI: Divergence. MMC Ventures. Available at: https://www.stateofai2019.com

Kieras, D., and Bovair, S. (1984). The role of a mental model in learning to operate a device. Cognitive Science, 8(3), 255-273.

Kinowska, H., and Sienkiewicz, Ł. J. (2023). Influence of algorithmic management practices on workplace well-being–evidence from European organisations. Information Technology and People, 36(8), 21-42.

Knijnenburg, B., Bannister, N., and Caine, K. (2021). Using mathematically-grounded metaphors to teach AI-related cybersecurity. In IJCAI-21 Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies.

Lanamäki, A., Väyrynen, K., Hietala, H., Parmiggiani, E., and Vassilakopoulou, P. (2024). Not Inevitable: Navigating Labor Displacement and Reinstatement in the Pursuit of AI for Social Good. Communications of the Association for Information Systems, 55(1), 30.

Laupichler, M. C., Aster, A., and Raupach, T. (2023). Delphi study for the development and preliminary validation of an item set for the assessment of non-experts’ AI literacy. Computers and Education: Artificial Intelligence, 4, 100126.

Legg, J., Bangla, A., and Timpone, R. (2023, June). Conversations with AI. How Generative AI and qualitative research will benefit each other. Ipsos-MORI.

Lightcast (2023). Artificial Intelligence in the UK: The relevance of AI in the digital transformation of the UK labour market. Available at: https://lightcast.io/resources/blog/demand-for-ai-skills-triples-in-the-uk-labour-market

LinkedIn (2019). AI Talent in the European Labour Market. Available at: https://economicgraph.linkedin.com/content/dam/me/economicgraph/en-us/PDF/AI-Talent-in-the-European-Labour-Market.pdf

Lloyds Bank. (2021). Essential Digital Skills Report 2021, Third Edition – Benchmarking the Essential Digital Skills of the UK, London: Lloyds Bank. Available at: https://charnwood.moderngov.co.uk/documents/s9362/DTSP%2029%20March%202022%20-%20Itm%20XX%20-%20Ann%203%20-%20211109-lloyds-essential-digital-skills-report-2021.pdf

Lloyds and Ipsos-MORI. (2022). Essential Digital Skills. London: Lloyds Bank. Available at https://www.lloydsbank.com/banking-with-us/whats-happening/consumer-digital-index/essential-digital-skills.html

Lloyds. (2023). Consumer Digital Index. London: Lloyds Bank. Available at: https://www.ipsos.com/sites/default/files/ct/publication/documents/2023-11/loyds-consumer-digital-index-2023-report.pdf

Lloyds and FutureDotNow (2023). UK Essential Digital Skills for Work. Available at: https://futuredotnow.uk/essential-digital-skills-for-work-report

Long, D., and Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI conference on human factors in computing systems.

Loos, E., and Ivan, L. (2023). Using media literacy to fight digital fake news in later life: a mission impossible?. In International Conference on Human-Computer Interaction (pp. 233-247). Cham: Springer Nature Switzerland.

Manca, F. (2023). Six questions about the demand for artificial intelligence skills in labour markets. OECD.

Martins, R. M., and Gresse von Wangenheim, C. (2023). Teaching Computing to Middle and High School Students from a Low Socio-Economic Status Background: A Systematic Literature Review. Informatics in Education.

Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., … and Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI?. Computers and Education: Artificial Intelligence, 3, 100056.

Marulli, F., Paganini, P., and Lancellotti, F. (2024). The Three Sides of the Moon: LLMs in Cybersecurity: Guardians, Enablers and Targets. Procedia Computer Science, 246, 5340-5348.

McIntosh, T. R., Susnjak, T., Liu, T., Watters, P., Xu, D., Liu, D., … and Halgamuge, M. N. (2024). From COBIT to ISO 42001: Evaluating cybersecurity frameworks for opportunities, risks, and regulatory compliance in commercializing large language models. Computers and Security, 144, 103964.

McLean, S., Read, G. J., Thompson, J., Baber, C., Stanton, N. A., and Salmon, P. M. (2023). The risks associated with Artificial General Intelligence: A systematic review. Journal of Experimental and Theoretical Artificial Intelligence, 35(5), 649-663.

McKelvey, F., and Hunt, R. (2023). Remodelling internet infrastructure: A first look at platform governance in the era of ChatGPT.

McKinsey and Company (202). What is AI? Available at: https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai

Menz, B. D., Modi, N. D., Sorich, M. J., and Hopkins, A. M. (2023). Health disinformation use case highlighting the urgent need for artificial intelligence vigilance: weapons of mass disinformation. JAMA Internal Medicine.

Miao, F., and Holmes, W. (2021a). International Forum on AI and the Futures of Education, developing competencies for the AI Era, 7-8 December 2020: synthesis report. UNESCO.

Miao, F., Holmes, W., Huang, R., and Zhang, H. (2021b). AI and education: Guidance for policymakers. UNESCO.

Miao, F., and Shiohira, K. (2022). K-12 AI curricula. A mapping of government-endorsed AI curricula. UNESCO Publishing. Available at https://unesdoc.unesco.org/ark:/48223/pf0000380602

Moore, R., and Hancock, J. (2022). A digital media literacy intervention for older adults improves resilience to fake news. Scientific Reports, 12(1), 6008.

Moulds, A., and Horton, T. (2023). What do technology and AI mean for the future of work in health care? The Health Foundation. Available at: https://www.health.org.uk/publications/long-reads/what-do-technology-and-ai-mean-for-the-future-of-work-in-health-care

Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A., and Dwivedi, Y. (2023). Deepfakes: Deceptions, mitigations, and opportunities. Journal of Business Research, 154, 113368.

Mutebi, N., and McAlary, P. (2021). Upskilling and retraining the adult workforce. POSTnote 659. Available at: https://researchbriefings.files.parliament.uk/documents/POST-PN-0659/POST-PN-0659.pdf

Ng, D. T. K., Leung, J. K. L., Chu, K. W. S., and Qiao, M. S. (2021a). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504-509.

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., and Qiao, M. S. (2021b). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.

Ng, D. T. K., Leung, J. K. L., Su, M. J., Yim, I. H. Y., Qiao, M. S., and Chu, S. K. W. (2022a). AI literacy in K-16 classrooms. Springer International Publishing AG.

Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., and Chu, S. K. W. (2022b). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies, 28(7), 8445-8501.

Ng, D. T. K., Su, J., Leung, J. K. L., and Chu, S. K. W. (2023). Artificial intelligence (AI) literacy education in secondary schools: a review. Interactive Learning Environments, 1-21.

OECD. (2023). OECD AI Principles overview. Available at: https://oecd.ai/en/ai-principles

OECD (2019). Scoping the OECD AI principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO). OECD Digital Economy Papers, No. 291, OECD Publishing, Paris. Available at: https://dx.doi.org/10.1787/d62f618a-en

Lane, M., Saint-Martin, A. (2021). The Impact of AI on the Labour Market. OECD. Available at: https://www.oecd.org/publications/the-impact-of-artificial-intelligence-on-the-labour-market-7c895724-en.htm

ONS (2023a). Public awareness, opinions and expectations about artificial intelligence: July to October 2023. Available at: https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/publicawarenessopinionsandexpectationsaboutartificialintelligence/julytooctober2023#awareness-of-ai-use

ONS (2023b). Understanding AI uptake and sentiment among people and businesses in the UK: June 2023. Available at: https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/understandingaiuptakeandsentimentamongpeopleandbusinessesintheuk/june2023#awareness-experience-of-and-trust-in-ai-among-the-general-population

Pangrazio, L., Godhe, A. L., and Ledesma, A. G. L. (2020). What is digital literacy? A comparative review of publications across three language contexts. E-learning and Digital Media, 17(6), 442-459.

Perspective Economics (2023). Artificial Intelligence Sector Study 2022. Research report for the Department for Science, Innovation and Technology (DSIT).

Petricek, T., van Den Burg, G. J., Nazábal, A., Ceritli, T., Jiménez-Ruiz, E., and Williams, C. K. (2022). AI Assistants: A Framework for Semi-Automated Data Wrangling. IEEE Transactions on Knowledge and Data Engineering.

Pinski, M., and Benlian, A. (2023). AI Literacy-Towards Measuring Human Competency in Artificial Intelligence. Hawaii International Conference on System Sciences (HICSS). Available at https://hdl.handle.net/10125/102649

Procter, R., and Guy, K. (2023). Scoping Exercise for a Global AI Ethics and Safety Centre in Northern Ireland. Matrix Northern Ireland Industrial Panel.

Procter, R., Tolmie, P., and Rouncefield, M. (2023). Holding AI to account: Challenges for the delivery of trustworthy AI in healthcare. ACM Transactions on Computer-Human Interaction30(2), 1-34.

PwC. (2018). The macroeconomic impact of artificial intelligence. Available at: https://www.pwc.co.uk/economic-services/assets/macroeconomic-impact-of-ai-technical-report-feb-18.pdf

PwC (2021). The Potential Impact of Artificial Intelligence on UK Employment and the Demand for Skills. Report for BEIS. Available at https://assets.publishing.service.gov.uk/media/615d9a1ad3bf7f55fa92694a/impact-of-ai-on-jobs.pdf

Ramos, G. (2022). AI’s Impact on Jobs, Skills, and the Future of Work: The UNESCO Perspective on Key Policy Issues and the Ethical Debate. New England Journal of Public Policy34(1), 3.

Rawte, V., Sheth, A., and Das, A. (2023). A Survey of Hallucination in Large Foundation Models. arXiv preprint arXiv:2309.05922.

Relmasira, S. C., Lai, Y. C., and Donaldson, J. P. (2023). Fostering AI Literacy in Elementary Science, Technology, Engineering, Art, and Mathematics (STEAM) Education in the Age of Generative AI. Sustainability, 15(18), 13595.

Ren, K., Zheng, T., Qin, Z., and Liu, X. (2020). Adversarial attacks and defenses in deep learning. Engineering, 6(3), 346-360.

Renaud, K., Warkentin, M., and Westerman, G. (2023). From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI. MIT Sloan Management Review.

Rigley, E., Bentley, C., Krook, J., and Ramchurn, S. D. (2023). Evaluating international AI skills policy: A systematic review of AI skills policy in seven countries. Global Policy.

Rismani, S., and Moon, A. (2023, August). What does it mean to be a responsible AI practitioner: An ontology of roles and skills. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 584-595).

Royal Society (2019). Dynamics of data science skills: How can all sectors benefit from data science talent? Available at: https://royalsociety.org/-/media/policy/projects/dynamics-of-data-science/dynamics-of-data-science-skills-report.pdf

Sabir, B., Ullah, F., Babar, M. A., and Gaire, R. (2021). Machine learning for detecting data exfiltration: A review. ACM Computing Surveys (CSUR), 54(3), 1-47.

Sallam, M., Salim, N., Barakat, M., and Al-Tammemi, A. (2023). ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J, 3(1), e103.

Sarkar, A., Drosos, I., Deline, R., Gordon, A. D., Negreanu, C., Rintel, S., … and Zorn, B. (2023). Participatory prompting: a user-centric research method for eliciting AI assistance opportunities in knowledge workflows. arXiv preprint arXiv:2312.16633.

Schuetz, S., and Venkatesh, V. (2020). Research Perspectives: The Rise of Human Machines: How Cognitive Computing Systems Challenge Assumptions of User-System Interaction. Journal of the Association for Information Systems, 460-482.

Shahzad, K., and Khan, S. (2022). Relationship between new media literacy (NML) and web-based fake news epidemic control: a systematic literature review. Global Knowledge, Memory and Communication.

Shiohira, K. (2021). Understanding the Impact of Artificial Intelligence on Skills Development. Education 2030. UNESCO-UNEVOC International Centre for Technical and Vocational Education and Training.

Smith, H. (2020). Algorithmic bias: should students pay the price?. AI and Society, 35(4), 1077-1078.

Smith, G., Kessler, S., Alstott, J., and Mitre, J. (2023). Industry and Government Collaboration on Security Guardrails for AI Systems. Rand Corporation.

Sofia, M., Fraboni, F., De Angelis, M., Puzzo, G., Giusino, D., and Pietrantoni, L. (2023). The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science: The International Journal of an Emerging Transdiscipline, 26, 39-68.

Sorbe, S., Gal, P., Nicoletti, G., and Timiliotis, C. (2019). Digital Dividend: Policies to Harness the Productivity Potential of Digital Technologies. OECD Economic Policy Paper, no. 26.

Southworth, J., Migliaccio, K., Glover, J., Reed, D., McCarty, C., Brendemuhl, J., and Thomas, A. (2023). Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4, 100127.

Squicciarini, M., and Nachtigall, H. (2021). Demand for AI skills in jobs: Evidence from online job postings. OECD Science, Technology and Industry Working Papers.

Stamboliev, E. (2023). Proposing a postcritical AI literacy: Why we should worry less about algorithmic transparency and more about citizen empowerment. Media Theory, 7(1), 202-232.

Su, J., Zhong, Y., and Ng, D. T. K. (2022). A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region. Computers and Education: Artificial Intelligence, 3, 100065.

Su, J., Ng, D. T. K., and Chu, S. K. W. (2023). Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Computers and Education: Artificial Intelligence, 4, 100124.

Tan, C. (2022). The curious case of regulating false news on Google. Computer Law and Security Review, 46, 105738.

Taneri, G. U. (2020). Artificial Intelligence and Higher Education: Towards Customized Teaching and Learning, and Skills for an AI World of Work. Research and Occasional Paper Series: CSHE. 6.2020. Center for Studies in Higher Education.

Tiernan, P., Costello, E., Donlon, E., Parysz, M., and Scriney, M. (2023). Information and Media Literacy in the Age of AI: Options for the Future. Education Sciences, 13(9), 906.

Tinmaz, H., Lee, Y. T., Fanea-Ivanovici, M., and Baber, H. (2022). A systematic review on digital literacy. Smart Learning Environments, 9(1), 21.

UK AI Council (2021). AI Roadmap. Available at: https://www.gov.uk/government/publications/ai-roadmap

UNESCO. (2020). International Forum on AI and the Futures of Education: Developing Competencies for the AI Era. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000377251

UNESCO. (2022). International Forum on AI and Education Ensuring AI as a Common Good to Transform Education. Available at https://discovery.ucl.ac.uk/id/eprint/10146850/1/381226eng.pdf

Veeriah, J. (2021). Young adults’ ability to detect fake news and their new media literacy level in the wake of the COVID-19 pandemic. Journal of Content, Community and Communication, 13(7), 372-383.

Vignola, E. F., Baron, S., Abreu Plasencia, E., Hussein, M., and Cohen, N. (2023). Workers’ Health under Algorithmic Management: Emerging Findings and Urgent Research Questions. International Journal of Environmental Research and Public Health, 20(2), 1239.

Walker, J., Thuermer, G., Vicens, J., and Simperl, E. (2023). AI Art and Misinformation: Approaches and Strategies for Media Literacy and Fact Checking. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 26-37).

Wang, F., Xiao, S., Kihara, Y., Spaide, T., Lee, C. S., and Lee, A. Y. (2019). Fully automated artificial intelligence (AI) pipeline for feature-based segmentation and classification of diabetic retinopathy in fundus photographs. Investigative Ophthalmology and Visual Science, 60(9), 2205.

Wang, S., Mack, E. A., Van Fossen, J. A., Medwid, L., Cotten, S. R., Chang, C. H., … and Baker, N. (2023). Assessing alternative occupations for truck drivers in an emerging era of autonomous vehicles. Transportation research interdisciplinary perspectives, 19, 100793.

White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., … and Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382.

Whiting, K., and Phelps, A. (2023). The rise of the ‘prompt engineer’ and why it matters. World Economic Forum. Available at: https://www.weforum.org/agenda/2023/05/growth-summit-2023-the-rise-of-the-prompt-engineer-and-why-it-matters/

Williams, R., and Procter, R. (1998). Trading places: a case study of the formation and deployment of computing expertise. In Exploring Expertise (pp. 197-222). Palgrave Macmillan, London.

Wirtz, B. W., Weyerer, J. C., and Geyer, C. (2018). Artificial intelligence and the public sector—Applications and challenges. International Journal of Public Administration, 42(7), 596-615.

Wong, G. K., Ma, X., Dillenbourg, P., and Huan, J. (2020). Broadening artificial intelligence education in K-12: where to start?. ACM Inroads, 11(1), 20-29.

Wood, A. J. (2021). Algorithmic management consequences for work organisation and working conditions. JRC Working Papers Series on Labour, Education and Technology. No. 2021/07, European Commission, Joint Research Centre (JRC), Seville. Available at: http://hdl.handle.net/10419/233886

World Economic Forum. (2020). Future of Jobs Report. Available at https://www.weforum.org/publications/the-future-of-jobs-report-2020/

World Economic Forum. (2023). Future of Jobs Report. Available at https://www.weforum.org/publications/the-future-of-jobs-report-2023/

Xu, J. (2024). GenAI and LLM for Financial Institutions: A Corporate Strategic Survey. Available at SSRN 4988118.

Yue, M., Jong, M. S. Y., and Dai, Y. (2022). Pedagogical design of K-12 artificial intelligence education: A systematic review. Sustainability, 14(23), 15620.

16. Appendix

Table 9: Mapping between dimensions and AI personas (Department for Science, Innovation and Technology 2023)

Each row gives the competency dimension, its constituent areas, and the proficiency level expected of each persona (AI Worker, AI Professional, AI Leader).

Dimension A: Privacy and Stewardship (ensuring the protection of personal and sensitive data; managing sensitive data; data stewardship and standards). AI Worker: Working. AI Professional: Practitioner or Expert. AI Leader: Expert.

Dimension B: Specification, acquisition, engineering, architecture, storage and curation (data collection and management; data engineering; deployment). AI Worker: Awareness or Working. AI Professional: Working, Practitioner or Expert. AI Leader: Working, Practitioner or Expert.

Dimension C: Problem definition and communication (problem definition; relationship management). AI Worker: Awareness or Working. AI Professional: Practitioner or Expert. AI Leader: Expert.

Dimension D: Problem solving, analysis, modelling, visualisation (identifying and applying technical solutions and project management approaches; data preparation and feature modelling; data analysis and model building; artificial intelligence). AI Worker: Awareness or Working. AI Professional: Working, Practitioner or Expert. AI Leader: Working, Practitioner or Expert.

Dimension E: Evaluation and Reflection (project evaluation; governance and knowledge of data provenance processes; sustainability and best practices; reflective practice and ongoing development). AI Worker: Working. AI Professional: Practitioner or Expert. AI Leader: Expert.

Source: Department for Science, Innovation and Technology (2023).


