Every organisation, universities included, is today investigating how data can be used to improve its core performance. Having data is valuable, but being entirely data driven is unwise, because it leaves no room for human innovation and human input.
More concretely, entirely data-driven organisations are competing in a global race to the bottom: who has the most data, and who can make data-driven decisions with the highest accuracy, subtask by subtask. There will always be some organisation with more data than you.
Instead, I want to talk about hybrid intelligence by which I mean finding the optimal mixture of artificial intelligence and human capability not to do what you are doing today better, but to explore the novel potential of the combined human-AI system.
To reimagine the role of the university in the age of generative AI, we need to have the courage to become an exemplary breeding ground for AI-empowered employee innovations in our core operations.
That means we need to figure out novel ways of using AI to do research and education. Moreover, we need to understand how society is changing and how we can create value for our external stakeholders.
This continued real-world engagement and internal transformation will ensure that we produce candidates with not only the competences required by companies but also the ones that they are only slowly starting to realise they will need.
Getting universities aboard the genAI train
Formally speaking, artificial intelligence covers machine learning, neural networks and generative AI, but, pragmatically, we can think of it as data-hungry, highly technical projects, with generative AI having democratised access to AI solutions.
This generative AI accessibility means that, for the first time ever, anyone in an organisation can use advanced technology to innovate their workflows and this unlocks entirely new possibilities at an individual or organisational level.
Even now we can have real-time conversations with AI on our phones through the power of GPT-4, but in the future AI will interact with us live via chat and suggest what we can do, perhaps even tapping into our systems and our emails to give exact feedback – all through our phones.
There are two types of organisations at the moment: those that are on the genAI train and enjoying the ride, who say generative AI is going to be revolutionary for us and we need to strategically prioritise it; and those that are still waiting to get on the train and saying we know it may be important, but we don’t know how to access it or how to get started with such a transformation.
My personal mission is to understand and counteract the many different reasons why universities, public sector organisations and companies fall into the latter category.
Synergetic solutions
Many large organisations around the world dream of creating complex virtual assistants which break individual workflows down into tasks, automate small parts of those and then serve the results up for employees to act on.
Such virtual assistants can disrupt any workflow, but can only be created with the extensive support of employees.
This can be difficult to obtain when big-tech gurus constantly preach that algorithms will soon be so powerful that humans may be replaced entirely.
Our response to this argument is that these very algorithms will be even more efficient in the hands of creatives and domain experts which, we posit, will unleash endless opportunities for hybrid intelligence innovation. This is a much brighter vision for humanity to which every employee will be happy to contribute.
The problem is that it is quite straightforward to algorithmically automate and improve existing workflows. The task is known and the results are measurable. By contrast, investing in human and AI synergetic solutions is something that requires time and effort.
Even though it will probably be better in the long run, there will be a period in which productivity may not be as high. So if we don’t invest heavily in understanding how to create these interfaces today, we are likely to see automation as the simplest outcome.
But this is certainly not the best long-term solution for humanity and for organisations. Even now we’re starting to see how difficult it is to build an automation system that works in the real world, with the complexities of the human world. And that’s where hybrid intelligence comes in and becomes a much more robust system.
Creating value
Hybrid intelligence operates on three different layers. At the interface layer, systems need to be human-centred: they can have very high degrees of computer automation, but always in such a way that humans can understand and steer the process.
At an organisational level, hybrid intelligence requires a commitment to avoid technologically induced de-skilling and to instead actively design for employee upskilling.
This collective commitment to a future of work in which everyone benefits gives employees psychological safety, which is a key component of human-centred interfaces.
Moreover, a hybrid intelligent organisation is committed to rethinking how it is creating value and what kind of value it is creating rather than just saying we will do more of what we’re doing today and we’ll do it a little bit better.
Reinventing the core value proposition of the universities of the future can only succeed by democratising access to AI innovation. In an age of generative AI, AI innovation should no longer just be restricted to technology experts. Every employee should be encouraged to try to innovate their own workflow and value contribution.
Finally, in order to make sure that hybrid intelligence is positive for humanity, we have formulated a criterion that truly hybrid intelligent innovations need to be positive case studies of a future of work which foregrounds purpose, agency and employee fulfilment.
In order for new technology solutions to work, we need to change mindsets and the ways we do things. When it comes to universities, there are two different frameworks that are important for innovation. One is that every task needs to be deconstructed into a prediction component, which the computers are very good at, and a judgment component, which is more in the human domain.
History tells us that in the first years of a transformative technology, we often take an existing procedure and simply slot in AI, or whatever the new technology is. But looking back, such approaches are never the ones that disrupt an entire sector.
Suddenly, a company emerges with a new systemic solution, where they rethink the entire ecosystem of how we generate value. And the same is going to be true in education.
That means asking this question now: how does generative AI disrupt the ecosystem of education and research, and how can we then be an active player in finding systemic solutions?
I always ask every organisation: what are you going to do with the time that you have left over from your employees? I usually get two responses: one type of organisation that says we are so busy, we just want to be able to do more of the same, just faster; and the other says we have this feeling that we can do things in a better way than we could before. The former are likely to be disrupted by the latter. So, how can universities be in the latter category?
Personalised learning paths
If we break down workflows into individual tasks and subtasks and look at how AI can save time in some areas, could we reinvest some of that time into learning and development and use AI tools to help create a personalised learning path for employees?
For instance, in the case of the challenges to doctoral education, you can type your question into the GPT store and find plenty of PhD proposal assistants and thesis reviewers. You can access the insights that people around the world have created in relation to this and then start to interact with them.
These tools may not give exactly the answers you want, so you need to be systematic in the way you assess the quality of what is produced. This is something you can do without IT skills.
You can ask the chatbot to ask itself 100 questions and then you put them into a Google spreadsheet. You can then use your human judgement on those individual outputs and give them a score from one to five, from bad to very good, and then look at the ones that are bad and formulate why that is.
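This scoring loop can be sketched in a few lines of Python. This is a minimal illustration only: the sample outputs, the file name and the five-point scale are hypothetical, standing in for whatever a chatbot actually produces and however an organisation chooses to record human judgements.

```python
import csv

# Hypothetical chatbot outputs, each paired with a human score
# (1 = bad, 5 = very good), as described in the text.
outputs = [
    {"question": "How do I structure a PhD proposal?", "score": 4},
    {"question": "What colour should my thesis cover be?", "score": 1},
    {"question": "How do I choose a research methodology?", "score": 5},
]

# Write every scored output to a spreadsheet-compatible CSV file.
with open("scored_outputs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "score"])
    writer.writeheader()
    writer.writerows(outputs)

# Flag the weak outputs so a human expert can articulate *why* they failed.
weak = [o["question"] for o in outputs if o["score"] <= 2]
print(weak)
```

The point of the final step is not the filtering itself but what follows it: the expert's written explanation of why an output is bad is exactly the kind of feedback that can improve the system.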
One of the key skills for the future is the ability to transfer expertise acquired through years of work into responses that can help train the AI to become better.
Again, the hybrid intelligence narrative here is crucial, because how can we convince experts to take all of the knowledge that they have in their minds and put it into the system so that we improve the chatbots that might end up replacing them?
So this is a call to action for all of the social sciences and humanities to be the drivers of transformation.
Creating a culture of innovation
The big question is: how do we build a generative AI-empowered organisation?
Our framework consists of a number of guidelines. First of all, who should be participating in this and what should they be doing? People seem to agree that very soon every employee is going to be a user of generative AI.
But I have not seen many organisations that have the ambition to create a culture of innovation where everyone can generate innovative solutions through their workflows with generative AI.
So that is a push towards creating a culture of innovation in a non-technical, low-code, democratised way. That can only happen if we stop focusing on huge IT projects and instead pursue what I call nano, micro and mini innovations, which allow everyone to start innovating with very little prior GPT experience. These GPT products can then be tested, adapted and included in the organisation’s training loop.
Instead of asking what situations they can apply ChatGPT to, employees should ask how they can apply ChatGPT in this particular situation.
It is important to remember that it is much, much easier to try to build a system that gives you expert advice or options rather than trying to automate something. Fully automating part of your workflow means it has to work 99.9% of the time. Otherwise you will be spending most of your time fixing the bugs and errors that occur.
That isn’t the case if you’re building something which is just a suggestion engine for you. It doesn’t matter if not all of the suggestions are good, as long as some of them are good.
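The arithmetic behind this contrast is worth making explicit. In full automation every step must succeed, so errors compound across the workflow; in a suggestion engine the human only needs one usable suggestion. The step counts and per-step accuracies below are illustrative assumptions, not figures from the article:

```python
def pipeline_success(per_step_accuracy: float, steps: int) -> float:
    """Full automation: every step must succeed, so errors compound."""
    return per_step_accuracy ** steps

def at_least_one_good(per_suggestion_quality: float, suggestions: int) -> float:
    """Suggestion engine: the human only needs one usable suggestion."""
    return 1 - (1 - per_suggestion_quality) ** suggestions

# Ten automated steps at 99% accuracy each: the chain still fails ~10% of the time.
print(round(pipeline_success(0.99, 10), 3))   # 0.904

# Five suggestions, each only 50% likely to be useful: ~97% chance one is good.
print(round(at_least_one_good(0.5, 5), 3))    # 0.969
```

This is why a suggestion engine tolerates mediocre components that would sink a fully automated pipeline.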
To do the kind of change management that is needed, you need to get the entire organisation on board and design a strategy and vision. We need to collectively build a smorgasbord of hybrid intelligence transformations, generative AI adoptions and change management processes, and choose the ones that are consistent with a vision that includes all your employees.
In order to succeed, every CEO and every leader needs to formulate their own concise vision of how this transformation is going to be good for their employees.
Sustainability
Another key issue is sustainability because we know that the carbon footprint of generative AI is enormous. To address this, we could foster an organisational understanding of which tasks require state-of-the-art genAI models, which can be solved with smaller, more energy efficient models, and which we can solve by good, old-fashioned googling.
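That organisational understanding can be pictured as a simple routing rule: match each task to the cheapest tier that can handle it, defaulting to the least energy-hungry option. The task names and tier labels below are purely illustrative assumptions, not a real system:

```python
# A hypothetical task-to-tier routing table: each task is matched to the
# least resource-intensive tool that can handle it (all entries illustrative).
ROUTES = {
    "draft a grant proposal": "state-of-the-art genAI model",
    "classify incoming support emails": "small local model",
    "find the campus opening hours": "good, old-fashioned googling",
}

def route(task: str) -> str:
    # Default to plain search for unknown tasks rather than a large model.
    return ROUTES.get(task, "good, old-fashioned googling")

print(route("classify incoming support emails"))  # small local model
```

The interesting work, of course, is not the lookup but building the shared organisational judgement of which tasks belong in which tier.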
On a larger scale, however, the most energy-efficient computer we have on Earth is the human mind. The most energy-efficient way of handling generative AI in the future is therefore to build hybrid intelligence interactions in which we outsource some, but not all, calculations to generative AI algorithms, with the results always evaluated by humans, using human brain power.
The alternative to that hybrid intelligence approach is automation, which is data hungry, requiring more and more calculations and more and more micro optimisations and that will have a huge impact on the environment.
Ethical, sustainable and efficient
Hybrid intelligence is not just an ethical approach to the future of generative AI. It is not just a sustainable approach to the future in general. It is also the most efficient way to create solutions for the future.
The main challenge is whether we are willing to invest the time to understand what it means to be human and what it means to work in each of our workflows, deeply enough that we can build AI algorithms that integrate synergistically into them.
If we succeed with that, then those combined hybrid intelligence systems are almost guaranteed to be better than the AI systems we have today, or that we will have in the future.
Jacob Sherson holds professorships at the department of management at Aarhus University and department of quantum physics at the Niels Bohr Institute, Copenhagen University, Denmark. He is the founder and director of the Center for Hybrid Intelligence and the game-based citizen science platform ScienceAtHome which has more than 300,000 contributors. This article is based on his keynote at the EUA Council for Doctoral Education in late June. Jacob’s recent papers include ‘Creativity in the Age of Generative AI’ in Nature Human Behaviour and ‘A Hybrid Intelligent Change Management Approach to Generative AI Adoption’.
This article is a commentary. Commentary articles are the opinion of the author and do not necessarily reflect the views of University World News.