A new £2.4m initiative has been launched, appointing academic researchers to help organizations develop solutions for the responsible use of artificial intelligence (AI).
Fellows from universities across the UK will apply their research expertise in the humanities and arts, including data ethics, copyright law, digital design and qualitative analysis, to address questions about the responsible use of AI.
The Bridging Responsible AI Divides (BRAID) Fellowships are part of the wider BRAID program, which is led by the University of Edinburgh in collaboration with the Ada Lovelace Institute and the BBC.

The 17 Fellows will each partner with a public, private, or third-sector organization, combining their expertise to tackle existing, new, and emerging AI challenges.
Technology partners include Adobe, Datamind Audio, Diverse AI, Mozilla Foundation, and Microsoft.
Project partners from regulators and the public sector include the Ada Lovelace Institute, the Alan Turing Institute, the BBC, the Future of Work Institute and the Public Media Alliance.
Elsewhere, Fellows will work with arts and culture institutions including Arts Council England, Edinburgh International Book Festival, Serpentine Gallery and Royal Botanic Gardens, Kew.
These collaborative projects will explore approaches to using generative AI in media, examine the social and ethical factors shaping the adoption of AI in healthcare, and develop a responsible AI innovation framework for the arts and culture sector, among other aims.
Professor Ewa Luger, Co-Director of BRAID and Professor of Human-Data Interaction at Edinburgh College of Art, said the fellowships will connect researchers with the organizations that develop and deploy AI in practice, in the UK and globally.
“We hope that these connections will allow us to make significant strides towards a more responsible AI ecosystem by addressing common challenges across sectors and diverse communities.”
Based at the Edinburgh Futures Institute at the University of Edinburgh, BRAID aims to bring arts and humanities research more fully into the responsible AI ecosystem, connecting academic, industry, policy, and regulatory initiatives around responsible AI and helping to fill a gap between these communities.
The 17 fellowship recipients are:
- Professor Nick Bryan-Kinns, University of the Arts London – Project Partner, BBC R&D. Explainable generative AI at the BBC: Developing an explainable AI approach for creative practice within the BBC and beyond.
- Professor Mercedes Bunz, King's College London – Project Partner, Serpentine Galleries. AI art beyond the gallery: Exploring the ability of cultural institutions to influence technology policy.
- Dr. Clementine Collett, University of Cambridge – Project Partner, Institute for the Future of Work. Co-designing responsible technology and policy for the impact of generative AI on novel writing and publishing.
- Dr. Bahareh Heravi, University of Surrey – Project Partner, BBC R&D. Improving responsible AI literacy at the BBC and beyond.
- Dr. Federica Lucivero, University of Oxford – Project Partner, Ada Lovelace Institute. Today's Perspective: Co-creating techno-moral tools for responsible AI governance.
- Dr. Caterina Moruzzi, University of Edinburgh – Project Partner, Adobe. CREA-TEC: Fostering responsible engagement with AI technology and empowering creativity.
- Dr. Oonagh Murphy, Goldsmiths, University of London – Project Partner, Arts Council England. Working with Arts Council England to develop a grant-funded responsible AI innovation framework for the arts and culture sector.
- Dr. Martin Parker, University of Edinburgh – Project Partner, Datamind Audio. Machining Sonic Identity: Exploring issues of digital sound identity, including how AI creates sounds and questions of provenance and ownership.
- Dr. Kirill Potapov, University College London – Project Partner, Microsoft Research. Human-centered AI for a fair smart energy grid.
- Dr. Sanjay Sharma, University of Warwick – Project Partner, Diverse AI. An inclusive future: Radical ethics and transformative justice for responsible AI.
- Dr. Anna-Maria Sichani, University of London – Project Partner, Alan Turing Institute. Responsible data, models, and workflows: Delivering responsible AI digital skills to the cultural heritage community.
- Ms. Caroline Sinders, University of the Arts London – Project Partner, Mozilla Foundation. AI tools for artists, creatives, and makers that center creativity and responsibility.
- Dr. Alex Taylor, University of Edinburgh – Project Partner, Microsoft Research. Muted registers: Feminist intersectional (re)constructions of red teams.
- Dr. Pip Thornton, University of Edinburgh – Project Partner, Edinburgh International Book Festival. Writing about AI mistakes: LLMs, copyright, and creativity in the age of generative AI.
- Dr. Beverley Townsend, University of York – Project Partner, Microsoft Research. Regulatory guidelines that address the social and ethical factors shaping the adoption of medical AI.
- Dr. Paula Westenberger, Brunel University London – Project Partner, Royal Botanic Gardens, Kew. Responsible AI for heritage: A copyright and human rights perspective.
- Dr. Kate Wright, University of Edinburgh – Project Partner, Public Media Alliance. Responsible AI in international public service media.