From the now-infamous Mother's Day photo taken at Kensington Palace to fake audio of Tom Cruise disparaging the Olympic Committee, AI-generated content has been making headlines recently for all the wrong reasons. These cases have sparked widespread controversy and doubt, leading people to question the authenticity and origin of the content they come across online.
This affects every sector of society: not only celebrities and ordinary internet users, but also some of the world's largest companies. Chase Bank, for example, reported being fooled by a deepfake during an internal experiment, and one report revealed a 700% surge in deepfake incidents in the fintech sector in a single year.
Currently, there is a severe lack of transparency around AI, including whether a given image, video, or audio clip was generated by AI. Efficient ways to audit AI systems would enable greater accountability and motivate companies to remove misleading content more aggressively, but such tools are still in development. These shortcomings compound the trust problem, and greater transparency into AI models is essential to address it. That is a major hurdle for companies that want to leverage the enormous value of AI tools but worry that the risks outweigh the benefits.
CEO and Founder of Casper Labs.
Can business leaders trust AI?
All eyes are on AI right now. But while the technology is experiencing historic levels of innovation and investment, trust in AI and many of the companies behind it is steadily declining. Not only is it becoming harder to distinguish between human-generated and AI-generated content online, but business leaders are also becoming cautious about investing in their own AI systems. Ensuring that the benefits outweigh the risks is a common challenge, but it's all made more complicated by ambiguity about how the technology actually works. It's often unclear what data is being used to train the models, how that data impacts the generated output, and what the technology is doing with a company's own data.
This lack of visibility poses a number of legal and security risks for business leaders. Despite AI budgets being expected to grow up to five times this year, it has been reported that 18.5% of all AI or ML transactions within enterprises are blocked due to growing cybersecurity concerns. That is a staggering 577% increase in just nine months, with the sharpest rise (37.16%) occurring in finance and insurance, an industry with particularly strict security and legal requirements. Finance and insurance is a harbinger of what is to come elsewhere as questions about AI's security and legal risks grow and companies weigh the implications of using this technology.
While we are itching to harness the $15.7 trillion in value that AI could create by 2030, it is clear that businesses cannot fully trust AI today, and this obstacle will only get worse if the problem is not addressed. There is an urgent need for greater AI transparency to make it easier to determine if content is AI-generated, see how AI systems are using data, and better understand the output. The big question is how to achieve this. Transparency and declining trust in AI are complex issues, with no single definitive solution. Progress will require collaboration across sectors around the world.
Tackling Complex Technical Challenges
Fortunately, there are already signs that both governments and technology leaders are focusing on solving this problem: the recent EU AI Act is an important first step in setting regulatory guidelines and requirements for the responsible deployment of AI, and in the US, states such as California are taking steps to introduce their own legislation.
While these laws are valuable in outlining risks specific to industry use cases, they only provide standards to adhere to, not solutions to implement. The lack of transparency into AI systems, which extends to the data used to train the models and how that data influences the output, raises deep-rooted and vexing technical problems.
Blockchain is one technology that has emerged as a potential solution. Although blockchain is widely associated with cryptocurrency, the underlying technology is, at its core, a highly serialized, tamper-evident data store. For AI, it can increase transparency and trust by providing an automated, provable audit trail of AI data: the data used to train a model, the inputs and outputs in use, and even the impact a particular dataset had on the model's output.
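The core idea behind such an audit trail can be illustrated in a few lines. The sketch below is a minimal, hypothetical hash-chained log (the `AuditTrail` class and its record fields are illustrative, not any particular vendor's API): each entry's hash incorporates the previous entry's hash, so altering any earlier record invalidates everything that follows.

```python
import hashlib
import json


def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash,
    chaining entries so any later edit breaks verification."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class AuditTrail:
    """Append-only, tamper-evident log of AI data events
    (e.g. training inputs, prompts, model outputs)."""

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain; any tampered record breaks the match."""
        prev = "0" * 64
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True
```

A production system would anchor the head hash on a blockchain so that no single party, including the log's operator, can quietly rewrite history.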
Retrieval-Augmented Generation (RAG) is also rapidly emerging and being adopted by AI leaders to bring more transparency to their systems. RAG allows AI models to search external data sources, like the internet or a company's internal documents, in real time to inform their output, meaning the model can ground its answers in the most relevant and up-to-date information available. RAG also introduces the ability for models to cite sources, empowering users to fact-check information for themselves without having to blindly trust it.
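The retrieve-then-cite flow can be sketched minimally. This is an illustrative toy, not a production pipeline: the `retrieve` function uses naive keyword overlap as a stand-in for real vector-similarity search, and instead of calling an LLM it just returns the prompt that would be sent, showing how retrieved, citable context is attached to a query.

```python
def retrieve(query: str, documents: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query
    (a toy stand-in for vector-similarity retrieval)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc_id: len(q_words & set(documents[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_with_citations(query: str, documents: dict[str, str]) -> str:
    """Build a prompt from retrieved sources so the model's
    output can cite the documents it drew on."""
    sources = retrieve(query, documents)
    context = "\n".join(f"[{doc_id}] {documents[doc_id]}" for doc_id in sources)
    # In a real system this prompt goes to an LLM; the [doc_id] tags
    # let the model cite, and the user verify, each source.
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer citing [source ids]:"
```

Because the sources travel with the answer, a user can check the cited documents directly rather than taking the model's word for it.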
Also tackling deepfakes, OpenAI announced in February that it would embed metadata into images generated by ChatGPT and its API to help social platforms and content publishers more easily detect deepfakes. In the same month, Meta announced a new approach to identify and label AI-generated content on Facebook, Instagram and Threads.
These new regulations, governance techniques, and standards are a great first step toward increasing trust in AI and paving the way for its responsible adoption. But much more work needs to be done across the public and private sectors, especially in light of viral events that have increased public anxiety about AI, looming elections around the world, and growing concerns about AI security in the enterprise.
We are at a critical moment in the trajectory of AI adoption, where trust in the technology will make or break its progress. Only with greater transparency and trust will businesses embrace AI, and only then will their customers benefit from AI-powered products and experiences that delight rather than annoy.
This article was produced as part of TechRadarPro's Expert Insights channel, featuring the best and brightest minds in technology today. Opinions expressed here are those of the author and not necessarily those of TechRadarPro or Future plc. If you're interested in contributing, find out more here. https://www.techradar.com/news/submit-your-story-to-techradar-pro
