The professional services firm recently released its Navigating AI report, which aims to establish a framework for businesses and individuals to transition to trustworthy AI. In an email interview with ITNews, KPMG Australia's Technology, Infrastructure, Government and Healthcare Lead Partner Dean Grundy and KPMG Australia's Data & Cloud Partner Dawal Jaggi elaborated on the report's themes and findings.
However, trust in AI remains low, with only 34% of Australians willing to trust the rapidly advancing technology, according to the Navigating AI report.
Join the AI discussion
Grundy argued that governments need to carefully consider their role in AI. Will the Australian government become a regulator, a broker coordinating government, business, technology and the public to optimize AI outcomes, an AI investor building people-centric solutions, or some combination of all three? That is a decision that needs to be made.
“It will be important for the public sector to participate in AI-related discussions with the business sector, the technology sector, and the broader community,” Grundy said. “In doing so, it can stay on top of risks and opportunities, ensure consistency in AI design and delivery, and target where and how it uses AI solutions to optimize deployment.”
Avoid a one-size-fits-all approach
For businesses and government agencies, trust is essential to realizing return on investment and value from AI initiatives, regardless of the solution, or the size and complexity of the organization involved. “Without trust, adoption by users is limited, and lack of trust is often one of the main reasons AI programs fail to deliver on expectations,” Jaggi said.
Establishing trust means tailoring strategies to the requirements of different stakeholder groups. “For example, for customers to trust AI, they need to know they have full control over their data, with the option to revoke permission or opt out,” Jaggi said.
Internally, KPMG’s partners argued, organizations need to mitigate the risk of employees seeing AI as a threat to their jobs. Employees should then be able to trust the output of an AI system through transparency about the data used and assurance that intellectual property and copyright are respected.
Pragmatic governance is key to the design, development and implementation of AI solutions, starting at the system and design level, Jaggi added. Governance regimes should apply to all business stakeholders, including data scientists, ML engineers, end users of AI systems, executive leadership and the board of directors.
He listed the key questions leaders need to ask about AI:
- Are our governance systems and structures fit for purpose in identifying and managing AI risks?
- Does the leadership team have the right competencies and skills to respond? How and to what extent should external expertise be leveraged?
- Is management properly informed about the strategic opportunities and risks that AI poses to the organization?
Provide AI education across your organization
So what are the key characteristics of leaders in AI utilization? According to KPMG, AI, machine learning and data science leaders avoid siloed or piecemeal approaches and instead offer formal education programs in AI and data literacy.
They build on this by fostering a culture of collaboration and grassroots innovation that impacts the entire business.
KPMG sees AI leaders and disruptors in nearly every industry in Australia, including financial services, mining, healthcare, telecommunications and government.
“The bottom line is that the time lag between the invention of AI and its adoption by companies is rapidly closing,” Jaggi said. “For example, the first chatbot using natural language processing was invented at MIT in 1966. It took almost 50 years before mainstream organizations like Pizza Hut adopted natural language processing and launched chatbots in 2016.
“Yet back in November 2022, when ChatGPT, built on GPT-3.5, disrupted the internet, Nike, Morgan Stanley, Heinz, Nestle and many other companies in major industries had already deployed generative AI to drive market differentiation.”
Creating trustworthy AI frameworks
To build trust in AI, KPMG and the University of Queensland have developed a Trustworthy AI framework. It identifies six aspects that must operate in a connected manner to ensure trust across the AI lifecycle. These aspects are data, algorithms, security, legal issues, ethics, and organizational coordination.
The “Navigating AI” paper applies the framework to create a checklist to guide the adoption and use of AI across any organization, including AI developers, procurers, and users.
Lessons learned from the rise of social media
Both Jaggi and Grundy acknowledged the need to learn lessons from the rise of social media when managing rapidly emerging technologies such as AI. According to Jaggi, the impact of any new technology, not just AI, needs to be examined across its psychological, geopolitical, economic and social dimensions. “We need to ensure that the next shiny new technology is properly evaluated for its impact.”
