The author is Justin Tauber, General Manager, Innovation and AI Culture, Salesforce ANZ.
AI promises to transform the way we do business and free up our most precious resource: time. This is especially true for small and medium-sized businesses, where customer-facing staff must understand complex products, policies, and data with limited time and support.
AI-powered customer engagement enables more timely, personalized, and intelligent interactions. But business cannot operate without trust, so we must all learn to harness the power of AI safely and ethically.
However, according to the AI Trust Index, 89% of Australian office workers currently do not trust AI to function without human oversight, and 62% are concerned that humans will lose control over AI.
Small and medium-sized businesses need to build capability and confidence in how and when to use AI in a trustworthy manner. Companies that combine the best of human and machine intelligence will be successful in their AI transformation.
To foster trust and build confidence in this nascent technology, businesses must focus on the employee experience of AI, integrating staff early into decision-making, output improvement, and feedback. Generative AI produces better results when humans are not simply “in the loop” but leading the partnership: AI works best when humans are in charge.
One strategy is to remind employees of AI's strengths and weaknesses in the flow of work. Revealing the confidence value (the degree to which the model believes its output is correct) lets employees treat the model's responses with the appropriate level of caution. Content with low scores still has value, but it warrants deeper human scrutiny. Configuring prompt templates for staff ensures more consistent inputs and more predictable outputs. Explaining why and how the AI system created the content, and citing its sources, also addresses trust and accuracy concerns.
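As a minimal sketch of how confidence-based routing might work in practice (every name here, such as `Draft`, `route_draft`, and `REVIEW_THRESHOLD`, is illustrative, not part of any real Salesforce API):

```python
# Minimal sketch of confidence-based routing for AI-generated drafts.
# All names (Draft, route_draft, REVIEW_THRESHOLD) are illustrative,
# not part of any real Salesforce API.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # drafts below this score get deeper human scrutiny

@dataclass
class Draft:
    text: str           # the AI-generated content shown to staff
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    sources: list[str]  # citations displayed alongside the draft

def route_draft(draft: Draft) -> str:
    """Decide how much human review an AI draft needs before it is used."""
    if draft.confidence >= REVIEW_THRESHOLD:
        return "quick-check"  # staff skim and lightly edit, then send
    return "deep-review"      # staff verify claims against the cited sources

# A low-confidence draft still has value, but is flagged for closer review.
draft = Draft(
    text="Your order #1042 is on track to arrive on Thursday.",
    confidence=0.62,
    sources=["orders/1042/shipping-status"],
)
print(route_draft(draft))  # -> deep-review
```

The design point is simply that the score is surfaced to the person, not hidden from them: the human decides what to do with a low-confidence draft.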
Another strategy is to focus on use cases that strengthen trust with customers. The best cases are those where productivity and trust-building benefits align: for example, using generative AI to proactively reassure anxious customers that their orders will arrive on time. Another is AI-assisted fraud detection and prevention, where AI systems flag suspicious transactions for human analysts, who investigate the anomalies and feed their verdicts back to improve the system's future accuracy.
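A rough illustration of that human-in-charge review loop, with entirely hypothetical names (`score_transaction`, `review_queue`, `record_analyst_verdict`) standing in for a real fraud system:

```python
# Illustrative human-in-charge fraud review loop; every name here
# (score_transaction, review_queue, record_analyst_verdict) is hypothetical.
FLAG_THRESHOLD = 0.7

def score_transaction(txn: dict) -> float:
    """Stand-in for a trained fraud model; returns a risk score in [0, 1]."""
    return min(txn["amount"] / 10_000, 1.0)  # toy heuristic for demo only

def review_queue(transactions: list[dict]) -> list[dict]:
    """The AI flags suspicious transactions; humans make the final call."""
    return [t for t in transactions if score_transaction(t) >= FLAG_THRESHOLD]

def record_analyst_verdict(txn: dict, is_fraud: bool, labels: list[dict]) -> None:
    """Analyst decisions become labelled data used to retrain the model."""
    labels.append({"txn_id": txn["id"], "fraud": is_fraud})

labels: list[dict] = []
txns = [{"id": 1, "amount": 120.0}, {"id": 2, "amount": 9_500.0}]
for txn in review_queue(txns):                  # the AI narrows the queue...
    record_analyst_verdict(txn, False, labels)  # ...a human makes the call
print(labels)  # feedback that improves future detection accuracy
```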
Our role at Salesforce is to ensure that the AI solutions we develop are human-driven. That means respecting ethical guardrails in the development of our AI products. But we go a step further, creating capabilities and solutions that reduce the cost of responsible adoption for our customers: safe AI products.
Just as power sockets make electricity safe to use, safe AI products help businesses harness the power of AI without exposing themselves to significant risk. Salesforce AI products are built with reliability and trustworthiness in mind and embody our Trusted AI Principles, making it easier for customers to deploy them in an ethical and thoughtful way.
For businesses, especially small and medium-sized businesses with limited resources, it's not always practical or fair to ask time-poor employees to refine all of the AI-generated outputs. That's why it's important to give businesses strong system-wide controls and intuitive interfaces that allow people to make timely, responsible decisions about how and when to test and refine responses or escalate issues.
We've been investing in ethical AI for nearly a decade, focusing on principles, policies, and protections for our company and our customers. We've introduced new guidelines for the responsible development of generative AI that expand on our core Trusted AI principles, updated safeguards in our Acceptable Use Policy, and developed the Einstein Trust Layer to protect customer data from external LLMs.
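As a generic sketch of the data-masking idea behind such a layer (this is not the Einstein Trust Layer's actual implementation, only the broad concept), personally identifiable information can be swapped for placeholders before a prompt ever reaches an external model:

```python
# Generic sketch of masking customer data before it reaches an external LLM.
# This is NOT the Einstein Trust Layer's implementation, only the broad idea.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d \-]{7,}\d"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with placeholders; return a map to restore it later."""
    restore: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            restore[token] = match
            text = text.replace(match, token)
    return text, restore

prompt = "Email jane@example.com about her order; phone +61 400 000 000."
masked, restore = mask(prompt)
print(masked)  # the external model sees placeholders, never the raw PII
# After the model responds, `restore` maps placeholders back to real values.
```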
AI is still in its early stages and these guidelines are constantly evolving, but we are committed to working closely with customers and regulators to learn, improve, and make trustworthy AI a reality for all.