How to create generative AI trust for enterprise success



Computer scientist Yejin Choi made a seemingly contradictory statement in a 2023 TED Talk: “Today’s AI is incredibly smart, yet incredibly stupid.” How can something so intelligent also be so stupid?

AI, including generative AI, is not built to provide accurate, context-specific information for any particular task; in fact, measuring the models that way is a fool’s errand. Think of these models instead as prioritizing what seems relevant based on prior examples and generating responses that are plausible rather than verified.

That is why, while generative AI continues to captivate us with its creativity, it often falls short of B2B requirements. Sure, it’s clever when ChatGPT spins out social media copy on demand, but the downside is that generative AI hallucinates: the model produces false information that is presented as true. No matter what industry your company is in, these critical flaws are bad for business.

The key to enterprise-ready generative AI lies in rigorously structuring your data so that it provides the right context, and in using that data to train highly sophisticated large language models (LLMs). A well-tuned balance of a sophisticated LLM, pragmatic automation and hand-picked human checkpoints forms a powerful anti-hallucination framework in which generative AI delivers correct results and reveals true B2B enterprise value.


For businesses looking to tap into the endless possibilities of generative AI, here are three essential frameworks to incorporate into your technology stack.

Build a strong anti-hallucination framework

Got It AI, a company that detects generative AI falsehoods, ran a test and found that ChatGPT’s LLM produced false responses roughly 20% of the time. A failure rate that high will not meet business goals. To prevent hallucinations, generative AI cannot work alone: the system must be trained on high-quality data to derive its output and monitored regularly by humans. Over time, these feedback loops help correct errors and improve model accuracy.

Incorporating generative AI’s fluent writing into context-oriented, results-driven systems is essential. The initial stage of an enterprise system is a blank slate that captures information tailored to the enterprise and its specific goals. The intermediate phase is the heart of a well-designed system and includes rigorous LLM fine-tuning. OpenAI describes model fine-tuning as “a powerful technique for creating new models that are specific to your use case.” It works by taking a general-purpose generative model and training it on additional, case-specific examples to get better results.
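As a rough illustration, here is a minimal sketch of what fine-tuning on case-specific examples could look like with the OpenAI Python SDK. The training file name, example content and model choice are assumptions for illustration, not details from the article.

```python
# Minimal sketch: fine-tuning on enterprise-specific examples via the OpenAI SDK.
# The file name, example content and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of the JSONL file holds one case-specific example, e.g.:
# {"messages": [{"role": "system", "content": "You are a support agent for Acme."},
#               {"role": "user", "content": "Can I return an opened item?"},
#               {"role": "assistant", "content": "Yes, within 30 days with a receipt."}]}
training_file = client.files.create(
    file=open("enterprise_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job that adapts a general model to the enterprise examples.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

The resulting model ID can then be used in place of the base model wherever the system generates customer-facing text.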

In this phase, companies can choose between hard-coded automation and a fine-tuned LLM. The choreography may vary from company to company, but leveraging each technology for its strengths ensures the most context-oriented output.

Then, with the backend in place, it’s time to unleash generative AI in your external-facing communications. Answers are produced quickly and are not only highly accurate, but also carry a personal tone without suffering from empathy fatigue.

Align technology and human checkpoints

By leveraging a variety of technologies, any company can provide the structured facts and context needed to let LLMs do what they do best. First, leaders should identify tasks that are labor-intensive for humans but easy to automate, or vice versa. Next, consider where AI outperforms both. In short, don’t use AI when simpler solutions such as automation or human effort would suffice.

In a conversation with OpenAI CEO Sam Altman at Stripe Sessions in San Francisco, Stripe founder John Collison said that Stripe uses OpenAI’s GPT-4 everywhere people are doing manual work or working through a set of tasks. Businesses should use automation for tedious tasks such as aggregating information and reviewing company-specific documents. Clear, black-and-white obligations, such as return policies, can also be hard-coded.

Only after building this strong foundation is a business ready for generative AI. Because the input is highly curated before generative AI touches it, the system is set up to address more complex problems accurately. Humans in the loop remain important to validate model output, provide feedback and correct results when required.
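To make that division of labor concrete, here is a hypothetical sketch in which hard-coded rules handle black-and-white cases such as return policies, a fine-tuned LLM drafts answers to open-ended questions, and a human checkpoint reviews low-confidence drafts before they reach a customer. The function names, confidence score and threshold are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical routing sketch: automation first, then the fine-tuned LLM,
# with a human checkpoint for low-confidence drafts. All names, scores and
# thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    confidence: float  # e.g. from a verifier model or retrieval-overlap score


RETURN_POLICY = "Items can be returned within 30 days with a receipt."


def generate_with_fine_tuned_llm(message: str) -> Draft:
    # Placeholder for a call to the fine-tuned model; a real system would
    # return the model's answer plus some confidence or verification score.
    return Draft(text=f"[model answer to: {message}]", confidence=0.6)


def send_to_human_review(draft: Draft) -> str:
    # Placeholder for queuing the draft to a human agent, who approves or
    # corrects it; the correction also becomes feedback for the model.
    return draft.text


def handle_request(message: str) -> str:
    # 1. Hard-coded automation for clear, black-and-white obligations.
    if "return" in message.lower():
        return RETURN_POLICY

    # 2. Fine-tuned LLM for open-ended, context-specific questions.
    draft = generate_with_fine_tuned_llm(message)

    # 3. Human checkpoint: low-confidence drafts are validated before sending.
    if draft.confidence < 0.8:
        return send_to_human_review(draft)
    return draft.text


if __name__ == "__main__":
    print(handle_request("What is your return policy?"))
    print(handle_request("Can you help me reconfigure my account?"))
```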

Measure results through transparency

Right now, LLMs are a black box. On the release of GPT-4, OpenAI said: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.” Although some progress has been made in reducing model opacity, how these models work is still largely a mystery. Given the lack of standardized effectiveness measures across the industry, it’s unclear not only what’s under the hood, but also how the models differ from one another beyond cost and how you interact with them.

Now, some companies are changing this by bringing transparency to generative AI models, and standardized effectiveness measures will benefit the companies downstream. Companies like Gentrace link generative AI output to customer feedback so anyone can see how well the LLM performed. Others, such as Paperplane.ai, go a step further, pairing generated AI data with user feedback so leaders can assess the quality, speed and cost of a deployment over time.
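The underlying mechanics can be as simple as logging each generation with its latency and cost and joining later user feedback to it. The sketch below assumes a hypothetical JSONL log and field names; it is not the actual Gentrace or Paperplane.ai API.

```python
# Hypothetical sketch: log each generation with latency and cost, then attach
# user feedback so quality, speed and cost can be tracked over time.
# The schema and file path are illustrative assumptions.
import json
import time
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class GenerationRecord:
    request_id: str
    prompt: str
    response: str
    latency_ms: float
    cost_usd: float
    user_rating: Optional[int] = None  # filled in once feedback arrives


LOG_PATH = "generation_log.jsonl"


def log_generation(request_id: str, prompt: str, response: str,
                   started_at: float, cost_usd: float) -> None:
    record = GenerationRecord(
        request_id=request_id,
        prompt=prompt,
        response=response,
        latency_ms=(time.time() - started_at) * 1000,
        cost_usd=cost_usd,
    )
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


def record_feedback(request_id: str, rating: int) -> None:
    # In practice, feedback would be joined to the original record in a
    # database; here it is simply appended as a separate event.
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"request_id": request_id, "feedback": rating}) + "\n")
```

Aggregating such records over time gives leaders a view of accuracy, latency and cost per answer, which is exactly the kind of standardized measure the industry currently lacks.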

Liz Tsai is the Founder and CEO of HiOperator.
