ServiceNow researchers propose a retrieval-augmented LLM approach that reduces hallucinations and enables generalization in structured output tasks

Machine Learning


https://arxiv.org/abs/2404.08189

Large language models (LLMs) have made it economical to perform tasks with structured output, such as converting natural language to code or SQL. LLMs are also used to transform natural language into workflows: collections of actions with logical connections between them. Workflows improve worker productivity by encapsulating actions that can be executed automatically under specified conditions.

Generative artificial intelligence (GenAI) in particular has demonstrated strong capabilities on tasks such as generating natural language from prompts. However, one major drawback is that it often produces false or nonsensical output, known as hallucinations. As LLMs grow in importance, resolving this limitation becomes increasingly critical for GenAI systems to achieve broad acceptance and real-world use.

To address hallucinations and build an enterprise application that translates natural language requirements into workflows, a team of ServiceNow researchers created a system that leverages retrieval-augmented generation (RAG), a method known to improve the quality of structured output produced by GenAI systems.
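The paper does not publish its implementation, but the core idea of retrieval-augmented workflow generation can be sketched in a few lines. Everything below is a hypothetical illustration — the action catalog, the bag-of-words retriever, and all function names are invented, and a production system would use a trained dense retriever rather than word-overlap similarity:

```python
import math
from collections import Counter

# Toy catalog standing in for an enterprise platform's workflow actions
# (hypothetical names, not from the paper).
CATALOG = {
    "create_ticket": "create a new incident ticket with a description",
    "send_email": "send an email notification to a user or group",
    "assign_agent": "assign the ticket to an available support agent",
    "close_ticket": "close a resolved incident ticket",
}

def bow(text):
    # Bag-of-words vector; a stand-in for a trained retriever's embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    # Rank catalog actions by similarity to the requirement; keep the top k.
    q = bow(query)
    ranked = sorted(CATALOG, key=lambda n: cosine(q, bow(CATALOG[n])), reverse=True)
    return ranked[:k]

def build_prompt(requirement):
    # Grounding the LLM in retrieved, *valid* actions is what discourages
    # hallucinated step names in the generated workflow.
    actions = retrieve(requirement)
    listing = "\n".join(f"- {n}: {CATALOG[n]}" for n in actions)
    return (
        "Generate a workflow as a JSON list of steps.\n"
        "Use ONLY these actions:\n" + listing +
        "\nRequirement: " + requirement
    )

print(build_prompt("email user when incident ticket closed"))
```

The prompt that reaches the LLM then only mentions actions that actually exist, which is the grounding effect RAG provides here.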

The team reports that including RAG in their workflow generator significantly reduced hallucinations, increasing the reliability and usability of the generated workflows. A major advantage of the method is its ability to help the LLM generalize to out-of-domain contexts: the system can process natural language input that differs from the patterns it was trained on, increasing its adaptability and usefulness across a variety of situations.
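One reason hallucination is tractable in this setting is that structured output can be checked mechanically: any generated step whose action name is absent from the platform's catalog is, by definition, hallucinated. The catalog and JSON step format below are invented for illustration and are not the paper's actual schema:

```python
import json

# Hypothetical set of actions the platform actually supports.
VALID_ACTIONS = {"create_ticket", "send_email", "assign_agent", "close_ticket"}

def hallucinated_steps(workflow_json):
    """Return action names in a generated workflow that do not exist
    in the catalog, i.e. hallucinated steps."""
    steps = json.loads(workflow_json)
    return [s["action"] for s in steps if s["action"] not in VALID_ACTIONS]

# A model output with one real action and one invented one.
generated = '[{"action": "close_ticket"}, {"action": "archive_user"}]'
print(hallucinated_steps(generated))  # ['archive_user']
```

A check of this kind makes "fewer hallucinations" a directly measurable property of the generator's output rather than a subjective judgment.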

The team also showed that by pairing a small, well-trained retriever with the LLM, the LLM itself could be scaled down without compromising performance, a result made possible by the successful use of RAG. The smaller model size lets LLM-based systems be deployed with fewer resources, an important consideration in real-world applications where computing resources may be scarce.

The team summarizes their main contributions as follows:

  1. The team demonstrated that RAG can be applied to tasks beyond text generation, showing that workflows can be generated effectively from plain-language requirements.
  2. Applying RAG was shown to significantly reduce false outputs and hallucinations, helping produce better-structured, higher-quality output that more closely reflects the intended workflow.
  3. By incorporating RAG into the system, the team showed that a smaller LLM can be used in conjunction with a compact retriever model without compromising performance. This optimization reduces resource requirements and improves the deployment efficiency of LLM-based workflow generation systems.

In conclusion, this approach is a significant step toward overcoming the hallucination limitations of GenAI. By using RAG to build a reliable and effective method for creating workflows from natural language requirements, while also reducing the required model size, the team has opened the door to wider adoption of GenAI systems in enterprise settings.


Check out the paper. All credit for this research goes to the researchers of this project.



Tanya Malhotra is a final-year student at the University of Petroleum and Energy Studies, Dehradun, pursuing a Bachelor's degree in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a data science enthusiast with strong analytical and critical thinking skills, and a keen interest in learning new skills, leading groups, and managing work in an organized manner.






