Natural Language Processing (NLP) is one of the most fascinating areas in the growing world of artificial intelligence and machine learning. Recent breakthroughs in NLP have produced a number of impressive models that have been adopted across fields such as chat services, virtual assistants, and language translation. The most prominent example is ChatGPT, OpenAI’s conversational dialogue agent, which recently took the world by storm. Its remarkable ability to generate insightful, versatile, and human-like responses to questions from a wide variety of domains allowed it to reach one million users within five days of launch. However, such exceptional models come with certain drawbacks. Most of them are accessible only through APIs, which are often constrained by cost, usage limits, and other technical restrictions. This prevents researchers and developers from realizing the models’ full potential and often slows research and progress in the NLP field. Moreover, refining and improving such models requires large, high-quality chat corpora, which are scarce and often not publicly available.
In response to this problem, a team of researchers from the University of California, San Diego and Sun Yat-sen University in China, working with Microsoft Research, developed a new pipeline that uses ChatGPT to automatically generate a high-quality multi-turn chat corpus. The team’s research also focuses on parameter-efficient tuning strategies for optimizing large language models under constrained computational resources. Using the generated chat corpus, the researchers fine-tuned LLaMA, Meta’s open-source large language model, producing a new model called Baize. This open-source chat model delivers strong performance and runs on a single GPU, making it a viable option for many researchers with limited computational resources.
To build a data collection pipeline for generating a multi-turn chat corpus, the researchers leveraged ChatGPT, which uses the GPT-3.5-Turbo model internally. They applied a technique known as self-chat, in which ChatGPT converses with itself to simulate both the human and the AI side of a dialogue. To this end, the researchers used a conversation format and requirement template that lets the API continuously generate transcripts for both sides. Each template contains a “seed”: essentially a question or phrase that dictates the topic of conversation. The researchers note that seeds drawn from domain-specific datasets can be used to enhance conversational models on specific topics. Baize leverages over 111,000 dialogues generated from ChatGPT, plus an additional 47,000 dialogue exchanges from the healthcare domain. This pipeline provided the foundation for a corpus used to fine-tune LLaMA into Baize, improving its accuracy in multi-turn interactions.
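As a rough illustration of the self-chat idea, the sketch below builds a seed-conditioned prompt and parses a generated transcript into turns. The template wording and the `[Human]`/`[AI]` tags are illustrative assumptions, not the authors’ exact format:

```python
# Sketch of a self-chat prompt builder and transcript parser.
# The template text and the [Human]/[AI] turn markers are assumptions
# for illustration, not the exact template used for Baize.

SELF_CHAT_TEMPLATE = (
    "The following is a conversation between a human and an AI assistant. "
    "They take turns chatting about the topic: '{seed}'. "
    "Human turns start with [Human] and AI turns start with [AI].\n"
    "[Human] Hello!\n[AI] Hi! How can I help you today?\n"
)

def build_prompt(seed: str) -> str:
    """Embed a seed topic into the self-chat template."""
    return SELF_CHAT_TEMPLATE.format(seed=seed)

def parse_transcript(text: str) -> list:
    """Split a generated transcript into (speaker, utterance) turns."""
    turns = []
    for chunk in text.split("["):
        if chunk.startswith("Human]"):
            turns.append(("human", chunk[len("Human]"):].strip()))
        elif chunk.startswith("AI]"):
            turns.append(("ai", chunk[len("AI]"):].strip()))
    return turns
```

In practice, the filled-in prompt would be sent to the GPT-3.5-Turbo API, and the returned transcript parsed into turns for the training corpus.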
The next step was to fine-tune Baize using a parameter-efficient tuning method. Previous studies have shown that conventional fine-tuning requires enormous computational resources and large, high-quality datasets. However, not all researchers have access to unlimited compute, and most such corpora are not publicly available. Parameter-efficient tuning helps in exactly these situations: with it, state-of-the-art language models can be adapted using minimal resources without sacrificing performance. The researchers applied low-rank adaptation (LoRA) to all layers of the LLaMA model, increasing the number of tunable parameters and the model’s adaptation capacity to improve performance.
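To see why LoRA is so much cheaper than full fine-tuning, consider the parameter arithmetic for a single weight matrix. LoRA freezes the original weights W (d_out × d_in) and learns only a low-rank update B·A, with B of shape d_out × r and A of shape r × d_in. The sketch below uses a 4096 × 4096 projection and rank 8 purely as illustrative numbers, not the paper’s exact hyperparameters:

```python
# Illustrative LoRA parameter arithmetic (pure Python, no ML framework).
# The matrix size (4096 x 4096) and rank (r = 8) are example values,
# not the hyperparameters used to train Baize.

def full_finetune_params(d_out: int, d_in: int) -> int:
    """Parameters touched when fully fine-tuning one weight matrix W."""
    return d_out * d_in

def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters LoRA adds: B (d_out x r) plus A (r x d_in)."""
    return d_out * r + r * d_in

full = full_finetune_params(4096, 4096)      # 16,777,216 parameters
lora = lora_trainable_params(4096, 4096, 8)  # 65,536 parameters
```

With these example numbers, LoRA trains roughly 0.4% of the parameters that full fine-tuning would, which is what makes single-GPU adaptation of a model like LLaMA feasible.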
The researchers initially considered using OpenAI’s GPT-4 model to evaluate Baize. However, preliminary investigations showed that GPT-4 is unsuitable for evaluation because it prefers long responses even when they carry little information. As a result, the researchers are now investigating human evaluation, the results of which will be included in a future revision of their research paper. Baize models are currently available with 7B, 13B, and 30B parameters, and a 60B version is to be released soon. An online demo of the model is also accessible. The researchers note that the Baize model and data are intended for research purposes only; because the parent model, LLaMA, carries a non-commercial license, commercial use is strictly prohibited. To further improve performance, the researchers plan to explore incorporating reinforcement learning in future work.
The team’s significant contributions can be summarized as a reproducible pipeline for automatically generating a multi-turn chat corpus and an excellent open-source chat model called Baize. The group hopes their work will encourage the community to pursue further research and explore hitherto uncharted territory in NLP.
Check out the paper, report, and demo. All credit for this research goes to the researchers of this project. Also, don’t forget to join our 17k+ ML SubReddit, Discord channel, and email newsletter, where we share the latest AI research news, cool AI projects, and more.
Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Goa. She is passionate about machine learning, natural language processing, and web development. She enjoys learning more about the technical field by participating in challenges.