Large language models have taken the artificial intelligence community by storm. Their recent influence spans a wide range of industries, including healthcare, finance, education, and entertainment. Well-known large-scale language models such as GPT, DALL-E, and BERT perform extraordinary tasks and make life easier. While DALL-E 2 can create images corresponding to simple textual descriptions, GPT-3 can write convincing essays, compose complete code, summarize long passages of text, and answer questions or generate content in a human-like way from just short natural language prompts. These models are driving rapid paradigm shifts in artificial intelligence and machine learning.
Recently, a team of researchers introduced LMQL, an open-source programming language and platform for language model interaction. LMQL, short for Language Model Query Language, improves the functionality of large language models (LLMs) by combining prompts, constraints, and scripting. A declarative, SQL-like language built on top of Python, LMQL extends static text prompts with control flow, constraint-guided decoding, and tool integration. With this kind of scripting, LMQL expresses multi-part prompting flows with very little code; a sketch of what such a query looks like follows below.
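The following is a minimal sketch of an LMQL query in the style of the examples from the LMQL documentation. The decoder keyword, the [VARIABLE] hole, and the from/where clauses reflect the language's documented structure, while the prompt text and model identifier are illustrative placeholders, and the exact syntax may differ between LMQL versions.

```lmql
# Minimal LMQL query: a decoder clause (argmax), a prompt containing an
# [ANSWER] hole to be filled by the model, the model to query, and
# declarative constraints on the generated value.
argmax
    "Q: What is the capital of France?\n"
    "A: [ANSWER]"
from
    "openai/text-davinci-003"
where
    STOPS_AT(ANSWER, ".") and len(TOKENS(ANSWER)) < 20
```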
The researchers enable Language Model Programming (LMP) through LMQL. LMP generalizes language model prompting from pure text prompts to a combination of text prompting and scripting. LMQL leverages the constraints and control flow of an LMP prompt to generate an efficient inference procedure. These high-level, logical constraints are translated into token-level masks by means of an evaluation semantics and are strictly enforced at generation time, as illustrated below.
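To make the combination of scripting and prompting concrete, here is a hedged sketch modeled on the list-generation example from the LMQL paper: a Python loop drives a multi-part prompt, and the where clause constrains each generated value so that decoding only considers tokens that keep the constraints satisfiable. The list topic, variable name, and model identifier are illustrative.

```lmql
# Scripted prompting: Python control flow (a for-loop) interleaved with
# prompt text; each [THING] hole is decoded under the constraints below,
# which LMQL enforces via token masks during generation.
argmax
    "A list of things not to forget when going to the beach:\n"
    for i in range(5):
        "- [THING]"
from
    "openai/text-ada-001"
where
    STOPS_AT(THING, "\n") and len(TOKENS(THING)) < 10
```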
The team introduced LMQL to avoid the high cost of re-querying and validating generated text. This allows LMQL to produce text close to the desired output on the first attempt, without the need for subsequent iterations. LMQL constraints also let users guide or steer the text generation process according to their specifications, for example ensuring that the generated text follows certain grammatical or syntactic rules, or that certain words or phrases are avoided.
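For example, a constraint can restrict a variable to a fixed set of labels or force a numeric format, so the desired structure is enforced during decoding rather than checked and re-queried afterwards. The sketch below is illustrative: the membership and INT constraint forms follow the LMQL documentation, while the prompt and model identifier are placeholders.

```lmql
# Constraint-guided steering: SENTIMENT may only decode to one of three
# labels, and STARS must decode to a (short) integer, removing the need
# to validate or re-query the output afterwards.
argmax
    "Review: The battery died after two days.\n"
    "Sentiment: [SENTIMENT]\n"
    "Star rating (1-5): [STARS]"
from
    "openai/text-davinci-003"
where
    SENTIMENT in ["positive", "neutral", "negative"] and INT(STARS) and len(TOKENS(STARS)) < 3
```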
The researchers show how LMQL can capture a variety of state-of-the-art prompting methods, including interactive flows that are difficult to implement with existing APIs. Their evaluation shows that LMQL significantly reduces compute and the cost of paid APIs, yielding cost savings of 13-85%, while maintaining or improving accuracy on many downstream tasks.
LMQL allows users to express a wide range of common and advanced prompting techniques simply and concisely. It integrates with Hugging Face's Transformers, the OpenAI API, and LangChain. Developer resources are available at lmql.ai, and a browser-based Playground IDE is available for experimentation; a rough sketch of the Python integration follows below.
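As a rough sketch of what that Python integration can look like, the snippet below uses the lmql package's @lmql.query decorator to embed a query in ordinary Python code. The decorator exists in the lmql package, but the exact calling convention, backend configuration, and model identifier shown here are assumptions that may vary between LMQL versions.

```python
import lmql

# Sketch: an LMQL query embedded in Python via the @lmql.query decorator.
# The query body reuses the same argmax/from/where structure as a
# standalone LMQL program (details may differ by LMQL version).
@lmql.query
def capital_of_france():
    '''lmql
    argmax
        "Q: What is the capital of France?\n"
        "A: [ANSWER]"
    from
        "openai/text-davinci-003"
    where
        STOPS_AT(ANSWER, ".")
    '''

# Depending on the LMQL version, the decorated function is called (or
# awaited) to run the query against the configured model backend.
print(capital_of_france())
```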
In summary, LMQL appears to be a promising development: the evaluation shows it to be a powerful tool for improving the efficiency and accuracy of language model programming, making it easier for users to achieve their desired results with fewer resources.
Tanya Malhotra is a final-year student at the University of Petroleum and Energy Studies in Dehradun, pursuing a Bachelor of Science in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
A data science enthusiast with good analytical and critical thinking skills, she has a keen interest in learning new skills, leading groups, and managing work in an organized manner.