Overview of LMQL: A Programming Language for Interacting with Large Language Models
LMQL (Language Model Query Language) is a new programming language that allows developers to interact with large language models in an intuitive way. It was created by researchers at the Secure, Reliable, and Intelligent Systems Lab (SRI Lab) at ETH Zurich.
LMQL enables users to query large language models like GPT-3 using a combination of natural language and Python code. This approach, called Language Model Programming (LMP), makes it easier to adapt language models for different tasks while retaining high accuracy.
How Does LMQL Work?
With LMQL, users write scripted prompts that contain both natural language instructions and Python code. For example:
```lmql
"Here is a prompt about [TOPIC]. Please summarize the key points about [TOPIC] in a few bullet points:"
topic = TOPIC.strip().lower()
"- [POINT_1]"
"- [POINT_2]"
```
The Python code handles text processing and control flow, while the natural language provides instructions and prompt examples for the language model.
LMQL automatically converts the query into an efficient inference procedure that minimizes expensive calls to the underlying language model API. It does this through novel partial evaluation techniques.
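To make this concrete, the query below sketches how a constraint can shape decoding directly (written in LMQL's classic decoder-clause syntax; the model name, prompt text, and variable names are illustrative, not taken from the article):

```lmql
argmax
    "Review: The battery died after two days.\n"
    "Q: What is the sentiment of this review?\n"
    "A: [SENTIMENT]"
from
    "openai/text-davinci-003"
where
    SENTIMENT in ["positive", "neutral", "negative"]
```

Because `SENTIMENT` may only take one of three values, LMQL can restrict the model's output at decoding time rather than generating freely and re-prompting until a valid answer appears, which is one way such queries avoid unnecessary API calls.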
Key Features and Benefits
- Scripted Prompts - Combine natural language and Python code for expressive prompting
- Constraints - Specify logical constraints over the language model's output for robustness
- Control Flow - Use Python loops, conditionals, functions, etc. to control generation
- Efficiency - Queries are optimized end-to-end, reducing the number of language model calls by up to 80%
- Debugging - Inspect interpreter state and model hypotheses during query execution
- Portability - Abstract over model internals so queries work across different language models
- Interaction - Integrate user input, calculators, APIs and more into the prompting process
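Several of these features can appear in a single query. The sketch below combines a Python loop (control flow) with `where` constraints on length and stopping behavior; as above, it uses LMQL's classic syntax, and the model name and prompt are assumptions for illustration:

```lmql
argmax
    "A list of things not to forget when travelling:\n"
    for i in range(4):
        "- [THING]\n"
from
    "openai/text-davinci-003"
where
    STOPS_AT(THING, "\n") and len(THING) < 50
```

Each loop iteration generates one list item, and the constraints ensure every item ends at a newline and stays short, without any manual post-processing of the model's output.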
Use Cases
LMQL can be used for a wide variety of language model tasks:
- Question Answering - Ask questions and retrieve answers from the model
- Summarization - Summarize documents, articles or other content
- Data Augmentation - Generate additional training examples for machine learning
- Creative Writing - Use the language model creatively for stories, lyrics, scripts, etc.
- Chatbots - Build conversational agents that can chat with users
- Search - Integrate search over external data sources like Wikipedia
- Calculations - Perform mathematical calculations as part of the reasoning process
The declarative nature of LMQL makes it easy to adapt language models for new use cases without having to implement complex prompting strategies from scratch.
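As a small question-answering sketch, a query can also constrain the answer to a data type, so the result is immediately usable in surrounding Python code (again in classic LMQL syntax, with an illustrative model name and question):

```lmql
argmax
    "Q: In what year was Albert Einstein born?\n"
    "A: [YEAR]"
from
    "openai/text-davinci-003"
where
    INT(YEAR)
```

The `INT(YEAR)` constraint forces the model to produce an integer, which removes the need to parse or validate free-form text after generation.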
Getting Started
LMQL is available through:
- Playground IDE - Web-based editor for trying out LMQL
- Python Package - run `pip install lmql` to use LMQL locally
The documentation provides guides and API references to help you get started.
LMQL is free and open-source software provided under the MIT license. The researchers welcome contributions and feedback via the GitHub repository.
Conclusion
LMQL makes it simpler for developers to take advantage of large language models like GPT-3. By combining natural language and Python, it provides an intuitive way to adapt these models for many different tasks. The novel techniques used by LMQL optimize queries for efficiency while retaining accuracy. If you work with large language models, LMQL is worth exploring as a way to boost your productivity.