Low-level Guidance (llguidance)


[Figure: performance results from MaskBench]


  • 2025-04-11 integration merged into Chromium
  • 2025-03-25 integration merged into vLLM (v0.8.2)
  • 2025-02-26 integration merged into SGLang (v0.4.4)
  • 2025-02-01 integration merged into llama.cpp (b4613)
  • 2025-01-21 JSONSchemaBench released, including paper and MaskBench
  • 2025-01-07 Guidance v0.2.0 released, using llguidance as the grammar engine

About

This library implements constrained decoding (also called constrained sampling or structured outputs) for Large Language Models (LLMs). It can enforce an arbitrary context-free grammar on the output of an LLM and is fast - on the order of 50μs of CPU time per token (for a 128k tokenizer) with negligible startup costs.

The following grammar formats are supported:

  • JSON schemas
  • regular expressions
  • context-free grammars in a Lark-like format
  • the library's internal JSON-based format

The internal format is the most powerful (though the Lark-like format is catching up, and there are plans to convert the libraries to use it) and can be generated by higher-level libraries such as Guidance.

The library can be used from Rust, from C/C++ (see the parser directory and c_sample), and from Python.
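
For example, a JSON schema can serve directly as the constraint: an engine that embeds llguidance will then only sample tokens that keep the output parseable as JSON matching the schema. A minimal illustration (the schema itself is just a hypothetical example):

```python
import json

# A small JSON schema; an llguidance-based engine enforces that the LLM's
# output is valid JSON matching it. This particular schema is illustrative.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}
print(json.dumps(schema, indent=2))
```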

Integrations

The library is currently integrated in:

  • Guidance - a library for interacting with LLMs
  • llama.cpp - available via the -DLLAMA_LLGUIDANCE=ON CMake option; llama.cpp can also be used through the Guidance Python package
  • Chromium - merged; to be used for JSON schema enforcement for window.ai in Chromium-based browsers
  • SGLang - use --grammar-backend llguidance; when passing Lark grammars, make sure to prefix them with %llguidance {}, just as in llama.cpp (see the sketch after this list)
  • vLLM - V0 PR and V1 PR
  • LLGTRT - an OpenAI-compatible REST server using NVIDIA's TensorRT-LLM
  • mistral.rs
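
A minimal sketch of such a Lark grammar with the required %llguidance {} header (the grammar body is hypothetical; it restricts the model to answering "yes" or "no"):

```python
# Sketch of a Lark-style grammar as passed to SGLang (with
# --grammar-backend llguidance) or llama.cpp. The "%llguidance {}" header
# marks it as an llguidance Lark grammar; the rule itself is a toy example.
GRAMMAR = r"""
%llguidance {}

start: "yes" | "no"
"""
```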

The integration is ongoing in:

Technical details

Given a context-free grammar, a tokenizer, and a prefix of tokens, llguidance computes a token mask - a set of tokens from the tokenizer - that, when added to the current token prefix, can lead to a valid string in the language defined by the grammar. Mask computation takes approximately 50μs of single-core CPU time for a tokenizer with 128k tokens. While this timing depends on the exact grammar, it holds, for example, for grammars derived from JSON schemas. There is no significant startup cost.

The library implements a context-free grammar parser using Earley's algorithm on top of a lexer based on derivatives of regular expressions. Mask computation is achieved by traversing the prefix tree (trie) of all possible tokens, leveraging highly optimized code.
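
To make the idea concrete, here is a deliberately naive sketch of mask computation (toy code, not the llguidance API): the real implementation walks the token trie with the Earley parser and lexer so that shared token prefixes are checked only once, instead of re-checking every token from scratch as below.

```python
import re

# Toy vocabulary and a toy "grammar": the output must be a string of digits.
VOCAB = ["0", "1", "12", "a", "3b", "42"]      # hypothetical tokenizer
PREFIX_OK = re.compile(r"[0-9]*\Z")            # can this still become valid?

def compute_mask(prefix: str) -> list[bool]:
    """mask[i] is True iff prefix + VOCAB[i] can still lead to a valid string."""
    return [bool(PREFIX_OK.match(prefix + tok)) for tok in VOCAB]

print(compute_mask("7"))   # [True, True, True, False, False, True]
```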

Grammars can also be used to speed up decoding via fast-forward tokens: whenever the grammar permits exactly one next token, that token can be appended without running the model at all.
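
A minimal sketch of the fast-forward idea (toy code, not the llguidance API), assuming a grammar that forces the literal prefix '{"answer": ' before any free-form content:

```python
# Hypothetical grammar: output must start with the literal '{"answer": '.
FORCED = '{"answer": '
VOCAB = ['{', '"answer', '": ', 'yes', 'no']   # toy tokenizer

def allowed(prefix: str) -> list[str]:
    # Tokens that keep the output on (or extend past) the forced literal.
    if len(prefix) >= len(FORCED):
        return VOCAB  # past the literal: anything goes in this toy grammar
    return [t for t in VOCAB
            if FORCED.startswith(prefix + t) or (prefix + t).startswith(FORCED)]

def fast_forward(prefix: str = "") -> str:
    # While exactly one token is allowed, append it without a model call.
    while len(toks := allowed(prefix)) == 1:
        prefix += toks[0]
    return prefix

print(repr(fast_forward()))  # '{"answer": ' produced with zero forward passes
```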

Comparison and performance

See MaskBench in JSON Schema Bench for detailed performance comparisons.

LM-format-enforcer and llama.cpp grammars are similar to llguidance in that they dynamically build token masks for every step of the decoding process. Both are significantly slower - the former because it is implemented in pure Python, and the latter due to the lack of a lexer and the use of a backtracking parser, which, while elegant, is inefficient.

Outlines builds an automaton from constraints and then pre-computes token masks for all automaton states, potentially making sampling fast but inherently limiting constraint complexity and introducing significant startup cost and memory overhead. Llguidance computes token masks on the fly and has essentially no startup cost. The lexer's automata in llguidance are built lazily and are typically much smaller, as the context-free grammar imposes the top-level structure.

XGrammar follows an approach similar to llama.cpp (an explicit stack-based, character-level parser) with additional pre-computation of certain token masks, similar to Outlines. The pre-computation often runs into seconds, and sometimes minutes. When the pre-computation suits a given input, masks are computed quickly (under 8μs in half of the masks we tested); when it doesn't, mask computation times can run to tens or hundreds of milliseconds.

In llguidance, the full mask computation for a typical JSON schema takes about 1.5ms (for a 128k tokenizer). However, the "slicer" optimization very often applies, and thus the average mask computation in JSON Schema Bench (2.5M tokens, 10k schemas) is under 50μs, with less than 1% of masks taking longer than 1ms, and 0.001% taking longer than 10ms (but still shorter than 30ms). The optimization doesn't involve any significant pre-computation.

Thus, with 16 cores and a 10ms forward pass, llguidance can handle batch sizes up to 3200 without slowing down the model. (Note that a 10ms forward pass for small batch sizes typically increases to 20ms+ for batch sizes of 100-200.)
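
The batch-size figure follows directly from the numbers above; a back-of-the-envelope check:

```python
forward_pass_s = 10e-3   # 10 ms forward pass
mask_time_s = 50e-6      # ~50 us average mask computation
cores = 16

masks_per_core_per_pass = forward_pass_s / mask_time_s   # 200 masks per core
print(cores * masks_per_core_per_pass)                   # 3200.0
```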

Building

If you just need the C or Rust library (llguidance), check the parser directory.

For Python bindings:

  • install Python 3.9 or later; you will very likely need a virtual env or conda
  • run ./scripts/install-deps.sh
  • to build (and rebuild after any changes), run ./scripts/test-guidance.sh

This builds the Python bindings for the library and runs the tests (most of which live in the Guidance repo, which the script will clone).
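
After a successful build, a quick smoke test is to import the bindings (assuming the package installs under the name llguidance, as built from the python directory):

```python
# Verify the freshly built bindings are importable from the current env.
import llguidance
print(llguidance.__file__)
```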

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.