```mermaid
graph LR
    CLI_Command_Line_Interface_["CLI (Command Line Interface)"]
    Public_Python_API_API_Entry_Points_["Public Python API (API Entry Points)"]
    Query_Execution_Runtime["Query Execution Runtime"]
    LLM_Integration_Layer["LLM Integration Layer"]
    Scoring_API["Scoring API"]
    CLI_Command_Line_Interface_ -- "invokes" --> Query_Execution_Runtime
    CLI_Command_Line_Interface_ -- "interacts with" --> LLM_Integration_Layer
    CLI_Command_Line_Interface_ -- "may utilize" --> Public_Python_API_API_Entry_Points_
    Public_Python_API_API_Entry_Points_ -- "delegates logic to" --> LLM_Integration_Layer
    Public_Python_API_API_Entry_Points_ -- "relies on" --> Scoring_API
    Public_Python_API_API_Entry_Points_ -- "may utilize" --> Query_Execution_Runtime
    Query_Execution_Runtime -- "interacts with" --> LLM_Integration_Layer
```


## Details

The LMQL system is designed around a clear separation of concerns, with user interactions primarily managed by the CLI (Command Line Interface) and the Public Python API (API Entry Points). Both interfaces serve as entry points, abstracting the underlying complexities. The CLI directly invokes the Query Execution Runtime for command-line operations and interacts with the LLM Integration Layer for model serving. The Public Python API delegates core logic to the LLM Integration Layer for model interactions and relies on the Scoring API for output evaluation. At the heart of the system, the Query Execution Runtime orchestrates the execution of LMQL queries, interacting directly with the LLM Integration Layer to communicate with various language models. This layered architecture ensures modularity, allowing for independent development and maintenance of each component while providing a cohesive and powerful platform for language model interaction.
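The layering described above can be made concrete with a small sketch: encoding the component dependencies from the diagram as a graph and confirming they form an acyclic layering (component names come from the diagram; the check itself is illustrative, not part of LMQL):

```python
from graphlib import TopologicalSorter

# Component dependencies from the diagram: each key depends on its values.
DEPENDENCIES = {
    "CLI": ["Query Execution Runtime", "LLM Integration Layer", "Public Python API"],
    "Public Python API": ["LLM Integration Layer", "Scoring API", "Query Execution Runtime"],
    "Query Execution Runtime": ["LLM Integration Layer"],
    "LLM Integration Layer": [],
    "Scoring API": [],
}

# TopologicalSorter raises CycleError if the layering were circular;
# static_order() yields lower layers before the components that use them.
order = list(TopologicalSorter(DEPENDENCIES).static_order())
print(order)
```

Because the graph is a DAG, each component can be developed and tested against the layers below it without circular coupling, which is the modularity claim made above.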

### CLI (Command Line Interface)

This component provides the command-line interface for direct user interaction with LMQL. It acts as a direct execution and tooling interface, enabling users to perform operations such as running LMQL queries, launching the interactive playground, and serving models. It abstracts the underlying complexities of the LMQL runtime and LLM integrations, offering a simplified entry point for various tasks.

Related Classes/Methods:

### Public Python API (API Entry Points)

This component offers a high-level Python API for programmatic interaction with LMQL. It serves as a facade, providing user-friendly functions like generate and score for common LMQL operations. This is the primary interface for developers who wish to integrate LMQL functionalities into their Python applications, abstracting the complexities of the underlying LLM integration, query execution, and scoring mechanisms.

Related Classes/Methods:
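The facade idea can be illustrated with a minimal plain-Python sketch: thin `generate`/`score` wrappers that delegate to a lower-layer backend. All names here are hypothetical stand-ins, not LMQL's internal structure:

```python
from dataclasses import dataclass

class FakeModelBackend:
    """Hypothetical stand-in for the LLM Integration Layer."""

    def complete(self, prompt: str) -> str:
        return prompt + " ... completion"

    def logprob(self, prompt: str, continuation: str) -> float:
        # Toy heuristic in place of a real model log-probability.
        return -float(len(continuation))

@dataclass
class PublicAPI:
    """Facade: user-friendly generate()/score() that delegate downward."""
    backend: FakeModelBackend

    def generate(self, prompt: str) -> str:
        return self.backend.complete(prompt)

    def score(self, prompt: str, continuations: list[str]) -> list[float]:
        return [self.backend.logprob(prompt, c) for c in continuations]

api = PublicAPI(FakeModelBackend())
print(api.generate("Hello"))                    # "Hello ... completion"
print(api.score("Q: 2+2 =", [" 4", " five"]))   # [-2.0, -5.0]
```

The value of the facade is that callers depend only on `generate`/`score`; the backend, runtime, and scoring machinery behind them can change independently.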

### Query Execution Runtime

This component is responsible for the core execution of LMQL queries. It handles the parsing, compilation, and execution of LMQL language constructs, managing the flow of data and interactions with the underlying language models. It includes functionalities for post-processing, tokenization, and tracing, ensuring efficient and accurate query execution.

Related Classes/Methods:
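A toy parse → compile → execute pipeline, with a post-processing step, gives a feel for the stages named above. This is entirely hypothetical and far simpler than LMQL's real compiler, which translates queries into Python:

```python
# Hypothetical mini-pipeline: parse a "PROMPT where CONSTRAINT" string,
# "compile" it to a callable, then execute it against a model function.

def parse(query: str) -> dict:
    """Split the query into an AST-like dict."""
    prompt, _, constraint = query.partition(" where ")
    return {"prompt": prompt, "constraint": constraint or None}

def compile_query(ast: dict):
    """Compile the AST into a callable taking a model function."""
    def run(model):
        text = model(ast["prompt"])
        # Post-processing stage: enforce a (toy) length constraint.
        if ast["constraint"] and ast["constraint"].startswith("len <"):
            limit = int(ast["constraint"].split("<")[1])
            text = text[:limit]
        return text
    return run

def execute(query: str, model) -> str:
    return compile_query(parse(query))(model)

result = execute("Say hi where len < 5", lambda p: p.upper())
print(result)  # "SAY H"
```

The separation mirrors the description: parsing and compilation are model-agnostic, and only the execution step talks to the model.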

### LLM Integration Layer

This component provides a standardized interface for interacting with various Large Language Models (LLMs). It abstracts the complexities of different LLM APIs and backends, allowing the Query Execution Runtime to communicate with models seamlessly. It includes mechanisms for model loading, management, and handling different model types (e.g., Transformers, Llama.cpp).

Related Classes/Methods:
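The backend abstraction can be sketched as a registry of adapters behind one interface, so the runtime codes against a single `ModelBackend` regardless of whether a Transformers or llama.cpp backend sits behind it. Names and structure here are hypothetical, not LMQL's actual classes:

```python
from abc import ABC, abstractmethod

class ModelBackend(ABC):
    """Uniform interface the Query Execution Runtime codes against."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

_BACKENDS: dict[str, type] = {}

def register(name: str):
    """Class decorator recording a backend under a model-type name."""
    def deco(cls):
        _BACKENDS[name] = cls
        return cls
    return deco

@register("echo")
class EchoBackend(ModelBackend):
    """Toy stand-in for e.g. a Transformers- or llama.cpp-based adapter."""
    def generate(self, prompt: str) -> str:
        return prompt[::-1]  # toy behavior: reverse the prompt

def load_model(name: str) -> ModelBackend:
    """Model loading entry point: look up the adapter and instantiate it."""
    return _BACKENDS[name]()

model = load_model("echo")
print(model.generate("abc"))  # "cba"
```

Adding a new model type then means registering one more adapter class; nothing above the interface changes.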

### Scoring API

This component provides functionalities for scoring and evaluating the outputs generated by LMQL queries. It is utilized by the Public Python API to offer a direct method for assessing the quality or relevance of generated text based on defined criteria.

Related Classes/Methods: