LangGate is a lightweight, high-performance gateway for AI model inference. This SDK provides modular components for working with LangGate in various configurations, from embedding components directly in your application to connecting to a full LangGate proxy service.
LangGate is designed to be modular. Install only the components you need:
Using uv:

```shell
uv add langgate[registry]
```

Using pip:

```shell
pip install langgate[registry]
```
This provides access to model information and metadata.
Using uv:

```shell
uv add langgate[transform]
```

Using pip:

```shell
pip install langgate[transform]
```
This allows transforming model parameters based on your configuration.
Using uv:

```shell
uv add langgate[sdk]
```

Using pip:

```shell
pip install langgate[sdk]
```
This provides a convenient interface combining registry and transformation capabilities.
Using uv:

```shell
uv add langgate[client]
```

Using pip:

```shell
pip install langgate[client]
```
This installs just the HTTP client for connecting to a remote LangGate registry/proxy.
Using uv:

```shell
uv add langgate[all]
```

Using pip:

```shell
pip install langgate[all]
```
This installs all components needed to run a full LangGate service.
```python
from langgate.registry import LocalRegistryClient

# Initialize the client
client = LocalRegistryClient()

# List available models
models = await client.list_models()

# Get model information
model_info = await client.get_model_info("openai/gpt-4o")
```
```python
from langgate.transform import LocalTransformerClient

# Initialize the transformer
transformer = LocalTransformerClient()

# Transform parameters for a specific model
transformed_params = await transformer.get_params(
    "openai/gpt-4o",
    {"temperature": 0.7, "stream": True},
)
```
```python
from langgate.sdk import LangGateLocal

# Initialize the combined client
client = LangGateLocal()

# Access both registry and transformer functions
models = await client.list_models()
model_info = await client.get_model_info("openai/gpt-4o")
transformed_params = await client.get_params(
    "openai/gpt-4o",
    {"temperature": 0.7, "stream": True},
)
```
```python
from langgate.client import HTTPRegistryClient

# Initialize the client with the registry endpoint
client = HTTPRegistryClient("https://langgate.example.com/api/v1")

# Use the same interface as the local client
models = await client.list_models()
model_info = await client.get_model_info("openai/gpt-4o")
```
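Because the HTTP client exposes the same methods as the local client, application code can be written against either one. A minimal sketch of that pattern is shown below; the `RegistryClient` protocol and `FakeClient` stub here are illustrative, not actual LangGate types:

```python
import asyncio
from typing import Any, Protocol


class RegistryClient(Protocol):
    # Illustrative protocol mirroring the registry methods shown above
    async def list_models(self) -> list[Any]: ...
    async def get_model_info(self, model_id: str) -> Any: ...


async def show_models(client: RegistryClient) -> list[str]:
    # Works unchanged with a local or an HTTP registry client
    models = await client.list_models()
    return [str(m) for m in models]


# Stub used only to demonstrate the pattern without a running service
class FakeClient:
    async def list_models(self) -> list[Any]:
        return ["openai/gpt-4o"]

    async def get_model_info(self, model_id: str) -> Any:
        return {"id": model_id}


print(asyncio.run(show_models(FakeClient())))  # → ['openai/gpt-4o']
```

Typing against a shared interface like this makes it easy to start with the embedded registry and later switch to a remote LangGate proxy without touching call sites.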
LangGate components use configuration from two main sources:
- `langgate_models.json`: Defines model metadata, capabilities, and costs
- `langgate_config.yaml`: Defines service configurations, parameter mappings, and transformations
These configurations are loaded from:
- Paths specified in environment variables (`LANGGATE_MODELS`, `LANGGATE_CONFIG`)
- Default paths in the current working directory
- Default built-in configurations if no files are found
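For example, the environment variables can be set before LangGate components are imported, so custom files are picked up at load time (the paths below are illustrative):

```python
import os

# Point LangGate at custom config files; set these before importing
# langgate components. (Paths are illustrative.)
os.environ["LANGGATE_MODELS"] = "/etc/langgate/langgate_models.json"
os.environ["LANGGATE_CONFIG"] = "/etc/langgate/langgate_config.yaml"
```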
LangGate is composed of the following packages (PEP 420 implicit namespace packages under `langgate`):
- core: Shared data models and utilities
- client: HTTP client for remote LangGate services
- registry: Registry implementation for model information
- transform: Parameter transformation logic
- processor: Envoy external processor implementation
- sdk: Convenience package combining registry and transform