Query Parameters
Learn how to configure chat conversations using URL query parameters in LibreChat. Set models, endpoints, and conversation settings dynamically.
LibreChat supports dynamic configuration of chat conversations through URL query parameters. This feature allows you to initiate conversations with specific settings, models, and endpoints directly from the URL.
Chat Paths
Query parameters must follow a valid chat path:
- For new conversations: `/c/new?`
- For existing conversations: `/c/[conversation-id]?` (where `conversation-id` is the ID of an existing conversation)
Examples:
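For example, assuming a configured `openAI` endpoint (the `endpoint` parameter is covered below, and `<conversation-id>` is a placeholder for an existing conversation's ID):

```
/c/new?endpoint=openAI
/c/<conversation-id>?endpoint=openAI
```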
Basic Usage
The most common parameters to use are endpoint and model. Using both is recommended for the most predictable behavior:
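For example (the model name is a placeholder; use a model that is actually available on your endpoint):

```
/c/new?endpoint=openAI&model=gpt-4o
```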
URL Encoding
Special characters in query params must be properly URL-encoded to work correctly. Common characters that need encoding:
- `:` → `%3A`
- `/` → `%2F`
- `?` → `%3F`
- `#` → `%23`
- `&` → `%26`
- `=` → `%3D`
- `+` → `%2B`
- Space → `%20` (or `+`)
Example with special characters:
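A sketch using the `prompt` parameter described below, where the raw prompt is `What does "C++" mean?`:

```
/c/new?prompt=What%20does%20%22C%2B%2B%22%20mean%3F
```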
You can use JavaScript's built-in encodeURIComponent() function to properly encode prompts:
Try running the code in your browser console to see the encoded URL (browser shortcut: Ctrl+Shift+I).
Endpoint Selection
The endpoint parameter can be used alone:
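For example (assuming an `anthropic` endpoint is configured):

```
/c/new?endpoint=anthropic
```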
When only endpoint is specified:
- It will use the last selected model from localStorage
- If no previous model exists, it will use the first available model in the endpoint's model list
Notes
- The `endpoint` value must be one of the endpoints available in your LibreChat instance
- If using a custom endpoint, you can use its name as the value (case-insensitive)
Model Selection
The model parameter can be used alone:
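For example (the model name is a placeholder; it must exist in the current endpoint's model list):

```
/c/new?model=gpt-4o
```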
When only model is specified:
- It will only select the model if it's available in the current endpoint
- The current endpoint is either the default endpoint or the last selected endpoint
Prompt Parameter
The prompt parameter allows you to pre-populate the chat input field:
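For example (remember to URL-encode the prompt text):

```
/c/new?prompt=Write%20a%20haiku%20about%20coding
```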
You can also use q as a shorthand, which is interchangeable with prompt:
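For example, the same prompt using the shorthand:

```
/c/new?q=Write%20a%20haiku%20about%20coding
```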
You can combine these with other parameters:
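For example (endpoint and model values are placeholders):

```
/c/new?endpoint=openAI&model=gpt-4o&prompt=Write%20a%20haiku%20about%20coding
```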
Automatic Prompt Submission
The submit parameter allows you to automatically submit the prompt without manual intervention:
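For example (`submit=true` sends the prompt automatically):

```
/c/new?prompt=Write%20a%20haiku%20about%20coding&submit=true
```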
This feature is particularly useful for:
- Creating automated workflows (e.g., Raycast, Alfred, Automater)
- Building external integrations
You can combine it with other parameters for complete automation:
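For example (endpoint and model are placeholders):

```
/c/new?endpoint=openAI&model=gpt-4o&prompt=Write%20a%20haiku%20about%20coding&submit=true
```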
Special Endpoints
Model Specs
You can select a specific model spec by name:
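For example (the spec name is a placeholder; it must exactly match a configured model spec):

```
/c/new?spec=my-custom-spec
```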
This will load all the settings defined in the model spec. When using the spec parameter, other model parameters in the URL will be ignored.
Agents
You can directly load an agent using its ID without specifying the endpoint:
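For example (the parameter name `agent_id` and the ID shown are assumptions here; substitute one of your own agents' IDs):

```
/c/new?agent_id=agent_someAgentId
```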
This will automatically set the endpoint to agents.
Assistants
Similarly, you can load an assistant directly:
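For example (the parameter name `assistant_id` and the ID shown are assumptions here; substitute one of your own assistants' IDs):

```
/c/new?assistant_id=asst_someAssistantId
```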
This will automatically set the endpoint to assistants.
Supported Parameters
LibreChat supports a wide range of parameters for fine-tuning your conversation settings:
LibreChat Settings
- `maxContextTokens`: Override the system-defined context window
- `resendFiles`: Control file resubmission in subsequent messages
- `promptPrefix`: Set custom instructions/system message
- `imageDetail`: `low`, `auto`, or `high` for image quality
  - Note: while this is a LibreChat-specific parameter, it only affects the following endpoints: OpenAI, OpenAI-like custom endpoints, and Azure OpenAI, for which it defaults to `auto`
- `spec`: Select a specific LibreChat Model Spec by name
  - Must match the exact name of a configured model spec
  - When specified, other model parameters will not take effect; only those defined by the model spec apply
  - Important: if model specs are configured with `enforce: true`, using this parameter may be required for URL query params to work properly
- `fileTokenLimit`: Set the maximum token limit for file processing to control costs and resource usage
  - Note: the value in the request overrides the YAML default
Model Parameters
Different endpoints support various parameters:
OpenAI, Custom, Azure OpenAI:
Google, Anthropic:
Anthropic, Bedrock (Anthropic models):
Set this parameter to `true` or `false` to toggle prompt caching:
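A sketch, assuming `promptCache` as the parameter name (an assumption, not confirmed here):

```
/c/new?endpoint=anthropic&promptCache=false
```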
More info: https://www.anthropic.com/news/prompt-caching, https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html#prompt-caching-get-started
Bedrock:
Assistants/Azure Assistants:
More Info
For more information on any of the above, refer to Model Spec Preset Fields, which shares most parameters.
Example with multiple parameters:
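For example (the model name and values are illustrative; `temperature` support depends on the selected endpoint):

```
/c/new?endpoint=openAI&model=gpt-4o&temperature=0.7&maxContextTokens=4096&promptPrefix=You%20are%20a%20helpful%20assistant
```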
Example with model spec:
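For example (the spec name is a placeholder; `prompt` and `submit` still apply because they are not model parameters):

```
/c/new?spec=my-custom-spec&prompt=Write%20a%20haiku%20about%20coding&submit=true
```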
Note: When using spec, other model parameters are ignored in favor of the model spec's configuration.
⚠️ Warning
Exercise caution when using query parameters:
- Misuse or exceeding provider limits may result in API errors
- If you encounter bad request errors, reset the conversation by clicking "New Chat"
- Some parameters may have no effect if they're not supported by the selected endpoint
Best Practices
- Always use both `endpoint` and `model` when possible
- Verify parameter support for your chosen endpoint
- Use reasonable values within provider limits
- Test your parameter combinations before sharing URLs
Parameter Validation
All parameters are validated against LibreChat's schema before being applied. Invalid parameters or values will be ignored, and valid settings will be applied to the conversation.
This feature enables powerful use cases like:
- Sharing specific conversation configurations
- Creating bookmarks for different chat settings
- Automating chat setup through URL parameters