
Custom Parameters

Picking a Default Parameter Set

By default, when you specify a custom endpoint in the librechat.yaml config file, it uses the default parameters of the OpenAI API. However, you can override these defaults by setting the customParams.defaultParamsEndpoint field within the definition of your custom endpoint. For example, to use Google parameters for your custom endpoint:

excerpt of librechat.yaml
endpoints:
  custom:
    - name: 'Google Gemini'
      apiKey: ...
      baseURL: ...
      customParams:
        defaultParamsEndpoint: 'google'

Your “Google Gemini” endpoint will now display the Google API parameters when you create a new agent or preset.

Overriding Parameter Definitions

On top of that, you can also fine-tune the parameter definitions provided for your custom endpoint. For example, the temperature parameter for the google endpoint is a slider ranging from 0.0 to 1.0 with a default of 1.0. You can update the librechat.yaml file to override these values:

excerpt of librechat.yaml
endpoints:
  custom:
    - name: 'Google Gemini'
      apiKey: ...
      baseURL: ...
      customParams:
        defaultParamsEndpoint: 'google'
        paramDefinitions:
          - key: temperature
            range:
              min: 0
              max: 0.7
              step: 0.1
            default: 0.5

As a result, the Temperature slider will be limited to the range 0.0 to 0.7, with a step of 0.1 and a default of 0.5. The remaining parameters keep their default values.
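
If you need to adjust several parameters, list multiple entries under paramDefinitions. A minimal sketch extending the example above (the temperature key comes from that example; maxOutputTokens is shown for illustration and may not match your endpoint, so check the parameter panel for the exact keys it exposes):

excerpt of librechat.yaml
endpoints:
  custom:
    - name: 'Google Gemini'
      apiKey: ...
      baseURL: ...
      customParams:
        defaultParamsEndpoint: 'google'
        paramDefinitions:
          - key: temperature
            range:
              min: 0
              max: 0.7
              step: 0.1
            default: 0.5
          - key: maxOutputTokens # illustrative key; verify against your endpoint's parameters
            default: 2048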

Anthropic

When using defaultParamsEndpoint: 'anthropic', the system provides special handling that goes beyond merely displaying and using the Anthropic parameter set:

ℹ️

Anthropic API Compatibility

Setting defaultParamsEndpoint: 'anthropic' enables full Anthropic API compatibility for parameters, headers, and payload formatting:

  • Parameters are sent to your custom endpoint exactly as the Anthropic API expects
  • This is essential for proxy services like LiteLLM that pass non-OpenAI-spec parameters directly to the underlying provider
  • Anthropic-specific parameters like thinking are properly formatted
  • The messages payload is formatted according to Anthropic’s requirements (thinking blocks and prompt caching)
  • Appropriate beta headers are automatically added based on the model, just as when using Anthropic directly

This is mainly necessary to properly format the thinking parameter, which is not OpenAI-compatible:

{
  "thinking": {
    "type": "enabled",
    "budget_tokens": 10000
  }
}
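
For context, here is a minimal sketch of an Anthropic-style Messages request body carrying that parameter (the model name, max_tokens value, and message content are illustrative placeholders):

{
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 16000,
  "thinking": {
    "type": "enabled",
    "budget_tokens": 10000
  },
  "messages": [
    { "role": "user", "content": "..." }
  ]
}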

Additionally, the system automatically adds model-specific Anthropic beta headers such as:

  • anthropic-beta: prompt-caching-2024-07-31 for prompt caching support
  • anthropic-beta: context-1m-2025-08-07 for extended context models
  • Model-specific feature flags based on the Claude model being used
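
For illustration, a request routed this way might carry headers along these lines (the exact beta flags depend on the model and features in play; multiple flags are combined comma-separated):

x-api-key: ...
anthropic-version: 2023-06-01
anthropic-beta: prompt-caching-2024-07-31,context-1m-2025-08-07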

This ensures full compatibility when routing through proxy services or directly to Anthropic-compatible endpoints.
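
Putting it together, a minimal sketch of a custom endpoint that routes Claude models through a LiteLLM proxy (the name, baseURL, environment variable, and model ID are placeholders for your own setup):

excerpt of librechat.yaml
endpoints:
  custom:
    - name: 'Claude via LiteLLM'
      apiKey: '${LITELLM_API_KEY}'
      baseURL: 'http://localhost:4000/v1'
      models:
        default: ['claude-sonnet-4-20250514']
      customParams:
        defaultParamsEndpoint: 'anthropic'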

✏️

Implementation Status

Currently, this automatic parameter and header handling is fully implemented for Anthropic endpoints. Similar behavior for other defaultParamsEndpoint values (e.g., google, bedrock) is planned for future updates.