Model Specs Object Structure
Overview
The modelSpecs object helps you provide a simpler UI experience for AI models within your application.
There are four main fields under `modelSpecs`:
- `enforce` (optional; default: `false`)
- `prioritize` (optional; default: `true`)
- `list` (required)
- `addedEndpoints` (optional)
Notes:
- If `enforce` is set to `true`, model specifications can potentially conflict with other interface settings such as `modelSelect`, `presets`, and `parameters`.
- The `list` array contains detailed configurations for each model, including presets that dictate specific behaviors, appearances, and capabilities.
- If interface fields are not specified in the configuration, having a list of model specs will disable the following interface elements: `modelSelect`, `parameters`, `presets`.
- If you would like to enable these interface elements along with model specs, you can set them to `true` in the `interface` object.
Example
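A minimal `modelSpecs` configuration could look like the following sketch (the spec name, label, and model are illustrative placeholders):

```yaml
modelSpecs:
  enforce: false
  prioritize: true
  list:
    - name: "gpt4-default"
      label: "GPT-4 Assistant"
      default: true
      description: "General-purpose assistant"
      preset:
        endpoint: "openAI"
        model: "gpt-4o"
        temperature: 0.7
```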
Top-level Fields
enforce
| Key | Type | Description | Example |
|---|---|---|---|
| enforce | Boolean | Determines whether the model specifications should strictly override other configuration settings. | Setting this to `true` can lead to conflicts with interface options if not managed carefully. |
Default: false
Example:
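A minimal sketch; `enforce` sits directly under `modelSpecs`:

```yaml
modelSpecs:
  enforce: true
```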
prioritize
| Key | Type | Description | Example |
|---|---|---|---|
| prioritize | Boolean | Specifies if model specifications should take priority over the default configuration when both are applicable. | When set to `true`, it ensures that a modelSpec is always selected in the UI. Doing this may prevent users from selecting different endpoints for the selected spec. |
Default: true
Example:
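For example, to stop model specs from always taking priority over the default configuration:

```yaml
modelSpecs:
  prioritize: false
```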
addedEndpoints
| Key | Type | Description | Example |
|---|---|---|---|
| addedEndpoints | Array of Strings | Allows specific endpoints (e.g., "openAI", "google") to be selectable in the UI alongside the defined model specs. | Requires `interface.modelSelect` to be `true`. If this field is used and `interface.modelSelect` is not explicitly set, `modelSelect` will default to `true`. |
Default: [] (empty list)
Note: Must be one of the following:
openAI, azureOpenAI, google, anthropic, assistants, azureAssistants, bedrock
Example:
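For example, to keep the `openAI` and `anthropic` endpoints selectable alongside your defined specs:

```yaml
modelSpecs:
  addedEndpoints:
    - "openAI"
    - "anthropic"
```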
list
Required
| Key | Type | Description | Example |
|---|---|---|---|
| list | Array of Objects | Contains a list of individual model specifications detailing various configurations and behaviors. | Each object in the list details the configuration for a specific model, including its behaviors, appearance, and capabilities related to the application's functionality. |
Model Spec (List Item)
Within each Model Spec, or each list item, you can configure the following fields:
name
| Key | Type | Description | Example |
|---|---|---|---|
| name | String | Unique identifier for the model. | No default. Must be specified. |
Description:
Unique identifier for the model.
label
| Key | Type | Description | Example |
|---|---|---|---|
| label | String | A user-friendly name or label for the model, shown in the header dropdown. | No default. Optional. |
Description:
A user-friendly name or label for the model, shown in the header dropdown.
default
| Key | Type | Description |
|---|---|---|
| default | Boolean | Specifies if this model spec is the default selection, to be auto-selected on every new chat. |
Description:
Specifies if this model spec is the default selection, to be auto-selected on every new chat.
iconURL
| Key | Type | Description | Example |
|---|---|---|---|
| iconURL | String | URL or a predefined endpoint name for the model's icon. | No default. Optional. |
Description:
URL or a predefined endpoint name for the model's icon.
description
| Key | Type | Description | Example |
|---|---|---|---|
| description | String | A brief description of the model and its intended use or role, shown in the header dropdown menu. | No default. Optional. |
Description: A brief description of the model and its intended use or role, shown in the header dropdown menu.
group
| Key | Type | Description | Example |
|---|---|---|---|
| group | String | Optional group name for organizing model specs in the UI selector. Controls where the spec appears in the menu hierarchy. | No default. Optional. |
| groupIcon | String | Optional icon for custom groups. Can be a URL or a built-in endpoint key (e.g., "openAI", "groq"). Only the first spec with a groupIcon in each group is used. | No default. Optional. |
Description:
Optional group name for organizing model specs in the UI selector. The group field provides flexible control over how model specs are organized:
- If `group` matches an endpoint name (e.g., `"openAI"`, `"groq"`): the model spec appears nested under that endpoint in the selector menu.
- If `group` is a custom name (doesn't match any endpoint): a separate collapsible section is created with that name. You can optionally use `groupIcon` to set a custom icon for this section (a URL or a built-in key like `"openAI"`).
- If `group` is omitted: the model spec appears as a standalone item at the top level.
This feature is particularly useful when you want to add descriptions to models without losing the organizational structure of the selector menu.
Example:
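An illustrative sketch, assuming `groq` is a configured custom endpoint; names and the model are placeholders:

```yaml
modelSpecs:
  list:
    - name: "llama-fast"
      label: "Llama (fast)"
      group: "Open Models"   # custom group name: creates a collapsible section
      groupIcon: "groq"      # section icon; only the first groupIcon per group is used
      preset:
        endpoint: "groq"
        model: "llama-3.1-8b-instant"
```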
showIconInMenu
| Key | Type | Description |
|---|---|---|
| showIconInMenu | Boolean | Controls whether the model's icon appears in the header dropdown menu. |
Description:
Controls whether the model's icon appears in the header dropdown menu. Defaults to true.
showIconInHeader
| Key | Type | Description |
|---|---|---|
| showIconInHeader | Boolean | Controls whether the model's icon appears in the header dropdown button, left of its name. |
Description:
Controls whether the model's icon appears in the header dropdown button, left of its name. Defaults to true.
authType
| Key | Type | Description | Example |
|---|---|---|---|
| authType | String | Authentication type required for the model spec. | Optional. Possible values: "override_auth", "user_provided", "system_defined" |
Description:
Authentication type required for the model spec. Determines whether authentication is overridden, provided by the user, or defined by the system.
webSearch
| Key | Type | Description | Example |
|---|---|---|---|
| webSearch | Boolean | Enables web search capability for this model spec. | When true, the model can perform web searches. |
Description:
Enables web search capability for this model spec. When set to true, the model can perform web searches to retrieve current information.
Example:
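A sketch of a spec with web search enabled (the name, label, and model are illustrative):

```yaml
modelSpecs:
  list:
    - name: "researcher"
      label: "Researcher"
      webSearch: true
      preset:
        endpoint: "openAI"
        model: "gpt-4o"
```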
fileSearch
| Key | Type | Description | Example |
|---|---|---|---|
| fileSearch | Boolean | Enables file search capability for this model spec. | When true, the model can search through uploaded files. |
Description:
Enables file search capability for this model spec. When set to true, the model can search through and reference uploaded files.
Example:
executeCode
| Key | Type | Description | Example |
|---|---|---|---|
| executeCode | Boolean | Enables code execution capability for this model spec. | When true, the model can execute code. |
Description:
Enables code execution capability for this model spec. When set to true, the model can execute code in a sandboxed environment.
Example:
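A sketch of a spec with code execution enabled (names and the model are illustrative):

```yaml
modelSpecs:
  list:
    - name: "coder"
      label: "Code Helper"
      executeCode: true
      preset:
        endpoint: "openAI"
        model: "gpt-4o"
```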
mcpServers
| Key | Type | Description | Example |
|---|---|---|---|
| mcpServers | Array of Strings | List of Model Context Protocol (MCP) server names to enable for this model spec. | Each string should match a configured MCP server name. |
Description:
List of Model Context Protocol (MCP) server names to enable for this model spec. MCP servers extend the model's capabilities with custom tools and resources.
Example:
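An illustrative sketch; the server names are placeholders and must match MCP servers configured elsewhere in `librechat.yaml`:

```yaml
modelSpecs:
  list:
    - name: "tools-assistant"
      mcpServers:
        - "filesystem"
        - "web-fetch"
      preset:
        endpoint: "openAI"
        model: "gpt-4o"
```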
artifacts
| Key | Type | Description | Example |
|---|---|---|---|
| artifacts | String or Boolean | Enables the Artifacts capability for this model spec and optionally sets the artifact mode. | Set to `true` to enable with the default mode, `false` or omit to disable, or a specific mode string (e.g., `"default"`) to enable with that mode. |
Description:
Enables the Artifacts capability for this model spec, allowing the model to generate and display interactive artifacts such as React components, HTML, and Mermaid diagrams. When set to true, the default artifact mode is used. You can also specify a mode string directly.
Example:
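A sketch with Artifacts enabled (names and the model are illustrative):

```yaml
modelSpecs:
  list:
    - name: "ui-builder"
      artifacts: true   # or a mode string such as "default"
      preset:
        endpoint: "anthropic"
        model: "claude-3-5-sonnet-latest"
```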
preset
| Key | Type | Description | Example |
|---|---|---|---|
| preset | Object | Detailed preset configurations that define the behavior and capabilities of the model. | See "Preset Object Structure" below. |
Description:
Detailed preset configurations that define the behavior and capabilities of the model (see Preset Object Structure below).
Preset Fields
The preset field for a modelSpecs.list item is made up of a comprehensive configuration blueprint for AI models within the system. It is designed to specify the operational settings of AI models, tailoring their behavior, outputs, and interactions with other system components and endpoints.
System Options
endpoint
Required
Accepted Values:
- `openAI`
- `azureOpenAI`
- `google`
- `anthropic`
- `assistants`
- `azureAssistants`
- `bedrock`
- `agents`
Note: If you are using a custom endpoint, the endpoint value must match the defined custom endpoint name exactly.
| Key | Type | Description |
|---|---|---|
| endpoint | Enum (EModelEndpoint) or String (nullable) | Specifies the endpoint the model communicates with to execute operations. This setting determines the external or internal service that the model interfaces with. |
Example:
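Within a spec's `preset`:

```yaml
preset:
  endpoint: "anthropic"
```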
modelLabel
| Key | Type | Description | Example |
|---|---|---|---|
| modelLabel | String (nullable) | The label used to identify the model in user interfaces or logs. It provides a human-readable name for the model, which is displayed in the UI, as well as made aware to the AI. | None |
Default: None
Example:
greeting
| Key | Type | Description |
|---|---|---|
| greeting | String | A predefined message that is visible in the UI before a new chat is started. This is a good way to provide instructions to the user, or to make the interface seem more friendly and accessible. |
Default: None
Example:
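An illustrative greeting (the message text is a placeholder):

```yaml
preset:
  endpoint: "openAI"
  greeting: "Hi! Ask me anything about our product docs."
```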
promptPrefix
| Key | Type | Description | Example |
|---|---|---|---|
| promptPrefix | String (nullable) | A static text prepended to every prompt sent to the model, setting a consistent context for responses. | When using "assistants" as the endpoint, this becomes the OpenAI field `additional_instructions`. |
Default: None
Example:
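An illustrative prefix (the text is a placeholder); with the `assistants` endpoint the same value is sent as `additional_instructions`:

```yaml
preset:
  endpoint: "openAI"
  promptPrefix: "You are a helpful support agent. Answer concisely and cite sources when possible."
```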
resendFiles
| Key | Type | Description |
|---|---|---|
| resendFiles | Boolean | Indicates whether files should be resent in scenarios where persistent sessions are not maintained. |
Default: true
Example:
imageDetail
Accepted Values:
- low
- auto
- high
| Key | Type | Description |
|---|---|---|
| imageDetail | Enum (eImageDetailSchema) | Specifies the level of detail required in image analysis tasks, applicable to models with vision capabilities (OpenAI spec). |
Default: "auto"
Example:
maxContextTokens
| Key | Type | Description | Example |
|---|---|---|---|
| maxContextTokens | Number | The maximum number of context tokens to provide to the model. | Useful if you want to limit the maximum context for this preset. |
Example:
Agent Options
Note that these options are only applicable when using the agents endpoint.
You should exclude any model options and defer to the agent's configuration as defined in the UI.
Agent Access Filtering (v0.8.0+)
As of v0.8.0, LibreChat uses an ACL (Access Control List) based permissions system for agents. When model specs are configured to use agents, any agents that the user doesn't have access to will be automatically filtered out, even if they are configured in the model spec. This ensures users only see and can use agents they have proper permissions for.
For more information about the ACL permissions system, see the Agents documentation.
agent_id
| Key | Type | Description |
|---|---|---|
| agent_id | String | Identification of an agent. |
Example:
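A sketch; the ID is a placeholder and should be the ID of an agent created in the UI:

```yaml
preset:
  endpoint: "agents"
  agent_id: "agent_AbC123xYz"
```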
Assistant Options
Note that these options are only applicable when using the assistants or azureAssistants endpoint.
Similar to Agents, you should exclude any model options and defer to the assistant's configuration.
assistant_id
| Key | Type | Description |
|---|---|---|
| assistant_id | String | Identification of an assistant. |
Example:
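A sketch; the ID is a placeholder for an existing OpenAI assistant ID:

```yaml
preset:
  endpoint: "assistants"
  assistant_id: "asst_AbC123xYz"
```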
instructions
Note: this is distinct from promptPrefix, as this overrides existing assistant instructions for current runs.
Only use this if you want to override the assistant's core instructions.
Use promptPrefix for additional_instructions.
More information:
- https://platform.openai.com/docs/api-reference/models#runs-createrun-instructions
- https://platform.openai.com/docs/api-reference/runs/createRun#runs-createrun-additional_instructions
| Key | Type | Description |
|---|---|---|
| instructions | String | Overrides the assistant's default instructions. |
Example:
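An illustrative override (the ID and text are placeholders):

```yaml
preset:
  endpoint: "assistants"
  assistant_id: "asst_AbC123xYz"
  instructions: "Ignore your default persona and act as a strict code reviewer."
```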
append_current_datetime
Adds the current date and time to additional_instructions for each run. Does not overwrite promptPrefix, but adds to it.
| Key | Type | Description | Example |
|---|---|---|---|
| append_current_datetime | Boolean | Adds the current date and time to `additional_instructions` as defined by `promptPrefix` |
Example:
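A sketch (the assistant ID is a placeholder):

```yaml
preset:
  endpoint: "assistants"
  assistant_id: "asst_AbC123xYz"
  append_current_datetime: true
```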
Model Options
Note: Each parameter below includes a note on which endpoints support it.
- OpenAI / AzureOpenAI / custom typically support `temperature`, `presence_penalty`, `frequency_penalty`, `stop`, `top_p`, `max_tokens`.
- Google / Anthropic typically support `topP`, `topK`, `maxOutputTokens`.
- Anthropic / Bedrock (Anthropic and Nova models) support `promptCache`.
- Bedrock supports `region`, `maxTokens`, and a few others.
model
Supported by: all endpoints (except `agents`)
| Key | Type | Description | Example |
|---|---|---|---|
| model | String (nullable) | The model name to use for the preset, matching a configured model under the chosen endpoint. | None |
Default: None
Example:
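A sketch (the model name is illustrative and must match a model configured for the endpoint):

```yaml
preset:
  endpoint: "openAI"
  model: "gpt-4o"
```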
temperature
Supported by: `openAI`, `azureOpenAI`, `google` (as `temperature`), `anthropic` (as `temperature`), and custom (OpenAI-like)
| Key | Type | Description |
|---|---|---|
| temperature | Number | Controls how deterministic or “creative” the model responses are. |
Example:
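A sketch (the model name and value are illustrative):

```yaml
preset:
  endpoint: "openAI"
  model: "gpt-4o"
  temperature: 0.7
```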
presence_penalty
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like). Not typically used by Google/Anthropic/Bedrock.
| Key | Type | Description |
|---|---|---|
| presence_penalty | Number | Penalty for repetitive tokens, encouraging exploration of new topics. |
Example:
frequency_penalty
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like). Not typically used by Google/Anthropic/Bedrock.
| Key | Type | Description |
|---|---|---|
| frequency_penalty | Number | Penalty for repeated tokens, reducing redundancy in responses. |
Example:
stop
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like). Not typically used by Google/Anthropic/Bedrock.
| Key | Type | Description |
|---|---|---|
| stop | Array of Strings | Stop tokens for the model, instructing it to end its response if encountered. |
Example:
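An illustrative sketch (the stop sequences are placeholders):

```yaml
preset:
  endpoint: "openAI"
  stop:
    - "\n\n"
    - "User:"
```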
top_p
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like). Google/Anthropic use `topP` (capital “P”) instead of `top_p`.
| Key | Type | Description |
|---|---|---|
| top_p | Number | Nucleus sampling parameter (0-1), controlling the randomness of tokens. |
Example:
topP
Supported by: `google`, `anthropic` (similar purpose to `top_p`, but named differently in those APIs)
| Key | Type | Description |
|---|---|---|
| topP | Number | Nucleus sampling parameter for Google/Anthropic endpoints. |
Example:
topK
Supported by: `google`, `anthropic` (k-sampling limit on the next-token distribution)
| Key | Type | Description |
|---|---|---|
| topK | Number | Limits the next token selection to the top K tokens. |
Example:
max_tokens
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like). For Google/Anthropic, use `maxOutputTokens` or `maxTokens` (depending on the endpoint).
| Key | Type | Description |
|---|---|---|
| max_tokens | Number | The maximum number of tokens in the model response. |
Example:
maxOutputTokens
Supported by: `google`, `anthropic` (equivalent to `max_tokens` for these providers)
| Key | Type | Description |
|---|---|---|
| maxOutputTokens | Number | The maximum number of tokens in the response (Google/Anthropic). |
Example:
promptCache
Supported by: `anthropic`, `bedrock` (Anthropic and Nova models). Toggles Anthropic’s prompt caching feature.
| Key | Type | Description |
|---|---|---|
| promptCache | Boolean | Enables or disables Anthropic’s built-in prompt caching. |
Default: true
Example:
Note: For Bedrock endpoints, prompt caching is automatically enabled for Claude and Nova models. Set promptCache: false to explicitly disable it.
reasoning_effort
Accepted Values:
- minimal
- low
- medium
- high
- xhigh (extra high)
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like), `bedrock` (ZAI, MoonshotAI models)
| Key | Type | Description |
|---|---|---|
| reasoning_effort | String | Controls the reasoning effort level for the model. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning. The `xhigh` option provides maximum reasoning capability for complex problems. For Bedrock, accepted values are `low`, `medium`, `high`. |
Default: When not set, uses API default (medium)
Example:
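A sketch (the model name is illustrative and should be a reasoning-capable model):

```yaml
preset:
  endpoint: "openAI"
  model: "o3-mini"
  reasoning_effort: "high"
```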
reasoning_summary
Accepted Values:
- None
- Auto
- Concise
- Detailed
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like)
| Key | Type | Description |
|---|---|---|
| reasoning_summary | String | Sets reasoning summary preferences for the model. |
Default: "None"
Example:
useResponsesApi
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like)
| Key | Type | Description |
|---|---|---|
| useResponsesApi | Boolean | Enables or disables the responses API for the model. |
Default: false
Example:
verbosity
Accepted Values:
- low
- medium
- high
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like)
| Key | Type | Description |
|---|---|---|
| verbosity | String | Controls the verbosity level of model responses. |
Default: When not set, uses API default (medium)
Example:
web_search
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like), `anthropic`
| Key | Type | Description |
|---|---|---|
| web_search | Boolean | Enables or disables web search functionality for the model. |
Default: false
Note: For Google endpoints, this parameter appears as Grounding with Google Search in the actual panel but controls web_search in the implementation.
Example:
disableStreaming
Supported by: `openAI`, `azureOpenAI`, custom (OpenAI-like)
| Key | Type | Description |
|---|---|---|
| disableStreaming | Boolean | Disables streaming responses from the model. |
Default: false
Example:
thinkingBudget
Supported by: `google`, `anthropic`, `bedrock` (Anthropic models)
| Key | Type | Description |
|---|---|---|
| thinkingBudget | Number or String | Controls the number of thinking tokens the model can use for internal reasoning. Larger budgets can improve response quality for complex problems. |
Default: "Auto (-1)" (Google), 2000 (Anthropic, Bedrock (Anthropic models))
Example:
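A sketch for an Anthropic endpoint (the model name and budget value are illustrative):

```yaml
preset:
  endpoint: "anthropic"
  model: "claude-3-7-sonnet-latest"
  thinking: true
  thinkingBudget: 4096
```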
thinkingLevel
Supported by: `google` (Gemini 3+ models)
| Key | Type | Description |
|---|---|---|
| thinkingLevel | String | Controls the thinking effort level for Gemini 3+ models. Gemini 2.5 models use `thinkingBudget` instead. |
Accepted Values:
- `""` (unset/auto)
- `"minimal"`
- `"low"`
- `"medium"`
- `"high"`
Default: "" (unset — model decides)
Example:
effort
Supported by: `anthropic`, `bedrock` (Anthropic models)
| Key | Type | Description |
|---|---|---|
| effort | String | Controls the Adaptive Thinking effort level for supported Anthropic models (e.g., Claude Opus 4.6). Higher effort levels allocate more thinking tokens for complex problems. |
Options: "" (unset/auto), "low", "medium", "high", "max"
Default: "" (unset — model decides)
Example:
thinking
Supported by: `anthropic`, `bedrock` (Anthropic models)
| Key | Type | Description |
|---|---|---|
| thinking | Boolean | Indicates whether the model should spend time thinking before generating a response. |
Default: true
Example:
region
Supported by: `bedrock` (used to specify an AWS region for Amazon Bedrock)
| Key | Type | Description |
|---|---|---|
| region | String | AWS region for Amazon Bedrock endpoints. |
Example:
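A sketch (the region is illustrative):

```yaml
preset:
  endpoint: "bedrock"
  region: "us-east-1"
```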
maxTokens
Supported by: `bedrock` (used in place of `max_tokens`)
| Key | Type | Description |
|---|---|---|
| maxTokens | Number | Maximum output tokens for Amazon Bedrock endpoints. |
Example: