# AWS Bedrock Object Structure (/docs/configuration/librechat_yaml/object_structure/aws_bedrock)

Integrating AWS Bedrock lets your application use multiple AI models hosted on AWS through a single endpoint. This section details how to configure the AWS Bedrock endpoint for your needs.

## Example Configuration

```yaml filename="Example AWS Bedrock Object Structure"
endpoints:
  bedrock:
    titleModel: "anthropic.claude-3-haiku-20240307-v1:0"
    streamRate: 35
    availableRegions:
      - "us-east-1"
      - "us-west-2"
    guardrailConfig:
      guardrailIdentifier: "your-guardrail-id"
      guardrailVersion: "1"
      trace: "enabled"
```

> **Note:** The AWS Bedrock endpoint supports all [Shared Endpoint Settings](/docs/configuration/librechat_yaml/object_structure/shared_endpoint_settings), including `streamRate`, `titleModel`, `titleMethod`, `titlePrompt`, `titlePromptTemplate`, and `titleEndpoint`. The settings shown below are specific to Bedrock or have Bedrock-specific defaults.

## titleModel

**Key:**
<OptionTable
  options={[
    ['titleModel', 'String', 'Specifies the model to use for generating conversation titles.', 'Recommended: anthropic.claude-3-haiku-20240307-v1:0. Set to "current_model" to use the same model as the chat.'],
  ]}
/>

**Default:** Not specified

**Example:**
```yaml filename="titleModel"
titleModel: "anthropic.claude-3-haiku-20240307-v1:0"
```

## streamRate

**Key:**
<OptionTable
  options={[
    ['streamRate', 'Number', 'Sets the delay, in milliseconds, between processing each new token.', 'This can help stabilize processing of concurrent requests and provide smoother frontend stream rendering.'],
  ]}
/>

**Default:** Not specified

**Example:**
```yaml filename="streamRate"
streamRate: 35
```

## availableRegions

**Key:**
<OptionTable
  options={[
    ['availableRegions', 'Array', 'Specifies the AWS regions you want to make available for Bedrock.', 'If provided, users will see a dropdown to select the region. If not selected, the default region is used.'],
  ]}
/>

**Default:** Not specified

**Example:**
```yaml filename="availableRegions"
availableRegions:
  - "us-east-1"
  - "us-west-2"
```

## models

**Key:**
<OptionTable
  options={[
    ['models', 'Array of Strings', 'Specifies custom model IDs available for the Bedrock endpoint.', 'When provided, these models appear in the model selector for Bedrock.'],
  ]}
/>

**Default:** Not specified (uses default Bedrock model list)

**Example:**
```yaml filename="models"
endpoints:
  bedrock:
    models:
      - "anthropic.claude-sonnet-4-20250514-v1:0"
      - "anthropic.claude-haiku-4-20250514-v1:0"
      - "us.anthropic.claude-sonnet-4-20250514-v1:0"
```

## inferenceProfiles

**Key:**
<OptionTable
  options={[
    ['inferenceProfiles', 'Object (Record)', 'Maps model IDs to inference profile ARNs for cross-region inference. Keys are model IDs and values are the inference profile ARN or an environment variable reference.', 'When a selected model matches a key, the corresponding ARN is used as the application inference profile.'],
  ]}
/>

**Default:** Not specified

**Example:**
```yaml filename="inferenceProfiles"
endpoints:
  bedrock:
    inferenceProfiles:
      "us.anthropic.claude-sonnet-4-20250514-v1:0": "${BEDROCK_INFERENCE_PROFILE_CLAUDE_SONNET}"
      "anthropic.claude-3-7-sonnet-20250219-v1:0": "arn:aws:bedrock:us-west-2:123456789012:application-inference-profile/abc123"
```

**Notes:**
- Inference profiles enable cross-region inference, allowing you to route requests to models in different AWS regions
- Values support environment variable interpolation with `${ENV_VAR}` syntax
- The model ID in the key must match the model selected by the user in the UI
- Use with the `models` field to make cross-region model IDs available in the model selector
- For a complete guide on creating and managing inference profiles, see [AWS Bedrock Inference Profiles](/docs/configuration/pre_configured_ai/bedrock_inference_profiles)
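References like `${BEDROCK_INFERENCE_PROFILE_CLAUDE_SONNET}` are resolved from your environment at startup, so the ARN itself never has to appear in `librechat.yaml`. A minimal sketch of the corresponding `.env` entry (the variable name and ARN below are illustrative placeholders, not real values):

```bash filename=".env"
# Hypothetical application inference profile ARN; replace with your own
BEDROCK_INFERENCE_PROFILE_CLAUDE_SONNET=arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/abc123
```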

**Combined Example:**
```yaml filename="Bedrock with inference profiles"
endpoints:
  bedrock:
    models:
      - "us.anthropic.claude-sonnet-4-20250514-v1:0"
      - "us.anthropic.claude-haiku-4-20250514-v1:0"
    inferenceProfiles:
      "us.anthropic.claude-sonnet-4-20250514-v1:0": "${BEDROCK_CLAUDE_SONNET_PROFILE}"
      "us.anthropic.claude-haiku-4-20250514-v1:0": "${BEDROCK_CLAUDE_HAIKU_PROFILE}"
```

## guardrailConfig

**Key:**
<OptionTable
  options={[
    ['guardrailConfig', 'Object', 'Configuration for AWS Bedrock Guardrails to filter and moderate model inputs and outputs.', 'Optional. When configured, all Bedrock requests will be validated against the specified guardrail.'],
  ]}
/>

**Sub-keys:**
<OptionTable
  options={[
    ['guardrailIdentifier', 'String', 'The unique identifier of the guardrail to apply.', 'Required when using guardrails.'],
    ['guardrailVersion', 'String', 'The version of the guardrail to use.', 'Required when using guardrails.'],
    ['trace', 'String', 'Controls guardrail trace output for debugging. Options: "enabled", "enabled_full", or "disabled".', 'Optional. Default: "disabled"'],
  ]}
/>

**Example:**
```yaml filename="guardrailConfig"
endpoints:
  bedrock:
    guardrailConfig:
      guardrailIdentifier: "abc123xyz"
      guardrailVersion: "1"
      trace: "enabled"
```

**Notes:**
- Guardrails help ensure responsible AI usage by filtering harmful content, PII, and other sensitive information
- The `guardrailIdentifier` can be found in the AWS Bedrock console under Guardrails
- Set `trace` to `"enabled"` or `"enabled_full"` during development to see which guardrail policies are triggered
- For production, set `trace` to `"disabled"` to reduce response payload size

## Notes

- The main configuration for AWS Bedrock is done through environment variables; additional forms of authentication are in development.
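
As a sketch, a typical `.env` setup for Bedrock credentials might look like the following (variable names are taken from LibreChat's environment variable documentation; verify them against your LibreChat version, and the values shown are placeholders):

```bash filename=".env"
# AWS credentials and region for the Bedrock endpoint
BEDROCK_AWS_DEFAULT_REGION=us-east-1
BEDROCK_AWS_ACCESS_KEY_ID=your-access-key-id
BEDROCK_AWS_SECRET_ACCESS_KEY=your-secret-access-key
```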