
Helicone

Helicone API key: https://us.helicone.ai/settings/api-keys

Notes:

  • Known: icon provided; fetching the list of models is recommended, as Helicone provides access to 100+ AI models from multiple providers.

  • Helicone is an AI Gateway Provider that enables access to models from OpenAI, Anthropic, Google, Meta, Mistral, and other providers with built-in observability and monitoring.

  • Key Features:

    • 🚀 Access to 100+ AI models through a single gateway
    • 📊 Built-in observability and monitoring
    • 🔄 Multi-provider support
    • ⚡ Request logging and analytics in the Helicone dashboard
  • Important Considerations:

    • Make sure your Helicone account has credits so you can access the models.
    • You can find all supported models in the Helicone Model Library.
    • You can set your own rate limits and caching policies within the Helicone dashboard.
Example Configuration:

```yaml
    - name: "Helicone"
      # For `apiKey` and `baseURL`, you can use environment variables that you define.
      # Recommended environment variables:
      apiKey: "${HELICONE_KEY}"
      baseURL: "https://ai-gateway.helicone.ai"
      headers:
        x-librechat-body-parentmessageid: "{{LIBRECHAT_BODY_PARENTMESSAGEID}}"
      models:
        default: ["gpt-4o-mini", "claude-4.5-sonnet", "llama-3.1-8b-instruct", "gemini-2.5-flash-lite"]
        fetch: true
      titleConvo: true
      titleModel: "gpt-4o-mini"
      modelDisplayLabel: "Helicone"
      iconURL: "https://marketing-assets-helicone.s3.us-west-2.amazonaws.com/helicone.png"
```

Configuration Details:

  • apiKey: Use the HELICONE_KEY environment variable to store your Helicone API key.
  • baseURL: The Helicone AI Gateway endpoint: https://ai-gateway.helicone.ai
  • headers: The x-librechat-body-parentmessageid header is essential for message tracking and conversation continuity.
  • models: Sets the default model list; with fetch enabled, LibreChat also retrieves all available models from Helicone's API automatically.
  • fetch: Set to true to automatically retrieve available models from Helicone's API.
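Since fetch relies on the gateway's model-list endpoint, you can check your key and connectivity from the command line before wiring up LibreChat. This is a sketch that assumes the gateway exposes an OpenAI-compatible /v1/models route; check Helicone's documentation for the exact path.

```shell
# Query the gateway's model list with the same baseURL used in librechat.yaml.
# Assumption: OpenAI-compatible /v1/models route.
BASE_URL="https://ai-gateway.helicone.ai"
curl -sf "$BASE_URL/v1/models" \
  -H "Authorization: Bearer $HELICONE_KEY" \
  || echo "Request failed: check HELICONE_KEY and network"
```

A JSON list of model IDs indicates the key is valid; an error response usually points to a missing key or an account without credits.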

Setup Steps:

  1. Sign up for a Helicone account at helicone.ai
  2. Generate your API key from the Helicone dashboard
  3. Set the HELICONE_KEY environment variable in your .env file
  4. Copy the example configuration to your librechat.yaml file
  5. Rebuild your Docker containers if using Docker deployment
  6. Restart LibreChat to load the new configuration
  7. Test by selecting Helicone from the provider dropdown
  8. Head over to the Helicone dashboard to review your usage and settings.
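For step 3 above, the .env entry is a single line; the value below is a placeholder, not a real key format:

```shell
# .env — loaded by LibreChat (and docker compose) at startup.
# Placeholder value; paste the key generated in step 2.
HELICONE_KEY=your-helicone-api-key-here
```

If you deploy with Docker, rebuilding or recreating the containers (step 5) is what propagates a changed .env value into the running service.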

Potential Issues:

  • Model Access: Verify that your Helicone account has credits so you can access the models.
  • Rate Limiting: You can set your own rate limits and caching policies within the Helicone dashboard.
  • Environment Variables: Double-check that HELICONE_KEY is properly set and accessible to your LibreChat instance.
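To rule out the environment-variable issue quickly, a minimal shell check (a sketch; adapt it to however you launch LibreChat):

```shell
# Fail fast if HELICONE_KEY is missing or empty in the current environment.
if [ -z "${HELICONE_KEY:-}" ]; then
  echo "HELICONE_KEY is not set"
else
  echo "HELICONE_KEY is set"
fi
```

With Docker, run the same check inside the container rather than on the host, since a variable exported on the host is not automatically visible to the containerized process.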

Testing:

  1. After configuration, select Helicone from the provider dropdown
  2. Verify that models appear in the model selection
  3. Send a test message and confirm it appears in your Helicone dashboard
  4. Check that conversation threading works correctly with the parent message ID header