Helicone
Helicone API key: https://us.helicone.ai/settings/api-keys
Notes:
Known: an icon is provided; fetching the list of models is recommended, as Helicone provides access to 100+ AI models from multiple providers.
Helicone is an AI Gateway Provider that enables access to models from OpenAI, Anthropic, Google, Meta, Mistral, and other providers with built-in observability and monitoring.
Key Features:
- Access to 100+ AI models through a single gateway
- Built-in observability and monitoring
- Multi-provider support
- Request logging and analytics in the Helicone dashboard
Important Considerations:
- Make sure your Helicone account has credits so you can access the models.
- You can find all supported models in the Helicone Model Library.
- You can set your own rate limits and caching policies within the Helicone dashboard.
```yaml
- name: "Helicone"
  # For `apiKey` and `baseURL`, you can use environment variables that you define.
  # recommended environment variables:
  apiKey: "${HELICONE_KEY}"
  baseURL: "https://ai-gateway.helicone.ai"
  headers:
    x-librechat-body-parentmessageid: "{{LIBRECHAT_BODY_PARENTMESSAGEID}}"
  models:
    default: ["gpt-4o-mini", "claude-4.5-sonnet", "llama-3.1-8b-instruct", "gemini-2.5-flash-lite"]
    fetch: true
  titleConvo: true
  titleModel: "gpt-4o-mini"
  modelDisplayLabel: "Helicone"
  iconURL: "https://marketing-assets-helicone.s3.us-west-2.amazonaws.com/helicone.png"
```

Configuration Details:
- apiKey: Use the `HELICONE_KEY` environment variable to store your Helicone API key.
- baseURL: The Helicone AI Gateway endpoint: `https://ai-gateway.helicone.ai`
- headers: The `x-librechat-body-parentmessageid` header is essential for message tracking and conversation continuity.
- models: Sets the default models; with `fetch` enabled, all available models are retrieved automatically from Helicone's API.
- fetch: Set to `true` to automatically retrieve available models from Helicone's API.
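As a quick local sanity check, the `${HELICONE_KEY}` placeholder in the `apiKey` field can be resolved the same way LibreChat resolves it at startup. A minimal sketch (the key value is a dummy for illustration; `os.path.expandvars` only mimics LibreChat's substitution):

```python
import os

# LibreChat substitutes ${VAR} placeholders in librechat.yaml from the environment.
# os.path.expandvars mimics that substitution for a quick local check.
os.environ["HELICONE_KEY"] = "sk-helicone-dummy"  # dummy value for illustration

api_key_field = "${HELICONE_KEY}"  # exactly as written in the apiKey field above
resolved = os.path.expandvars(api_key_field)
print(resolved)  # -> sk-helicone-dummy
```

If the variable is unset, `expandvars` leaves the placeholder untouched, which is a useful signal that the `.env` file was not loaded.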
Setup Steps:
- Sign up for a Helicone account at helicone.ai
- Generate your API key from the Helicone dashboard
- Set the `HELICONE_KEY` environment variable in your `.env` file
- Copy the example configuration to your `librechat.yaml` file
- Rebuild your Docker containers if using Docker deployment
- Restart LibreChat to load the new configuration
- Test by selecting Helicone from the provider dropdown
- Head over to the Helicone dashboard to review your usage and settings.
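The `.env` step above can be sketched as follows. This writes the key into a `.env` file and verifies the entry landed before a rebuild; `env_path` here is a temporary placeholder, so point it at your real `.env` file:

```python
import os
import tempfile

# Sketch of the .env step above: persist HELICONE_KEY in the file LibreChat reads.
# env_path is a temporary placeholder; point it at your real .env file.
env_path = os.path.join(tempfile.mkdtemp(), ".env")

with open(env_path, "a") as f:
    f.write("HELICONE_KEY=sk-helicone-dummy\n")  # dummy value for illustration

# Quick check that the entry landed before rebuilding/restarting LibreChat.
entries = {}
with open(env_path) as f:
    for line in f:
        if "=" in line and not line.startswith("#"):
            key, value = line.strip().split("=", 1)
            entries[key] = value

print(entries.get("HELICONE_KEY"))  # -> sk-helicone-dummy
```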
Potential Issues:
- Model Access: Verify that you have credits within Helicone so you can access the models.
- Rate Limiting: You can set your own rate limits and caching policies within the Helicone dashboard.
- Environment Variables: Double-check that `HELICONE_KEY` is properly set and accessible to your LibreChat instance
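A minimal sketch of that environment-variable check, to be run in the environment where LibreChat actually executes (inside the container for Docker setups; the key value here is a dummy):

```python
import os

# Troubleshooting sketch: confirm HELICONE_KEY is set and non-empty in the
# environment where LibreChat actually runs.
os.environ["HELICONE_KEY"] = "sk-helicone-dummy"  # dummy value for illustration

key = os.environ.get("HELICONE_KEY", "")
if not key:
    raise SystemExit("HELICONE_KEY is missing or empty -- check your .env file")
print("HELICONE_KEY is set")
```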
Testing:
- After configuration, select Helicone from the provider dropdown
- Verify that models appear in the model selection
- Send a test message and confirm it appears in your Helicone dashboard
- Check that conversation threading works correctly with the parent message ID header
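The threading check above hinges on the parent message ID header. A sketch of the request LibreChat sends through the gateway; the `/v1/chat/completions` path assumes an OpenAI-compatible API, and the header value is a placeholder that LibreChat fills in per conversation:

```python
import json

# Sketch of a gateway request: model, messages, and the conversation-threading
# header from the configuration. No network call is made here.
url = "https://ai-gateway.helicone.ai/v1/chat/completions"  # assumed OpenAI-compatible path
headers = {
    "Authorization": "Bearer sk-helicone-dummy",          # your HELICONE_KEY
    "Content-Type": "application/json",
    "x-librechat-body-parentmessageid": "parent-msg-id",  # placeholder; threads the conversation
}
body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from LibreChat"}],
})
print(json.loads(body)["model"])  # -> gpt-4o-mini
```

If a test message sent this way appears in the Helicone dashboard with the expected parent ID, conversation threading is working end to end.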