Google

For the Google endpoint, you can use either the Generative Language API (for Gemini models) or the Vertex AI API (for Gemini, PaLM 2, and Codey models).

The Generative Language API uses an API key, which you can get from Google AI Studio.

For Vertex AI, you need a Service Account JSON key file, with appropriate access configured.

Instructions for both are given below.

Generative Language API (Gemini)

See here for Gemini API pricing and rate limits

⚠️ While Google models are free, Google uses your input/output to help improve the model, with data de-identified from your Google Account and API key. During this period, your messages “may be accessible to trained reviewers.” ⚠️

To use Gemini models through Google AI Studio, you’ll need an API key. If you don’t already have one, create a key here: aistudio.google.com

Once you have your key, provide it in your .env file, which allows all users of your instance to use it.

.env
GOOGLE_KEY=mY_SeCreT_w9347w8_kEY

Or, you can make users provide it from the frontend by setting the following:

.env
GOOGLE_KEY=user_provided

Since fetching the models list isn’t yet supported, you should set the models you want to use in the .env file.

For your convenience, these are the latest models as of 5/18/24 that can be used with the Generative Language API:

.env
GOOGLE_MODELS=gemini-1.5-flash-latest,gemini-1.0-pro,gemini-1.0-pro-001,gemini-1.0-pro-latest,gemini-1.0-pro-vision-latest,gemini-1.5-pro-latest,gemini-pro,gemini-pro-vision

Notes:

  • A gemini-pro model or gemini-pro-vision is required in your list for attaching images.
  • When using LibreChat, PaLM 2 and Codey models can only be accessed through Vertex AI, not the Generative Language API.
    • Only models that support the generateContent method can be used natively with LibreChat + the Gen AI API.
  • Selecting gemini-pro-vision for messages with attachments is not necessary, as it is switched to behind the scenes for you.
  • Since gemini-pro-vision does not accept non-attachment messages, messages without attachments are automatically switched to gemini-pro (otherwise, Google responds with an error).
  • With the Google endpoint, you cannot use both Vertex AI and Generative Language API at the same time. You must choose one or the other.
  • Some PaLM/Codey models and gemini-pro-vision may fail when maxOutputTokens is set to a high value. If you encounter this issue, try reducing the value through the conversation parameters.
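
Putting the notes above together, a minimal .env for the Generative Language API might look like this sketch (the key is a placeholder, and the model list is a subset of the full list above):

.env
GOOGLE_KEY=mY_SeCreT_w9347w8_kEY
GOOGLE_MODELS=gemini-1.5-pro-latest,gemini-1.5-flash-latest,gemini-pro,gemini-pro-vision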

Setting GOOGLE_KEY=user_provided in your .env file sets both the Vertex AI Service Account JSON key file and the Generative Language API key to be provided from the frontend.

Vertex AI

See here for Vertex AI pricing and rate limits

To set up Google LLMs (via Google Cloud Vertex AI), first sign up for Google Cloud: cloud.google.com

You can usually get $300 starting credit, which makes this option free for 90 days.

1. Once signed up, enable the Vertex AI API on Google Cloud.

2. Create a Service Account with Vertex AI role:

  • Click here to create a Service Account
  • Select or create a project
  • Enter a service account ID (required); the name and description are optional
  • Click on “Create and Continue” and grant at least the “Vertex AI User” role
  • Click on “Continue/Done”

3. Create a JSON key to Save in your Project Directory:

  • Go back to the Service Accounts page
  • Select your service account
  • Click on “Keys”
  • Click on “Add Key” and then “Create new key”
  • Choose JSON as the key type and click on “Create”
  • Download the key file and rename it to auth.json
  • Save it within the project directory, in /api/data/
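
For reference, the downloaded key file is a standard Google Cloud service account JSON document. Its general shape is sketched below; every value shown here is a placeholder:

auth.json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "0123456789abcdef0123456789abcdef01234567",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "your-service-account@your-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}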

Saving your JSON key file in the project directory allows all users of your LibreChat instance to use it.

Alternatively, you can make users provide it from the frontend by setting the following:

.env
# Note: this configures both the Vertex AI Service Account JSON key file
# and the Generative Language API key to be provided from the frontend.
GOOGLE_KEY=user_provided

Since fetching the models list isn’t yet supported, you should set the models you want to use in the .env file.

For your convenience, these are the latest models as of 5/18/24 that can be used with Vertex AI:

.env
GOOGLE_MODELS=gemini-1.5-flash-preview-0514,gemini-1.5-pro-preview-0514,gemini-1.0-pro-vision-001,gemini-1.0-pro-002,gemini-1.0-pro-001,gemini-pro-vision,gemini-1.0-pro
If you are using Docker

If you’re using Docker and want to provide the auth.json file, you will also need to mount the volume in docker-compose.override.yml:

docker-compose.override.yml
version: '3.4'
 
services:
  api:
    volumes:
    - type: bind
      source: ./api/data/auth.json
      target: /app/api/data/auth.json