
HuggingFace

HuggingFace Token: huggingface.co/settings/tokens
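The configuration below references the token as ${HUGGINGFACE_TOKEN}, which LibreChat resolves from your environment. A minimal sketch of the corresponding .env entry (the token value is a placeholder; use your own):

  # .env
  HUGGINGFACE_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx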

Notes:

  • Known: icon provided.

  • The provided models are free but rate limited.

    • The use of dropParams to drop the “top_p” parameter is required.
    • Fetching models isn’t supported.
    • Note: some models currently work better than others, and answers are very short (at least when using the free tier).
  • The example includes a model list, which was last updated on May 09, 2024, for your convenience.

    - name: 'HuggingFace'
      apiKey: '${HUGGINGFACE_TOKEN}'
      baseURL: 'https://api-inference.huggingface.co/v1'
      models:
        default: [
          "codellama/CodeLlama-34b-Instruct-hf",
          "google/gemma-1.1-2b-it",
          "google/gemma-1.1-7b-it",
          "HuggingFaceH4/starchat2-15b-v0.1",
          "HuggingFaceH4/zephyr-7b-beta",
          "meta-llama/Meta-Llama-3-8B-Instruct",
          "microsoft/Phi-3-mini-4k-instruct",
          "mistralai/Mistral-7B-Instruct-v0.1",
          "mistralai/Mistral-7B-Instruct-v0.2",
          "mistralai/Mixtral-8x7B-Instruct-v0.1",
          "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
        ]
        fetch: true
      titleConvo: true
      titleModel: "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO"
      dropParams: ["top_p"]
      modelDisplayLabel: "HuggingFace"
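
For placement, this block belongs under the custom endpoints list in your librechat.yaml (alongside other top-level fields such as version and cache, omitted here). A minimal sketch of the surrounding structure, assuming only the keys shown in the example above:

endpoints:
  custom:
    - name: 'HuggingFace'
      apiKey: '${HUGGINGFACE_TOKEN}'
      baseURL: 'https://api-inference.huggingface.co/v1'
      # ...remaining keys (models, titleConvo, titleModel, dropParams, modelDisplayLabel) as shown above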
⚠️ Other Model Errors

Here’s a list of the other models that were tested, along with their corresponding errors:

  models:
    default: [
      "CohereForAI/c4ai-command-r-plus", # Model requires a Pro subscription
      "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1", # Model requires a Pro subscription
      "meta-llama/Llama-2-7b-hf", # Model requires a Pro subscription
      "meta-llama/Meta-Llama-3-70B-Instruct", # Model requires a Pro subscription
      "meta-llama/Llama-2-13b-chat-hf", # Model requires a Pro subscription
      "meta-llama/Llama-2-13b-hf", # Model requires a Pro subscription
      "meta-llama/Llama-2-70b-chat-hf", # Model requires a Pro subscription
      "meta-llama/Llama-2-7b-chat-hf", # Model requires a Pro subscription
      "------",
      "bigcode/octocoder", # template not found
      "bigcode/santacoder", # template not found
      "bigcode/starcoder2-15b", # template not found
      "bigcode/starcoder2-3b", # template not found 
      "codellama/CodeLlama-13b-hf", # template not found
      "codellama/CodeLlama-7b-hf", # template not found
      "google/gemma-2b", # template not found
      "google/gemma-7b", # template not found
      "HuggingFaceH4/starchat-beta", # template not found
      "HuggingFaceM4/idefics-80b-instruct", # template not found
      "HuggingFaceM4/idefics-9b-instruct", # template not found
      "HuggingFaceM4/idefics2-8b", # template not found
      "kashif/stack-llama-2", # template not found
      "lvwerra/starcoderbase-gsm8k", # template not found
      "tiiuae/falcon-7b", # template not found
      "timdettmers/guanaco-33b-merged", # template not found
      "------",
      "bigscience/bloom", # 404 status code (no body)
      "------",
      "google/gemma-2b-it", # stream` is not supported for this model / unknown error
      "------",
      "google/gemma-7b-it", # AI Response error likely caused by Google censor/filter
      "------",
      "bigcode/starcoder", # Service Unavailable
      "google/flan-t5-xxl", # Service Unavailable
      "HuggingFaceH4/zephyr-7b-alpha", # Service Unavailable
      "mistralai/Mistral-7B-v0.1", # Service Unavailable
      "OpenAssistant/oasst-sft-1-pythia-12b", # Service Unavailable
      "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", # Service Unavailable
    ]
