Ollama
Example configuration for Ollama
Ollama API key: required by the config schema, but ignored by Ollama - see Ollama OpenAI Compatibility
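A minimal sketch of the endpoint entry (field names follow the custom-endpoint config schema; the port, model list, and `apiKey` value are assumptions - adjust them to your setup):

```yaml
custom:
  - name: "Ollama"
    apiKey: "ollama"                 # required by the schema, but ignored by Ollama
    baseURL: "http://localhost:11434/v1/"
    models:
      default: ["llama3"]            # illustrative; list the models you've pulled
      fetch: true                    # fetch locally available models from Ollama
    titleConvo: true
    titleModel: "current_model"      # reuse the conversation's model for titles
    modelDisplayLabel: "Ollama"
```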
Notes:
- Known endpoint: an icon is provided automatically.
- Download models with the `ollama run` command. See the Ollama Library.
- It's recommended to use the value "current_model" for `titleModel` to avoid loading more than one model per conversation.
- Doing so will dynamically use the current conversation's model for title generation.
- For convenience, the example includes the top 5 most popular models from the Ollama Library, last updated on March 1, 2024.
Ollama -> llama3
Note: Now that `stop` has been removed from the default parameters, the issue described below should no longer occur.
However, if you experience llama3 failing to stop generating, add an `addParams` block to the config:
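A sketch of the `addParams` block, using the special tokens from the llama3 prompt format as stop sequences (verify these against your model's template):

```yaml
    addParams:
      # stop sequences matching llama3's special tokens
      stop:
        - "<|start_header_id|>"
        - "<|end_header_id|>"
        - "<|eot_id|>"
```

Note that `addParams` applies to every model served by this endpoint, which is why the config-level approach is only recommended when llama3 is the sole model in use.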
If you are only using llama3 with Ollama, it's fine to set the stop parameter at the config level via addParams.
However, if you are using multiple models, it's now recommended to add stop sequences from the frontend via conversation parameters and presets.
For example, we can omit addParams:
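A sketch of the same endpoint entry with `addParams` left out, so stop sequences can instead be supplied per conversation from the frontend (field names and values are assumptions as above):

```yaml
custom:
  - name: "Ollama"
    apiKey: "ollama"
    baseURL: "http://localhost:11434/v1/"
    models:
      default: ["llama3"]
      fetch: true
    titleConvo: true
    titleModel: "current_model"
    # no addParams: stop sequences are set via conversation parameters
```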
And use these settings in the conversation parameters instead (best to also save them as a preset):