Is it possible to add/use a local Ollama instance as a new provider for creating articles, @Sven? I'd need to add my Ollama IP:port, and most of the rest should work the same.
https://ollama.com/ offers much of what the other LLM providers do, except you run it all from your local homelab without any rate limits. So it's similar to what's being done with Groq, ChatGPT, etc. I use Ollama for a number of automations and it would be great to see it supported inside GSA SER.
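For context, Ollama already speaks the OpenAI chat-completions protocol on its own port. A minimal sketch of how I call it from my automations (Python; the IP and model name are placeholders for my local setup, not anything GSA-specific):

# Sketch of a chat request against a local Ollama instance.
# Assumptions: Ollama is reachable at 192.168.1.50:11434 (replace with
# your own IP:port) and the model "llama3.2:latest" has been pulled.
import requests

resp = requests.post(
    "http://192.168.1.50:11434/v1/chat/completions",  # OpenAI-compatible endpoint
    json={
        "model": "llama3.2:latest",
        "messages": [{"role": "user", "content": "Write a one-line article intro."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])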
Just edit the openai.dat file and add your model with the URL.
Hmmm, well, I found the file, but I think there's more to it. From what I can see, the base URL for the existing models in this file is set to OpenAI somewhere else. So testing an entry like this:
[Models]
gemma2:2b=Gemma2-2Bmodel ollama|http://<my IP for local ollama>:11434/
gpt-3.5-turbo=Most capable GPT-3.5 model and optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with our latest model iteration.|/v1/chat/completions
... isn't going to work, from what I can see.
Am I missing something obvious here, @Sven?
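While waiting, I can at least rule out the Ollama side. A quick sanity check that the server is reachable and the model tag is spelled exactly right (a sketch, assuming Python with the requests package; the IP is a placeholder for your Ollama host):

# List the model tags installed on the local Ollama server.
import requests

tags = requests.get("http://192.168.1.50:11434/api/tags", timeout=10).json()
for model in tags.get("models", []):
    print(model["name"])  # e.g. "gemma2:2b", "llama3.2:latest"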
For those wanting to use Ollama, the syntax is:
llama3.2:latest=Llama3.2 ollama|http://<your ollama ip>:11434/v1/chat/completions
This works like a charm. Thanks @Sven for the help!
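For anyone who wants to verify the endpoint outside of SER first: the .dat entry is just the base URL plus /v1/chat/completions, so any OpenAI-compatible client should hit the same endpoint. A minimal sketch with the standard openai Python package (the IP is a placeholder; Ollama ignores the API key, but the client requires one to be set):

# Point the standard OpenAI client at the local Ollama server,
# mirroring what the openai.dat entry does.
from openai import OpenAI

client = OpenAI(base_url="http://192.168.1.50:11434/v1", api_key="ollama")
reply = client.chat.completions.create(
    model="llama3.2:latest",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(reply.choices[0].message.content)

If that call returns text, the same base URL and model tag should work in openai.dat.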