Checked in and noticed my text-solving service had been off for a bit. What are you guys using these days for solving text questions? Thank you to anyone replying and helping out.
Giving DeepSeek a test now thanks to you guys. Something I always wondered: how to solve those text questions automatically back when I had manual input enabled (the pop-ups you get when you configure the solvers). The success rate is good so far: barely any API requests used, yet over 100 solved. For anyone unsure how to start: I made an account on deepseek.com, then went to platform.deepseek.com and got an API key, then chose DeepSeek in the solvers area in the GSA settings, and you're all set.
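If you want to sanity-check the key before pointing GSA at it, a quick test like this should do it (my own sketch, not part of GSA, assuming DeepSeek's documented OpenAI-compatible endpoint and the deepseek-chat model; only the key is yours):

```python
# Quick key check: DeepSeek's API is OpenAI-compatible, so a single
# chat request confirms the key works before it goes into the solver settings.
import requests

API_KEY = "sk-..."  # paste the key from platform.deepseek.com

response = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Reply with one word: what is 2+2?"}],
        "stream": False,
    },
    timeout=60,
)
response.raise_for_status()  # a 401 here means the key is wrong
print(response.json()["choices"][0]["message"]["content"])
```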
Did you set the port and anything else you might need to change? (I have ZERO idea, have not played with this at all!!)
Wow. I remember talking about the earlier models years ago on Sven's board. Apparently they were resource-heavy, but people were playing with them and running them locally. Can that be done practically now? Do people do this for the savings? What about the energy draw? I guess if you have solar panels... it's an unbeatable set-up.
Or am I misunderstanding? I use OpenAI for SEO, and now for other applications as well. But I really haven't even begun to explore the possibilities.
There are so many models out there, too. Considering the answers to text captchas usually have to be one word or a short phrase, would even older/lighter models work for this? You don't need super-advanced AI to solve those questions...
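For what it's worth, here's the kind of call this would take (a sketch assuming a local Ollama install and a small model already pulled; qwen2.5:0.5b is just an example pick):

```python
# Sketch: ask a lightweight local model a captcha-style question via
# Ollama's HTTP API. Assumes Ollama is running on its default port and
# the model (qwen2.5:0.5b here, as an example) has been pulled already.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:0.5b",
        "prompt": "Answer with one word only: what color is grass?",
        "stream": False,
    },
    timeout=60,
)
print(resp.json()["response"])  # e.g. "Green"
```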
Asked Claude to summarize my problem.
Hello GSA team and community,
I'm trying to connect GSA Search Engine Ranker to my local AI models (Mistral and DeepSeek) running through Ollama, but keep getting "API key invalid/authentication failed" errors despite the models working perfectly via direct API calls.
The Problem:
GSA SER seems to be caching or ignoring the modified .ini file
It continues showing cloud Mistral models despite the local configuration
Any dummy API key results in "authentication failed"
The Authorization header might be causing issues (local Ollama doesn't need it)
Questions:
Where exactly does GSA SER store/read the AI configuration files?
Is there a cache that needs clearing after modifying .ini files?
Can we disable the Authorization header for local AI connections?
Is there a "Custom AI" or "Generic OpenAI" option that would work better for local models?
Does GSA SER support Ollama's native API format, or only OpenAI-compatible endpoints?
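On that last question, for whoever picks this up: the same local Ollama instance answers both endpoint styles, but the response JSON is shaped differently, so the result parsing has to match whichever endpoint GSA posts to. A sketch run outside GSA (assuming Ollama's documented native and OpenAI-compatible endpoints):

```python
# Sketch: one Ollama instance, two endpoint styles, two response shapes.
import requests

BASE = "http://localhost:11434"

# Native endpoint: the answer sits at the top-level "response" key.
native = requests.post(f"{BASE}/api/generate", json={
    "model": "mistral",
    "prompt": "What is 2+2?",
    "stream": False,
}, timeout=60).json()
print(native["response"])

# OpenAI-compatible endpoint: the answer is nested in choices/message/content.
compat = requests.post(f"{BASE}/v1/chat/completions", json={
    "model": "mistral",
    "messages": [{"role": "user", "content": "What is 2+2?"}],
}, timeout=60).json()
print(compat["choices"][0]["message"]["content"])
```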
What I Need:
A working configuration to connect GSA SER to my local Ollama instance running Mistral/DeepSeek models. These models work perfectly via API calls, so it's just a matter of getting the right configuration in GSA SER.
Any help would be greatly appreciated! Happy to provide debug logs or test specific configurations.
Thank you!
LOL, well, it did a pretty good job; there surely is a way to get this working. The details of my setup and everything I've tried follow:
My Setup:
GSA Search Engine Ranker
Ollama running locally at http://localhost:11434 with the Mistral and DeepSeek models
What Works:
My local Ollama installation is confirmed working. I can successfully query it via:
ollama run mistral "Hello" works fine
http://localhost:11434/api/generate responds correctly

What I've Tried:
Attempt 1: Modified Mistral.ini file
Replaced the original Mistral AI.ini with a local version:

```ini
[setup]
name=Mistral Local
main url=http://localhost:11434/
desc=Local AI via Ollama
rating=5
costs=free (local)
id=71

[text_solve]
url=http://localhost:11434/api/generate
post_data={"model":"%model%","prompt":"%prompt%","stream":false,"temperature":%temperature%}
encoding=json
add header=Authorization: Bearer %Api-Key%
result=%text_result%
error=%text_error%
simulate_result={"model":"mistral:latest","created_at":"2024-01-01T00:00:00Z","response":"%result%","done":true}
simulate_error={"error":"Unable to solve"}
timeout=120

[text_result]
front="response":
front2="
back="

[text_error]
front="error":
front2="
back="

[API-Key]
type=text
hint=Use any dummy key for local (e.g., sk-local-1234)
default=sk-local-dummy-key-1234567890

[Model]
type=combobox
list=mistral:latest|deepseek-coder:latest|qwen2.5:7b
default=mistral:latest
must be filled=1

[Prompt]
type=text
hint=Use a meaningful prompt (%arg1% = question, %arg2% = url)
default=Answer only '%arg1%' answer only without elaborating or explaining.

[Temperature]
type=text
default=0.7
hint=What sampling temperature to use, between 0 and 2
must be filled=1
```

Result: GSA SER still shows the cloud Mistral models in the dropdown and gives authentication errors.
Attempt 2: Different endpoint formats
Tried multiple URL variations:
http://localhost:11434/api/generate (Ollama native)
http://localhost:11434/v1/chat/completions (OpenAI compatible)
http://localhost:11434/api/chat
http://localhost:11434

Result: Authentication failed errors on all attempts.
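One idea building on the two attempts above (an untested sketch reusing the field names from the stock Mistral AI.ini verbatim, so whether GSA's parser accepts them for a local URL is exactly the open question): point [text_solve] at the OpenAI-compatible endpoint and parse the OpenAI-shaped response instead:

```ini
[text_solve]
url=http://localhost:11434/v1/chat/completions
post_data={"model":"%model%","messages":[{"role":"user","content":"%prompt%"}],"stream":false,"temperature":%temperature%}
encoding=json
add header=Authorization: Bearer %Api-Key%
result=%text_result%
error=%text_error%
timeout=120

[text_result]
front="content":
front2="
back="
```

The dummy key stays in place only to satisfy the header; Ollama's OpenAI-compatible layer does not validate it.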
Attempt 3: Creating separate .ini files
Created Mistral-Local.ini and DeepSeek-Local.ini as new files in the GSA folder.

Result: New files don't appear in GSA SER's AI dropdown menu.
Working Python Test Script:
This proves the local API is functioning correctly:
```python
import requests

url = "http://localhost:11434/api/generate"

response = requests.post(url, json={
    "model": "mistral",
    "prompt": "What is 2+2?",
    "stream": False
}, timeout=60)

result = response.json()
print(result['response'])  # Works perfectly - returns "4"
```
It doesn't take massive hardware to run the small models through Ollama or LM Studio anymore. Even the big ones you can run; it's just VERY slow.