Using AI or other solvers for text?

I was checking in and noticed my text-solving service was off for a bit. What are you guys using these days for solving text questions? Thank you to anyone replying and helping out.

Comments

  • royalmice WEBSITE: https://asiavirtualsolutions.com | SKYPE: asiavirtualsolutions
    I use Deepseek for text captchas.
  • Deeeeeeee the Americas
    Thank you, @Royalmice. I am using this now as well for text questions. 
  • Deeeeeeee the Americas
    Deepseek seems to work well and the prices are low. 
  • Giving Deepseek a test now thanks to you guys. I always wondered how to automatically solve those text questions that popped up for manual input when configuring the solvers. Success rate is good so far: over 100 solved and barely any API requests. For anyone unsure how to start: I made an account on deepseek.com, went to platform.deepseek.com and got an API key, then chose Deepseek in the solvers area in the GSA settings, and you're all set.
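    If you want to sanity-check the key outside of SER first, here is a minimal sketch (endpoint and model name taken from DeepSeek's platform docs; adjust if yours differ):

    python
    import requests

    API_KEY = "sk-..."  # your key from platform.deepseek.com

    # DeepSeek exposes an OpenAI-compatible chat completions endpoint
    response = requests.post(
        "https://api.deepseek.com/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "deepseek-chat",
            "messages": [{"role": "user", "content": "What is 2+2? One word only."}],
            "stream": False,
        },
        timeout=60,
    )
    print(response.json()["choices"][0]["message"]["content"])

    If that prints an answer, the key works and anything left to debug is in the SER settings.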
  • Is there a way to use a local model to do this? I've been trying and failing.
  • Deeeeeeee the Americas
    edited October 7
    nutbag said:
    ...use a local model...
    Did you set the port and anything else you might need to change? (I have ZERO idea, have not played with this at all!!)

    Wow. I remember talking about the earlier models years ago on Sven's board. Apparently, they were resource-heavy but people were playing with them and running them locally. But can it be done now? Do people do this for savings? What about energy draw? I guess if you have solar panels...it's an unbeatable set-up.
    Or am I misunderstanding? I use OpenAI for SEO and now for other applications as well. But I really haven't even begun to explore the possibilities.

    There are so many models out there, also.  Considering the answers to text captchas usually have to be one word or a short phrase, would even older/lighter models work for this?  You don't need super-advanced AI to solve those questions...
  • Deeeeeeee said:
    ...Did you set the port and anything else you might need to change?...

    I asked Claude to summarize my problem:

    Hello GSA team and community,

    I'm trying to connect GSA Search Engine Ranker to my local AI models (Mistral and DeepSeek) running through Ollama, but keep getting "API key invalid/authentication failed" errors despite the models working perfectly via direct API calls.

    My Setup:

    • Local AI Server: Ollama (running on port 11434)
    • Installed Models:
      mistral:latest (4.4 GB)
      deepseek-coder:latest (776 MB)
      qwen2.5:7b (4.7 GB)

    What Works:

    My local Ollama installation is confirmed working. I can successfully query it via:

    1. Python script (attached below) - works perfectly
    2. Direct Ollama CLI - ollama run mistral "Hello" works fine
    3. API endpoint - http://localhost:11434/api/generate responds correctly

    What I've Tried:

    Attempt 1: Modified Mistral.ini file

    Replaced the original Mistral AI.ini with a local version:

    ini
    [setup]
    name=Mistral Local
    main url=http://localhost:11434/
    desc=Local AI via Ollama
    rating=5
    costs=free (local)
    id=71

    [text_solve]
    url=http://localhost:11434/api/generate
    post_data={"model":"%model%","prompt":"%prompt%","stream":false,"temperature":%temperature%}
    encoding=json
    add header=Authorization: Bearer %Api-Key%
    result=%text_result%
    error=%text_error%
    simulate_result={"model":"mistral:latest","created_at":"2024-01-01T00:00:00Z","response":"%result%","done":true}
    simulate_error={"error":"Unable to solve"}
    timeout=120

    [text_result]
    front="response":
    front2="
    back="

    [text_error]
    front="error":
    front2="
    back="

    [API-Key]
    type=text
    hint=Use any dummy key for local (e.g., sk-local-1234)
    default=sk-local-dummy-key-1234567890

    [Model]
    type=combobox
    list=mistral:latest|deepseek-coder:latest|qwen2.5:7b
    default=mistral:latest
    must be filled=1

    [Prompt]
    type=text
    hint=Use a meaningful prompt (%arg1% = question, %arg2% = url)
    default=Answer only '%arg1%' answer only without elaborating or explaining.

    [Temperature]
    type=text
    default=0.7
    hint=What sampling temperature to use, between 0 and 2
    must be filled=1

    Result: GSA SER still shows the cloud Mistral models in dropdown and gives authentication errors.
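
    A side note on that post_data template: per Ollama's API docs, sampling options such as temperature go in a nested "options" object, so the top-level %temperature% may be silently ignored. A quick replay of the corrected body for testing outside SER:

    python
    import requests

    # Replay of the .ini's post_data body, with temperature moved into
    # the nested "options" object that Ollama's /api/generate expects
    body = {
        "model": "mistral:latest",
        "prompt": "What is 2+2?",
        "stream": False,
        "options": {"temperature": 0.7},
    }

    r = requests.post("http://localhost:11434/api/generate", json=body, timeout=60)
    print(r.status_code, r.json().get("response"))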

    Attempt 2: Different endpoint formats

    Tried multiple URL variations:

    • http://localhost:11434/api/generate (Ollama native)
    • http://localhost:11434/v1/chat/completions (OpenAI compatible)
    • http://localhost:11434/api/chat
    • http://localhost:11434

    Result: Authentication failed errors on all attempts.
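
    To isolate whether the endpoint format itself is the problem, the OpenAI-compatible route can be tested directly outside SER. A minimal sketch, assuming Ollama's OpenAI compatibility layer at /v1/chat/completions (Ollama doesn't check credentials by default, so the dummy key only mimics what SER would send):

    python
    import requests

    # OpenAI-format request against the local Ollama server
    response = requests.post(
        "http://localhost:11434/v1/chat/completions",
        headers={"Authorization": "Bearer sk-local-dummy"},
        json={
            "model": "mistral:latest",
            "messages": [{"role": "user", "content": "What is 2+2?"}],
            "stream": False,
        },
        timeout=60,
    )
    print(response.status_code)
    print(response.json())

    If this succeeds with a dummy key, the "authentication failed" message is coming from SER's own validation, not from Ollama.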

    Attempt 3: Creating separate .ini files

    Created Mistral-Local.ini and DeepSeek-Local.ini as new files in the GSA folder.

    Result: New files don't appear in GSA SER's AI dropdown menu.

    Working Python Test Script:

    This proves the local API is functioning correctly:

    python
    import requests

    url = "http://localhost:11434/api/generate"

    response = requests.post(url, json={
        "model": "mistral",
        "prompt": "What is 2+2?",
        "stream": False
    }, timeout=60)

    result = response.json()
    print(result['response'])  # Works perfectly - returns "4"

    The Problem:

    1. GSA SER seems to be caching or ignoring the modified .ini file
    2. It continues showing cloud Mistral models despite the local configuration
    3. Any dummy API key results in "authentication failed"
    4. The Authorization header might be causing issues (local Ollama doesn't need it)

    Questions:

    1. Where exactly does GSA SER store/read the AI configuration files?
    2. Is there a cache that needs clearing after modifying .ini files?
    3. Can we disable the Authorization header for local AI connections?
    4. Is there a "Custom AI" or "Generic OpenAI" option that would work better for local models?
    5. Does GSA SER support Ollama's native API format, or only OpenAI-compatible endpoints?

    What I Need:

    A working configuration to connect GSA SER to my local Ollama instance running Mistral/DeepSeek models. These models work perfectly via API calls, so it's just a matter of getting the right configuration in GSA SER.

    Any help would be greatly appreciated! Happy to provide debug logs or test specific configurations.

    Thank you!



    LOL, well, it did a pretty good job. There surely is a way to get this working.
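
    One thing I'm going to try next, assuming SER only sends the header when an add header= line is present (not sure that's how it works): dropping the Authorization line from [text_solve] entirely, since local Ollama doesn't check it anyway, and moving temperature into "options" as noted above. Something like:

    ini
    [text_solve]
    url=http://localhost:11434/api/generate
    post_data={"model":"%model%","prompt":"%prompt%","stream":false,"options":{"temperature":%temperature%}}
    encoding=json
    result=%text_result%
    error=%text_error%
    timeout=120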

  • Sven www.GSA-Online.de
    After every change to the ini files you must restart SER for it to recognise the change.
  • You guys must have time and money to run it locally.
  • Sven said:
    After every change to the ini files you must restart SER for it to recognise the change.

    Yeah, I did that. I'm going to try some more.

    You guys must have time and money to run it locally.

    It doesn't take massive hardware to run the small models through Ollama or LM Studio anymore. You can even run the big ones; it's just VERY slow.