
Boost mod

edited April 2014 in GSA SEO Indexer
Hi @Sven

For those like me who do not use reports, is it possible to disable link checking and the rest?

I mean: just send the request to the server and do not wait for the link creation to finish (just send the request and quit, without collecting any info).

I think the link would still be created.

Many people run huge thread counts for this, because quality does not matter at this stage (just quantity).

Maybe the Indexer already works this way, but if not, it seems like a nice way to increase maximum speed (roughly 2-3x, by saving on page loading time, proxy errors, and checking/reading the returned data).

Time is money :)

Thank you


  • Sven
    Well, it sends the request and quits. Of course it has to wait and get the content from the server; otherwise the request is seen as aborted and is useless.
  • Kaine
    You think the link is not created? I thought that once the request is sent, the server processes the information no matter what.

    It would be interesting to check ... thank you for the response, Sven.
  • Sven
    As soon as the TCP/IP connection is broken, it stops performing anything.
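    The point about waiting for the response can be sketched with a throwaway local server (everything here is illustrative: the handler and the "indexed" body are made up, not GSA's actual protocol). The client only knows the server accepted and finished the request once it has read the full response back; a pure fire-and-forget send gives no such confirmation.

    ```python
    import http.server
    import threading
    import urllib.request

    # Minimal local server standing in for the remote indexing service
    # (illustrative only; the real target server is out of our control).
    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"indexed"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass

    server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]

    # Reading the response to the end is the only confirmation that the
    # server received and finished handling the request; dropping the
    # connection mid-transfer gives no guarantee the work completed.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/ping") as resp:
        result = resp.read()

    server.shutdown()
    print(result)  # b'indexed'
    ```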
  • Kaine
    So we must stay connected ... but the server can't force the client to download everything after the send (it doesn't know; maybe script loading and other extras could be disabled without breaking the connection).

    EDIT: So no time is gained on the transaction itself, but threads/bandwidth would be freed up for the others.

    I see that the thread count goes up and down for each URL.
    Is there no way to start on the next URL to index as soon as the previous one finishes?

    Like streaming the URLs, with the thread count never dropping.

    When a thread finishes, don't wait: start it on another URL to index.

    No bouncing: if I set 1000 threads, I want 1000 threads all the way.
  • Kaine
    Uninterrupted, like buffered streaming: avoid the flow dropping, so 1000 stays 1000, not 999 or less:
    1000 > 999 > 1000

    That could be a 50% speed increase.

    I'll leave you in peace ;)

  • Brandon (Reputation Management Pro)
    According to your image above, the problem is that when SER finishes a thread it drops to 999, then picks up a new URL and goes back up to 1000?

    This is a "problem" and you need @Sven to fix it? Set up 1001 threads and you'll be back at 1000 all the time.
  • Kaine

    1 > 1000 > 1 > 1000 .... currently the thread count climbs from 1 to 1000, then drops from 1000 back to 1 for each URL to index.

    1 > 1000 > 999 > 1000 .... instead, the 1000 threads should stay up until all the work (the whole batch of URLs to index) is finished.

    But 1 > 1000 > 1000 > 1000 .... is surely possible too.

    No idle threads from A to Z.

    The total list of URLs to index should then finish about 2x faster.

    EDIT: This means that two or three URLs could be indexed at the same time to keep the 1000 threads busy.

    1000 is just an example.

  • Sven

    Hmm, I don't quite understand this. For each download a new thread is started; that thread also performs the processing of the result and, successful or not, sends a message back to the main thread about its result.

    The main thread simply waits for a result to return and starts another thread as soon as it has a free slot to do so. I don't know how this can be made faster at all (besides using thread pools, which is already happening in the background).
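    What Sven describes matches a standard worker-pool pattern. A minimal Python sketch (the worker count, URL list, and `process` function are all illustrative stand-ins, not GSA's actual code): the pool's internal queue hands a waiting URL to a worker the moment one finishes, so a freed slot is refilled immediately instead of the pool draining between URLs.

    ```python
    import threading
    import time
    from concurrent.futures import ThreadPoolExecutor

    MAX_WORKERS = 4  # stands in for the "1000 threads" discussed above
    # Hypothetical URL list; the real one would be the user's imported list.
    urls = [f"http://example.com/page{i}" for i in range(20)]

    active = 0   # threads currently working
    peak = 0     # highest concurrency observed
    lock = threading.Lock()

    def process(url):
        """Simulated download + processing of one URL."""
        global active, peak
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.005)  # placeholder for the network round-trip
        with lock:
            active -= 1
        return url

    # As soon as a worker finishes one URL, it pulls the next one from the
    # executor's queue: no ramp down to 1 between URLs, just steady reuse
    # of the same MAX_WORKERS slots until the whole list is done.
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        results = list(pool.map(process, urls))

    print(len(results), peak)
    ```

    The observed `peak` never exceeds `MAX_WORKERS`, which is the "no bounce" behaviour Kaine is asking for: concurrency is capped, but slots never sit idle while URLs remain in the queue.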

  • Kaine
    You mean the list that the user imports or integrates into GSA Indexer?

    I am talking about the user list, and the thread count falling from one URL to the next (the famous 1 > 1000 > 1 ...).

    It is hard for me to explain this in English; maybe your last post already answered my question.

    Unless you are talking about URLs integrated into GSA Indexer.
  • Sven
    hmm yea I have to admit my English is not the best and I have a hard time understanding yours ;/ Maybe someone else can bring us together? ;)
  • Kaine
    :) Your English is very good; it's mine that isn't ;)

    If someone understood and can translate, that would be very nice.

    Just for example:

    The numbers 1 and 1000 are thread counts.

    Currently, with a user list imported to index, indexing runs like this:
    1 > 1000 > 1 (stops because the URL is finished, then starts the next URL of the user list)
    1 > 1000 > 1
    1 > 1000 > 1
    1 > 1000 > 1

    // This is why we see a slight drop in LPM between two URLs (of the user list): the threads drop 1 > 1000 > 1 before continuing with the next URL.

    What I suggest (still based on the user list) is:
    1 > 1000 (don't stop, no 999; when one thread finishes, it starts on the next URL)
    1000 > 1000 (result: we always have 1000 active threads)
    1000 > 1000 (same ...)
    1000 > 1 (end: the whole user list is done)

    This could probably be settled with a single good English sentence :)
  • Sven
    Hmm, it should do that already in the latest version ... it does not wait for the next URL if there is another one in the queue.
  • Kaine
    Thanks :)

    I find it difficult to use; it often blocks. Maybe a timeout option would be a good addition.