
Importing Target URLs: Very Low Success Rate


I've got a list of over 13k MediaWiki sites - before you assume they're just dead because I found them online, please read on...

I got the list after paying a company to build links for me. I then thought I could use these URLs in SER, so I used the import-and-sort tool in the options. All went well, with the following results:

Added 13471/13561 URLs to site list.
Wiki - MediaWiki: 13471
Unknown.........: 90

I sorted these into a separate new file I created so I could run a test. I set up a project, removed all search engines, and selected wikis as the type of links, then checked the box to use identified URLs - but I only managed to get a few hundred links submitted and live over the course of three days before it said there were no URLs to target.

I also tried setting up the project without any search engines and with "use site lists" unchecked; I then simply imported the list (which had already been sorted), but again the same thing happened.

I ordered the links again from the company and cross-checked them - they are basically all the same URLs, so they are still live and working. What tool did they use to submit to these wikis?!
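If you want to re-run that cross-check yourself, a small script can fetch each URL and look for MediaWiki's generator meta tag. This is just a sketch - the `wikis.txt` filename and function names are illustrative, and the generator tag check only works on wikis that haven't stripped it:

```python
import re
import urllib.request

# MediaWiki emits a <meta name="generator" content="MediaWiki x.y.z"> tag by default
GENERATOR_RE = re.compile(r'<meta name="generator" content="MediaWiki', re.I)

def looks_like_mediawiki(html: str) -> bool:
    """Return True if the page HTML carries the MediaWiki generator tag."""
    return bool(GENERATOR_RE.search(html))

def check_urls(path: str) -> list[str]:
    """Fetch each URL from the file and keep only those still serving MediaWiki."""
    live = []
    with open(path) as fh:
        for line in fh:
            url = line.strip()
            if not url:
                continue
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    if looks_like_mediawiki(resp.read().decode("utf-8", "replace")):
                        live.append(url)
            except Exception:
                pass  # dead or unreachable - drop it from the list
    return live
```

Running `check_urls("wikis.txt")` on the exported list would give you your own count of live MediaWiki targets to compare against what SER reports.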


  • Sven
    Let me guess: you don't use any captcha service or the like? That's probably why nothing is submitted.
  • I use GSA Captcha Breaker with DeathByCaptcha as a fallback.

  • steelbone (Outside of Boston)
    wikirobot maybe?
  • The strange thing is, I import the URLs while the project is stopped and view the remaining target URLs, and they are all there. I then start the project, and literally 30 seconds later the remaining target URL list is empty.

    It does do something in the first 30 seconds, as I see things in the log; but the whole time the thread count is only around 5-10 (set to 100 in the options).
  • ron
    edited April 2014

    Why not try something sneaky crazy: copy and paste those URLs into the MediaWiki.txt file, or simply make a new one with the exact same filename SER uses.

    Then stick just that file in an empty folder that you name uniquely, like MediaWiki. Then, under Main Options > Advanced, point one of your unused site-list types, such as Failed, to that folder.

    Then create or clone a brand-new project that uses that Failed site list to get those links. Make sure your target URL cache is deleted, all history is deleted, you use brand-new emails, etc.

    Then just run the project by itself with all other projects turned off. The thinking is that you were trying to get SER to recognize or identify those URLs, but with this method you pretend the list is already sorted into the proper file. I would be curious to see if SER treats it any differently - even though it really shouldn't.
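    The file trick above can be sketched on the command line. Everything here is an assumption to verify against your own install - the MediaWiki folder name is arbitrary, and the sitelist_Wiki-MediaWiki.txt filename should be checked against what SER actually writes into your existing site-list folders:

    ```shell
    # wikis.txt stands in for the sorted 13k-URL export (tiny sample here)
    printf 'http://a.example/wiki/index.php\nhttp://a.example/wiki/index.php\nhttp://b.example/wiki/index.php\n' > wikis.txt

    # a fresh, uniquely named folder to point the unused Failed slot at
    mkdir -p MediaWiki

    # dedupe the list and save it under the filename SER expects
    # (assumed pattern: sitelist_<Type>-<Engine>.txt - check yours first)
    sort -u wikis.txt > MediaWiki/sitelist_Wiki-MediaWiki.txt
    ```

    After that, pointing the Failed site list at the MediaWiki folder in Main Options > Advanced should make the project pull targets straight from the file.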

  • I bet the tool used was WikiBomber. Betcha.
  • Cheers ron, I have tried that already; it didn't make a difference.

    @JudderMan yeah I've been trying to get hold of that but it looks like Clyde isn't around anymore and the WikiBomber website isn't online :(
  • It's not great; I'd rather load scraped wiki links into SER and let it post. WikiBomber crashes way too often - I stopped using it about a year ago.