
Is everybody's first project this slow?

This is the very first project I am running, so I guess it has to try to find targets?
But I am getting a ridiculous 1 LPM. There are next to no links posted in verified.

Comments

  • running 2 days now
  • What are your settings and what sites are you posting to?
  • Settings for what? (Be more specific.)
    I set it to 300 threads, HTML timeout 50.

    posting to
    article
    doc sharing
    SB
    SN
    video
    web2.0
    wiki

    PR above 2. Skip sites over 50 OBL. Avoid posting a URL on the same domain twice.

    And now it is telling me all my Squid proxies are blocked... yet it's barely posted anything. 0.43 LPM.



  • goonergooner SERLists.com
    "PR above 2, skip sites over 50 OBL" - that will slow it down.
    Proxies being blocked doesn't help of course; what wait time have you set?
  • Between search engine queries? 60 seconds.
  • goonergooner SERLists.com
    Maybe set that higher; I have 300 threads and a 120 sec delay.
  • davbel UK
    edited September 2013
    In short, based on those settings, the answer will be yes.  If you're starting from scratch and want to build a list a bit quicker, do a dummy project to Bing or Google or Wikipedia or some other authority site and spam the f**k out of it.

    How?
    • Scrape a massive KW list or grab the 100k keyword list (somewhere on these forums) - see the sketch further down for one quick way to expand a seed list
    • Turn off all filters
    • Turn on all engines

    This will increase the number of sites you'll be able to post to, but it will take a few weeks before you reach a critical mass.
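
    If you don't want to hunt down the 100k list, one quick and dirty alternative is to cross a few seed terms with generic modifiers. A minimal Python sketch - the seeds, modifiers and file name here are made-up examples, not the forum list:

        # keyword_expand.py - hypothetical sketch: cross a few seed terms with
        # generic modifiers to produce a much larger keyword list.
        seeds = ["weight loss", "dog training", "web hosting"]      # example niches
        modifiers = ["tips", "guide", "review", "forum", "blog",
                     "best", "cheap", "free", "how to", "2013"]

        keywords = set(seeds)
        for seed in seeds:
            for mod in modifiers:
                keywords.add(f"{seed} {mod}")
                keywords.add(f"{mod} {seed}")

        with open("keywords.txt", "w") as out:
            out.write("\n".join(sorted(keywords)))
        print(f"{len(keywords)} keywords written to keywords.txt")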

    Your other options are to:
    1. Buy one of the lists that are advertised on these forums and other sites.
    2. Buy Scrapebox or Gscraper and scrape your own list.
    3. Use the "Search Online for URLs" & "Search Online for Site Lists" tools built into SER.

    As you'd guess there are pros and cons to each.

    List Buying

    If you buy lists, no matter what the vendor says, you have absolutely no guarantee how often they have been sold or how spammed the sites are, so you could be effectively wasting time posting to sites that have already been penalised / de-indexed by Google. 

    I'm not saying that this is a fact with every list for sale, but it is something you should bear in mind if you choose that route. 

    On the flip side, it's quick and you will have potentially 50k sites you can post to. I've bought lots of lists in the past and some have been good and some have been worse than terrible.

    Scraping

    Scrapebox and Gscraper allow you to find potential sites based on footprints, which are easily findable on this forum, BHW and others, or you can just copy them out of SER.  Using either of these apps will have a learning curve and require some work, but there are some excellent guides about, including the new one from Jacob King.
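
    If you're curious what the footprint scraping actually boils down to, the query-building step is easy to reproduce yourself. A rough Python sketch, assuming footprints copied out of SER and the keywords.txt file from earlier (both footprints below are only examples):

        # merge_footprints.py - hypothetical sketch of the "merge" step the
        # scrapers perform: cross every footprint with every keyword to produce
        # the search queries the harvester will run.
        footprints = [
            '"Powered by WordPress" "Leave a Reply"',    # example blog-comment footprint
            '"You are here" inurl:/node/',               # example Drupal footprint
        ]

        with open("keywords.txt") as f:
            keywords = [line.strip() for line in f if line.strip()]

        queries = [f'{fp} "{kw}"' for fp in footprints for kw in keywords]

        with open("queries.txt", "w") as out:
            out.write("\n".join(queries))
        print(f"{len(queries)} queries ready for the harvester")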

    To give you an example of what's possible with Scrapebox, over the last week I've managed to build a potential list of 40 million pages where I might be able to drop a link.  I won't get anything like 40m links, but based on my averages with Scrapebox I could get something like 10 - 15 million pages where I'll be able to drop a link, which after they've been sanitized through SER might realistically be 500k - 1 million and I'd be happy with that. 

    With that in mind, do you buy a list of 50k sites for $40 or do you learn Scrapebox and build your own list of millions with Scrapebox for $57?

    SER "Search Online for URLs" & "Search Online for Site Lists"

    These do pretty much the same as Scrapebox, but aren't anywhere near as fast or as in-depth (@sven will flame me for that :D).  You bang in a footprint and set it searching.  One scrapes the net just like SER does normally, but builds a list to post to later, whereas the other scrapes lists from Pastebin-type sites.

    I'm guessing this might be the best place for you to start as it requires no more investment other than a bit of time setting them up.

    Eeek, that was slightly more than I wanted to say :D
  • edited September 2013
    Thanks, and I agree here:
    "With that in mind, do you buy a list of 50k sites for $40 or do you learn Scrapebox and build your own list of millions with Scrapebox for $57?"

    I have been contemplating buying it, but I have not yet. I will have to learn it.

    I have been using the built-in scraper, though I'm not 100% sure how to use it. I was using my keywords to get targeted URLs, but then I seem to wind up getting "no engine matches" while the project is running.
  • goonergooner SERLists.com
    You could also try the GScraper free version, which allows you to scrape 100,000 URLs at a time.
    But that's OK because you can tell it not to scrape duplicates, so 100,000 uniques is plenty.
    I find GScraper faster than Scrapebox; the only downside is that the GScraper free version doesn't scrape public proxies for you to search with.

  • So I purchased Scrapebox and have a general idea how to work it. It is doing the first scrape now and says it is at like 60K links, LOL, but who knows how many are dups (it is still running). Not too shabby.
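
    (Once the harvest is exported to a text file you can check the dup count yourself in a few lines of Python - Scrapebox also has remove-duplicate-URLs/domains built in; harvest.txt below is just a placeholder name:)

        # dedupe_check.py - hypothetical sketch: count unique URLs and unique
        # root domains in an exported harvest (one URL per line).
        from urllib.parse import urlparse

        def host(url):
            h = urlparse(url).netloc.lower()
            return h[4:] if h.startswith("www.") else h

        with open("harvest.txt") as f:
            urls = {line.strip() for line in f if line.strip()}

        domains = {host(u) for u in urls if host(u)}
        print(f"{len(urls)} unique URLs across {len(domains)} unique domains")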
  • If you have SB now, maybe try this method:
    http://www.blackhatworld.com/blackhat-seo/black-hat-seo-tools/605958-tut-how-easily-build-huge-sites-lists-gsa-ser.html

    It helps you get links without the use of SEs; all you need is a list of blog comments.

    Be creative and use other URLs as well, NOT only blog comments, to start building your link list.
    Understand that this published system is NOT AT ALL about getting links by posting comments,
    but about using the comment links to find real auto-approve sites linked FROM the blog comments.
    The same method could be used with guestbooks and other sites = just kick-start your creative mind to find links using SB.

    I practice this method on a slow ISP connection and get more potential links than I can process and submit to.
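
    If you'd rather script the extraction than do it in SB, the idea is only a few lines of Python (requests and BeautifulSoup assumed installed; blog_comment_pages.txt is a hypothetical file with one comment-page URL per line):

        # extract_external_links.py - hypothetical sketch of the method above:
        # visit each blog-comment page and collect its external (outbound) links,
        # which become candidate target sites.
        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import urljoin, urlparse

        external = set()
        with open("blog_comment_pages.txt") as f:
            pages = [line.strip() for line in f if line.strip()]

        for page in pages:
            try:
                html = requests.get(page, timeout=15).text
            except requests.RequestException:
                continue
            page_host = urlparse(page).netloc
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                url = urljoin(page, a["href"])
                if urlparse(url).scheme in ("http", "https") and urlparse(url).netloc != page_host:
                    external.add(url)

        with open("external_links.txt", "w") as out:
            out.write("\n".join(sorted(external)))
        print(f"{len(external)} external links collected")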

    After all the external links are obtained,
    I use offline filtering of the site lists with the inurl: footprints found in SER.
    The cleaned-up list is then imported as target URLs, either at project / tier level or system-wide using
    > options > tools > import target URLs
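
    That offline filtering step is also easy to script. A minimal sketch, assuming the inurl: footprints copied out of SER are in footprints.txt and the harvested links are in external_links.txt (file names are just examples):

        # filter_by_footprint.py - hypothetical sketch: keep only the harvested
        # URLs whose address contains one of the inurl: fragments taken from the
        # footprints copied out of SER.
        import re

        with open("footprints.txt") as f:
            # a footprint like  "You are here" inurl:/node/  contributes  /node/
            patterns = sorted(set(re.findall(r'inurl:([^\s"]+)', f.read())))

        with open("external_links.txt") as f:
            urls = [line.strip() for line in f if line.strip()]

        matched = [u for u in urls if any(p in u for p in patterns)]

        with open("targets_to_import.txt", "w") as out:
            out.write("\n".join(matched))
        print(f"{len(matched)} of {len(urls)} URLs matched a footprint")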

    In SB, to get country-targeted domains for even more IP diversity,
    I added ALL known G SEs by country (Wikipedia has the list; just scrape the external links from that page:
    http://en.wikipedia.org/wiki/List_of_Google_domains)
    and added ALL of them to SB.

    Then, for each search that has inurl:
    and looks like it starts at the root of the domain,
    ADD the country TLD letters before your search string (NOT site:)
    and select that precise country-specific SE.

    For example, in SB with the G uk SE I use

    "You are here" inurl:uk./node/

    to get precise .uk sites.
    This works fine for me and finds new sites that get overlooked when using non-country-specific SEs.

    You'd be amazed how many precise COUNTRY-specific results you get to build up your site list.
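
    The same trick is easy to automate. A rough Python sketch, assuming the Wikipedia page still lists the country domains in its HTML (the footprint is just the uk./node/ example from above, so swap in your own):

        # country_queries.py - hypothetical sketch: pull country-specific Google
        # domains from the Wikipedia list and build one country-scoped query per
        # TLD in the style described above.
        import re
        import requests

        html = requests.get("http://en.wikipedia.org/wiki/List_of_Google_domains",
                            timeout=15).text
        # crude match for google.de, google.co.uk, google.com.au, ...
        domains = sorted(set(re.findall(r"google\.(?:com?\.)?[a-z]{2}\b", html)))

        queries = []
        for dom in domains:
            tld = dom.split(".")[-1]                     # e.g. "uk", "de", "au"
            queries.append((dom, f'"You are here" inurl:{tld}./node/'))

        for se, q in queries[:10]:
            print(se, "->", q)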


  • Thanks for these tips. They help me get better with SER.
  • In GSA SER, under Options -> Advanced -> Tools there is a built-in scraper for this kind of scraping, and it can also scrape other people's lists that they have saved on Pastebin. This can get you a lot of targets fast, though it is not as fast as GScraper / Scrapebox.