
A few noobish questions that will help me a lot.

Hello everyone!

1. If I got my own URL list from Scrapebox, is it a must to identify these URLs through Tools --> Identify and sort platforms?
2. If GSA finds unidentified URLs in a list, does that mean it can't actually post to those links, even if a URL has a comment section or a directory submission form that GSA doesn't recognize?
3. In Identify and sort platforms --> search settings, is it better to tick the option "use engine filter" or leave it unticked?
4. In Identify and sort platforms --> retry to download, what does this function actually do? If I set it to "2", is there a chance that GSA will recognize the engine on a second try?
5. Can we scrape with keywords only for the following engines: "blog comment", "forum", "pingback", "trackback"? For example, we could use a footprint like "inurl:Wix.com "comment" OR "reply" %kw%", and for the other engines that don't use keywords just leave the footprint plain, like ""directory" inurl:submit.php"? Because if the other engines are not keyword-related, it makes no sense to put %kw% before or after, right?

These questions make my head hurt and I would really appreciate an answer from you guys!
Thanks in advance! :).

Comments

  • edited July 2014
    1. No, you can import the list directly into a project after you dedup the domains (see the dedup sketch after this comment).
    2. If SER can't identify a platform, the URL is discarded, I believe.
    3. Use the engine filter to tell SER you only want it to identify those engines. This is effective if you already know which engines you prefer to use and which you don't.
    4. Retry to download is exactly what it says: SER will retry the download as many times as you set if the first attempt failed for some reason. I leave that off.
    5. It really doesn't matter whether you put your keyword before or after the footprint. Test it out in Google and see what the results look like (at least I think that's what you were asking).
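
For the dedup step in point 1, here is a minimal sketch of removing duplicate domains from a scraped URL list before importing it into a project. Scrapebox also has a built-in remove-duplicate-domains feature; the plain Python below (with placeholder filenames) just shows the idea and is not anything SER-specific:

    # dedup_domains.py - keep only the first URL seen for each domain
    # Assumes urls.txt holds one URL per line (e.g. a Scrapebox export).
    from urllib.parse import urlparse

    seen = set()
    kept = []

    with open("urls.txt", encoding="utf-8", errors="ignore") as f:
        for line in f:
            raw = line.strip()
            if not raw:
                continue
            # urlparse needs a scheme to extract the host, so add one if missing
            url = raw if "://" in raw else "http://" + raw
            domain = urlparse(url).netloc.lower()
            # Treat "www.example.com" and "example.com" as the same domain
            if domain.startswith("www."):
                domain = domain[4:]
            if domain and domain not in seen:
                seen.add(domain)
                kept.append(raw)

    with open("urls_deduped.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(kept))

    print(len(kept), "unique domains kept")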
  • Thanks, the_other_dude, for your great reply! :)

    To add to point 5, let me explain it more precisely:
    In GSA there is an option on the left of the main window, available on right-click: "Uncheck engines that use no keywords", which leaves us with blog comments, forums, pingbacks and trackbacks. I'm not very experienced with all the engine types and which of them let us post a link related to our niche. So let's say I want to build links for my "weight loss" niche. I would then merge all the footprints from blog comments, forums, pingbacks and trackbacks and add "weight loss" and other keyword variations at the end of each footprint (see the sketch below). The other engines I would scrape separately with their general footprints, without appending any keywords, so that list can be reused for all of my niches.

    Hope you understand what I'm talking about :).
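
To illustrate the merging described above, here is a minimal sketch of appending keyword variations to footprints, which is essentially what Scrapebox's merge feature or SER's %kw% token does. The footprints and keywords are placeholder examples, not a recommended list:

    # merge_footprints.py - append niche keywords to footprints from the
    # keyword-using engines (blog comment, forum, pingback, trackback).
    footprints = [
        '"powered by wordpress" "leave a comment"',
        '"powered by vbulletin"',
    ]
    keywords = ["weight loss", "lose weight fast", "weight loss diet"]

    # One query per footprint/keyword pair
    queries = ['%s "%s"' % (fp, kw) for fp in footprints for kw in keywords]

    # Engines that don't use keywords keep their general footprints as-is
    queries.append('"directory" inurl:submit.php')

    with open("queries.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(queries))

    print(len(queries), "queries written")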
  • Can anyone confirm my thinking above?
  • loopline (autoapprovemarketplace.com)
    @S4nt0s
    What does
    "uncheck engines that use no keywords"
    mean?

    The only thing I can think of is that keywords are not tacked onto the footprints for these platforms when scraping, but I don't know why that would be.
  • s4nt0s (Houston, Texas)
    @loopline - By default, a lot of the platforms don't add your keyword to the footprint when scraping unless you check the option, "always use keywords to find target sites" in project options.

    So yeah, by default only blog comment, forum, pingback and trackback engines add keywords to your footprints while scraping, and that's what the option is for.
  • So if I use Scrapebox, can I easily add my keywords to the footprints of all platform engine types?

    Also @loopline, great Scrapebox tutorials, I have watched all of your videos and learned a lot from them :).
  • loopline (autoapprovemarketplace.com)
    @Neavon
     Yes, you could use Scrapebox to merge all your keywords with the footprints.  The only reason I can think of for it being set up this way is that some footprints have a low yield in the volume of results they return, so for many keywords it would be a waste of time.  That, or too many footprints combined with too many keywords would wind up producing too many queries.

    I presume it's related to the volume of search results though, as low-yield queries would slow SER down and people would complain.  That's the only good reason I can think of.  So bear that in mind with Scrapebox as well (see the rough numbers after this comment), but you could probably go faster in Scrapebox than in SER, or at least let SER do what it does best, which is build links, and let Scrapebox scrape at the same time so you double your effective production.

    Glad you liked the tutorials.  :)  When 2.0 comes out I'm going to do a few crossover videos using Scrapebox with SER.
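
As a rough illustration of the query-volume point above: the number of queries is simply footprints multiplied by keywords, so merging every footprint with every keyword blows up quickly. All numbers below are made up:

    # Rough query-volume estimate; purely illustrative numbers.
    footprints = 500                        # footprints merged from several engines
    keywords = 200                          # niche keyword variations
    queries = footprints * keywords         # 100,000 search queries
    rate_per_minute = 10                    # assumed sustainable scraping rate
    hours = queries / rate_per_minute / 60  # roughly 166.7 hours of scraping
    print(queries, round(hours, 1))

That is why trimming either the footprint list or the keyword list makes a bigger difference than it first seems.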