
Best practice for using GSA SER and other questions - 2nd try

Hi all, 

I'm new to SER and have a few questions that are confusing me:

1.) What's the best practice for scraping?

I own a GScraper license and I scraped a lot of URLs with the footprints extracted with the tool someone posted here. Scraping is very fast with GScraper and I got a really big list (because of many, many keywords). But what's the next step? When I go to Options -> Advanced -> Tools -> Import URLs (identify and sort in), it takes hours to process the lists I scraped earlier. Is this normal? Or is there a faster way of importing and identifying links in SER?
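As a side note, the export can be de-duplicated and split into smaller files outside SER before importing, which is one way to make the identify-and-sort-in step more manageable. A minimal Python sketch of that idea; the file name and chunk size are just placeholders, not anything SER or GScraper requires:

    # Minimal sketch: de-duplicate a large GScraper export and split it into
    # smaller files before running "Import URLs (identify and sort in)".
    # Assumes a plain text file with one URL per line.
    from pathlib import Path

    INPUT_FILE = Path("gscraper_export.txt")   # placeholder file name
    CHUNK_SIZE = 100_000                       # URLs per chunk; pick what SER handles comfortably

    seen = set()
    unique_urls = []
    for line in INPUT_FILE.read_text(encoding="utf-8", errors="ignore").splitlines():
        url = line.strip()
        if url and url.lower() not in seen:
            seen.add(url.lower())
            unique_urls.append(url)

    num_chunks = (len(unique_urls) + CHUNK_SIZE - 1) // CHUNK_SIZE
    for i in range(num_chunks):
        chunk = unique_urls[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
        Path(f"chunk_{i:03d}.txt").write_text("\n".join(chunk), encoding="utf-8")

    print(f"{len(unique_urls)} unique URLs written to {num_chunks} chunk files")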

2.) When you start a new project, how many email addresses do you use for one project?

Why do I ask? Because I imported 350 email addresses into one project and the verification process takes very, very long. Is there a way to just process my list (posting to articles, bookmark sites, etc.) without verifying all the email addresses every xx minutes? I want to just post now and start the verification process manually later. Is this possible?

3.) Search online for site lists - question: what's that for?

I tried it as described in a video, but I was wondering how it actually works. It only finds pages on pastebin. Can the scraper extract the URLs on these pages to find new link targets? Because the site: command only returns pages inside the pastebin domain. How does it work?
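As far as I can tell, the idea is that people share site lists as public pastes, and the feature pulls the link targets out of the paste body rather than only collecting pastebin pages. A minimal Python sketch of that idea; the paste URL is a made-up placeholder, not a real paste:

    # Minimal sketch: fetch a paste and extract every external URL from its body.
    import re
    import urllib.request

    PASTE_RAW_URL = "https://pastebin.com/raw/XXXXXXXX"  # hypothetical paste ID

    req = urllib.request.Request(PASTE_RAW_URL, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = resp.read().decode("utf-8", errors="ignore")

    # Pull every URL out of the paste text, whatever domain it points to
    urls = re.findall(r"https?://[^\s\"'<>]+", body)

    # Drop links that point back to pastebin itself - the interesting ones are external
    targets = [u for u in urls if "pastebin.com" not in u]
    print(f"Extracted {len(targets)} potential link targets")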

4.) Cleaning up and removing duplicates

Does this process clean all the URLs in the global lists (identified, success and verified)? Is it the same as copying all URLs into Scrapebox/GScraper, removing duplicates and scanning the remaining URLs for my link? Or is it just an alive-test with duplicate removal?
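For reference, that Scrapebox/GScraper-style workflow boils down to something like the sketch below: a dedupe plus a simple alive check. Note it only checks that the page still responds; it does not scan the page for your own link (that would need an extra fetch-and-search step). File names are placeholders:

    # Minimal sketch: remove duplicates and run a basic alive check outside SER.
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def is_alive(url: str) -> bool:
        try:
            req = urllib.request.Request(url, method="HEAD",
                                         headers={"User-Agent": "Mozilla/5.0"})
            with urllib.request.urlopen(req, timeout=15):
                return True
        except Exception:
            return False

    # placeholder name for a text export of the global lists, one URL per line
    with open("exported_global_list.txt", encoding="utf-8", errors="ignore") as f:
        urls = list(dict.fromkeys(line.strip() for line in f if line.strip()))  # dedupe, keep order

    with ThreadPoolExecutor(max_workers=50) as pool:
        alive = [u for u, ok in zip(urls, pool.map(is_alive, urls)) if ok]

    with open("alive_urls.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(alive))
    print(f"{len(alive)} of {len(urls)} URLs are still alive")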

Many thanks for your help... 

magix            

Comments

  • grax1 Professional SEO, UK | White Label SEO Provider
    2) I use 3-10 fresh emails in every project; I think that checking 350 emails may slow down your campaigns.
  • Yes, thanks, I cut the number of email accounts down to 30 and it's much better now.

    Any other tips or hints regarding my questions?

  • """1.) What's the best practice for scraping ?

    I own a license of gscraper and I scraped a lot of urls with the footprints extracted with the tool someone posted here. The scraping is very fast with gscraper and I got a really big list (because of many, many keywords). But what's the next step ? When I go to options -> advanced -> tools -> import urls (identify and sort in) it takes hours to process the lists I scraped before. Is this normal ? Or is there any faster way of importing and identifying links in Ser ? """

    I would also like an answer to this question. When I scrape URLs with GScraper I only get 6-7 LPM, but with the built-in scraper the LPM is much higher. What am I doing wrong?
  • Set GScraper to priority "above normal" and set your threads to 900 or even higher - no problem if you use their proxy service. Then you'll get a lot of LPM. BUT: that depends on your footprints. Try very common footprints like "leave a comment" -"comments closed" together with a very large list of keywords - then you will get many, many URLs to post to...
  • My above example is if you go for blog comments... (see the footprint + keyword sketch at the end of the thread)
  • royalmice WEBSITE: ---> https://asiavirtualsolutions.com | SKYPE:---> asiavirtualsolutions
    @magix 

    I recently posted something on regular maintenance tips for GSA SER which will help you with cleaning up a lot more than just removing duplicate posts or URLs --- you can have a look over here: http://asiavirtualsolutions.com/maintanance-for-gsa-search-engine-ranker/
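To expand on the footprint tip above: generating the actual search queries from a footprint list and a keyword list is easy to script before feeding them to GScraper. A minimal Python sketch, assuming plain text files with one entry per line (the file names are just placeholders):

    # Minimal sketch: combine footprints with keywords into search queries
    # for GScraper. File names are placeholders, not anything GScraper expects.
    from itertools import product

    with open("footprints.txt", encoding="utf-8") as f:
        footprints = [line.strip() for line in f if line.strip()]
    # e.g. '"leave a comment" -"comments closed"'

    with open("keywords.txt", encoding="utf-8") as f:
        keywords = [line.strip() for line in f if line.strip()]

    queries = [f"{fp} {kw}" for fp, kw in product(footprints, keywords)]

    with open("queries.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(queries))

    print(f"{len(queries)} queries written")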