
Hard time getting Gscraper to work to its full potential. Anyone experiencing difficulties with it?

Hi all, I bought Gscraper several days ago and so far I cannot get it to work to its full potential. I have never used it before, so maybe I am doing something wrong. Hopefully someone out there with more experience can help.


I have been searching everywhere for a solution and there is no info about this anywhere, which is why I have been forced to post here.

I am using a decent SolidSEOVps server with 50 private proxies.

I am using the promised one month of unlimited proxies from Gscraper inside Gscraper.


I set Gscraper up with GSA footprints several days ago and ended up with around 500,000 URLs. Then I ran a PR check on the domain level and had to wait about 5 hours for that task to finish. After it was done I loaded the remaining 150,000 or so URLs into GSA and started the project. After a few hours it ran out of targets, and that was it... So what, do I have to wait 5 more days to scrape another 500,000 URLs just to start the project again? That is just for one project; with the others I have, it will take ages.

I tried to run another project, and it turns out you can only have one instance running and NO MORE. That is what the support email said when I wrote to them for help.

After that I tried to use it again, and the Google PageRank filtering is SUPER slow. In one hour it checked only 0.01% and then it froze. So I can't use that either.

Now today it is not even letting me use the proxies from the Gscraper server; it shows the message "Operation failed, cannot connect to Gscraper server or your additional service has expired".

I shot another email to the developer; maybe they can tell me something as well.

Hoping someone can enlighten me as to what I have done or what has gone wrong.

I would appreciate it if you could point me to some good tutorials out there. Also, which scraper is best in your opinion? I am willing to change if necessary.

Thank you in advance

Comments

  • You are probably not going to get much help on Gscraper here, but I can tell you their public proxies are shit.

    I used them until I noticed they would get burnt out during the day, and then I would switch over to my private proxies for a while too.

    Instead of waiting 5 hours for it to PR scan everything, dedupe by URL and then dedupe by domain to lower the total number of targets from 500k to something smaller. THEN PR check the remaining. It will probably be a lot faster next time.
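    If you would rather do that dedupe step outside GScraper with a quick script, here is a minimal sketch; the file names (harvested_urls.txt, unique_domains.txt) are just placeholders for illustration, not anything GScraper exports by default.

```python
# Minimal sketch: dedupe a scraped URL list by exact URL, then by domain,
# before running any PR/metrics check. File names are placeholders.
from urllib.parse import urlparse

def dedupe_by_domain(in_path="harvested_urls.txt", out_path="unique_domains.txt"):
    seen_urls, seen_domains, kept = set(), set(), []
    with open(in_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            url = line.strip()
            if not url or url in seen_urls:
                continue  # skip blanks and exact duplicate URLs
            seen_urls.add(url)
            if "://" not in url:
                url = "http://" + url  # harvested lines sometimes lack a scheme
            domain = urlparse(url).netloc.lower()
            if domain.startswith("www."):
                domain = domain[4:]
            if domain and domain not in seen_domains:
                seen_domains.add(domain)
                kept.append(url)  # keep one URL per domain
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(kept) + "\n")
    return len(seen_urls), len(kept)

if __name__ == "__main__":
    total, unique = dedupe_by_domain()
    print(f"{total} unique URLs -> kept {unique} (one per domain)")
```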
  • @OP
    1. After you buy Gscraper you get 7 days of free proxy service, not a whole month.
    2. PR checking harvested URLs is a waste of time. Let's say you harvest 2 million unique Drupal domains. GSA will be able to post to 0.2% of those domains, so why check the PR of all of them when most of them won't give you backlinks?

    Harvest, import into GSA (sort and identify), then post with your settings.
    P.S. Feel free to PM me if you want some Gscraper proxies to test.
  • @satyr85 How can you check that? I can't imagine that out of the 3 million links I scraped, I can only place my link on 0.2% of the sites.
  • Unkown717 
    That's my personal experience. I harvest a lot and process a lot (those are identified stats from one server). Note: I'm talking here about Drupal, not other engines. Some engines are easy: you get 10k identified unique domains and 1-5k verified from them. Other engines (for example Drupal or WordPress) are harder - you will harvest a lot of domains but you won't get many links.
  • @satyr85 Oh ok, thanks for pointing that out.
  • So that makes two of us already with a bad experience. Anyone else? What do you recommend? I read here on the forum that if you want good backlinks you need to scrape them yourself. But HOW? Which tool do you recommend? It looks like Gscraper sucks, or at least there is no info available on how to use it properly.
    >:P
  • mich158 
    Check my post above and my stats - everything is harvested using Gscraper. It's good software; you just probably don't know how to use it. Good footprints + good keywords + good proxies = results. If one of those three is not right, you won't get good results.
  • Ok thanks. Question: I have 50 private proxies from buyproxies.org that I am currently using in GSA, Content Machine and two other tools. Would it be a good idea to use them in Gscraper as well?
  • Nope, private proxies are not good for scraping. You won't scrape any reasonable amount of targets using private proxies; you need a separate service for harvesting. You don't need that many private proxies for posting (if you only have one SER license). Buy 20-50 shared proxies (you can start with 20) and spend the money you save on a proxy service for Gscraper. Also think about moving from a VPS to a dedicated server (you won't pay more than for the VPS). SER + Gscraper + other tools = you will need a lot of CPU power and RAM.
  • Scrapebox v2 is awesome for scraping, I'm finding, plus you get built-in proxies to use, which makes it even sexier. Gscraper still has its place, but now I have two scrapers running full pelt with many instances on each server. Try Proxy Bandit for scraper proxies.
  • mich158
    edited March 2015
    Checked Scrapebox out; is this the sales page? - http://www.scrapebox.com/. Nice, it's just a one-off payment of $97. You say you get built-in proxies to use, how does that work? How much extra is that per month? I couldn't find that option on the sales page. Can I run multiple instances with one license? Thanks
  • scrapebox.com/bhw - $57. Yes, you get built-in proxies, but you won't be able to scrape Google with them. You may be able to harvest other engines with the built-in proxies, though not at good speed, and other search engines are not even close in terms of the results you get per set of keywords.

    Here is the speed you can get harvesting Google with the SB built-in proxy service (as you can see, it is not possible to get a good Google-passed proxy service working 24/7 for a small one-time fee):
    [screenshot: Google harvesting speed with Scrapebox's built-in proxies]

  • Hey, thank you for pointing out the discount link, really appreciated :) I just couldn't understand whether you can have multiple instances running with one license. Can you clarify please?
  • dwwwb
    edited March 2015
    Multiple instances of Scrapebox on the same machine are allowed.
  • Thanks, that's great. Basically I can't see a reason not to move from Gscraper to Scrapebox; based on what you have all told me, and on the Gscraper support I tried (not too helpful), Scrapebox looks like the best option.

    It would also be very helpful for all of us if more people shared their experience and opinions about these tools.

    So if anyone has had a better experience, it would be great to throw more light on the subject.
  • I like Scrapebox for all the specialized tools and an interface I have been using for years - the new one (2.0) is an extension of that and much improved.
    GScraper, for me, is awesome for huge around-the-clock scrapes and large file manipulation, and has a pretty good proxy service for the money (if you use it hard).

    I use both all day and night; they are the foundation for finding good targets for GSA and other tools.
    This is only my assessment, and I'm sure there are many different use cases out there. But either product, when used correctly, produces great results.

    Honestly I wouldn't want to live without both...
  • Ok, maybe I will use both as well. Can I use one proxy service for both? Which proxy service do you recommend?
  • I think GScraper offers a great proxy deal - this link (not an affiliate) is a great discount that ends today, I think -


    With SB I always used buyproxies which work great, I'm using Proxy Bandit on one now and I like that too.

    Test, test, test for your own needs. But if you are serious about scraping don't be afraid of spending a little.
  • Thanks, absolutely, I have a budget, that's not my concern. I am just trying to see which option will work best for me. If I need to spend more to get this up and running, I will. So far I just haven't been able to get Gscraper to work well for me.

    As I said earlier, I am frustrated and don't understand how I can keep feeding all my projects in GSA with links scraped using Gscraper. After scraping for 5 days for just one project, I filtered about 150,000 PR domains, loaded them into GSA and started the project, and after a few hours it ran out of targets. And that's it... so what, do I have to wait 5 more days to scrape another 500,000 URLs in order to start the project again? That is just for one project; with the others I have, it will take ages.

    Then I tried to run another project, and it turns out you can only have one instance running and NO MORE.

    How exactly do you use these tools to keep feeding all your projects with good scraped URLs?

  • Use lots of keywords (or the built-ins); this means thousands and thousands.
    Lots of proxies, or the GScraper service.
    Start with an engine or two that you want to target.
    Grab the footprints from SER or use your own (best) - a rough sketch of that step is below.
    That results in millions of URLs; be sure to remove duplicates.
    Sort and filter as needed.
    Then load them into projects.
    Just make sure you are submitting to the engines you scraped for.
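    If you want to prepare the footprint + keyword query list outside the scraper, this is a rough sketch of that step, assuming plain text input files; the file names are placeholders, not files that SER or GScraper export under those exact names.

```python
# Minimal sketch: combine engine footprints with a big keyword list into
# scrape queries. File names are placeholders for illustration only.
from itertools import product

def build_queries(footprint_file="footprints.txt", keyword_file="keywords.txt",
                  out_file="queries.txt"):
    def load(path):
        # one entry per line, deduped and cleaned
        with open(path, encoding="utf-8", errors="ignore") as f:
            return sorted({line.strip() for line in f if line.strip()})

    footprints = load(footprint_file)
    keywords = load(keyword_file)
    count = 0
    with open(out_file, "w", encoding="utf-8") as out:
        # every footprint is paired with every keyword; deduping the inputs
        # above is enough to keep the combinations unique
        for fp, kw in product(footprints, keywords):
            out.write(f'{fp} "{kw}"\n')
            count += 1
    return count

if __name__ == "__main__":
    print(f"{build_queries()} queries written")
```

    With thousands of keywords the output file gets large fast, so in practice you would split the keyword list into chunks and run the scraper against one chunk at a time.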
  • Where do you get your keywords from? How targeted to the niche are the keywords you are using?
  • The dictionary ;)
    Built-ins are good, and there are lots of lists out there too. The more words the better.

    Also, once you scrape millions of targets and test them, you can sort out the best ones for your needs, including sites that match your niche (see the sketch below). It is best to get a lot to work with, practice on a throwaway site first, and build up verified targets.
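    A minimal sketch of that niche-sorting idea, assuming you simply match niche terms against the URL; the terms and file names here are made-up examples, not anything GSA or GScraper produces.

```python
# Minimal sketch: pull niche-relevant targets out of a big scraped list by
# matching niche terms against the URL. Terms and file names are examples.
def filter_by_niche(in_path="unique_domains.txt", out_path="niche_targets.txt",
                    niche_terms=("fitness", "workout", "gym")):
    kept = []
    with open(in_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            url = line.strip()
            if url and any(term in url.lower() for term in niche_terms):
                kept.append(url)  # URL mentions at least one niche term
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(kept) + "\n")
    return len(kept)

if __name__ == "__main__":
    print(f"{filter_by_niche()} niche targets kept")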
  • Ok thanks, very helpful
    :)
  • Look for FuryKyle's Keyword List on BHW (and maybe on here, can't remember). One billion keywords...
  • FuryKyle's Keyword List is awesome and has over 20 languages and a footprint list as well - a great starting point. Have fun!