
SER "choked"? Bad LPM, no targets to post

edited April 2014 in Need Help
hey guys, 

I've got a problem with SER: it's not finding any new targets to post to. I added new keywords, and my global lists are checked (submitted and verified). I've been running it for a couple of months and there's a decent list already, but it seems to have slacked hard. What can I do about it?

Comments

  • bump.
    @sven maybe? my LPM is 1 lol.
  • edited April 2014
    You can scrape more targets? Targets are dying all the time.
  • edited April 2014
    hey @fakenickahl I'm looking for a scraper currently but can't decide which one to buy. I'm using the Gscraper demo, but it brings back lots of weird results like "http://webcache.googleusercontent.com/". Any ideas why?

    Anyway, I added new keywords (2k+ to each project); shouldn't SER start looking for new targets with them?

    SER just won't build links, even for crap tiers :(
  • BrandonBrandon Reputation Management Pro
    2k keywords isn't that many. I usually import 50-100k if SER is scraping. Gscraper is the best for scraping.

    How many proxies do you have? That is what will slow down your scraping.
  • edited April 2014
    Sure, 2k keywords doesn't sound like many, but if you merge 2k keywords with 1000 footprints, you'll get 2 million search phrases. If I'm not too bad at math, it'll take SER about 700 days to go through all of these, assuming a search engine delay of 30 seconds. I know SER will not go through the list one at a time, and other settings affect the 2 million figure, but I just wanted to prove a point.
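    The arithmetic behind that 700-day estimate checks out; here's a quick back-of-the-envelope sketch (all inputs are the numbers from the post above, not measured values):

    ```python
    # Rough check of the keyword/footprint math from the post:
    # 2,000 keywords x 1,000 footprints, one query every 30 seconds.
    keywords = 2_000
    footprints = 1_000
    delay_seconds = 30

    queries = keywords * footprints          # 2,000,000 search phrases
    total_seconds = queries * delay_seconds  # serial, one engine, worst case
    days = total_seconds / 86_400            # 86,400 seconds per day

    print(f"{queries:,} queries -> about {days:.0f} days")
    # 2,000,000 queries -> about 694 days
    ```

    Roughly 694 days, so "just about 700" is right, and that's before multiple search engines or parallel queries cut it down.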

    But I agree with Brandon that it's probably your proxies slowing you down. Are you seeing a lot of "IP banned by search engine" messages in the log? It can often be a good idea to scrape multiple Google TLDs, as you only get blocked on one at a time, if I remember correctly.
  • I think you're both right, @Brandon @fakenickahl. When I turn off proxies completely, SER starts blasting things. I've already reported the problem to my proxy provider.

    @Brandon I want to buy Gscraper, but can you tell me why I'm getting a lot of webcache.google results? I get them in the demo version, and I even downloaded a cracked one with the same results. I don't want to buy it if it gives me weird results. I'm using SER footprints with it.
  • I am having the exact same problem.

    I downloaded the demo version of Gscraper 2 days ago, using 100 private shared proxies from buyproxies.
    I loaded 1000 footprints and 25 keywords (I found out that 25k keywords at 1000 results per keyword is just enough to stay under the 100k scraped URLs per scrape in the demo version).

    Each run gets about 80k - 96k targets. After removing dupes, I found 75% of the remaining URLs are all from the webcache.googleusercontent.com domain, meaning they're cached versions of the actual domains. When I manually looked up a few of the cached pages, I found they were filled with porn, gambling .. talk about the worst of neighborhoods!

    The problem is those trigger keywords (sex, gambling etc.) are not in the domain name, only the contents .. is there a way to filter those out? Is there a way to prevent Gscraper or Scrapebox from scraping webcache.googleusercontent.com???

    Thanks
  • s4nt0ss4nt0s Houston, Texas
    @borngreat007 - You would probably need to add something like -webcache.googleusercontent.com to the footprints you're scraping with. I know the minus sign excludes results, but I'm not sure of the proper way to format it, whether it should be -site:webcache.googleusercontent.com or just -webcache.googleusercontent.com.
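    The suggestion above can be sketched as a small pre-processing step: append the exclusion operator to every footprint before handing the queries to the scraper. The footprint strings below are made-up examples, and which exclusion form the engine honors is exactly the open question from the post:

    ```python
    # Sketch: append a Google exclusion operator to each scraping footprint.
    # The two example footprints are placeholders, not SER's actual list.
    footprints = [
        '"powered by wordpress"',
        'inurl:guestbook "leave a comment"',
    ]

    exclusion = "-site:webcache.googleusercontent.com"
    queries = [f"{fp} {exclusion}" for fp in footprints]

    for q in queries:
        print(q)
    ```

    If -site: turns out not to work on a given engine, swapping `exclusion` for the bare `-webcache.googleusercontent.com` form is a one-line change.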


  • Thank you very much s4nt0s .. will try that in the next batch .. for now I am just importing all URLs into Scrapebox and using the remove URLs containing "  .. " function
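  • For anyone without Scrapebox, the same "remove URLs containing" cleanup can be done in a few lines. This is just a post-filter sketch (the sample URLs are invented), dropping every harvested URL whose host is webcache.googleusercontent.com:

    ```python
    # Post-filter sketch: remove Google-cache URLs from a harvested list.
    # Sample URLs below are placeholders, not real scrape output.
    from urllib.parse import urlparse

    def drop_webcache(urls):
        """Keep only URLs not served from webcache.googleusercontent.com."""
        return [u for u in urls
                if urlparse(u).netloc != "webcache.googleusercontent.com"]

    harvested = [
        "http://webcache.googleusercontent.com/search?q=cache:example.com",
        "http://example.com/guestbook",
    ]
    print(drop_webcache(harvested))
    # ['http://example.com/guestbook']
    ```

    Matching on the parsed host rather than a plain substring avoids accidentally dropping a legitimate page that merely mentions the cache domain in its path or query string.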
  • borngreat007, buy the paid version of Gscraper. The demo version is very old and isn't a demo of the actual paid version. I just checked 2 files of harvest results and I don't have any results with "webcache.googleusercontent.com/", so it must be a problem with the demo version.
  • Really .. ok i will do that .. thanks
  • I see some movement on my topic ;) 
    But I still have some issues. I have active campaigns with submission status and no target links at all. I imported 100k keywords and it seems SER is not scraping links for them? It only does when I stop the project and change its status to search only, weird.