
Is it more effective to scrape URLs to post to in GSA SER, or in Scrapebox and then take the SB list to GSA?

Which method is better? GSA seems to do it more slowly, but I'm not sure; a lot of "failed to match engines" messages come up in the feed for GSA SER.

Comments

  • Deeeeeeee (the Americas)
    edited September 2019
    Which method is better?

    I have an SB license but don't use it. Why? I am OK with the SER results I get, for what I do.

    I have been told repeatedly by GSA vets that SB was made specifically for scraping, and SER for posting.

    So SB may, in fact, be faster than SER when it comes to scraping. However, SER does work at scraping too; it just may not be as fast.

    You can use the Footprint Studio module included free with SER to sort the SB scrapes into posting targets SER will recognize. (That list of footprints can be manually expanded by you, the user, as well.)

    You can also feed the scrapes from SB into GSA Platform Identifier. That's a stand-alone program you can have running alongside SER and CB. You can set up a SER project to work from a directory, with GSA PI feeding that directory in real time with the matched target platforms it identifies from the SB scrapes.

    So those are the three options I am aware of. I believe option three is probably your best bet to get the most targets fastest. Anyone know for sure? Thanks...
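
    For anyone wiring up option three by hand, here is a minimal Python sketch of that directory-feed idea. It only illustrates the data flow; GSA PI does the platform identification itself, and the file paths and chunk size below are assumptions, not anything SER or PI requires.

```python
# Illustration only: split a de-duplicated scrape into small files and drop
# them into a folder a SER project is set to monitor. GSA PI normally does
# this for you; paths and chunk size here are made up for the example.
from pathlib import Path

RAW_SCRAPE = Path("scrapebox_harvest.txt")   # assumed Scrapebox export
DROP_DIR = Path("C:/ser_targets")            # assumed folder SER watches
CHUNK_SIZE = 10_000                          # targets per dropped file

def feed_targets() -> None:
    DROP_DIR.mkdir(parents=True, exist_ok=True)
    lines = RAW_SCRAPE.read_text(encoding="utf-8", errors="ignore").splitlines()
    urls = sorted({line.strip() for line in lines if line.strip()})
    for i in range(0, len(urls), CHUNK_SIZE):
        chunk = urls[i:i + CHUNK_SIZE]
        (DROP_DIR / f"targets_{i // CHUNK_SIZE:04d}.txt").write_text(
            "\n".join(chunk), encoding="utf-8"
        )

if __name__ == "__main__":
    feed_targets()
```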
  • In reply to Deeeeeeee (quoted above):
    I will try the trial out and see if it works. I am still broke though, so it's not going to be a fast or easy buy. I only have GSA from buying it like 8 years ago, and I'm just starting to try to use it again now. I do remember getting into position 1 or 2 on Bing for my keyword using GSA SER.
  • @skyking I have been doing this for a few years. When I first started out I let SER do everything. Then I tried SB v1 and had mixed results, so I went back to letting SER do everything. Then SB v2 was released, and it was a game-changer for me. I use it to scrape every day; an hour usually yields around 300-500k fresh targets. Watch some of Loopline's videos to learn more about SB. He is the king of SB.
    SER is awesome for what it does best, so I just let it do that now. T2 only.
    For T1 I use RX.
    I should throw in, too, that success comes not from the tools mentioned here, but from the proxies and captcha services you use. For example, on RX, if you use DBC you will only get 50% of the results you get from using 2Captcha.
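
    A side note on the daily-scraping routine described above: Scrapebox is usually fed queries built by merging platform footprints with niche keywords. Below is a small sketch of that merge; the input and output file names are assumptions.

```python
# Hypothetical helper: combine platform footprints with niche keywords into
# search queries for a harvester such as Scrapebox. File names are assumptions.
from itertools import product
from pathlib import Path

footprints = [f.strip() for f in Path("footprints.txt").read_text().splitlines() if f.strip()]
keywords = [k.strip() for k in Path("keywords.txt").read_text().splitlines() if k.strip()]

# One query per footprint/keyword pair, e.g. '"Powered by SMF" weight loss'
queries = [f"{fp} {kw}" for fp, kw in product(footprints, keywords)]
Path("queries.txt").write_text("\n".join(queries), encoding="utf-8")
print(f"Wrote {len(queries)} queries for the harvester.")
```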
  • In reply to viking (quoted above):
    What is RX, and how do I import the list into GSA to use for posting onto the links?

    And do you have a mega footprint list that I can copy and paste, please?
  • @skyking 'Posting onto the links'... do you mean adding more content to the accounts? RankerX will do that for you; no need to bring it to SER. Or do you mean importing the URLs into SER for a T2? Then you can export from RX, copy and paste into a T2 project in SER, and let it run. Or you can simply run a T2 inside of RX and not worry about it.
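
    If you go the export-and-paste route, the exported T1 URLs can simply be merged and de-duplicated into one list before pasting them into the SER T2 project. A tiny sketch, with the "rx_export_*.txt" file pattern assumed for the example:

```python
# Merge several exported T1 URL lists into one de-duplicated file that can be
# pasted into a SER T2 project. The "rx_export_*.txt" pattern is an assumption.
from pathlib import Path

merged = set()
for export in Path(".").glob("rx_export_*.txt"):
    merged.update(line.strip() for line in export.read_text().splitlines() if line.strip())

Path("t2_targets.txt").write_text("\n".join(sorted(merged)), encoding="utf-8")
print(f"{len(merged)} unique T1 URLs ready to paste into the T2 project.")
```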
  • There's one huge advantage to SER for scraping: you can have it work in passive mode on autopilot. Just add your footprints to the engines.
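
    On the "add your footprints to the engines" point: SER's per-platform engine definitions are plain .ini files that carry the search terms it uses when scraping for itself. The sketch below appends extra footprints as additional search-term lines to a copy of an engine file. The path, the file name, and the exact "search term=" key are assumptions here, so verify them against your own install (the original file is left untouched) before relying on this.

```python
# Rough sketch: append custom footprints to a COPY of a SER engine definition
# so its passive scraping can use them. The engine path and the "search term="
# key are assumptions; verify both against your own installation first.
from pathlib import Path

ENGINE_FILE = Path("C:/GSA/Engines/Article-SomePlatform.ini")       # hypothetical path
MY_FOOTPRINTS = ['"Powered by SomePlatform"', "inurl:someplatform"]  # example footprints

def extend_engine(engine: Path, footprints: list[str]) -> Path:
    text = engine.read_text(encoding="utf-8", errors="ignore")
    extra = "\n".join(f"search term={fp}" for fp in footprints)
    patched = engine.with_name(engine.stem + "_custom.ini")
    patched.write_text(text.rstrip() + "\n" + extra + "\n", encoding="utf-8")
    return patched

if __name__ == "__main__":
    print(f"Wrote {extend_engine(ENGINE_FILE, MY_FOOTPRINTS)}")
```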