I use it only for scraping: extract all the footprints from SER using the tool from Santos, then go to the "Scrape" tab, import those footprints, and insert keywords relevant to your niche (or click "use built-in keywords", located underneath the "footprints" field, to use GScraper's general keywords), then hit "Start Scrape".
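If you're wondering what that scrape actually does under the hood, it boils down to combining every footprint with every keyword into a search query. A minimal Python sketch of that combination step, just as an illustration (the file names and output format here are my own assumptions, not GScraper's):

```python
from itertools import product

# Hypothetical input files: one footprint / keyword per line.
FOOTPRINTS_FILE = "ser_footprints.txt"   # assumption: footprints exported from SER
KEYWORDS_FILE = "my_keywords.txt"        # assumption: your niche keyword list

def load_lines(path):
    """Read a file and return its non-empty, stripped lines."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

footprints = load_lines(FOOTPRINTS_FILE)
keywords = load_lines(KEYWORDS_FILE)

# Every footprint is paired with every keyword to build the search queries
# that get sent to the search engines during the scrape.
with open("queries.txt", "w", encoding="utf-8") as out:
    for footprint, keyword in product(footprints, keywords):
        out.write(f'{footprint} "{keyword}"\n')
```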
I scrape without duplicate removal and create a new file every 900,000 lines, so I end up with a very large amount of scraped URLs. These URL files contain a lot of duplicates, but after this step I use the great SERtools program to remove duplicates from all those scraped files at once.
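If you wanted to replicate that dedupe step yourself instead of using SERtools, a rough Python sketch would look like this (it assumes the set of unique URLs fits in memory and that the scraped files sit in one folder as plain text, one URL per line; paths are placeholders):

```python
import glob

SCRAPED_GLOB = "scraped/*.txt"        # assumption: where the scraped files live
OUTPUT_FILE = "deduped_urls.txt"

seen = set()
with open(OUTPUT_FILE, "w", encoding="utf-8") as out:
    for path in sorted(glob.glob(SCRAPED_GLOB)):
        with open(path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                url = line.strip()
                # Keep each URL only the first time it appears across ALL files.
                if url and url not in seen:
                    seen.add(url)
                    out.write(url + "\n")

print(f"{len(seen)} unique URLs written to {OUTPUT_FILE}")
```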
Why don't I let GScraper remove duplicates during a scrape? Because it slows GScraper down, and SERtools does the same job in much less time.
I used them, but you only get low-PR URLs from them - they only share crap to the masses. If you try my method you get much better results when using SER to comment.
If you want to filter further you can - after duplicate removal - load all URLs back into GScraper, check PR, remove URLs with PR < 1, remove URLs that aren't indexed, etc.
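You could also do that PR filtering outside GScraper. A quick sketch, assuming you've already run a PR check and exported it as a CSV with "url" and "pr" columns (that export format is my assumption, not something GScraper guarantees):

```python
import csv

PR_CSV = "urls_with_pr.csv"        # assumption: CSV with "url" and "pr" columns
OUTPUT_FILE = "urls_pr1_plus.txt"
MIN_PR = 1

kept = 0
with open(PR_CSV, newline="", encoding="utf-8") as src, \
     open(OUTPUT_FILE, "w", encoding="utf-8") as out:
    for row in csv.DictReader(src):
        try:
            pr = int(row["pr"])
        except (KeyError, ValueError):
            continue  # skip rows with a missing or non-numeric PR value
        if pr >= MIN_PR:
            out.write(row["url"].strip() + "\n")
            kept += 1

print(f"Kept {kept} URLs with PR >= {MIN_PR}")
```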
Then take the whole bunch into Scrapebox, extract all external links from all URLs, and run a backlink check in Scrapebox (or Hrefer if you own it) for all of those URLs. You'll find such a mass of targets SER can post to that you couldn't post to all of them even if you had a 1 Tbps line :-)
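For the curious, the external-link extraction step basically means fetching each page and keeping anchors that point to a different domain. A stdlib-only Python sketch of that idea (not Scrapebox's actual implementation; the input/output file names are placeholders):

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def external_links(page_url, timeout=10):
    """Return links on page_url that point to a different domain."""
    req = urllib.request.Request(page_url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    parser = LinkCollector()
    parser.feed(html)
    source_domain = urlparse(page_url).netloc
    links = set()
    for href in parser.hrefs:
        absolute = urljoin(page_url, href)
        domain = urlparse(absolute).netloc
        if domain and domain != source_domain:
            links.add(absolute)
    return links

if __name__ == "__main__":
    # Assumption: one URL per line in the deduped/filtered list.
    with open("deduped_urls.txt", encoding="utf-8") as f, \
         open("external_links.txt", "w", encoding="utf-8") as out:
        for line in f:
            url = line.strip()
            if not url:
                continue
            try:
                for link in external_links(url):
                    out.write(link + "\n")
            except Exception:
                pass  # skip pages that fail to load or parse
```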
Comments
Hey mate, thanks for sharing here.
Anyway, did you use the GScraper AA list?
How about using GScraper to post comments?