Take Scrapebox (or your preferred scraper tool), search for lists of auto-approve blogs using the filetype:txt operator, and copy the results (all the URLs it finds) into a .txt file.
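For example, footprints along these lines will turn up plain-text link lists (these are just made-up examples, tweak them however you like):

```
"auto approve" blog list filetype:txt
"auto approve blogs" filetype:txt
intitle:"auto approve" list filetype:txt
```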
Then take SER, import this .txt file using the "ability to import URLs holding site lists" option, and it will open every single URL from your file, scan its content (all the URLs on the page) and do an "identify and sort in" action, so your "identified" list gets really big.
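If it helps to picture it, here is a rough Python sketch of what that import step boils down to (not SER's actual code, and the file names are just examples): fetch each URL from your file, pull every URL out of the page text, and dedupe the result.

```python
import re
import requests

# very loose URL pattern, good enough for grabbing links out of a plain list page
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def harvest(list_file, out_file):
    """Fetch every list URL in list_file, pull all URLs out of each page,
    and write the deduplicated result to out_file."""
    found = set()
    with open(list_file) as fh:
        sources = [line.strip() for line in fh if line.strip()]
    for url in sources:
        try:
            text = requests.get(url, timeout=15).text
        except requests.RequestException:
            continue  # skip dead or unreachable lists
        found.update(URL_RE.findall(text))
    with open(out_file, "w") as out:
        out.write("\n".join(sorted(found)))

# file names are just examples
harvest("aa_list_sources.txt", "harvested_urls.txt")
```

On top of that, SER of course also identifies the platform of each harvested URL and sorts it into the right engine, which the sketch above doesn't do.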
That's it. Hope it's clear now. I don't know if it can identify lists in formats other than .txt, but I think Sven could clear up this question...
@magix: I'm not sure I understand. With "Import URLs (identify platforms + sort in)", doesn't GSA do the same thing, i.e. visit the URLs, identify the platform and sort them in? I also imported those via a text file.
I'm obviously missing something, because I don't know what you mean by the Scrapebox filetype:txt thingy. Any link to a tutorial on using SB to find lists of auto-approve blogs? Would really appreciate some pointers, thanks.
Search blackhatworld dot com - you will find many, many SB tutorials there.
If you have a list of sites that hold auto-approve lists (for example, a list of 5 URLs, and on every URL there is a huge list of AA URLs), then you can import that list (of 5 URLs) into SER, and SER will go to those 5 URLs, scan all the AA links on those pages, and import and sort them in.
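So the file you import into SER is nothing more than those 5 URLs, one per line, something like this (made-up example addresses):

```
http://example.com/aa-list-1.txt
http://example.org/autoapprove-blogs.txt
http://lists.example.net/aa-urls.txt
http://example.edu/scraped/aa_list.txt
http://blog.example.com/auto-approve.txt
```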
@johnmiller Yes, SER will do that if you just import the URLs directly to a project. Importing them to identify is an extra step for most people. I import my URLs directly to a project and let SER do its thing.