Any chance of a Blacklist removal?
shaun
https://www.youtube.com/ShaunMarrs
@Sven
This is my current process.
Scrape (Scrapebox) - Identify (GSA PI) - Verify (GSA SER) - Live link building projects (GSA SER)
A massive number of duplicate target URLs make it through the system each time and end up being re-run through verification, wasting a fair amount of system resources that could be used elsewhere.
Is there a way you could add a folder where the user puts all the scraped URLs from past scrapes? Then, when a new project is made in PI for a new scrape, it would scan this folder first and remove all URLs that have been run in the past before identifying the rest. Another way would be to run the new project and, at the end, compare the list it has just produced against the old scrape folder and remove all the duplicates. Probably the easiest option is some kind of mini feature where the tool compares two folders and removes every entry from folder B that appears in folder A, leaving folder B with fresh targets only, ready to be identified.
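As a rough illustration of what I mean (just a sketch, assuming plain-text lists with one URL per line; the folder and file names are made up):

```python
# Sketch of the "compare folder A to folder B" idea:
# remove from the new scrape (folder B) every URL already
# seen in past scrapes (folder A).
from pathlib import Path

def load_urls(folder):
    """Read every .txt file in a folder, one URL per line."""
    urls = set()
    for path in Path(folder).glob("*.txt"):
        with open(path, encoding="utf-8", errors="ignore") as f:
            urls.update(line.strip() for line in f if line.strip())
    return urls

old_urls = load_urls("old_scrapes")   # folder A: already processed
new_urls = load_urls("new_scrape")    # folder B: fresh scrape

fresh_targets = new_urls - old_urls   # set difference = unseen URLs only

with open("fresh_targets.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sorted(fresh_targets)))
```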
It's late here and I'm crazy tired, so if I'm not making myself clear just let me know and I will try to explain it some other way.
Cheers
Shaun.
Comments
Import URL List -> Select the URL lists to compare (or on domain level)
It can help you remove the URLs (or domains) you have already processed.
Do note that many URLs will fail due to incorrect captchas or the site being temporarily unavailable. If they get into your blacklist, you'll never give them another chance to get verified again...
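For the domain-level option, the comparison boils down to matching on the host part of each URL rather than the full URL. A minimal sketch (illustrative only, not the tool's actual code; the domains below are made up):

```python
# Compare on domain level: drop a URL if its host
# has already been processed before.
from urllib.parse import urlparse

def domain_of(url):
    """Return the lowercase host part of a URL, e.g. 'example.com'."""
    return urlparse(url).netloc.lower()

processed = {"example.com", "forum.example.org"}  # hypothetical already-done domains

candidates = [
    "https://example.com/blog/post-1",
    "https://new-site.net/guestbook",
]

fresh = [u for u in candidates if domain_of(u) not in processed]
print(fresh)  # only URLs whose domain has not been seen before
```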