I have searched the forum but have not been able to find the answers I need.
When scraping new sites:
1. If we have the bandwidth and processing power, can the scraper be run at the same time as GSA submissions?
2. When scraping, assume we are scraping for Pligg sites. When GSA finds a potential site, it then tries to submit to it, correct?
3. If the submission was successful, where does GSA store the site data? Not the verified link, but the Pligg site itself?
4. Does GSA automatically remove duplicate sites it finds, and if not, what happens to duplicates?