remove duplicates: URLs or domains?
A few questions for the pro SER users here:
Assuming you scrape your URLs with an external tool (e.g. SB) and import them into SER: do you first remove only duplicate URLs, or do you go straight for "remove duplicate domains"?
Also, after having posted to a list a couple of times, do you use SER's built-in list cleaning features (remove dupes, clean list)?
Lastly, which lists do SER's built-in list cleaning features actually work on? I have 4 site lists (identified, successful, verified, failed), and after running the "remove duplicate URLs" feature, for example, I see in the stats that only the "successful" list changed; all the other lists remain the same. Is it supposed to be like that?
Comments
For global list cleaning, you can remove duplicate domains (apparently it keeps multiple URLs per domain for things like blog comments; maybe @sven can confirm), or, if you want to keep as big a list as possible, just remove duplicate URLs.
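To make the trade-off concrete, here's a minimal sketch (not SER's actual code, just an illustration with made-up URLs) of what the two dedup modes do to a scraped list:

```python
# Illustrates "remove duplicate URLs" vs "remove duplicate domains"
# on a small hypothetical scraped list.
from urllib.parse import urlparse

urls = [
    "http://example.com/blog/post-1",
    "http://example.com/blog/post-2",
    "http://example.com/blog/post-1",   # exact duplicate URL
    "http://other-site.net/guestbook",
]

# Remove duplicate URLs: only exact repeats go, so both
# blog-comment pages on example.com survive.
unique_urls = list(dict.fromkeys(urls))

# Remove duplicate domains: keep only the first URL seen per host,
# which shrinks the list but drops extra targets on the same site.
seen = set()
unique_domains = []
for u in urls:
    host = urlparse(u).netloc
    if host not in seen:
        seen.add(host)
        unique_domains.append(u)

print(len(unique_urls))     # 3 entries left
print(len(unique_domains))  # 2 entries left
```

So dedup-by-domain gives a leaner list at the cost of losing multiple postable pages per site, which is why platforms like blog comments argue for dedup-by-URL instead.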
I thought it cleaned all lists. It definitely cleans verified, but my understanding was that it also cleans identified, successful, etc.