I'm new to SER and have a few questions that are confusing me:
1.) What's the best practice for scraping?
I own a gscraper license and scraped a lot of URLs with the footprints extracted with the tool someone posted here. Scraping with gscraper is very fast, and I ended up with a really big list (because of many, many keywords). But what's the next step? When I go to Options -> Advanced -> Tools -> Import URLs (identify and sort in), it takes hours to process the lists I scraped. Is this normal? Or is there a faster way of importing and identifying links in SER? In the meantime I put together a small script to at least dedupe the list before importing it, shown below.
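This is just a sketch (the filenames are placeholders, and it assumes one URL per line in the input); the idea is simply that a deduped list gives SER less to identify:

```python
# Minimal sketch: deduplicate a scraped URL list before importing it
# into SER, so the identify-and-sort-in step has less work to do.
# "scraped_urls.txt" / "deduped_urls.txt" are placeholder filenames.
from urllib.parse import urlsplit

seen = set()
with open("scraped_urls.txt", encoding="utf-8", errors="ignore") as src, \
     open("deduped_urls.txt", "w", encoding="utf-8") as dst:
    for line in src:
        url = line.strip()
        if not url:
            continue
        # Normalize scheme/host casing so http://Example.com/x and
        # http://example.com/x count as the same entry.
        parts = urlsplit(url)
        key = (parts.scheme.lower(), parts.netloc.lower(), parts.path, parts.query)
        if key not in seen:
            seen.add(key)
            dst.write(url + "\n")
```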
2.) When you start a new project, how many email addresses do you use per project?
Why do I ask? Because I imported 350 email addresses into one project, and the verification process takes very, very long. Is there a way to just process my list (posting to articles, bookmark sites, etc.) without verifying all the email addresses every xx minutes? I want to just post, and start the verify process manually later on. Is this possible? As a workaround I was thinking of pre-checking the accounts myself before importing; see the sketch below.
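This is only my assumption of how such a pre-check could look: it expects a placeholder file of email:password lines, naively guesses the POP3 host by prefixing "pop." to the domain, and assumes the provider allows POP3 over SSL on port 995 - adjust all of that for your accounts:

```python
# Sketch: check which email accounts actually accept a POP3 login
# before importing them into SER, so dead accounts don't slow down
# the project's verification cycle. "emails.txt" (email:password per
# line) and the "pop." host guess are assumptions, not a standard.
import poplib

def pop3_login_ok(user, password, host, port=995, timeout=10):
    try:
        conn = poplib.POP3_SSL(host, port, timeout=timeout)
        conn.user(user)
        conn.pass_(password)
        conn.quit()
        return True
    except Exception:
        return False

with open("emails.txt", encoding="utf-8") as src:
    for line in src:
        line = line.strip()
        if not line or ":" not in line:
            continue
        user, password = line.split(":", 1)
        host = "pop." + user.split("@", 1)[1]  # naive guess at the POP3 host
        print("OK  " if pop3_login_ok(user, password, host) else "DEAD", user)
```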
3.) "Search online for sitelists" - what is that for?
I tried it as described in a video, but I'm still wondering how it works. It only finds pages on pastebin. Can the scraper extract the URLs on those pages to find new link targets? Because the site: command only returns pages inside the pastebin domain. How does it work? To test my understanding, I tried pulling the URLs out of a paste myself; see below.
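This is just my guess at the idea (the paste URL below is a placeholder, not a real sitelist): the search only returns pastebin pages, but the pastes themselves contain lists of link targets, so you fetch each paste and keep every URL that points outside pastebin itself:

```python
# Sketch: fetch a paste page and extract the external URLs it lists,
# which would be the actual link targets. The paste URL is a
# placeholder.
import re
import urllib.parse
import urllib.request

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(page_url):
    req = urllib.request.Request(page_url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=15) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    # Keep only URLs pointing outside the page's own domain.
    own_host = urllib.parse.urlsplit(page_url).netloc
    return {u for u in URL_RE.findall(html)
            if urllib.parse.urlsplit(u).netloc != own_host}

for target in sorted(extract_urls("https://pastebin.com/raw/XXXXXXXX")):
    print(target)
```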
4.) Cleaning up and removing duplicates
Does this process clean all the URLs in the global lists (identified, success and verified)? Is it the same as copying all the URLs into Scrapebox/gscraper, removing duplicates and scanning the remaining URLs for my link? Or is it just an alive test combined with duplicate removal? Below is what I imagine the latter would look like.
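This is only my assumption of that workflow, done outside SER - not how SER's own cleanup necessarily works internally (filenames are placeholders):

```python
# Sketch: "duplicate removal + alive test" done by hand -- drop
# duplicate URLs, then keep only the ones that still answer an HTTP
# request. "global_list.txt" / "cleaned_list.txt" are placeholders.
import urllib.request

def is_alive(url, timeout=10):
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        # Covers HTTP errors (4xx/5xx), timeouts and dead hosts alike.
        return False

with open("global_list.txt", encoding="utf-8", errors="ignore") as src:
    unique = {line.strip() for line in src if line.strip()}

with open("cleaned_list.txt", "w", encoding="utf-8") as dst:
    for url in sorted(unique):
        if is_alive(url):
            dst.write(url + "\n")
```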
Many thanks for your help...