How deep does GSA spider a site looking for a postable URL?
googlealchemist
Anywhere I want
I'm trying to figure out the most efficient way to add newly scraped potential link targets to GSA.
If I add a single random inner URL that doesn't itself offer a way to place a link...
Or if I add a big list of sites that I trimmed to the root and deduplicated by domain in Scrapebox, and just upload each site's root homepage to GSA...
Does GSA crawl inner pages from either starting point (the inner page or the root), looking for the right page to get a link from, or do I need to extract all the links from each site and upload them all?
I plan on running all my scrapes through the Platform Identifier first, so maybe that is already sorting this out for me?
I guess most sites will have the registration/login links on every page for the most important contextual-style link opportunities. Do I need to adjust anything for potential comment/guestbook-type pages/links?
Thanks
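Not GSA-specific, but the Scrapebox preprocessing step described above (trim each URL to its root, then remove duplicate domains) can be sketched in Python. `dedupe_to_roots` is a hypothetical helper, not part of any GSA or Scrapebox API; note it treats subdomains as separate sites:

```python
from urllib.parse import urlparse

def dedupe_to_roots(urls):
    """Trim each URL to its root homepage and keep one URL per host,
    roughly equivalent to Scrapebox's 'trim to root' plus
    'remove duplicate domains' steps."""
    seen = set()
    roots = []
    for url in urls:
        parsed = urlparse(url)
        # Normalize: lowercase the host and strip a leading "www."
        # so www.example.com and example.com count as one domain.
        host = parsed.netloc.lower()
        domain = host[4:] if host.startswith("www.") else host
        if domain and domain not in seen:
            seen.add(domain)
            roots.append(f"{parsed.scheme}://{host}/")
    return roots

urls = [
    "https://www.example.com/blog/post-1",
    "http://example.com/contact",
    "https://forum.other-site.org/thread/42",
]
print(dedupe_to_roots(urls))
# → ['https://www.example.com/', 'https://forum.other-site.org/']
```

The result is the kind of one-root-URL-per-domain list the question is asking about feeding into GSA.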
Comments
Are you referring to the project option "try to locate new url on 'no engine match' (useful for some engines)"?
I couldn't find any global option for this; do I need to tick that in each project?
Thanks