How to work with scraped/verified URL list in GSA SER?
There is something about this procedure that I just don't get, and I'm hoping someone here can help. I've got a list (from a reliable source) of scraped, verified URLs — a mixture of site types where I can add a link; i.e., article sites (articleMS, article beach, article friendly, etc.), blogs (blogspot, blogengine, wordpress, etc.), directories (indexu, myengines, otwarts mini, phplinks, etc.), forums (aska bbs, burning board, ipboard, minibb, etc.), guest books (various), social bookmarks (various) — you get the idea.
I think I know how to get them into GSA SER (advanced, import), but then what? Do I import them one at a time, or just import them ALL and let GSA SER figure it out? And how do I tell GSA SER NOT to look in the search engines, but to use only the imported URLs?
Some help and direction here would be greatly appreciated.
Comments
To import your list, simply import the URLs into a project. SER knows what to do with them — it will match each URL against its engine definitions automatically.
You can leave each URL as it is and SER should know what to do with it. If you trim to root or to a folder, you might end up with content that isn't part of the engine itself, in cases where the engine was installed in a subfolder.