Well, add e.g. "contact" in the filters for "URL must have" and also in "anchor texts". Then let it parse sublinks to at least 1 level deep and restrict things to the domain. That should basically be it.
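If it helps to see those settings as plain logic, here's a rough Python sketch of the same idea (not the tool's actual code; the function name and example URL are just for illustration):

```python
# Rough sketch of the settings described above: fetch the start page,
# look at its sublinks (one level deep), stay on the same domain, and
# keep any link whose URL or anchor text contains "contact".
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

KEYWORD = "contact"  # the "URL must have" / "anchor texts" filter value

def find_contact_links(start_url):
    domain = urlparse(start_url).netloc
    html = requests.get(start_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    hits = set()
    for a in soup.find_all("a", href=True):
        url = urljoin(start_url, a["href"])
        if urlparse(url).netloc != domain:
            continue  # restrict to the domain
        if KEYWORD in url.lower() or KEYWORD in a.get_text().lower():
            hits.add(url)
    return hits

for url in sorted(find_contact_links("https://example.com/")):
    print(url)
```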
I tried this but I'm getting no results. Could you do a video on this maybe, if possible? I am sure I am doing it wrong. I want to scrape contact forms also.
So then those are the contact URLs it found, and it's working? Unless you want to do something further, like parse something else and/or send the results somewhere, I don't think you will see more, as you're just scraping for contact forms here. I think maybe the confusion is that it's doing all the steps in one project.
Maybe you could add a step to auto-export to a file to be used elsewhere?
Or am I missing something?
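In the meantime, something as simple as this would cover the export part (a rough sketch; the file name and function are made up):

```python
# Hypothetical export step: write the found contact URLs to a CSV so
# another tool can pick them up later.
import csv

def export_urls(urls, path="contact_urls.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["url"])
        for url in sorted(urls):
            writer.writerow([url])

# e.g. feeding in the results from the sketch further up the thread:
# export_urls(find_contact_links("https://example.com/"))
```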
The template puts them in the results now; the template parses the page title, which is probably what I was missing.
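If it's useful to anyone, that title-parsing step by itself boils down to something like this (a guess at the idea, not the template's actual code):

```python
from bs4 import BeautifulSoup

def page_title(html):
    # Grab the <title> tag, roughly what ends up as a result row.
    soup = BeautifulSoup(html, "html.parser")
    return soup.title.get_text(strip=True) if soup.title else ""
```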
Not right now, as it is a very new product and the GUI changes here and there. However, the video below shows nearly everything you need to know (thanks to @s4nt0s for doing this).
Comments
Thanks for the help by the way
I copied all the settings above, with no parser set up.
I can see the scraped pages in the queue, and then when I parse I can see some contact form URLs in the queue, but nothing goes into the Results tab.