My optimised engines.
Just spent the past few hours optimising the footprints in my first batch of engine files. I've learnt so much from you guys on this forum, and I always give back to a community that's helpful... so here's my offering.
Recently I've found the search engines in SER not finding many new sites, with many search calls returning nothing in the results.
I have GScraper, which quickly checks the Google results for the search terms used in the engines.
I've also got GScraper to scrape all my "identified" files in SER to find new footprints, and checked those as well.
I combined both, then checked how many Google results each search term gets and removed anything with fewer than 1 million results.
I was surprised to find many default search strings in the current engines returning zero results, or only a few hundred, which just wastes time and resources.
I've also enabled the engines to grab a keyword from your keyword lists and add it to the search string.
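For anyone who wants to see what that looks like inside an engine file, here's a rough sketch. The key name and macro are how I understand the SER engine format (search term= plus the %keyword% macro), and the footprint is only an example, so compare against a stock engine file before copying anything:

    [setup]
    ; %keyword% pulls a keyword from your project's keyword list
    ; and appends it to the query, so each search varies
    search term="powered by Article Dashboard" %keyword%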
This will create a "My Profile" section in your campaigns without overwriting any existing file. The files also won't be overwritten when SER updates. :)
This is my first time editing these files, guys, so test them out yourselves, play nice, and give helpful advice if you can.
Zipped file included:
Article - Article Dashboard
Article - Article Script
Article - BuddyPress
Article - Drupal - Blog
Article - Joomla - Blog
Article - php Link Article
Articles - Moodle
Wiki - MediaWiki
Just extract the zip file into your /ser/engines/ folder. It won't overwrite anything.
Thanks
Comments
@Zeusy - Can you explain this part you wrote:
"This will create a My Profile section in your campaigns without overwriting any existing file. Also when updating, they will not be overwritten"
Thanks.
Oh, and Ron, thanks for your extremely helpful posts sharing info with this community; I've learnt a lot from you. ;)
Regarding your question:
Just download the rar and extract it into your install folder, /ser/engines/.
It will create NEW files in your engines folder, so it will NOT overwrite the existing engine .ini files in your folder.
Open a campaign and you will now see (on the left-hand side) a new category called "My Profile", so you can select them to be used.
Doing it this way also means that when a new update of SER is installed, it won't overwrite/update my files.
Hope that explains it OK?
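PS for those curious how the new category shows up: as far as I can tell, SER groups engines in the campaign window by a value inside each .ini file. Assuming engine type= is the key that controls that grouping (this is just how I read the format, so check a stock engine file to confirm), my files carry something like:

    [setup]
    ; a value SER doesn't already use shows up as its own
    ; category in the campaign's engine list
    engine type=My Profile

And because the filenames are new, an SER update never touches them.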
Yes it does. I couldn't figure out why you kept saying it wouldn't overwrite, and that part about the profiles. Now I know what you did. Thanks!
Also, on the keywords: I find a single word gets you more results.
I have a question: I am trying to find better footprints as well. For example, for Ning, I don't want all Ning sites, just those where it is possible to post a blog.
So I would like to try this, for example: "ugg inurl:/profiles/blogs/". Google shows 53,300 results. My question: is there any way to scrape all these 53k sites, with ScrapeBox or GScraper or whatever? Is the way to do it to enter some huge common keyword list plus this footprint and scrape for a very long time?
Thanks.
@kaykay, how many Web 2.0s can you post to with the "web 2.0 fixed" section, and do you use a paid captcha service with them?
@Zeusy, I finally had a chance to run this set of engines solo. I won't know the verifications for a while, but I was able to rattle off a quick 500 subs in a few minutes. I can tell they are working much more efficiently.
I will report back...
ron - Glad you see improvements; I do also, and I'm sure the others who try them will too. FYI, I haven't optimised the signup/registration process in the files, I just optimised the scraping of the search engines.
1. Open GScraper.
2. Go to the scrape tab.
3. Paste "ugg inurl:/profiles/blogs/" into the footprints section.
4. Import your keywords into the keyword section.
5. Click start scrape.
It's that easy... or am I misunderstanding what you want to do?
I've just done an optimised Ning engine .ini file. Will upload it when I finish a few more... any requests?
I think I will create my own footprints. I don't want to get ALL possible sites of some platform, just all those where it is possible to place my post - this mainly concerns contextual platforms. An example was Ning above: there are dozens of other Ning sites where only a profile link could be created and no blog post is possible. I don't want them; they would just consume resources/captchas etc. The same goes for other contextual platforms...
Blog Comment - KeywordLuv
Blog Comment - Blogspot
Article - UCenter
Wiki - TikiWiki
Article - Press Release Script
Article - PHPMotion
Social Bookmarks - Pligg
Social Bookmark - NING
File is Here
For anyone wondering how I'm making these, my process is:
1. Copy out the SER footprints.
2. Search online for all known footprints for whichever engine I'm working on.
3. Get GScraper to find more footprints in my engine's "identified" file. (This is limited, as I've only had SER for about 10 days, so my identified file isn't that big yet; better results would be found if the file had more URLs in it.)
4. Combine them.
5. Put them into GScraper and see how many results each search string returns.
6. Remove any with less than, say, 500k results, some with less than 1 million, depending on quantity.
7. Remove any which are too general.
8. Put each one, one at a time, into my profile engine file and test it in SER for web URLs found and whether the correct engine is detected.
9. Remove any which don't find the correct engines.
10. When all are complete, put them all into the new engine file and ensure the engine is told to select a keyword when searching (see the sketch after this list).
11. Test again... then upload for you guys.
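To make step 10 concrete: the finished file is just the vetted footprints sitting in the engine's search line with the keyword macro on the end. Same caveats as before - the key names are how I read the format, and the footprint here is only a placeholder:

    [setup]
    engine type=My Profile
    ; a vetted footprint plus %keyword%, so SER rotates a keyword
    ; into every query (step 10); the stock engines chain several
    ; footprints on this line - copy the separator style from one
    ; of them rather than guessing
    search term="powered by Article Dashboard" %keyword%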
Or if you can find a way to output the proxies GScraper gets from its server... using those in SER would be fantastic.
n00b alert: so after extracting into the engines folder, I'll tick "My Profile" and untick the corresponding engines from GSA? Thanks again.