SERnuke Custom GSA SER Engines
cherub
SERnuke.com
Introducing SERnuke
Custom Private Engines for GSA Search Engine Ranker
We can now offer several packages of engines that enable SER to submit to a variety of new platforms. Working via an API key, SER will be able to download engines and their updates automatically.
Expand your range of link building targets with engines covering several link types. Each licence is a one-off cost, and includes lifetime updates.
Our engines are aimed at experienced SER users. If you're not already having success building links via the software, it's probably best to concentrate on your working knowledge of the existing engines and practices before looking at ours.
How it works
Register for an account on our site
Find a package of engines that you are interested in
Purchasing the package will give you access to an API key
Add this API key to the APIs section of your copy of SER
Start using your new engines!
Purchasing additional licences will allow you to use the engines on multiple copies of SER at the same time
What you will need
A copy of GSA SER
A decent set of proxies
A captcha solving service or app
Email accounts
A good working knowledge of SER usage
The ability to scrape your own lists
Our engine packs are limited to 100 sales each. This is to try to ensure the domains they can submit to don't get oversaturated with links. We've all seen what can happen to new platforms added to SER by default.
Comments
That was without additional scraping in Scrapebox. I just did a basic run to test things out, and I was very happy with the results.
First impressions - really good engine and looks like there will be a lot of targets. The included footprints work really well with scrapebox. Not done a full run - just tested each footprint with a small group of keywords to see if the scraped links worked.
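The footprint testing described above - pairing each footprint with a small group of keywords to build search queries - can be sketched like this. The footprints and keywords below are hypothetical placeholders; substitute the ones shipped with the engine pack and your own keyword list.

```python
from itertools import product

# Hypothetical examples - replace with the footprints from the
# engine documentation and your own keywords.
footprints = ['"Powered by WoWonder"', 'inurl:"/explore/repos"']
keywords = ["gardening", "fitness", "travel"]

# Each query pairs one footprint with one keyword, the same way
# Scrapebox merges a footprint file with a keyword file.
queries = [f"{fp} {kw}" for fp, kw in product(footprints, keywords)]

for q in queries:
    print(q)
```

Feeding the resulting query list into your scraper gives you a quick sample of how many live targets each footprint actually surfaces before committing to a full run.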
Mixed do follow and no follow, so far about 25% are do follow, but numbers will depend on what you can scrape.
The highest DA so far is DA37, but the vast majority of the links I've scraped are DA1 - DA15. But these values will go up as people start building tiered links to the sites.
@cherub Glad you got this service off the ground. Looking forward to the next set of engines.
I bought the other Git-alikes engine too. Also seems like a very good one to go for. Mixed do follow/no follow links, with a combination of contextual links and profile links. Similar site metrics to the WoWonder engine: found a few DA35 and DA37 sites, but most I've scraped so far are DA1 - DA15.
All engines combined, I've got 1700+ new sites to drop links on.
The Gitea engine stands out and there seem to be a ton of them out there. It makes both profile and contextual links. The profile link is do follow but the post url is no follow. Total unique domains now stands at 3156 sites!
I only added the Git Alike and WoWonder engines 2-3 days ago and still need to scrape target URLs. In the meantime, I have expanded the footprints for these engines in Footprint Studio.
The support from @cherub has been good. When I had questions about the engines, he helped me quickly.
Only time will tell how well the engines are maintained, but @cherub has been doing a great job so far.
The CPU load from the SERNUKE engines is really good, unlike the other 2 paid web 2.0 services (which I would rather not name), which spawn many Chromedriver browser processes that at times crash GSA SER. With SERNUKE I have no issues at all with the load from these engines.
Below are my results so far. Remember that the new engines were only added about two days ago.
I observed that Xevil is showing a lot of "Bad site key | incorrect parameters" errors for the Gitlab and newer engines, so I wondered what Recaptcha solving service you guys have been using. Maybe @Sven can see if the recaptcha handling can be fine-tuned.
Now, all that remains for me to do is scrape targets for the new engines, and then we will see the performance after running a full month.
I am also including the SERNUKE target URLs in the Asia Virtual Solutions GSA Site list so SERNUKE users can use the target URLs I am scraping.
Looking forward to the new engines to be released by @cherub
For existing API owners, SER should download the new engine during its periodic update checks; alternatively you can force a download of the engines via Options > Advanced > APIs > the corresponding Update button. You may also wish to redownload the documentation PDF from your dashboard section to access footprints and other target URL sources.
Ok, can I pay with PayPal?
Package #3 has been released!
Named the Employment Package, it's based on platforms offering job search portals and consists of 4 engines, each posting contextual links in articles.
Check it out here!
Lots of good footprints included in the Employment Package, and I've only used 3 footprints from each engine so far to scrape with, so plenty more sites to be found on the latest package.
Those are the site numbers I've got for all 3 packages. Many thousands of new sites - over 10K lol!
Many thanks to @cherub. It's really brought the software back to life with tons of new targets. These should last a long time as well, as access to these sites is pretty exclusive.
I'm just using them as T1 to boost my referring domains on projects. Personally I wouldn't use them on T2 as the sites would get spammed too quickly and reduce their shelf life.