As part of the beta group I mostly used the Decidem engine, and the results were satisfactory. I only added the Git Alike and WoWonder engines 2-3 days ago and still need to scrape target URLs. In the meantime, I have expanded the footprints for these engines in Footprint Studio.
The support from @cherub has been good. When I had questions about the engines, he helped me quickly. Only time will tell how well the engines are maintained, but @cherub has been doing a great job so far.
The CPU load from the SERNUKE engines is really good, unlike the other two paid web 2.0 services (which I would rather not name), which spawn many Chromedriver browser processes that, at times, crash GSA SER. With SERNUKE I have no issues at all with the load from these engines.
Below are my results so far. Remember that the new engines were only added about two days ago. I noticed that XEvil is returning a lot of "Bad site key | incorrect parameters" errors for the GitLab and newer engines, so I wondered what reCAPTCHA solving you guys have been using. Maybe @Sven can see if the reCAPTCHA handling can be fine-tuned.
Now, all that remains for me to do is scrape targets for the new engines, and then we will see the performance after running a full month. I am also including the SERNUKE target URLs in the Asia Virtual Solutions GSA Site list so SERNUKE users can use the target URLs I am scraping.
Looking forward to the new engines still to be released by @cherub.
Hello, is this service still active, and does it include SERnuke links?
If you mean the list service from Asia Virtual Solutions, I'm not actually sure. There are a few SER list services offering SERnuke targets that I know of, but only one seems to advertise the fact.
Hmm, as SERnuke is an API-based service rather than an SEO service, I don't have any sample ranking reports for parasites, as the end result is down to the end user. The best I can offer is sample reports from each package in standard SER output format.
I've been using these engines since they were released, and I can tell you that you will see good results from using them as T1 and T2. Links get indexed quickly, and the backlink pages rank as well.
Just make sure you use good content with them, such as ChatGPT 4.0 output with lots of HTML structure. Don't use plain text for your content, although the wiki engines don't seem to support HTML, so they are the exception.
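For anyone unsure what "HTML structure" means in practice, here's a minimal sketch; the title, headings, and body text are all placeholders, so swap in your own generated content:

```python
# Minimal sketch: wrap plain article sections in basic HTML structure.
# All titles and body text below are placeholders for your own content.

def to_html_article(title, sections):
    """Build a simple HTML article from (heading, paragraph) pairs."""
    parts = [f"<h1>{title}</h1>"]
    for heading, body in sections:
        parts.append(f"<h2>{heading}</h2>")
        parts.append(f"<p>{body}</p>")
    return "\n".join(parts)

article = to_html_article(
    "Placeholder Title",
    [
        ("First Subtopic", "First paragraph of generated content."),
        ("Second Subtopic", "Second paragraph of generated content."),
    ],
)
print(article)
```

For the wiki engines, just submit the unwrapped paragraphs instead.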
I'd also recommend scraping Google for your own target sites - there are a lot of them out there.
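If you do scrape yourself, most of the grunt work is just pairing the supplied footprints with your own keywords. A minimal sketch, assuming your footprints and keywords sit in plain text files, one per line (the file names here are made up):

```python
# Sketch: expand engine footprints into scrape queries by pairing each
# footprint with your own keywords. The file names are assumptions -
# use whatever your footprint export and keyword list are called.

from itertools import product

with open("footprints.txt", encoding="utf-8") as f:
    footprints = [line.strip() for line in f if line.strip()]

with open("keywords.txt", encoding="utf-8") as f:
    keywords = [line.strip() for line in f if line.strip()]

# One query per footprint/keyword pair, ready for your scraper of choice.
with open("queries.txt", "w", encoding="utf-8") as out:
    for footprint, keyword in product(footprints, keywords):
        out.write(f'{footprint} "{keyword}"\n')
```

Feed the resulting queries.txt into whichever scraper you normally use.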
SERnuke Engine Pack #5 should be out during the last week of May 2025. It will consist of four engines: two posting articles and two posting profile-type links.
Named the Real Estate Package, it's based on platforms offering real estate search and lettings functionality, similar to Airbnb and other property-selling sites. It consists of four engines: two posting contextual links and two posting profile-style links.
I bought your WoWonder package, scraped thousands of targets, and started posting. After a day I got about 500 backlinks, but 97% of them are nofollow. Which of your packages has the potential to get the most dofollow links for tier 1 contextuals?
Unfortunately, it is not possible to know whether a link will be dofollow until it is posted and live, so the only data I can really give you comes from the test projects I run for each package: the percentage of dofollow links versus nofollow, based on SER's reports. Data below:
This is my current breakdown of sites from one of my projects - unique domains:
In terms of sites, Gitea, Gogs, and GitLab stand out - but these sites post both profile and contextual links, so you'd have to halve the above link numbers.
They are the older engines, and I've been scraping and testing them for many more months than the two newer engine packs.
The Real Estate and Jobs packages are very good for dofollow contextuals, which are the best for T1 as they produce dofollow keyword anchors.
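Since dofollow status can only be confirmed once a link is live, one way to measure it yourself is to fetch each verified URL and check the rel attribute on links pointing at your domain. A rough sketch, assuming a plain-text list of verified URLs (the file name and domain are placeholders, and a real run would want proxies and politeness delays):

```python
# Sketch: estimate the dofollow share of a verified-links list by
# fetching each page and checking the rel attribute on links that
# point at your domain. File name and domain are assumptions.

import requests
from bs4 import BeautifulSoup

MY_DOMAIN = "example.com"  # your money site

dofollow = nofollow = 0
with open("verified_urls.txt", encoding="utf-8") as f:
    for url in (line.strip() for line in f if line.strip()):
        try:
            html = requests.get(url, timeout=15).text
        except requests.RequestException:
            continue  # dead or unreachable page - skip it
        soup = BeautifulSoup(html, "html.parser")
        for a in soup.find_all("a", href=True):
            if MY_DOMAIN in a["href"]:
                rel = a.get("rel") or []  # bs4 returns rel as a list
                if "nofollow" in rel:
                    nofollow += 1
                else:
                    dofollow += 1

total = dofollow + nofollow
if total:
    print(f"dofollow: {dofollow}/{total} ({100 * dofollow / total:.1f}%)")
```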
Thanks for the replies, guys! I also can't give numbers of potential targets for each package, as that would be an impossible task; it comes down to how many the end user can scrape/source.
My metric for determining whether a package is fit for sale is as follows: I scrape Google for each of the footprints supplied with the package - just the footprints, no additional keywords whatsoever. If I can get at least 400 verified links from that simple scrape, I deem the package viable.
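That viability check is easy to reproduce against your own list. A minimal sketch, assuming the verified URLs are exported one per line (the file name is an assumption; the 400 threshold comes from the post above):

```python
# Sketch of the viability check described above: count verified links
# (and unique domains, as a bonus) from a plain-text URL export.
# The file name is an assumption; the 400 threshold is from the post.

from urllib.parse import urlparse

THRESHOLD = 400

with open("verified_urls.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

domains = {urlparse(u).netloc for u in urls}
print(f"{len(urls)} verified links across {len(domains)} unique domains")
print("viable" if len(urls) >= THRESHOLD else "not viable")
```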
Comments
Can I use SERnuke for parasite SEO?
For easy keywords, can I expect good results with the SERnuke lists?
Package #5 has been released!
Check it out here!
He listed four of them.