Some types of general blog - comment sites NOT working but shown as instantly verified

There are general blog - comment sites recognized by SER that are either
configured as honeypots = spam traps
or simply mis-configured on the server side and useless for submission.

The example URLs below are typically recognized as instantly verified,
yet
NO submission and NO link can be found on those URLs.

Important features of those "general blog - comment" URLs are:

the comment form always exists on ALL pages - but NO comments and NO post to comment on are found = empty page except for the comment form
the URLs typically are

multiple sub.sub.sub.domains
and / or
deep inside sub/sub/sub/sub-folders

sub-domains AND sub-folders in the URL typically consist of random letters = NO names that make any sense

changing folder names
OR
changing sub.sub.domain names gives NO 404 but another valid page with a comment field


= you may try the FIRST URL below and modify any part of the sub-domain OR sub-folder
and you will always find a valid empty page with a comment field (see the probe sketch after the example URLs)

general blog - comment
example URLs:

http://dpsqvu.umna.oknsytx.tcsq.qypvthu.loqu.forum.mythem.es/aabq/homograph/diable/isocephalous/gnomemummy/verschaerftest

http://cilindrischer.zuffolin.www.0l.ro/endarbei/pondrion/eichou/additionner/tidligt/fondavamo/oversparingly

http://solarlamp.bloq.ro/2013/02/20/hello-world/letimoa/derrubio/miscellanea
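
As an illustration of that footprint, here is a minimal probe sketch: it replaces one path segment with random letters and checks whether the modified URL still answers with a valid page containing a comment form instead of a 404. The function name, the random segment length and the crude "comment form" check are assumptions for illustration only, not anything SER does internally:

    import random
    import string
    from urllib.parse import urlsplit, urlunsplit
    from urllib.request import urlopen

    def looks_like_spamtrap(url: str, timeout: int = 15) -> bool:
        """Swap the last path segment for random letters; a honeypot of the
        kind described above still serves a page with a comment form."""
        parts = urlsplit(url)
        segments = [s for s in parts.path.split("/") if s]
        if not segments:
            return False
        # swap the last folder name for random nonsense
        segments[-1] = "".join(random.choices(string.ascii_lowercase, k=10))
        mutated = urlunsplit((parts.scheme, parts.netloc, "/" + "/".join(segments), "", ""))
        try:
            html = urlopen(mutated, timeout=timeout).read().decode("utf-8", "ignore")
        except Exception:
            return False  # 404 or network error = behaves like a normal site
        # crude footprint check: a made-up path still shows a comment form
        return "<form" in html.lower() and "comment" in html.lower()

    # example (hypothetical usage):
    # looks_like_spamtrap("http://solarlamp.bloq.ro/2013/02/20/hello-world/letimoa/derrubio/miscellanea")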

It would be useful to have such sites recognized and kept out of the global list when importing URLs or site lists.

This comment script is installed on a variety of domains, among others often on:

mythem.es
mailld.com
www.0l.ro
spamtrap.ro

Comments

  • SvenSven www.GSA-Online.de
    All sites seem to use the same "adfly id"... that's all they have in common, plus the same style. If you see anything else in the HTML which makes them unique, let me know... for now I added "var adfly_id = 2275180" to the script as not allowed, so it will be skipped on the next update.
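
    For illustration only, a minimal sketch of what a check for that HTML footprint could look like (the function name and the plain urlopen fetch are assumptions, not how SER's engine actually tests "not allowed" strings):

        import re
        from urllib.request import urlopen

        # the string added as "not allowed" - pages containing it get skipped
        ADFLY_FOOTPRINT = re.compile(r"var\s+adfly_id\s*=\s*2275180")

        def has_adfly_footprint(url: str, timeout: int = 15) -> bool:
            """Fetch the page and look for the shared adfly id in its HTML."""
            try:
                html = urlopen(url, timeout=timeout).read().decode("utf-8", "ignore")
            except Exception:
                return False
            return bool(ADFLY_FOOTPRINT.search(html))
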
  • I have spent a few hours removing all existing URLs on Linux with regex.
    There is, however, NO strict regex that removes only those sites = hence lots of manual/visual human work before deleting each URL.

    Even the many sub.sub.sub.domains are no sure sign, because many .edu sites use similar sub-domains for their projects.

    Besides the above-mentioned 4 domains with lots of circulating URLs,
    I have NOT seen any other footprint for clear identification.

    If I find anything new as a footprint = I will let you know.
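
    For the mechanical part, a rough sketch of such a domain filter (the file names are made up; it only removes the 4 known domains listed above, everything else still needs the manual check described here):

        import re

        # only the four domains named above - this does NOT catch the many
        # other installs of the same comment script on unknown domains
        SPAMTRAP_DOMAINS = re.compile(
            r"https?://([^/]+\.)?(mythem\.es|mailld\.com|0l\.ro|spamtrap\.ro)(/|$)",
            re.IGNORECASE,
        )

        with open("sitelist.txt") as src, open("sitelist_clean.txt", "w") as dst:
            for line in src:
                if not SPAMTRAP_DOMAINS.search(line):
                    dst.write(line)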