
Online Wayback Machine downloader / website downloader / CMS converter

edited January 2019 in Buy / Sell / Trade
https://en.archivarix.com/ is an online Wayback Machine downloader, website downloader and CMS converter. It works very simply: just enter the website URL, choose your download options, enter your e-mail and wait a little while. The content can then be downloaded from the system as a zip file and installed on your server. For content management we have developed a free, open-source CMS: it is a single small PHP file and requires no installation or database. See more details about the CMS here: https://en.archivarix.com/cms/

What is it for? Firstly, to build your PBN with unique content found in the Web Archive. When the site is parsed, you can set the parameters needed to use the content as a source of traffic and links, such as deleting all external links and clickable contacts, removing counters, advertising and analytics, and optimizing HTML code and images. Thanks to the Archivarix CMS you can easily manage the site, run search and replace, edit pages with the WYSIWYG editor and insert your own TDS scripts. It can also work alongside any other CMS, for example WordPress, on the same domain.

Secondly, the system can be used to convert websites built in another CMS, or in static HTML, to the Archivarix CMS. Here, too, all external scripts, counters and advertisements can be removed (for example, if the site was on free hosting).
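As a rough illustration of the kind of cleanup described above (an assumption about the general technique, not Archivarix's actual code), the snippet below strips third-party scripts and off-site or clickable-contact links from an archived HTML page with Python and BeautifulSoup; the file name and domain are placeholders.

```python
# Illustrative sketch only: the sort of processing meant by "deleting external
# links and clickable contacts, removing counters, advertising and analytics".
# Assumes beautifulsoup4 is installed; "example.html" and OWN_DOMAIN are placeholders.
from urllib.parse import urlparse
from bs4 import BeautifulSoup

OWN_DOMAIN = "example.com"  # hypothetical restored domain

with open("example.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

# Drop scripts loaded from third-party hosts (counters, ads, analytics).
for script in soup.find_all("script", src=True):
    host = urlparse(script["src"]).netloc
    if host and host != OWN_DOMAIN:
        script.decompose()

# Unwrap links that point off-site or to clickable contacts (mailto:, tel:),
# keeping the anchor text but removing the link itself.
for a in soup.find_all("a", href=True):
    href = a["href"]
    host = urlparse(href).netloc
    if href.startswith(("mailto:", "tel:")) or (host and host != OWN_DOMAIN):
        a.unwrap()

with open("example.cleaned.html", "w", encoding="utf-8") as f:
    f.write(str(soup))
```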

Comments

  • Recently our system has been updated and we now have two new options.
    First - you can download darknet .onion sites. Just enter the .onion website address in the "Domain" field here https://en.archivarix.com/website-downloader-cms-converter/ and our system will download it from the Tor network just like a regular website. (A rough sketch of how a page is fetched over Tor is shown after the comments below.)
    Second - Archivarix can not only download existing sites or restore them from the Web Archive, but also extract content from them. Here https://en.archivarix.com/restore/ you need to select "Extract structured content" under "Advanced options".
    After that you will receive a complete archive of the entire site, plus an archive of the articles in XML, CSV, WXR and JSON formats. (A sketch of reading such an export is also shown below.)
    When building the article archive, our parser keeps only meaningful content, excluding duplicate articles, control elements and service pages.

  • Pretty cool, and a fun site for seeing how all those big websites used to look before they became what they are today.
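Regarding the .onion option mentioned in the first comment: below is a minimal sketch of how any downloader reaches a Tor hidden service, assuming a local Tor daemon listening on 127.0.0.1:9050 and the requests[socks] extra installed; the .onion address is made up. This illustrates the transport only, not the Archivarix backend.

```python
# Fetch a .onion page through a local Tor SOCKS proxy.
# "socks5h" makes the proxy resolve the .onion name inside Tor.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

url = "http://exampleonionaddressxyz.onion/"  # hypothetical .onion address
resp = requests.get(url, proxies=proxies, timeout=60)
print(resp.status_code, len(resp.content))
```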
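And for the structured-content export: the exact schema of the delivered files is not documented in this thread, so the sketch below assumes a hypothetical JSON layout (a list of objects with "title", "url" and "content" keys) purely to show how such an export could be loaded.

```python
# Hypothetical example: loading an extracted-articles JSON export.
# The file name and field names are assumptions; check the actual archive
# Archivarix delivers before relying on them.
import json

with open("articles.json", encoding="utf-8") as f:  # placeholder file name
    articles = json.load(f)

for art in articles:
    print(art.get("title", "<untitled>"), "-", art.get("url", ""))
```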