Faster: Static pages or Dynamic Virtual URLs?

I'm just trying to be as efficient as possible.

Which loads faster:

WEBSITE A: Its pages are static HTML with some PHP, and each page lives in its own folder with its own index file. So 100 pages means 100 directories, each with a different index file.

WEBSITE B: The user only ever hits index.php in the root of the domain. All web addresses are created dynamically by that one PHP file, and content is loaded from a single directory that holds every page's content and metadata files in ONE place.
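For reference, the WEBSITE B setup is usually called a "front controller". A minimal sketch of the idea, assuming Apache with mod_rewrite — the `pages/` directory and the slug handling here are illustrative, not a specific product's layout:

```php
<?php
// index.php — hypothetical front controller (names are illustrative).
// An .htaccess rule like the following sends every request here:
//   RewriteEngine On
//   RewriteCond %{REQUEST_FILENAME} !-f
//   RewriteRule ^ index.php [L]

// Take the requested path and reduce it to a simple slug.
$path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
$slug = preg_replace('/[^a-z0-9-]/', '', strtolower($path ?: 'home'));

// All page content lives in one directory, as described above.
$file = __DIR__ . '/pages/' . $slug . '.html';

if (is_file($file)) {
    readfile($file);          // serve the page content
} else {
    http_response_code(404);
    echo 'Page not found';
}
```

So `/about` would map to `pages/about.html`, `/contact` to `pages/contact.html`, and so on, all through the one script.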

It seems a server would cache a file that is requested over and over, so if all requests are handled by ONE PHP file, as on WEBSITE B, I would guess most shared servers will cache it and serve users faster. I have NO IDEA whether that's actually true, though. Does anyone know for sure?
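On the caching question: two layers are typically involved. The operating system keeps recently read files in its page cache regardless of which layout you use, and PHP's OPcache keeps the compiled bytecode of scripts in shared memory, so a script that is hit over and over is not re-read and re-parsed each time. A sketch of the relevant php.ini OPcache directives (the values shown are illustrative, not tuned recommendations):

```ini
; Enable OPcache so compiled PHP bytecode is kept in shared memory
opcache.enable=1
; Shared memory (in MB) reserved for cached scripts
opcache.memory_consumption=128
; Maximum number of scripts that can be cached
opcache.max_accelerated_files=10000
; Re-check file timestamps so edits are picked up (costs a stat call)
opcache.validate_timestamps=1
```

Note that OPcache works for WEBSITE A's 100 index files just as well as for WEBSITE B's single index.php, as long as the number of scripts stays under the cache limit.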

And loading a different index file for every directory, as on WEBSITE A, means a spinning disk has to seek to different tracks and sectors for wherever each page's index file happens to be stored. My guess is that takes more time than WEBSITE B's approach: loading the SAME EXACT file from the root every time, plus one of the 100 files in the page-data directory, and computationally building a virtual URL so each page can display its own content and METAs.

At what point are parts of sites (files) cached on most servers? Are they at all?

If we REALLY want to speed things up, would you ever pull files from many different directories at all? Wouldn't we just want a single file?


  • SvenSven
    It wouldn't matter. WEBSITE B would do fine. PHP is executed in both cases anyway.