Optimizing Little Hills, FaithTree and Open for Business
I’ve been deploying a lot of server optimization tricks to improve the performance of Little Hills, FaithTree and Open for Business these past months. They had all grown slow under the kludge of added features, and my once snappy homegrown content management system (also used here), SAFARI, was no longer snappy. That threatened user frustration and, for that matter, lower rankings on Google, which pays attention to the performance of sites it refers people to.
I’ll blog about some of the other optimizations a different day, but one I’m rather pleased with is support for WebP. I’ll admit to ignoring it in no small part because Apple didn’t support it. I use primarily Apple Safari to browse the web, including pages served by my SAFARI (which I called that before Apple’s Safari was launched, so I’m being stubborn and keeping the name). However, WebP arrived in Safari with iOS 14 and macOS Big Sur, so I revisited the format and found it could offer substantial savings in download size on many images. With support in every modern browser, I figured I’d implement support in SAFARI.
By support, I mean my goal was to be able to upload an image to the site and have the software take care of turning it into a WebP. SAFARI now does that, taking images of known types and running several layers of optimization. First, it tries to make a more efficient JPEG using ImageMagick’s PerlMagick module; once that is complete, it attempts to make a WebP version. It only keeps the optimized JPEG version if it is at least 20% smaller than the original. Likewise, it only keeps the WebP if it is that much smaller than the optimized JPEG. Since not everything older supports WebP, there’s no reason to mess with it if a JPEG will do.
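To make the flow concrete, here is a minimal sketch of that pipeline as a shell script. SAFARI actually does this in Perl via PerlMagick; the tool choices (`convert`, `cwebp`), quality settings, file layout, and helper names below are my illustrative assumptions, not SAFARI’s actual code.

```shell
#!/bin/sh
# Sketch of the optimize-then-compare pipeline. Assumes ImageMagick's
# `convert` and Google's `cwebp` are on PATH; quality values are guesses.

# Succeed (exit 0) only if file $1 is at least 20% smaller than file $2,
# i.e. its size is no more than 80% of the reference size.
at_least_20pct_smaller() {
  a=$(wc -c < "$1")
  b=$(wc -c < "$2")
  [ $(( a * 100 )) -le $(( b * 80 )) ]
}

optimize() {
  src="$1"
  dir=$(dirname "$src")
  base=$(basename "$src")
  mkdir -p "$dir/.optimized"
  opt="$dir/.optimized/$base"
  webp="$dir/.optimized/$base.webp"

  # Layer 1: try to produce a leaner JPEG; fall back to the original
  # if the savings don't reach the 20% threshold.
  convert "$src" -strip -interlace Plane -quality 85 "$opt"
  at_least_20pct_smaller "$opt" "$src" || cp "$src" "$opt"

  # Layer 2: try a WebP, keeping it only if it beats the kept JPEG
  # by the same 20% margin.
  cwebp -quiet -q 80 "$opt" -o "$webp"
  at_least_20pct_smaller "$webp" "$opt" || rm -f "$webp"
}
```

The 20% threshold keeps the `.optimized` directory from filling up with variants that barely pay for the extra moving parts.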
This is similar to what I later read described here. I want to implement more of Bernat’s optimizations in the future.
That’s great, I thought, but then I realized the next issue (which is described, with a solution, in Bernat’s piece, but I hadn’t found it yet at the time): if not every browser supports WebP, I still have to be prepared to serve the JPEG. The HTML5 <picture> element supports offering different image options and letting the browser choose, but my goal was not to require me (or anyone else someday posting to Little Hills’ site) to manually think through the process of describing alternatives. It also does nothing to take advantage of the alternatives SAFARI automatically generated for existing pages.
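For context, this is roughly what the manual <picture> markup looks like; the file paths here are illustrative, not taken from any of my sites.

```html
<!-- The browser uses the first <source> it supports and falls back
     to the plain <img> otherwise. -->
<picture>
  <source srcset="/images/.optimized/photo.webp" type="image/webp">
  <img src="/images/photo.jpg" alt="A photo">
</picture>
```

Multiply that by every image on every existing page, and the appeal of handling it server-side becomes obvious.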
The solution was complicated by another optimization I made. My long-worn solution would have been to come up with a series of mod_rewrite rules for my Apache server to check whether the browser could handle WebP and then serve the WebP in place of the JPEG when possible. However, I recently implemented an NGINX caching server to get its advantages over Apache, so I needed to figure out the NGINX way to do it. That turned out to be a bit confusing to me at first, but simple in practice.
Here’s the final bit of NGINX configuration magic used to reroute requests to WebP (or the optimized JPEG), as best suited. It matches any folder location under an images directory and then looks for an .optimized subdirectory. If it finds one, it looks for the WebP version if applicable, and the optimized JPEG if the WebP version is unavailable or the browser does not support it.
location ~ ^(?<extra>.*?)(?<prefix>/images/(?:.*/)?)(?<rxfilename>.*)$ {
    expires 365d;
    add_header Pragma "public";
    add_header Cache-Control "public, no-transform";
    add_header Vary Accept;
    try_files $extra$prefix.optimized/$rxfilename.$webp_suffix $extra$prefix.optimized/$rxfilename.jpg $uri =404;
}
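One detail that configuration relies on but does not show is the $webp_suffix variable. The conventional way to define it — and I am showing the standard pattern here rather than quoting my exact configuration — is a map block at the http level keyed off the browser’s Accept header:

```nginx
# Standard pattern: $webp_suffix becomes "webp" only when the browser
# advertises WebP support in its Accept header; otherwise it stays "jpg",
# so the first and second try_files candidates collapse to the same JPEG.
map $http_accept $webp_suffix {
    default       "jpg";
    "~image/webp" "webp";
}
```

The add_header Vary Accept line in the location block is what keeps caches honest about this: the same URL can legitimately return two different bodies depending on the Accept header.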
This has been working great for a couple of months now — I started writing this post back on July 7 and then got sidetracked — so if you’re looking to optimize your NGINX configuration, I definitely commend it to you.
Sign Up
It does not add a lot of functionality just yet, but if you would be willing, would you please sign up for an asisaid Account? I am testing my newly minted user sign up and authentication system, part of the “project” I've been working on this summer.
Once you get your account and sign in using the sign in form located above the box to leave a comment on any entry, you'll notice that the comment area will show you signed in rather than giving you boxes to fill out with your name and so on. The information you normally would have typed (or that could optionally be stored in a cookie) will now be stored on the server, associated with your account profile.
It doesn't sound like much, but it required some hefty architectural changes to my codebase.
Thanks to any willing “beta testers.” 20 asisaid points for giving it a whirl.
Finally
After SAFARI, my homegrown blogware, spent probably close to a decade with a non-functional “log off” function, I can actually log out of my administrative account again. Even when I'm slow to fix something, I do get to it sooner or later!
Under Construction
As I have noted from time to time, asisaid runs the bleeding edge version of my SAFARI content management system, functioning as the official guinea pig for the software. Right now, I am in the process of checking off some long-term to-dos for SAFARI as part of developing a site, and that means things could get a little crazy on here. In particular, the project is forcing me to go back and clean up some of my messiest, oldest code — some going back as far as eleven years — and to re-implement things properly.
If something goes wrong, would you please drop me a line via e-mail or Facebook, in case I miss it?
Thanks.
A SAFARI Glitch
With the way I have SAFARI set up, I have asisaid as a testbed, and then all of the other SAFARI-enabled sites feed off of one codebase to which I push out updates. I did such a push a few days ago to fix the disabled comments bug that was affecting Ed's blog. Silly as I am, I didn't check afterwards to make sure everything was still OK on my church's site, which also uses SAFARI. As it turned out, I killed off most of the site because of a small bug that hadn't been a problem on the other sites.
Keeping the code centralized is good for saving time doing updates, but has its disadvantages…
More SAFARI Fixes
Just a heads up: I've corrected a bug on the “Recent Comments” page that caused the entries to be semi-randomly sorted. With this fix, the latest comments are again placed at the top of the page.
The road to 3G SAFARI is progressing! Let me know if you see any bugs. For those of you who read Ed's blog, I'm hoping to get this and some other fixes deployed there very soon.
Joining the Crowd
Well, I enjoyed my brief moment as t3h l33t subquery h4×0r, but in the end, subqueries seem to fall flat on their face. Two queries with subqueries were taking SAFARI 7-20 seconds to run, a totally unacceptable speed, especially since my goal with SAFARI was to build it in such a way as to allow it to be Slashdotted without performance problems. Instead, using a join statement, I accomplished the same effect while reducing the processing time to less than two seconds (how much less, I cannot yet say, since I'm still working on developing the perfect query, but I'm expecting it to drop below 1 sec before I am done).
The road to the next generation of SAFARI progresses…
Update (10 July 2006 12:14 AM): OK, so I wanted to see if I could count the number of comments in my comments table in addition to comparing the objects (metadata) and articles (normal article stuff) table all in one query for efficiency. I ended up with what might be best termed a hybrid solution: the meat of the problem is taken care of via two LEFT JOINs, the latter one joining the results of a subquery to the main results. It seems reasonably efficient: it takes a mere 0.0274 seconds to process! How 'bout them apples?
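For the curious, the shape of such a hybrid query looks roughly like this; the table and column names are invented for illustration and are not SAFARI's actual schema.

```sql
-- Two LEFT JOINs: the first pulls in per-article metadata, the second
-- joins against a derived-table subquery that pre-counts comments,
-- so the count happens once per article rather than once per row.
SELECT a.id, a.title, o.value AS metadata, c.comment_count
FROM articles a
LEFT JOIN objects o
       ON o.article_id = a.id
LEFT JOIN (
    SELECT article_id, COUNT(*) AS comment_count
    FROM comments
    GROUP BY article_id
) c ON c.article_id = a.id;
```

The derived table is the trick: the grouping work is done in one pass, and the outer joins stay cheap.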
Subqueries Take One
As you may recall, I recently discovered subqueries. While everything should seem functionally the same, you'll now be receiving subquery-produced information when you view category pages. I think everything seems to be working OK thus far, although the subqueries seem to be a bit slower than simply doing several queries, which suggests to me I must not be doing something right — why would initiating multiple queries be faster than one complex query?
Hopefully, I'll continue working on adding functionality via subqueries in the near future.
Oh, and about commenting or the lack of the ability to do thereof: I think I finally fixed SAFARI so that commenting should be on by default on asisaid. I'll be distributing that minor bug fix to other SAFARI-powered sites, such as Ed's, once I stabilize the subquery work a bit more.
Whoops!
Mark pointed out to me that I had disabled comments on new posts — not something I intended to do. If you tried to comment in the last day or so and found yourself unable to, please try again.
I'm sorry about that! I should be more thorough in checking my code. Sheesh.
SAFARI 2 Road Map
- Implement per post comment_disabled flag. (I promised this to Ed a long time ago.)
- Create query string option to use alternate themes (e.g. for optional front pages that don't look blog-like.)
- Implement metadata editor allowing editing of any standard metadata as well as addition of unlimited custom metadata.
- Allow data to be limited based on any metadata, not just category.
- Streamline category/metadata selection mode to use MySQL sub-queries.
- Rework search engine from SAFARI 1.x flat file database support to SAFARI 2.x SQL database system.
- Launch of the new, reworked Open for Business.
Lower priority goals:
- Implement user-end of multi-level threaded commenting.
- Finish auto-caching spider for high traffic readiness.
- User registration tied to e-mail address verification.
- User-only comment posting restriction mode.
This will make SAFARI 2 functionally complete.
SAFARI 2 Release Candidate Goals:
- Rework inefficient subroutines, removing 1.x legacy code.
- Verify SQL injection protection.
- Decide on and reveal new name (too much confusion with the Apple Safari web browser, even though SAFARI the CMS was first).