[Photo: “The whole orange” by Tim Dorr, used under a Creative Commons Attribution-Non-Commercial-Share-Alike licence.]
The first part of this series was heavy on lists and common sense and light on detail. Cacti tend to be more interesting than audits, despite the latter’s importance, and the amount of work being put into such a security hazard can seem ill spent when all you want to do is get down and start fiddling. This is what the plan is all about: it marries riotous list-making with tinkering joy.
Toys!
The first thing I did when the prospect of a new server arose, before looking at prices or specifications, was make a wishlist of everything I wanted. Although the old server has worked without incident for so long, there are places and processes where things could be smoother – this is the case with any computer, and having the time to figure out improvements is a rare joy, as opposed to when something breaks and a near-as-dammit replacement has to be swiftly procured.
The wishlist was split into areas which are currently a pain to work with and areas where it would be good to try something different. The latter is obviously the more contentious – why change what works? – but I always like to try something new on every project; how else can I learn?
The most pressing area for me to address was the way Apache handles domains. I had originally defined each virtual host in its own file, but for sites with many aliased domains this proved unwieldy, so I switched to the mass virtual hosting that Apache provides through mod_vhost_alias. I set up a “domains” folder and populated it with soft links to the appropriate web directories. This worked, and it meant fewer Apache restarts; however, the unintended consequence was that mod_rewrite no longer worked unless a RewriteBase statement was included – a small but vital change that causes some hassle when moving from a development to a production environment.
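As a rough sketch of the two halves involved – the paths and rewrite rule here are illustrative rather than lifted from my config:

    # httpd.conf: mod_vhost_alias maps each hostname onto the matching
    # soft link in the "domains" folder (path illustrative)
    UseCanonicalName Off
    VirtualDocumentRoot /var/www/domains/%0

    # per-site .htaccess: under mass virtual hosting mod_rewrite can no
    # longer infer the URL prefix, so the base must be stated explicitly
    RewriteEngine On
    RewriteBase /
    RewriteRule ^article/(.+)$ index.php?slug=$1 [L]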
Also high on my list was changing the way permissions and PHP are handled. The existing server uses a familiar but awkward setup, whereby all servable files are owned by a single UID and GID (“ftp” for both, in my case) and Apache runs under its own (“apache”). If PHP wishes to write to a directory, that directory needs to be world writable. While not exactly a security nightmare, it still feels slightly uncivilised. There are numerous ways around this, and thankfully Stuart Herbert summarises them all with his own conclusions and (slightly superfluous) benchmarks. mpm-itk certainly seems like the way forward given its active development and use on production servers.
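For the unfamiliar, mpm-itk lets each virtual host run as its own user, so the world-writable problem evaporates. A minimal sketch, with hypothetical names throughout:

    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /home/examplecom/web
        # mpm-itk's directive: serve this vhost as the site's own user
        # and group, so PHP writes as that user rather than "apache"
        AssignUserID examplecom examplecom
    </VirtualHost>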
This leads to shell accounts: in my existing set up I created the bare minimum of them, and this led to other problems down the line involving e-mail and FTP. There seems to be increasing pressure to abandon FTP as a protocol given its inherent weaknesses in security and efficiency; with shell accounts for all sites I can abandon plain FTP, stick to SFTP (not FTP over SSL) and feel like I’m being progressive.
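If you want SFTP without handing out full interactive shells, recent versions of OpenSSH (4.9 or later, if memory serves) can lock accounts down to file transfer only. A sketch, assuming a hypothetical “sftponly” group for site accounts:

    # sshd_config
    Subsystem sftp internal-sftp

    Match Group sftponly
        # %h (the user's home) must be root-owned for the chroot to apply
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no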
E-mail is always a pain to set up and manage, but Postfix made it almost bearable when coupled with the superb O’Reilly book on it. I’ve heard nothing but good things about Exim, and with a similar book to go along with it I’m certain the end result will be utterly indistinguishable from a Postfix installation.
My existing server came pre-installed with Fedora, which is unfortunately notoriously hard to upgrade between major versions in place. That meant rolling my own versions of key services such as Apache (2.2.6), PHP (5.2.6) and MySQL (5.0.45). I would happily still be doing that – nothing beats an up-to-date, custom compilation – but the new server came with Ubuntu, which meant going back to the dreamy apt-get. To someone “brought up” on the Debian way of thinking, Fedora seems to organise itself a little peculiarly.
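For illustration, the core of the stack is a single command away (package names as they stood on the Ubuntu releases of the day; adjust to taste):

    sudo apt-get update
    sudo apt-get install apache2 php5 libapache2-mod-php5 mysql-server exim4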
Hard drive set up would be largely the same – two hard drives running in RAID-1 (mirroring) mode. The logic of this is questionable, as it only covers the scenario where a single drive fails, and a single failure is less likely when both drives come from the same manufacturing batch – they tend to die together. Word to the wise: when buying multiple hard drives from the same company, order from different places or check the serial numbers aren’t too close, otherwise all your drives will fail around the same time. Thankfully the new server would be using 15,000 RPM SAS drives rather than standard 7,200 RPM SATA-I, so at least when they fail there is more of a chance of it being wildly catastrophic.
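Checking the serials is quick with smartmontools, and a software mirror via mdadm is equally terse (device names here are illustrative, and SAS controllers sometimes want the right -d type flag for smartctl):

    # compare serial numbers - consecutive ones suggest the same batch
    smartctl -i /dev/sda | grep -i serial
    smartctl -i /dev/sdb | grep -i serial

    # a software RAID-1 mirror, should the hardware not provide one
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1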
With a plan being only as good as its implementation, how does one prepare beyond a simple list?
Virtualisation
Virtual machines and virtualisation have come a long way; with computing power ever increasing and multi-core processors commonplace, running a virtual machine has become a day-to-day occurrence for some. Fundamentally, what this also allows you to do is try out some of your more fringe ideas in a safe environment before you run off and implement them on a computer several hundred miles away.
Grabbing a free virtualisation package like Sun’s VirtualBox means that for open source platforms like Fedora and Ubuntu, the cost to tinker and refine is measured in time rather than money. For other platforms there is likely a cost to pay for the OS and related software; however, what seems expensive now may be cheap compared to the possible hair-pulling later, and there is always the possibility of licence transferral.
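VirtualBox can even be driven from the command line, which suits throwaway practice machines nicely; a sketch (the machine name is hypothetical and flag spellings vary between VirtualBox versions, so treat this as indicative):

    VBoxManage createvm --name "staging" --register
    VBoxManage modifyvm "staging" --memory 512 --ostype Ubuntu
    VBoxManage startvm "staging"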
Deployment
There comes a time when all the planning in the world doesn’t compare to cutting your teeth on the target system. This is where the deployment plan comes in: that stretch of time when the urge to dive straight in is greatest and potentially most costly. Lamentably, deployment plans are a uniquely personal affair, tailored around your existing set up and the choices made earlier. Whether you’re implementing something intricately complex or just throwing up the usual suspects, even knowing which order to take them in can help smooth out what is going to be a busy and fraught process.
My own plan was as in-depth as the earlier steps: which services would go up first, what metrics I would use to consider a service “ready”, which order the sites were going to be transferred in, estimated dates for each, and so on. My service queue was: security (firewall, SSHblack etc.), MySQL (it runs without dependencies and can optionally drive other services), e-mail (get the worst out of the way early), Apache, PHP, tuning, benchmarking. My site queue was staged: a small, wholly controlled site first (DNS, database, no external services etc.), then larger sites in batches over a weekend, then the largest sites last, once everything had been run in for a few weeks.
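“Ready” metrics will differ per service, but even a handful of smoke tests run before each batch of sites moves across catches the embarrassing failures; these are the sort of checks I mean (the domain is hypothetical):

    apachectl configtest            # does the Apache config still parse?
    php -v                          # is the expected PHP version answering?
    mysqladmin -u root -p status    # is MySQL up and taking connections?
    dig +short www.example.com      # has DNS swung over to the new box yet?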
Regardless of your plan, you are likely going to be running two servers in parallel, and given the nature of DNS you are likely to end up with databases and logs that do not match up. Again, how you deal with these is up to you: if you can write them off as “just part of the process”, all the better; if you already have a database delta script handy, superb. These are all considerations based on your circumstances – perhaps this is a non-issue given your centralised NFS storage and redundant MySQL cluster.
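Absent anything cleverer, a crude but serviceable way to carry late database changes across is a straight dump and reload, assuming you can stomach a brief read-only window on the old box (database and host names are illustrative):

    # on the old server
    mysqldump -u root -p --databases sitedb > sitedb.sql
    scp sitedb.sql newserver:/tmp/

    # on the new server - the dump includes its own CREATE DATABASE
    mysql -u root -p < /tmp/sitedb.sql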
Conclusion
As much as I advocate meticulous planning, some people will still dive straight in and come out without any trench stories to share. For me, planning reassures myself that I have everything sorted, if not in my head then on paper, and reassures those around me that I may be mad, but at least there’s a method. The core message is: do what makes you feel comfortable and reassures everyone involved.