by Jonathan Lent
Systems administrators today (Linux systems administrators, in the context of this post) have many valuable tools at their fingertips. After the initial investment of time in learning curves, architectural decisions, dead ends, inevitable frustration, and testing, adopting automation technologies can make day-to-day tasks far more productive, repeatable, iterative, and secure.
End-to-end, we are able to harness build technologies such as Cobbler and FAI to deploy a minimal operating system over the network, producing a baseline that’s predictable, generally useful, and ready for further customization. Alternatively, we can leverage the experience of others in VMware, AWS, or other cloud environments and use community-maintained OS images or templates. With a minimal OS installation at the ready, configuration management solutions save time and help prevent human error, rapidly enforcing a set of baseline configurations, security policies, and application frameworks that harden and prep the machine for hosting services.
Common to and critical for both of these approaches are the package managers and community-backed package repositories provided by and supported within most modern mainstream Linux distributions. The continual stream of package updates is driven by bug fixes, security patches, and feature requests, and is released by large communities of casual enthusiasts, security professionals, and other folks who are often volunteering their time.
However, this interwoven technology toolkit inevitably tempts the sysadmin toward complacency, dramatically altering workflows and gradually eroding fundamental understanding of system configuration. Reliance on these technologies can produce an assembly-line rush of productivity, but in the long term it dulls your sensibilities, technical prowess, and overall skillset. A recruiter friend of mine recently told me that it’s genuinely hard to judge ‘good-fit’ resumes for systems administrators anymore; more and more, individuals listing sysadmin experience have merely ‘gone through the motions’ in modernized, turn-key environments. My intention is not to be disparaging; it’s important to acknowledge that some of our peers in industry have gotten lost or been misled along the way.
Let’s consider the fundamental task of installing an OS. An automated installer can produce a ready-to-go system in five to ten minutes. What happens when the network-based build infrastructure is not available? Perhaps the framework is down for maintenance, error-prone, or unreachable from the network segment you’re building on. Ultimately, you concede that a manual build is required. More time is then spent preparing the install media (e.g. downloading an ISO, attaching it to a VMware host, or creating a bootable USB thumb drive and physically attaching it to a server). You’ll then find yourself staring at a GUI-based installer, second-guessing every decision along the way: disk partitioning, network configuration, package selection. Finally you’ve got an OS to show for the effort, though the time spent simply trying to make the automated solution work, versus what the manual installation alone would have taken, makes you queasy. If only you’d known to short-circuit (abandoning the automation) and skip to the manual effort sooner…
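As a refresher on that manual path, writing an installer ISO to a USB stick comes down to a couple of commands. A minimal sketch follows; the ISO path and device name here are placeholder assumptions, and the target device must be verified before writing:

```shell
# Assumed paths, for illustration only; verify both before running.
ISO="/tmp/distro-minimal.iso"   # hypothetical downloaded installer ISO
DEV="/dev/sdX"                  # REPLACE with the real USB device

# Sanity-check that $DEV really is the removable drive, not a system disk
lsblk "$DEV"

# Write the image; bs=4M batches the copy, conv=fsync flushes data
# to the device before dd exits, status=progress shows throughput
sudo dd if="$ISO" of="$DEV" bs=4M status=progress conv=fsync
```

Ten minutes of double-checking `lsblk` output is cheaper than restoring the system disk you overwrote in a hurry.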
Let’s also consider the way we configure our hosts. In the typical scenario, we take our freshly-installed OS, put our configuration management solution of choice in place, and let it do its job on the vanilla system while we watch the coffee percolate. It deploys thousands of lines of tweaks into configuration files, installs dozens of packages, provisions user accounts, pulls secrets from escrow services, and does many more things that you’d be hard-pressed to list thoroughly if asked on the spot. Therein lies the problem: what if our configuration management solution is not available? Maybe you’ve deployed an OS not yet accounted for in the configuration codebase. Maybe instead of a new system, you’re on-boarding a system that arrived in an effectively frozen state, leaving you unable to unleash your default assortment of configurations. In either case, you must manually configure things one step at a time, and in the latter case, with one change management request after another. Manually configuring a system as thoroughly as a configuration management system does, backed as it is by years of incremental improvements and testing, can be a daunting task.
Let’s finally consider a scenario where you can’t rely on your operating system’s package manager. What if a given piece of critical software is not available, or the OS provides a much older version than meets your needs? What about the opposite case, where you’re obligated to install a specific older version of software (ideally still secure) that is not available via the package distribution network? We suddenly find ourselves back in the trenches. Either we invest the time to roll our own custom package, easing future reinstallations or sibling server installations, or we perform a one-off build. In either case, we’ll be reading up on and passing the appropriate parameters to the configure script, then running make and make install. And then, silly us, we forgot the build dependencies. Oh, and then we realize that we must build five more antiquated software versions just to get this first one to compile. After flirting with chicken-and-egg situations for what feels like an eternity, we finally pull it off, only to learn that our efforts targeted the wrong OS or architecture. Drat.
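For reference, the classic from-source dance described above can be sketched end to end as follows; the package name, version, and install prefix are placeholders, not a real project:

```shell
# Hypothetical tarball and version, for illustration only
tar xzf foo-1.2.3.tar.gz
cd foo-1.2.3

# Install under /opt so hand-built files never collide with
# anything owned by the package manager
./configure --prefix=/opt/foo-1.2.3

# Parallel build across all cores; when configure or make fails,
# config.log usually names the missing build dependency
make -j"$(nproc)"
sudo make install

# Alternatively, stage the install into DESTDIR so the result
# can be wrapped into a custom package for sibling servers
make DESTDIR=/tmp/foo-pkgroot install
```

The `DESTDIR` staging step is what separates a one-off build from something you can reinstall or redistribute later, which is usually worth the extra few minutes.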
The three scenarios above all ended on a negative or frustrated note… but almost always, especially when you are ultimately successful, those negative feelings quickly pass. Personally, I relish the opportunity to throw everything out the window and get my hands dirty. It’s similar to someone getting frustrated with urban sprawl and leaving the city to live in a cabin in the woods for a while. As silly as it sounds, manually building and configuring systems is my ‘cabin in the woods’. Manual operating system installations, system configurations, package builds, and all of the frustrations that come with them are experiences that every systems administrator worth their salt should have regularly. These trials and tribulations strengthen your fundamental understanding of systems and remind you that taking tools for granted is a loser’s gamble. Complacency and the notion of a comfort zone are your enemies in this industry, ones that are more than willing to leave you behind in a heartbeat. Fancy turnkey deployment frameworks come and go, but a well-rounded set of experiences leaves you feeling like you have warm sun on your face after a day spent chopping wood in the rain. Get your hands dirty, break things, and put them back together stronger than before. The cabin is a great place to visit. I wouldn’t want to settle there permanently, but it’s great knowing that it’s waiting for me if I need it.