The Craft: Automation and Scaling Infrastructure

“Progress is made by lazy men looking for easier ways to do things”
— Robert A. Heinlein

Until the late 18th century, craftsmen were the primary source of production. With specialized skills, a craftsman’s economic contribution was a function of personal quantity and quality, and a skilled artisan often found it undesirable, if not impossible, to duplicate previous work with accuracy. There was also a limit to how much a skilled craftsman could do in one day. Scaling up the quantity of crafted goods to meet increased demand meant working more or adding more bodies — both of which potentially sacrificed quality and consistency.

Today, Internet applications and infrastructure are often the creations of skilled modern craftsmen. The raw materials are files, users, groups, packages, services, mount points and network interfaces — details most people never have to think or care about. These systems often stand as a testament to the skill and vision of a small group or even an individual. But what happens when you need to scale a hand-crafted application that many people — and potentially the life of a company — depend on? The drag of minor inefficiencies multiplies as internal and external pressures create the need for more: more features, more users, more servers and a small army of craftsmen to keep it all together.

These people are often bright and skilled, with their own notions and ideas, but this often leads to inconsistencies in the solutions applied across an organization. To combat inconsistency, most organizations resort to complicated bureaucratic change control policies that are often capriciously enforced, if not totally disregarded — particularly when critical systems are down and the people who must “sign off” have little understanding of the details. The end result is an organization that purposely curtails its own ability to innovate and adapt.

Computers are extremely effective at doing the same repetitive task with precision. There must be some way to take the knowledge of the expert craftsman and transform it into a program that can do the same tasks, right? The answer is yes, and in fact, most system administrators have a tool belt full of scripts that automate some aspects of their systems. For both traditional craftsmen and system administrators, better tools can increase both the quantity and the quality of the work performed.

Policy-driven automation facilitates both predictability and adaptability, reduces the potential for human error and enables an organization to scale its IT infrastructure without a proportional increase in head count. Commercial options are available, but they’re not entirely transparent, and for small or medium-sized organizations they are prohibitively expensive. The open source options are varied, built on diverse philosophical and functional underpinnings, with different levels of adoption and community support.

Puppet, which was inspired by years of automation using CFEngine, is a relatively young open source configuration framework with a thriving community. Using parameterized primitives like files, packages and services, Puppet’s declarative language can model collections of resources and the relationships between them using inheritance and composition. Puppet enables consistent management of the server life cycle, building a system from a clean operating system, restarting services when their configurations change and decommissioning references to a retired system. Furthermore, Puppet uses “resource abstraction,” making the codified configurations portable across platforms and potentially generic enough to be shared within the community.
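
To make that concrete, here is a minimal sketch of what such a manifest might look like. The ntp package, /etc/ntp.conf file and ntpd service names are illustrative and vary by platform, and the module source path is an assumption:

    # Hypothetical example: keep NTP installed, configured and running.
    package { 'ntp':
      ensure => installed,
    }

    file { '/etc/ntp.conf':
      ensure  => file,
      source  => 'puppet:///modules/ntp/ntp.conf', # assumed module file
      require => Package['ntp'],                   # install before configuring
    }

    service { 'ntpd':
      ensure    => running,
      enable    => true,
      subscribe => File['/etc/ntp.conf'],          # restart on config changes
    }

The manifest declares what should be true rather than how to make it so: the package, file and service types are abstractions that Puppet maps onto each platform’s native tools, and the require and subscribe parameters express the relationships between resources, including restarting the service whenever its configuration changes.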

Executable, policy-driven automation doesn’t remove the need for knowledge and skill. It allows that knowledge to be invested in infrastructure design, and lets the computers carry out the results of those decisions. Instead of trying to replicate individual craftsmen and their processes, systems like Puppet herald a bold new future of infrastructure: rather than micromanaging an army of craftsmen building individual units, we describe how to solve the problem as a set of strategic rules for the infrastructure to understand and act on, and build vast factories of instantly scalable, on-demand resources.

Andrew Shafer, partner at Reductive Labs, has developed high-performance scientific computing applications, embedded Linux interfaces and an eCommerce SaaS platform. He currently works full time on Puppet, a free, open-source server automation framework available for Linux, Solaris, FreeBSD and OS X.

Comments

Tony Murphy, Infrastructure Designer

I think it may be some time before Puppet can take the place of skilled craftsmen, or even supplement them. Building scalable, available and performant infrastructure is still a very wetware-intensive occupation, though I don’t doubt that tremendous progress will be made over the coming years.

cheers
Tony

Comments are closed.