
Peter Donald helps create and operate software for fire fighters and other emergency response organisations in Melbourne, Australia. He likes to focus on architectures and practices that help to increase the velocity and reliability of the software delivery process. When not at a desk he is out in the bush, wandering the desert or up in the mountains.

Evolving towards cookbook reusability in Chef

06.05.2012

A few months ago, I started to invest heavily in Chef to automate the roll out of our applications and the supporting infrastructure. So far, so good, but it has not always been sunshine and puppy dogs. One of the major challenges has been trying to reuse cookbooks found on the community site, on GitHub, or even within our own organization. I frequently had to customize the cookbooks heavily, or rewrite them from scratch, to meet our needs.

Recently, a pattern has emerged in our internal cookbooks that seems to make reuse possible, even easy. So I thought I would send it out into the world to see if it is something that others would find useful. Here is how it evolved...

Phase 1: Cookbook as a big bash script

In the beginning, our cookbooks mostly felt like big bash scripts. Conceptually they would do something along the lines of:

bash "install mypackage" do
  cwd Chef::Config[:file_cache_path]
  code <<-EOH
wget http://example.com/mypackage-1.0.tar.gz
tar xzf mypackage-1.0.tar.gz
cd mypackage-1.0
./configure && make && make install
  EOH
  not_if { ::File.exist?("/usr/bin/mypackage") }
end

This was fast to write, but that is the best that can be said for the technique. It resulted in no reusability: the cookbook could only be applied to another node with exactly the same requirements.

Phase 2: Attributes to customize

We quickly ran into issues when we needed to customize the application based on the environment, at which point we introduced attributes to customize the application. Conceptually, our recipes started to look something like:

bash "install mypackage" do
  cwd Chef::Config[:file_cache_path]
  code <<-EOH
wget http://example.com/mypackage-#{node[:mypackage][:version]}.tar.gz
tar xzf mypackage-#{node[:mypackage][:version]}.tar.gz
cd mypackage-#{node[:mypackage][:version]}
./configure && make && make install
  EOH
  not_if { ::File.exist?("/usr/bin/mypackage") }
end

template "/etc/mypackage.conf" do
  source "mypackage.conf.erb"
  mode "0644"
  variables(
      :database => node[:mypackage][:database],
      :user => node[:mypackage][:user],
      :password => node[:mypackage][:password]
    )
end
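The attributes referenced in the recipe and template above get their defaults from the cookbook's attributes file. A minimal sketch, with hypothetical values, might look like:

```ruby
# attributes/default.rb -- hypothetical defaults for the mypackage cookbook.
# A role, environment or node can override any of these per installation.
default[:mypackage][:version]  = "1.0"
default[:mypackage][:database] = "mypackage"
default[:mypackage][:user]     = "mypackage"
default[:mypackage][:password] = "secret"
```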

Phase 3: Partition the recipes into units of reuse

Further down the track we found that different nodes would have different requirements. For example, one installation of mypackage would use a local database for authentication while another installation would authenticate against Active Directory. This resulted in us splitting recipes into multiple recipes based on the units of reuse. So our hypothetical "mypackage::default" recipe would be split into "mypackage::default", "mypackage::db_auth" and "mypackage::ad_auth", and each role would include only the particular recipes that it required.
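A role then carries just the recipes for its variant. A hypothetical role file for the Active Directory flavour might look like:

```ruby
# roles/mypackage_ad.rb -- hypothetical role for nodes that authenticate
# against Active Directory; only the relevant unit-of-reuse recipes appear.
name "mypackage_ad"
description "mypackage with Active Directory authentication"
run_list "recipe[mypackage::default]",
         "recipe[mypackage::ad_auth]"
```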

Phase 4: Resources to the rescue

Resources (via LWRPs) were the next abstraction that we introduced. This made it easy to repeat similar sets of complex actions in many recipes with minor differences in configuration. A typical scenario involves defining multiple queues in a message broker, such as this snippet using the glassfish cookbook:

glassfish_mq_destination "WildfireStatus queue" do
  queue "Fireweb.WildfireStatus"
  config('validateXMLSchemaEnabled' => true, 'XMLSchemaURIList' => 'http://...')
  host 'localhost'
  port 7676
end

glassfish_mq_destination "PlannedBurnStatus queue" do
  queue "Fireweb.PlannedBurnStatus"
  config('maxCount' => 1000, ...)
  host 'otherhost'
  port 7676
end

It should be noted that these resources can be composed, so that low level resources can be used to build up high level resources. So we actually have a glassfish_mq resource that uses the glassfish_mq_destination resource in its implementation:

glassfish_mq "MessageBroker Instance" do
  instance "MessageBroker"
  users(...)
  access_control_rules(...)
  config(...)
  queues(
    "Fireweb.WildfireStatus" => {'validateXMLSchemaEnabled' => true, 'XMLSchemaURIList' => 'http://...'},
    "Fireweb.PlannedBurnStatus" => {'maxCount' => 1000, ...}
  )
  port 7676
  admin_port 7677
  jms_port 7678
  jmx_port 8087
  stomp_port 8087
end
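For readers unfamiliar with LWRPs, the resource side of something like glassfish_mq_destination is declared in the cookbook's resources directory, with a matching provider supplying the actions. The following is a hypothetical sketch of such a declaration, not the actual glassfish cookbook code:

```ruby
# resources/mq_destination.rb -- hypothetical LWRP resource declaration.
# A provider under providers/ would implement the :create action.
actions :create
default_action :create

attribute :queue,  :kind_of => String,  :name_attribute => true
attribute :config, :kind_of => Hash,    :default => {}
attribute :host,   :kind_of => String,  :default => 'localhost'
attribute :port,   :kind_of => Integer, :default => 7676
```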

Phase 5: Data driven reuse

The use of resources allowed us to easily create customized cookbooks, but authoring the cookbooks could get monotonous: there was a lot of boilerplate code in each recipe. We reacted by storing a simplified description of the resources as data, then interpreting that description and invoking the resources it described. Sometimes the description was stored in data bags, sometimes it was synthesized by searching the Chef server, and sometimes it was synthesized by a rule layer.

For example, we discovered the set of queues to create in our message broker by searching the Chef server for nodes in the same environment that declared a requirement for message queues in their attributes (i.e. "openmq.destinations.queues"). When configuring the logging aspects of our systems, we search for a graylog2 node and ensure we get the production node in the production environment and the development node in all other environments. The .war files and their required customizations are declared in a data bag, and we query the data bag when populating our application server.
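Stripped of the Chef machinery, the synthesis step is essentially a hash merge over the queue declarations found on each matching node. A plain-Ruby sketch, with node data invented for illustration:

```ruby
# Each matching node contributes a hash of queue name => queue settings;
# the broker configuration is the union of all declarations.
declarations = [
  { "Fireweb.WildfireStatus"    => { "validateXMLSchemaEnabled" => true } },
  { "Fireweb.PlannedBurnStatus" => { "maxCount" => 1000 } },
]

# Merge every node's declarations into a single broker-wide queue set.
queues = declarations.reduce({}) { |acc, decl| acc.merge(decl) }
```

With real nodes, the declarations would come from a search against the Chef server rather than a literal array, but the merge is the same.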

Phase 6: Policy recipe + Attribute driven recipe

The data driven approach saved us a lot of work but it limited the amount of cookbook reuse; business rules were encoded into the way we stored, synthesized and discovered the data. It also meant that some of our core cookbooks changed every time we changed the way we abstracted our application configuration data.

Our most recent approach has been to pull the business specific policy code out into a separate cookbook and then include a recipe that uses the attributes defined on the current node to drive the creation of the infrastructure.

Our policy cookbooks tend to look something like the following.

node.override[:openmq][:extra_libraries] = ["http://example.org/repo/myext.jar"]

queues = {}
search(:node, 'openmq_destinations_queues:* AND NOT name:' + node.name) do |n|
  queues.merge!( n['openmq']['destinations']['queues'].to_hash )
end
queues.merge!( node['openmq']['destinations']['queues'].to_hash )

# Expose the merged queue set on the node so the attribute driven recipe sees it
node.override[:openmq][:destinations][:queues] = queues

include_recipe "glassfish::attribute_driven_mq"

This approach seems to have given us a way to create a reusable cookbook (glassfish in the case above), with the components that are less likely to be reused kept in a separate "policy" recipe. We are already using this successfully to manage an application server and a message broker, to configure monitoring and logging, and to apply firewall rules. I wonder if this is an approach that others have discovered, and whether it could be applied to other cookbooks.

Published at DZone with permission of its author, Peter Donald.
