redbluemagenta

a blog curated/written/whatever by christian 'ian' paredes

Acknowledgements

I just wanted to write a new blog post about all the folks who have helped me, mentored me, and otherwise given a ton of support during the past few years of my career. I’m incredibly thankful for everyone’s support over the years, and I couldn’t have made it to this point without you guys. Thanks.

If I forgot you, let me know.

  • John Keithly: for your technical support class at Ballard High School. You taught all of us a ton about how computers work and how to troubleshoot these infernal machines, and you let us break and fix Linux and Windows machines.
  • Richard Green: for taking the chance on having me administer a few of your UNIX boxes at the UW. It was my first real job where I had to do that, and I learned a ton from you about UNIX, scripting, and making sure shit doesn’t go up in flames.
  • Hoang Ngo: for your sage advice on surviving in the computing industry, for your knowledge on networking, and for being an overall awesome, humble guy to work with. Your dedication to the craft is pretty damn amazing, and I’m incredibly humbled by your huge swath of knowledge.
  • Michael Fox: for being one of the most awesome bosses ever, for shielding our team from politics, keeping us sane during those intense days of setting up old machines for the school, and for helping us climb out of the hole we were in.
  • Stephen Balukoff: though we squared off quite a bit during my tenure, you were probably the most knowledgeable person in UNIX that I’ve ever known, and you understood lots of the things I was going through at the time. I’m also incredibly humbled by the knowledge you had of databases, UNIX, and all things systems administration.
  • Benjamin Krueger: you are, and always have been, an awesome mentor - you’ve helped me a lot in my career, taught me a lot about systems administration, and showed me the awesomeness of configuration management.
  • Lee Damon: for also being an awesome mentor - you’ve seen a lot of what the tech industry has to offer, and you’ve definitely given me a lot of advice on my career as I’ve advanced from a junior systems administrator to more of a mid-level systems administrator. Thanks so much.
  • Mike Sawicki: though we haven’t really worked together much, your insight and advice have helped me a lot. You’ve also provided a ton of awesome career advice and have shown me a lot of awesome shit in the industry, too.
  • David Lemphers: for being an awesome mentor. We differed a lot in how we approached problems, but it was damn healthy, and those were some of the better discussions I had while working through computing problems. Further, you’ve shown me a lot of awesome ways to create distributed systems.

Again, let me know if I’ve missed anyone. (If you’re not listed, it doesn’t mean I don’t appreciate you - I certainly do! I’ve been blessed with knowing so many awesome and knowledgeable folks in my short career - everyone’s been awesome, and I can’t thank everyone enough for everything up to this point.)

Amazon

On the 27th of February, I start at Amazon as a systems engineer on the AWS CloudFront team.

I can’t even say how excited I am - I’ve been wanting to be at Amazon for the longest time now, and I finally did it. I’m at fucking Amazon, working with a large installation that has way more machines than I could even fathom.

I’ll continue to blog here, but when it comes to work-related things, I’ll have to keep a tight lip. Sorry, folks. I’m not sure what this means exactly in terms of the kinds of posts I’ve made so far (mostly reviews and quick guides of various software), but I’d imagine those types of posts will likely be a lot less frequent. What I’d like to do instead is talk about other things that I might be doing in my off time - this means board games, other OSS software that I’m working on, music, and movies.

Anyway, this doesn’t mean the death of this blog - I hope this actually spurs more discussion on a wider variety of things.

Cheers, gl, hf, etc. etc.

RunDeck

There are times when we need to enforce ad hoc control over our machines. This blog post by Alex Honor does a way better job than I could of explaining why we still need to worry about ad hoc command execution, but a quick summary would probably be the following:

  1. Sometimes, we just need to execute commands ASAP - unplanned downtime and ad hoc information gathering both spur this activity.
  2. We need to orchestrate sets of commands across many machines - getting your DB servers up and prepped before your web servers come online is one example of this.

RunDeck can handle these two scenarios perfectly. It’s a central job server that can execute any given set of commands or scripts across many machines. You can define machines via XML, or use web servers that respond with a set of XML machine definitions (see chef-rundeck for a good example of this.) You can code in whatever language you want for your job scripts - RunDeck will run them on the local machine if the execute bit is set (or, if you’re running jobs across many machines, it will SCP them onto the machine and execute them via SSH.)
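For a rough idea, a hand-written node definition might look something like this (the hostnames, usernames, and tags here are all made up; check the RunDeck docs for the exact resource format your version expects):

```xml
<project>
  <node name="web1" hostname="web1.example.com"
        username="rundeck" tags="web" osFamily="unix"
        description="frontend web server"/>
  <node name="db1" hostname="db1.example.com"
        username="rundeck" tags="db" osFamily="unix"
        description="database server"/>
</project>
```

Point a project at a file like this (or at a URL that serves the same XML, which is exactly what chef-rundeck does), and those nodes show up as execution targets.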

You can define multiple job steps within a job (including using other jobs as a job step) - thus, you can, for example, have a job step that pulls down your web application code from git across all web application servers, another job step that pulls down dependencies, then one more to start the web application across all of those machines, all of which is coordinated by the RunDeck server. As another example, you can also coordinate scheduled jobs that provision EC2 servers at a certain time of the day, run some kind of processing job across all of those machines, then shut all of them down as soon as the job is finished.

There’s another way these jobs can be started, beyond manual button pushing and cron schedules: they can also be started by hitting a URI via the provided API with an auth token. To expand on the previous example: if, say, you don’t have a reliable way to check whether the processing is finished from RunDeck’s point of view, you can have some kind of job fired from the workers that hits the API and tells RunDeck to run a ‘reap workers’ job.
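As a sketch, the worker-side trigger could be as simple as a curl call (the server address, job ID, and token are all made up, and the API version prefix may differ between RunDeck releases):

```sh
# Tell RunDeck to kick off the 'reap workers' job once processing is done.
curl -H "X-Rundeck-Auth-Token: SOMETOKEN" \
  "http://rundeck.example.com:4440/api/1/job/JOB-ID-HERE/run"
```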

mcollective vs. RunDeck?

A few people have asked me how this compares with mcollective.

I would probably say that they’re actually complementary - mcollective is incredibly nice, in that it has a bit of a better paradigm for executing commands across a fleet of hosts (pub/sub vs. SSH in a for loop.) You can actually use mcollective as an execution provider in RunDeck, by simply specifying ‘mco’ as the executor in the config file (here’s a great example of this.) RunDeck is nice, in that you can use arbitrary scripts and it’ll happily use them for execution - plus, it has a GUI, which makes it nice if you need to provide a ‘one button actiony thing’ for anyone else to use. RunDeck can also run things ‘out of band’ (relative to mcollective) - for example, provisioning EC2 machines is an ‘out of band’ activity (though you can certainly implement this with mcollective as well, mcollective’s execution distribution model doesn’t seem to ‘fit’ well with actions that are meant to be run on a central machine.)

But again, you can tie mcollective together with RunDeck, and they’ll both be happy. I can see right away that the default SSH executor will likely be a pain in the ass once you get past 100 machines or so.

Conclusion

RunDeck is pretty damn nice. We’ve been putting it through its paces in our staging environment, and it’s held up very nicely with a wide variety of tasks that we’ve thrown at it. The only worries I have are the following:

  1. The default SSH command executor will likely start sucking once you get past 100 machines or so.
  2. Since it can execute whatever you throw at it, you might end up with yet another garbled mess of shell and Perl scripts. This is probably better solved with team discipline, however.
  3. There doesn’t seem to be a clear way to automate configuration changes for RunDeck - thus, be sure to keep good backups for now. (This is probably because of my own ignorance - the API is pretty full featured, and I believe you can add new jobs via it, but it would’ve been awesome to be able to just edit job definitions in a predictable location on disk.)

Despite these worries, I highly recommend RunDeck - it’s helped us quite a bit so far, it’s quick to get set up, and quick to start coding against for fleet-wide ad hoc control.

Transitioned to Octopress

So, I’ve transitioned from Jekyll to Octopress. It comes with a really nice default HTML5 template that I’ve edited a bit (changed some fonts, colors, etc.) and has a decent fallback for mobile devices, both of which were lacking in my last site design with Jekyll (if anyone’s interested, the old site is still on my GitHub account.)

What’s awesome about Octopress, anyway?

  • It comes with a nice set of Rake tasks for handling common tasks, such as generating a new site and rsync’ing the pages over to a server. Also comes with Rake tasks for creating new posts and pages.
  • The repository is very well laid out, with nice separation between themes and content.
  • The default theme comes with SCSS files, which are way better to deal with than regular old CSS.
  • Comes with a sane set of plugins that make things look awesome, such as code snippets, image embedding, and HTML5 video embedding (with Flash video fallback.)
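For instance, the day-to-day workflow with those Rake tasks looks roughly like this (task names from Octopress 2.x; the post title is made up):

```sh
rake new_post["my shiny new post"]  # creates a stub under source/_posts
rake generate                       # builds the static site into public/
rake preview                        # serves the site locally for review
rake deploy                         # pushes it out (rsync, GitHub Pages, etc.)
```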

Anyway, the site looks way better now that it’s on Octopress, and it’s a bit easier to maintain as well.

Converting Puppet Modules to Chef Cookbooks

Over a couple of caffeine-induced nights, I’ve converted a handful of our Puppet modules over to Chef cookbooks (by hand) to see how it’d all turn out. We’ve not yet decided whether we will actually use Chef in production, but I figure that if I’ve lowered the barrier to entry, the decision will be much easier (we’d still have to weigh whether the technical merits justify putting in time to transition everything over to Chef, but at least the initial cost of converting things isn’t as painful as it would’ve been.)

Here’s a sample snippet of Puppet code that I’ll be referring to for the rest of the article:

class foobar {
  $version = "1.0.0"
  $tarball = "foobar-${version}.tar.gz"
  $url     = "example.com/${tarball}"

  file { "/opt/foobar":
    ensure => directory,
    mode   => 0755,
  }

  exec { "download tarball":
    path    => "/bin:/usr/bin:/sbin:/usr/sbin",
    cwd     => "/opt/foobar",
    command => "wget ${url}",
    creates => "/opt/foobar/${tarball}",
  }

  exec { "extract tarball":
    path    => "/bin:/usr/bin:/sbin:/usr/sbin",
    cwd     => "/opt/foobar",
    command => "tar zxvf ${tarball}",
    creates => "/opt/foobar/foobar-${version}",
    before  => File["/opt/foobar/foobar-${version}/config.cfg"],
  }

  file { "/opt/foobar/foobar-${version}/config.cfg":
    content => template("foobar/config.cfg.erb"),
  }

  file { "/opt/foobar/foobar-${version}/staticfile":
    source => [ "puppet:///modules/foobar/staticfile.${hostname}",
                "puppet:///modules/foobar/staticfile" ],
  }
}

Puppet resources port pretty well over to Chef resources - you simply have to make sure that you’re using the right Chef resource when rewriting it (for example, you use “file” in Puppet for templates, regular files transferred from the server, regular files managed only locally, and directories - these are all separate things in Chef, with resources such as “cookbook_file”, “template”, “file”, “directory”, and “remote_directory.”)

As for variables, this might be a bit of a judgment call - I’ve mostly thrown tunables into node attributes, and leave temporary recipe-specific variables where they were in Puppet. In this case, I’d likely throw $version into “node[:foobar][:version]” as an attribute - the tarball name, if the naming convention doesn’t change between versions, could probably stay inside the recipe itself. The URL might be a tunable too, and so could probably be stuffed into “node[:foobar][:url]” - what if you had mirrors of these files in each geographical area?

Here’s what our code will look like so far, given the above:

# foobar/attributes/default.rb:
default[:foobar][:version] = "1.0.0"
default[:foobar][:tarball] = "foobar-#{node[:foobar][:version]}.tar.gz"
default[:foobar][:url]     = "example.com/#{node[:foobar][:tarball]}"
# Replaces the following Puppet code:
# $version = "1.0.0"
# $tarball = "foobar-${version}.tar.gz"
# $url     = "example.com/${tarball}"

# foobar/recipes/default.rb:

directory "/opt/foobar" do
  mode "0755"
end
# Replaces the following Puppet code:
# file { "/opt/foobar":
#   ensure => directory,
#   mode   => 0755,
# }

Now, what do we do with the exec resources? There are still a few things that make sense to port directly to Chef’s “execute” resource - things like updating database users, running a program to alter configuration (think “make” for creating sendmail database files), or extracting tarballs. However, we do have one exec resource that can be replaced by a regular Chef resource - the one that downloads a tarball onto local disk. This is supplanted by the “remote_file” resource. Keep in mind that we likely do not want to re-download the file on each Chef run (in Puppet, we guarantee this with some kind of command in the “onlyif” or “unless” statement - test -f? The downloaded file’s md5sum matches? etc.) Probably the simplest way to deal with this is to use a SHA256 hash in the remote_file resource - that’s what we’ll do in this next code example:

# foobar/recipes/default.rb:
remote_file "/opt/foobar/#{node[:foobar][:tarball]}" do
  source node[:foobar][:url]
  mode "0644"
  checksum "deadbeef"
end
# Replaces the following Puppet code:
# exec { "download tarball":
#   path    => "/bin:/usr/bin:/sbin:/usr/sbin",
#   cwd     => "/opt/foobar",
#   command => "wget ${url}",
#   creates => "/opt/foobar/${tarball}",
# }

There isn’t a way to do a simple existence check for the file before attempting to download it (like what we’ve done in our Puppet code above), but this is a bit more robust for dealing with remote files.
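If you need to generate that checksum in the first place, Ruby’s standard library can do it - here’s a quick sketch (the tarball filename is hypothetical):

```ruby
require 'digest'

# Print the SHA256 checksum of a tarball so it can be pasted into the
# remote_file resource's "checksum" attribute.
def tarball_checksum(path)
  Digest::SHA256.file(path).hexdigest
end

if File.exist?("foobar-1.0.0.tar.gz")
  puts tarball_checksum("foobar-1.0.0.tar.gz")
end
```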

The tarball extraction exec resource in Puppet is easy to port over, and we’ll just go ahead and do a straight translation to Chef:

# foobar/recipes/default.rb:

execute "extract tarball" do
  cwd "/opt/foobar"
  command "tar zxvf #{node[:foobar][:tarball]}"
  creates "/opt/foobar/foobar-#{node[:foobar][:version]}"
end

# We'll go ahead and throw in the template, too:

template "/opt/foobar/foobar-#{node[:foobar][:version]}/config.cfg" do
  source "config.cfg.erb"
end

# Replaces the following Puppet code:
# exec { "extract tarball":
#   path    => "/bin:/usr/bin:/sbin:/usr/sbin",
#   cwd     => "/opt/foobar",
#   command => "tar zxvf ${tarball}",
#   creates => "/opt/foobar/foobar-${version}",
#   before  => File["/opt/foobar/foobar-${version}/config.cfg"],
# }
#
# file { "/opt/foobar/foobar-${version}/config.cfg":
#   content => template("foobar/config.cfg.erb"),
# }

Since Chef executes things in order, we don’t have to specify that the tarball extraction step must come before writing the template file - we simply place it before the template creation step.

Also, Chef doesn’t require you to specify a global “Path” variable at the top level of your class hierarchy - it uses the PATH variable sourced in the shell that’s running Chef. If you need to override this, you can do so by setting the “path” attribute in the execute resource.
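For example (the sendmail-style command here is made up, and I’m assuming the execute resource’s “path” attribute behaves as documented for Chef of this era - treat it as a sketch):

```ruby
# Rebuild sendmail's access database with an explicit search path,
# instead of relying on whatever PATH chef-client inherited.
execute "rebuild access.db" do
  cwd "/etc/mail"
  command "makemap hash access < access"
  path ["/usr/sbin", "/usr/bin", "/bin"]
end
```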

Lastly, dealing with the regular static file at the end of the Puppet module is a bit simpler in Chef than in Puppet. Sometimes we need specific files for specific hosts or operating systems; in Chef, we handle this by throwing the file into different directories, instead of coming up with a naming convention and trying to stick with it throughout Puppet module development. In this case, we have file specificity broken up by host, so we can create the following folders and files in our Chef cookbook:

foobar/files/host-host1.example.com/staticfile
foobar/files/host-host2.example.com/staticfile
foobar/files/default/staticfile

We then write the following in our recipe:

# foobar/recipes/default.rb:

cookbook_file "/opt/foobar/foobar-#{node[:foobar][:version]}/staticfile" do
  source "staticfile"
end
# Replaces the following Puppet code:
# file { "/opt/foobar/foobar-${version}/staticfile":
#   source => [ "puppet:///modules/foobar/staticfile.${hostname}",
#               "puppet:///modules/foobar/staticfile" ],
# }

All in all, it’s not too tough to port things over to Chef - there’s no automated Puppet-to-Chef converter (nothing like chef2puppet going the other way), but Chef resources map pretty well to Puppet resources, and so the task isn’t nearly as hard as it could have been. Chef has a bit more structure in place for stashing tunables separately from what actually executes, though that can feel a bit more constraining than whatever structure a team might settle on for its own Puppet modules.

If anyone has any questions about this, don’t hesitate to let me know via email or through the comments section below.
