Recently, I finished reading The Packer Book by James Turnbull.
When I started reading this book, I had no experience with Packer (but a high-level understanding of its use case). And so, I was looking for a book that would provide a good foundation.
I found chapter 3 (“First Steps with Packer”), chapter 4 (“Provisioning with Packer”), chapter 6 (“Testing Packer”), and chapter 7 (“Pipelines and Workflows”) of most value, because they built on what little I already knew and expanded on it by demonstrating how to test images and integrate them into pipelines (a key component of modern DevOps practice).
The only thing I wish this book (or similar resources like it) had, was examples specific to Microsoft Azure (since that’s the environment I’m working in). Most DevOps tooling tutorials, books, videos, courses, etc. all seem to focus on Amazon Web Services (AWS).
I’ve decided to share my highlights from reading this specific publication, in case the points that I found of note/interest will be of some benefit to someone else. So, here are my highlights (by chapter). Note that not every chapter will have highlights (depending on the content and the main focus of my work).
Chapter 1: Introduction to Packer
- Packer is a free and open-source image-building tool, written in Go and licensed under the Mozilla Public License 2.0. It allows you to create identical machine images, potentially for multiple target platforms, from a single configuration source. Packer supports Linux, Microsoft Windows, and Mac OS X, as well as other operating systems, and has support for a wide variety of image formats, deployment targets, and integrations with other tools.
- Packer allows you to create pipelines for building and deploying images, which in turn allows you to produce consistent, repeatable images.
- Packer is also portable. As it has a central-configuration construct—an image template—it allows you to standardize images across multiple target platforms. You can build images across cloud platforms—like Amazon and Google—that are consistent with images built on internal platforms like VMware or OpenStack, container environments like Docker and Kubernetes, and even individual development environments located on developer laptops.
- Packer allows you to bake an appropriate and testable portion of your configuration into images without the overhead and complexity of previous image-building solutions.
- You can also ensure that consistent configuration for things like patching, time, networking, security, and compliance are maintained across environments. For example, an infrastructure or a security team can use Packer to build images that are then shared with other groups to provide baseline builds that force cross-organizational standards.
Chapter 2: Installing Packer
- TIP On some Red Hat and Fedora-based Linux distributions there is another tool named packer installed by default. You can check for this using which -a packer. If you find this tool already present you should rename the new packer binary to an alternate name, such as packer.io.
- NOTE Packer requires Go 1.6 or later.
Chapter 3: First Steps With Packer
- Packer calls the process of creating an image a build. Artifacts are the output from builds. One of the more useful aspects of Packer is that you can run multiple builds to produce multiple artifacts.
- A build is fed from a template. The template is a JSON document that describes the image we want to build—both where we want to build it and what the build needs to contain.
- To determine what sort of image to build, Packer uses components called builders. A builder produces an image of a specific format—for example, an AMI builder or a Docker image builder.
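As a sketch, a minimal template with a single builder might look like the following (the region, AMI ID, and other values here are placeholders of my own, not examples from the book):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "example-base-image"
    }
  ]
}
```

Running packer build against a template like this would produce a single artifact: an AMI in the specified region.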
- User variables are useful in these three ways:
- As shortcuts to values that you wish to use multiple times.
- As variables with a default value that can be overridden at build time.
- For keeping secrets or other values out of your templates.
- User variables must be defined in the variables block. If you have no variables then you simply do not specify that block.
- If a variable is null then, for a template to be valid and executed, its value must be provided in some way when Packer runs.
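Putting those points together, a variables block might look like this sketch (the variable names are my own illustrations): a variable with a default, a null variable that must be supplied at run time, and one populated from an environment variable.

```json
{
  "variables": {
    "region": "us-east-1",
    "aws_access_key": null,
    "owner": "{{env `OWNER`}}"
  }
}
```

Elsewhere in the template these are referenced as, for example, {{user `region`}}, and any of them can be overridden at build time with something like packer build -var 'region=us-west-2' template.json.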
- TIP You can find a full list of the available functions in the Packer engine documentation.
- NOTE You can only use environment variables inside the variables block. This is to ensure a clean source of input for a Packer build.
- If you attempt to define the same variable more than once, the last definition of the variable will stand.
- Specify the builder you want to use using the type field, and note that each build in Packer has to have a name. In most cases this defaults to the name of the builder.
- However, if you need to specify multiple builders of the same type—such as if you’re building two AMIs—then you need to name your builders using a name key.
- NOTE If you specify two builders of the same type, you must name at least one of them. Builder names need to be unique.
- Our AMI name uses two functions: timestamp and clean_ami_name. The timestamp function returns the current Unix timestamp. We then feed it into the clean_ami_name function, which removes any characters that are not supported in an AMI name.
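A builder using both a name key and those two functions might look like this sketch (other required keys omitted for brevity; the values are placeholders, not from the book):

```json
{
  "builders": [
    {
      "name": "ubuntu-ami",
      "type": "amazon-ebs",
      "ami_name": "packer-example {{timestamp | clean_ami_name}}"
    }
  ]
}
```

The pipe feeds the output of timestamp into clean_ami_name, so each build gets a unique, AMI-safe name.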
- NOTE There’s also a uuid function that can produce a UUID if you want a more granular name resolution than time in seconds.
- Packer builders communicate with the remote hosts they use to build images through a series of connection frameworks called communicators. You can consider communicators as the “transport” layer for Packer. Currently, Packer supports SSH (the default) and WinRM (for Microsoft Windows) as communicators.
- Packer comes with a useful validation sub-command to help us with this. It performs syntax checking and validates that the template is complete.
- TIP You can use the packer inspect command to interrogate a template and see what it does.
- TIP You can also output logs in machine-readable form by adding the -machine-readable flag to the build process. You can find the machine-readable output’s format in the Packer documentation.
- NOTE If the build were for some other image type—for example, a virtual machine—then Packer might emit a file or set of files as artifacts from the build.
Chapter 4: Provisioning With Packer
- Provisioners execute actions on the image being built. These actions, depending on the provisioner, can run scripts or system commands, and execute third-party tools like configuration management.
- You can use one or more types of provisioners during a build—for example, you could use one provisioner to configure and install the requirements for another provisioner.
- Provisioners are defined in their own JSON array, provisioners, inside your Packer template.
- Each provisioner is defined as an element in the provisioners array. Every provisioner has one required key: type, the type of provisioner.
- The shell provisioner executes scripts and commands on the remote image being built. It connects to the remote image via SSH and executes any commands using a shell on that remote host.
- The shell provisioner can execute a single command, a series of commands, a script, or a series of scripts.
- TIP There are also two useful flags: -debug, which provides more information and interaction when building complex templates, and -on-error, which tells Packer what to do when something goes wrong. The -debug flag also disables parallelization and is more verbose. The Packer documentation has some more general debugging tips.
- NOTE The shell provisioner has a cousin called shell-local that runs commands locally on the host running Packer.
- TIP The inline commands are executed with a shebang of /bin/sh -e. You can adjust this using the inline_shebang key.
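A shell provisioner running inline commands, with the shebang adjusted, might look like this sketch (the packages installed here are my own example):

```json
{
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get -y install nginx"
      ],
      "inline_shebang": "/bin/sh -ex"
    }
  ]
}
```

Each string in the inline array is executed in sequence on the remote image.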
- In addition to a command or series of commands, you can run a script using the script key. The script key is the path to a script to be executed.
- The location of the script can be absolute or relative, depending on how it is specified. If it is specified relative, then it is relative to the location of the template file.
- NOTE The -x flag on the shebang is useful as it prints each command as it is executed, allowing you to see what’s happening when your script is run.
- By default, Packer executes the script by chmod’ing it into an executable form and running it. This means the script you write needs to have a shebang so that it can be executed by being directly called.
- TIP You can modify the method by which Packer executes scripts by changing the execute_command key, for example instead of making the script executable you could call it with another binary.
- The last execution method for the shell provisioner is to execute a series of scripts, expressed as an array in the scripts key. The scripts will be executed in the sequence in which they are defined.
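A scripts-array provisioner might look like this sketch (the script paths are hypothetical, relative to the template file):

```json
{
  "provisioners": [
    {
      "type": "shell",
      "scripts": [
        "scripts/install_base.sh",
        "scripts/install_app.sh"
      ]
    }
  ]
}
```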
- Packer provides a provisioner that allows us to provision files, like content or configuration files, into our host.
- TIP If your provisioning process requires a reboot or restart, you can configure Packer to handle delays, failures, and retries.
- The file provisioner uploads files, via Packer’s communicators (by default SSH), from our local host to the remote host. The file provisioner is usually used in conjunction with the shell provisioner: the file provisioner uploads a file, then the shell provisioner manipulates the uploaded file. (This two-step process primarily caters for file permission issues—Packer typically only has permission to upload files to locations it can write to, such as /tmp.) We can then execute the shell provisioner with escalated privileges—for example, by prefixing it with the sudo command.
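That two-step pattern might look like this sketch (the file names and destination are my own illustration):

```json
{
  "provisioners": [
    {
      "type": "file",
      "source": "files/nginx.conf",
      "destination": "/tmp/nginx.conf"
    },
    {
      "type": "shell",
      "inline": [
        "sudo mv /tmp/nginx.conf /etc/nginx/nginx.conf"
      ]
    }
  ]
}
```

The file provisioner writes to /tmp, where the SSH user has permission, and the shell provisioner then moves the file into place with sudo.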
- The file provisioner specifies a source and destination for the file. The source is defined absolutely or relative to the template.
- The destination is on the remote host and Packer must be able to write to it. Packer also can’t create any parent directories—you’ll either need to create those with a shell provisioner command or script prior to the upload, or upload to an existing directory.
- In addition to uploading single files, we can also use the file provisioner to upload whole directories. As with single file provisioning, the destination directory must exist. And upload behavior is much like rsync: the existence of a trailing slash determines the behavior of the upload.
- If neither the source nor the destination has a trailing slash, then the local directory will be uploaded into the remote directory.
- If the source has a trailing slash and the destination does not, then the contents of the directory will be uploaded directly into the destination.
- TIP You can also upload symbolic links with Packer but most provisioners will treat them as regular files.
- TIP If there isn’t a provisioner that meets your needs, you can add your own via a custom provisioner plugin.
Chapter 5: Docker and Packer
- To build Docker images, Packer uses the Docker daemon to run containers, runs provisioners on those containers, then can commit Docker images locally or push them up to the Docker Hub.
- When building Docker images, Packer and the Docker builder need to run on a host that has Docker installed.
- The type of builder we’ve specified is docker. We’ve specified a base image for the builder to work from; this is much like using the FROM instruction in a Dockerfile, using the image key.
- The type, as always, and the image are required keys for the Docker builder. You must also specify what to do with the container that the Docker builder builds.
- The Docker builder has three possible output actions. You must specify one:
- Export – Export an image from the container as a tarball, as above with the export_path key.
- Discard – Throw away the container after the build, using the discard key.
- Commit – Commit the container as an image available to the Docker daemon that built it, using the commit key.
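A Docker builder using the commit action might look like this sketch (the base image is my own placeholder):

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:16.04",
      "commit": true
    }
  ]
}
```

Swapping "commit": true for "discard": true, or for "export_path": "image.tar", would select one of the other two output actions.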
- Sometimes a provisioner isn’t quite sufficient and you need to take some additional actions to make a container fully functional. The docker builder comes with a key called changes that allows you to specify some Dockerfile instructions.
- NOTE The changes key behaves in much the same way as the docker commit --change command line option.
- You can’t change all Dockerfile instructions, but you can change the CMD, ENTRYPOINT, ENV, EXPOSE, MAINTAINER, USER, VOLUME, and WORKDIR instructions.
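A changes key applying a few of those instructions might look like this sketch (the instruction values are my own illustration):

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:16.04",
      "commit": true,
      "changes": [
        "USER www-data",
        "EXPOSE 80",
        "ENTRYPOINT [\"/usr/sbin/nginx\"]"
      ]
    }
  ]
}
```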
- Post-processors take actions on the artifacts, usually images, created by Packer. They allow us to store, distribute, or otherwise process those artifacts.
- For each post-processor definition, Packer will take the result of each of the defined builders and send it through the post-processors. This means that if you have one post-processor defined and two builders defined in a template, the post-processor will run twice (once for each builder), by default.
- There are three ways to define post-processors: simple, detailed, and in sequence. A simple post-processor definition is just the name of a post-processor listed in an array.
- A simple definition assumes you don’t need to specify any configuration for the post-processor. A more detailed definition is much like a builder definition and allows you to configure the post-processor.
- The last type of post-processor definition is a sequence. This is the most powerful use of post-processors, chained in sequence to perform multiple actions. It can contain simple and detailed post-processor definitions, listed in the order in which you wish to execute them.
- Any artifacts a post-processor generates are fed into the next post-processor in the sequence.
- NOTE You can only nest one layer of post-processors.
- TIP You can tag and send an image to multiple repositories by specifying the docker-tag and docker-push post-processors multiple times.
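A post-processor sequence chaining a detailed definition (docker-tag) with a simple one (docker-push) might look like this sketch (the repository name is a placeholder of my own):

```json
{
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "myorg/myapp",
        "tag": "latest"
      },
      "docker-push"
    ]
  ]
}
```

The inner array makes this a sequence: the tagged image produced by docker-tag is fed into docker-push.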
- There are also other post-processors that might interest you. You can find a full list in the Packer documentation.
Chapter 6: Testing Packer
- TIP Serverspec also supports running tests remotely. We could make use of the shell-local provisioner to run Serverspec in its SSH mode, which connects via SSH and executes the tests. This would save us uploading and installing anything on the image. This blog post discusses running Packer and Serverspec in this mode. Or you can see an example of the configuration in this chapter adapted for SSH in this Gist.
- NOTE If we wanted to tidy up after running our tests we could also uninstall the Serverspec packages we installed.
- TIP There are alternatives to Serverspec, like InSpec, Goss, or TestInfra that might also meet your testing needs.
- Serverspec uses the same DSL as RSpec. To write tests we define a set of expectations inside a specific context or related collection of tests, usually in an individual file for each item we’re testing.
- TIP There’s also the useful serverspec-init command, which initializes a set of new tests.
- We’re requiring a spec_helper. This helper loads useful configuration for each test and is contained in the spec directory in the spec_helper.rb file. Let’s see it now.
- Serverspec has two modes of operation—the one we’re using now, exec, runs all tests locally—and an SSH mode, which, as we mentioned earlier, allows us to run the tests remotely.
- We generally want to set a context for our tests; this groups all of the relevant tests together. To do this we use a describe block.
- Each assertion is wrapped in an it … end block. Inside that block we use the expect syntax to specify the details of our assertion.
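As a sketch, a spec file in that style might look like the following (this assumes the serverspec gem and a standard spec_helper are in place; the nginx resources here are my own examples, not ones from the book):

```ruby
require 'spec_helper'

describe 'nginx' do
  it 'is installed' do
    expect(package('nginx')).to be_installed
  end

  it 'is running and enabled' do
    expect(service('nginx')).to be_running
    expect(service('nginx')).to be_enabled
  end
end
```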
- NOTE Serverspec automatically detects the operating system of the host on which it is being run. This allows it to know what service management tools, package management tools, or the like need to be queried to satisfy a resource. For example, on Ubuntu, Serverspec knows to use APT to query a package’s state.
- TIP Check out Better Specs for some tips and tricks for writing better RSpec tests.
- TIP You can find the full list of available resources in the Serverspec documentation.
- The Rakefile requires rake and the rspec Rake tasks and then creates a new Rake task. That task executes any files ending in _spec in the spec directory as RSpec tests. It also ensures that if any of the tests fail it’ll return a non-zero exit code.
- TIP When testing like this, it’s useful to run Packer with the -debug flag enabled, which stops between steps and allows you to debug the server if any issues emerge.
Chapter 7: Pipelines and Workflows
- An override allows you to specify a varying action on a specific build. Each override takes the name of a builder, in our case amazon-ebs, and specifies one or more keys that are different for that provisioner when it is executed for that build.
- NOTE The execute_command key also has access to the Vars variable, which contains all of the available environmental variables.
- The only key constrains a post-processor to only run when specific builders are invoked. We can specify an array of builder names—in our case this is the docker builder—for which these post-processors will be executed. This prevents the amazon-ebs builder from unnecessarily triggering the post-processors for Docker images.
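Constraining a post-processor sequence to the docker builder might look like this sketch (the repository name is a placeholder of my own):

```json
{
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "myorg/myapp",
        "tag": "latest",
        "only": ["docker"]
      },
      {
        "type": "docker-push",
        "only": ["docker"]
      }
    ]
  ]
}
```

With this in place, a build run by the amazon-ebs builder would skip both post-processors.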
- There’s also a second key, except, that performs a similar but reversed operation. If you use the except key, post-processors will run for all builders except those listed in that key.
- If you ever need to only run one builder, there is another command line argument, -only, that you can pass to the packer build command.
- There’s a useful blog post and a tool called Bakery that show some good CI/CD pipeline ideas.
Chapter 8: Extending Packer
- Packer plugins are standalone applications written in Go that Packer executes and communicates with. They aren’t meant to be run manually—the Packer core provides an integration API that has the communication layer between the core and plugins.
- Packer’s plugins are Go applications. Their architecture is a little unusual. They are loaded at runtime as separate applications, and then IPC and RPC are used to communicate between the core and the various plugins. The core manages starting, stopping, and cleaning up after each plugin execution.
- They’re linked via an interface with the core, but have their own dependencies and are isolated from the process space of the Packer core.
- As Packer plugins are written in Go, it’s useful to get a good grounding in it. There are some resources available to help with that.
- Each type of plugin has a defined interface. These take advantage of Go interfaces to define what is required to instantiate each type of plugin. To build a plugin we define the interface required for each type of plugin:
- TIP The best way to understand how plugins work is to look closely at the existing plugins in the Packer core and the documentation for the specific plugin types.