Packer 101
Going from 0 to having a custom-built Ubuntu AWS AMI.
I was in the midst of modernizing a client's existing AWS infrastructure when the client requested that one of their core server clusters be upgraded.
Since a full migration onto a container orchestrator was still months away, I deemed it worthwhile to make at least some improvements in the meantime, in the form of custom AWS AMIs. As the development team was relatively small, my goal was to lessen their need to fiddle with the servers, be it by SSHing onto them or by manually setting up the needed monitoring tools. I also provided a fully automated deployment, so that running an Ansible playbook was no longer necessary, at least for these servers.
Show me the code!
For the actual implementation, the prebaked VM needed
- a CodeDeploy agent, for automated deployments, and
- a Prometheus node-exporter, given the planned EKS migration.
We used Packer as our tool of choice. (If you're fully committed to AWS, you can also use 'EC2 Image Builder'.)
Presuming you have asdf installed, the zeroth step is to get Packer:
asdf plugin-add packer https://github.com/asdf-community/asdf-hashicorp.git
asdf install packer 1.8.0
asdf local packer 1.8.0
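To confirm that the pinned version is the one now on your PATH, a quick check:
packer version
The output should report Packer v1.8.0.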
In the same directory, create an empty file with the .pkr.hcl extension. We'll add the required packer, source, and build blocks to this file in the following paragraphs.
In the packer block, pin to the same version you just installed.
packer {
  required_version = "~> 1.8.0"
}
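Optionally, the same block can also pin the Amazon plugin via required_plugins; a minimal sketch (the version constraint is an arbitrary choice, and Packer 1.8 still bundles the Amazon plugin, so this is not strictly required here):
packer {
  required_version = "~> 1.8.0"
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}
If you go this route, run packer init . once so the plugin gets downloaded.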
For easier maintenance, extract the important configs into locals:
locals {
  aws_region    = "eu-west-1"
  instance_type = "t3a.micro"
  # If more than a single such AMI needs to be built each day, make the AMI's time-signature more granular.
  today_date = formatdate("YYYY-MM-DD", timestamp())
  ami_owner_id_Canonical = "099720109477"
  ami_ubuntu_version     = "focal-20.04"
  ami_ubuntu_arch        = "amd64"
  ami_name               = "ubuntu-${local.ami_ubuntu_version}-${local.ami_ubuntu_arch}-${local.today_date}"
}
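Regarding the granularity note above: if several such AMIs per day are plausible, a sketch of a finer-grained time-signature could include hours and minutes ("hh" being the 24-hour clock in Packer's formatdate):
today_date = formatdate("YYYY-MM-DD-hhmm", timestamp())  # e.g. 2022-04-24-1530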
The source block is pretty much standard. Setting a few well-chosen tags will come in handy later, such as when provisioning servers of mixed architectures (an example follows the source block).
source "amazon-ebs" "custom" {
ssh_username = "ubuntu"
region = local.aws_region
instance_type = local.instance_type
ami_name = local.ami_name
force_delete_snapshot = true
launch_block_device_mappings {
delete_on_termination = true
device_name = "/dev/sda1"
encrypted = false
iops = 3000
throughput = 125
volume_size = 8
volume_type = "gp3"
}
source_ami_filter {
filters = {
name = "ubuntu/images/*ubuntu-${local.ami_ubuntu_version}-${local.ami_ubuntu_arch}-server-*"
root-device-type = "ebs"
virtualization-type = "hvm"
}
most_recent = true
owners = [local.ami_owner_id_Canonical]
}
tags = {
Name = local.ami_name
ManagedBy = "Packer ${packer.version}"
Base_AMI_ID = "{{ .SourceAMI }}"
Base_AMI_Name = "{{ .SourceAMIName }}"
}
}
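To illustrate the mixed-architectures point: the config above doesn't do this, but you could add a hypothetical Arch tag (e.g. Arch = local.ami_ubuntu_arch) alongside the others and later pick the right image per architecture, for instance with the AWS CLI:
# Assumes the hypothetical Arch tag was baked into the AMI
aws ec2 describe-images --owners self \
  --filters "Name=tag:Arch,Values=amd64" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' --output text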
Finally, the build block. This one contains two shell provisioners and an Ansible playbook that installs the AWS CodeDeploy agent. Note that a provisioner is named from the POV of the provisioned server: a -local provisioner, such as ansible-local, runs on the server itself (see the sketch after the build block).
build {
  sources = ["source.amazon-ebs.custom"]

  provisioner "shell" {
    execute_command = "echo 'packer' | sudo -S env {{ .Vars }} {{ .Path }}"
    script          = "${path.root}/scripts/base.sh"
    max_retries     = 1
    timeout         = "5m"
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | sudo -S env {{ .Vars }} {{ .Path }}"
    script          = "${path.root}/scripts/prometheus_node_exporter.sh"
    max_retries     = 1
    timeout         = "5m"
  }

  provisioner "ansible" {
    playbook_file = "${path.root}/scripts/codedeploy.yml"
    extra_arguments = [
      "--extra-vars",
      "aws_region=${local.aws_region}"
    ]
  }
}
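For contrast, a hypothetical ansible-local variant of the last provisioner is sketched below; it would run Ansible on the instance itself, so Ansible would have to be installed there first (e.g. by an earlier shell provisioner):
provisioner "ansible-local" {
  playbook_file   = "${path.root}/scripts/codedeploy.yml"
  extra_arguments = ["--extra-vars", "aws_region=${local.aws_region}"]
}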
Supporting scripts
Per the above .hcl, I've placed the Bash scripts and the Ansible playbook into a sibling scripts directory.
The base.sh script provides unattended APT upgrades:
#!/usr/bin/env bash
set -Eeu
export DEBIAN_FRONTEND="noninteractive"
apt_conf_periodic="/etc/apt/apt.conf.d/10periodic"
apt_conf_unattended_upgrades="/etc/apt/apt.conf.d/50unattended-upgrades"
cat << END > "$apt_conf_periodic"
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
END
cat << END > "$apt_conf_unattended_upgrades"
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
"${distro_id}ESM:${distro_codename}";
};
Unattended-Upgrade::Package-Blacklist {
};
Unattended-Upgrade::DevRelease "false";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
END
chown -R root:root "$apt_conf_periodic"
chown -R root:root "$apt_conf_unattended_upgrades"
service unattended-upgrades restart
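If you want to sanity-check the generated configuration on a running instance, unattended-upgrades ships a dry-run mode:
sudo unattended-upgrade --dry-run --debug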
prometheus_node_exporter.sh sets up and starts Prometheus' node/host exporter:
#!/usr/bin/env bash
set -Eeuo pipefail
export DEBIAN_FRONTEND="noninteractive"
###################################
# Config
node_exporter_arch='linux-amd64'
node_exporter_version='1.3.1'
###################################
new_user=prometheus
sudo useradd --no-create-home --shell /bin/false "$new_user"
tar_file="node_exporter-${node_exporter_version}.${node_exporter_arch}.tar.gz"
wget "https://github.com/prometheus/node_exporter/releases/download/v${node_exporter_version}/${tar_file}"
tar xvfz "${tar_file}"
base_dir="${tar_file%.tar.gz}"
dir="/etc/prometheus"
mkdir -p "$dir"
mv "$base_dir/node_exporter" "$dir"
rm "$tar_file"
chown -R "$new_user":"$new_user" "$dir"
cat << END > /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
ExecStart=/etc/prometheus/node_exporter
[Install]
WantedBy=default.target
END
systemctl daemon-reload
systemctl start node_exporter
systemctl enable node_exporter
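Once the service is up, node_exporter listens on its default port 9100, so a quick smoke test on the instance is:
curl -s http://localhost:9100/metrics | head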
And codedeploy.yml is an Ansible playbook that installs the CodeDeploy agent:
---
- hosts: all
  user: ubuntu
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: Install Dependencies
      apt:
        pkg: ['ruby-full', 'wget']
        state: present
        update_cache: true

    - name: Gather package facts
      package_facts:
        manager: auto

    - name: Install Code Deploy Agent
      block:
        - name: Fetch the CodeDeploy install script
          get_url:
            url: "https://aws-codedeploy-{{ aws_region }}.s3.{{ aws_region }}.amazonaws.com/latest/install"
            dest: /tmp/codedeploy-install
            mode: 0700

        - name: Run the installation script
          become: true
          command: /tmp/codedeploy-install auto
      when: "'codedeploy-agent' not in ansible_facts.packages"
Once you have all the above in place, building your AMI is merely a matter of:
input='a.pkr.hcl'
packer fmt "$input" \
  && packer validate "$input" \
  && packer build "$input"
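If you also want the resulting AMI ID in machine-readable form (say, to feed into Terraform or a deploy script), one option, sketched here, is Packer's manifest post-processor added inside the build block:
post-processor "manifest" {
  output     = "manifest.json"
  strip_path = true
}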
You can manually verify that the desired services were indeed provisioned. Instantiate an EC2 instance from the newly minted AMI and run:
service unattended-upgrades status
service codedeploy-agent status
Closing tips
The most general tip is to enable debug logs when you encounter an issue, by setting PACKER_LOG=1 before a Packer command. More specific tips are below, each in its own section.
“Builds finished but no artifacts were created.”
If you encounter this error, recheck whether a source is actually being referenced in your build block:
source "amazon-ebs" "custom" { ...
}
build {
sources = ["source.amazon-ebs.custom"] ...
Packer console
If you wish to use packer console for anything more than mere variable evaluation, such as
echo {{timestamp}} | packer console
you have to start it with the --config-type=hcl2 flag. Only then will you also be able to evaluate HCL expressions:
> formatdate("YYYY-MM-DD", timestamp())
2022-04-24
Ansible and “unrecognized arguments”
You might encounter this in the logs, leading to Packer exiting.
amazon-ebs.custom: ansible-playbook: error: unrecognized arguments:
/path/to/codedeploy.yml
If Ansible gives you trouble, verify that you're quoting extra-vars correctly; per the docs, "arguments should not be quoted".
# not OK
extra_arguments = [
  "--extra-vars \"foo=${local.bar}\""
]

# OK
extra_arguments = [
  "--extra-vars", "foo=${local.bar}"
]
Next up
In other articles, we'll see how we can make use of the now-baked-in node-exporter for monitoring with Prometheus, and how we can leverage the AWS CodeDeploy agent for automated deployments onto EC2 instances.