Provisioning this Blog on DigitalOcean

I recently rewrote the infrastructure for this blog, which was long overdue. My previous server had fallen firmly on the "pet" side of pets vs. cattle, so it was difficult to manage, upgrade, or rebuild.

Disclaimer: I've included a referral link for DigitalOcean below.

With the cattle-over-pets metaphor in mind, I started over using Terraform to provision the infrastructure on DigitalOcean. I really enjoy using DigitalOcean for personal projects, as it is easy to use and has predictable pricing.

Here are a few of the goals I had going into this project:

  • The process needs to be completely automated
  • I should be able to destroy and recreate a server without losing anything
  • Software installation, patching, reboots, and keeping services running should happen without manual intervention

Basically, I'm trying to make it as easy for my future self as possible.

I'm not going to include everything in this post, but I want to highlight a few key parts of the setup:

Terraform Setup

To start with, we need the DigitalOcean provider to be able to interact with their API.

provider "digitalocean" {
  token = var.digitalocean_token
  version = "~> 1.0"
}
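
The token itself comes from an input variable. For completeness, here's roughly what that declaration looks like (the description is my own; your variables file may differ):

variable "digitalocean_token" {
  description = "DigitalOcean API token used by the provider"
  type        = string
}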

Instead of storing the Terraform state on my local machine, where it might get lost, I created a bucket on Google Cloud Storage. This keeps the state in one safe, central place and lets me access it from different machines.

terraform {
  backend "gcs" {
    bucket  = "davebauman-devops"
    prefix  = "davebauman.io/terraform/state"
  }
}

Volume Storage

One of my goals was to be able to delete and recreate the VMs without losing anything, and the best way to do that is to keep persistent data on a block storage Volume. My 5GB volume costs me $0.50 a month, so it's pretty affordable.

resource "digitalocean_volume" "data_volume" {
  region                  = var.do_region
  name                    = "dbv1"
  description             = "davebauman.io data volume"
  size                    = 5
  initial_filesystem_type = "ext4"

  lifecycle {
    prevent_destroy = true
  }
}

I turned on prevent_destroy to avoid any accidental deletes.

Droplet

Next up we have the Droplet (compute VM):

resource "digitalocean_droplet" "web" {
  name       = "davebauman-io"
  image      = "fedora-31-x64"
  size       = "s-1vcpu-1gb"
  region     = "nyc1"
  ipv6       = true
  monitoring = false

  ssh_keys = [
    digitalocean_ssh_key.key1.fingerprint,
    digitalocean_ssh_key.key2.fingerprint,
  ]

  user_data = templatefile("files/cloud-init.tpl", {
    key-1 = file("files/key1.pub")
    key-2 = file("files/key2.pub")
    ssh_port = var.ssh_port
  })
}

resource "digitalocean_volume_attachment" "data_volume_attachment" {
  droplet_id = digitalocean_droplet.web.id
  volume_id  = digitalocean_volume.data_volume.id
}

This does a few things: it creates a new Fedora VM in the smallest size, attaches our data volume through a separate volume attachment resource, and points user_data at a cloud-init template that handles some initial setup for the VM.
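
The ssh_keys list references key resources defined elsewhere in my config. They look roughly like this, reusing the same public key files the cloud-init template reads (the name values here are illustrative):

resource "digitalocean_ssh_key" "key1" {
  name       = "key1"
  public_key = file("files/key1.pub")
}

resource "digitalocean_ssh_key" "key2" {
  name       = "key2"
  public_key = file("files/key2.pub")
}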

I actually used Ansible to provision the software side, but before I could even run Ansible, I needed to do some setup. Here's what the cloud-init.tpl file looks like:

#cloud-config
users:
  - name: deploy
    ssh-authorized-keys:
      - ${key-1}
      - ${key-2}
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: wheel
    shell: /bin/bash
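# DigitalOcean exposes attached volumes under /dev/disk/by-id/ using the volume name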
mounts:
  - [ /dev/disk/by-id/scsi-0DO_Volume_dbv1, /mnt/dbv1, "ext4", "defaults,nofail,discard", "0", "0"]
runcmd:
  # Update SSH settings
  - sed -i -e '/Port 22/c\Port ${ssh_port}' /etc/ssh/sshd_config
  - sed -i -e '/PermitRootLogin/c\PermitRootLogin no' /etc/ssh/sshd_config
  - sed -i -e '$aAllowUsers deploy' /etc/ssh/sshd_config
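  # SELinux on Fedora blocks sshd from binding to non-default ports, so label the new one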
  - dnf install -y policycoreutils-python-utils
  - semanage port -a -t ssh_port_t -p tcp ${ssh_port}
  - systemctl restart sshd
  # Assign permissions
  - chown deploy:deploy /mnt/dbv1

Cloud-init processes this file automatically on the VM's first boot and does the following:

  • Creates a new deploy user with the SSH keys mentioned above
  • Mounts the volume at /mnt/dbv1 automatically
  • Moves SSH to a new port (including the SELinux rule for it) and prevents the root user from logging in

This is just enough to slightly secure the box and give me the access I need to run Ansible to finish the setup.

What Else?

I have a few other things not mentioned here: I configured the DigitalOcean firewall to restrict inbound and outbound access to my VM, I uploaded my SSH public keys to DigitalOcean, and I'm managing my DNS via DigitalOcean as well, so the domain and records are scripted out too.
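
For reference, here's a rough sketch of what those resources look like (the rules and records below are illustrative placeholders, not my actual configuration):

resource "digitalocean_firewall" "web" {
  name        = "davebauman-io"
  droplet_ids = [digitalocean_droplet.web.id]

  # Allow HTTPS in from anywhere
  inbound_rule {
    protocol         = "tcp"
    port_range       = "443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # Allow the custom SSH port (could be narrowed to known IPs)
  inbound_rule {
    protocol         = "tcp"
    port_range       = var.ssh_port
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "tcp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}

resource "digitalocean_domain" "main" {
  name = "davebauman.io"
}

resource "digitalocean_record" "apex" {
  domain = digitalocean_domain.main.name
  type   = "A"
  name   = "@"
  value  = digitalocean_droplet.web.ipv4_address
}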

The other major thing I left out is the API tokens. I had to create a DigitalOcean API token for the Terraform provider to use; it's referenced at the very top in the provider block. Since I used GCS for the Terraform state, I also had to provide a GCP credential file.
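
Neither secret lives in the repository. One way to supply the DigitalOcean token is a git-ignored terraform.tfvars file (the value here is obviously a placeholder):

# terraform.tfvars -- kept out of version control
digitalocean_token = "REPLACE_WITH_REAL_TOKEN"

The GCS backend can pick up the Google credential file from the GOOGLE_APPLICATION_CREDENTIALS environment variable, so that one never needs to appear in the config at all.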

Finale

The big upgrades here for me are the external volume and the cloud-init setup.  While a volume doesn't replace my backup strategy, it would make it trivial to recreate the droplet without concern.  And the cloud-init doesn't do much, but having those core tasks handled immediately is very satisfying.

In a future post I'll go over my Ansible setup, which takes over after Terraform finishes with the infrastructure.  OS configuration, software setup, patching, etc. is all handled by Ansible.