<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Dave Bauman]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://blog.davebauman.io/</link><image><url>https://blog.davebauman.io/favicon.png</url><title>Dave Bauman</title><link>https://blog.davebauman.io/</link></image><generator>Ghost 5.4</generator><lastBuildDate>Sat, 10 May 2025 07:32:07 GMT</lastBuildDate><atom:link href="https://blog.davebauman.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Migrating My Blog from Traefik to Caddy]]></title><description><![CDATA[<p>Previously I wrote about how I set up the infrastructure for this blog and other websites I&apos;m hosting, but I focused on the Terraform and DigitalOcean configuration.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://blog.davebauman.io/provisioning-this-blog-on-digitalocean/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Provisioning this Blog on DigitalOcean</div><div class="kg-bookmark-description">I recently rewrote the infrastructure for this blog, which was long overdue. 
My previous server had fallen into</div></div></a></figure>]]></description><link>https://blog.davebauman.io/migrating-my-blog-from-traefik-to-caddy/</link><guid isPermaLink="false">667a166c8359b80001c396a3</guid><category><![CDATA[blog]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Tue, 25 Jun 2024 02:08:23 GMT</pubDate><content:encoded><![CDATA[<p>Previously I wrote about how I set up the infrastructure for this blog and other websites I&apos;m hosting, but I focused on the Terraform and DigitalOcean configuration.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://blog.davebauman.io/provisioning-this-blog-on-digitalocean/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Provisioning this Blog on DigitalOcean</div><div class="kg-bookmark-description">I recently rewrote the infrastructure for this blog, which was long overdue. My previous server had fallen into the trap of pet vs cattle[https://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/], so it was difficult to manage, upgrade, etc. Disclaimer: I&#x2019;ve included a referra&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://blog.davebauman.io/favicon.ico" alt><span class="kg-bookmark-author">Dave Bauman</span><span class="kg-bookmark-publisher">Dave Bauman</span></div></div></a></figure><p>I never got around to posting a follow-up about the software side of the configuration, but I just migrated off of Traefik, so I wanted to explain the before and after.</p><h1 id="traefik">Traefik</h1><p><a href="https://github.com/traefik/traefik">Traefik</a> is a reverse proxy. &#xA0;From their GitHub:</p><blockquote>Traefik (pronounced <em>traffic</em>) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. 
Traefik integrates with your existing infrastructure components and configures itself automatically and dynamically.</blockquote><p>Since I wanted to have multiple applications running on the same VM, I needed a way to route between different applications. &#xA0;This is what I used Traefik for.</p><p>It has two main benefits that worked well for me:</p><ul><li>Easy HTTPS with Let&apos;s Encrypt automation</li><li>Auto-configures itself based on Docker labels</li></ul><p>Both of these worked great, and once I had it set up, I was able to add additional Docker apps with minimal fuss, just by adding labels to the Docker configuration.</p><p>Here are the important bits from my <code>traefik.toml</code> file:</p><pre><code>################################################################
# Entrypoints configuration
################################################################

[entryPoints]
  [entryPoints.web]
    address = &quot;:80&quot;
  [entryPoints.web.http]
    [entryPoints.web.http.redirections]
      [entryPoints.web.http.redirections.entryPoint]
        to = &quot;websecure&quot;
        scheme = &quot;https&quot;

  [entryPoints.websecure]
    address = &quot;:443&quot;
    
################################################################
# Docker configuration backend
################################################################

# Enable Docker configuration backend
[providers.docker]

################################################################
# Let&apos;s Encrypt
################################################################

[certificatesResolvers.davebaumanio.acme]
  storage = &quot;/etc/traefik/acme/acme.json&quot;

  [certificatesResolvers.davebaumanio.acme.dnsChallenge]
    provider = &quot;digitalocean&quot;
delayBeforeCheck = 0</code></pre><p>The first section enables HTTP/HTTPS entrypoints and automatically redirects to HTTPS, and the second section enables support for Docker.</p><p>The third configures Let&apos;s Encrypt with a DNS challenge; I pass in a DigitalOcean API token via the <code>DO_AUTH_TOKEN</code> environment variable, and it just works.</p><p>To host this blog, I just needed to launch the Ghost Docker container with the following labels:</p><pre><code>    labels:
      traefik.http.routers.ghost.rule: &quot;Host(`blog.davebauman.io`)&quot;
      traefik.http.routers.ghost.entrypoints: &quot;websecure&quot;
      traefik.http.routers.ghost.tls.certresolver: &quot;davebaumanio&quot;
      traefik.http.routers.ghost.tls.domains[0].main: &quot;davebauman.io&quot;
traefik.http.routers.ghost.tls.domains[0].sans: &quot;*.davebauman.io&quot;</code></pre><p>Since I set up a wildcard cert, I could use the same certificate for this blog and other apps running on that domain.</p><p>If I wanted another app, it would look the same except the <code>Host()</code> rule would need to change.</p><p>Overall this worked well: I never had any issues once I completed the initial setup, and it was easy to scale out to additional apps without any fuss.</p><h1 id="caddy">Caddy</h1><p>Traefik is a solid reverse proxy, but it is not a web server. &#xA0;This was fine initially, since I was putting it in front of applications with their own servers (Ghost, Koken, etc).</p><p>But I recently wanted to host a static site, so I started looking into options. &#xA0;The most obvious option is to just run Nginx or another web server in a Docker container, and use Traefik to route to it. &#xA0;</p><p>There are also some not-really-maintained projects to integrate a web server with Traefik, but there is no built-in functionality for this, and I didn&apos;t want to add a dependency that might not get updated or not work with a future version of Traefik.</p><p>So I set up Nginx, which was easy enough to configure and worked well. &#xA0;But by the time I set up a second site in Nginx, I started to think about why I was running Traefik to route to different apps, and Nginx to route to different virtual hosts. &#xA0;Could I simplify my stack and do it all in one?</p><p>As it turns out, yes. &#xA0;Traefik is a reverse proxy, not a web server, but many web servers are also reverse proxies. 
&#xA0;<a href="http://nginx.org/">Nginx</a> and <a href="https://caddyserver.com/">Caddy</a> are the ones I&apos;m most familiar with, but most web servers can probably do this.</p><p>I hadn&apos;t used Caddy before, but I&apos;d heard a lot of good things about it, so I decided to abandon both Traefik and Nginx and see if Caddy could replace them both.</p><p>My Caddyfile has gotten a little longer with things like logging, compression, and cache control, but this is basically what I started with:</p><pre><code>static.davebauman.io {
  root * /srv/static
  file_server
}


blog.davebauman.io {
  reverse_proxy ghost:2368
}</code></pre><p>This creates two hostnames, one of which is served out of a folder, and the other is routed to my Ghost container. &#xA0;Since I&apos;m running Caddy in Docker, I can reference other containers in the same network by name.</p><p>Caddy also provides Let&apos;s Encrypt integration out of the box&#x2014;I didn&apos;t even have to configure it, it just worked. &#xA0;I don&apos;t have wildcard support; that would require additional setup, but I&apos;m not even sure I need that.</p><p>There is a popular module <a href="https://github.com/lucaslorentz/caddy-docker-proxy">caddy-docker-proxy</a> that provides the same Docker label integration as Traefik, but I decided not to use it since I wasn&apos;t sure if I could use both Docker labels and a Caddyfile. &#xA0;Since I&apos;m new to Caddy, I wanted to stick with a Caddyfile to make it easier to learn and debug.</p><h1 id="conclusion">Conclusion</h1><p>It&apos;s not really a surprise that I could use Caddy to replace the combination of Traefik and Nginx. &#xA0;I could have done the same with Nginx instead, but I really liked how easy it was to configure and use Caddy. &#xA0;It&apos;s really just that good.</p><p>Traefik has a lot of features I wasn&apos;t using, and the features I was using have been easily replaced. &#xA0;While I lost the Docker label support, I&apos;m not running enough sites or changing the configuration frequently enough for that to matter (and I could add that to Caddy if I wanted). &#xA0;The biggest advantage to me is that I have a simpler tech stack to maintain.</p><p>I didn&apos;t do any benchmarking on this; allegedly Caddy uses more memory than Nginx, but it probably doesn&apos;t use more memory than both Traefik and Nginx combined, so overall I should be ahead.</p><p>Now, all the traffic to my VM goes through Caddy, and is either routed to another Docker app, or served directly from the disk. 
&#xA0;Easy!</p>]]></content:encoded></item><item><title><![CDATA[Panda CSS]]></title><description><![CDATA[<p>I recently started a new web project using <a href="https://astro.build/">Astro</a>. &#xA0;I initially added <a href="https://tailwindcss.com/">Tailwind</a> since it has an official integration, but it&apos;s not really my favorite. &#xA0;There&apos;s plenty of debate about mixing CSS and HTML, but I don&apos;t actually mind that. &#xA0;What</p>]]></description><link>https://blog.davebauman.io/panda-css/</link><guid isPermaLink="false">66253ddf8359b80001c3954d</guid><category><![CDATA[css]]></category><category><![CDATA[panda-css]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Sun, 21 Apr 2024 19:46:45 GMT</pubDate><content:encoded><![CDATA[<p>I recently started a new web project using <a href="https://astro.build/">Astro</a>. &#xA0;I initially added <a href="https://tailwindcss.com/">Tailwind</a> since it has an official integration, but it&apos;s not really my favorite. &#xA0;There&apos;s plenty of debate about mixing CSS and HTML, but I don&apos;t actually mind that. 
&#xA0;What I&apos;m not a fan of is the utility class soup that Tailwind turns into, especially with <a href="https://tailwindcss.com/docs/hover-focus-and-other-states">pseudo-classes, media classes, etc</a>.</p><p>I started mixing in a bunch of handcrafted CSS, but that undercut the value of Tailwind, and I had to configure all my variables in two places.</p><p>But then I discovered <a href="https://panda-css.com/">Panda CSS</a>, and I&apos;m in the process of migrating both the Tailwind and plain CSS classes into Panda.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://panda-css.com/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Panda CSS - Build modern websites using build time and type-safe CSS-in-JS</div><div class="kg-bookmark-description">Build modern websites using build time and type-safe CSS-in-JS</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://panda-css.com/apple-touch-icon.png" alt><span class="kg-bookmark-author">Build modern websites using build time and type-safe CSS-in-JS</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://panda-css.com/og-image.png" alt></div></a></figure><p>Panda CSS is developed by the same team as <a href="https://v2.chakra-ui.com/">Chakra UI</a>, a React-based component library. &#xA0;I&apos;ve been using Chakra UI extensively for a few years, so I feel right at home with Panda since it has a lot of similarities.</p><p>Panda is a CSS-in-JS library, but with a build step instead of a runtime. &#xA0;Styles are defined inline with style objects, and during the build Panda statically analyzes all the files to replace the style objects with generated utility classes.</p><p>The basic usage is to define a style object in HTML, something like this:</p><pre><code>&lt;div class={css({
    fontSize: &apos;16px&apos;,
    color: &apos;gray&apos;
})}&gt;
    ...
&lt;/div&gt;</code></pre><p>I&apos;m not using React in Astro, but <a href="https://docs.astro.build/en/basics/astro-components/">Astro components</a> support JavaScript frontmatter and JSX-like expressions which get evaluated during the build. &#xA0;Here&apos;s an example component:</p><pre><code class="language-typescript">---
import { css } from &apos;styled-system/css&apos;;

const { href, ...props } = Astro.props;

const { pathname } = Astro.url;
const subpath = pathname.match(/[^\/]+/g);
const isActive = href === pathname || href === &apos;/&apos; + subpath?.[0];
---

&lt;a
  href={href}
  class={css({
    display: &apos;flex&apos;,
    alignItems: &apos;center&apos;,
    gap: &apos;1rem&apos;,
    padding: &apos;var(--space-3xs) 0&apos;,
    &apos;&amp; .active&apos;: {
      fontWeight: &apos;bolder&apos;,
      borderTop: &apos;5px solid var(--pink-500)&apos;
    },
    &apos;&amp;:hover&apos;: {
      color: &apos;var(--primary)&apos;
    },
    fontWeight: isActive ? &apos;bolder&apos; : &apos;normal&apos;
  })}
  {...props}
&gt;
  &lt;slot /&gt;
&lt;/a&gt;
</code></pre><p>The <code>css()</code> function will convert all the properties into bespoke utility classes during the build step. &#xA0;Panda has built-in theme support, or you can bring your own variables (see above). &#xA0;Also <a href="https://panda-css.com/docs/concepts/patterns">Patterns</a> and <a href="https://panda-css.com/docs/concepts/recipes">Recipes</a> make it easy to reuse styles across your site.</p><p>So it&apos;s not that different from Tailwind in the end, but I prefer Panda&apos;s developer experience.</p><p>After experimenting with various CSS approaches for my new Astro web project, I&apos;ve settled on Panda CSS as my preferred solution. &#xA0;I prefer its approach and developer experience over Tailwind, and while I appreciate the simplicity of handwriting CSS, it can get messy and Astro doesn&apos;t support dynamic <code>&lt;style&gt;</code> tags.</p><p>Overall, Panda CSS strikes a nice balance between the flexibility of utility classes and the usability of CSS-in-JS. Its integration with Astro and focus on type-safe, build-time generation make it a compelling choice for modern web development projects.</p>]]></content:encoded></item><item><title><![CDATA[Running Stable Diffusion on Arch Linux]]></title><description><![CDATA[Getting Stable Diffusion running on Arch Linux with an Nvidia graphics card.]]></description><link>https://blog.davebauman.io/running-stable-diffusion-on-arch-linux/</link><guid isPermaLink="false">6318885a8359b80001c39435</guid><category><![CDATA[stable-diffusion]]></category><category><![CDATA[machine-learning]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Wed, 07 Sep 2022 15:23:44 GMT</pubDate><content:encoded><![CDATA[<p>I wanted to play around with Stable Diffusion and I have an Nvidia GPU, so I gave it a shot. 
I was able to get it running pretty quickly thanks to a number of helpful wrapper scripts, GUIs, and packages the community has created.</p><p>There were basically two choices for the install: Anaconda or Docker. &#xA0;I initially attempted Anaconda but it got stuck installing dependencies, so I quickly switched to Docker. &#xA0;If you are not using Anaconda for other things, I would recommend using Docker as it keeps your system cleaner.</p><h3 id="my-setup">My Setup</h3><p>For context, here&apos;s what I&apos;m running on:</p><ul><li>Intel Core i9-9900K 3.6 GHz</li><li>32GB DDR4 RAM</li><li>Nvidia GeForce RTX 2070 8GB</li><li>Arch Linux with KDE</li></ul><p>I have Docker installed:</p><pre><code>sudo pacman -S docker 
</code></pre><p>The AUR package <code><a href="https://aur.archlinux.org/packages/nvidia-container-toolkit/">nvidia-container-toolkit</a></code> is also required to access GPUs from within Docker:</p><pre><code>yay -S nvidia-container-toolkit</code></pre><p>Start or restart Docker after installing:</p><pre><code>sudo systemctl start docker</code></pre><h3 id="installation">Installation</h3><p>There are a number of different GUIs for Stable Diffusion, and things are changing quickly, so there might be newer or better options available shortly.</p><p>One of the most popular is <a href="https://github.com/sd-webui/stable-diffusion-webui">sd-webui/stable-diffusion-webui</a>, which provides a frontend for txt2img, img2img, and a handful of additional models, optimizations, etc.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/sd-webui/stable-diffusion-webui"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - sd-webui/stable-diffusion-webui: Stable Diffusion web UI</div><div class="kg-bookmark-description">Stable Diffusion web UI. Contribute to sd-webui/stable-diffusion-webui development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">sd-webui</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/b81cc4f6f6b25a4df7118582a18ce029389d36acf185c8ec87573eb9720c2fc6/sd-webui/stable-diffusion-webui" alt></div></a></figure><p>It can be run directly, but it also provides a Docker Compose setup, which is what I&apos;m interested in.</p><p>Get started by cloning the repo:</p><pre><code>git clone https://github.com/sd-webui/stable-diffusion-webui.git
cd stable-diffusion-webui</code></pre><p>Next, copy the example environment file and rename it to <code>.env_docker</code>:</p><pre><code>cp .env_docker.example .env_docker</code></pre><p>I manually updated the <code>WEBUI_ARGS</code> flag in the environment file:</p><pre><code>WEBUI_ARGS=--extra-models-cpu --optimized-turbo</code></pre><p>This tells it to run extra models on my CPU, and <code>--optimized-turbo</code> allows running on GPUs with less than 10GB of VRAM.</p><p>Finally, bring up the Docker container:</p><pre><code>docker compose up</code></pre><p>The first time it runs, it will download a handful of large model files. &#xA0;This will take a while, but afterwards you can add <code>VALIDATE_MODELS=false</code> to the environment file to skip checking the files.</p><p>Alternatively, if you already have the model files downloaded, you can save time by manually adding them to the following locations before launching the container:</p><ul><li><code>sd-v1-4.ckpt</code> &#x27A1; <code>models/ldm/stable-diffusion-v1/model.ckpt</code></li><li><code>RealESRGAN_*.pth</code> &#x27A1; <code>src/realesrgan/experiments/pretrained_models/RealESRGAN_*.pth</code></li><li><code>GFPGANv1.3.pth</code> &#x27A1; <code>src/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth</code></li></ul><p>Once it&apos;s running, open <a href="http://localhost:7860/">http://localhost:7860/</a> to access the UI.</p><p>If you need to stop the Docker container, just press <code>Ctrl-C</code> and it will stop automatically. &#xA0;The next time you start it will be faster since the Docker image is prebuilt and the models all downloaded.</p><h3 id="additional-upscalers">Additional Upscalers</h3><p>Here&apos;s how to add the Latent Diffusion Super Resolution and GoLatent upscalers:</p><pre><code>cd src
git clone https://github.com/devilismyfriend/latent-diffusion.git
mkdir -p latent-diffusion/experiments/pretrained_models</code></pre><p>Then download <a href="https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" rel="nofollow">LDSR (2GB)</a> and <a href="https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1" rel="nofollow">its configuration</a>, and add them to the <code>src/latent-diffusion/experiments/pretrained_models</code> directory as <code>model.ckpt</code> and <code>project.yaml</code>.</p><p>You can restart the Docker container for this change to take effect.</p>]]></content:encoded></item><item><title><![CDATA[Provisioning this Blog on DigitalOcean]]></title><description><![CDATA[<p>I recently rewrote the infrastructure for this blog, which was long overdue. &#xA0;My previous server had fallen into the trap of <a href="https://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/">pet vs cattle</a>, so it was difficult to manage, upgrade, etc.</p><p><em>Disclaimer: I&apos;ve included a referral link for <a href="https://m.do.co/c/9a5c096d7e62">DigitalOcean</a> below.</em></p><p>With the cattle &gt; pets metaphor</p>]]></description><link>https://blog.davebauman.io/provisioning-this-blog-on-digitalocean/</link><guid isPermaLink="false">5f05082a72f68f0001bfe7f2</guid><category><![CDATA[blog]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[digital-ocean]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Wed, 08 Jul 2020 00:26:24 GMT</pubDate><content:encoded><![CDATA[<p>I recently rewrote the infrastructure for this blog, which was long overdue. 
&#xA0;My previous server had fallen into the trap of <a href="https://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/">pet vs cattle</a>, so it was difficult to manage, upgrade, etc.</p><p><em>Disclaimer: I&apos;ve included a referral link for <a href="https://m.do.co/c/9a5c096d7e62">DigitalOcean</a> below.</em></p><p>With the cattle &gt; pets metaphor in mind, I started over using <a href="https://www.terraform.io/">Terraform</a> to provision the infrastructure on <a href="https://m.do.co/c/9a5c096d7e62">DigitalOcean</a>. &#xA0;I really enjoy using DigitalOcean for personal projects, as it is easy to use and has predictable pricing.</p><p>Here are a few of the goals I had going into this project:</p><ul><li>The process needs to be completely automated</li><li>I should be able to destroy and recreate a server without losing anything</li><li>Software installation, patching, reboots, and keeping services running should be completely automated</li></ul><p>Basically, I&apos;m trying to make it as easy for my future self as possible.</p><p>I&apos;m not going to include everything in this post, but I want to highlight a few key parts of the setup:</p><h2 id="terraform-setup">Terraform Setup</h2><p>To start with, we need the DigitalOcean provider to be able to interact with their API.</p><!--kg-card-begin: markdown--><pre><code class="language-terraform">provider &quot;digitalocean&quot; {
  token = var.digitalocean_token
  version = &quot;~&gt; 1.0&quot;
}
</code></pre>
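The `var.digitalocean_token` reference implies a matching variable declaration. A minimal sketch (the `variables.tf` file name is illustrative; supplying the value through the standard `TF_VAR_digitalocean_token` environment variable keeps the secret out of version control):

```hcl
# Hypothetical variables.tf -- declares the token referenced by the provider.
# Terraform picks the value up from TF_VAR_digitalocean_token at run time,
# so the secret never needs to be committed.
variable "digitalocean_token" {
  type = string
}
```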
<!--kg-card-end: markdown--><p>Instead of storing the Terraform state on my local machine, where it might get lost, I created a bucket on Google Cloud Storage. This keeps it nicely secured and lets me access it from different machines. </p><!--kg-card-begin: markdown--><pre><code class="language-terraform">terraform {
  backend &quot;gcs&quot; {
    bucket  = &quot;davebauman-devops&quot;
    prefix  = &quot;davebauman.io/terraform/state&quot;
  }
}
</code></pre>
<!--kg-card-end: markdown--><h2 id="volume-storage">Volume Storage</h2><p>One of my goals was to be able to delete and recreate the VMs without losing anything, and the best way to do that is to use Volume Block Storage. &#xA0;My 5GB volume costs me $0.50 a month, so it&apos;s pretty affordable.</p><pre><code class="language-hcl">resource &quot;digitalocean_volume&quot; &quot;data_volume&quot; {
  region                  = var.do_region
  name                    = &quot;dbv1&quot;
  description             = &quot;davebauman.io data volume&quot;
  size                    = 5
  initial_filesystem_type = &quot;ext4&quot;

  lifecycle {
    prevent_destroy = true
  }
}</code></pre><p>I turned on <code>prevent_destroy</code> to avoid any accidental deletes.</p><h2 id="droplet">Droplet</h2><p>Next up we have the <a href="https://www.terraform.io/docs/providers/do/r/droplet.html">Droplet</a> (compute VM):</p><pre><code class="language-hcl">resource &quot;digitalocean_droplet&quot; &quot;web&quot; {
  name       = &quot;davebauman-io&quot;
  image      = &quot;fedora-31-x64&quot;
  size       = &quot;s-1vcpu-1gb&quot;
  region     = &quot;nyc1&quot;
  ipv6       = true
  monitoring = false

  ssh_keys = [
    &quot;${digitalocean_ssh_key.key1.fingerprint}&quot;,
    &quot;${digitalocean_ssh_key.key2.fingerprint}&quot;
  ]

  user_data = templatefile(&quot;files/cloud-init.tpl&quot;, {
    key-1 = file(&quot;files/key1.pub&quot;)
    key-2 = file(&quot;files/key2.pub&quot;)
    ssh_port = var.ssh_port
  })
}

resource &quot;digitalocean_volume_attachment&quot; &quot;data_volume_attachment&quot; {
  droplet_id = digitalocean_droplet.web.id
  volume_id  = digitalocean_volume.data_volume.id
}</code></pre><p>This does a couple of things. &#xA0;First, it creates a new Fedora VM in the smallest size, attaches our data volume, and specifies a <a href="https://cloudinit.readthedocs.io/en/latest/index.html">Cloud-init</a> file to do some initial setup for the VM.</p><p>I actually used Ansible to provision the software side, but before I could even run Ansible I needed to do some setup. &#xA0;Here&apos;s what the <code>cloud-init.tpl</code> file looks like:</p><pre><code class="language-cloud-config">#cloud-config
users:
  - name: deploy
    ssh-authorized-keys:
      - ${key-1}
      - ${key-2}
    sudo: [&apos;ALL=(ALL) NOPASSWD:ALL&apos;]
    groups: sudo
    shell: /bin/bash
mounts:
  - [ /dev/disk/by-id/scsi-0DO_Volume_dbv1, /mnt/dbv1, &quot;ext4&quot;, &quot;defaults,nofail,discard&quot;, &quot;0&quot;, &quot;0&quot;]
runcmd:
  # Update SSH settings
  - sed -i -e &apos;/Port 22/c\Port ${ssh_port}&apos; /etc/ssh/sshd_config
  - sed -i -e &apos;/PermitRootLogin/c\PermitRootLogin no&apos; /etc/ssh/sshd_config
  - sed -i -e &apos;$aAllowUsers deploy&apos; /etc/ssh/sshd_config
  - dnf install -y policycoreutils-python-utils
  - semanage port -a -t ssh_port_t -p tcp ${ssh_port}
  - systemctl restart sshd
  # Assign permissions
  - chown deploy:deploy /mnt/dbv1</code></pre><p>Cloud-init automatically processes this first thing when the VM comes online, and does the following:</p><ul><li>Creates a new <code>deploy</code> user, with the SSH keys previously mentioned</li><li>Mounts the volume to <code>/mnt/dbv1</code> automatically</li><li>Updates the SSH port and prevents the <code>root</code> user from logging in</li></ul><p>This is just enough to slightly secure the box and give me the access I need to run Ansible to finish the setup.</p><h2 id="what-else">What Else?</h2><p>I have a few other things not mentioned here: I configured the DigitalOcean firewall to restrict inbound/outbound access to my VM. I uploaded my SSH public keys to DigitalOcean. &#xA0;And I&apos;m managing my DNS via DigitalOcean as well, so I have the domain and records scripted out.</p><p>The other major thing I left out is the API tokens. &#xA0;I had to create a DigitalOcean API token for the Terraform provider to use; it was referenced at the very top in the provider. &#xA0;Since I used GCS for the Terraform state, I also had to provide a GCP credential file.</p><h2 id="finale">Finale</h2><p>The big upgrades here for me are the external volume and the cloud-init setup. &#xA0;While a volume doesn&apos;t replace my backup strategy, it would make it trivial to recreate the droplet without concern. &#xA0;And the cloud-init doesn&apos;t do much, but having those core tasks handled immediately is very satisfying.</p><p>In a future post I&apos;ll go over my Ansible setup, which takes over after Terraform finishes with the infrastructure. &#xA0;OS configuration, software setup, patching, etc. 
is all handled by Ansible.</p>]]></content:encoded></item><item><title><![CDATA[Webpack Bundle Analyzer and Ionic]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>As a way to procrastinate on my latest mobile app, I started to look into reducing the size of the vendor file&#x2014;that is, the external libraries that are bundled with my application code. In theory, a bigger application means slower startup times, which translates into a worse user</p>]]></description><link>https://blog.davebauman.io/webpack-bundle-analyzer-and-ionic/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0bc</guid><category><![CDATA[angular]]></category><category><![CDATA[ionic]]></category><category><![CDATA[webpack]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Sat, 24 Mar 2018 01:03:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>As a way to procrastinate on my latest mobile app, I started to look into reducing the size of the vendor file&#x2014;that is, the external libraries that are bundled with my application code. In theory, a bigger application means slower startup times, which translates into a worse user experience.  So just useful enough to be worthwhile but a great way to put off doing the actual development...</p>
<p>Like Kindertron Flash Cards, I&apos;m developing it in JavaScript using <a href="https://ionicframework.com/">Ionic</a>.  This uses Webpack (or optionally Rollup) under the hood, but abstracts the build system by default.</p>
<p>My first order of business was determining what exactly was in my vendor file.  Webpack <em>packs</em> all the 3rd-party libraries together into a single file, so I needed a way to inspect it and see exactly what was included.</p>
<h1 id="webpackbundleanalyzer">Webpack Bundle Analyzer</h1>
<p>I found a plugin which does exactly this: <a href="https://github.com/webpack-contrib/webpack-bundle-analyzer">Webpack Bundle Analyzer</a>.  It runs as part of the build and outputs a lovely treemap visualization.</p>
<p>Ionic hides the Webpack configuration by default, but it&apos;s surprisingly easy to get this integrated:</p>
<ol>
<li>
<p>Install the plugin</p>
<pre><code> npm install --save-dev webpack-bundle-analyzer
</code></pre>
</li>
<li>
<p>Open the <code>package.json</code> and add this section:</p>
<pre><code> &quot;config&quot;: {
   &quot;ionic_webpack&quot;: &quot;./config/webpack.config.js&quot;
 }
</code></pre>
</li>
<li>
<p>Create this file <code>webpack.config.js</code>:</p>
<pre><code> const webpackConfig = require(&apos;../node_modules/@ionic/app-scripts/config/webpack.config&apos;);
 const BundleAnalyzerPlugin = require(&apos;webpack-bundle-analyzer&apos;).BundleAnalyzerPlugin;

 webpackConfig.prod.plugins.push(new BundleAnalyzerPlugin({
     analyzerMode: &apos;static&apos;,
     generateStatsFile: true
 }));

 // Export the config so app-scripts uses this modified version
 module.exports = webpackConfig;
</code></pre>
<p>You can put it anywhere you like, just don&apos;t forget to update the paths in both <code>package.json</code> and the file.</p>
</li>
<li>
<p>Do a production build:</p>
<pre><code> ionic build --prod
</code></pre>
</li>
</ol>
<p>In the middle of the build, a new browser window will open with the results of the bundle analysis.</p>
<p><img src="https://blog.davebauman.io/content/images/2018/03/Screenshot_20180324_004429.png" alt loading="lazy"></p>
<p>The above config only added it for production builds, so that&apos;s the only time it will appear.  I don&apos;t think regular builds will give meaningful numbers, so this should be fine.</p>
<h1 id="reducingvendorjs">Reducing Vendor.js</h1>
<p>Webpack supports <a href="https://webpack.js.org/guides/tree-shaking/"><em>tree shaking</em></a>, and it works in Ionic as well.  However, there may be code changes required for it to have the maximum effect.  Many older libraries aren&apos;t modularized in a way that lets the tree shaking algorithm work.  This is most evident in the large <a href="https://lodash.com/">Lodash</a> rectangle above.  I&apos;m not using all the features of Lodash, but the entire thing is being included.  So Lodash became my first target for a smaller build.</p>
<p>First, I switched from Lodash to <a href="https://www.npmjs.com/package/lodash-es">Lodash-es</a>, which is a modularized build.</p>
<p><img src="https://blog.davebauman.io/content/images/2018/03/Screenshot_20180324_002245.png" alt loading="lazy"></p>
<p>This didn&apos;t actually do anything to the <code>vendor.js</code> file, just swapped out one monolithic Lodash module for 631 modules.  In order to tree shake correctly, I have to avoid importing the entire module at once.  So I went through my entire code base replacing this:</p>
<pre><code>import * as _ from &apos;lodash&apos;; 
</code></pre>
<p>with this:</p>
<pre><code>import cloneDeep from &apos;lodash-es/cloneDeep&apos;; 
import omit from &apos;lodash-es/omit&apos;; 
...
</code></pre>
<p>I never import anything directly from <code>lodash-es</code>; always from the sub-module.  This reduces the build quite a bit:</p>
<p><img src="https://blog.davebauman.io/content/images/2018/03/Screenshot_20180324_005046.png" alt loading="lazy"></p>
<p>At this point my Lodash usage went from 527KB to 155KB!  That&apos;s a pretty hefty reduction in size.</p>
<p>I was so motivated, I continued on to RxJS. I&apos;m not going to go into detail, but basically I switched to <a href="https://github.com/ReactiveX/rxjs/blob/master/doc/pipeable-operators.md">pipeable operators</a>, which recently shipped in version 5.5.  This took my RxJS usage from 850KB to 558KB.</p>
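<p>The idea behind pipeable operators is that each operator is a standalone function you import individually, so the bundler can drop any you never use.  Here is a rough sketch of the concept in plain JavaScript (this is not the real RxJS API, just an illustration of why standalone operator functions are tree-shakeable):</p>

```javascript
// Each "operator" is an independent, importable function; unused
// operators simply never make it into the bundle.
const map = (fn) => (values) => values.map(fn);
const filter = (pred) => (values) => values.filter(pred);

// pipe() composes operators left-to-right, analogous to
// source.pipe(op1, op2) in RxJS 5.5+.
const pipe = (...operators) => (values) =>
  operators.reduce((acc, op) => op(acc), values);

const result = pipe(
  filter((n) => n % 2 === 0),
  map((n) => n * 10)
)([1, 2, 3, 4]);

console.log(result); // [ 20, 40 ]
```

<p>In the real library, the operators are imported from <code>rxjs/operators</code> and applied with <code>source.pipe(...)</code>, rather than being patched onto the Observable prototype.</p>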
<h1 id="nextsteps">Next Steps</h1>
<p>As I showed at the beginning, it&apos;s pretty easy to enable Webpack Bundle Analyzer without messing with Ionic&apos;s default build system.  And once it&apos;s enabled, it&apos;s easy to see how it changes over time.</p>
<p>However, the sobering result of all this work is that my overall <code>vendor.js</code> only went from 5.55MB to 4.9MB&#x2014;a 12% reduction.  The major culprits in my vendor file are ionic-angular (1.4MB) and @angular (1.1MB) themselves.  Firebase also takes up almost 1MB.  So while it&apos;s great that I&apos;m making improvements, the lower bound on <code>vendor.js</code> is still pretty large.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Dev Diary: Kindertron Flash Cards #2]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Welcome back!  It&apos;s been too long since the last Dev Diary, but I&apos;ve been hard at work on a large new feature!  It wrapped up and shipped to the Apple App Store &amp; Google Play Store last week, so it&apos;s live in</p>]]></description><link>https://blog.davebauman.io/dev-diary-kindertron-flash-cards-2/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0bb</guid><category><![CDATA[kindertron]]></category><category><![CDATA[dev diaries]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Tue, 16 May 2017 12:49:17 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Welcome back!  It&apos;s been too long since the last Dev Diary, but I&apos;ve been hard at work on a large new feature!  It wrapped up and shipped to the Apple App Store &amp; Google Play Store last week, so it&apos;s live in the wild!</p>
<p>So what is this feature?  Basically it lets users add new custom albums, using their own photos.  This is great for adding family members, vacation memories, toys, etc.  Whatever you want, really.</p>
<h1 id="thechallenge">The Challenge</h1>
<p>This proved a bit more complicated than expected, mainly because I didn&apos;t design for it upfront.  I was hard-coding the built-in albums, without the expectation that users could add their own.  I would need a user-modifiable way to store the list of albums/photos.  Furthermore, I debated for <strong>a long time</strong> about whether users should be able to add their own photos to non-custom albums.  Basically, could they extend the built-in albums (e.g. Animals) with other photos?  For this, I concluded that while I wouldn&apos;t build this into the next version, I should design with it in mind.  That way, I would avoid repeating this lengthy refactoring process if I ever wanted to add it.</p>
<h1 id="apppreferences">App Preferences</h1>
<p>The first order of business was to add some persistent user settings.  This was pretty easy with the <a href="http://ionicframework.com/docs/native/app-preferences/">App Preferences</a> cordova plugin. But I had to build out a whole new Settings page, and I spent some time adding toggle switches for previously hard-coded settings (like whether the user can click to advance the flashcards).</p>
<p>At the same time, I redesigned how the albums/photos are stored.  I went with a hybrid model: there&apos;s a built-in Image Database which lists all the photos, captions, file names, etc.  Then there is an Album List which is stored in the App Preferences, and references photos by key.  This allows the user to customize the albums, add/remove/reorder photos, etc., without affecting the original data.  And it lets me push out new photos or change existing photos without impacting their customization.  I had to add a synchronization step, in case I want to push out an update with a new (or removed) photo.</p>
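<p>The synchronization step can be sketched roughly like this (the data model and names here are hypothetical, since the post doesn&apos;t show the actual code):</p>

```javascript
// Hypothetical sketch: reconcile a user's stored album against the
// built-in image database after an app update.
function syncAlbum(album, imageDatabase) {
  // Drop references to photos that no longer exist in the database.
  const photoKeys = album.photoKeys.filter((key) => key in imageDatabase);

  // Append any newly-shipped built-in photos for this album, without
  // disturbing the user's existing ordering.
  for (const [key, photo] of Object.entries(imageDatabase)) {
    if (photo.album === album.name && !photoKeys.includes(key)) {
      photoKeys.push(key);
    }
  }
  return { ...album, photoKeys };
}

const db = {
  cat: { album: "Animals", file: "cat.jpg" },
  dog: { album: "Animals", file: "dog.jpg" },
};
const stored = { name: "Animals", photoKeys: ["cat", "removedPhoto"] };
console.log(syncAlbum(stored, db).photoKeys); // [ 'cat', 'dog' ]
```

<p>Custom albums would skip the second half, since their photos only ever come from the user.</p>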
<h1 id="customalbums">Custom Albums</h1>
<p>Finally, we get to the custom albums!  I built out another page to list all the albums/photos, accessible from the Settings page.  Then I had to figure out how to get photos from the user&apos;s phone.</p>
<p>The best option I could find was using the <a href="http://ionicframework.com/docs/native/camera/">Camera</a> plugin, configured to pull from the <code>PHOTO_LIBRARY</code> instead of the camera itself.  This is where things got complicated, though.</p>
<p>My initial approach was to get a native URI and convert it to a file:/// URI, which I could directly add to an <code>&lt;img&gt;</code> tag to display.  That was perfect, because I didn&apos;t have to store the images inside my app.</p>
<p>This approach needed some platform-specific code, since the native URIs are different and I couldn&apos;t find a consistent way to convert.  But after a little time in the emulator, everything was working great.</p>
<p>That is, until I tried loading photos from Google Photos, which were not on my phone.  That did not work, because I was using a device-local URI, and the photo was not on my device!</p>
<p>So, I started over again from scratch.  The <a href="http://ionicframework.com/docs/native/camera/">Camera</a> plugin has another option, to get a copy of the selected image as a temporary file.  So my next approach was to get this image and copy it to the persistent app data folder.  This also required platform-specific code:  I used the <a href="http://ionicframework.com/docs/native/file/">File</a> plugin on Android, which was pretty straightforward. But on iOS, it can&apos;t be used with the URIs returned from the Camera.  So I used a combination of <code>resolveLocalFilesystemUrl()</code> and <code>resolveDirectoryUrl()</code> to get a <code>FileEntry</code> and <code>DirectoryEntry</code>, then used <code>FileEntry.copyTo()</code>.</p>
<p>Once the photo is in the app data folder, everything else works basically the same.  The upside is that it&apos;s a bit faster to display the photos, but the downside is that the app needs to manually clean up those files when removing photos/albums or resetting the entire Settings.</p>
<h2 id="finale">Finale</h2>
<p>Eventually I got everything working the way I wanted on Android and iOS, so it was time to push out the release.  I had done a ton of refactoring in the meantime, and added lots of new photos, so it was a long-overdue release!</p>
<p>Next up: adding sounds!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Dev Diary: Kindertron Flash Cards #1]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;m starting a new thing here: Development diaries.  I&apos;m going to use this series to talk about the design and development process of some of my projects.  My plan is to keep them mostly real-time, but in this case I&apos;m going to talk about</p>]]></description><link>https://blog.davebauman.io/dev-diary-kindertron-flash-cards-1/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0ba</guid><category><![CDATA[kindertron]]></category><category><![CDATA[dev diaries]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Mon, 17 Apr 2017 20:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;m starting a new thing here: Development diaries.  I&apos;m going to use this series to talk about the design and development process of some of my projects.  My plan is to keep them mostly real-time, but in this case I&apos;m going to talk about a major milestone in this project: Submitting my mobile app to the app stores.</p>
<h1 id="submittingtotheappstores">Submitting to the App Stores</h1>
<p>I haven&apos;t made a mobile app before, so this was a new experience for me. I&apos;m not sure what I expected, but it was definitely a more complicated and drawn out process than I thought.</p>
<p>Since I used Ionic to build this app, it was relatively easy to release both an Android and iPhone version. At least from the code side of things.  But let&apos;s look at what I had to do for each platform:</p>
<h2 id="googleplay">Google Play</h2>
<p>Google Play requires a Developer Account, after which the <a href="https://play.google.com/apps/publish/">Developer Console</a> can be used to manage and release apps.  There&apos;s a one-time fee of $25 USD to create the account.</p>
<p>I actually paid for two accounts. First I created one for myself, but then I realized my personal information would appear in the Google Play Store, which I&apos;d rather avoid.  So I created a new Kindertron Google account, signed up as a Developer, then gave admin access to my primary Google account.  This lets me manage my Kindertron account using my personal account.</p>
<p>Next up, I had to create the store listing.  This was pretty straightforward, although I needed both screenshots and a description that inspired people to download it&#x2014;neither of which I had created yet.</p>
<p>I also had to improve my icon and splash screen, and fill out a bunch of categorization and content rating fields.  Pretty easy.</p>
<p>Finally, I needed a signed <code>.apk</code>, which is the compiled application package, signed with my private key. Fortunately Ionic/Cordova have this wrapped up into one easy command:</p>
<pre><code>ionic build android --prod --release -- --keystore=../kindertron.keystore --alias=kindertron
</code></pre>
<p>Once everything was in place, I submitted for review, and shortly after it appeared in the Google Play Store.</p>
<h2 id="applesappstore">Apple&apos;s App Store</h2>
<p>This one was a bit more complicated. Again, I needed to sign up for the Apple Developer Program, which is considerably more expensive at $99 USD per year.</p>
<p>They also don&apos;t allow pseudonyms or company names, unless you have an actual company with a DUNS number assigned by Dun &amp; Bradstreet (D&amp;B).  This is most charitably a hassle, and less charitably a <a href="https://blog.metamorphium.com/2012/12/03/apple-duns/">scam</a>.  Regardless, I don&apos;t have a business entity so my Apple Developer Account is in my real name. Not ideal, but whatever.</p>
<p>Next, I went through a similar process for filling out the App Store listing.  Although Apple is a bit more demanding, with specific screenshot resolutions, a privacy policy, and a support website.</p>
<p>I didn&apos;t have these, so this sent me on a long tangent which ended up with buying a domain name and building a website:  <a href="https://kindertron.com">https://kindertron.com</a>.  At least afterwards I could update my Google Play Store listing to share the same privacy policy and website URL.</p>
<p>Finally, I needed to upload the app build.  Since I don&apos;t have a Mac, I used <a href="https://ionicframework.com/products">Ionic Cloud</a>&apos;s Package service, which can build both Android and iOS packages and provides 100 free builds a month.  <a href="http://docs.ionic.io/services/profiles/">This</a> was a helpful guide to getting all the certificates set up without a Mac.</p>
<p>After getting the iTunes Connect and Ionic Cloud settings all in place, I can do a new iOS build like this:</p>
<pre><code>ionic package build ios --profile iosprod --release
</code></pre>
<p>The completed builds can be downloaded on the Ionic Cloud website.</p>
<p>But my effort to complete this without a Mac hit a wall at this point, because Apple doesn&apos;t give you a way to upload the <code>.ipa</code> package directly in the browser.  Only <a href="https://developer.apple.com/library/content/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/UploadingBinariesforanApp.html">Xcode or Application Loader</a> work, and they are both Mac applications.</p>
<p>So I borrowed my wife&apos;s Macbook, downloaded the <code>.ipa</code> file, and tossed it into Application Loader.  30 seconds later it was done, and I clicked <em>Submit for Review</em> on the iTunes Connect website.</p>
<p>Apple has a manual review process, which took a few days for me.  Mostly it was just pending, and the actual time spent in review was very small.  But eventually it made it into the App Store and started appearing in search listings.</p>
<h1 id="conclusion">Conclusion</h1>
<p>In retrospect, I was quite unprepared for this process when I started.  I didn&apos;t have screenshots, a description, a website, or a privacy policy.  At least I had a logo, although even that changed during the process.</p>
<p>But now that it&apos;s done, it&apos;s much easier to roll out new updates for both platforms.  I&apos;ve also started using both beta programs, since I&apos;m currently working on features that are very device specific.  More on that later.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Introducing Kindertron Flash Cards]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;ve finally launched my first mobile app on both the Google Play and iTunes App Stores!  It&apos;s a children&apos;s educational app, featuring over 50 photo flash cards that little hands can swipe and click through.</p>
<p>Getting it launched to the app stores was a</p>]]></description><link>https://blog.davebauman.io/introducing-kindertron-flash-cards/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0b9</guid><category><![CDATA[kindertron]]></category><category><![CDATA[mobile]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Mon, 10 Apr 2017 01:41:04 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;ve finally launched my first mobile app on both the Google Play and iTunes App Stores!  It&apos;s a children&apos;s educational app, featuring over 50 photo flash cards that little hands can swipe and click through.</p>
<p>Getting it launched to the app stores was a big hurdle that I&apos;m happy to have out of the way. There is still a ton of new features and content in the works, and hopefully I&apos;ll be able to finish and release an update soon.</p>
<p>It&apos;s currently <strong>FREE</strong> so give it a try!</p>
<div style="text-align: center !important;"><a style="display: inline-block;vertical-align: top" href="https://play.google.com/store/apps/details?id=com.kindertron.flashcards&amp;utm_source=blog&amp;utm_campaign=blog&amp;pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1"><img style="height: 62px; margin:0px;padding:0" alt="Get it on Google Play" src="https://play.google.com/intl/en_us/badges/images/generic/en_badge_web_generic.png"></a><a style="display: inline-block;" href="https://itunes.apple.com/us/app/kindertron-flash-cards/id1219694621?ls=1&amp;mt=8"><img style="width: 145px; margin:9px;padding:0" alt="Download on the App Store" src="https://blog.davebauman.io/content/images/2017/04/Download_on_the_App_Store_Badge_US-UK_135x40.svg"></a>
</div>
<div style="margin: auto;width: 150px;"><img src="https://blog.davebauman.io/content/images/2017/04/icon-1.png" style="width: 150px"></div>
<br>
<h6 id="fineprint">Fine Print</h6>
<p><small>Google Play and the Google Play logo are trademarks of Google Inc. Apple and the Apple logo are trademarks of Apple Inc., registered in the U.S. and other countries. App Store is a service mark of Apple Inc., registered in the U.S. and other countries.</small></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Angular-numeraljs 2.0]]></title><description><![CDATA[Angular-numeraljs, a filter for applying Numeral formats, has been upgraded to use Numeral.js 2.0]]></description><link>https://blog.davebauman.io/angular-numeraljs-2-0/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0b8</guid><category><![CDATA[projects]]></category><category><![CDATA[angularjs]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Tue, 14 Feb 2017 12:39:26 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Some time back, I wrote an Angular.js library to wrap <a href="http://numeraljs.com/">Numeral.js</a>. I&apos;m a big fan of Numeral for formatting numbers.</p>
<p>My wrapper adds a new <a href="https://docs.angularjs.org/guide/filter">filter</a> to conveniently apply Numeral formats in your view:</p>
<pre><code>&lt;p&gt;
    {{ price | numeraljs:&apos;$0,0.00&apos; }}
&lt;/p&gt;
</code></pre>
<p>Pretty simple, but I wrote it so you don&apos;t have to.  It is the cleverly-named <a href="https://github.com/baumandm/angular-numeraljs">angular-numeraljs</a>, and has a modest following on GitHub.</p>
<p>Much to my surprise, Numeral recently dropped a 2.0 release after a 3-year hiatus.  With a little prompting from the GitHub community, I just released a corresponding <a href="https://github.com/baumandm/angular-numeraljs/releases/tag/2.0.0">2.0</a> version which will track Numeral&apos;s 2.0 branch.</p>
<p>There are a few <a href="https://github.com/baumandm/angular-numeraljs/blob/master/CHANGELOG.md#200">breaking changes</a>, both in the Numeral library as well as in my wrapper.  I took the opportunity to revise the angular-numeraljs interface to more closely match that of Numeral itself.</p>
<p>The release is now available on <a href="https://github.com/baumandm/angular-numeraljs/releases/tag/2.0.0">GitHub</a>, <a href="https://bower.io/">bower</a>, and <a href="https://www.npmjs.com/package/angular-numeraljs">npm</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Cyclotron: NASA's Astronomy Picture of the Day]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Today I&apos;m going to walk through how to automatically pull NASA&apos;s <a href="https://apod.nasa.gov/apod/astropix.html">Astronomy Picture of the Day</a> and display it on a <a href="http://cyclotron.io">Cyclotron</a> dashboard.</p>
<p>NASA has the API documentation available <a href="https://api.nasa.gov/api.html#apod">here</a>.  It&apos;s pretty straightforward without a lot of parameters, and can be used with a</p>]]></description><link>https://blog.davebauman.io/cyclotron-nasas-astronomy-picture-of-the-day/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0b7</guid><category><![CDATA[cyclotron]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Fri, 10 Feb 2017 14:05:07 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Today I&apos;m going to walk through how to automatically pull NASA&apos;s <a href="https://apod.nasa.gov/apod/astropix.html">Astronomy Picture of the Day</a> and display it on a <a href="http://cyclotron.io">Cyclotron</a> dashboard.</p>
<p>NASA has the API documentation available <a href="https://api.nasa.gov/api.html#apod">here</a>.  It&apos;s pretty straightforward without a lot of parameters, and can be used with a demo API key.  Here&apos;s a sample URL that pulls up an HD version of today&apos;s Picture of the Day:</p>
<pre><code>https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY&amp;hd=true
</code></pre>
<p>Cyclotron has a JSON Data Source which can load this, and the HTML Widget can be used to display the picture.  There&apos;s also an Image Widget&#x2014;but it can&apos;t be used as it doesn&apos;t support Data Sources.</p>
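<p>For reference, the JSON response from this endpoint looks roughly like the following (the field names come from the public APOD API; the values here are just illustrative):</p>

```json
{
    "date": "2017-02-10",
    "title": "An Example Nebula",
    "explanation": "A short description of today's picture...",
    "url": "https://apod.nasa.gov/apod/image/example.jpg",
    "hdurl": "https://apod.nasa.gov/apod/image/example_hd.jpg",
    "media_type": "image",
    "service_version": "v1"
}
```

<p>The <code>title</code> and <code>hdurl</code> fields are the ones this dashboard will use.</p>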
<h2 id="setup">Setup</h2>
<p>To get started, you&apos;ll need Cyclotron to be installed and running.  There&apos;s a <a href="http://www.cyclotron.io/gettingstarted.html">Getting Started</a> guide that walks through the installation process.</p>
<p>Once Cyclotron is up and running, it should look something like this:</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-1.png" alt loading="lazy"></p>
<h2 id="creatingadashboard">Creating a Dashboard</h2>
<p>Click on the <strong>New Dashboard</strong> icon to start a new dashboard.  This opens the Dashboard Editor, and you can start filling out the fields.  I called my dashboard &quot;nasa-picture-of-the-day&quot;.  The <em>Name</em> must be lowercase and snake-case, and it will automatically correct it if needed.  I also added a few <em>Tags</em> and a <em>Description</em>, although those are optional.</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-2.png" alt loading="lazy"></p>
<p>The Dashboard can be saved anytime, so you can either save incremental changes or everything once at the end.  I prefer to save frequently in case I want to rollback, but also so I can preview my changes.</p>
<h2 id="addingadatasource">Adding a Data Source</h2>
<p>Switch to the Data Sources section and click on Add Data Source:<br>
<img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-3.png" alt loading="lazy"></p>
<p>Select the new Data Source, and change its type to JSON.<br>
<img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-4.png" alt loading="lazy"></p>
<p>Switching to JSON type loads a specific set of properties for working with web services. Give the Data Source a name&#x2014;I used the name of the API, &quot;apod&quot;. Next, paste in the NASA API URL from above.<br>
<img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-5.png" alt loading="lazy"></p>
<p>Cyclotron knows how to run this Data Source, load the API URL, and fetch the JSON response.</p>
<p>There is one remaining step, and that is to transform the response JSON into Cyclotron&apos;s standard data format.  The NASA API returns a single JSON object, but Cyclotron typically expects an array of objects, which easily maps to rows and columns of a table.</p>
<p>Fortunately, this transformation can be done easily inside Cyclotron using JavaScript. Data Sources have a Post-Processor property, which is an optional JavaScript function. If specified, it runs the function every time the Data Source completes, passing in the original response. This allows the Post-Processor to inspect, modify, or even replace the response.</p>
<p>In this case, we need a very simple Post-Processor: all it needs to do is return an array containing the original object:</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-6.png" alt loading="lazy"></p>
<p>Here&apos;s the text:</p>
<pre><code>pp = function (data) {
    return [data];
}
</code></pre>
<p>The function wraps the original JSON object inside an array and returns the array, replacing the original. Any Widgets that use this Data Source will get the modified array, rather than the original object.</p>
<h2 id="addinganewpage">Adding a New Page</h2>
<p>Switch to the Pages section and click on Add Page:</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-8.png" alt loading="lazy"></p>
<p>That creates a new &quot;Page 1&quot;.  Open the details of that Page so we can add Widgets to it.  But first, we should update the Layout of the Page.  The default is a 2x2 grid, but we want to change that to 1x1 to display a single HTML Widget.  Change the <em>Grid Columns</em> and <em>Grid Rows</em> properties both to 1.</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-9.png" alt loading="lazy"></p>
<p>Next, we want to add an HTML Widget to the Page.  Click the Add Widget button at the top of the page, which creates a new, blank Widget.</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-10.png" alt loading="lazy"></p>
<h2 id="addinganhtmlwidget">Adding an HTML Widget</h2>
<p>Switch to viewing the Widget 1 properties.  Initially, no properties are visible except the Widget Type.  Selecting HTML from the dropdown will load a set of type-specific properties.</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-11-1.png" alt loading="lazy"></p>
<p>The HTML Widget has two specific properties that we need to provide: the Data Source, so it can load data from the NASA API, and the HTML content, so it knows how to display the data.</p>
<p><img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-12.png" alt loading="lazy"></p>
<p>Data Source is a dropdown of all the previously-defined Data Sources&#x2014;we only have one, so select it.  That leaves the HTML property.</p>
<p>Here&apos;s the HTML content I&apos;m using:</p>
<pre><code>&lt;h1&gt;#{title}&lt;/h1&gt;
&lt;img src=&quot;#{hdurl}&quot; /&gt;
</code></pre>
<p>The template notation used in Cyclotron is <code>#{columnName}</code>, where <code>columnName</code> is a column in the Data Source. In this case, it&apos;s using the <code>title</code> and <code>hdurl</code> columns from the APOD API.</p>
<p>When used without a Data Source, the HTML Widget just displays the contents of the HTML property as-is.  But if a Data Source is selected, it becomes a repeater&#x2014;that is, it renders the contents of the HTML property for each row in the Data Source.  In this case, there&apos;s only one row in the Data Source, so it will output exactly one <code>&lt;h1&gt;</code> and one <code>&lt;img /&gt;</code>.</p>
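<p>Conceptually, the repeater behaves something like this little sketch (not Cyclotron&apos;s actual implementation):</p>

```javascript
// Toy version of the HTML Widget's repeater: substitute #{column}
// placeholders with values from each row, then concatenate the output.
function renderRepeater(template, rows) {
  return rows
    .map((row) =>
      template.replace(/#\{(\w+)\}/g, (placeholder, column) =>
        column in row ? String(row[column]) : placeholder
      )
    )
    .join("\n");
}

const rows = [{ title: "An Example Nebula", hdurl: "https://example.com/hd.jpg" }];
console.log(renderRepeater('<h1>#{title}</h1>\n<img src="#{hdurl}" />', rows));
// <h1>An Example Nebula</h1>
// <img src="https://example.com/hd.jpg" />
```

<p>With our single-row Data Source, the template renders exactly once; a multi-row Data Source would repeat it once per row.</p>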
<h2 id="preview">Preview</h2>
<p>Now that we have the Data Source and Widget hooked up, it&apos;s a good time to preview what it looks like.  You could have previewed the dashboard at any point, but there was nothing to see until now.  Click the Save button then the Preview button to open the dashboard in a new tab.</p>
<p>Here&apos;s my dashboard:<br>
<img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-13.png" alt loading="lazy"></p>
<h2 id="additionalstyling">Additional Styling</h2>
<p>The HD picture of the day is a bit larger than my screen, so we can improve the dashboard by adding some CSS styling.  This could be added in the HTML Widget using a <code>&lt;style&gt;</code> tag, but Cyclotron also has a separate built-in section for any CSS rules or overrides.</p>
<pre><code>img {
    width: 100%;
}
</code></pre>
<p>This is a pretty broadly-applied CSS rule, but works since our dashboard has no other images!  It will scale larger images down, and smaller images up.</p>
<p>Now the dashboard looks like this:<br>
<img src="https://blog.davebauman.io/content/images/2017/02/cyclotron-nasa-14.png" alt loading="lazy"></p>
<h2 id="finaldashboard">Final Dashboard</h2>
<p>That wraps up this dashboard. It&apos;s linked to the NASA Astronomy Picture of the Day, so every day it will display something different.  Here&apos;s the final, complete JSON of my dashboard:</p>
<pre><code>{
    &quot;dataSources&quot;: [{
        &quot;name&quot;: &quot;apod&quot;,
        &quot;postProcessor&quot;: &quot;pp = function (data) {\n    return [data];\n}&quot;,
        &quot;type&quot;: &quot;json&quot;,
        &quot;url&quot;: &quot;https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY&amp;hd=true&quot;
    }],
    &quot;description&quot;: &quot;Displays NASA&apos;s Astronomy Picture of the Day!&quot;,
    &quot;name&quot;: &quot;nasa-picture-of-the-day&quot;,
    &quot;pages&quot;: [{
        &quot;frequency&quot;: 1,
        &quot;layout&quot;: {
            &quot;gridColumns&quot;: 1,
            &quot;gridRows&quot;: 1
        },
        &quot;widgets&quot;: [{
            &quot;dataSource&quot;: &quot;apod&quot;,
            &quot;html&quot;: &quot;&lt;h1&gt;#{title}&lt;/h1&gt;\n&lt;img src=\&quot;#{hdurl}\&quot; /&gt;&quot;,
            &quot;widget&quot;: &quot;html&quot;
        }]
    }],
    &quot;sidebar&quot;: {
        &quot;showDashboardSidebar&quot;: true
    },
    &quot;styles&quot;: [{
        &quot;text&quot;: &quot;img {\n    width: 100%;\n}&quot;
    }],
    &quot;theme&quot;: &quot;darkmetro&quot;
}
</code></pre>
<p>You can copy this straight into Cyclotron by creating a new dashboard, then selecting Edit JSON and pasting this into the JSON Editor.</p>
<p>Suggestions for future improvements:</p>
<ul>
<li>Ability to view previous days&apos; images</li>
<li>Support for <code>media_type: &quot;video&quot;</code></li>
<li>Improved image display, maybe using <code>object-fit: cover</code></li>
</ul>
<p>Happy dashboarding!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Monglorious: Java Edition]]></title><description><![CDATA[Monglorious is the MongoDB client library created to execute string queries, as opposed to using a DSL library created in your language of choice.]]></description><link>https://blog.davebauman.io/monglorious-java-edition/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0b5</guid><category><![CDATA[monglorious]]></category><category><![CDATA[clojure]]></category><category><![CDATA[java]]></category><category><![CDATA[projects]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Fri, 20 Jan 2017 12:30:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://blog.davebauman.io/introducing-monglorious/">Previously introduced</a>, Monglorious is the MongoDB client library I created to execute string queries, as opposed to using a DSL library created in your language of choice.  Monglorious is written in Clojure, which is a great language for parsing text and interpreting it.  It&apos;s not the most popular, but it does run on the JVM so interop with Java and other JVM languages is possible.</p>
<p>However, I&apos;ve found the Clojure-using-Java interop to be much smoother than the reverse, Java-using-Clojure.  This is largely due to the dynamic type system in Clojure, which leads to a lot of type casting on the Java side.</p>
<p>All this to say, I decided to write a Java library to wrap Monglorious, so I could provide a more idiomatic interface and hide the Clojure interop details.  And thus <a href="https://github.com/baumandm/monglorious-java">monglorious-java</a> was born.</p>
<p>Here&apos;s an example:</p>
<pre><code>try (MongloriousClient monglorious = new MongloriousClient(&quot;mongodb://localhost:27017/testdb&quot;)) {
    long actual = monglorious.execute(&quot;db.documents.count()&quot;, Long.class);
}
</code></pre>
<p>To use, just download the JAR attached to the most recent <a href="https://github.com/baumandm/monglorious-java/releases">release</a>.  Then add it to your project and import the main class:</p>
<pre><code>import org.baumandm.monglorious.java.MongloriousClient;
</code></pre>
<p>It is being released as an &#xFC;berjar containing an AOT-compiled version of Monglorious, so that any downstream users don&apos;t have to add a Clojure AOT step to their build workflows.  It&apos;s not ideal, but it should be simpler for everyone else who wants to use it.</p>
<p>You can find the code <a href="https://github.com/baumandm/monglorious-java">here</a>, and more information about Monglorious <a href="https://baumandm.github.io/monglorious/">here</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Keeping Your GitHub Fork Up-to-Date]]></title><description><![CDATA[GitHub doesn't have a way to keep a fork in sync with the upstream branch, but it's pretty painless to do on the command-line.]]></description><link>https://blog.davebauman.io/keeping-your-github-fork-up-to-date/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0b6</guid><category><![CDATA[git]]></category><category><![CDATA[tools]]></category><category><![CDATA[github]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Thu, 19 Jan 2017 00:47:17 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>A long-standing <a href="https://github.com/isaacs/github/issues/121">issue</a> with GitHub is that after forking a repository, there&apos;s not an easy way to keep it updated with changes on the upstream repository.  This isn&apos;t an issue if you fork and submit a pull request immediately, but if it takes some time, or if you want to re-use the same fork later, you&apos;ll probably want it to be up-to-date.</p>
<p>For opaque reasons, this isn&apos;t available on the GitHub website, although apparently it used to be a long time ago. Odd, considering that the ease of forking and submitting pull requests seems like one of the major features of GitHub.</p>
<p>Regardless, this is pretty painless to do manually on the command-line.  First, you&apos;ll need to add a second remote to the repository.  By default, git uses an &quot;origin&quot; remote:</p>
<pre><code>git remote add upstream &lt;URL&gt;
git remote -v
</code></pre>
<p>This only has to be done once to the local repository.  Then fetch and merge changes from upstream:</p>
<pre><code>git fetch upstream
git checkout master
git merge upstream/master
</code></pre>
<p>This merges all upstream changes into your fork. If you want to push that back to GitHub:</p>
<pre><code>git push
</code></pre>
<p>All done!</p>
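<p>If you want to rehearse the whole workflow without touching GitHub, here&apos;s a rough sketch that simulates an upstream repository and a fork in a temporary directory. All of the names and contents below (<code>upstream</code>, <code>fork</code>, <code>file.txt</code>) are made up for the demo:</p>
<pre><code>set -e
demo=$(mktemp -d) &amp;&amp; cd &quot;$demo&quot;

# create a stand-in &quot;upstream&quot; repository with one commit on master
git init -q upstream
git -C upstream symbolic-ref HEAD refs/heads/master
cd upstream
git config user.email demo@example.com
git config user.name Demo
echo v1 &gt; file.txt
git add file.txt
git commit -qm &quot;initial commit&quot;
cd ..

# &quot;fork&quot; it by cloning, then let upstream move ahead with a new commit
git clone -q upstream fork
echo v2 &gt; upstream/file.txt
git -C upstream commit -qam &quot;upstream change&quot;

# the recipe from above: add the remote once, then fetch and merge
cd fork
git remote add upstream ../upstream
git fetch -q upstream
git checkout -q master
git merge -q upstream/master
cat file.txt  # prints &quot;v2&quot;, the upstream change
</code></pre>
<p>The merge here is a fast-forward, since the fork has no commits of its own; if it did, git would create a merge commit instead.</p>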
<p>This is documented in more detail on <a href="https://help.github.com/articles/configuring-a-remote-for-a-fork/">Configuring a Remote for a Fork</a> and <a href="https://help.github.com/articles/syncing-a-fork/">Syncing a Fork</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Installing Koken on Docker]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>My photo website is running <a href="http://koken.me/">Koken</a>, which bills itself as a &quot;content management and web site publishing for photographers&quot;.</p>
<p>I recently had some server issues, so I ended up reinstalling the entire site from scratch. On a good note, it allowed me to test my backup strategy&#x2014;</p>]]></description><link>https://blog.davebauman.io/installing-koken-on-docker/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0b2</guid><category><![CDATA[koken]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Mon, 05 Dec 2016 06:09:22 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>My photo website is running <a href="http://koken.me/">Koken</a>, which bills itself as a &quot;content management and web site publishing for photographers&quot;.</p>
<p>I recently had some server issues, so I ended up reinstalling the entire site from scratch. On a good note, it allowed me to test my backup strategy&#x2014;fortunately it worked.</p>
<p>So here&apos;s my take on how to install Koken via Docker. There are official instructions <a href="http://help.koken.me/customer/portal/articles/1648433-installing-koken-at-digitalocean-using-docker">here</a>, but they are a bit light on details and skip things like running Koken as a service.</p>
<h1 id="gettingavm">Getting a VM</h1>
<p>I use DigitalOcean (here&apos;s a <a href="https://m.do.co/c/9a5c096d7e62">referral link</a>), but there are countless other VM providers like Vultr or Linode, and larger platforms like AWS, Google Cloud Platform, and Azure.</p>
<p>DigitalOcean offers <em>One-click apps</em>, which are basically VM images with pre-installed applications. There&apos;s one for Ubuntu 16.04 with Docker installed.</p>
<p><img src="https://blog.davebauman.io/content/images/2016/12/Screenshot_20161204_174423.png" alt="Docker One-click app on Digital Ocean" loading="lazy"></p>
<p>Of course you can always just create a new VM and install Docker manually.</p>
<p>I&apos;m running the smallest VM I could, which is 512MB RAM and 1 vCPU. It may not be the snappiest, especially when generating new thumbnails, but it gets the job done.</p>
<p>After creating the VM, you&apos;ll want to do some initial server setup: create an account, disable root login, enable a firewall.  Whether or not you created a VM on DigitalOcean, they have pretty good guides that walk through all of that: <a href="https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-16-04">Initial Server Setup with Ubuntu 16.04</a>.</p>
<h1 id="installation">Installation</h1>
<p>First, we&apos;re going to download the official <a href="https://hub.docker.com/r/koken/koken-lemp/">koken-lemp</a> Docker image:</p>
<pre><code>docker pull koken/koken-lemp
</code></pre>
<p>The source for this container is available on GitHub <a href="https://github.com/koken/docker-koken-lemp">here</a>.</p>
<p>Next, create two folders on the VM to store website and database data:</p>
<pre><code>mkdir -p /data/koken/www
mkdir -p /data/koken/mysql
</code></pre>
<p>These will be mapped to the standard Nginx and MySQL folders in the Docker container, so you can easily access this data for backups.</p>
<p>Now we can launch the container manually to ensure it works:</p>
<pre><code>docker run --name koken_server -p 80:8080 -v /data/koken/www:/usr/share/nginx/www -v /data/koken/mysql:/var/lib/mysql -d koken/koken-lemp /sbin/my_init
</code></pre>
<p>Let&apos;s break this down: it tells Docker to run a new container named <em>koken_server</em>.  We&apos;ll be able to use that name later to access the container.  The argument <code>-p 80:8080</code> maps port 8080 inside the container to port 80 on the host VM, which lets you and others access the Koken website over port 80.  Then there are two <code>-v</code> arguments, which map folders on the host VM to folders inside the container. As mentioned above, this makes it easier to access data in those folders.</p>
<p>Next, <code>-d</code> runs the container detached, in the background, and <code>koken/koken-lemp</code> tells Docker which image to run. This is the same image we pulled previously&#x2014;if you didn&apos;t pull it earlier, Docker will download it now.  And finally, <code>/sbin/my_init</code> is the startup script for the image.</p>
<p>If the above command worked without errors, there should be a new Docker container running on your server.  You can check for running containers with:</p>
<pre><code>docker ps
</code></pre>
<p>If the output of this command is empty, then something must have gone wrong.  You can access Docker logs for the container using its name:</p>
<pre><code>docker logs koken_server
</code></pre>
<p>On the other hand, if everything worked and the container is running, you should be able to access Koken at the public URL for the server.  Open it in a browser and complete the installation process.</p>
<h1 id="runningasaservice">Running as a Service</h1>
<p>Running the Docker container manually works for testing, but if anything happens or the server is restarted, you&apos;ll have to log in and start it again.  So instead of running it manually, we can set it up as a service.</p>
<p>Before doing this, you&apos;ll want to kill and remove the running container:</p>
<pre><code>docker kill koken_server
docker rm koken_server
</code></pre>
<p>Don&apos;t worry, this doesn&apos;t delete your data, as it&apos;s synced to the <code>/data/koken/</code> folder.</p>
<p>How to achieve this depends on the specific OS being run: Ubuntu 14.04 uses Upstart, whereas Ubuntu 16.04 uses Systemd.</p>
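<p>If you&apos;re not sure which init system a server is running, a quick check is to look at the name of PID 1, the init process itself:</p>
<pre><code>ps -p 1 -o comm=
</code></pre>
<p>This typically prints <code>systemd</code> on Systemd machines and <code>init</code> on Upstart (or SysV) machines.</p>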
<h2 id="usingupstart">Using Upstart</h2>
<p>Create a new file, <code>/etc/init/docker-koken.conf</code>:</p>
<pre><code># /etc/init/docker-koken.conf
description &quot;Koken Docker Container&quot;
author &quot;Dave Bauman&quot;
start on filesystem and started docker
stop on runlevel [!2345]
respawn
pre-start script
  # remove any leftover container from a previous boot
  /usr/bin/docker rm -f koken_server || true
end script
script
  # run in the foreground (no -d) so Upstart can supervise the container
  exec /usr/bin/docker run --name koken_server -p 80:8080 -v /data/koken/www:/usr/share/nginx/www -v /data/koken/mysql:/var/lib/mysql koken/koken-lemp /sbin/my_init
end script
</code></pre>
<p>This has the same Docker command we run manually, but wrapped up in an Upstart script that triggers after Docker starts.</p>
<p>You can launch the service immediately with:</p>
<pre><code>sudo start docker-koken
</code></pre>
<p>You can use <code>docker ps</code> to check that the container launched without issues, and verify by loading the website.</p>
<h2 id="usingsystemd">Using Systemd</h2>
<p>Create a new file, <code>/etc/systemd/system/docker-koken.service</code>:</p>
<pre><code>[Unit]
Description=Koken Docker Container
Author=Dave Bauman
Requires=docker.service
After=docker.service

[Service]
Restart=always
# remove any leftover container from a previous boot; the leading - tells
# systemd to ignore errors if no such container exists
ExecStartPre=-/usr/bin/docker rm -f koken_server
# run in the foreground (no -d) so systemd can supervise the container
ExecStart=/usr/bin/docker run --name koken_server -p 80:8080 -v /data/koken/www:/usr/share/nginx/www -v /data/koken/mysql:/var/lib/mysql koken/koken-lemp /sbin/my_init
ExecStop=/usr/bin/docker stop -t 2 koken_server
ExecStopPost=/usr/bin/docker rm -f koken_server

[Install]
WantedBy=default.target
</code></pre>
<p>Like the Upstart script, this too runs the same Docker command, and triggers after Docker starts.</p>
<p>Enable the service to start on boot:</p>
<pre><code>sudo systemctl enable docker-koken.service
</code></pre>
<p>And launch it immediately:</p>
<pre><code>sudo systemctl start docker-koken.service
</code></pre>
<p>Again, use <code>docker ps</code> to check that the container launched without issues, and verify by loading the website.</p>
<h1 id="backup">Backup</h1>
<p>Backing up Koken is critical to avoid losing data, and using Docker only makes it slightly more complicated.  The mapped data volumes allow you to access the data directly from the host VM.</p>
<p>The website data can be backed up directly, as it&apos;s mostly just PHP files and images.  All the uploaded images go into <code>/data/koken/www/storage</code>, so backing up this folder ensures the safety of all the photos.  But the metadata, such as titles, tags, and categories, is stored in the MySQL database.</p>
<p>Copying the MySQL files directly may cause issues unless the MySQL server is shut down beforehand. So I&apos;m using <code>mysqldump</code> to output the contents of the database to a script.  The only complication here is that <code>mysqldump</code> needs to be run from inside the container.</p>
<p>Fortunately, recent versions of Docker added <code>docker exec</code>, which makes this easy:</p>
<pre><code>docker exec -i koken_server mysqldump -ukoken -pPASSWORD koken --single-transaction --routines --triggers &gt; /data/koken/mysql/backup.sql
</code></pre>
<p>This command enters the <code>koken_server</code> container and launches <code>mysqldump</code>. Replace the word <code>PASSWORD</code> above with the auto-generated password the installation script created. You can find it inside <code>/data/koken/www/storage/configuration/database.php</code>.</p>
<p>And finally, the output of <code>mysqldump</code> is piped into a single file for easy backup.</p>
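<p>Once this works, it&apos;s worth automating. Here&apos;s a hypothetical crontab fragment for the host VM; the schedule, archive path, and <code>PASSWORD</code> are placeholders you&apos;d adapt (note there&apos;s no <code>-it</code>, since cron jobs have no TTY):</p>
<pre><code># m h dom mon dow  command
# 3:00 nightly: dump the database from inside the container
0 3 * * * docker exec koken_server mysqldump -ukoken -pPASSWORD koken --single-transaction --routines --triggers &gt; /data/koken/mysql/backup.sql
# 3:30 nightly: archive the www folder (uploads, themes, configuration)
30 3 * * * tar czf /root/koken-www-backup.tar.gz -C /data/koken www
</code></pre>
<p>From there, copying the dump and the archive off the server is left to your favorite remote backup tool.</p>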
<p>Restoring the database from the backup is just the opposite:</p>
<pre><code>docker exec -it koken_server /bin/bash
mysql -u koken -pPASSWORD koken &lt; /var/lib/mysql/backup.sql
</code></pre>
<p>This does it in two steps, first by launching the <code>bash</code> shell inside the container, then piping the backup script into the mysql command.</p>
<h1 id="finale">Finale</h1>
<p>Using Docker makes installing Koken pretty easy, as the official Docker image comes with Nginx, PHP, MySQL already installed and configured.  If you use the <a href="http://help.koken.me/customer/portal/articles/1648433-installing-koken-at-digitalocean-using-docker">official instructions</a> it&apos;s even easier as they include a <a href="https://gist.githubusercontent.com/bradleyboy/48b67b5e9ebf91031a19/raw/create_koken.sh">single bash script</a> to orchestrate the entire thing. It&apos;s basically the same thing, but I prefer to run it as a service to ensure it stays up.</p>
<p>I also saw a <a href="https://github.com/robrotheram/docker-koken-lemp">fork</a> of the default Docker repo, which removes MySQL so it can be run as a separate container. This is where Docker starts to get more interesting, as you can <a href="https://docs.docker.com/compose/">compose</a> applications out of multiple containers, or co-locate multiple applications on the same server.  I imagine this option would be appealing if you wanted to share a MySQL instance with multiple applications on the same server.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Introducing Monglorious!]]></title><description><![CDATA[Introducing Monglorious, a new MongoDB client library which executes strings in MongoDB shell syntax.]]></description><link>https://blog.davebauman.io/introducing-monglorious/</link><guid isPermaLink="false">5ab5eaaca1b25300019cc0b1</guid><category><![CDATA[projects]]></category><category><![CDATA[clojure]]></category><category><![CDATA[monglorious]]></category><dc:creator><![CDATA[Dave Bauman]]></dc:creator><pubDate>Tue, 29 Nov 2016 07:24:05 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;ve been burning the midnight oil on a new project, <a href="https://baumandm.github.io/monglorious/">Monglorious</a>. It&apos;s basically a MongoDB client library which executes strings in the syntax of MongoDB shell commands. This is in stark contrast to virtually every other MongoDB library which provide a domain-specific language (DSL) for building queries.</p>
<h1 id="why">Why?</h1>
<p>So why create yet another MongoDB library? DSLs are great, and they make it very easy to translate queries into code. But I had a particular use case which involves storing user-submitted queries in a database for future execution. So I need something analogous to SQL for MongoDB (which doesn&apos;t exist). Not the SQL syntax per se, but rather a canonical string representation of a MongoDB query.</p>
<h1 id="howdoesitwork">How does it work?</h1>
<p>String queries are parsed with a custom EBNF grammar which specifies the supported query syntax. I&apos;m using a parsing library, <a href="https://github.com/Engelberg/instaparse">Instaparse</a>, which transforms the query string into an <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">abstract syntax tree</a> (AST). This representation can be easily translated into corresponding MongoDB library calls, which in turn are executed to return the results.</p>
<p>Monglorious is written in Clojure, which I&apos;ve found to be a great match for language parsing and evaluation. And it&apos;s leveraging <a href="http://clojuremongodb.info/">Monger</a> for the underlying MongoDB calls.</p>
<h1 id="whatsnext">What&apos;s next?</h1>
<p>A Java interop layer is top priority, if only to expand the audience who might consider using it. Clojure itself provides some Java class generation, but I&apos;ll probably end up writing a custom wrapper for a more idiomatic feel.</p>
<p>Another feature I&apos;m considering is the ability to provide post-parse, pre-execution function hooks. Basically, allowing the calling code to specify a function that gets called with the AST as an argument, before execution. The function could then inspect, modify, and/or reject the execution. Some of the uses I had in mind: restricting the full set of queries to a subset; enforcing permissions for different collections; aliasing collections or column names; even rewriting queries to some degree.</p>
<p>And finally, Monglorious currently only supports querying data. For completion, it would be nice to expand to insert/update/delete operations, although it&apos;s harder to imagine a compelling use case for these.</p>
<p><strong>Check it out here:</strong> <a href="https://baumandm.github.io/monglorious/">https://baumandm.github.io/monglorious/</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>