MySQL client in official Ruby docker image

I’ve got a Sinatra app that uses MySQL, and it works on my laptop! It didn’t work in a Docker container though; instead it was hemorrhaging this terribly unhelpful message when using prepared statements:

Using unsupported buffer type: 245 (parameter: 7)

Wow, was that ever useless. It took a while, but I got to the bottom of it. The official Ruby Docker image is built on Debian Jessie and ships with MySQL 5.5, but I’m using MySQL 5.7 (yay for JSON support!). The mysql2 gem is a native extension that links against libmysqlclient. It turns out the MySQL 5.5 client doesn’t play nice with a MySQL 5.7 server.


The fix was to pull in the MySQL distribution source list and install the latest libmysqlclient-dev. I put the list in a gist if you want it (Oracle makes it super cumbersome by packaging the list as a deb; honestly, Oracle…). Afterwards, add this to your Dockerfile:

ADD mysql.list /etc/apt/sources.list.d/mysql.list
RUN apt-get update
RUN apt-get install -y --force-yes mysql-common libmysqlclient-dev
RUN gem install mysql2
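As a minor variation (my own tweak, not from the original post), the same steps can be collapsed into a single RUN layer so the apt cache doesn’t get baked into the image:

```dockerfile
# Assumes mysql.list (from the gist) sits next to the Dockerfile
ADD mysql.list /etc/apt/sources.list.d/mysql.list
RUN apt-get update && \
    apt-get install -y --force-yes mysql-common libmysqlclient-dev && \
    gem install mysql2 && \
    rm -rf /var/lib/apt/lists/*
```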

And hooray, it works! If Googling this baffling error message brought you here, don’t forget to like, comment, and subscribe.


Using Amazon Lambda with API Gateway

Amazon Lambda is a “serverless compute” service from AWS that lets you run Node.js, Python, or other code. They have extensive documentation, but I found a disconnect: I was struggling to get API Gateway to map requests into Lambda functions. I put together a YouTube video, but some of the highlights are below. The source code for the project is on GitHub.



The Lambda function I demo is very simple, returning a URL based on a given input. When creating it, filter for “hello-world”, and adjust the following:

Name bsdfinder
Description BSD Finder
Role lambda_basic_execution

For the others, just accept the defaults. The IAM Role can be tweaked to allow your lambda function to access S3 as well.

Your method needs to accept event and context objects. The event object is how properties are fed into your function from API Gateway, and the context object is how execution is stopped and response values are passed out.

Test it

Once it’s saved, click Actions → Configure test event. Use this test data:

{
  "bsd": "open"
}

The execution should succeed, and the result should look like:

{
  "code": 302,
  "location": ""
}

If you got this far, the lambda function works.

API Gateway

The API Gateway fronts requests from the internet and passes them into the Lambda function, with a workflow:

Method Request → Integration Request → Lambda → Integration Response → Method Response

Method Request         Specify which properties (query string, headers) will be accepted
Integration Request    Map method request properties to the Lambda function
Lambda                 Properties are passed in via the event object and passed out with the context object
Integration Response   Map Lambda context properties to the response
Method Response        Response finally gets returned to the user agent
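As a concrete example of the Integration Request step (my own sketch, not from the video): a body mapping template can lift a query string parameter into the event object that Lambda receives, using API Gateway’s $input helper. The parameter name bsd matches the test event above.

```json
{
  "bsd": "$input.params('bsd')"
}
```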


I deliberately kept this document thin. Check out the YouTube video to see it all come together.

CoreOS on VMWare using VMWare GuestInfo API

CoreOS bills itself as “Linux for Massive Server Deployments”, but it turns out it’s excellent for smaller deployments as well. CoreOS was intended to run in big cloud providers (Azure, AWS, DigitalOcean, etc.) or OpenStack-backed clouds. I’ve been running CoreOS for a while now, on premises in VMware. It used to be kind of a pain: clone the template; interrupt the boot to enable autologin; set a password for “core”; reboot again; re-mount the disk read-write; paste in a hand-crafted cloud-config.yml. Fortunately for me (and you!), VMware has an API to inject settings into guests, and CoreOS has added support for using those settings in coreos-cloudinit. Information about the supported properties is here.

My new process is:

  1. Clone the template to a new VM
  2. Inject the config through the guest vmx file
  3. Boot the VM

It’s all automated though, so mostly I get other productive work (or coffee drinking) done while the VM is conjured into existence.

The short version is that you base64-encode your cloud-config and inject it, along with any additional parameters. For me it looks like:

Property                          Value                      Explanation
hostname                          coreos0                    The hostname, obviously 🙂
interface.0.name                  ens192                     The name of the interface to assign id #0 for future settings. The kernel names the first NIC ens192 in VMware.
interface.0.role                  private                    This is my internal (and only) NIC.
interface.0.dhcp                  no                         Turn off DHCP. I like it on, most like it off.
interface.0.ip.0.address                                     IP address, in CIDR notation
interface.0.route.0.gateway                                  Default gateway
interface.0.route.0.destination                              Route destination (0.0.0.0/0 for the default route)
dns.server.0                                                 DNS server
coreos.config.data                [some ugly base64 string]  Base64-encoded cloud-config.yml
coreos.config.data.encoding       base64                     Tells cloudinit how to decode it
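The encoding step itself is trivial. A sketch (the sample cloud-config here is a stand-in for the real one):

```shell
# Stand-in cloud-config; yours comes from your own template
printf '#cloud-config\nhostname: coreos0\n' > cloud-config.yml

# -w0 disables line wrapping (GNU base64) so the value fits on one VMX line
CLOUD_CONFIG_B64=$(base64 -w0 cloud-config.yml)

echo "guestinfo.coreos.config.data = \"${CLOUD_CONFIG_B64}\""
echo 'guestinfo.coreos.config.data.encoding = "base64"'
```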

Ok, so this is all great, but how do you get it into VMware? You have two choices: you can manually edit the .vmx file (boring!) or you can use PowerShell. The script I use is on GitHub, but the workflow is basically:

  1. Clone the VM
  2. Connect to the datastore and fetch the vmx file
  3. Add the guestinfo to the VMX file
  4. Upload the VMX file back to the datastore
  5. Boot the VM


The script will pick up a cloud-config.yml, base64 encode it, and inject it for you. Check out the source on GitHub to learn more. If you’re looking at the CoreOS documentation on the VMware backdoor, note that you need to put “guestinfo.” in front of all the properties; for example, guestinfo.dns.server.0. The VMware RPC API only passes properties that start with guestinfo.

This is how it looks when it’s written out to the VMX:

guestinfo.coreos.config.data.encoding = "base64"
guestinfo.interface.0.dhcp = "no"
guestinfo.coreos.config.data = "I2Nsyb2JAd............WJ1bnR1MA0K"
guestinfo.dns.server.0 = ""
guestinfo.interface.0.ip.0.address = ""
guestinfo.interface.0.route.0.gateway = ""
guestinfo.interface.0.role = "private"
guestinfo.interface.0.route.0.destination = ""
guestinfo.interface.0.name = "ens192"
guestinfo.hostname = "coreos0"

Wanna see it in action?

Splunk forwarders on CoreOS with JournalD

The team at TNWDevLabs started a new effort to develop an internal SaaS product. It’s a greenfield project, and since everything is new, it let us pick up some new technology and workflows, including neo4j and nodejs. In my role as DevOps Engineer, the big change was running all the application components in Docker containers hosted on CoreOS.

CoreOS is a minimalist version of Linux, basically life support for Docker containers. It is generally not recommended to install applications on the host, which raises a question: “How am I going to get my application logs into Splunk?” With a more traditional Linux system, you would just install a Splunk forwarder on the host and pick up files from the file system.


With CoreOS, you can’t really install applications on the host, so there is the possibility of putting a forwarder in every container.


This would work, but it seems wasteful to have a number of instances of Splunk running on the same host, and it doesn’t give you any information about the host.
CoreOS leverages systemd, which has an improved logging facility called JournalD. All system events (updates, etcd, fleet, etc.) from CoreOS are written to JournalD. Apps running inside Docker containers generally log to stdout, and all those events are sent to JournalD as well. This improved logging facility means getting JournalD into Splunk is the obvious solution.

The first step was to get the Splunk Universal Forwarder into a Docker container. There were already some around, but I wanted to take a different approach: instead of trying to manage .conf files and getting them into the container, I leverage the Deployment Server feature already built into Splunk. The result is a public image called tnwinc/splunkforwarder. It takes two parameters passed in as environment variables: DEPLOYMENTSERVER and CLIENTNAME. These two parameters are fed into $SPLUNK_HOME/etc/system/local/deploymentclient.conf when the container is started. This is the bare minimum to get the container up and talking to the deployment server.
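Assuming the container writes the two variables straight into deploymentclient.conf (the exact template isn’t shown here), the generated file would look something like this; the stanza and key names follow Splunk’s standard deploymentclient.conf layout:

```ini
[deployment-client]
clientName = <CLIENTNAME>

[target-broker:deploymentServer]
targetUri = <DEPLOYMENTSERVER>
```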


Setting up CoreOS

Running CoreOS, the container is started as a service. The service definition might look like:

ExecStart=/usr/bin/docker run -h %H -e DEPLOYMENTSERVER=<host:port> -e CLIENTNAME=%H -v /var/splunk:/opt/splunk --name splunk tnwinc/splunkforwarder

-h %H                         Sets the hostname inside the container to match the CoreOS host. %H expands to the CoreOS hostname when the .service file is processed.
-e DEPLOYMENTSERVER           The host and port of your deployment server
-e CLIENTNAME=%H              This is the friendly client name.
-v /var/splunk:/opt/splunk    Exposes the real directory /var/splunk as /opt/splunk inside the container. This directory persists on disk when the container is restarted.
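Pulling the pieces together, a complete unit file might look like the sketch below. This is my reconstruction, not the original .service file; the deployment server address is a placeholder:

```ini
[Unit]
Description=Splunk Universal Forwarder container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run (the "-" ignores failure)
ExecStartPre=-/usr/bin/docker rm -f splunk
ExecStart=/usr/bin/docker run -h %H \
  -e DEPLOYMENTSERVER=splunk-ds.example.com:8089 \
  -e CLIENTNAME=%H \
  -v /var/splunk:/opt/splunk \
  --name splunk tnwinc/splunkforwarder
ExecStop=/usr/bin/docker stop splunk

[Install]
WantedBy=multi-user.target
```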

Now that Splunk is running and has a place to live, we need to feed data to it. To do this, I set up another service, which uses the journalctl tool to export the journal to a text file. This is required because Splunk can’t read JournalD’s native binary format:

ExecStart=/bin/bash -c '/usr/bin/journalctl --no-tail -f -o json > /var/splunk/journald'

This command dumps everything already in the journal out in JSON format (more on that later), then follows the journal. The process doesn’t exit and continues to write to /var/splunk/journald, which appears as /opt/splunk/journald inside the container.

Also note that since the journald file will continue to grow, I have an ExecStartPre directive that trims the journal before the export happens:

ExecStartPre=/usr/bin/journalctl --vacuum-size=10M

Since I’m not appending, the file is replaced every time the service starts. You may want to consider a timer that restarts the service on a regular interval, based on usage.
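Combined, the export service might look like this sketch (the unit name and layout are my assumptions; the two directives are the ones described above):

```ini
# journald-export.service (name is an assumption)
[Unit]
Description=Export JournalD to JSON for the Splunk forwarder
After=systemd-journald.service

[Service]
# Trim the journal so the fresh export starts small
ExecStartPre=/usr/bin/journalctl --vacuum-size=10M
# Dump the whole journal as JSON, then keep following it
ExecStart=/bin/bash -c '/usr/bin/journalctl --no-tail -f -o json > /var/splunk/journald'
Restart=always
```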

Get the data into Splunk

This was my first experience with Deployment Server; it’s pretty slick. The clients picked up the config and reached out to the deployment server. My app lives in $SPLUNK_HOME/etc/deployment-apps/journald/local. I’m not going to re-hash the process of setting up a deployment server; there is great documentation on how to do it.

My inputs.conf simply monitors the file inside the container:

[monitor:///opt/splunk/journald]
sourcetype = journald

The outputs.conf then feeds it back to the appropriate indexer:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server =

Do something useful with it

Getting the data into Splunk isn’t special, but the props.conf is fun, so I’m covering it separately. Writing the journal out as JSON structures the data in a very nice format, and the following props.conf helps Splunk understand it:

[journald]
KV_MODE = json
pulldown_type = 1

KV_MODE = json            Magically parse JSON data. Thanks, Splunk!
TIME_PREFIX               A bit of regex that pulls the timestamp out of the __REALTIME_TIMESTAMP field
TIME_FORMAT               Standard strptime format for seconds
MAX_TIMESTAMP_LOOKAHEAD   JournalD uses GNU time, which is in microseconds (16 digits). This setting tells Splunk to use only the first 10.
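A quick illustration of why the lookahead works (the timestamp value below is made up):

```shell
# __REALTIME_TIMESTAMP is microseconds since the epoch: 16 digits
TS=1446076800123456
# The first 10 digits are plain Unix seconds, which is all Splunk needs
SECS=${TS:0:10}
echo "$SECS"   # → 1446076800
```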

Once that app was published to all the agents, I could query the data out of Splunk. It looks great!


This is where the JSON output format from journald really shines. I get PID, UID, the command line, executable, message, and more. For me, coming from a Windows background, this is the kind of logging we’ve had for 20 years, now finally in Linux, and easily analyzed with Splunk.

The journal provides a standard transport, so I don’t have to think about application logging on Linux ever again. The contract with the developers is: You get it into the journal, I’ll get it into Splunk.

etcd 2.0 cluster with proxy on Windows Server 2012

I got interested in etcd while working on CoreOS for a work project. I’m no fan of Linux, but I like what etcd offers and how simple it is to set up and manage. Thankfully, the etcd team produces a Windows build with every release. I created a Chocolatey package and got it published, then used it to set up the cluster. The service is hosted by nssm, which also has a Chocolatey package and the extremely helpful nssm edit tool for making changes to the service config. Etcd has great documentation here.

Etcd is installed by running:

choco install -y etcd --params="<service parameters>"

Where service parameters is:

etcd --name "%COMPUTERNAME%" ^
--initial-advertise-peer-urls "http://%COMPUTERNAME%:2380" ^
--listen-peer-urls "http://%COMPUTERNAME%:2380" ^
--listen-client-urls "http://%COMPUTERNAME%:2379," ^
--advertise-client-urls "http://%COMPUTERNAME%:2379" ^
--discovery "<your_token_here>"

Or you can install it in proxy mode by running:

choco install -y etcd --params="--proxy on --listen-client-urls --discovery <your_token_here>"

Proxy mode is especially useful because it lets applications running on a machine be ignorant of the etcd cluster. Applications connect to localhost on a known port, and etcd in proxy mode manages finding and using the cluster. Even if the cluster is running on CoreOS, running etcd in proxy mode on Windows is a good way to help Windows apps leverage etcd.

Watch a demo of the whole thing here:

Migrating from Linux to Windows

Five years ago, I set up a home server on Debian 5. Since then it’s been upgraded to Debian 6, then Debian 7, been ported to newer hardware, and had drives added. Today I migrated it to Windows Server 2012 R2. After five years running on Linux, I had simply had enough. I’ll get into the why in a little bit, but first, some how.


I didn’t have enough space anywhere to back up everything to external storage, so I did it on the same disk. The basic process was as follows:

  • Boot a Knoppix live DVD and use GNU Parted to resize the partition to make room for Windows
  • Install Windows, then install ext2fsd so I could mount the Linux partition
  • Move as much as I could from the Linux partition to Windows
  • Boot off Knoppix, shrink the Linux partition as much as possible
  • Boot back into Windows and expand that partition into the reclaimed space
  • Repeat until there was no EXT3 left

Needless to say it took a while, but it worked pretty well. Ext2fsd is pretty slick; I use it when I have to dual-boot as well, along with the NTFS driver in Linux to access my Windows partitions.


So now on to the why. Why would someone ever stop using Linux, when Linux is obviously “better”, right? The fact is, Linux hasn’t really matured in five years. Debian (and its illegitimate child, Ubuntu) is a major distro, but still there are no good remote/central management or instrumentation tools; file system ACLs are kludgy and not enabled by default; single sign-on and integrated authentication is non-existent or inconsistently implemented; and distros still can’t agree on fundamental things like how to configure the network. It’s basically a mess, and I got tired of battling with it. Some of my specific beefs are:

  • VBox remote management is junk; Hyper-V is better in that space. VBox has a very low footprint and excellent memory management, but remote administration is not a strength.
  • Remote administration in general is better on Windows. An SSH prompt is not remote administration. Being able to command a farm of web servers to restart in sequence, waiting for one to finish before starting the next: that is remote administration. Cobbling something together in Perl/Python/Bash/whatever to do it is not the same. Also, Nagios is terrible. Yes, really, it is. Yes, it is.
  • Linux printing is pretty terrible. I had CUPS set up, but had to use it in raw mode with the Windows driver on my laptop. Luckily Windows supports IPP; I never got print sharing going with Samba.
  • Just a general feeling of things being half finished and broken. Even in GParted, you have to click out of a text box before the OK button will be enabled. Windows GUIs are better, and if you’re a keyboard jockey like me, you can actually navigate the GUI really easily with some well-established keyboard shortcuts (except for Chrome, which is its own special piece of abhorrent garbage).

Linux is a tool, and like any tool it has its uses. I still run Debian VMs in Hyper-V for playing with PHP and Python, but even then, after development I would probably run those apps on Windows servers.


Building a custom Debian LiveCD

These are the steps I used to build a Debian 7 LiveCD on a Debian 7 host system. They’re based heavily on the instructions posted by Will Haley in this post.

For more details about customizing the environment, read about:

  • Syslinux – this is the suite of tools that lets you boot Linux from just about anything
  • Genisoimage – this is the ISO image creation tool, with parameters to make it bootable
  • Readyroot – a chroot management script I posted and use to manage these environments

The first thing to do is set up a chroot environment. I have a generic post on the subject here. If you are following along, I’m making the following assumptions:

  • You have created /usr/src/readyroot
  • Your image is located in /usr/src/mycd_live_boot/chroot
  • Your current working directory is /usr/src/mycd_live_boot

Some other files, like the initramfs, also go under mycd_live_boot; that’s why the chroot lives in a subdirectory.

Setup the host system

First, you need to add some packages to your host system:

apt-get install syslinux squashfs-tools genisoimage rsync

Customize your LiveCD

Enter the live environment

/usr/src/readyroot chroot open

Now you need to install live-boot

apt-get install live-boot

Now, that’s technically all you need, but it leaves you with a fairly useless system. To add some of the packages you’ve come to expect, run:

apt-get install network-manager net-tools openssh-client ftp pciutils usbutils rsync nano

This is a good time to add whatever other tools you need. You can always re-enter the chroot and make changes.

Before you leave, it’s a good idea to clean up with

apt-get clean

Now just exit to return to the host root, and detach it with

/usr/src/readyroot chroot close

Setting up the CD

Create two directories to contain the filesystem archives

mkdir -p image/{live,isolinux}
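Once the later copy steps are done, the working tree ends up roughly like this (my summary of the layout, not from the original post):

```
/usr/src/mycd_live_boot/
├── chroot/          # the Debian system that becomes the squashfs
└── image/
    ├── isolinux/    # isolinux.bin, menu.c32, isolinux.cfg
    └── live/        # vmlinuz1, initrd1, filesystem.squashfs
```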

Copy the kernel and initrd. This is assuming you only have a single version 3 kernel.

cp chroot/boot/vmlinuz-3.* image/live/vmlinuz1 && 
cp chroot/boot/initrd.img-3.* image/live/initrd1

Create a file at image/isolinux/isolinux.cfg with this content

UI menu.c32

prompt 0
menu title Debian Live

timeout 300

label Debian Live 3.2.0-4
menu label ^Debian Live 3.2.0-4
menu default
kernel /live/vmlinuz1
append initrd=/live/initrd1 boot=live

Copy the ISO boot files

cp /usr/lib/syslinux/isolinux.bin image/isolinux/ && 
cp /usr/lib/syslinux/menu.c32 image/isolinux/

Creating the ISO

Create the SquashFS file

test -f image/live/filesystem.squashfs && 
rm -f image/live/filesystem.squashfs; 
mksquashfs chroot image/live/filesystem.squashfs -e boot

Create the ISO file

cd image && 
genisoimage -rational-rock \
    -volid "Debian Live" \
    -cache-inodes \
    -joliet \
    -full-iso9660-filenames \
    -b isolinux/isolinux.bin \
    -c isolinux/ \
    -no-emul-boot \
    -boot-load-size 4 \
    -boot-info-table \
    -output ../debian-live.iso . && 
cd ..

Now burn the ISO and you’re ready to go. It’s easier to attach it to a VM to test.