For the past few years I’ve been taking part in Eric Wastl’s
Advent of Code, a coding challenge that provides a 2-part problem each
day from the 1st of December through to Christmas Day. The puzzles are always interesting — especially as they
get progressively harder — and there’s an awesome community of folks that share their solutions in a huge variety of languages.
To up the ante somewhat, Shane and I usually have a little informal
competition to see who can write the most performant code. This year, though, Shane went massively overboard and
wrote an entire benchmarking suite
and webapp to measure our performance, which I took as an invitation and personal challenge to try to beat
him every single day.
For the past three years I’d used Python exclusively, as its vast standard library and awesome syntax lead to
quick and elegant solutions. Unfortunately it stands no chance, at least on the earlier puzzles, of beating the
speed of Shane’s preferred language of PHP. For a while I consoled myself with the notion that once the
challenges get more complicated I’d be in with a shot, but after the third or fourth time that Shane’s solution
finished before the Python interpreter even started, I decided I’d have to jump ship. I started using Nim.
DNS-over-TLS is a fairly recent specification described in RFC 7858, which enables DNS clients to communicate with servers over a
TLS (encrypted) connection instead of requests and responses being sent in plain text. I won’t ramble on about
why it’s a good thing that your ISP, government, or neighbour can’t see your DNS requests…
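At the wire level the change is small: the query bytes are identical to ordinary DNS, but they travel over a TLS connection on port 853, prefixed with a two-byte length just like DNS over TCP. Here’s a minimal sketch in Python; the resolver address (Cloudflare’s 1.1.1.1) is just an example, and a real client would handle partial reads and response parsing properly.

```python
import socket
import ssl
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS wire-format query (QTYPE 1 = A record)."""
    # Header: ID, flags (RD bit set), 1 question, 0 answer/authority/additional.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    question = qname + b"\x00" + struct.pack("!HH", qtype, 1)  # QCLASS 1 = IN
    return header + question

def query_over_tls(name: str, server: str = "1.1.1.1") -> bytes:
    """Send a query over TLS on port 853 and return the raw response."""
    ctx = ssl.create_default_context()
    with socket.create_connection((server, 853), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=server) as tls:
            q = build_query(name)
            # RFC 7858 reuses the TCP framing: a 2-byte length precedes the message.
            tls.sendall(struct.pack("!H", len(q)) + q)
            length = struct.unpack("!H", tls.recv(2))[0]
            return tls.recv(length)
```

An eavesdropper on the path sees only a TLS session to port 853, not the names being resolved.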
I use an EdgeRouter Lite from Ubiquiti Networks at
home, and recently configured it to use DNS-over-TLS for all DNS queries. Here’s how I did it.
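The broad shape of the setup is to run a local stub resolver that speaks DNS-over-TLS upstream, and point the EdgeRouter’s built-in dnsmasq forwarder at it. As a sketch only — this assumes stubby is installed on the router, and the listen port is illustrative:

```shell
configure
# Hand dnsmasq a raw option: forward upstream queries to the local
# stubby listener, which makes the actual DNS-over-TLS connection.
set service dns forwarding options server=127.0.0.1#5453
commit
save
exit
```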
I was thinking about switching DNS providers recently, and found myself whoising random domains
and looking at their nameservers. One thing led to another and I ended up doing a survey of the nameservers of
the top 100,000 sites according to Alexa.
Most popular providers
The top providers by a large margin were, unsurprisingly, Cloudflare and AWS Route 53. Between them they
accounted for around 30% of the top 100k sites.
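The aggregation itself is straightforward once you have each domain’s NS records: collapse every nameserver hostname to a provider and count one vote per site. A small sketch with made-up sample data (a real survey resolves NS records for each domain, and needs provider-specific rules on top of the naive grouping):

```python
from collections import Counter

# Hypothetical survey results: each domain mapped to its NS records.
ns_records = {
    "example.com": ["ada.ns.cloudflare.com", "bob.ns.cloudflare.com"],
    "example.org": ["ns-1.awsdns-01.com", "ns-2.awsdns-02.net"],
    "example.net": ["dns1.p01.nsone.net"],
}

def provider_of(ns: str) -> str:
    """Collapse a nameserver hostname to its provider domain (last two labels).
    Naive on purpose: e.g. Route 53 spreads servers across many awsdns-NN
    domains, so a real survey normalises those into one provider."""
    return ".".join(ns.lower().rstrip(".").split(".")[-2:])

# One vote per site: deduplicate providers within each domain first, so a
# site with two Cloudflare nameservers still counts Cloudflare once.
per_site = Counter(
    p
    for servers in ns_records.values()
    for p in {provider_of(ns) for ns in servers}
)
print(per_site.most_common(3))
```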
I’ve been spending some time recently setting up automated testing for our collection of Android apps and
libraries at work. We have a mixture of unit tests, integration tests, and UI tests for most projects, and
getting them all to run reliably and automatically has posed some interesting challenges.
Running tests on multiple devices using Spoon
Spoon is a tool developed by Square that handles distributing
instrumentation tests to multiple connected devices, aggregating the results, and making reports.
As part of our continuous integration we build both application and test APKs, and these are pushed to the
build server as build artefacts. A separate build job then pulls these artefacts down to a Mac Mini we have in
the office, and executes Spoon with a few arguments:
Spoon finds all devices, deploys both APKs on them, and then begins the instrumentation tests. We use two
physical devices and an emulator to cover the form factors and API versions that are important to us; if any test
fails on any of those devices, Spoon will return an error code and the build will fail.
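The invocation is roughly this shape (the APK paths are illustrative); `--fail-on-failure` is what makes Spoon exit non-zero when any device fails a test, which in turn fails the build:

```shell
java -jar spoon-runner.jar \
  --apk app-debug.apk \
  --test-apk app-debug-androidTest.apk \
  --output spoon-output \
  --fail-on-failure
```

The HTML report Spoon writes to the output directory breaks results down per device, which makes it easy to spot failures that only occur on one API level or form factor.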
I recently came across a useful tool on GitHub called ssh-audit. It’s a small Python script that connects to an SSH server,
gathers a bunch of information, and then highlights any problems it has detected. The problems it reports range
from potentially weak algorithms right up to known remote code execution vulnerabilities.
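Usage is about as simple as it gets — point the script at a host, optionally with a port (the hostname here is a placeholder):

```shell
python ssh-audit.py example.com
# A server listening on a non-default port:
python ssh-audit.py example.com:2222
```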
I recently noticed that I’d accidentally lost my previous GPG private key — whoops. It was on a drive that I’d
since formatted and used for a fair amount of time, so there’s no hope of getting it back (but, on the plus side,
there’s also no risk of anyone else getting their hands on it). I could have created a new one in a few seconds
and been done with it, but I decided to treat it as an exercise in doing things properly.
Background: GPG? Yubikey?
GPG or GnuPG is short for Gnu Privacy Guard, which is a suite of
applications that provide cryptographic privacy and authentication functionality. At a basic level, it works in a
similar way to HTTPS certificates: each user has a public key which is shared widely, and a private key that is
unique to them. You can use someone else’s public key to encrypt messages so only they can see them, and use your
own private key to sign content so that others can verify it came from you.
A Yubikey is a small hardware device that offers two-factor
authentication. Most Yubikey models also act as smartcards and allow you to store OpenPGP credentials on them.
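At the command line, the two core operations described above look like this (the recipient address and filenames are placeholders):

```shell
# Encrypt a file with someone's public key, so only they can read it:
gpg --encrypt --recipient alice@example.com secrets.txt

# Sign a file with your own private key, so others can verify its origin:
gpg --sign --output report.txt.sig report.txt

# Anyone with your public key can then check the signature:
gpg --verify report.txt.sig
```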
One of my favourite hobbyhorses recently has been the use of HTTPS, or lack thereof. HTTPS is the thing that
makes the little padlock appear in your browser, and has existed for over 20 years. In the past, that little
padlock was the exclusive preserve of banks and other ‘high security’ establishments; over time its use has
gradually expanded to most (but not all) websites that handle user information, and the time is now right for it
to become ubiquitous.
Over the past few weeks I’ve gradually been migrating services from running in LXC containers to Docker
containers. It takes a while to get into the right mindset for Docker - thinking of containers as basically
immutable - especially when you’re coming from a background of running things without containers, or in “full”
VM-like containers. Once you’ve got your head around that, though, it opens up a lot of opportunities: Docker
doesn’t just provide a container platform, it turns software into discrete units with a defined interface.
With all of your software suddenly having a common interface, it becomes trivial to automate a lot of things
that would be tedious or complicated otherwise. You don’t need to manage port forwards because the containers
just declare their ports, for example. You can also apply labels to the application containers, and then query
the labels through Docker’s API.
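For example (the label names here are ones I’ve made up for illustration):

```shell
# Attach arbitrary metadata to a container at run time:
docker run -d --label com.example.role=web --label com.example.env=prod nginx

# Query containers by label via the CLI, which wraps the same API:
docker ps --filter "label=com.example.role=web"

# Read a container's labels back out:
docker inspect --format '{{ json .Config.Labels }}' <container-id>
```

Anything that can talk to the Docker API — a reverse proxy, a monitoring agent, a backup script — can use those labels to discover and configure itself around your containers.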
I recently picked up a couple of Belkin’s WeMo Insight
Switches to monitor power usage for my PC and networking equipment. WeMo is Belkin’s home automation brand,
and the switches allow you to toggle power on and off with an app, and monitor power usage.
The WeMo Android app is pretty dismal. It’s slow, doesn’t look great, and crashed about a dozen times during
the setup process for each of my two switches. It also doesn’t provide much information at all about power: you
can see average power draw and current power draw, and that’s basically it.
Belkin has provided an option to e-mail yourself a spreadsheet with historical power data, and can even do it
on a regularly scheduled basis, but that’s not really a nice solution if you want up-to-date power stats. Even if
you were happy with data arriving in batch, having to get hold of an e-mail attachment and parse out a weirdly
formatted spreadsheet doesn’t make for easy automation. It also relies on Belkin supporting the service
indefinitely, which isn’t necessarily going to happen.
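Fortunately the switches expose a local UPnP/SOAP interface you can poll directly. The endpoint, port, and field layout below come from community reverse engineering (projects like ouimeaux), not official Belkin documentation, so treat them as assumptions:

```python
import urllib.request

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:GetInsightParams xmlns:u="urn:Belkin:service:insight:1"/>
  </s:Body>
</s:Envelope>"""

def fetch_insight_params(host: str, port: int = 49153) -> str:
    """POST a SOAP request to the switch and return the raw InsightParams string."""
    req = urllib.request.Request(
        f"http://{host}:{port}/upnp/control/insight1",
        data=SOAP_BODY.encode(),
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": '"urn:Belkin:service:insight:1#GetInsightParams"',
        },
    )
    xml = urllib.request.urlopen(req, timeout=5).read().decode()
    # Crude extraction for a sketch; real code should parse the XML properly.
    return xml.split("<InsightParams>")[1].split("</InsightParams>")[0]

def current_power_watts(params: str) -> float:
    """InsightParams is pipe-separated; the 8th field is reported to be
    instantaneous power draw in milliwatts."""
    return int(params.split("|")[7]) / 1000.0
```

Polling this every few seconds and shipping the numbers to your own time-series store sidesteps both the batch e-mails and the dependency on Belkin’s service.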
Sense is a little device that sits by your bedside and, in conjunction with a
little ‘pill’ attached to your pillow, monitors your sleeping patterns and any environmental conditions that
might hamper them. Android and iOS apps show you your sleep history, and offer suggestions for improvements.
Sense was Kickstarted in August
2014, raising over 2.4 million US dollars, and shipped to backers in mid-2015. The campaign blurb included this:
Building with Sense
You’ll always have access to your data via our API. Take it, play with it, graph it, do whatever you want
with it. It’s yours. That’s important to us.
We enjoy tinkering with and building on-top of other products we like. Sense will let you have that
We’d love to hear your thoughts on what you might want to build with Sense, and how you could directly
interact with the hardware, and the data it collects.
Sounds great! But a year after shipping, there’s no sign of an API, and some of us who enjoy tinkering are
getting a bit restless…