I have a large number of keyword bookmarks that I've been building up over the years in whichever browser I happen to be using at the time. One of the ones I particularly enjoy is `text`: `data:text/html, <html contenteditable>`. What that does is open a new tab where I can take notes, completely locally. It's really handy… but there's one big problem: I often accidentally close the tab and lose whatever I had been typing. So I decided to take a few minutes to write up a simple extension of the idea that would save the data to localStorage.
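The extension itself isn't shown in this excerpt, but one hedged sketch of the idea: a tiny local `note.html` you open instead of the bare data: URL, with inline handlers that mirror the note into localStorage (the `notes` key is an arbitrary name, and note that modern browsers give data: URLs an opaque origin where localStorage may not be available, which is why this is a file instead):

```html
<!DOCTYPE html>
<html>
<!-- Restore any saved note on load; save on every edit -->
<body contenteditable
      onload="this.innerHTML = localStorage.notes || ''"
      oninput="localStorage.notes = this.innerHTML">
</body>
</html>
```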

# Adding HSTS to Redirects in Apache

TLDR:

<IfModule mod_headers.c>
    # Use 'always' so the header is also set on non-2XX responses (such as redirects);
    # if a backend already sets the header, unset it first to avoid duplicates
    Header always set Strict-Transport-Security "max-age=16070400; includeSubDomains"
</IfModule>

Slightly longer version:

HTTPS everywhere is a worthwhile goal. Even when you have traffic that isn’t super interesting or sensitive by itself, the fact that you’re encrypting it makes traffic that really does need to be encrypted safer against tools that grab all of the encrypted traffic they can to decrypt later if/when possible.

One of the downsides of using HTTPS, though, is that without certain things in place, many users will still type domain.com in their address bar from time to time, completely missing out on the https://. While you can immediately redirect them, that very first request is a risk: if a man-in-the-middle happens to catch it, they can downgrade the entire connection.

Enter HTTP Strict Transport Security (HSTS). It's an HTTP header that you can send on the first HTTPS connection you establish with a compatible client. Once you've done that, any further requests (until the header's TTL expires without being renewed) will go to https:// no matter what the user types. That solves the first-request problem for all later sessions… but it still doesn't fix the very first time, before the client has ever seen the header. So how do you fix that?
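The excerpt stops at the question, but the usual answer is HSTS preloading: browsers ship with a built-in list of hosts they will only ever contact over HTTPS, and you can submit a domain at hstspreload.org once the header carries the required directives and a long enough max-age (the two-year value below is commonly recommended; treat the exact figure as an assumption and check the current submission requirements):

Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"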

# Counting and Sizing S3 Buckets

A long time ago in a galaxy far far away, I wrote up a script that takes an AWS S3 bucket, counts how many objects are in it, and calculates its total size. While you could get some of this information from billing reports, at the time there just wasn't a good way to get it other than that. The only way was to iterate through the entire bucket, summing as you go. If you have buckets with millions of objects (or more), this could take a while.

Basically:

import datetime

import boto
import boto.exception
import boto.s3.deletemarker
import humanize

conn = boto.connect_s3()
for bucket in sorted(conn.get_all_buckets(), key=lambda bucket: bucket.name):
    print(bucket.name)
    try:
        total_count = 0
        total_size = 0
        start = datetime.datetime.now()

        for key in bucket.list_versions():
            # Skip deleted files
            if isinstance(key, boto.s3.deletemarker.DeleteMarker):
                continue

            total_count += 1
            total_size += key.size

        print('-- {count} files, {size}, {time} to calculate'.format(
            count=total_count,
            size=humanize.naturalsize(total_size),
            time=humanize.naturaltime(datetime.datetime.now() - start).replace(' ago', ''),
        ))
    except boto.exception.S3ResponseError as error:
        print('-- unable to list bucket: {error}'.format(error=error))

# Creating a temporary SMTP server to 'catch' domain validation emails

One problem that has come up a time or two is dealing with email-based domain validation (specifically, in this case, for the issuance of TLS certificates) on domains that aren't actually configured to receive email. Yes, in a perfect world it would be easier to switch to DNS-based validation (since we have to have control of the DNS for the domain anyway; we'll need it later), but let's just assume that's not an option. So, how do we 'catch' the activation email so we can prove we can receive email on that domain?
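The post's actual approach isn't shown in this excerpt, but to make the idea concrete, here is a minimal sketch of a throwaway catcher using only the Python standard library. It speaks just enough SMTP to accept a single message and hand you its body. In real use you'd also need the domain's MX record pointed at this host, and you'd listen on port 25 (which requires root); the `catch_one_email` name and port 2525 are illustrative:

```python
import socket


def catch_one_email(host='0.0.0.0', port=2525):
    """Speak just enough SMTP to accept one message, then return its body."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()
    reader = conn.makefile('rb')

    def reply(line):
        conn.sendall(line.encode('ascii') + b'\r\n')

    reply('220 catcher ready')
    lines, in_data = [], False
    while True:
        raw = reader.readline()
        if not raw:
            break
        line = raw.decode('utf-8', 'replace').rstrip('\r\n')
        if in_data:
            if line == '.':  # end-of-message marker
                in_data = False
                reply('250 OK: message accepted')
            else:
                lines.append(line)
        elif line.upper().startswith(('HELO', 'EHLO')):
            reply('250 catcher')
        elif line.upper() == 'DATA':
            in_data = True
            reply('354 End data with <CR><LF>.<CR><LF>')
        elif line.upper() == 'QUIT':
            reply('221 Bye')
            break
        else:
            # MAIL FROM, RCPT TO, anything else: just say yes
            reply('250 OK')
    reader.close()
    conn.close()
    server.close()
    return '\n'.join(lines)
```

Run it, trigger the validation email, and the activation link arrives on stdout (or in the returned string) instead of a mailbox.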

# Deep Dreams with Fish and Docker

DeepDream is a research project originally from Google that gives you a look into how neural networks see the world. The results are fascinating, bizarre, and a lot of fun to play with, but it's a bit of work getting DeepDream running on your own machine.

Luckily, GitHub user saturnism has put together a lovely Docker-based tool that will do just that for us: deepdream-cli-docker. Unfortunately, the commands are still a bit long. Let’s clean it up a bit and add the ability to dream about non-JPGs (animated GIFs especially!).

# Generating zone files from Route53

Recently I found myself wanting to do some analysis on all of our DNS entries stored in AWS's Route53, for security reasons (specifically to prevent subdomain takeover attacks; I'll probably write that up soon). In doing so, I realized that while Route53 has the ability to import a zone file, it's not possible to export one.

To some extent, this makes sense. Since Route53 supports ALIAS records (which can automatically determine their values based on other AWS products, such as an ELB changing its public IP) and those aren’t actually ‘real’ DNS entries, things will get confused. But I don’t currently intend to re-import these zone files, just use them. So let’s see what we can do.
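The post's actual script isn't in this excerpt, but the shape of the idea can be sketched: boto3's `list_resource_record_sets` returns plain dicts, and everything except ALIAS records maps cleanly onto zone-file syntax, so ALIAS entries can simply be emitted as comments. The `records_to_zone` and `fetch_records` helpers below are hypothetical names, and the fetch assumes you already know the hosted zone ID:

```python
def records_to_zone(records, default_ttl=300):
    """Render Route53 record-set dicts as zone-file lines.

    ALIAS records aren't 'real' DNS entries, so emit them as comments.
    """
    lines = []
    for record in records:
        if 'AliasTarget' in record:
            lines.append('; ALIAS {name} -> {target}'.format(
                name=record['Name'],
                target=record['AliasTarget']['DNSName']))
            continue
        for value in record.get('ResourceRecords', []):
            lines.append('{name}\t{ttl}\tIN\t{type}\t{value}'.format(
                name=record['Name'],
                ttl=record.get('TTL', default_ttl),
                type=record['Type'],
                value=value['Value']))
    return '\n'.join(lines)


def fetch_records(zone_id):
    """Yield every record set in a hosted zone (requires AWS credentials)."""
    import boto3
    paginator = boto3.client('route53').get_paginator('list_resource_record_sets')
    for page in paginator.paginate(HostedZoneId=zone_id):
        for record in page['ResourceRecordSets']:
            yield record
```

Piping `records_to_zone(fetch_records('ZXXXXXXXXXXXXX'))` to a file then gives you something any zone-file-aware tool can analyze.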

# Automatic self-signed HTTPS for local development

From time to time when doing web development, you need to test something related to HTTPS. In some cases, the application you’re writing already supports HTTPS natively and that’s no problem. But more often (and probably better, in my opinion) is the case when you have another service (be it an AWS ELB or an nginx layer) that will terminate the HTTPS connection for you so your application doesn’t have to know how to speak HTTPS.

In those cases, how can you test functionality that specifically interacts with HTTPS?

Today I will show you autohttps, a thin nginx layer that uses Docker and a self-signed certificate to automatically put an HTTPS proxy in front of your application.
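The excerpt stops before the details, but the core of such a proxy can be sketched as a small nginx server block. The certificate paths, the openssl command, and the upstream port 8000 are all assumptions for illustration, not autohttps's actual configuration:

```nginx
# Generate a throwaway certificate first, e.g.:
#   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
#     -subj "/CN=localhost" -keyout selfsigned.key -out selfsigned.crt
server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate     /etc/nginx/certs/selfsigned.crt;
    ssl_certificate_key /etc/nginx/certs/selfsigned.key;

    location / {
        # Forward to the plain-HTTP app; tell it the original scheme
        proxy_pass http://host.docker.internal:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```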

# Making Fish Shell Smile

When working in a shell, from time to time, I need to know if a command succeeded or failed. Sometimes, it’s easy:

$ make noise
make: *** No rule to make target `noise'.  Stop.

Sometimes, less so:

$ grep frog podcasts.json > podcasts-about-frogs.txt

Since, alas, I don’t have any podcasts about frogs, that command would fail silently. But that’s fixable!

$ grep frog podcasts.json > podcasts-about-frogs.txt

$ # Bash/Zsh
$ echo $?
1

$ # Fish
$ echo $status
1