Matt Moriarity

December 2018


I think I've got basic support for receiving Webmentions working on mattmoriarity.com. At the very least, replies from Micro.blog should show up like comments on posts. This was a lot more work than I expected it to be.


I can tell it's unusually rainy in Austin right now because I'm starting to get annoyed by all the Dark Sky notifications.


I've been mostly ignoring my email and letting it pile up while on vacation and it feels weird.


Very concerned: I listened to MBMBaM for the first time today, and now Instagram is suggesting I follow the McElroy brothers. That's creepy.


I finished the main story of the Spider-Man game. I was struck by how wholesome this game is, mainly in its relationships. There's a lot of good in this story.

There's also some stuff after the credits that is really adorable.


Watching The Great British Baking Show has really improved my holiday season.


Anti-capitalism on an individual scale misses the point completely. The whole benefit comes from living in a society without the structural problems and perverse incentives of capitalism.




I think Spider-Man might be one of the few games I try to 100%. What an impressive game.



Me: I guess I'll see what this Kevin Spacey video is.

...three seconds later...

Me: "Oh fuck you" *closes YouTube*



Having async/await support in Node.js has drastically improved the experience of writing server-side JavaScript. It was clever to make it essentially sugar over promises.
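A tiny illustration of that sugar (the function names here are my own): these two functions are equivalent, and both return a Promise.

```javascript
// Equivalent ways to chain on a promise. The async/await version is
// essentially sugar over the .then() chain above it.
function greetWithPromises(name) {
  return Promise.resolve(name).then((n) => `Hello, ${n}!`);
}

async function greetWithAwait(name) {
  const n = await Promise.resolve(name);
  return `Hello, ${n}!`;
}
```

Calling either one gives you a Promise back, not a plain value, which is what makes them interchangeable with existing promise-based code.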


Starting to work on Webmention support for my blogging engine so I can get comments back on the site.



Microblogging with Serverless

I've been microblogging here on mattmoriarity.com for the last year. I like posting short little thoughts (I don't often have the attention span or the time for long posts like this one), and I've been using Twitter for that for over a decade now. But these days, I prefer to post everything first to my own site and then syndicate it to other places like Twitter.

When I set up this site, I used WordPress. It was, and still is, the best free off-the-shelf tool for the job, with a community of plugins and people doing IndieWeb and microblogging things. I've used it successfully for this purpose for a year now, but recently I found myself wanting something different. I wanted a little more control over my microblogging workflow.

At the same time, I've also recently become interested in AWS Lambda and other AWS offerings. AWS has a really generous free tier. After making a small project with Lambda and being impressed with how easy it was to build, I started to wonder if I could build my own blogging engine with Lambda.

As a matter of fact, I could and I did! This site is now running on my new AWS-powered blogging engine. I figure it may be interesting to others how I went about putting this all together, so here goes.

S3 for static site hosting

Initially, my plan was to see how well I could render a website dynamically using Lambda functions. This would have been an ordinary dynamic website, just running in a serverless environment.

The plan changed when I realized that S3 can be used to serve a static website. There are two pieces that come together to make this a really good solution:

  • If you name your S3 bucket after a domain name, you can use a CNAME DNS record to point that domain at the bucket's website endpoint.
  • Beyond just serving files, S3 has explicit support for serving websites. You can configure your bucket to serve index.html files when requesting a directory and serve a particular page for errors.
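That website configuration boils down to a small JSON document attached to the bucket. This is the shape the S3 PutBucketWebsite API expects (the error page name here is just an example):

```json
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "404.html" }
}
```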

Beyond that, S3 is really cheap! It's not actually part of the AWS Free Tier, but that's okay. The normal costs of operating a website on S3 are minimal. It costs less than $0.01 to serve 10,000 GET requests, so even for a high traffic site, you won't be paying much.

One thing you don't get from S3 is HTTPS. S3 buckets serving websites only serve content over HTTP, but if you're interested in serving your site over HTTPS, you have some options.

One is AWS CloudFront, which puts your S3 bucket on a CDN protected by an AWS-issued certificate. You lose some of the special website behavior from S3 like serving index.html, and propagating changes to the CDN can be slow, so I don't like this option much. I ended up using Cloudflare, which runs your site through its caching proxies and serves it over HTTPS for you.

In addition to serving the generated web pages for the site, I also use S3 to store any uploaded photos, as well as the templates that are used to generate the pages.

DynamoDB for document storage

AWS has many different database offerings, but DynamoDB is the only one that is always part of the AWS Free Tier. Even if there were other options, though, DynamoDB is a pretty compelling choice for a lot of applications, including this one. DynamoDB is a NoSQL database with design goals for distributed scaling that greatly exceed what I need for storing the contents of a blog or two. It does, however, let me store a bunch of documents that have somewhat unpredictable structures.

I'm using DynamoDB to store a few different kinds of data:

  • Some configuration data for the site: title, author, ping URLs, etc. This is all in one config document.
  • Every static page on the site as its own document.
  • Every post published to the site as its own document. Rather than choose my own schema for posts, I've decided to embrace microformats for how I store my posts. I use property names that match the ones specified for microformats, and I support storing any unknown properties so I can decide what to do with them later.
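As an illustration (this exact document is my own example, following the standard microformats2 JSON shape of a `type` array plus a `properties` map of arrays), a stored post might look like:

```json
{
  "type": ["h-entry"],
  "properties": {
    "content": ["Starting to work on Webmention support for my blogging engine."],
    "published": ["2018-12-01T12:00:00Z"],
    "category": ["indieweb"]
  }
}
```

Unknown properties just become extra keys under `properties`, which is what makes it safe to store them now and decide what to do with them later.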

The free tier of DynamoDB is limited in two different ways: storage capacity and throughput. You get 25GB of storage for free, which is way more than I need, especially since media content is all stored directly in S3.

Throughput is a scarcer resource. AWS measures throughput based on how much data you have to scan through when querying. Using the free tier effectively requires giving some thought to how to query for exactly what you need, and if you're used to RDBMSs as I am, you might be surprised by the limitations. I'm hoping to write another post about the specifics of how I've approached this. I think I've ended up with a pretty good solution that avoids querying for more data than is needed.
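As a sketch of what that looks like in practice (the table name and key schema here are hypothetical, not from my actual engine): with posts keyed by site and publish date, a key condition reads only the matching items, so you consume read capacity for just the data you need rather than paying to scan the whole table.

```javascript
// Hypothetical schema: posts keyed by blogId (partition key) and
// published date (sort key). The key condition narrows the read to
// one month of posts instead of scanning everything.
const queryParams = {
  TableName: "posts",
  KeyConditionExpression: "blogId = :blog AND published BETWEEN :from AND :to",
  ExpressionAttributeValues: {
    ":blog": "mattmoriarity.com",
    ":from": "2018-12-01",
    ":to": "2018-12-31",
  },
  ScanIndexForward: false, // newest posts first
  Limit: 20,
};
// These params would be handed to the DocumentClient's query method.
```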

Lambda and API Gateway

I use Lambda functions for two main groups of dynamic behavior:

  • Adding or updating the content in the database using an HTTP API based on the Micropub specification
  • Generating the static content in response to database changes and uploading it to S3

I really love using Lambda for this kind of work. Its model makes it really easy to focus on the unique work that your code needs to do. It's been very pleasant not to give much thought to processes or HTTP servers or how to scale them.

The pricing model is also very well suited to this use case. Lambda functions are billed per request: if you're not responding to a request, you're not paying. Since I'm not blogging constantly, and because the site is statically generated, most of the time I'm not actually making requests. Lambda is very efficient for this kind of workload. The free tier gives you a whopping 1,000,000 requests per month, which is more than enough for how often I'll be querying my API.

Lambda functions alone do not an HTTP API make. Something has to call those functions in response to HTTP requests, and that's what API Gateway is for. At its most basic, API Gateway lets you define which HTTP requests will be handled by which Lambda functions. It also provides a proxy that constructs an event payload for your Lambda function that includes useful information from the HTTP request in a structured form.
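Here's a rough sketch of a handler behind that proxy (the Micropub handling is drastically simplified, and the specific checks are my own example; the event and response shapes are the ones the proxy integration uses):

```javascript
// API Gateway's proxy integration hands the function a structured event
// (httpMethod, headers, body, ...) and expects a structured response
// (statusCode, headers, body) back.
const handler = async (event) => {
  if (event.httpMethod !== "POST") {
    return { statusCode: 405, body: "Method Not Allowed" };
  }

  const post = JSON.parse(event.body || "{}");
  if (!post.type || post.type[0] !== "h-entry") {
    return { statusCode: 400, body: "Expected an h-entry" };
  }

  // ...write the post document to DynamoDB here...
  return { statusCode: 202, body: "" };
};

// Lambda invokes whatever the module exports as the configured handler.
exports.handler = handler;
```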

I also use another feature of API Gateway: custom authorizers. These let me define a particular Lambda function for validating authorization tokens for my API. I use this both to ensure a valid token is provided and to give my other functions information about who the token belongs to and what access it grants. Once I have this authorizer, it's easy to attach it to the different API endpoints that need to be protected.
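A custom authorizer is itself a Lambda function: it receives the caller's token and returns an IAM policy plus an optional context object that API Gateway passes along to the backing functions. A minimal sketch (the token check and principal names are stand-ins for real validation):

```javascript
// For a TOKEN authorizer, API Gateway passes the raw Authorization
// header as authorizationToken and the called endpoint as methodArn.
// The returned policy allows or denies execute-api:Invoke.
const authorize = async (event) => {
  const token = (event.authorizationToken || "").replace(/^Bearer /, "");
  const valid = token === "letmein"; // stand-in for real token validation

  return {
    principalId: valid ? "matt" : "anonymous",
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: valid ? "Allow" : "Deny",
          Resource: event.methodArn,
        },
      ],
    },
    // Arbitrary context made available to the backing functions.
    context: { scope: valid ? "create update" : "" },
  };
};

exports.handler = authorize;
```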


I'm very happy with how this engine has turned out, and I'm glad that I chose to run it on AWS. It's proving to be a very fun project, and one of the more cost-effective ways that I can blog the way I want to.



I'm having a hard time reconciling how much I hate Jeff Bezos with how much I'm loving AWS.


I drank two beers last night and woke up with a big dumb headache. Feels good to be approaching 30. Real good. 😕



Lesson learned: always set up a dead letter queue for your SQS queues. Otherwise, you may be like me and end up using 300,000+ message receives in a few days for what should be a really low traffic app.
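For reference, the fix is a single queue attribute: a RedrivePolicy pointing at a dead letter queue (the ARN and count below are examples). It's passed to SQS as a JSON string via SetQueueAttributes, and after maxReceiveCount failed receives, SQS moves the message to the DLQ instead of redelivering it endlessly.

```json
{
  "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-app-dlq",
  "maxReceiveCount": 5
}
```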


Last night I found out a friend of mine doesn't like season 3 of The Good Place and now I need a new friend.


I love the Elm architecture so much. It's honestly hard to imagine writing web apps any other way. I wish this was how native iOS development worked too. Purely functional programming can really excel for UIs if you have the right abstractions.
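The heart of that architecture is small enough to sketch in plain JavaScript (my own illustration, not Elm code): all state lives in a model, and a pure update function maps a message and the current model to a new model. Elm adds static types, a runtime, and managed effects on top of this.

```javascript
// All application state in one model.
const init = { count: 0 };

// A pure update function: no mutation, no side effects, just
// (message, model) -> new model.
function update(msg, model) {
  switch (msg.type) {
    case "Increment":
      return { ...model, count: model.count + 1 };
    case "Decrement":
      return { ...model, count: model.count - 1 };
    default:
      return model;
  }
}
```

Because update is pure, every state transition is trivially testable, which is a big part of why the pattern excels for UIs.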



I'm surprised by how painless it is to upgrade our kubeadm-based clusters to v1.13.1. Much less scary than I thought it would be.



The hype for Kelsey Hightower's KubeCon talk has been intense. I've gotta watch this as soon as I can!


It is terrifyingly windy in Austin right now. If I go outside, I might just blow away, never to be seen again.


I went ahead and watched this talk; it's really good. Rick clarified a lot of things that were still fuzzy for me, and really demonstrated the power DynamoDB has if you design for it.


The replies to this tweet are a treasure trove of entitlement. We need to learn as a culture that buying a good doesn't mean we have a blank check to make demands on the producer.


I'm looking forward to getting to catch up on all the KubeCon talks once slides or video are online.


A Philosophy of Software Design is great, but the part that stuck with me most is about comments. I used to avoid them, thinking my code spoke for itself. I've completely changed. I now find many cases where I can make my code clearer with a few words.


This largely rings true for me. I'm working with (and digging) Kubernetes for my job in infrastructure, but for my own stuff, Serverless has a lot to offer me as a developer.


Finding that the less I care about catching a Pokémon, the more likely the catch is to succeed, even under the worst conditions.


I've just published my first npm package: nunjucks-s3-loader. It's a small piece extracted from my blogging engine. It lets you easily use nunjucks to render templates stored in S3.


I think our dogs get more cuddly and adorable in the winter because they're cold.


I wish I could be at KubeCon this week! I got into this stuff a bit too recently for that to be practical though. Hopefully I'll be able to attend next year!




Every NoSQL database I've used so far has been based around a similar idea: what if you never had to think about how you put data in, and instead devoted all of your energy to figuring out how you're ever going to get it back out?


The 12" MacBook is good for a lot of things, but compiling TypeScript is not one of them.


Well, I thought I was going to publish my first npm package today, but I accidentally unpublished it and locked myself out for the next day. I guess I'll just publish it tomorrow.




Absolutely loved the Janets episode of The Good Place this week! Season 3 has been incredible.


Betty and Jughead (mostly Betty) are absolutely carrying Riverdale; it's not even close.




Kubernetes in the streets, Serverless in the sheets.



I have apparently somehow found a way to make Node run out of memory while trying to compile my TypeScript code. Not sure how I'm managing that.