You read that title right.
www.danott.co is a Rails application deployed to Netlify.
More specifically, it’s a static site generated by Rails. Whenever I push to my git repo, Netlify runs bin/rails build and a fresh site is deployed in under a minute.
I recently migrated my site from Jekyll deployed on GitHub Pages to Gatsby deployed on Netlify. The pain points of my Jekyll setup were all self-induced. I had a cobbled-together setup for managing JavaScript and stylesheets, and I was trying to do things the system wasn’t built to do. I wanted first-class support for modern front-end tooling and dynamic build-time data.
As an example pain point, the links page is driven by my Pinboard bookmarks, and I wanted to redeploy whenever I added a new bookmark. Gatsby’s build-time fetching of data sources paired with Netlify’s build hooks fit this need nicely.
I’d been rolling with this for a few months, but something felt off. Don’t get me wrong, I like Gatsby well enough. Working with React components that are hydrated by GraphQL is pretty cool, and the hot reloading of every change is a developer’s dream. It’s the single-page-app-by-default architecture that didn’t mesh with my values.
See, I came up in the time of progressive enhancement and “the semantic web”. While some in this community take these principles to an unhelpfully religious level, I do believe in the main sentiment. For my personal website, I want to be delivering static HTML, with a little bit of style and JavaScript sprinkled in.
Rails is really good at rendering HTML with a little bit of style and JavaScript sprinkled in. It’s also the hammer I’m most comfortable with, so I fully recognize I’m making my personal website look like a nail. 🔨
I’ve implemented a build script that is invoked with bin/rails build. This script is a composition of a few smaller steps:
1. bin/rails import
2. bin/rails webpacker:compile
3. bin/rails html:build
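Tying them together is ordinary Rake. Here’s a minimal sketch of what that composing task could look like; the file location and exact wiring are my illustration, not necessarily the real script.

# lib/tasks/build.rake -- a sketch, assuming the three tasks above are defined elsewhere.
desc "Generate the static site"
task :build do
  %w[import webpacker:compile html:build].each do |step|
    Rake::Task[step].invoke
  end
end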
The idea for step one is inspired by Gatsby. I built a small class that hits the Pinboard JSON endpoint and stores the data locally for filtering, rendering, and so on.
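A minimal sketch of what such an importer could look like; the class name, endpoint, and storage path here are assumptions for illustration, not the actual implementation.

require "net/http"
require "json"

# Hypothetical importer: fetch all bookmarks from Pinboard and write
# them to a local JSON file for the build to render from.
class PinboardImport
  ENDPOINT = "https://api.pinboard.in/v1/posts/all"

  def initialize(auth_token:, destination: Rails.root.join("data", "bookmarks.json"))
    @auth_token = auth_token
    @destination = destination
  end

  def call
    uri = URI(ENDPOINT)
    uri.query = URI.encode_www_form(auth_token: @auth_token, format: "json")
    File.write(@destination, Net::HTTP.get(uri))
  end
end

The bin/rails import task would then boil down to something like PinboardImport.new(auth_token: ENV["PINBOARD_TOKEN"]).call.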
Step two is standard fare for deploying a modern Rails application, static or otherwise.
Step three is where the magic happens. Reaching back into the history of Rails, you’ll discover the actionpack-page_caching gem. This gem writes the response body of a Rails controller action to an HTML file on the filesystem. That file can then be served directly by a web server, bypassing the Rails stack entirely. I still have a hard time imagining how this gem could be practical in most Rails applications, but it does exactly what I need for generating a static site.
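In use, the gem boils down to marking actions as page-cached. A rough sketch; the controller and action names are placeholders, not my actual ones:

class PostsController < ApplicationController
  # Writes the rendered response for these actions to disk
  # (e.g. public/posts.html) whenever they are rendered.
  caches_page :index, :show
end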
So I built a tiny class called the Crawler. This class is responsible for keeping track of all the pages we want to generate, and then generating them. (Another Gatsby inspiration, mimicking the createPage API.)
class Crawler
  attr_reader :paths

  def initialize(paths: [])
    @paths = paths
  end

  def call
    Hash[paths.map { |path| visit(path) }]
  end

  def visit(path)
    env = Rack::MockRequest.env_for(path).merge("HTTP_HOST" => "www.danott.co")
    rack_response = Rails.application.call(env)
    [path, rack_response.first]
  end
end
# An oversimplification of the pages registered.
Crawler.new(paths: %w[/links /poetry /posts]).call
# => { "/links" => 200, "/poetry" => 200, "/posts" => 200 }
I’m returning a hash of requested paths to response codes so I can test that my build script is working!
class CrawlerTest < Minitest::Test
  def test_smoke_test
    responses = Crawler.new(paths: all_the_paths).call
    assert responses.values.all? { |code| code == 200 }
  end
end
In development I have page caching turned off, and the development experience is like any other Rails app. In production I have page caching turned on, and the crawler generates everything in the build step. For now, I’m very happy to have the trusty hammer of a majestic monolith powering my static site.
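For the curious, that toggle is just standard per-environment Rails configuration. A rough sketch; the cache directory here is my guess at where the crawler’s output should land, not necessarily what the real config uses:

# config/environments/development.rb -- caching off, normal Rails dev loop.
Rails.application.configure do
  config.action_controller.perform_caching = false
end

# config/environments/production.rb -- caching on, pages written under public/.
Rails.application.configure do
  config.action_controller.perform_caching = true
  config.action_controller.page_cache_directory = Rails.public_path
end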
Published: 2019-12-17