Nate on Tech

You (Probably) Don't Need a CMS

You’re reading this post, served up as HTML, from a website of mine that uses no servers and costs less than five dollars a month to host. I don’t lie awake at night wondering if my OS is patched, I don’t worry about whether my content management system (CMS) has the latest security patch, and I don’t particularly worry about high availability or disaster recovery. The site could scale to millions of users without my intervention. All for less than sixty bucks a year.

And all I do to deploy new posts to my site is type git push, which sends my content to my source code repository. When I push to the central repository, a pipeline starts the “build”, which in this case is a process that uses Jekyll to generate the static HTML for the site and copies it out to Amazon S3. I have an Amazon CloudFront distribution in front of the S3 bucket, which is probably overkill considering the traffic I’m likely to get, but on the other hand I like knowing that I probably never have to worry about scaling.

I use AWS CloudFormation to provision all of this, including the S3 bucket, the CloudFront distribution, the Amazon Route 53 entries, and the entire continuous integration/continuous delivery (CI/CD) pipeline that uses AWS CodePipeline and AWS CodeBuild. This is the second site that I’ve built with the same template.
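To give a flavor of what such a template declares, here is a trimmed CloudFormation sketch of just the bucket and the distribution. The logical names and properties are illustrative, not copied from my actual template, which also wires up the Route 53 records and the pipeline:

```yaml
Resources:
  SiteBucket:                         # holds the generated static HTML
    Type: AWS::S3::Bucket
  SiteDistribution:                   # CDN in front of the bucket
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: s3-origin
            DomainName: !GetAtt SiteBucket.RegionalDomainName
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
```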

You can view the template at my GitHub repository. The architecture looks like the image here:

Website in S3

I had considered an actual, real CMS, but as the type of person who does not want to spend a lot of extra time managing servers, having no servers to manage really appealed to me. And, to be honest, I don’t plan on needing any of the other benefits of a CMS. I simply found a theme that I like, typed this first post in Markdown, and committed my changes. About fifteen minutes after running git push, my content was up on my site. That’s quick enough for me.

And it’s actually quick enough for a fairly broad range of use cases. Many sites don’t need a CMS. Static HTML and JavaScript are just fine for many cases, and hosting that static content doesn’t require web servers and load balancers anymore.

That makes a CMS nothing more than over-complicated machinery that bloats the cost of a site, provides little of value, and adds nothing but risk from an availability and security standpoint.

For dynamic content, using a Single Page Application (SPA) and web services works very well. In many cases, using Functions as a Service (FaaS) such as AWS Lambda fronted by Amazon API Gateway allows you to create web services that use no servers and, like the S3 bucket and CloudFront distribution, require no daily management.

When you get into huge numbers of transactions, FaaS isn’t always the most cost-effective solution, but it is the most cost-effective–by far–when transactions drop off. If your use case involves long periods of little activity, you should do the math to determine the best course of action for you.
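To make “do the math” concrete, here is a rough back-of-the-envelope calculator. The per-request and per-GB-second prices are illustrative assumptions roughly in line with published Lambda list prices, not quoted rates, and the free tier is ignored:

```python
def lambda_monthly_cost(requests, avg_ms, mem_gb,
                        per_million=0.20, per_gb_second=0.0000166667):
    """Rough monthly Lambda bill in USD.
    Prices are illustrative assumptions, not quoted rates."""
    request_cost = requests / 1_000_000 * per_million
    compute_cost = requests * (avg_ms / 1000) * mem_gb * per_gb_second
    return request_cost + compute_cost

# A quiet month: 100k requests at 100 ms each on 128 MB -> pennies.
quiet = lambda_monthly_cost(100_000, 100, 0.125)
# A busy month: 50M requests at 100 ms each on 128 MB.
busy = lambda_monthly_cost(50_000_000, 100, 0.125)
```

At the quiet end the bill is effectively nothing, which is the point of the paragraph above; at the busy end it climbs toward the price of an always-on server, which is where the comparison starts to deserve real numbers.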

I do not rejoice in other people having struggles, but I have to admit that whenever I read about security breaches in popular CMS solutions I both breathe a sigh of relief and wish that more companies would adopt static site generators and static site hosting solutions like S3 and CloudFront.

Not just for blogs

While I am using this architecture and build process for blogging, most corporate sites could use the same approach. I have a Dockerfile that I use locally that runs the same build steps as my CodeBuild project, so I can view all the changes before I push them. And since my CodePipeline uses the master branch as its source, I just use the GitFlow scheme for branching my site content. I write and preview my posts on separate branches and then merge them into master when I am happy with them. That way, my new posts are not pushed to the site until I merge them into the master branch and push to the central repository.

This model would allow corporate sites to be created, edited, and internally previewed easily before publishing, and it solves one of the hardest problems I’ve ever seen with corporate information sites–managing content in a CMS across different environments1.

Because I can push my code and have the build reach production within about 20 minutes, maximum, this approach is more than adequate for many companies whose information is relatively static, or changes only a few times a year or even a few times a month.

Using manual approval in the pipeline

I don’t have a manual approval step for my own blog site, but you can put one into the CodePipeline pipeline. You can use the manual approval step to “hold” the deployment to a production environment, as an example, and then have the pipeline send a notification to a business user with a URL for them to approve the site after previewing it in a staging environment. I am a little disappointed that CodePipeline does not seem to supply pre-made signed URLs for approving or declining the manual approval step, but it was easy enough to implement one as a webhook: an Amazon API Gateway REST endpoint proxies a small AWS Lambda function, written in Python, that records the approval.
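A sketch of what such a Lambda function might look like. The pipeline, stage, and action names are placeholders, and I am assuming the approval token and the reviewer’s decision arrive as query parameters on the webhook URL; the actual resolution happens through CodePipeline’s put_approval_result API:

```python
def build_result(action):
    """Map the webhook's ?action= parameter onto the result shape
    that CodePipeline's put_approval_result call expects."""
    status = "Approved" if action == "approve" else "Rejected"
    return {"status": status, "summary": f"{status} via webhook"}

def handler(event, context):
    import boto3  # imported here so the helper above is testable offline

    # API Gateway (proxy integration) delivers query parameters here.
    params = event.get("queryStringParameters") or {}
    result = build_result(params.get("action", "reject"))

    # Names below are placeholders; the real token comes from the
    # approval notification that CodePipeline sends out.
    boto3.client("codepipeline").put_approval_result(
        pipelineName="blog-site-pipeline",
        stageName="Production",
        actionName="ManualApproval",
        result=result,
        token=params["token"],
    )
    return {"statusCode": 200, "body": result["status"]}
```

Anything other than an explicit “approve” falls through to a rejection, which is the safer default for a publish gate.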

For many enterprises, manual approval in combination with GitFlow and a deployment pipeline that automatically deploys into a staging (preview) environment and then production will control leaks and allow the business to review the site and have the final say on when it is released. All this with the advantages of having no servers to maintain or get compromised, at less than $5 USD a month, while scaling to millions of users.

1 Ironically, most of the CMS systems that I have seen in corporate use in multi-environment scenarios do not do much of anything to actually make content management easier.