Nate on Tech

Making an application serverless

A while ago I was looking for a realistic example of a “legacy application” to use as the basis for a series on modernizing an application, both by refactoring the architecture into microservices and by rebuilding the application on AWS cloud-native services.

Good news. I found it.

For personal reasons, I had already decided that I would refactor the Taiga back-end. The main reason is that I would like to self-host it, but I can’t afford to host it in a scalable, highly available way unless I take advantage of native services such as Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. It took me embarrassingly long to put two and two together: I could refactor the Taiga back-end for my own needs while also offering my methodology as an example of how enterprises can follow the same process and architecture to achieve similar results.

Going 100% serverless

The idea behind this effort is to go 100% serverless. I understand that at the end of the day even Amazon Simple Storage Service (S3) actually runs on servers, so what I mean by serverless is that in this entire architecture I do not manage any server infrastructure as it is traditionally known: no virtual machines (VMs) to manage, no operating systems to manage, and no application servers or other dependencies.

I will be using 100% native AWS services.

Aside from the benefit of less management, the solution is more secure and will cost far less to operate. The development and deployment of the solution will be different, and arguably a bit more complex at first, but a secondary goal of this exercise is to demonstrate how automation and well-designed processes can make development and deployment much easier.

Differences in architecture

Shown here is a traditional 3-tier architecture that has been adapted to best practices for deploying such an application in AWS. It uses AWS Auto Scaling, pinned to two servers per tier for now, across two Availability Zones.

Traditional web app in AWS

The database is hosted on Amazon RDS (Relational Database Service), a managed service, so even this architecture eases the maintenance burden and the risk of managing database software and the operating systems the databases run on.

Here is the architecture going all serverless:

Serverless web app in AWS

Amazon CloudFront sits in front of the site’s static assets hosted in S3 (the Taiga front-end). Rather than a middle tier of application servers as in a typical n-tier architecture, the API will be Amazon API Gateway REST endpoints that proxy AWS Lambda functions.

For this exercise, I will write the functions in Java. The original Taiga back-end is written in Python, and Python works well in AWS Lambda, but I wanted to use Java both as a learning experience and to make the result substantially different from the Taiga code base, to prove the point of this exercise. A minimal sketch of such a function follows.
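To make the API Gateway–to–Lambda idea concrete, here is a rough sketch of what one of those Java functions could look like using the standard aws-lambda-java-events proxy types. The class name, route handling, and JSON body are placeholders of mine, not actual Taiga endpoints.

```java
// A minimal sketch of an API Gateway proxy handler in Java.
// Class name and response body are hypothetical placeholders.
package example.taiga;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

import java.util.Map;

public class ProjectsHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request,
                                                      Context context) {
        // API Gateway passes the HTTP method, path, headers, and body through
        // in the proxy event; the function returns a status code and body.
        if ("GET".equals(request.getHttpMethod())) {
            return new APIGatewayProxyResponseEvent()
                    .withStatusCode(200)
                    .withHeaders(Map.of("Content-Type", "application/json"))
                    .withBody("{\"projects\": []}");
        }
        return new APIGatewayProxyResponseEvent().withStatusCode(405);
    }
}
```

With the proxy integration, API Gateway does no request mapping of its own; each function decides how to interpret the path and method, which keeps the gateway configuration thin.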

Amazon S3 is used to store artifacts such as ticket attachments. Finally, I’m going to target DynamoDB for the database. I’m keeping that decision relatively fluid, as it may change later, and I will address it in a future post if necessary.
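Since the data model is still fluid I won’t commit to a schema yet, but a quick sketch shows how little plumbing the Java side needs with the AWS SDK for Java v2. The table name and attribute names below are assumptions for illustration only.

```java
// A minimal sketch of writing and reading an item with the AWS SDK for Java v2.
// The table name ("taiga-issues") and attributes are hypothetical.
package example.taiga;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

import java.util.Map;

public class IssueStore {

    private static final String TABLE = "taiga-issues"; // placeholder table name
    private final DynamoDbClient dynamo = DynamoDbClient.create();

    public void saveIssue(String issueId, String title) {
        // Store a single item keyed by issueId.
        dynamo.putItem(PutItemRequest.builder()
                .tableName(TABLE)
                .item(Map.of(
                        "issueId", AttributeValue.builder().s(issueId).build(),
                        "title", AttributeValue.builder().s(title).build()))
                .build());
    }

    public Map<String, AttributeValue> loadIssue(String issueId) {
        // Fetch the item back by its key.
        return dynamo.getItem(GetItemRequest.builder()
                .tableName(TABLE)
                .key(Map.of("issueId", AttributeValue.builder().s(issueId).build()))
                .build())
                .item();
    }
}
```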

Serverless blocking and tackling, too

Not shown in either architecture is the continuous integration/continuous deployment (CI/CD) process that I am going to build and use, including the git repository, the build “server”, and deployment process.

For this entire setup, I’m going to use the AWS native tools: AWS CodeCommit for the managed git repository, AWS CodeBuild for doing the builds, AWS CodeDeploy to do the deployments, and AWS CodePipeline to manage all these in pipelines.

I will use AWS CloudFormation to provision all of these services in my account, from the git repository to the DynamoDB tables and S3 buckets to all of the IAM roles and policies that will be required by the various services.
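I expect to drive most deployments from the pipeline or the AWS CLI, but for completeness, a stack can also be launched programmatically. The sketch below uses the AWS SDK for Java v2; the stack name and template file name are placeholders.

```java
// A sketch of creating a CloudFormation stack from Java (SDK v2).
// Stack name and template file name are hypothetical.
package example.taiga;

import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
import software.amazon.awssdk.services.cloudformation.model.Capability;
import software.amazon.awssdk.services.cloudformation.model.CreateStackRequest;

import java.nio.file.Files;
import java.nio.file.Path;

public class StackLauncher {

    public static void main(String[] args) throws Exception {
        // Read the template that declares the repo, tables, buckets, and roles.
        String template = Files.readString(Path.of("taiga-serverless.yaml"));

        try (CloudFormationClient cfn = CloudFormationClient.create()) {
            cfn.createStack(CreateStackRequest.builder()
                    .stackName("taiga-serverless")
                    .templateBody(template)
                    // Required because the template creates IAM roles and policies.
                    .capabilities(Capability.CAPABILITY_NAMED_IAM)
                    .build());
        }
    }
}
```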

Not only will my infrastructure be serverless, but so will all of the services needed to host my code, build my solution, and deploy it.

Summary

This is the start of a relatively open-ended series that will demonstrate, in concrete steps, how to modernize a monolithic back-end into managed, serverless services and create a cloud-native version of the application.

Next up: Creating a scalable CI/CD pipeline