
Making the Switch: Private Servers to AWS

The Problem

Startup environments don’t tend to be the best place for running your own servers. Doing so costs time and money – and often expertise you may not have or can’t afford to hire – and with scalability issues and no uptime guarantees, it often feels like it isn’t worth the effort. Thankfully, modern cloud platforms – like AWS – offer a solution, with low maintenance costs and the high availability and scalability you won’t get managing your own infrastructure.


However, sometimes your system – in part or in whole – has been built to run on private servers, or without consideration of a modern serverless approach. So, what do you do in this case? How do you take an application built to run on a private server – and possibly connected to a privately managed database – and transfer it to managed infrastructure?

Our Approach


At Faslet, our platform was originally built as a Node.js application designed to connect to a PostgreSQL database. The original intention was to roll this out to a private server and run it as-is, but when we started working with it, we felt we should focus our resources on onboarding new customers, not on maintaining servers. So we came up with an approach to take this architecture and deploy it to AWS as a scalable solution.


Frontend content (Widget)

For the Widget itself, built as a Vue web component, we did the obvious thing: we deployed the build artefacts to an S3 bucket, which was wired up to a CloudFront distribution. This gave us a CDN, and with it scalability at very low cost. In fact, even after a few months of growing traffic, the S3 bucket and CloudFront distribution have cost $0.00.
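This wiring can be expressed as infrastructure as code with a tool like Pulumi (mentioned later in this article). The sketch below is illustrative, not our actual configuration – resource names and settings are assumptions you would adapt to your own project.

```typescript
import * as aws from "@pulumi/aws";

// Bucket hosting the built widget artefacts (names are hypothetical).
const bucket = new aws.s3.Bucket("widget-bucket", {
  website: { indexDocument: "index.html" },
});

// CloudFront distribution in front of the bucket's website endpoint.
const cdn = new aws.cloudfront.Distribution("widget-cdn", {
  enabled: true,
  defaultRootObject: "index.html",
  origins: [{
    originId: bucket.arn,
    domainName: bucket.websiteEndpoint,
    customOriginConfig: {
      originProtocolPolicy: "http-only",
      httpPort: 80,
      httpsPort: 443,
      originSslProtocols: ["TLSv1.2"],
    },
  }],
  defaultCacheBehavior: {
    targetOriginId: bucket.arn,
    viewerProtocolPolicy: "redirect-to-https",
    allowedMethods: ["GET", "HEAD"],
    cachedMethods: ["GET", "HEAD"],
    forwardedValues: { queryString: false, cookies: { forward: "none" } },
  },
  restrictions: { geoRestriction: { restrictionType: "none" } },
  viewerCertificate: { cloudfrontDefaultCertificate: true },
});

export const cdnUrl = cdn.domainName;
```

After each widget build, syncing the artefacts into the bucket is all that’s needed to publish; CloudFront handles the caching and global distribution.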

Database (PostgreSQL)


The approach for the database was a bit of a challenge. Ideally, we would have looked at something like DynamoDB instead of PostgreSQL, but the application had already been built, and some of the data really lends itself to a relational database, so we decided to keep this structure. However, we wanted to avoid running an unmanaged SQL instance either on EC2 or in RDS, so we decided to use a Serverless Aurora PostgreSQL instance. This gave us something scalable and managed by Amazon.

The primary disadvantage so far has been that you can’t just connect to it from anywhere: connections must come from inside the VPC it lives in. This has proved a minor inconvenience, as thus far we have rarely needed to manage the database directly. Aurora Serverless also handles scaling for you: you set a minimum and maximum capacity range, and it allocates capacity as needed. It’s not the cheapest approach (DynamoDB would be considerably cheaper), but it still comes in at under $200 a month for two environments, one of which serves customers 24/7. This can be further cost- and performance-optimized by adding ElastiCache in front of the database.
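A minimal Pulumi sketch of such a cluster might look like the following. The names, capacity bounds, and credentials are illustrative assumptions, not our actual settings.

```typescript
import * as aws from "@pulumi/aws";

// Illustrative Aurora Serverless (v1) PostgreSQL cluster.
const db = new aws.rds.Cluster("app-db", {
  engine: "aurora-postgresql",
  engineMode: "serverless",
  databaseName: "app",
  masterUsername: "app_admin",
  masterPassword: "change-me", // use a secrets manager in practice
  scalingConfiguration: {
    minCapacity: 2,  // floor on capacity units
    maxCapacity: 8,  // cap on automatic scale-up
    autoPause: true, // pause an idle (e.g. staging) environment to save cost
  },
  skipFinalSnapshot: true, // acceptable for a test environment only
});

export const dbEndpoint = db.endpoint;
```

Because the cluster only accepts connections from inside its VPC, the backend service (below) runs in the same VPC and reaches it via the exported endpoint.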

Backend service


Since the backend was intended to run on a private server, it wasn’t designed to run in a Lambda. Lambda has strict limits on response sizes and package sizes, and unfortunately, the application included some management software and external packages that exceed those limits. In the future, we hope to remove those dependencies and split the backend into Lambda functions, but in the interim, we needed a solution.
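The quotas in question – roughly a 6 MB synchronous response payload and a 250 MB unzipped deployment package, per AWS’s documentation at the time of writing – can be checked up front. A small sketch of such a pre-flight check (the function and constants are hypothetical helpers, not part of any AWS SDK):

```typescript
// AWS Lambda quotas as documented at the time of writing; verify current values.
const LAMBDA_MAX_RESPONSE_BYTES = 6 * 1024 * 1024;           // 6 MB synchronous response
const LAMBDA_MAX_UNZIPPED_PACKAGE_BYTES = 250 * 1024 * 1024; // 250 MB code + dependencies

interface FitResult {
  responseOk: boolean; // largest response fits the payload quota
  packageOk: boolean;  // unzipped bundle fits the package quota
}

// Check whether a workload's largest response and its bundle size fit in Lambda.
function fitsInLambda(responseBytes: number, unzippedPackageBytes: number): FitResult {
  return {
    responseOk: responseBytes <= LAMBDA_MAX_RESPONSE_BYTES,
    packageOk: unzippedPackageBytes <= LAMBDA_MAX_UNZIPPED_PACKAGE_BYTES,
  };
}

// e.g. a 10 MB report response with a 300 MB node_modules tree fails both checks.
console.log(fitsInLambda(10 * 1024 * 1024, 300 * 1024 * 1024));
```

In our case, it was the bundled management software that pushed the package past the limit, which is what ruled Lambda out for the time being.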


Enter Docker and ECS. Amazon makes it extremely easy to deploy a Docker container to the cloud, and with some help from an Application Load Balancer, it becomes very easy to scale. First, you push your Docker image to the Elastic Container Registry (ECR). Once it’s there, you set up a Fargate Task Definition and a Fargate Service that starts up some Fargate Tasks. With some wiring between these and the Application Load Balancer, you end up with a scalable backend running entirely on managed services.
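The ECR → Fargate → load balancer chain can be sketched with Pulumi’s Crosswalk (awsx) components, which bundle the image build, push, and service wiring. Treat this as a rough sketch: the resource names, ports, and sizes are assumptions, and the awsx API differs between major versions, so check the version you use.

```typescript
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

// Build the backend's Docker image locally and push it to a new ECR repository.
const repo = new awsx.ecr.Repository("backend-repo", { forceDelete: true });
const image = new awsx.ecr.Image("backend-image", {
  repositoryUrl: repo.url,
  context: "./backend", // directory containing the Dockerfile (hypothetical path)
});

// Application Load Balancer fronting the service.
const lb = new awsx.lb.ApplicationLoadBalancer("backend-lb");

// ECS cluster plus a Fargate Service that keeps two tasks running.
const cluster = new aws.ecs.Cluster("backend-cluster");
const service = new awsx.ecs.FargateService("backend-svc", {
  cluster: cluster.arn,
  desiredCount: 2,
  taskDefinitionArgs: {
    container: {
      name: "backend",
      image: image.imageUri,
      cpu: 256,
      memory: 512,
      portMappings: [{ containerPort: 80, targetGroup: lb.defaultTargetGroup }],
    },
  },
});

export const url = lb.loadBalancer.dnsName;
```

Once this is up, scaling is a matter of adjusting `desiredCount` (or attaching an auto-scaling policy) rather than provisioning servers.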

Pitfalls and Future

Other than the database, the largest cost that we have is the Application Load Balancers. For lower-traffic scenarios, these could be swapped out for an API Gateway, which has no hourly cost and gives you a million free requests a month in the Free Tier. Thankfully, AWS also provides 750 Application Load Balancer hours per month in the Free Tier, which should cover a single Application Load Balancer (excluding request costs). In our case, since we have two environments, we get the first half of the month for free. Note that these Free Tiers expire 12 months after you create your account.
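A back-of-the-envelope comparison makes the trade-off concrete. The prices below are illustrative assumptions (roughly an ALB’s ~$0.0225/hour base rate, ignoring LCU charges, versus an HTTP API’s ~$1.00 per million requests) – check current AWS pricing for your region before relying on them.

```typescript
// Monthly ALB cost: hours beyond any Free Tier allowance, at the hourly base rate.
// (LCU/request charges are deliberately ignored in this rough model.)
function albMonthlyUsd(hours: number, freeTierHours = 0, hourlyRate = 0.0225): number {
  return Math.max(0, hours - freeTierHours) * hourlyRate;
}

// Monthly HTTP API Gateway cost: requests beyond the free allowance, per million.
function httpApiMonthlyUsd(requests: number, freeRequests = 1_000_000, perMillionUsd = 1.0): number {
  return (Math.max(0, requests - freeRequests) / 1_000_000) * perMillionUsd;
}

// One always-on ALB over a 730-hour month, with the 750 Free Tier hours
// split across two environments (~375 hours each, as in our setup):
console.log(albMonthlyUsd(730, 375).toFixed(2));
// A low-traffic service doing 2M requests/month through an HTTP API:
console.log(httpApiMonthlyUsd(2_000_000).toFixed(2));
```

The crossover depends entirely on traffic: an idle ALB still accrues hours, while an idle API Gateway costs nothing, which is why API Gateway wins for low-traffic environments.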


Longer-term, this approach buys us time to build our own replacements for the management software in the backend. Once we’ve fully replaced that, we can migrate the backend from ECS to Lambda, splitting the different functions into separate Lambda functions with their own storage mechanisms – possibly eventually moving to DynamoDB, which is considerably cheaper at our volumes.

The Outlook

Thanks to technologies like Docker and ECS, moving your backend from a legacy private server to managed cloud infrastructure is relatively easy. Serverless SQL databases such as Aurora let you scale without having to rewrite your legacy SQL layer. Together, these services buy your organization time to build its customer base and rewrite parts of your system, while saving maintenance costs. A tool such as Pulumi or Terraform lets you define this infrastructure as code and push it to your source control. Moving to managed services – such as those AWS offers – frees up your development team to work on the things that really matter to your business.