3 Common Costly Mistakes in AWS

The Cloud has been seen as a great equalizer. It allows companies with very little capital to innovate, test, and bring new technology into the world at a low cost.

This promise has driven a lot of companies to adopt public cloud platforms, like AWS, to enable the same rapid innovation.

Over the last few years, though, organizations have realized how hard it is to control costs in this model, and many are seeing ever-increasing cost creep in their technology departments.

Most companies that go to the cloud for the first time experience cost shock. This article is aimed at helping customers who are new to AWS find savings in the most common places.

Not "right-sizing" your instances (If you can't autoscale)

Before the Cloud, developers had to request a server sized for both current and future expected need, because it couldn't easily be changed after purchase.

It is more common than you might think to see a server running at less than 10% CPU and memory utilization, and every instance running at that level is wasting money.

In this example, we'll look at an EC2 instance used for development purposes:

Instance Size: c5.xlarge
CPU Utilization: Less than 1% at all times
Memory Utilization: Less than 3% at all times
Estimated Monthly Cost before right-sizing: $124
[Graphs: CPU utilization and memory utilization for the instance over the last two weeks]

The graphs above show that this instance has been severely underutilized at all times of day for the last two weeks.

We know that a c5.xlarge has 4 vCPUs and 8 GB of RAM, and that the machine is used for development purposes. In this case, similar performance could be obtained on a t3.small, and because the workload never exceeds 10% CPU or 10% memory, we can be confident that burst credits will accumulate and be available whenever the development instance briefly requires more performance.

A t3.small costs roughly $15/month on demand, so this simple modification results in a savings of about $109/month, roughly an 87% cost reduction.
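
If you want to find right-sizing candidates like this one across an account, the utilization data is already in CloudWatch. Below is a minimal sketch using boto3; the instance ID is a hypothetical placeholder, and only CPU is checked because memory metrics require the CloudWatch agent to be installed:

    # Sketch: flag a dev instance as a right-sizing candidate, then resize it.
    import boto3
    from datetime import datetime, timedelta, timezone

    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")

    # Hourly peak CPU over the last two weeks, mirroring the graphs above.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=datetime.now(timezone.utc) - timedelta(days=14),
        EndTime=datetime.now(timezone.utc),
        Period=3600,
        Statistics=["Maximum"],
    )
    peak_cpu = max(dp["Maximum"] for dp in stats["Datapoints"])

    if peak_cpu < 10:
        # Never broke 10% CPU in two weeks: stop, change type, restart.
        # (EBS-backed instances must be stopped to change instance type.)
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])
        ec2.modify_instance_attribute(
            InstanceId=INSTANCE_ID, InstanceType={"Value": "t3.small"}
        )
        ec2.start_instances(InstanceIds=[INSTANCE_ID])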


Treating the Cloud like a Datacenter

When moving to the Cloud, most organizations make the same mistake: treating the Cloud just like another datacenter.

What do we mean by this? When moving to the Cloud for the first time, it's difficult to understand the benefits beyond "I don't have to rack these servers myself." Most of the time the first migration to a Cloud provider is a pure lift-and-shift, where almost everything remains identical except the location.

This is great for getting your organization accustomed to a public cloud, but to really lower your costs you'll need to leverage the appropriate managed services.

The next step is to evaluate your infrastructure and identify which pieces could be offloaded to a managed service. Let's take databases as an example. Over the years, databases and their management have grown increasingly complex: a Database Administrator is required to design and implement a scalable database, and then spends most of their days maintaining it. With a managed service such as Amazon RDS, a large number of features that were quite complex to implement before the cloud are handled for you. Auto-recovery, read-only replicas, multi-master configurations, and sharding are all available without the complexity of implementing them yourself.
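
To make "offloading to a service" concrete, here is a minimal sketch of what two of those features cost in effort on RDS. The identifiers and credentials are hypothetical placeholders; Multi-AZ failover plus a read replica stand in for what would otherwise be a DBA replication project:

    # Sketch: a highly available MySQL database with a read replica on RDS.
    import boto3

    rds = boto3.client("rds")

    # Multi-AZ provisions a standby in another Availability Zone with
    # automatic failover -- no hand-built replication scripts required.
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",   # hypothetical name
        Engine="mysql",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me",  # use Secrets Manager in practice
        MultiAZ=True,
        BackupRetentionPeriod=7,         # replicas require automated backups
    )
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="app-db")

    # A read-only replica to offload reporting traffic: one API call.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica",
        SourceDBInstanceIdentifier="app-db",
    )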

Instead of making your DBAs do the same tedious work every day, they can add value to the organization by implementing new tooling to make day-to-day work even more efficient.

Using the wrong storage type

In the Cloud you have many different options for storage, and it can be hard to pick the correct one. With EBS, EFS, FSx, FSx for Lustre, S3, and many others, each with their own more detailed configuration options, how do you know what to use?

EBS is one of the first storage types you'll run into in AWS, and one of the easiest to misconfigure.

A common mistake is using Provisioned IOPS EBS volumes when they aren't necessary. Provisioned IOPS volumes are great: they help ensure you get the throughput you need when you need it. However, they are quite expensive, and a common misunderstanding is that you need Provisioned IOPS whenever you're looking for specific performance. Let's compare the pricing of two EBS volumes, one with Provisioned IOPS (io1) and one using a General Purpose SSD (gp2).

In this example, we have a client that needs to consistently hit close to 3,000 IOPS with less than 128 MB/s of throughput, and needs 1 TB of storage.

Provisioned IOPS (io1) volume:
Storage: 1,000 GB × $0.125 = $125/month
IOPS: 3,000 × $0.065 = $195/month
Max Throughput: 1,000 MB/s
Total: $320/month

General Purpose SSD (gp2) volume:
Storage: 1,000 GB × $0.10 = $100/month
IOPS: 3 per GB of storage = 3,000 IOPS (included)
Max Throughput: 250 MB/s
Total: $100/month

Both of these volumes provide similar performance from the customer's perspective and meet all of the requirements, but by choosing the General Purpose SSD in this specific case, the client pays $100/month instead of $320/month while retaining the performance they need.
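
Fixing this mistake doesn't even require recreating the volume: EBS Elastic Volumes lets you change the volume type in place. Here is a minimal sketch with boto3, where the volume ID is a hypothetical placeholder:

    # Sketch: convert an over-provisioned io1 volume to gp2 in place.
    import boto3

    ec2 = boto3.client("ec2")

    # No detach or downtime needed; Iops is omitted because gp2
    # derives its IOPS from the volume size (3 per GB).
    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
        VolumeType="gp2",
    )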

Now, this won't work in every situation, but by recognizing that Provisioned IOPS volumes are really only worthwhile in extremely high-performance environments, you can save your client money in the long run.