In cloud computing, one aspect that excites us all is the potential for cost savings. With the abundance of cloud providers available, hosting solutions in the cloud not only offers flexibility, reliability, and scalability, it can also be more cost-effective over time than maintaining the same solution in a traditional data center. The key lies in using the cloud effectively to optimise your costs. In this article, we will explore several proven strategies for optimising and reducing your cloud costs. By identifying these optimisation opportunities, we aim to empower you to unlock savings and fully enjoy the advantages of cloud computing.
Before you start
About this post:
- 5 – 15 min average reading time
- Suitable for intermediate through to advanced
What you will gain reading this post:
- Proven cloud cost-saving strategies for existing cloud resources as well as new ones
What you can do to help support:
- Like, comment and share this article
- Follow this blog to receive notifications of new postings
Now, let’s get started.
Startup and Shutdown
- Difficulty to adopt: 1 out of 10 (manual), 3 out of 10 (automated)
- Limitations: Unlikely to be suitable for production environments except for specific use cases, but great for non-production environments
If you are running VM instances, here are a few important considerations to make before implementing this approach.
Will your data be deleted if you start or stop your instance? In other words, is it using persistent or ephemeral storage?
Do your instances need to be running 24/7, or are there times when they are not used? If you determine you are in a position to start and stop instances, this is a simple and effective way to reduce your cloud costs.
AWS does not charge for STOPPED instances, GCP does not charge for TERMINATED instances, and Azure does not charge for STOPPED (Deallocated) instances. However, you must remember that other resources like persistent disk and static IPs will still incur a cost.
Check out the following guides to help automate this approach.
- GCP recommended solution
- AWS simple and recommended solutions
- Azure guide
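As an illustration of the automated approach, here is a minimal Python sketch using AWS's boto3 SDK that stops any running instances carrying a hypothetical `auto-shutdown=true` tag. The tag name and region are assumptions for this example; you could run it on a schedule, for instance from a cron job or a scheduled Lambda.

```python
import boto3

# Assumption for this sketch: non-production instances are tagged
# with auto-shutdown=true so they can be safely stopped out of hours.
ec2 = boto3.client("ec2", region_name="us-east-1")

def stop_tagged_instances():
    # Find running instances that opted in to automatic shutdown.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:auto-shutdown", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        # Stopped instances stop incurring compute charges; attached
        # disks and static IPs are still billed, as noted above.
        ec2.stop_instances(InstanceIds=instance_ids)
        print(f"Stopping: {instance_ids}")

if __name__ == "__main__":
    stop_tagged_instances()
```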
This is a proven way to achieve large cost savings.
Scaling
- Difficulty to adopt: 2 out of 10
- Limitations: Only a subset of cloud services support scaling to zero, and solutions must be built to run in those services
If there are times when you are not receiving any traffic, the result is that you are paying for resources you don't actually require.
However, rather than scheduling startups and shutdowns, for example because you have unpredictable traffic and/or need the ability to scale up quickly to support an influx of traffic, you can make use of scaling down to zero.
One cloud provider that does this very well is GCP, with services such as Cloud Run, App Engine Standard, Knative, and (to a degree) Kubernetes; keep in mind that with Kubernetes at least one node must always be available in the cluster to run system Pods.
Alternatively, you could utilise components such as Osiris, which can scale your idle Kubernetes workloads down to zero, but make sure you check out the limitations first.
For the other cloud providers, AWS and Azure, it may be possible; however, I haven't seen any official documentation stating that it is.
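To make the idea concrete, here is a rough Python sketch of the mechanism tools like Osiris implement: scaling an idle Kubernetes Deployment down to zero replicas via the official Kubernetes client. The deployment name, namespace, and the trigger for the scale-down are placeholders, not a production-ready implementation.

```python
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch a Deployment's replica count; replicas=0 scales it to zero."""
    config.load_kube_config()  # use load_incluster_config() inside a cluster
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Hypothetical usage: an external watcher detects no traffic for a while
# and scales the workload down until the next request arrives.
if __name__ == "__main__":
    scale_deployment("my-idle-service", "default", replicas=0)
```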
This is a proven way to achieve large cost savings.
Spot and Reserved Instances
- Difficulty to adopt: 5 out of 10
- Limitations: With Spot instances, workloads need to be flexible: they must be able to support unscheduled shutdowns, and there is no guarantee on the next Spot instance's availability. With Reserved instances, you are committed for a period of time, and to make the most of the savings you need to be sure of the capacity you are reserving.
Do you have non-critical workloads that can be stopped and resumed at a later time? Spot instances are a better way to control costs for heavy, non-critical workloads. Does your workload run in containers?
See how you can still use containers with Spot instances in AWS and GCP to further reduce your cloud costs. Better yet, you can mix these instances with on-demand and reserved instances; check out these examples from AWS and GCP for possible strategies, and the sketch below.
To better understand the different cloud provider options available, be sure to check out this posting for an overview.
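For example, on AWS you can request Spot capacity directly through the regular `run_instances` call in boto3. The AMI ID and instance type below are placeholders; remember the instance can be reclaimed at short notice, so the workload must tolerate interruption.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a Spot instance via the standard RunInstances API.
# ami-12345678 and t3.large are placeholder values for this sketch.
response = ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="t3.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # "one-time" means the request is not resubmitted after an
            # interruption; "terminate" releases the instance when reclaimed.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```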
Do you have workloads that you know will be running for a 1 – 3 year timeframe? Will this workload always be running and constant?
If that is the case, then reserved instances are the better option for you to significantly be able to reduce your cloud costs.
Does your workload run in containers?
Check out this detailed cost breakdown of running on-demand and reserved instances in both managed and non-managed containers for each cloud provider, and see the cost savings.
This is a proven way to achieve large cost savings.
Serverless
- Difficulty to adopt: 1 out of 10 (monitoring), 5 out of 10 (optimisation)
Cloud providers such as GCP, AWS and Azure all have Serverless capabilities available for use.
While serverless means more than just functions, functions can be a large contributor to the overall cost, and therefore will be the focus of this section.
Are you monitoring your functions and the costs you are currently incurring for them?
If not, START.
Metrics and analytics play a vital role in the decisions we make and when determining an outcome.
GCP, AWS and Azure all support tagging of resources, giving you a way to track the cost of individual functions and determine which functions are costing you the most.
Be sure to check out this great use case and journey post, describing how they saved thousands through optimisation.
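As a small example of tagging, the boto3 sketch below attaches cost-allocation tags to a Lambda function; the function ARN and tag values are placeholders. Once cost-allocation tags are activated in billing, the bill can be broken down per function.

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Placeholder ARN and tags: label each function so billing can be
# broken down per product/team once cost-allocation tags are activated.
lambda_client.tag_resource(
    Resource="arn:aws:lambda:us-east-1:123456789012:function:my-function",
    Tags={"product": "checkout", "team": "payments"},
)
```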
Influencing factors such as execution time and resource consumption (memory and CPU) vastly affect the cost.
The formula is roughly: total cost = invocation cost + execution cost (time × resources) + networking cost.
AWS (Pricing)

| Configuration | Cost breakdown | Cost |
| --- | --- | --- |
| 256MB memory, 300 ms execution time | Invocations: 2,000,000 × $0.0000002 | $0.40 |
| | GB-seconds: 150,000 × $0.0000166667 | $2.50 |
| 256MB memory, 500 ms execution time | Invocations: 2,000,000 × $0.0000002 | $0.40 |
| | GB-seconds: 250,000 × $0.0000166667 | $4.17 |
| 512MB memory, 300 ms execution time | Invocations: 2,000,000 × $0.0000002 | $0.40 |
| | GB-seconds: 300,000 × $0.0000166667 | $5.00 |

GCP (Pricing)

| Configuration | Cost breakdown | Cost |
| --- | --- | --- |
| 256MB memory (400MHz CPU), 300 ms execution time | Invocations: 2,000,000 × $0.0000004 | $0.80 |
| | GB-seconds: 150,000 × $0.0000025 | $0.38 |
| | GHz-seconds: 240,000 × $0.0000100 | $2.40 |
| 256MB memory (400MHz CPU), 500 ms execution time | Invocations: 2,000,000 × $0.0000004 | $0.80 |
| | GB-seconds: 250,000 × $0.0000025 | $0.63 |
| | GHz-seconds: 400,000 × $0.0000100 | $4.00 |
| 512MB memory (800MHz CPU), 300 ms execution time | Invocations: 2,000,000 × $0.0000004 | $0.80 |
| | GB-seconds: 300,000 × $0.0000025 | $0.75 |
| | GHz-seconds: 480,000 × $0.0000100 | $4.80 |

Azure (Pricing)

| Configuration | Cost breakdown | Cost |
| --- | --- | --- |
| 256MB memory, 300 ms execution time | Invocations: 2,000,000 × $0.0000002 | $0.40 |
| | GB-seconds: 150,000 × $0.000016 | $2.40 |
| 256MB memory, 500 ms execution time | Invocations: 2,000,000 × $0.0000002 | $0.40 |
| | GB-seconds: 250,000 × $0.000016 | $4.00 |
| 512MB memory, 300 ms execution time | Invocations: 2,000,000 × $0.0000002 | $0.40 |
| | GB-seconds: 300,000 × $0.000016 | $4.80 |
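To make the arithmetic reproducible, here is a short Python sketch implementing the formula for the AWS rows above, assuming 2,000,000 invocations and ignoring networking, just as the table does.

```python
# Reproduce the AWS rows above: cost = invocation cost + GB-second cost.
INVOCATION_RATE = 0.0000002    # $ per invocation (AWS)
GB_SECOND_RATE = 0.0000166667  # $ per GB-second (AWS)

def lambda_cost(invocations: int, memory_mb: int, duration_ms: int):
    """Return (invocation cost, compute cost) for the given configuration."""
    invocation_cost = invocations * INVOCATION_RATE
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * GB_SECOND_RATE
    return invocation_cost, compute_cost

for memory_mb, duration_ms in [(256, 300), (256, 500), (512, 300)]:
    inv, comp = lambda_cost(2_000_000, memory_mb, duration_ms)
    print(f"{memory_mb}MB/{duration_ms}ms: "
          f"invocations=${inv:.2f}, compute=${comp:.2f}, total=${inv + comp:.2f}")
```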
It is evident that the longer a function takes to execute, and the more resources it consumes while executing, the higher the cost.
Multiply this by the number of functions you own or require, and the cost can grow very large very quickly.
It is also worth noting that the above calculations do not include network data transfer, or the costs of other cloud resources that interact with the functions (or vice versa).
If you take away one point, let it be this: optimising functions is essential, and it can provide great cost savings.
This is a proven way to achieve large cost savings.
Sizing and Type
- Difficulty to adopt: 1 out of 10
Do you understand the workload you are expected to support? How are you expecting to scale?
Selecting the size and type of the instance you are looking to run is a relatively simple exercise, if you understand what you are looking to support.
Based on the instance type and size, you will be allocated more of one resource over another, for example memory over CPU, and you can also choose custom sizes if the predefined sizes don't match your workload's requirements.
- By oversizing, you are underutilising the instance while still paying the higher cost required to support a more intense workload; as you scale, this can increase your cloud costs dramatically.
- On the other side of the scale, if you undersize, you can end up scaling out to more instances than you actually require, when a larger size with fewer instances would have served you better, resulting in a larger cloud cost than necessary.
Ensure you have adequate metrics to determine that you are utilising your instances sufficiently, and you will benefit from a more appropriate cloud bill.
GCP, AWS and Azure all support a range of instance sizes and types.
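As a starting point for right-sizing, the sketch below pulls an instance's average CPU utilisation from CloudWatch over the last two weeks; a consistently low average suggests the instance is oversized. The instance ID and the 20% threshold are placeholder assumptions.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def average_cpu(instance_id: str, days: int = 14) -> float:
    """Average CPU utilisation (%) over the last `days` days, one point per day."""
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(days=days),
        EndTime=now,
        Period=86400,  # one datapoint per day
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

# Placeholder instance ID; consistently low CPU hints at oversizing.
if average_cpu("i-0123456789abcdef0") < 20:
    print("Consider a smaller instance size or a different type.")
```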
Hosting
- Difficulty to adopt: 2 out of 10
Are you making use of a CDN? Each cloud provider AWS, GCP and Azure have one.
Is your website sitting behind one? This is highly likely if you are building websites in the cloud.
Where is your website hosted?
Today, many websites are static and no longer need to be hosted on web servers; as a result, they can be hosted in storage options such as a bucket sitting behind a CDN distribution.
From a cloud costing perspective, storage in a bucket is a much cheaper option than running server instances or containers.
Check out the AWS, GCP and Azure guides on hosting a static website.
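For AWS, a minimal Python sketch of the idea looks like the following: enable static website hosting on an S3 bucket and upload a page via boto3. The bucket name and file names are placeholders; in practice you would also need a bucket policy allowing public reads, or a CloudFront distribution in front.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-static-site-bucket"  # placeholder name

# Enable static website hosting on the bucket.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a page with the right content type so browsers render it.
s3.upload_file(
    "index.html",
    BUCKET,
    "index.html",
    ExtraArgs={"ContentType": "text/html"},
)
```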
Archive data and use lifecycle policies
- Difficulty to adopt: 2 out of 10
Is the data retained for longer than 1 month? How often is the data accessed? For instance, is it accessed infrequently, such as once a month or once a year?
If that is the case, you could really benefit from archiving your data by moving it to a different storage class with a lower storage cost.
To automate this, you can write a lifecycle policy that moves the data into the specified storage tier once the desired conditions have been met.
For Buckets, GCP, AWS and Azure all support this.
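As an example on AWS, the boto3 sketch below applies a lifecycle rule that transitions objects under a `logs/` prefix to the cheaper Standard-IA tier after 30 days; the bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/prefix: move logs/ objects to Standard-IA after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                ],
            }
        ]
    },
)
```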
The table below does not contain all the storage tiers, but it clearly shows the massive cost difference between tiers and the savings that can be made. For instance:
| Provider | Storage class and rate | Cost |
| --- | --- | --- |
| AWS | 50 TB (50,000 GB) Standard Storage × $0.023 per GB | $1,150.00 |
| AWS | 50 TB (50,000 GB) Standard I/A Storage × $0.0125 per GB | $625.00 |
| GCP | 60 TB (61,440 GB) Standard Storage × $0.026 per GB | $1,597.44 |
| GCP | 60 TB (61,440 GB) Nearline Storage × $0.010 per GB | $614.40 |
| Azure | 50 TB (50,000 GB) Hot Storage × $0.0253 per GB | $1,265.00 |
| Azure | 50 TB (50,000 GB) Cold Storage × $0.01373 per GB | $686.50 |
Additionally, it is important to be aware that frequently accessing data in a non-standard storage tier incurs a higher retrieval cost, and all tiers have additional costs to take into account, such as network data transfer.
To benefit from the savings, it is important to use the different tiers according to their recommended use cases.
This is a proven way to achieve large cost savings.
Create a new account per product
- Difficulty to adopt: 1 out of 10 (create), 7 out of 10 (maintain)
- Limitations: No support for grouping such as Organisations; best for personal use and/or start-ups
By creating a new account per product, you can ensure that you get the benefit from the extra savings offered for the first 12 months.
This however can result in multiple accounts, and managing multiple accounts can become a maintenance overhead.
Therefore, unless you have templated the configuration, it is not really sustainable for a large number of accounts.
Linking all of your free accounts to an Organisation to remove the maintenance overhead is either unsupported or very limited, as the accounts must be managed independently to qualify for the free tier.
The savings come from each cloud provider's introductory offers, be it free credit (GCP and Azure) or additional free service usage (AWS), giving you further reduced cloud costs in the first 12 months.
Conclusion
In short, there are some areas you have most likely already addressed. There is also a high probability that you can optimise and improve on some of the proven approaches mentioned throughout this post, even to the point of reviewing the services you run with certain cloud providers and potentially making a switch; as a result, you could potentially reduce your cloud costs by thousands of dollars.
No matter your current situation, I highly recommend that you review your current environment and put adequate metrics in place to capture areas for optimisation across the ever-growing range of services, and reap the benefits of a much nicer cloud bill.
Did this help?
- Like, comment and share this article
- Follow this blog to receive notifications of new postings
- View previous postings