Containers on AWS

March 9, 2020
How to choose between ECS, EKS and Fargate on AWS for deploying your containers.

In true AWS style, you can deploy containers to AWS in a plethora of ways. Why? Simple: pick the way that works best for your application and workflow. I’ve built a decision tree to help you navigate the decision:

[Diagram: AWS containers decision tree]

There are a couple of questions that can really shape the decision and the path you take, so let’s kick off with them. Follow along on the diagram and you should get to a logical ending.

These two questions almost exclude each other: do you want Kubernetes, or do you want native AWS integrations? On one hand you have the ever-popular Kubernetes option. This is going to give you an open deployment method and toolchain, but here’s a warning: KUBERNETES WILL NOT GIVE YOU CLOUD-AGNOSTIC DEPLOYMENTS.

Wow, hang on! What’s he saying?

No really, it’ll get you close, but each cloud provider has its own quirks. Defining load balancers, storage and security will all involve vendor-centric annotations in your deployment. That means if you do want to move, it’s not as straightforward as pointing kubectl at another cluster: you’ll need to make changes. Investigate this and understand the differences if being able to move between clouds is a key strategy of yours.
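As a small illustration of what I mean, here’s a sketch of a Kubernetes Service that provisions an AWS Network Load Balancer. The annotation is specific to the AWS cloud provider; on another cloud you’d need a different annotation entirely (the app name and ports here are made up for the example):

```yaml
# Service exposing an app via an AWS NLB.
# The annotation below is AWS-specific; moving this manifest to
# another cloud means changing (or removing) it.
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Multiply this by storage classes, ingress controllers and IAM, and the "just repoint kubectl" story falls apart.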

Now let’s talk about native AWS integrations. By this I mean access to AWS fundamentals such as VPCs, autoscaling, IAM policies and load balancing, and then access to other AWS services. As you would imagine, Amazon ECS has all of this built in by the bucketload, and it’s easy to access.

Kubernetes, on the other hand, has none of that without you doing the work yourself. Amazon EKS (an AWS-managed control plane) can access lots of these, but you still have to do some extra work, and that is all extra stuff you are going to be responsible for maintaining and patching. In EKS, VPC access is sorted out of the box for you by the Amazon VPC CNI plugin, and ELBs also work out of the box. But if you want IAM for your container workloads, or ALBs or NLBs, you are going to have to do some extra work, and it’s not always straightforward (future blog post here).

I will say AWS has done a great job open sourcing these extras, and you can run them yourself on your own install of Kubernetes running on EC2. If you want to secure access to other AWS services through IAM, or autoscaling for your node pools, yet again you are going to need to add the components in yourself, in the shape of things like kube2iam and cluster-autoscaler.
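To give a flavour of that extra work: with kube2iam running as a DaemonSet on your nodes, each pod asks for an IAM role via an annotation. A minimal sketch (the role name `my-app-role` is a placeholder you’d have to create and trust-configure yourself):

```yaml
# Assumes kube2iam is already deployed on the cluster and the
# node instance role is allowed to assume "my-app-role".
apiVersion: v1
kind: Pod
metadata:
  name: s3-reader
  annotations:
    iam.amazonaws.com/role: my-app-role   # illustrative IAM role name
spec:
  containers:
    - name: app
      image: amazon/aws-cli
      args: ["s3", "ls"]   # runs with the annotated role's permissions
```

None of this exists until you install and maintain it; in ECS the equivalent is a single task-role field.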

But there are lots of positives to kubernetes, lots of extensions, things like helm for managing deployments, tight integration with the wonderful gitlab and a great community. Just be prepared to maintain the work you put in.

If you’ve chosen Kubernetes you’ve hit another decision:

If you want to do this yourself and have full control of the environment, there are lots of options to roll your own. My favourite is kops: it works really well and even handles rolling upgrades superbly. The diagram shows you a few options, but there are many, many ways to achieve this, and investigating them all is beyond the scope of this post.
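For a feel of the kops workflow, here’s a hedged sketch; the state-store bucket, cluster name and zones are all placeholders:

```shell
# kops keeps cluster state in an S3 bucket you own (placeholder name).
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Create a cluster across two AZs (names are illustrative).
kops create cluster \
  --name=demo.k8s.example.com \
  --zones=eu-west-1a,eu-west-1b \
  --node-count=3 \
  --yes

# Later: upgrade the spec, then roll nodes one at a time.
kops upgrade cluster --name=demo.k8s.example.com --yes
kops rolling-update cluster --name=demo.k8s.example.com --yes
```

That rolling-update step is the part I rate: nodes are drained and replaced one by one rather than all at once.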

If you want to make your life a lot, lot, lot easier and answer yes to an AWS-managed control plane, this is going to lead you to Amazon EKS: a fully managed control plane provisioned, secured, scaled and run by AWS. It’s also going to unlock some features above and beyond doing it yourself on EC2. The big winner in this for me is Fargate mode, that is to say serverless containers.

This is a recent announcement (it went GA in November 2019): you can now have Fargate mode for EKS. This basically allows you to launch Pods without first provisioning EC2 infrastructure in node pools. It is achieved using Firecracker, which runs the kubelet, kube-proxy and your Pod inside a microVM. It makes things super simple and removes all the management overhead of running Kubernetes control planes and managing worker nodes with all their updates, whilst still allowing you to leverage the power of Kubernetes. Better still, you can run EKS in hybrid mode with worker node pools on EC2 and Pods in Fargate. So if you need to do something special, like have a container access a shared file system such as Amazon EFS mounted to your worker nodes, you can; but for containers with no special requirements, Fargate reduces your operational management overhead.
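Using eksctl, that hybrid setup is just a cluster config with both a node group and a Fargate profile. A sketch, with cluster name, region, sizes and the namespace selector all illustrative:

```yaml
# Hybrid EKS: an EC2 node group for "special" workloads (EFS etc.)
# plus a Fargate profile for everything in one namespace.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo          # illustrative
  region: eu-west-1   # illustrative

nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2

fargateProfiles:
  - name: fp-serverless
    selectors:
      - namespace: serverless   # pods scheduled here land on Fargate
```

Pods in the `serverless` namespace run on Fargate; everything else lands on the EC2 workers.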

Now, between serverless nodes and managing the EC2 worker nodes yourself, there’s another option. Managed nodes:

Amazon EKS managed node groups create and manage Amazon EC2 instances for you. When there’s an update to the underlying EC2 AMI, nodes are replaced for you whilst your Kubernetes applications are kept running. Managed node groups also handle the underlying Auto Scaling group for your cluster (you set the minimum, maximum and desired size). Best of all, there’s no additional cost for managed node groups!
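In eksctl terms, the difference from rolling your own nodes is one key in the config. A minimal sketch (names, region and sizes again illustrative):

```yaml
# A managed node group: EKS owns AMI upgrades and node replacement.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo          # illustrative
  region: eu-west-1   # illustrative

managedNodeGroups:      # "nodeGroups" would be self-managed instead
  - name: managed-workers
    instanceType: m5.large
    minSize: 2
    maxSize: 5
```

You pay only for the EC2 instances themselves, same as self-managed nodes.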

Back to Native Integrations

Let’s go back to that question about native integrations. I feel strongly that this is really important: it’s rare that you do everything inside a container; you are potentially going to need a database, or access to object storage. If you want these to work seamlessly, go the AWS-supplied route. You have one super-simple option where AWS takes care of everything but you lose flexibility, which is Elastic Beanstalk (I do not recommend this, I’m not a fan). Or you can use ECS, which has a fully managed control plane and, like EKS, an option for serverless containers with Fargate. It works out of the box with IAM, VPCs and all flavours of load balancer (no Classic ELB in Fargate mode). Cluster and workload scaling are also built in (you have to do this yourself in EKS), containers can have their own IAM permissions to access other services, and there’s a tonne of logging options. Natively you have CloudWatch, and you can enhance that with Container Insights. It’s also easy to swap the logging to Fluent Bit and push to Elasticsearch if you wanted; there are a few different log drivers built in to help you.
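To show how much of that comes for free, here’s a sketch of an ECS Fargate task definition: the task IAM role and CloudWatch logging are just fields, with nothing extra to install. The account ID, role names, log group and image are all placeholders:

```json
{
  "family": "web",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "taskRoleArn": "arn:aws:iam::123456789012:role/web-task-role",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecs-execution-role",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{ "containerPort": 80 }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/web",
          "awslogs-region": "eu-west-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```

`taskRoleArn` is the per-container IAM story that needed kube2iam in Kubernetes, and swapping `awslogs` for another driver (such as Fluent Bit via FireLens) is a one-field change.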

ECS has drawbacks, however. It’s not as widely adopted as Kubernetes, so there is less community support from your peers, and it doesn’t work with third-party tools like Helm. Some people complain it’s too simple and dismiss it straight away, but I ask you to look again: simple is elegant engineering in my mind. Do you need all those features Kubernetes can bring you? Are you really going to use them? If not, save yourself the time and effort of setting up all the native integrations and just go with ECS. Everything is there from the get-go, and there is no extra time to spend setting up IAM, load balancing, cluster autoscaling, etc.

My money is on the simple solution. Remember: the container is what lets you move clouds easily, not the scheduler, despite its promises.


Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.