Exploring Container Orchestration Options with AWS ECS, EKS, and Fargate

There is no doubt that microservices have been a boon to developers in the cloud, and one of the reasons for their growing popularity is the innovation we know as containers. A container packages everything an application needs to run: the code, configuration, runtime engine, and so on. Containers still need to be managed, though. They must be deployed and networked, surplus replicas must be removed, and crashed containers must be restarted. Doing all of this manually is impractical; even a modest microservices app can comprise tens or hundreds of containers. Clearly, some sort of automation tool is required. Enter container orchestrators: Kubernetes, Docker Swarm, Apache Mesos, HashiCorp Nomad, and AWS's own container orchestration tool, Amazon Elastic Container Service (ECS).

AWS ECS

Container orchestrators like ECS manage the entire lifecycle of a container, including starting, rescheduling, load balancing, etc. Let’s understand how this works.

The first step is to create an ECS cluster, which includes all the services needed to manage the individual containers in it. In other words, the ECS cluster acts as the control plane for the VMs running your containers. Those VMs are EC2 instances that you manage and that host the containers. Each EC2 instance runs a container runtime and an ECS agent, which lets the ECS control plane communicate with and manage the individual instances. In short, you delegate container management to ECS, so you don't have to contend with repetitive, manual container management tasks.
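As a rough sketch of this first step, here is what creating a cluster could look like with the boto3 SDK; the cluster name and region below are placeholders, not values from this article:

import boto3

# The cluster is the ECS control plane; you pay for the compute capacity
# you attach to it later, not for the cluster itself.
ecs = boto3.client("ecs", region_name="us-east-1")  # placeholder region

response = ecs.create_cluster(clusterName="demo-cluster")  # placeholder name
print(response["cluster"]["clusterArn"])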

ECS with EC2 Instances

However, you still have to manage the VMs, i.e. the EC2 instances: create them, join them to the ECS cluster, make sure there are enough instances and resources to schedule the next container, and maintain the server OS as well as the container runtime. On the positive side, you have complete access to your infrastructure. If you would rather have AWS manage the hosting infrastructure as well, there is a solution: AWS Fargate, covered next.
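First, though, a minimal sketch of the manual route, assuming an ECS-optimized AMI and an instance profile named ecsInstanceRole (the AMI ID, instance type, and profile name are placeholders); the user data is what tells the ECS agent which cluster to register with:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# The ECS agent on an ECS-optimized AMI reads /etc/ecs/ecs.config at boot.
user_data = """#!/bin/bash
echo ECS_CLUSTER=demo-cluster >> /etc/ecs/ecs.config
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                 # placeholder: an ECS-optimized AMI for your region
    InstanceType="t3.medium",                        # placeholder instance type
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "ecsInstanceRole"},  # role that lets the agent call the ECS API
    UserData=user_data,                              # boto3 base64-encodes this for you
)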

AWS Fargate

AWS Fargate is a serverless compute engine designed for containers. Because it is fully managed by AWS, you are spared the task of provisioning or managing servers, and you only pay for the resources your containers actually need.

Think of Fargate as an alternative to EC2: instead of provisioning EC2 instances and connecting them to your ECS cluster, you delegate that work to Fargate, which analyzes what resources (CPU, RAM, storage) your container needs and spins up the capacity to deploy and run it. All of this happens automatically; all you need to do is hand your container(s) over to Fargate through the interface.

The advantages are many. You no longer have to worry about having enough EC2 instances or resources to schedule a new container, you don't provision any infrastructure before deploying, and you use only as much infrastructure as your containers actually consume, which means you pay only for what you use. With EC2, by comparison, you pay for the whole server even if you are running only a few containers, or none at all. And with AWS managing the infrastructure, you are free to focus on managing your actual application. On the flip side, if you need access to the infrastructure running your containers, EC2 is a better fit.
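As a sketch of what handing a container over to Fargate might look like, again with boto3 and reusing the hypothetical demo-cluster; the image, CPU/memory sizes, and subnet ID are placeholders:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # placeholder region

# Describe the container and the CPU/RAM it needs; Fargate provisions matching capacity.
ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",          # required for Fargate tasks
    cpu="256",                     # 0.25 vCPU
    memory="512",                  # 512 MiB
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",   # placeholder image
        "portMappings": [{"containerPort": 80}],
    }],
)

# Run the task without provisioning any EC2 instances yourself.
ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="web-app",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)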

One big advantage of running your containerized application on AWS is access to its many supplementary services: CloudWatch for monitoring, Elastic Load Balancing (ELB) for load balancing, IAM for permissions, and so on.

Amazon EKS

But what if you want to leverage the AWS ecosystem while using, or planning to use, Kubernetes, which is, after all, the most popular container orchestration tool right now? Amazon has a solution for that as well: Amazon Elastic Kubernetes Service (EKS). If your project is already deployed on a Kubernetes cluster and you want to move it onto AWS infrastructure, you can keep Kubernetes instead of switching to a proprietary tool like ECS. That also keeps a future migration simple: although AWS manages your Kubernetes cluster, Kubernetes itself is not proprietary to AWS. It is an independent tool, so you can run it anywhere, on another cloud platform or even on-premises on your own infrastructure. Kubernetes also gives you access to a large community and a broad ecosystem of tools and plugins. Bear in mind, though, that if you use other AWS tools and services in your Kubernetes cluster, you would have to replace them when migrating, since they are specific to Amazon.

How does EKS Work?

EKS works on AWS infrastructure in a way that is similar to ECS. You create a cluster, which represents the control plane and comprises the master nodes of your EKS cluster. AWS provisions the Kubernetes master nodes in the background, with all the necessary Kubernetes master services already installed; everything from provisioning to management is handled by AWS. AWS also automatically replicates the master nodes across the Availability Zones (AZs) in the region you have selected, so if there are three AZs in your region, your master nodes are replicated in all three.
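A sketch of that step with boto3; the cluster name, Kubernetes version, IAM role ARN, and subnet IDs are all placeholders:

import boto3

eks = boto3.client("eks", region_name="us-east-1")  # placeholder region

# AWS provisions the master nodes behind this call and replicates them across AZs.
eks.create_cluster(
    name="demo-eks",
    version="1.27",                                           # placeholder Kubernetes version
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",  # placeholder IAM role
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaa111", "subnet-bbb222"],      # placeholder subnets in different AZs
    },
)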

The cluster storage (etcd) is replicated as well, so you don't lose data such as the cluster configuration, and AWS handles backups. In a nutshell, with EKS the Kubernetes master nodes are no longer your worry. All you need now are the worker nodes, the infrastructure that actually runs your containers. Here, again, you follow the same process as in ECS: you create EC2 instances, your compute fleet of virtual servers, and connect them to the EKS cluster. This gives you a complete Kubernetes cluster; you can connect to it with kubectl and begin deploying containers.
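In practice the AWS CLI's update-kubeconfig command wires kubectl up for you, but the pieces it needs (the API server endpoint and the cluster certificate) can also be fetched directly, as in this sketch that reuses the placeholder cluster name from above:

import base64
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # placeholder region

# kubectl needs the API server endpoint and the cluster CA certificate.
cluster = eks.describe_cluster(name="demo-eks")["cluster"]
endpoint = cluster["endpoint"]
ca_pem = base64.b64decode(cluster["certificateAuthority"]["data"])  # PEM-encoded cluster CA

print(endpoint)  # the Kubernetes API server URL that kubectl talks to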

In ECS, communication happens through the ECS agent installed on the EC2 instances, which lets the control plane talk to individual nodes. In the Kubernetes world, the worker nodes and master nodes communicate through Kubernetes' own processes, which are not specific to AWS.

As far as the EC2 instances are concerned, with EKS too you have to manage the OS and the processes running on them yourself. You can make this easier by choosing the semi-managed option, EKS managed node groups, which let you group your worker nodes logically. EKS then creates and deletes the EC2 instances for you, and the nodes in a group come with all the necessary processes pre-installed, so you don't have to install the container runtime or the Kubernetes worker processes to turn them into worker nodes.
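A sketch of creating such a node group with boto3, again using the placeholder demo-eks cluster; the node role ARN, subnets, and instance type are placeholders as well:

import boto3

eks = boto3.client("eks", region_name="us-east-1")  # placeholder region

# EKS creates the EC2 instances for this group and pre-installs the kubelet
# and container runtime, so the nodes join the cluster as ready workers.
eks.create_nodegroup(
    clusterName="demo-eks",
    nodegroupName="demo-workers",
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",  # placeholder IAM role
    subnets=["subnet-aaa111", "subnet-bbb222"],             # placeholder subnets
    instanceTypes=["t3.medium"],                            # placeholder instance type
    scalingConfig={"minSize": 1, "maxSize": 3, "desiredSize": 2},
)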

You still have to handle other tasks yourself, autoscaling for instance. It is not configured out of the box: you need to set things up on both the Kubernetes side (for example, the Cluster Autoscaler) and the AWS side to make it work, and scaling out still means new EC2 instances being created in your account. If you don't want that hassle either, use Fargate. You can even run containers on EC2 instances and on Fargate simultaneously within the same EKS cluster.
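For the Fargate route, a sketch of a Fargate profile: pods in the selected namespace land on Fargate while everything else keeps running on the EC2 worker nodes (all names and ARNs below are placeholders):

import boto3

eks = boto3.client("eks", region_name="us-east-1")  # placeholder region

# Pods whose namespace matches a selector are scheduled onto Fargate capacity.
eks.create_fargate_profile(
    fargateProfileName="serverless-pods",
    clusterName="demo-eks",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eksFargatePodRole",  # placeholder IAM role
    subnets=["subnet-aaa111"],                # Fargate pods require private subnets
    selectors=[{"namespace": "serverless"}],  # placeholder namespace
)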

Conclusion

AWS offers a choice of container orchestration tools. If you're all-in on AWS, ECS is the native option; if you prefer to keep your options open, or are already using Kubernetes, EKS lets you continue to do so while leveraging AWS infrastructure. For users who don't want the hassle of managing the underlying infrastructure at all, AWS offers Fargate, a serverless compute engine designed for containers that works with both ECS and EKS. Fargate saves you the bother of managing server infrastructure so you can focus on your application instead, and it brings a cost benefit, since you pay only for the infrastructure and resources you actually use.
