Both serverless and containers are disruptive technologies designed to let developers build applications with more flexibility and at lower cost than the traditional approach of hosting applications on servers or virtual machines (VMs). But which is the better option? As we will see, there are several differences between the two, and the right choice depends on factors such as the application's needs and how predictable its traffic is.
The first thing to understand is that although the term is 'serverless', it doesn't mean there are no servers. As of now, at least, every workload needs a server. But in the case of serverless, the servers have been abstracted away by the Cloud Service Provider (CSP), so the user doesn't have to worry about managing the underlying infrastructure.
Key benefits of serverless computing
Since the CSP assumes total responsibility for the actual servers, with serverless computing you’re saved such tasks as provisioning, monitoring, patching, scaling, etc.
Since you do not control the underlying environment, it also means that you cannot install software such as web servers or application servers. You can, however, install code libraries: for instance, if you have a Lambda function written in Python that depends on certain libraries, you can package those libraries with the function.

Automatic scaling

Another distinctive feature of serverless computing is that it automatically scales up and down in line with actual traffic. This means you do not have to define an autoscaling group or schema; the CSP makes it happen for you.
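To make the packaging idea concrete, here is a minimal sketch of a Python Lambda handler; the function name and event shape are illustrative assumptions, not part of any particular application. Any third-party libraries the function depends on would simply be zipped into the same deployment package alongside this file.

```python
import json

# Third-party dependencies (an HTTP client, a PDF library, etc.) would be
# bundled in the same .zip deployment package as this file and imported here.

def lambda_handler(event, context):
    # A hypothetical event carrying a "name" field.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

A typical deployment installs the dependencies into the package directory (for example, `pip install -r requirements.txt -t .`) and zips the directory before uploading it to Lambda.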
Pay for what you use
Not only are you spared infrastructure management, you are also not billed for non-usage: if your workload remains idle, you are not billed at all.
The final differentiator that has made serverless computing so popular is its high availability.
Serverless offers little to no storage space, and what it does offer is temporary.
When traffic cannot be easily predicted, serverless is the better bet, as it scales up automatically and you only pay for what you use.
Amazon offers several serverless services; the most commonly used are Amazon DynamoDB, Amazon API Gateway, AWS Step Functions, Amazon Simple Queue Service, and the crown jewel of AWS's serverless offerings: AWS Lambda.
What does a container contain?

As the name implies, containers are a kind of virtual box in which the code and all its dependencies (configuration, runtime engine, libraries, etc.) are packed. The most popular container format is Docker. The advantage here is that since everything is contained in the package, i.e. the Docker container, you can run the application smoothly in any computing environment. Of course, the containers still have to be managed: deployment, networking, scaling, health monitoring, and so on. Losing a node or process, for example, can disrupt your service and cause downtime. Manually managing large numbers of containers is not practical; that problem is solved by container orchestrators like Kubernetes, Amazon EKS, Amazon ECS, and Docker Swarm.

You can take a closer look at the differences between ECS, EKS, and Fargate, here.
Unlike serverless, where the CSP handles management tasks, with containers the user has full control of the underlying infrastructure (VM, OS, etc.), so any management and orchestration that needs to be done falls to the user.
It also means that you are free to install almost any software that you need. Perhaps the most attractive feature of containers is that you can use pre-packaged images that are already available. For instance, say you are developing an app in Java; you will need the Java software installed, some testing tools, and so on. A container image with all of this is already available: you just pick it up and run it.
Serverless lets you pick the compute power you need, from 128 MB to 3 GB of memory, with a runtime limit ranging from one second to 15 minutes.

Where containers are concerned, adjusting your VM parameters is up to you. Once your container is up and running, it can be tricky to change an EC2 instance type, so ideally you should plan for this beforehand.

Containers come with hard disks attached to the nodes, so storage is persistent.
Serverless is well suited to event-driven architectures; it integrates natively with other cloud services, making it easy to trigger your Lambda function when it is needed.
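As a sketch of that event-driven pattern, the handler below reacts to an S3 upload notification. The event shape follows the standard S3 event record format, but the function itself and the bucket name are illustrative assumptions.

```python
def handle_s3_upload(event, context):
    # An S3 PUT event delivers one or more records describing the
    # uploaded objects; extract the bucket and key from each record.
    uploaded = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        uploaded.append((s3["bucket"]["name"], s3["object"]["key"]))
    # A real function would now process the objects (resize an image,
    # index a document, etc.); here we just report what arrived.
    return {"processed": uploaded}
```

With the trigger configured on the bucket, the CSP invokes this function automatically on every upload; no polling loop or always-on server is involved.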
Containers excel when you need to run specific software; say you need to run a web server or an app server, it is easy to install it in a container and run it. This is not possible with serverless, simply because you have no access to the underlying infrastructure.
Serverless is the definitive winner in use cases with unpredictable traffic.
Conversely, containers are better suited to use cases where the traffic is predictable, because you pay for the underlying instances whether you use them or not. When containers scale, the whole VM scales with them: when traffic increases beyond capacity, another Kubernetes node, i.e. another EC2 instance, is spun up, and even if this node sits at just 50% utilization you pay for the entire EC2 instance. In other words, you are paying for idle resources.
The difference in cost between serverless and containers is best explained with a couple of use cases.
In the first case, let's assume a traffic volume of three million requests a month, with each execution consuming 512 MB of memory and taking 300 ms. Let's also assume that the traffic is completely unpredictable. Using the AWS-provided Lambda calculator (you can try it out here), you will see that it costs just about USD 8 per month.
In this same use case, the container configuration costs as follows: USD 144/month for the control plane and about USD 14 for a small EC2 instance (t3), which functions as our worker node. This comes to around USD 160/month, and the cost increases further with traffic spikes, which we have assumed are wildly unpredictable.
In the second use case, let's put the traffic volume at 90 million requests per month, with 512 MB of memory and 250 ms per execution. Finally, let us assume that the traffic is fairly predictable.
The cost here comes to USD 206 for Lambda, while with the same parameters the container cost is USD 144/month for the control plane plus USD 29/month for a t3.medium worker node, making it USD 173/month.
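The arithmetic behind those Lambda figures can be reproduced from the pay-per-use formula: a price per GB-second of compute plus a price per million requests. The rates below match the pricing assumed in these examples (they vary by region and over time), and the free tier is ignored for simplicity.

```python
# Approximate AWS Lambda pricing used in the examples above
# (rates vary by region and over time; free tier ignored).
PRICE_PER_GB_SECOND = 0.0000166667   # USD per GB-second
PRICE_PER_MILLION_REQS = 0.20        # USD per million requests

def lambda_monthly_cost(requests, memory_mb, duration_ms):
    """Rough monthly Lambda bill: compute (GB-seconds) plus request charges."""
    gb_seconds = requests * (duration_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_SECOND
    request_cost = (requests / 1_000_000) * PRICE_PER_MILLION_REQS
    return compute + request_cost

# Use case 1: 3M requests, 512 MB, 300 ms -> roughly USD 8/month.
case1 = lambda_monthly_cost(3_000_000, 512, 300)

# Use case 2: 90M requests, 512 MB, 250 ms -> roughly USD 206/month,
# versus a fixed ~USD 173/month (144 control plane + 29 worker node)
# for the container setup.
case2 = lambda_monthly_cost(90_000_000, 512, 250)
```

Note how the container bill is essentially flat while the Lambda bill grows linearly with traffic, which is exactly why predictable, high-volume workloads tip the balance toward containers.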
The above scenarios make one thing clear: where cost is concerned, there is no blanket winner. If the traffic is predictable, containers allow you to select the right VM and CPU sizing, making them the better option in such cases.
In a nutshell, serverless services like AWS Lambda let you run code that meets high traffic demands at any time, without provisioning or managing servers, and without paying for idle resources. However, if you need more control over the underlying environment, say access to a web server, and the traffic is quite predictable, containers are the way to go.