The serverless model makes it easy to develop and deploy your applications and services in the cloud quickly. However, does that make it an ideal choice for all your needs? Amazon's decision to move away from serverless back to a monolith architecture for its Prime Video monitoring service has rekindled the discussion about whether the serverless model is suited for all use cases. Learn when to use and when not to use the serverless model in your projects.
Serverless is a computing model where you can build and run applications and services without the need to deploy your code on servers directly or to manage and monitor the availability and scalability of the application or service. In a serverless architecture, the cloud platform automatically handles server provisioning, scaling, and maintenance, allowing developers to focus solely on writing code and implementing business logic.
Contrary to its name, serverless is not really server-less. There are servers behind the scenes that execute your code, but you do not deal with them directly. AWS Lambda and AWS Fargate are examples of serverless compute services that abstract the underlying servers away from your deployment and execution process.
History of computing infrastructure and evolution of serverless
From the early days of computing until a little over a decade ago, hosting applications on bare-metal hardware, i.e., physical servers, was the only option available. This is the most inflexible and cumbersome approach, as every aspect of the infrastructure must be handled manually by the Dev/DevOps team.
Virtual Machines (VMs) came into being to overcome the limitations of the bare-metal approach, and they were used extensively to manually or semi-automatically (using scripts) scale infrastructure up and down in response to demand. VMs could be launched and terminated on demand to meet variable workloads. However, this approach was far from perfect, as it suffered from a lack of flexibility and from dependency-management issues: the Dev/DevOps team was still responsible for installing and configuring the dependencies in each individual VM. Amazon Auto Scaling Groups overcome these limitations by using an EC2 image and managing a pool of EC2 instances, fronted by load balancers, to automatically scale applications and services in and out.
Containers gained popularity in the last decade since they offer a layer of abstraction on top of what VMs offer. Containers are self-contained units that package an application or service and all of its dependencies into a single deployable image, which makes scaling applications up and down in response to demand easy and quick. But the responsibility of packaging and deploying the application or service still rested with the Dev or DevOps team.
Serverless takes this a step further by completely abstracting how a particular piece of code is deployed and managed in the cloud, making the question of "where" the application is hosted irrelevant. This means there is no infrastructure spin-up, and therefore no servers or VMs to manage.
Serverless is generally seen as a cost-effective solution since it runs in the cloud and is billed only for its execution time. However, this is far from the truth, and serverless can end up being very expensive if not used correctly. This article lists the factors that need to be considered before deciding to adopt the serverless model for your needs.
Although serverless sounds exciting and optimal, it is not suitable for all use cases. There are situations where embracing the serverless model is beneficial, and others where it could prove counter-productive and expensive.
When to use serverless model
Unpredictable Workloads
Serverless automatically scales out and back in in response to demand, which makes it an ideal solution for unpredictable workloads that vary drastically throughout the day. Traditional or monolith solutions incur higher infrastructure management and operational overhead in such cases, whereas serverless handles both low- and high-traffic periods efficiently.
Event-driven Scenarios
Serverless excels in event-driven scenarios and is a natural fit for applications and services that rely heavily on events such as user uploads, database changes, or API calls. Services like AWS Lambda, Azure Functions, and Google Cloud Functions are designed to respond to events instantly, making them well suited for tasks like image processing, data transformation, and real-time notifications.
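To make the event-driven pattern concrete, here is a minimal sketch of what such a function might look like, using an AWS-Lambda-style handler signature. The S3-like event shape, bucket and key names are illustrative, not taken from any real deployment:

```python
# Minimal sketch of an event-driven serverless function
# (AWS-Lambda-style handler; the S3-like event shape is illustrative).

def handler(event, context=None):
    """Queue a thumbnail job for each uploaded object in the event."""
    jobs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real function would fetch the object and process it here;
        # this sketch just records what work would be done.
        jobs.append({"bucket": bucket, "key": key, "action": "thumbnail"})
    return {"processed": len(jobs), "jobs": jobs}

# Example event in a simplified upload-notification shape:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "cat.png"}}}
    ]
}
result = handler(sample_event)
```

The platform invokes the handler once per event, so scaling to thousands of concurrent uploads requires no code changes.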
Microservices
Serverless is an excellent choice for hosting microservices. Each service can be implemented as one or more serverless functions, which ensures scalability and isolation and also makes deployment easy.
Rapid Development and Prototyping
With serverless, there is no infrastructure provisioning and no server deployment or configuration; it works with the click of a button or a few basic Infrastructure as Code (IaC) scripts. This makes it an ideal candidate for prototyping and proof-of-concept (PoC) development. Faster time to market enables early failure detection and course correction, which can be expensive and time-consuming with traditional development approaches.
Short-lived Functions
If your functions or tasks are designed to execute within a few seconds, you benefit greatly from the serverless model, which is optimized for exactly such workloads. Moreover, the pay-per-use billing model (typically metered per second or even per millisecond of execution) ensures that you pay only for the time your functions run, keeping costs low. Traditional approaches keep servers at full capacity even when functions are short-lived, which is both inefficient and expensive.
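A quick back-of-the-envelope calculation shows why pay-per-use is attractive for short-lived functions. The per-GB-second price and the workload numbers below are illustrative assumptions; check your provider's current pricing:

```python
# Back-of-the-envelope cost of a short-lived serverless function.
# The per-GB-second price is an illustrative assumption, not a quote
# from any provider; so are the workload numbers.

PRICE_PER_GB_SECOND = 0.0000166667  # assumed serverless compute price (USD)

def monthly_function_cost(invocations, duration_s, memory_gb):
    """Compute-only cost: you pay only for the seconds the function runs."""
    gb_seconds = invocations * duration_s * memory_gb
    return gb_seconds * PRICE_PER_GB_SECOND

# Assumed workload: 1 million invocations/month, 200 ms each, 512 MB memory.
cost = monthly_function_cost(1_000_000, 0.2, 0.5)
print(f"~${cost:.2f}/month")  # prints ~$1.67/month
```

A million short invocations a month costs less than a cup of coffee, whereas an always-on server would bill for every idle second in between.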
When not to use serverless model
Steady or Predictable Workloads
Serverless is charged based on execution time. Therefore, if your workload has steady, consistent usage, or variable usage that can be predicted by the time of day or day of the week, it generally makes sense to go with monoliths or containers. They are more cost-effective than the serverless approach in fixed-workload situations.
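The same arithmetic that favours serverless for bursty workloads flips for steady ones. The prices below are illustrative assumptions, not quotes from any provider, but they show how a continuously busy worker can cost more as serverless than as a flat-rate instance:

```python
# Rough break-even comparison between serverless and an always-on server
# for a steady workload. All prices are illustrative assumptions.

PRICE_PER_GB_SECOND = 0.0000166667   # assumed serverless price (USD)
SERVER_MONTHLY_COST = 30.0           # assumed small always-on instance (USD)
SECONDS_PER_MONTH = 30 * 24 * 3600

def serverless_monthly_cost(busy_fraction, memory_gb):
    """Cost if the function is busy `busy_fraction` of the time, all month."""
    return SECONDS_PER_MONTH * busy_fraction * memory_gb * PRICE_PER_GB_SECOND

# A steady workload keeping one 2 GB worker ~80% busy around the clock:
steady = serverless_monthly_cost(0.8, 2.0)
print(f"serverless: ~${steady:.2f}/month vs server: ${SERVER_MONTHLY_COST:.2f}/month")
```

Under these assumed numbers the serverless bill comes out at more than double the flat instance price, and that is before per-request fees, which most providers also charge.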
This is exactly why Amazon Prime Video migrated its monitoring service from serverless to a monolith. More on this further down the article.
Long running functions
Serverless functions have execution limits (AWS Lambda, for example, caps a single invocation at 15 minutes) and are generally not suited for long-running tasks. Apart from being suboptimal, they can also prove expensive in such situations. If you have long-running tasks like data-processing pipelines or bulk data-transfer modules, you should consider using containers or monoliths instead.
Resource intensive tasks
Tasks that require heavy backend interaction, such as database connections, multiple network round-trips, orchestrated API invocations, or reading and writing large files, are poor candidates for serverless functions from a cost perspective. These backend dependencies slow down your functions, and because you are billed for wall-clock execution time, you pay even while the function sits idle waiting for a backend response. Such use cases are better served by asynchronous/callback designs running on containers or virtual machines.
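The asynchronous pattern recommended above overlaps backend waits instead of paying for them one after another. Here is a sketch using `asyncio`, where `asyncio.sleep` stands in for network calls; the function names and latencies are made up for illustration:

```python
# Sketch of the asynchronous pattern for I/O-heavy work: overlap
# backend waits instead of serializing them. asyncio.sleep stands in
# for network calls; names and latencies are illustrative.
import asyncio
import time

async def fetch_user(user_id):
    await asyncio.sleep(0.1)  # stand-in for a database round-trip
    return {"id": user_id}

async def fetch_orders(user_id):
    await asyncio.sleep(0.1)  # stand-in for a second backend API call
    return ["order-1", "order-2"]

async def build_profile(user_id):
    # Both backend calls run concurrently, so the total wait is
    # roughly the slower of the two (~0.1 s), not their sum (~0.2 s).
    user, orders = await asyncio.gather(fetch_user(user_id),
                                        fetch_orders(user_id))
    return {"user": user, "orders": orders}

start = time.perf_counter()
profile = asyncio.run(build_profile(42))
elapsed = time.perf_counter() - start
```

On a long-lived container or VM this overlapping costs nothing extra; in a serverless function you would still be billed for every waiting millisecond.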
Custom runtime environment or Non-standard Tech Stack
Serverless platforms like AWS Lambda and Azure Functions support standard technology stacks such as Java, .NET, Python, and Node.js out of the box, along with standard libraries and packages. If your functions use a non-standard tech stack or custom libraries and packages, serverless becomes much harder to use: some platforms support custom runtimes or container images, but these add complexity. Containerized applications with custom images generally work better in such situations.
Stateful Functions
If your functions or tasks are stateful, i.e., they need to maintain state between subsequent invocations, this is difficult to achieve with the serverless model, which is optimized for short-lived, stateless tasks. Maintaining state across function invocations requires a custom state-management solution, something that is much more straightforward in container and monolith applications.
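When state is unavoidable in a serverless function, the usual workaround is to externalize it to a store that outlives each invocation. Below is a minimal sketch of the idea; the in-memory dict stands in for an external store such as DynamoDB or Redis, and all names are illustrative:

```python
# Sketch of externalizing state so a stateless function can "remember"
# values across invocations. The dict stands in for an external store
# such as DynamoDB or Redis; names are illustrative.

STATE_STORE = {}  # in-memory stand-in; a real function would call the store's API

def handler(event, context=None):
    """Count invocations per user by reading and writing external state."""
    user = event["user"]
    count = STATE_STORE.get(user, 0) + 1   # read previous state, if any
    STATE_STORE[user] = count              # persist updated state back
    return {"user": user, "invocations_seen": count}
```

Every read and write to the real store is a network round-trip you pay for in both latency and billed execution time, which is why the text above steers stateful workloads toward containers or monoliths.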
Large Data Transfers
If your requirement involves transferring large chunks of data between applications or services, between applications and databases within the cloud, or between the cloud and the Internet, you should seriously consider steering clear of the serverless model and embracing traditional approaches like containers and monoliths. Cloud providers typically charge for data transfer between components within the cloud and between the cloud and the Internet; data transfers across regions, in particular, can be quite expensive.
The Amazon Prime case
A few months ago, Amazon published an article discussing how Prime Video addressed challenges in scaling up its audio and video monitoring service while reducing costs by 90%. Initially, Prime Video employed a distributed serverless architecture but encountered scaling bottlenecks and high costs. To overcome these issues, the infrastructure was rearchitected into a monolith application, packing all components into a single process and eliminating the need for intermediate storage. This change significantly reduced infrastructure costs and improved scalability, allowing them to monitor thousands of streams effectively. The article emphasizes that the choice between microservices and monolith architecture should be made case by case. These changes ultimately resulted in higher-quality monitoring for all customer streams, a better user experience, and significant cost reductions.
A pragmatic approach to new development
If you are developing a new application or service and cannot decide whether serverless is the right choice, consider prototyping it with the serverless model and migrating to containers or monoliths later if needed.
- Serverless has zero infra setup and can get you started with development instantly.
- Serverless provides basic features like high-availability and security out-of-the-box, so you do not have to worry about implementing them manually.
- Serverless is production-ready from day one and can help you deploy your applications and services quickly and confidently to production.
If you decide to stay with serverless, well and good. Moving to monoliths or containers later due to cost or other factors usually requires a complete re-architecture of your solution; however, that additional effort can be justified by the time and cost saved, and the risks averted, by opting for serverless at the start.
Serverless offers numerous advantages, such as scalability, cost-effectiveness, and rapid development, making it a compelling choice for many use cases. However, it’s crucial to assess your specific use case to determine whether serverless is the right fit. Consider factors like workload, execution time, and control over the environment when deciding whether to go with serverless or consider more traditional infrastructure solutions like monolith applications. The key is to choose the approach that aligns best with your application’s requirements and goals.
If you found this article useful and actionable, or have suggestions to improve it, please leave a comment.