Serverless computing is one of the fastest-growing paradigms in the cloud. What are its actual benefits, and what differentiates it from other models? With the advent of serverless computing, developers and IT departments have been able to focus on strategic activities, leaving aside time-consuming tasks such as planning, procuring, and maintaining computing resources. Let us clarify immediately that we are in the field of cloud computing. As you know, there are three main cloud computing models, which differ in their levels of control, flexibility, and resource management:
- Infrastructure as a Service (IaaS) provides the fundamental building blocks of the infrastructure: networking, machines (virtual or on dedicated hardware), and storage.
- Platform as a Service (PaaS) frees you from managing the underlying infrastructure (hardware and operating systems) so you can focus on deploying and managing applications. With this model, you no longer spend time on resource procurement, software maintenance, patching, and similar tasks.
- Software as a Service (SaaS) finally offers a complete platform managed by the service provider and allows users to focus on the application layer.
In recent years these models have evolved, and new paradigms for managing and consuming resources have emerged. One of these is Function as a Service (FaaS), also called serverless computing: a cloud computing paradigm that allows applications to run without concern for the underlying infrastructure. The term “serverless” can be misleading: one might think the model involves no servers at all. In reality, it means that the provisioning, scaling, and management of the servers on which the applications run are handled automatically and completely transparently for the developer.
All this is possible thanks to a new architectural model called serverless. The first FaaS offering was Amazon’s AWS Lambda service, released in 2014. Over time, alternatives from other prominent vendors have joined Amazon’s solution: Microsoft with Azure Functions, and IBM and Google with their respective Cloud Functions offerings. There are also solid open-source solutions: among the most widely used are Apache OpenWhisk, adopted by IBM itself on Bluemix for its serverless offering, and OpenLambda and IronFunctions, based on Docker container technology.
What Is A Function?
A function contains code that a developer wants to run in response to specific events. The developer configures this code and specifies its resource requirements in the vendor’s console. Everything else, including resource sizing, is handled automatically by the provider based on the actual workload.
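A function of this kind can be as small as a single entry point. Here is a minimal sketch in the style of an AWS Lambda Python handler; the event shape and the greeting logic are illustrative assumptions, not a specific production setup:

```python
import json

def lambda_handler(event, context):
    """Entry point the platform invokes for each event.

    `event` carries the trigger payload (e.g. an HTTP request body);
    `context` exposes runtime metadata such as the remaining time.
    """
    # Illustrative payload field; real events depend on the trigger.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The provider wires the trigger (an HTTP request, a queue message, a file upload) to this entry point; the developer never touches the server it runs on.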
What Are The Advantages Of This Approach Over Traditional Cloud Computing? Why Opt For A New Computation Model?
The benefits deriving from serverless computing are many:
- No infrastructure management: Developers can focus on the product to build rather than running and managing servers at runtime.
- Automatic scaling: Resources are automatically recalibrated to cope with any workload, reacting to events in real time without requiring any scaling configuration.
- Resource Usage Optimization: As computing and storage resources are dynamically allocated, you no longer need to invest in excess capacity upfront.
- Cost reduction: In traditional cloud computing, you pay for running resources even when they sit idle. Serverless applications are event-driven: when the application code is not running, there is no charge, so you never pay for unused resources.
- High Availability: The services that manage the infrastructure and application ensure high availability and fault tolerance.
- Improved Time-To-Market: Eliminating infrastructure management burdens allows developers to focus on product quality and get code to production faster.
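The pay-per-use point above can be made concrete with a back-of-the-envelope calculation. The unit prices below are indicative assumptions (roughly in line with AWS Lambda’s published rates at one point in time), not current figures; always check the provider’s pricing page:

```python
# Illustrative unit prices (assumptions, not current quotes).
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second of compute

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Estimate a month's bill: you pay per request plus per GB-second
    of actual execution time -- nothing while the code is idle."""
    compute_gb_s = invocations * avg_duration_s * memory_gb
    return invocations * PRICE_PER_REQUEST + compute_gb_s * PRICE_PER_GB_SECOND

# Example: 3M invocations/month, 200 ms each, 512 MB of memory.
cost = monthly_cost(3_000_000, 0.2, 0.5)  # roughly 5.6 USD
```

The key property is that the bill scales with actual invocations: a month with zero events costs zero compute, unlike an always-on virtual machine.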
Possible Problems And Limitations
As always, not all that glitters is gold. There are cons to consider when evaluating the adoption of this paradigm:
- Possible loss of performance: If the code is invoked infrequently, its execution may suffer latency compared with code running continuously on a server, virtual machine, or container. Unlike what happens with auto scaling policies, in the serverless model the cloud provider often deallocates resources completely when the code is idle. If the runtime takes time to start (a JVM, for example), this adds inevitable latency to the initial invocation, the so-called cold start.
- Stateless: Serverless functions operate in stateless mode. If you need to persist state, for example parameters to pass as arguments to a different function, you must add a persistent storage component to the application flow and tie the events together. For this purpose, Amazon provides an additional tool, AWS Step Functions, designed to coordinate and manage the state of all the microservices and distributed components of a serverless application.
- Resource limits: Serverless computing is not suitable for some workloads or use cases, particularly high-performance ones, both because of the resource limits imposed by the cloud provider (for example, AWS caps the number of concurrent executions of Lambda functions) and because of the difficulty of provisioning the desired number of servers within a fixed, limited time frame.
- Debugging and monitoring: If you rely on non-open-source solutions, you depend on the vendor for debugging and monitoring your applications. You cannot diagnose problems in detail with your own profilers or debuggers, and must rely on the tools made available by the respective provider.
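The standard mitigation for the cold-start latency described above is to move expensive initialization to module scope, so only the first (“cold”) invocation pays for it and subsequent “warm” invocations reuse the cached object. A minimal sketch, where `load_model` and its timing merely stand in for loading a runtime, a model, or a database connection pool:

```python
import time

_EXPENSIVE_RESOURCE = None  # populated lazily, survives warm invocations

def load_model():
    """Stand-in for slow initialization (library load, DB pool, etc.)."""
    time.sleep(0.1)
    return {"ready": True}

def handler(event, context):
    global _EXPENSIVE_RESOURCE
    if _EXPENSIVE_RESOURCE is None:
        # Cold start: pay the initialization cost exactly once.
        _EXPENSIVE_RESOURCE = load_model()
    return {"resource_ready": _EXPENSIVE_RESOURCE["ready"]}
```

Because the provider keeps the process alive between closely spaced invocations, everything at module scope is effectively a per-instance cache; only a freshly allocated instance pays the startup cost again.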
Several companies already rely on serverless computing. Localytics, for example, uses AWS Lambda to process billions of data points in real time, both historical data stored in S3 and data streamed from Kinesis. The Seattle Times uses AWS Lambda to resize its online edition’s images so they display correctly on desktops, tablets, and smartphones alike.
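An S3-triggered function like the image-resize use case above follows a simple shape: the platform delivers an “object created” notification, and the function works out which object to process. The sketch below handles only that event-parsing half (following the documented S3 notification structure, with keys arriving URL-encoded); the resize step itself is deliberately left as a stub rather than pulling in an image library:

```python
from urllib.parse import unquote_plus

def extract_s3_objects(event):
    """Return (bucket, key) pairs from an S3 notification event."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # Object keys are URL-encoded in the event (spaces become '+').
        key = unquote_plus(s3["object"]["key"])
        objects.append((bucket, key))
    return objects

def handler(event, context):
    results = []
    for bucket, key in extract_s3_objects(event):
        # A real function would download, resize, and re-upload here.
        results.append(f"resized {bucket}/{key}")
    return results
```

With this wiring, every upload to the watched bucket invokes the function once, and the provider fans out parallel invocations automatically when many uploads arrive at once.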
Spindox In The Serverless Environment
Spindox uses AWS Lambda for several purposes:
- Automating the backup processes of AMIs.
- Launching CloudFormation scripts to generate, for example, virtual machines that back up shared folders and publish them to S3.
Spindox also integrates Lambdas with IoT projects to process and log requests from an internally developed chatbot.
It therefore appears evident that the use of serverless computing is closely tied to the type of product being developed, and that not all applications suit this paradigm. The limitations become especially clear with legacy systems, which are not always easily adapted to new technologies, or with overly complex systems, where costs risk growing out of control.
If used with due care, the advantages for new applications are evident in both the development process and the quality of the resulting product. Using resources only when they are needed makes this a very flexible and attractive model for companies, but what does it mean for development and operations teams? Could the absence of resource planning lead to insufficiently engineered applications, lacking proper attention to both performance and resource usage?