An evolution of Infrastructure-as-a-Service public cloud platforms that eliminates server set-up and maintenance, Serverless has been, alongside Kubernetes, among the fastest-growing approaches to cloud development. Serverless architecture is based on the use of Functions-as-a-Service (FaaS) and a quickly growing ecosystem of services that help software development teams build and maintain apps more quickly, reliably, and cheaply.
Serverless architecture is the natural evolution of an established software development trend
Serverless promises to make cloud development faster, cheaper and more reliable while minimising points of failure. It does so by extending the established trend of optimising what a development team has to code and configure from scratch into the infrastructure itself.
The architectural approach goes a long way to reducing, in many cases almost eliminating, the risk of infrastructure as a point of failure and efficiency bottleneck. Developers can focus on just the app itself, leaving infrastructure provisioning to the FaaS provider, of which the three largest and most mature are the public cloud giants AWS, Microsoft Azure, and Google Cloud Platform.
Offering benefits across a wide range of use cases, from Big Data processing to short-running tasks and user-facing apps, Serverless is quickly gaining traction as a strategic choice of architecture. Its technical efficiencies and pay-as-you-go business model make Serverless an attractive option across the spectrum, from start-ups to multinational enterprises.
Slashdata’s most recent State of Cloud-native Development report, published in December 2021, puts active use of FaaS and Serverless architecture at only slightly behind that of container orchestration tools and platforms like Kubernetes and Docker.
And while the Slashdata report also shows Serverless architecture adoption has slowed over the past couple of years, DataDog’s State of Serverless report shows that software development teams that have adopted Serverless are ramping up their usage. AWS Lambda functions were invoked 3.5 times more often last year than two years earlier, and each organisation’s functions ran for an average of 900 hours a day. Around 50% of AWS users also use FaaS.
Source: DataDog
FaaS adoption is also growing quickly among Azure and GCP users. Between 2019 and 2021, the share of Azure organisations running Azure Functions grew from 20% to 36%, and over a quarter of organisations using Google Cloud now use Cloud Functions.
But despite its strengths, Serverless isn’t always the optimal architectural choice for every app and organisation. It’s great for short-lived tasks and Serverless’s pay-as-you-go business model can massively reduce cloud costs for such loads. However, cost efficiencies can flip when Serverless architecture is used for more computationally intensive loads.
When is Serverless architecture probably not the right approach?
Broadly speaking, Serverless architecture is probably not the optimal choice for your next app if:
Workloads will be constant – Serverless’s strength is scaling infrastructure up and down for dynamic loads that peak and trough throughout the day or have strong seasonality.
Avoiding vendor lock-in is strategically important – the most mature Serverless platforms belong to the big public cloud vendors AWS, Azure and GCP and are not compatible with each other. Even the open-source Apache OpenWhisk isn’t compatible with the other platforms, though there is hope that Serverless will eventually benefit from the same kind of community standards the OCI introduced for containers.
Advanced monitoring is required – despite the growing number of Serverless monitoring tools available, both third-party products and those offered by Serverless providers, feature sets remain relatively rudimentary. Serverless architecture may not be the best choice for you right now if robust environment monitoring is a priority, even if tools will undoubtedly improve.
You will have long-running functions – one of the most significant limitations of Serverless functions is that code instances can only run for a limited amount of time – up to 15 minutes for Lambda. While that’s enough for a majority of workloads, it may fall short of the needs of some applications. Longer workloads can also be more expensive on a Serverless platform compared to a standard cloud service because the pricing model is designed for short-lived functions.
You plan to use programming languages not supported by Serverless platforms – Serverless platforms only support code written in a selection of core programming languages, which makes Serverless a problematic choice of architecture for apps written in other languages. APIs, wrappers and other tricks can be used to run other languages, but these are usually cumbersome solutions and best avoided.
AWS Lambda natively supports:
Java, Go, PowerShell, Node.js, C#, Python, and Ruby
Azure Functions natively supports:
C#, F#, JavaScript, TypeScript, PowerShell, Java, Python
Google Functions natively supports:
Node.js, Python, Go, Java, .NET, Ruby, and PHP
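To make the FaaS model concrete, here is a minimal sketch of a function in the handler shape AWS Lambda expects for Python. The `name` field in the event payload is a hypothetical input chosen purely for illustration:

```python
import json

def handler(event, context):
    # 'event' carries the request payload; 'context' holds runtime metadata.
    # The "name" field is a hypothetical input for this example.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is plain Python, it can be exercised locally before being packaged and deployed to the platform.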
Now we’ve quickly addressed the limitations of a Serverless architecture approach, let’s take a closer look at its strengths and when it might be the right choice for your next app.
Serverless strengths
Serverless Takes Focus Off Infrastructure and Onto Product
Serverless technology means developers no longer have to worry about infrastructure and can focus all their efforts on the product.
Problems that usually demand significant developer input, such as scaling and reliability, are minimised by a Serverless vendor taking on infrastructure provisioning. The faster, cheaper software development process facilitated by Serverless can also boost innovation by slashing the price of failure.
No server management is necessary
Although ‘serverless’ computing does actually take place on servers, developers never have to deal with the servers. They are managed by the vendor. This can reduce the investment necessary in DevOps, which lowers expenses, and it also frees up developers to create and expand their applications without being constrained by server capacity.
Cloud cost optimisation – you are only charged for the server space and computation your apps use
Serverless computing works on a ‘pay-as-you-go’ business model with users only charged for what they actually use. Because functions and other services are spun up and then deleted as soon as they finish, there is no overhead. Provisioning is also precise and automatically scales up and down as needed through peaks and troughs of user activity or data flows.
Service usage can be charged as precisely as 100-millisecond increments.
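As an illustration of how pay-per-use billing works out, the sketch below estimates a monthly bill from invocation count, average duration and memory allocation. The default prices are assumptions for the example, not current list prices of any vendor:

```python
def lambda_cost(invocations, avg_duration_s, memory_gb,
                price_per_gb_s=0.0000166667, price_per_million_req=0.20):
    """Rough monthly cost estimate under an assumed pay-per-use price list."""
    # Compute is billed per GB-second: duration x memory, summed over calls.
    compute_gb_s = invocations * avg_duration_s * memory_gb
    compute_cost = compute_gb_s * price_per_gb_s
    # Requests are billed per million invocations.
    request_cost = (invocations / 1_000_000) * price_per_million_req
    return compute_cost + request_cost

# e.g. 5M invocations/month, 200 ms each, 512 MB memory
print(round(lambda_cost(5_000_000, 0.2, 0.5), 2))
```

The same arithmetic makes it easy to see how the model flips for long-running, compute-heavy loads: cost scales linearly with duration and memory, with no flat-rate ceiling.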
Automated scalability
As already mentioned, one of the most valuable characteristics of Serverless architecture is that it scales automatically to meet demand smoothly before scaling back down again. A Serverless application will handle unusually high peaks in requests or process a single request from a single user in the same way, without the need for any manual intervention.
More traditional architectures, even those which are cloud-based, can be overwhelmed by a sudden increase in requests.
Faster deployment and iteration
The fact that launching new working versions of an application doesn’t involve uploading any code to servers or backend and infrastructure configuration can significantly reduce the time required to build and deploy a new app or iterate on active software.
Because Serverless architecture is based on microservices rather than a monolithic stack, code can be uploaded either all at once or one function at a time. This also makes deploying updates like new features, patches and fixes easier and faster.
Leveraging edge computing can decrease latency
While one of the drawbacks of Serverless is latency resulting from starting functions that haven’t been used for some time ‘cold’, the architectural approach also allows for edge computing.
The code of a Serverless app can be run from anywhere so providers can run functions from servers as close to the user as possible. This helps reduce latency because requests don’t have to travel to an ‘origin server’ that could be on the other side of the world.
From this point of view, if you have a more international audience, you should consider your Serverless vendor’s global datacentre infrastructure. AWS has the broadest geographical reach of the three main providers.
When is a Serverless approach worth considering?
A Serverless approach could be worth serious consideration under the following circumstances:
- You are developing small to mid-sized applications
- Loads are unpredictable
- The application requires a lot of quick (fail-fast) experimenting
- You have a team prepared and able to leverage Serverless advantages
Common Serverless use cases
Serverless architecture is increasingly popular for use cases like analytics pipelines and Big Data (map-reduce problems, high-speed video transcoding, stock trade analysis, and compute-intensive Monte Carlo simulations for loan applications) as well as:
- Web applications
- Backends
- Data processing
- Chatbots
- Virtual assistants like Amazon Alexa and Google Assistant
- IT automation
Some other more specific use cases include:
Media and log processing – Serverless approaches offer natural parallelism, making it easier to process compute-heavy workloads without developing multithreaded systems or scaling compute fleets by hand.
IoT backends – you can bring any supported code, including native libraries, simplifying the development of cloud-based systems that incorporate device-specific algorithms.
Custom logic and data handling – Used in on-premises appliances such as AWS Snowball Edge. Serverless applications can smoothly function in a wide variety of environments, including within devices, because they decouple the application itself from the execution environment.
Does Serverless make DevOps redundant?
Implementing a serverless architecture doesn’t mean that there is no room for DevOps. There is still a need for monitoring, deployment, security, networking, support and debugging. The ops of DevOps is often redistributed across the team rather than handled by a dedicated DevOps engineer, but a DevOps approach and culture can still hugely benefit Serverless app development.
Caveats and downsides to Serverless architecture
As mentioned at the outset of this article, while Serverless architecture brings many advantages and efficiencies to the software development lifecycle (SDLC), no single architectural approach, tool, or technology represents only improvements on alternatives in every context. There are always use cases that play to the strengths or highlight the limitations of alternative technologies which may overlap in their potential application but also have key differences.
These are some of the most documented limitations of Serverless architecture:
Monitoring is complicated
Any discussion of the disadvantages of a serverless approach inevitably touches on shortcomings around advanced observability and monitoring. Monitoring tools, both native to Serverless platforms and provided by third parties, exist but tend to offer limited features and don’t provide the same level of insight as is available for more standard cloud-native and container-based development.
That’s to a large extent because the vendor handles the infrastructure, so Serverless architectures don’t require the same depth of infrastructure monitoring. But monitoring is still often important for security and troubleshooting. Serverless architectures are a collection of one or more short-lived and isolated functions. This distributed nature enhances scalability and efficiency but also makes it difficult to provision the kind of event-data analytics platforms that ingest and present application logs.
However, Serverless monitoring is improving thanks to new tools and the further development of existing solutions, and should continue to evolve favourably over the coming years. Some monitoring and logging platforms have already seen massive improvements over a short period of time. In the meantime, it’s an area to keep an eye on. Serverless functions are stateless, which in many cases makes them hard to debug.
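Because functions are stateless and short-lived, application logs are often the main observability signal a team controls. One common mitigation is emitting structured (JSON) log lines that log platforms can index and query. A minimal stdlib-only sketch, where the field names are illustrative rather than any vendor’s convention:

```python
import json
import time

def log_event(level, message, **fields):
    """Emit one structured log line; log platforms can index the JSON keys.

    The field names passed in are illustrative, not a vendor convention.
    """
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    print(json.dumps(record))  # FaaS platforms typically capture stdout as logs
    return record

log_event("INFO", "order processed", function="checkout", duration_ms=42)
```

Consistently structured lines like these are far easier to aggregate across many short-lived function invocations than free-form text.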
Startup latency and “cold starts”
When Serverless is considered as a choice of architecture, the issue of “cold starts” should be taken into account. The term refers to the longer time needed to awaken a dormant function that hasn’t been used for some time. Developers can work around the latency cold starts introduce by keeping functions “warm” – hitting them at regular intervals. Note, however, that this works only for smaller functions or workflows that are quite straightforward.
Smaller applications built on efficient code also reduce cold start times, as does the choice of faster programming languages like Python or Go. And, as mentioned earlier, Serverless apps can also leverage edge computing as a way to minimise latency.
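The “warming” trick mentioned above can be sketched as follows. The `warmup` event key is simply a convention chosen for this example (not a platform API), paired with a scheduled trigger that pings the function every few minutes:

```python
def handler(event, context):
    # A scheduled rule (e.g. firing every 5 minutes) can send {"warmup": true}
    # to keep the container alive. The key name is a convention chosen here,
    # not a platform API.
    if event.get("warmup"):
        return {"warmed": True}  # exit early; skip the real work
    # ... normal request handling would go here ...
    return {"statusCode": 200, "body": "processed"}
```

The early return matters: a warm-up ping should cost as little compute time as possible, since it exists only to keep the execution environment resident.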
The different Serverless providers have varying performance when it comes to reducing cold start times with AWS coming out on top in comparative analysis and Azure trailing by some distance.
Source: Mikhail Shilkov
Execution duration
FaaS functions are usually limited by a maximum allowed duration per invocation. This is 15 minutes in the case of AWS. The maximum duration of Azure Functions depends on the payment plan selected and is limited to 10 minutes on the Consumption plan. Users on the Premium plan get a default run duration limit of up to 30 minutes (to prevent runaway executions) but can modify the host.json configuration to make the duration unbounded. And Google Cloud Functions have a maximum run time of 9 minutes.
A few years ago, maximum function run times were 5 minutes across all the main Serverless vendors. That was enough for the large majority of needs but could occasionally be a blocker for certain apps. The current, extended maximum runtimes mean the limit is now very rarely an obstacle, but it is still worth keeping in mind for outlying use cases involving tasks with particularly long run times. Using Serverless for long-running tasks can also become expensive and invert the usual cost efficiencies associated with the approach.
Many long-lived tasks can be broken down into a series of smaller, coordinated microservices better suited to FaaS functions but that might not always be a viable course of action.
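That decomposition pattern can be sketched as follows: each invocation processes one bounded slice that fits comfortably inside a FaaS time limit and returns a cursor for the next slice, with an orchestrator (a state machine or self re-invocation) driving the job to completion. The doubling “work” below is a stand-in for real processing:

```python
def process_chunk(items, cursor, chunk_size=100):
    """Process one bounded slice of a long job and return the next cursor.

    Each call does an amount of work that fits a FaaS time limit; an
    orchestrator calls it again with the returned cursor until it is None.
    """
    chunk = items[cursor:cursor + chunk_size]
    results = [item * 2 for item in chunk]  # stand-in for real work
    next_cursor = cursor + chunk_size if cursor + chunk_size < len(items) else None
    return results, next_cursor

# Drive the whole job chunk by chunk, as an orchestrator would:
data = list(range(250))
cursor, out = 0, []
while cursor is not None:
    results, cursor = process_chunk(data, cursor)
    out.extend(results)
```

The essential design choice is that all progress state lives in the cursor, not in the function, so any invocation can fail and be retried without corrupting the job.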
Security
The bad news is that JSON parsing can be rather tricky in a Serverless environment, and unvalidated payloads are a common attack surface. The good news is that AWS services hand Lambda the event payload in a defined, per-service structure. If messages are embedded in the JSON payload itself, explore JSON schema validation tools, then check the data types of attributes after validation. And if you’re processing binary objects, explore packages that can help verify or test their contents.
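As a rough illustration of those validation steps, here is a stdlib-only Python sketch. A real project would more likely use a dedicated schema-validation library, and the payload fields shown are hypothetical:

```python
import json

# Hypothetical expected shape for an event payload: field name -> required type.
SCHEMA = {"order_id": str, "quantity": int, "price": float}

def validate_payload(raw):
    """Parse a JSON payload and check attribute types, as described above.

    Returns the parsed dict, or raises ValueError on a bad payload.
    (A stdlib-only sketch; a schema-validation library would do more.)
    """
    data = json.loads(raw)
    for field, expected_type in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"{field} should be {expected_type.__name__}")
    return data
```

Rejecting malformed input at the function boundary keeps downstream code from having to defend against it.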
Testing and CI/CD
Your CD pipeline should be captured as code and version controlled. Builds should be replicable. Dependencies, including transitive dependencies, should be locked down to exact versions. Otherwise, minor/patch version updates can creep in between builds, in which case the build cannot be reproduced.
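A simple way to enforce that pinning is a check that flags any requirement line not locked with `==`. This is a minimal sketch, assuming a pip-style requirements format:

```python
def unpinned_requirements(lines):
    """Return requirement lines that are not locked to an exact version.

    A reproducible build wants every dependency pinned with '=='; anything
    using '>=', '~=' or no specifier at all can drift between builds.
    """
    problems = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            problems.append(line)
    return problems

reqs = ["requests==2.31.0", "boto3>=1.26", "# comment", "flask"]
print(unpinned_requirements(reqs))
```

Run as a CI step, a check like this fails the build the moment an unpinned dependency sneaks into the manifest.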
Serverless Stack Providers
A host of new stack providers have entered the serverless space over the past few years. However, while the functionality they provide can be great for particular use cases, they can’t offer the same range of services as the behemoths AWS, Microsoft Azure, and Google Cloud Platform.
Lambda — Amazon Web Services
Amazon Web Services (AWS) is the cloud computing market leader and offers the widest range of supporting tools and resources.
A big plus to implementing AWS Lambda is the all-encompassing and well-written documentation that is the result of the FaaS platform’s relative maturity compared to rivals.
Lambda has benefited from increased credibility and profile as a result of the success of some of its high-profile customers, such as Netflix, Thomson Reuters, Vogue and The Guardian. Another strength is the “democratic” approach it takes to its standard pricing model, which allows for up to one million requests and 400 terabyte-seconds of compute time per month. This is more than sufficient for you to take it for a spin and size up all the pros and cons within the context of your particular needs without being hit by a huge bill.
Azure Functions (Microsoft Azure)
Microsoft is quickly expanding the Azure functionalities suite (as well as its client base) as it goes head-to-head with AWS for Serverless market share. Supported resources are pretty much along the same lines as what AWS offers, but Azure also provides quite a few additional features specific to the .NET and TypeScript audience.
Microsoft takes care of its community of developers by meticulously documenting all its products and creating the conditions for further improvements. Azure’s attractive pricing model also ensures constant community growth. Azure offers pricing plans that are initially by far the lowest among large providers when comparing offers for the same workload. But…be careful. The introductory pricing offer is for two years only, after which Azure becomes much more expensive.
If choosing between the two major players (AWS and Azure), you are most likely to choose the one that provides the most comfortable environment and best support for the technologies that you anticipate applying on the stack.
Cloud Functions (Google Cloud Platform)
It was only to be expected that Google would throw its hat into the ring for the Serverless market in competition with AWS and Azure. Google’s Cloud Functions does not offer anything particularly unique when compared to the other two but several of the features provided are still worth noting.
The attention to detail that has been paid to its documentation demonstrates that Google has put significant effort into making it in-depth, easy-to-understand and simple to navigate.
The pricing model for Google Cloud Functions is slightly different from those of AWS and Azure — Google’s free tier allows for 2 million invocations per month, with a charge of $0.0000004 per invocation above that.
In conclusion
The decision to migrate legacy apps to Serverless architecture or adopt Serverless computing for new projects should be well-considered. To benefit from a Serverless approach you have to clearly understand why your project may need it, how it is implemented and what drawbacks you may have to contend with.
The K&C team have significant experience with Serverless architecture. If you’re not sure if Serverless is the right approach for you, then please do give us a shout and we’ll be happy to offer an insightful assessment of its compatibility with your unique needs or help you develop your next project end-to-end!