2017 became a landmark year for the adoption of serverless architecture. More than half of 2018 is behind us, and the trend hasn’t lost any momentum. The big ServerlessConf 2018 in San Francisco proved it, as developers along with tech-savvy entrepreneurs expressed their thoughts and shared their experiences on the serverless approach.
A recent analysis of AWS customers found that serverless adoption is growing 2.5x faster than the adoption of server containers. It is clear, then, that serverless architecture in cloud computing is a ‘must have’ for companies now and in the future. It lets developers create and run programs and services without sweating over the managerial processes that come with running a vast server framework.
K&C experts build backend apps for different clients. Over the last year, we have been looking at some of the practices we have used when building these backends for our clients and wanted to share some of our conclusions with you.
There are many new stack providers that want to jump on the serverless bandwagon. Even though the functionality they provide is great, they are still behind the renowned behemoths such as Amazon, Azure, and Google.
Amazon Web Services (AWS) is the name that first comes to mind when thinking about cloud computing. Amazon is the largest provider in the cloud space and has a wider range of supporting tools and resources than any other competitor.
A big advantage of implementing AWS Lambda is the all-encompassing documentation, which is well written thanks to the project’s maturity. Every update and new functionality is recorded online, which eliminates unpleasant moments such as it-doesn’t-work-like-it-says-it-does.
Delivering a serverless application that can run at scale demands a platform with a broad set of capabilities, and Lambda helps provide them:
- Cloud logic layer
- Orchestration and state management
- Responsive data sources
- Application modeling framework
- Application and integrations libraries
- Security and access control
- Reliability and performance
- Global scale and reach
Lambda gains credibility from customers such as InVision, CircleCI, and 9GAG, and it takes quite a “democratic” position with its pricing model: the free tier includes one million requests and 400,000 gigabyte-seconds of compute time per month. This is a sufficient amount to try it out and size up all the pros and cons without getting a huge bill.
The Azure platform from Microsoft is quickly expanding its functions (as well as its client base) as it competes with AWS for market share. Supported resources are much the same as what AWS offers, but Azure also provides quite a few additional features aimed at the .NET and TypeScript audience.
As for the developer community, Microsoft documents all its products and creates comfortable conditions for further improvement. Its pricing model also ensures constant community growth: Azure boasts cost estimates that are in fact the lowest among the large providers for the same workload.
Choosing between the two major players (AWS and Azure), you are more likely to choose the one with the most comfortable environment and best support for the technologies that you’d like to apply on the stack.
It would be strange if Google didn’t take part in the serverless race alongside Amazon and Azure. Cloud Functions do not offer anything radically different, though some of the features provided are worth noting.
Considering its documentation, it’s easy to see that Google has put lots of effort into making it in-depth, easy-to-understand and simple to navigate.
The pricing model for Google Cloud Functions is slightly different from those of AWS and Azure — Google’s free tier allows for 2 million invocations per month, with a charge of $0.0000004 per invocation after that.
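Using the figures above, a back-of-the-envelope sketch of a monthly Google Cloud Functions invocation bill might look like this (the function name and sample invocation counts are our own illustrations):

```python
# Rough monthly cost estimate for Google Cloud Functions invocations,
# based on the free tier and per-invocation price quoted above.
FREE_INVOCATIONS = 2_000_000
PRICE_PER_INVOCATION = 0.0000004  # USD, charged beyond the free tier

def monthly_invocation_cost(invocations: int) -> float:
    """Return the invocation charge in USD after the free tier is used up."""
    billable = max(0, invocations - FREE_INVOCATIONS)
    return billable * PRICE_PER_INVOCATION

print(monthly_invocation_cost(1_500_000))   # within the free tier: no charge
print(monthly_invocation_cost(10_000_000))  # 8 million billable invocations
```

Note that this covers invocation charges only; compute time and networking are billed separately.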
Just as we promised, here are some of the bottlenecks connected with serverless architecture and its implementation.
Every discussion of the disadvantages of the serverless approach begins with complicated observability, which costs developers a substantial part of the critical insight into their functions. As a result, an overwhelming majority of developers simply don’t know how to work with the new tooling well enough to perform even the simplest tasks.
Although the issue is a hard one, serverless observability could become much better in the next couple of years. Some monitoring and logging platforms have already seen massive improvements in a short period of time. In any case, it’s better to stay on our toes: serverless functions are stateless, which makes them hard to debug in the majority of cases.
When you think “serverless”, “cold starts” often come to mind as well.
Yet, there is a pretty easy workaround — just keep your functions “warm”. This is possible if you hit them at regular intervals. Note however, that this works only for smaller functions or workflows that are pretty straightforward.
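One common way to implement this, sketched below in Python for AWS Lambda: schedule a rule (e.g. CloudWatch Events) to invoke the function every few minutes with a marker payload, and have the handler return early when it sees that marker. The `warmup` key is purely our own convention, not anything AWS defines:

```python
def handler(event, context):
    # A scheduled "keep-warm" ping carries a marker key we chose ourselves;
    # returning early keeps a container alive without running real work.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}

    # ... normal business logic for real events goes here ...
    return {"statusCode": 200, "body": "handled a real event"}
```

The early return matters: a warm-up ping that runs your full code path would cost you real compute time every few minutes.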
If you want to lessen cold start times, you should always mind the size of your application, and thus your code. Also, we advise you to choose the language more carefully — good choices are Python or Go.
FaaS functions are usually constrained in the allowed duration of each invocation. Currently, an AWS Lambda function is given around 5 minutes to respond to an event before it is cut off. Microsoft Azure and Google Cloud Functions are similar in this regard.
This results in re-architecture of certain classes of long-lived tasks in order to make them suitable for FaaS functions — you may need to create several different coordinated FaaS functions, whereas in a traditional environment you may have one long-duration task performing both coordination and execution.
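One minimal sketch of this re-architecture, assuming a workload that can be split into chunks: each invocation processes one chunk within the time limit and hands the remainder to a fresh invocation. The chunk size, the doubling "work", and the injectable `invoke_next` callback are all illustrative; in production `invoke_next` would be an asynchronous Lambda invoke (e.g. via boto3) or a Step Functions state transition:

```python
CHUNK_SIZE = 100  # items one invocation can safely finish within the time limit

def process_chunk(items):
    # Stand-in for the real per-item work.
    return [item * 2 for item in items]

def handler(event, context=None, invoke_next=None):
    """Process one chunk, then hand the rest to a fresh invocation.

    `invoke_next` is injectable here so the sketch can run locally; in
    production it would trigger the next asynchronous invocation.
    """
    items = event["items"]
    results = process_chunk(items[:CHUNK_SIZE])
    remaining = items[CHUNK_SIZE:]
    if remaining and invoke_next:
        invoke_next({"items": remaining})  # pass the cursor to the next run
    return {"processed": len(results), "remaining": len(remaining)}
```

The key design point is that state travels in the event payload rather than in the function, which keeps each invocation stateless and short.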
Serverless computing of the type that we can observe with AWS Lambda is without doubt a highly helpful resource. So don’t put off incorporating the technology into your DevOps delivery chain. However, it is also fair to say that although serverless computing is irreplaceable for a variety of tasks, it still can’t substitute for some other technologies when it comes to deploying and managing your own containers. Serverless computing is designed to work alongside containers, rather than replace them.
The bad news is that JSON parsing can be rather tricky. The good news is that AWS services hand Lambda the event payload in a structure defined per service. All you need to do is explore JSON schema validation tools if you are processing messages embedded in the JSON payload itself, then check the data types of the attributes after validation. And if you’re processing binary objects, explore packages that can help verify or test their contents.
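A minimal sketch of that type-checking step, using only the standard library (the field names `order_id` and `amount` are illustrative, not part of any AWS-defined event structure):

```python
import json

def parse_order(payload: str) -> dict:
    """Parse a JSON message embedded in a payload and verify attribute types."""
    data = json.loads(payload)  # raises json.JSONDecodeError on malformed input
    if not isinstance(data.get("order_id"), str):
        raise ValueError("order_id must be a string")
    amount = data.get("amount")
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(amount, (int, float)) or isinstance(amount, bool):
        raise ValueError("amount must be a number")
    return data
```

For anything beyond a handful of fields, a declarative schema validator scales better than hand-written checks like these.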
Your CD pipeline should be captured as code and version controlled. Builds should be reproducible. Dependencies, including transitive dependencies, should be locked down to exact versions. Otherwise, minor or patch version updates can creep in between two builds, in which case the build is not reproducible.
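For a Python-based Lambda, one way to achieve this is a fully pinned requirements file (e.g. the output of `pip freeze`); the package versions below are examples only:

```text
# requirements.txt — every dependency pinned to an exact version,
# including transitive ones, so two builds resolve identically
boto3==1.7.84
botocore==1.10.84   # pulled in by boto3, pinned explicitly anyway
jmespath==0.9.3
```

A range such as `boto3>=1.7` would defeat the purpose: a patch release published between two builds would silently change what gets deployed.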
The choice to migrate to serverless architecture should be a well-considered decision. To benefit from a serverless approach, you have to perfectly understand why your project may need it, how it is implemented and what drawbacks you may face.
The K&C team have worked with serverless architecture for a couple of years. If you’re not sure if serverless is for you, then come to us and get an insightful reply!