This is the second part of our series of articles dedicated to serverless architecture. In the first part, we got acquainted with the idea of serverlessness and discussed more practical matters, such as the providers of serverless stacks and some caveats of the serverless approach. In this article, we’re going to find out why AWS Lambda makes the difference and list its primary benefits for enterprises.
One thing we are 100% sure of is that serverless architecture is an excellent choice for saving costs and improving time-to-market on a project. In our experience, with a serverless approach a business can solve 80% of its enterprise requirements at 20% of the cost of managing server infrastructure. Cost saving is a major reason enterprises shift their stacks to serverless.
A case in point is the story of Postlight, a company that builds growth-oriented platforms, websites, and apps. While rewriting the Readability Parser API (which powered the widely used Readability read-it-later app), one of their goals was to cut monthly costs, which were around $10,000 at the time.
The company substantially decreased its expenditure by adopting a serverless architecture running on AWS Lambda and API Gateway, built and deployed with the Serverless framework.
Another good example is Rob Gruhl, Senior Manager of Emerging Technologies at Nordstrom and one of the first people to use serverless architectures to run real-time web applications at massive scale. Here’s how he describes his experience with a combination of Kinesis and Lambda:
“It was very efficient from a code complexity standpoint, a cost standpoint, and it also did extremely well in our A/B testing.”
Joe Emison, the technical co-founder of BuildFax and a contributor to The New Stack, expressed his admiration of the serverless approach in his interview with Forrest Brazeal:
“And if we take that view, and we say let’s optimize our organizations for these great front-end customer-facing experiences, we ask: how can we spend as little time and effort and money on the back end and still have it work and scale?”
The most obvious Lambda implementations are seen in analytics pipelines and Big Data (map-reduce problems, high-speed video transcoding, stock trade analysis, and compute-intensive Monte Carlo simulations for loan applications). Among other use cases, we can also point to the following:
-Media and log processing – Serverless approaches offer natural parallelism, making it easier to handle compute-heavy workloads without developing multithreaded systems or scaling compute fleets by hand.
-IoT backends – Lambda lets you bring almost any code, including native libraries, simplifying the development of cloud-based systems that include device-specific algorithms.
-Custom logic and data handling – This is used in on-premises appliances such as AWS Snowball Edge. Because they decouple business logic from the details of the execution environment, serverless applications can run in a wide variety of environments, including on a device.
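To make the media and log processing case concrete, here is a minimal sketch of a Lambda-style handler in Python. The event shape is deliberately simplified (a real S3 notification would point at objects to fetch rather than carrying the data inline), and all names here are hypothetical:

```python
import json

def handler(event, context):
    """Count ERROR lines in log payloads delivered via an event.

    In a real deployment the records would reference S3 objects to
    download; the body is inlined here to keep the sketch self-contained.
    """
    error_count = 0
    for record in event.get("Records", []):
        body = record.get("body", "")
        error_count += sum(1 for line in body.splitlines() if "ERROR" in line)
    return {"statusCode": 200, "body": json.dumps({"errors": error_count})}

# Invoking the handler locally with a sample event (context is unused)
sample_event = {"Records": [{"body": "INFO ok\nERROR boom\nERROR again"}]}
result = handler(sample_event, None)
```

Because each record is processed independently, the platform can fan the same handler out across many concurrent invocations, which is exactly the natural parallelism described above.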
Companies resort to cloud computing due to the more seamless process of building, deploying, and managing fleets of servers, as well as the applications that function on them. With the AWS Lambda service, cloud computing becomes even better. It tackles such issues as the complexity of dealing with servers and implements a pay-per-request billing model. This all leads to a simpler adoption of a microservices architecture, which in turn results in better agility, where there is no need to think about fleet management or idle servers.
Talking about services, we can’t fail to mention the different ways they can be wired together. Let’s say we have a UI, an API gateway, and a dozen services behind it. That alone is still not enough to build a working application: you have to interconnect the services somehow. Usually, there are three ways:
-Service Discovery (RPC Style) - services know about each other and communicate directly.
-Message Bus (Event-driven) - with the "pub-sub" pattern, the "publisher" does not know who is subscribed to it, and "subscribers" do not know where the content comes from; they are only interested in content of a certain type, so they subscribe to messages. This is called message-driven or event-triggered architecture.
-Hybrid - a mixed version: we use RPC for some cases and a message bus for others.
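The pub-sub idea can be illustrated with a minimal in-process message bus in Python. This is a toy stand-in for a real broker such as SNS or Kinesis, not a production bus; the class and topic names are invented for the example:

```python
from collections import defaultdict
from typing import Callable

class MessageBus:
    """Minimal in-process pub-sub: publishers and subscribers
    share only a topic name, never direct references to each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # The publisher has no idea who (if anyone) receives the message.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("order.created", received.append)
bus.publish("order.created", {"order_id": 42})
```

The point of the pattern is visible in the last two lines: the publisher only names a topic, and the subscriber only names a topic, so either side can be replaced without touching the other.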
In terms of Lambda execution, we’d like to discuss the second option: event-triggered architecture, which covers sync-, async-, and stream-based execution models and keeps no local state. Such decomposition gives us a greater division of labour, allowing more engineers to work on the system in areas of relative isolation.
To better understand how it functions, let’s imagine that we’re writing a user-centric application that has to answer UI requests synchronously and process them asynchronously, updating a database. It could be any element on a website that redirects the user to another webpage while letting us collect the user behavior data we need. Within a serverless architecture, this whole process runs in the event-triggered environment provided by the cloud platform vendor.
With this, there’s no need for a load balancer. The asynchronous event-driven workloads use a pull model: tasks to be performed or data to be processed are kept as messages in Amazon Simple Queue Service (SQS) or as streaming data in Amazon Kinesis, and numerous compute nodes then pull and process them in a distributed manner.
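The pull model described above can be sketched with plain Python threads and a standard-library queue standing in for SQS. This has no AWS dependencies; it only illustrates workers pulling messages at their own pace rather than a load balancer pushing requests at them:

```python
import queue
import threading

# Stand-in for SQS: a shared queue that workers pull from.
task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        msg = task_queue.get()
        if msg is None:              # sentinel: shut this worker down
            task_queue.task_done()
            break
        with results_lock:
            results.append(msg * 2)  # pretend "processing" of the message
        task_queue.task_done()

# Several compute nodes pulling from the same queue in parallel.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for i in range(10):
    task_queue.put(i)
for _ in threads:
    task_queue.put(None)
task_queue.join()
for t in threads:
    t.join()
```

Note that adding more workers requires no coordination with the producer, which is the property that lets a serverless platform scale consumers elastically.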
Also, we should remember how secure the whole system can be when single-purpose functions are applied; this dramatically reduces the attack surface. You just need to give each function exactly the permissions it needs. This is an important, but often undervalued, benefit of single-purpose functions.
As everybody knows, OOP patterns make our lives easier. They are not tied to a particular language or operating system; patterns simply streamline the whole development process and provide a rough outline for your future projects.
A serverless architecture is no exception. It also has its own models that have been tried and tested.
Below, you can see some patterns presented at the ServerlessConf 2018.
Now we’d like to talk about how to move from a monolith to serverless, if you have decided that this is really what you need. How do you divide it into parts?
First of all, ask yourself what business issue you are trying to solve by moving to serverless. Is slow feature delivery losing you customers and market opportunities? Are scalability and stability problems making your brand less competitive? Do you want to minimize the expenditure associated with running your system? Or do you want to reduce the ops overhead for your developers, to help them focus on creating business value?
Your next step is to find a piece that is bounded by business logic. Of course, for this you have to understand the monolith. For example, a good candidate for a separate service is a part of the monolith that requires frequent changes; extracting it brings immediate benefit, since you no longer have to retest the whole monolith so often. It is also worth extracting into a separate service anything that causes the most problems or does not work well.
When you divide a monolith into services, pay attention to how your teams are structured. After all, there is the empirical Conway’s law, which says that the structure of your application mirrors the structure of your organization. If your organization is built on technological hierarchies, it will be very difficult to build a microservice architecture. Therefore, you need feature teams that have all the necessary skills to write the needed logic from beginning to end.
Then it is time to think about AWS Lambda and its scope for composability, which in turn leads to beneficial choices. For example, you can implement pub-sub using Lambda with SNS, Kinesis Streams, or DynamoDB Streams. It is best not to hurry when implementing them: try creating proof-of-concepts to find out which of your ideas are worth implementing. Treat this as your playground, where you can learn quickly and make mistakes with minimal financial risk.
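As a small proof-of-concept sketch of the DynamoDB Streams option, here is a Lambda-style handler that reacts to stream records. The event below follows a simplified version of the standard DynamoDB Streams record format, and the attribute names are invented for the example:

```python
def stream_handler(event, context):
    """Collect the new image of every INSERT record in a
    DynamoDB Streams batch, decoding the typed attribute values."""
    inserted = []
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            # Stream attributes are type-tagged, e.g. {"S": "alice"};
            # unwrap the single tagged value for each attribute.
            inserted.append({k: list(v.values())[0] for k, v in new_image.items()})
    return inserted

# A sample batch: one insert and one modification.
sample = {"Records": [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"user": {"S": "alice"}, "clicks": {"N": "3"}}}},
    {"eventName": "MODIFY",
     "dynamodb": {"NewImage": {"user": {"S": "bob"}}}},
]}
```

Because the handler is a plain function over a dict, a proof-of-concept like this can be exercised locally long before anything is wired to a real table.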
After that, adopt continuous delivery and choose a deployment framework, e.g. the Serverless framework. Then think about test automation. The serverless paradigm has a different risk profile from its server counterpart, so you need to think through testing even more carefully. As the next step, build observability into the system and pay attention to security.
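Test automation can start with plain unit tests that invoke a handler locally with synthetic events, no deployed infrastructure required. The handler and event shape below are hypothetical, standing in for whatever function your system exposes:

```python
def redirect_handler(event, context):
    """Hypothetical handler: map a short-link slug to a redirect response."""
    links = {"promo": "https://example.com/promo"}
    slug = event.get("pathParameters", {}).get("slug", "")
    if slug in links:
        return {"statusCode": 302, "headers": {"Location": links[slug]}}
    return {"statusCode": 404}

# Unit tests invoke the handler directly with synthetic events.
def test_known_slug():
    resp = redirect_handler({"pathParameters": {"slug": "promo"}}, None)
    assert resp["statusCode"] == 302
    assert resp["headers"]["Location"] == "https://example.com/promo"

def test_unknown_slug():
    resp = redirect_handler({"pathParameters": {"slug": "missing"}}, None)
    assert resp["statusCode"] == 404

test_known_slug()
test_unknown_slug()
```

Tests of this kind cover the business logic cheaply; integration concerns (permissions, triggers, timeouts) still need separate coverage in a deployed environment, which is part of the different risk profile mentioned above.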
Implementing a serverless architecture doesn’t mean that there is no room for DevOps. Monitoring, deployment, security, networking, support, and often some amount of production debugging and system scaling are still there. Thus, there is room for a “Dev” person, while “Ops” responsibilities are redistributed within the team so as to focus more on the application and less on how to get it deployed.
This means that some of the DevOps tasks have become closer to code and the developer who creates the system code.
When using a serverless architecture, developers can focus on the core product without having to worry about provisioning and maintaining servers or runtime environments, whether in the cloud or locally. This saves time and effort that can instead be spent on building excellent products with high reliability and scalability.
However, all of this is just words without authority to back them up. Let’s turn to Tim Wagner, responsible for engineering at Coinbase, who deals with serverless architecture on a daily basis. According to him, the aspects of serverless architecture worth acknowledging are:
-FaaS is about running backend code without managing your own server systems or your own long-lived server applications
-Horizontal scaling is completely automatic, elastic, and managed by the provider
-Order-of-magnitude opex savings
-Better time to market
-Built-in security and governance
-Layer 7 beats layer 5 any day of the week
-Value-chain insights through per-request pricing
Solutions for serverless computing free developers from many routine management procedures and reduce operating costs. The spectrum of serverless solutions keeps growing; that is why FaaS platforms (function as a service, or fPaaS) such as AWS Lambda and Azure Functions exist and are used to great effect. The possibilities of serverless computing help drive the transition of agile IT companies to public clouds.
If you’re thinking about shifting your project from monolith to serverless architecture or you’ve just set up your business and are deciding on the best architectural solution for your project, feel free to approach the K&C team as we know how to make legacy code shine bright.