In our previous article on serverless architecture in web development we discussed why we believe serverless to be the future of cloud native development. The focus was undeniably on the strengths of serverless architecture. In this instalment of our Serverless Series, we’ll add more balance to the picture by also outlining serverless cons and the circumstances under which it may not be the optimal approach for your next app.
Of course, no technology or architecture is the perfect solution for all circumstances. The real question is whether the perceived weaknesses of serverless web development can be mitigated so that they do not drag on the technology solution or business case to the extent that its strengths are compromised.
We’ll also apply the theory of serverless web development’s pros and cons to example applications. This will illustrate the conditions under which the balance of serverless’s pros and cons make it a great choice of tech stack and those where it probably isn’t the optimal choice.
In the interest of balancing out the slightly gushing pro-serverless position in our previous article, let’s kick off with drawbacks to serverless web development this time around:
So, what are the possible concerns and drawbacks of taking a serverless development approach?
Discussions with our own architects and clients around whether serverless is the right way to go with a new development project often see concerns raised around vendor lock-in. There is a perception that once the serverless architecture of an application is set up with one Cloud vendor (GCP, AWS and Azure being the usual options), it is very difficult (expensive and time consuming) to migrate to another if circumstances change.
In reality, and given the right approach from a new application project’s outset, vendor lock-in doesn’t need to be a serverless development drawback. At least, not for most applications. Migration between vendors can be unavoidably complicated for really big applications.
For example, imagine you are designing a web application with the following functionality:

- user identification (registration and sign-in)
- data storage
- notifications
- payments
- the application’s unique ‘core’ business logic
Let’s compare the tech stacks a traditional web development or serverless development approach would require.
Traditional web development would typically require:

- custom coding and configuration of user identification
- a self-managed database for data storage
- custom-built notification and payment integrations
- servers to provision, configure and maintain
Serverless web development with AWS might instead use:

- Amazon Cognito for user identification
- Amazon DynamoDB for data storage
- managed AWS services for notifications and payments
- AWS Lambda functions for the application’s core logic
Many different applications need user identification, data storage, notifications and payments. Beyond that, only the ‘core’ of the application could be considered ‘unique’.
Traditional web development necessitates custom configuration and coding of user identification, data storage, notifications and payments. Any changes made to improve, evolve or fix problems in the app therefore necessitate a new software development iteration cycle, making every change resource (time and money) intensive.
Serverless web development, on the other hand, allows you to use ‘plug-and-play’ technologies for the common functionalities the app involves – user identification, payments etc. The AWS tools listed above (Cognito, DynamoDB etc.) just need to be configured and can then quickly and easily be changed between test and production environments.
This means serverless development should save a lot of time and money both in the initial development stage and when any subsequent changes or updates need to be introduced.
But how does the above relate to ‘vendor lock-in’ concerns around serverless development? Let’s say you want to move your application from AWS to Google Cloud. Several AWS technologies have been used in your application, which was great while you were on the AWS Cloud, but it’s going to be a problem now, right? Yes. Switching them out for Google Cloud equivalents will be a pain, and that is the crux of the vendor lock-in criticism of serverless development.
But that doesn’t need to be the case. If the Serverless Framework is adopted from the get-go, a serverless application can be built to be ‘Cloud-vendor agnostic’. The framework lets you define your serverless architecture in one common configuration file, in which you need only change the name of the Cloud vendor to switch AWS technologies for Google Cloud (or any other major vendor’s) equivalents. Nothing else needs to be touched and your app will work exactly as before in its new Cloud home.
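As an illustrative sketch, a minimal Serverless Framework configuration file (`serverless.yml`) might look like the fragment below. The service and function names are hypothetical, and in practice vendor-specific resources still need equivalents on the target Cloud, but the `provider` block is where the vendor is declared:

```yaml
# serverless.yml - hypothetical service; names are illustrative
service: gift-shop-api

provider:
  name: aws          # the Cloud vendor is declared here
  runtime: python3.9
  region: eu-central-1

functions:
  checkout:
    handler: handler.checkout   # Python function handling the request
    events:
      - http:
          path: /checkout
          method: post
```

Deploying is then a single command (`serverless deploy`), which provisions the declared functions and events on the configured vendor.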
Serverless development done right should mean that migrating between Cloud vendors is as easy as switching mobile operators while keeping your old number has become in recent years. The tooling that supports serverless development is maturing quickly, and obvious weaknesses such as vendor lock-in are being addressed. Businesses are increasingly convinced that the major cons of the serverless tech stack are being neutralised, leaving its strengths uncompromised.
Patrick Brandt, Solution Architect at The Coca-Cola Company, recently stated:
“The Serverless Framework is a core component of The Coca-Cola Company’s initiative to reduce IT operations costs and deploy services faster”.
Too positive? Are we skating over drawbacks to serverless? From my point of view, there is only one scenario in which vendor lock-in should dissuade you from adopting serverless development for your next project: when the components you need for common functionalities require unique code over which full control is non-negotiable.
Another argument often made against a Serverless development approach to new applications is unpredictable computing costs. I’ve heard a number of times that Cloud resources can be expensive and that the user has no control over costs.
That is partially true. Traditional development does mean computing resource overheads can be accurately forecast. A business knows exactly how many servers will be needed for an application, where they’ll be located etc. Budgeting is easy.
If you opt for a Cloud Serverless environment, you receive the bill at the end of the month and it can be difficult to predict the exact cost. A sting in the tail is possible. This lack of control over overheads is often what discourages companies from investing in Serverless technology.
From a business perspective, not being able to accurately control or predict costs can be a deal-breaker. Could that be the bottleneck that prevents Serverless development from living up to the current hype?
I don’t think so. Firstly, accurately predicting Cloud resource costs for serverless applications really isn’t that difficult if you know what you are doing. You only need to define exactly what Cloud resources your app will use and how these fit into a vendor’s pricing structure. Admittedly, you may not be able to accurately predict demand for your application and its usage levels. If it goes viral, will you be hit with a Cloud vendor invoice that could kill your company?
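To make that concrete, here is a back-of-the-envelope sketch of such an estimate for a function-as-a-service workload. The pricing defaults are illustrative assumptions in the style of published per-request and per-GB-second rates, not a quote of any vendor’s current prices — always check the vendor’s own pricing page:

```python
# Back-of-the-envelope serverless cost estimate.
# Default prices are illustrative assumptions, not current vendor rates.

def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb,
                          price_per_million_requests=0.20,
                          price_per_gb_second=0.0000166667):
    """Estimate a month's function cost from expected usage."""
    request_cost = invocations / 1_000_000 * price_per_million_requests
    # Compute cost is billed on memory-time: GB allocated x seconds run.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * price_per_gb_second
    return request_cost + compute_cost

# Example: 5 million requests a month, 200 ms average, 512 MB functions.
print(f"${estimate_monthly_cost(5_000_000, 200, 512):.2f}")  # roughly $9.33
```

The useful property is that cost scales linearly with usage: once you know the resources a single request consumes, doubling traffic roughly doubles the bill, which is exactly what makes forecasting tractable.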
It is a consideration, but not one that, in the vast majority of cases, will really influence whether Serverless is the appropriate technology. In fact, start-ups often favour Serverless exactly because costs are back-loaded: running an app is very cheap until it has a large number of users, at which point the additional costs should be justifiable. This also makes Serverless an ideal architecture for MVPs and new products.
Secondly, if an application monetises directly, revenues should scale with Cloud resource costs if demand spikes. If an application doesn’t monetise directly, presumably it adds another kind of business value that indirectly represents financial gain for the company.
There may be scenarios where unexpectedly high Cloud resource costs could negatively impact a business’s cash flow despite positives of higher than anticipated demand for an application. But it should be clear from the outset if there is any chance of such a scenario unfolding. There may be other solutions than simply rejecting Serverless and its strengths as a technology stack.
In the majority of scenarios, an application maintaining consistent performance during demand peaks will be the overriding business consideration. Have you ever left a portal because it’s slow or has crashed during periods of peak use? I did exactly that last week when buying a gift for a relative.
Three e-markets were offering the same product at the same price. Two were significantly slower than the third (filtering took 2–4 seconds longer). Perhaps the slower applications were simply the result of an inferior build. But even with comparable code, how efficiently can they scale to meet demand?
If you’re hard-wiring server capacity, how do you know what resources peak demand might require? The chances are your servers will rarely be close to optimal capacity. Either they offer too much capacity, which you’ve paid for and which sits idle 90% of the time, or not enough at peak moments, slowing down or crashing and losing you business.
With Serverless, you don’t need to ‘hard plan’ capacity. It will scale seamlessly to meet demand. There are many ways you might lose business, but server capacity failing to match demand is not one of them.
Serverless is a particularly good fit if you don’t really know what demand might be put on an application. You only pay for what you use, allowing you to feel things out. That doesn’t mean cost planning isn’t important in Serverless: component costs should be diligently researched and the technology optimised, covering data query planning, Lambda memory allocation and execution-time budgeting.
In conclusion, if your application is mature and demand trends and server capacity requirements accurately predictable for the long term, Serverless may not be the cheapest option available to you. Opting for your own fixed server resource might make sense. But even in this scenario, a hybrid Cloud solution able to scale with any unexpected peaks in demand is still worth considering.
I agree that migrating an existing architecture to a Serverless architecture or hybrid solution can represent a challenging epic. However, in my experience, the crux of the problem tends to be reliance on developers who lack the relevant expertise. Transitioning an organisation to the Cloud requires investment in new skills. That might mean providing training for in-house development professionals or bringing in experienced outside help.
One of the fundamental differences between Serverless development and traditional development is that Serverless developers need to consider and be able to accurately calculate the costs associated with how they have built an application. How much will the technology components used, database requests, computing time and performance cost? Are those costs compatible with the application’s business case and plan? Traditional web developers don’t have to worry about these questions. It’s not their job.
For me personally, as a developer who has transitioned from traditional to Serverless development, this was one of the most difficult evolutions in the nature of the job to get to grips with. An organisational transition to Serverless, either entirely or for certain applications, should take this into consideration. Developers need to be re-educated that their job now involves managing an application’s running costs within the context of its business case.
Let’s sum up the business considerations and technical qualities that, broadly speaking, mean an application would generally benefit from going Serverless:

- its common functionalities (user identification, data storage, notifications, payments) can be covered by ‘plug-and-play’ managed components
- demand is hard to predict or prone to sharp peaks, so seamless scaling matters
- back-loaded costs suit the business case, as with MVPs, new products and start-ups
- fast, low-cost iteration on changes and updates is a priority
When Serverless is probably not the optimal technology stack for an application:

- the application is mature, and demand trends and server capacity requirements are accurately predictable for the long term, making fixed server resources potentially cheaper
- the components needed for common functionalities require unique code over which full control is non-negotiable
- the organisation cannot invest in the new skills that Cloud and Serverless development demand
In our next article on Serverless development, we’ll outline the strengths and benefits of the common ‘plug and play’ components offered by AWS.
Munich-based Krusche & Company has established itself as one of Germany’s most trusted development and IT consulting firms over more than 20 years of operations. Specialising in DevOps and Cloud transformation and development, we provide consultants, dedicated teams and team extensions. Our extensive list of partners ranges from some of Europe’s best known brands to high-performing SMEs and exciting start-ups.
If you are considering a Serverless approach to your next project or a wider DevOps, Cloud or Serverless organisational transition, please do not hesitate to get in touch. We’d love to hear from you.