Test automation for a multi-tenant CRM

Developing the test pyramid for a multi-tenant CRM

Introduction

Modern, digitally progressive businesses and organisations need reliable, informative, trustworthy and reusable automated testing to achieve maximum user journey coverage across their software projects.

For many businesses, automated test development, maintenance and coverage are an Achilles heel because of the upfront costs of doing them properly.

However, an increasing number of companies and projects are now investing resources in test automation across the software development life cycle (SDLC) as a result of the tangible improvements to project quality it delivers.

Big applications practically have to rely on test automation. Testing manually would require large numbers of QA experts to verify every change and update, with escalating costs.

Investing upfront in a robust automated testing system means software development teams and other stakeholders can be confident in the quality and reliability of strategically important applications.

And while doing so frontloads costs compared to using manual QA engineers, it saves money over the longer term – both on testing itself and, potentially, by avoiding the costs that come with less reliable, lower-quality software systems.

Why K&C?

K&C’s software developers and QA engineers are highly skilled, mature professionals able to identify gaps, limitations and opportunities much faster than others.

K&C teams include full-stack and QA automation engineers able to develop, maintain and test a wide range of application types.

What we did step-by-step

  • Analysis of business needs and the development roadmap
  • Cost analysis of keeping the existing solution versus providing a new one, including support and maintenance costs
  • Analysis of the current technology stack and its scalability
  • Analysis and choice of popular, well-supported technologies and tools for the tech stack
  • Communication of findings to Informa, with a recommended course of action from the team leads
  • Preparation of development and testing plans once Informa had made its strategic decisions in light of the analysis and recommendations provided
  • Start of development and implementation
  • Iteration on development while maintaining, updating and improving the CRM

Case study background

Our client, Informa, is the largest events company in the world. The project we are working on with them is a scalable and modular white label CRM used by leading online, offline and hybrid conferences as well as other events, including festivals.

The application was initially built with a monolithic architecture over 6 years ago. The original team were experienced, skilled software developers, but the monolithic architecture was not scalable and could not be suitably adapted to current and planned future needs.

The original software development team also took on the QA role alongside development responsibilities and set up automated testing for the monolithic CRM. However, as the application’s features were consistently iterated on at the request of the business, an increasing number of bugs and other issues started to appear.

The strategic decision was taken to bring in a K&C software development team to migrate the legacy app from a monolithic to a microservices-based architecture. Initially, manual QA engineers were introduced into the team as a stand-alone role.

At a later date, the decision was taken to evolve the CRM into a SaaS to be licensed to the multiple event brands under the Informa umbrella. Alongside the architectural changes this required, the decision was taken to introduce a comprehensive automated testing system.

Without it, a manual regression of the application would have taken up to 2 weeks, followed by any required fixes and re-testing. This wasn’t an option for the CRM’s agile development approach.

The solution – an automated test pyramid

Our solution was to develop an automated test pyramid with the following components:

  • Unit/component tests
  • Service integration tests
  • Service API tests
  • Feature functional tests
  • Feature non-functional tests
  • End-to-end tests

The unit/component and service integration tests were developed by the software developers, and the remaining components by QA automation engineers.
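
As an illustration of the base of the pyramid, a unit test written with Jest might look like the minimal sketch below. The calculateDiscount function, its module path and the discount rule are hypothetical examples, not taken from the project’s codebase.

```typescript
// Hypothetical unit test sketch for the base of the test pyramid.
// calculateDiscount and './pricing' are illustrative names only.
import { calculateDiscount } from './pricing';

describe('calculateDiscount', () => {
  it('applies a 10% discount to orders above the threshold', () => {
    // Assumed rule for this sketch: orders over 100 get 10% off.
    expect(calculateDiscount(200)).toBeCloseTo(180);
  });

  it('leaves small orders unchanged', () => {
    expect(calculateDiscount(50)).toBe(50);
  });
});
```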

The tech stack

The technology stack used in this project includes but is not limited to:

Languages

Java & TypeScript

Frameworks

Spring & React.js

Test frameworks

Puppeteer, Jest, Percy, JUnit, Artillery, Lighthouse and Postman

Puppeteer, built by Google, provides a high-level API to control Google Chrome (or any other browser based on the Chrome DevTools Protocol).

Jest is a JavaScript testing framework developed by Meta Platforms (formerly Facebook).

Both are popular, well-supported and actively developed tools used across a wide range of applications that require browser automation.
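
As a minimal sketch of what controlling a browser through Puppeteer looks like (the URL and output file are placeholders, not the real CRM):

```typescript
// Minimal Puppeteer sketch: launch headless Chrome via the DevTools Protocol,
// open a page and capture a screenshot. The URL is a placeholder.
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });
  await page.screenshot({ path: 'example.png' });
  await browser.close();
})();
```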

The choice of these tools and technologies future-proofed the project and meant there was a deep pool of specialists from which to hire new team members while allowing for a high level of customisation.

Challenges

We ran into several problems with end-to-end test frameworks while setting up E2E automation, due to:

  • The frameworks’ levels of support and ongoing development
  • The legacy application’s reliance on locally hosted rather than cloud-hosted tools
  • Browser protocol restrictions and the protocols’ lack of maturity

We had to implement our solution without re-developing the existing application or adding new features to it.

Our options were also inherently limited – you do not want to add Python test frameworks to a Java/JavaScript codebase, for example.

Achievements

  • Our team consists of 2 product owners, 3 QA engineers and 16 developers – and we keep hiring
  • Bi-weekly releases with only smoke tests requiring manual intervention
  • 80% test coverage of the codebase (see the configuration sketch after this list for how such a target can be enforced)
  • 60% of hotfixes are due to business requests rather than system failures
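
For illustration only, a coverage target of this kind can be enforced through Jest’s coverage threshold settings. The sketch below assumes a jest.config.ts file and mirrors the 80% figure above; it is not the project’s actual configuration.

```typescript
// jest.config.ts (assumed file name) – sketch of enforcing a coverage target.
// The test run fails if global coverage drops below the thresholds.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```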

The Result

This approach has helped us improve the project’s quality and the efficiency of ongoing work: developers no longer take on the QA role and can focus solely on development, while QA automation engineers develop, maintain and optimise the automated tests.

The result was a sharp reduction in the amount of overtime required of the team and in the number of hotfixes and production issues. It also reduced the number of features released per sprint, but the quality of those releases increased, which attracted more users to the CRM.

The decision between Java and JavaScript test frameworks for end-to-end tests

We were choosing between Java and JavaScript test frameworks for end-to-end tests. First, we selected Selenium + Mocha + Chai as a great-value and well-supported stack for browser test automation.

We compared that stack to the Serenity BDD framework, which natively offers valuable features that would have had to be introduced to the Selenium + Mocha + Chai stack through third-party libraries or in-house development. Serenity BDD also demonstrated better performance, and we thought we had our test stack.

But we concluded Serenity was not the best option for our context, as we weren’t going to use BDD, so we went back to the drawing board.

We then assessed CodeceptJS, which provides an abstraction over multiple browser protocols such as DevTools and WebDriver and supports TypeScript out of the box. However, we concluded it still lacked features we required, which we would have had to develop ourselves, and the tool offers limited support.

Finally, we settled on the same combination of Puppeteer + Jest + Java/TypeScript we were already using in other tests. This resulted in a clearer and more consistent test suite using the same tech stack across development and maintenance.
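
As a hedged sketch of what an end-to-end test in this combination can look like, assuming a hypothetical login page, selectors and credentials rather than the CRM’s real ones:

```typescript
// Sketch of a Jest test driving Puppeteer end to end.
// URL, selectors and credentials are hypothetical placeholders.
import puppeteer, { Browser, Page } from 'puppeteer';

let browser: Browser;
let page: Page;

beforeAll(async () => {
  browser = await puppeteer.launch({ headless: true });
  page = await browser.newPage();
});

afterAll(async () => {
  await browser.close();
});

describe('login flow', () => {
  it('shows the dashboard after a successful login', async () => {
    await page.goto('https://crm.example.com/login', { waitUntil: 'networkidle0' });
    await page.type('#email', 'qa-user@example.com');
    await page.type('#password', 'example-password');
    await Promise.all([
      page.waitForNavigation({ waitUntil: 'networkidle0' }),
      page.click('button[type="submit"]'),
    ]);
    expect(page.url()).toContain('/dashboard');
  }, 30_000); // generous timeout for slower test environments
});
```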

What Informa said

“Without the K&C team’s re-thinking of the project architecture and test automation across the different stages of the SDLC, the project might have reached a limit, as taking a new feature to release might take 2-3x its development time.”

“Initially, our CRM had only 1 tenant and now we have 11 with unique styles and feature sets and the number keeps growing quarterly!”

Do you want to know more or just get in touch?

Fill in the form below and we will get back to you within 24 hours.

"*" indicates required fields

Full Name*

Featured case studies