The V-model (also referred to as the V model) is a software development and testing methodology that is generally considered a variant of the traditional Waterfall approach. In contrast to the standard Waterfall methodology’s life cycle phases, which cascade one after the other in a simple linear development process, the V-model is characterised by a testing phase running in parallel to each development phase.
The v-shaped diagram depicts how the Waterfall methodology evolved to incorporate more clearly defined testing phases. The V-model’s ‘v’ also stands for the extended ‘Verification and Validation model’, an alternative name for the methodology.
The book Developing and Managing Embedded Systems and Products – Methods, Techniques, Tools, Processes, and Teamwork by Kim R. Fowler and Craig L. Silver defines the V-model’s verification and validation components as follows:
“Verification is an objective set of tests to confirm that the product meets the metrics of the requirements, while validation seeks to demonstrate that the product meets the original intent.”
Diagrammatically, the steps within the V-model in both the verification and validation stages of the process flow upwards from the coding phase, forming a V.
The vertical axis of the V-model diagram represents each phase’s level of abstraction from the final software product: requirements analysis, the coarsest-grained abstraction, sits at the top of the V, with coding/implementation at the base. The horizontal axis represents the project’s timeline, or completeness, running left to right, with requirements analysis ‘furthest away’ from the implemented software.
Like every good superhero, every good software development methodology or framework has its own origin story. The V-model’s, says Tim Weilkiens, in his 2007 book Systems Engineering with SysML/UML, is that it was originally developed for the German state as a methodology for planning and implementing public sector system development projects.
The German influence is apparent in the V-model’s opinionated approach, which systematises the software development life cycle in a way that parallels traditional systems engineering. The first release, V-Model 97, was updated to the current V-Model XT (Extreme Tailoring) in 2004, after a consensus was reached that, seven years on, the original methodology no longer reflected the state of the art in projects and could not support the most recent techniques and methods.
The V-Model XT is a toolbox consisting of defined roles, products, and activities that can be adapted to a specific project. Rules ensure that the tailored approach remains logical and consistent.
In a traditional approach to the V-model, the SDLC phases are those of the Waterfall model:

- Requirements analysis
- System/functionalities design
- Architecture design
- Component design
- Coding/implementation
Coding sits at the base of the V as the last phase of the development cycle, before the testing cycle starts with component testing.
A brief description of the four pre-coding development phases, followed by the coding phase itself:
The first step of any software development project, regardless of methodology or framework, is to define what the software should do – its utility. The scope of that utility is decided by bringing together all of the stakeholders including users, the app’s sponsors/owners and any other relevant party its use will impact.
Techniques used might include interviews, other forms of market and user research and, in the case of professional software, analysis of how the software might affect other individuals, processes, revenues or expenses within an organisation.
Requirements are then defined based on the information gathered and formalised in a high-level requirements document the final product must correspond to.
The requirements analysis then informs the design of the proposed software system at the functionality level. Required functionalities, the user-interface elements through which they will be accessed, high-level user stories, workflows and data structures are all decided upon at this stage.
The system test plan and documentation are prepared during the system/functionalities design phase, allowing for more time for its execution at later stages.
Following on from the system/functionalities design phase is the architecture design phase. This stage usually sees a number of potential technical approaches proposed, with decisions taken based on their technical and financial pros and cons.
A high-level tech stack has probably already been decided on during the requirements analysis or system design phases, but details like database technology and hosting specifics (e.g. public cloud, private cloud, hybrid, which provider etc.) will probably be firmed up during the architecture design phase.
That information also allows for the design and documentation of integration testing at this stage.
The low-level, granular details of specific components are decided at the component design stage of the software development process. Each functionality, and the components it consists of, is described in detail, including how they all fit together. Back-end components like the API and database tables are also documented at a granular level.
The API interface specification and detailed component descriptions mean component tests can also now be created.
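As an illustration of the kind of contract a component design document might fix before coding starts, the sketch below uses Python’s `typing.Protocol` to express a hypothetical `OrderRepository` interface (the names and methods are invented for this example, not taken from any real specification):

```python
from typing import Protocol, runtime_checkable


# Hypothetical component interface, as a component design document
# might specify it: the contract is agreed before coding begins,
# so component tests can be written against it in advance.
@runtime_checkable
class OrderRepository(Protocol):
    def get_total(self, order_id: int) -> float:
        """Return the order total; raise KeyError for unknown IDs."""
        ...


# A concrete implementation, written later in the coding phase,
# must conform to the contract fixed at design time.
class InMemoryOrderRepository:
    def __init__(self) -> None:
        self._totals: dict[int, float] = {}

    def add_order(self, order_id: int, total: float) -> None:
        self._totals[order_id] = total

    def get_total(self, order_id: int) -> float:
        return self._totals[order_id]
```

Because the interface is pinned down at design time, component tests can target `OrderRepository` without waiting for any concrete implementation to exist.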
In this phase, the actual code is written: each detail of each component, and everything needed to bind them together into the functionalities of a complex software system. Front-end and back-end software developers will use the coding languages and the specific frameworks or libraries previously decided upon for the project.
All the specifications determined in the earlier stages of the development process are brought to life by code and software development tools.
A brief description of the four testing phases that correspond to the development phases:
Component tests are designed to verify that the smallest components that make up a working software system function as intended. Depending on the coding language being used, components may be defined as modules, units or classes; if defined as units or classes, this phase will be referred to as unit or class testing rather than component testing.
Component tests verify that the output of a component matches what would be expected from a particular input, based on the specifications. Tests must verify single components in isolation, independent of the dependencies or interfaces of other components, to better pinpoint bugs or other defects.
Component testing is usually automated via unit testing frameworks like PyUnit, JUnit, Jest, Jasmine etc.
Component tests can also be used to verify non-functional aspects like architecture efficiency (storage consumption, memory consumption etc.) and maintainability aspects, including code complexity and documentation quality, as well as functionality.
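A minimal sketch of a component test that isolates its dependency, using Python’s built-in unittest and unittest.mock (the PriceCalculator component and its tax-rate service are hypothetical, invented for illustration):

```python
import unittest
from unittest.mock import Mock


# Hypothetical component under test: a price calculator that
# depends on an external tax-rate service.
class PriceCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def gross_price(self, net_price: float) -> float:
        rate = self.tax_service.rate_for("default")
        return round(net_price * (1 + rate), 2)


class PriceCalculatorTest(unittest.TestCase):
    def test_gross_price_applies_tax_rate(self):
        # The dependency is replaced with a mock, so the component
        # is verified in isolation: a failure points at the
        # calculator itself, not at the tax service.
        tax_service = Mock()
        tax_service.rate_for.return_value = 0.19
        calculator = PriceCalculator(tax_service)
        self.assertEqual(calculator.gross_price(100.0), 119.0)
```

Run with `python -m unittest`. Replacing the tax service with a mock is what makes this a component test in the V-model sense: only the calculator’s own logic is under verification.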
After components have been tested and verified in isolation, integration tests verify correct functioning and communication between them. This phase is associated with the architecture design phase. Integration testing is used to verify how groups of components or broader subsystems within a software system work together.
Integration tests can be developed based on the functional specification, the system architecture, use cases, or workflow descriptions.
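To show the contrast with component testing, the sketch below wires two hypothetical components together with no mocks, so the test exercises their real cooperation (the repository and registration service are invented examples, not from any real codebase):

```python
import unittest


# Hypothetical component: an in-memory store of registered users.
class InMemoryUserRepository:
    def __init__(self):
        self._users = set()

    def save(self, email: str) -> None:
        self._users.add(email)

    def exists(self, email: str) -> bool:
        return email in self._users


# Hypothetical component: a registration service depending on the repository.
class RegistrationService:
    def __init__(self, repository):
        self.repository = repository

    def register(self, email: str) -> bool:
        if self.repository.exists(email):
            return False  # duplicate registrations are rejected
        self.repository.save(email)
        return True


class RegistrationIntegrationTest(unittest.TestCase):
    def test_components_cooperate(self):
        # Unlike a component test, no dependency is mocked out:
        # the real repository and service run together, so the test
        # verifies their communication, not either one in isolation.
        service = RegistrationService(InMemoryUserRepository())
        self.assertTrue(service.register("a@example.com"))
        self.assertFalse(service.register("a@example.com"))
```

A failure here, with both component test suites passing, points at how the pieces fit together rather than at either piece alone, which is exactly what this phase of the V is meant to uncover.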
The testing phase associated with the system/functionalities design phase, system tests verify the functionality of the entire software system and its communication with the external systems (e.g. browsers, hardware) it will run on. Any compatibility issues should be uncovered here.
At the highest level of abstraction are acceptance tests, which correspond to the requirements analysis phase. Stakeholders perhaps not directly involved in the later phases of the development process, like business owners and end users, may again be involved at this stage.
Testing will usually take place in the user environment and is designed to verify the software meets the high-level business requirements as well as non-functional performance such as load speeds, UX quality etc.
Most acceptance tests will be manual rather than automated, or at least involve a significant manual element.
The advantages of the V-model are its simple structure, which makes it easy to understand and apply, and its greater emphasis on testing compared to the standard Waterfall methodology. Linear software development methodologies have largely been replaced by iterative, flexible approaches, but there can still be a place for something like the V-model in simple, usually fixed-price, software development projects with well-defined requirements.
The V-model has largely fallen out of common usage as a methodology in contemporary software development for the same reason as the Waterfall approach it is an evolution of – it implies that a software system’s requirements are fully decided on and detailed in the conceptual or preliminary stage of the life cycle.
Especially for more complex software systems, that’s usually not the case. Requirements, design, and evaluation often go through several iterations before final integration and acceptance and for many software products iteration is a constant cycle.
That contemporary reality is why the various frameworks that fall under the umbrella of agile development methodologies now dominate software development.
The V-model’s weaknesses can be summed up as:

- Requirements must be fully decided on and detailed upfront, leaving little room for change
- A poor fit for complex or evolving software systems, whose requirements, design and evaluation typically go through several iterations
- Less flexibility than the iterative, agile approaches that now dominate software development