In our previous articles about software development team performance, we talked about alignment and productivity.
Today, let us talk about quality.
Quality is a critical pillar of software development and measuring the performance of your team against this criterion is fundamental.
Quality consists of five dimensions: reliability, performance, scalability, supportability, and ergonomics/ease of use.
Quality is clearly a topic in its own right, with its own best practices and organizational challenges. We covered it in a dedicated article, "Software testing best practices", which became the most visited post on our blog.
So the idea today is not to address the entire broad topic of quality management; instead, we will focus on how to measure the performance of your team against the five dimensions of quality as they relate to software development.
Reliability

Measuring the reliability of an application can be pretty straightforward, even if comparisons are not always easy because this information is usually kept confidential inside companies.
Basically, the best indicator is the number of open bugs and its evolution over time. Of course, the fewer bugs the better. But be careful: a lack of bugs may indicate insufficient testing, or that the application is not being used (except, of course, when you reach this state over the long term). And in general, few bugs means that little has changed in the software.
What is really important to monitor over time when delivering software is:
· The trend in the number of bugs: it should increase when you deliver a new version of your software and decrease regularly from there; then,
· The split in importance of bugs: critical, major or minor,
· The consistency of the bug / line of code ratio when delivering new software.
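The three indicators above are easy to compute from any bug tracker export. As a minimal sketch (the bug records, severities and line counts here are purely illustrative assumptions, not data from the article):

```python
from collections import Counter
from datetime import date

# Hypothetical bug records as a tracker might export them: (date reported, severity).
bugs = [
    (date(2024, 1, 10), "critical"),
    (date(2024, 1, 12), "major"),
    (date(2024, 1, 15), "minor"),
    (date(2024, 2, 2), "major"),
    (date(2024, 2, 20), "minor"),
]

def severity_split(bug_list):
    """Split of open bugs by importance: critical, major or minor."""
    return Counter(severity for _, severity in bug_list)

def bugs_per_kloc(bug_count, lines_of_code):
    """Bug density: bugs per thousand lines of code delivered."""
    return 1000 * bug_count / lines_of_code

print(severity_split(bugs))
print(bugs_per_kloc(len(bugs), 25_000))  # 0.2 bugs per KLOC for this sample
```

Tracking `bugs_per_kloc` across successive releases is what reveals the consistency (or drift) of the bug / line of code ratio over time.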
An important indicator, which we will come back to later on, is the time needed to correct a bug. If fixing a significant number of bugs takes a lot of time, it may be a sign of bad design, and those situations are likely to have an impact on reliability.
The consequences of a lack of reliability can be disastrous, especially for software publishers. Unreliability can dramatically increase the support cost of an application. There are real stories of vendors that went under, or lost significant business, because of the unreliability of a specific version of their software.
Performance

We are all software users. Which of us has never grumbled at a slow application or an unresponsive website?
It is obvious when we are on the user side, but some developers tend to minimize the importance of implementing a responsive user interface. In the end, whatever the technical reason behind it or the work that is done by the software, users won’t use a solution that is too slow.
Consequently, when coding, you need to choose technical options that provide a good performance experience for your users. Of course, this does not mean doing anything and everything for the sake of performance, or everything would be written in assembly language, but it is important to always optimize the components that have a strong impact on performance: remote connections, data access, manipulation of the appropriate reduced set of data, and usage of loops (especially nested loops).
If some data processing requires a lot of time, you need to think about asynchronous mechanisms, splitting data or using other design options to avoid a bad performance experience for your users.
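Splitting data and processing it off the main thread can be sketched in a few lines. This is a minimal illustration, not a prescription; the chunk size, worker count and workload are arbitrary assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size):
    """Split a large workload into smaller batches."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_batch(batch):
    # Placeholder for real work (data access, transformation, ...).
    return sum(batch)

data = list(range(10_000))

# Offload the heavy work to background threads so the caller (e.g. a UI
# event loop) stays responsive, and split it into chunks so progress can
# be reported between batches.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_batch, chunked(data, 1_000)))

total = sum(partials)
print(total)  # 49995000
```

The point is the structure, not the arithmetic: the user-facing thread never blocks on the full dataset, and each completed batch is an opportunity to update a progress indicator.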
The consequence of poor performance is user rejection of the application, no matter how well its features are implemented. If it is an internal application, poor performance may cause conflict within the company. If it is a product you sell… then you won’t sell it!
Scalability

People tend to confuse performance with scalability. These are two different aspects of quality, and sometimes they require contradictory design options.
Scalability is the ability for your application to grow without limit, either in the number of users it supports or in the volume of data it can manage. For example, Access is a very well performing DBMS, but it is not scalable; it doesn’t work well with many users.
With the emergence of the Internet, SaaS and cloud applications, scalability has often become more important than performance. And it is, most of the time, a technically challenging issue if you have to deal with a large number of users sharing the same data.
With the diminishing cost of hardware, multiplying machines is clearly less costly than over-optimizing performance with developer time. But simply adding machines does not work on its own; you need to design your application so the work can be split across those machines while still guaranteeing consistency.
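One common way of splitting work between machines is to route each record to a shard by hashing its key, so every node computes the same placement without a coordinator. A minimal sketch (the shard count and user names are illustrative assumptions):

```python
import hashlib

def shard_for(key, shard_count):
    """Route a record to one of several machines by hashing its key.
    Hashing is deterministic, so every node agrees on the placement."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % shard_count

SHARDS = 4
users = ["alice", "bob", "carol", "dave", "erin"]
placement = {user: shard_for(user, SHARDS) for user in users}
print(placement)
```

Note the design trade-off: plain modulo hashing reshuffles most keys whenever the shard count changes, which is why production systems often prefer consistent hashing, where adding a machine only moves a fraction of the keys.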
Measuring the scalability of your applications is a rather costly process. It requires setting up complete bench platforms dedicated to the process of evaluating your application under a heavy load of users/data.
It also requires the appropriate tools and skills to correctly interpret the measures and be able to assess the elements you need depending on the growth of your user base. This process is known as "capacity planning".
A lack of scalability can cause a major business disruption. If the application hits its limits, it cannot take on new users. And there is usually no short-term solution if the application has not been designed with scalability in mind.
Supportability

Supportability is the ability of an application to be correctly operated in production.
One key element is the proper management of exceptions and errors, as mentioned in our article "Code instrumentation best practices".
A production-aware application should also be able to deal with non-forecasted events such as network failures, bandwidth drops or hardware issues.
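A classic way to cope with transient events such as network failures is to retry with exponential backoff instead of failing on the first error. Here is a minimal sketch; the attempt count, delays and the simulated flaky dependency are all illustrative assumptions:

```python
import random
import time

def call_with_retry(operation, attempts=4, base_delay=0.5):
    """Retry a flaky operation (e.g. a network call) with exponential
    backoff, re-raising the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return operation()
        except OSError:  # network-style failures
            if attempt == attempts - 1:
                raise
            # Double the delay each attempt, with jitter to avoid
            # synchronized retry storms across clients.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection reset")
    return "ok"

print(call_with_retry(flaky, base_delay=0.01))  # ok
```

The important property is that the failure path is designed rather than accidental: the application degrades gracefully under a network hiccup instead of crashing or hanging.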
To measure results on this dimension of quality, we recommend tracking the average time to correct a bug, which involves:
· finding what happened,
· finding the piece of code concerned,
· being able to get the data used in the scenario that caused the error,
· identifying the non-forecasted scenario,
· proposing a valid correction for the code.
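The average time to correct a bug is straightforward to compute from tracker timestamps. As a minimal sketch (the ticket timestamps below are invented for illustration):

```python
from datetime import datetime

# Hypothetical (reported, fixed) timestamp pairs from a bug tracker.
tickets = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 17, 0)),   # 8 hours
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 4, 10, 0)),  # 48 hours
    (datetime(2024, 3, 5, 8, 0), datetime(2024, 3, 5, 12, 0)),   # 4 hours
]

def mean_time_to_fix_hours(pairs):
    """Average elapsed time, in hours, between report and fix."""
    total_seconds = sum((fixed - reported).total_seconds()
                        for reported, fixed in pairs)
    return total_seconds / len(pairs) / 3600

print(mean_time_to_fix_hours(tickets))  # 20.0
```

Tracked release after release, a rising average is an early warning that the steps above (reproducing, locating, fixing) are getting harder, often a sign of degrading supportability.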
The speed at which you can reproduce issues is also a key indicator for an application’s supportability.
Thinking about disaster recovery and how you can restore the application in case of major events is also part of this dimension.
The consequence of an application with low supportability is the risk of downtime. Beyond the induced costs and the damage to your image, a discontinuity of service can impact business revenue while the system is down.
Ergonomics and ease of use
Ergonomics and ease of use is also a major pillar of quality.
Measuring this dimension and making comparisons is not so easy though. People tend to either underestimate the importance of ergonomics, or – on the other hand – expect every application to have the same ease of use as major consumer web sites, which are developed with tens of millions of dollars.
Some applications carry intrinsic business complexity that requires very specific knowledge, making it a real challenge to keep usage simple.
The best indicator of ergonomics is the ability to use and learn an application quickly. If possible, we recommend measuring this for new users. How much time does it require to get them on board using the application? If a long time is needed, how much is intrinsic to the complexity of the application and how much is related to the ergonomics and design?
It is better to measure this on new users than on users accustomed to other systems, whose judgment may be biased by habit and some reluctance to change.
Increased ease of use will lower training costs, improve the productivity of your users and ease the acceptance of the application.
As with performance, a lack of ergonomics may result in user non-acceptance of the application and extra hidden costs.
In conclusion, when measuring the quality of your software development, don’t forget to assess all five dimensions. They all contribute to making your software successful.
As always, feel free to comment and share your own experiences.