Software Quality Assessment based on Lifecycle Expectations


The SQALE Definition Document has been downloaded over 10,000 times!

The SQALE method just passed an important milestone. Indeed, since the launch of the method on the sqale.org site in August 2010, over 10,000 people have downloaded the definition document. This is quite impressive considering the technical (and tedious) nature of the document. Of the 22,000 site visitors, nearly half have downloaded it.

It is impossible to know the exact number of current users. The method is now supported in the open-source version of SonarQube (the most used static analysis tool according to a recent survey). Today, hundreds of thousands of developers monitor SQALE indicators in their daily quality dashboard. This makes SQALE the number one method for managing technical debt.

The SQALE Method is used worldwide, but it is impossible to know the exact geographical distribution of its users. The geographical distribution of site visitors probably mirrors the user distribution quite well. According to the sqale.org web statistics, the majority of visitors are located in the USA. The table below shows the detailed distribution of site visitors during the last month.

The SQALE Debt Map: How to use it

I previously explained the use of the SQALE Pyramid here. In this post, I will explain how to use the Debt Map indicator.

We can produce this indicator at two levels:
• The first level is the Project level. In this case each point of the map is a file.
• The second level is the Portfolio level. In this case each point is an application.

We’ll see how to use this indicator in each case.

The Project level Debt Map
In this case, each file of the application is placed on the graph according to two measures:
• X is the total amount of Technical Debt of the file: the estimated time required to fix all identified non-conformities. The higher this value, the more time will be needed to get a “right” file.
• Y is the cumulated “Non-Remediation Cost” of all non-conformities identified in the file. The concept of “Non-Remediation Cost” was presented and explained here. To summarize, it represents the business impact of the non-conformities. The higher the value, the greater the risk incurred if the file is delivered as is.
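As a sketch, the placement logic amounts to reading two numbers per file. The file names and figures below are hypothetical, chosen only to echo the shape of Figure 1:

```python
# Place each file on the Debt Map:
#   X = Technical Debt (remediation effort, in days)
#   Y = cumulated Non-Remediation Cost of the file's issues
files = {
    "File 1": {"debt": 10, "nrc": 100},
    "File 2": {"debt": 50, "nrc": 100},   # ~5x the debt of File 1
    "File 3": {"debt": 50, "nrc": 1000},  # same debt, 10x the risk of File 2
    "File 4": {"debt": 5, "nrc": 800},    # little debt, high potential damage
}

debt_map = [(name, f["debt"], f["nrc"]) for name, f in files.items()]
for name, x, y in debt_map:
    print(f"{name}: X = {x} days, Y = {y}")
```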

Figure 1: SQALE Debt Map at file level

This graph allows you to quickly analyze all the files of the application. If we take the example of Figure 1:

  • File 2 includes about 5 times more Technical Debt than File 1.
  • All things being equal, the technical debt of File 3 is 10 times more dangerous than that of File 2.

This graph is also useful for making decisions about remediation priorities, for example when a project works on an application with a legacy part.

If you have very little time available, you will refactor File 4 because it has little debt, but this debt is potentially highly damaging. Compared to refactoring File 3, your task will have a much higher Return on Investment.

If you have more time available, you will extend the operation to all the files with a Non-Remediation Cost over a given threshold. For example, you may decide to refactor the 5 files whose Non-Remediation Cost is over 500. By doing so, you will significantly decrease your users’ level of exposure.
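Under these assumptions, the selection can be sketched in a few lines. The file names and figures are hypothetical; only the threshold of 500 comes from the example:

```python
# Remediation shortlist: files whose Non-Remediation Cost exceeds a threshold,
# ordered by Return on Investment (risk removed per day of refactoring).
files = [
    # (name, Technical Debt in days, Non-Remediation Cost)
    ("File 3", 50, 1000),
    ("File 4", 5, 800),
    ("File 7", 12, 600),
    ("File 9", 8, 550),
    ("File 12", 20, 520),
    ("File 1", 10, 100),
]

THRESHOLD = 500
shortlist = [f for f in files if f[2] > THRESHOLD]
shortlist.sort(key=lambda f: f[2] / f[1], reverse=True)  # best ROI first
print([name for name, debt, nrc in shortlist])
```

With these numbers, File 4 comes first: it removes the most risk per day of refactoring effort.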

The Portfolio level Debt Map
In this case, the points on the map are applications. Each application is positioned according to its Technical Debt density and its Non-Remediation Cost density.
This allows you to analyze the situation of a complete portfolio and to compare applications regardless of their technology, size and context.
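A minimal sketch of the density calculation; the application names, sizes and debt figures are invented for illustration:

```python
# Density metrics make applications of different sizes comparable:
#   debt density = Technical Debt / size
#   NRC density  = Non-Remediation Cost / size
apps = {
    "App A": {"debt_days": 40, "nrc": 2000, "kloc": 50},
    "App B": {"debt_days": 600, "nrc": 30000, "kloc": 150},
    "App C": {"debt_days": 90, "nrc": 45000, "kloc": 30},
}

densities = {
    name: (a["debt_days"] / a["kloc"], a["nrc"] / a["kloc"])
    for name, a in apps.items()
}
for name, (debt_d, nrc_d) in densities.items():
    print(f"{name}: {debt_d:.2f} debt days/kLOC, {nrc_d:.0f} NRC/kLOC")
```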

If we take the example of Figure 2:

  • App B contains about 5 times more Technical Debt than App A.
  • All things being equal, the technical debt of App C is 40 times more dangerous than App A.

This will help to analyze the situation and identify which part of your portfolio needs attention.

Figure 2: SQALE Debt Map at application level

If an application provides very little “business value” and its annual maintenance workload is very low, the fact that it is not well positioned on the Debt Map is not worrisome.
On the contrary, if an application is very critical and its code is not of good quality (that is to say, it is positioned at the top right of the map), this represents a risk, and improving the code of this application may be a high priority.

The SQALE Pyramid: A powerful indicator

Meaningful insights into your Technical Debt

The SQALE Pyramid is certainly the most useful indicator of the SQALE method. It gives a lot of information on the nature of the technical debt and thus helps to make decisions. I will try to show how it helps to answer questions that often arise once you have quantified the technical debt of your application.

Imagine that you have analyzed the code of your application or your project and the total technical debt estimated with SQALE is 50.7 days.

We will run through some questions that could be asked and will see how the SQALE pyramid helps to respond.

Is it a short-term or long-term debt?

The SQALE pyramid shows the distribution of the technical debt according to the chronology of expectations during the life cycle of a code file. The short-term parts of the technical debt are the lower layers of the pyramid (Testability and Reliability), and the parts that will have an impact in the longer term (Maintainability, Portability, Reusability) are the upper layers of the pyramid.
The following example (as reported by the SonarQube tool) shows the distribution of a debt of 50.7 days: 13.8 days (4.0 + 9.8) are of a rather short-term nature, and 30 days are of a long-term nature. The latter is long term because the impact of this debt will only be perceived when transferring the maintenance of the code to another team.
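The split can be computed directly from the figures quoted above; only 4.0, 9.8, 30.0 and 50.7 come from the example, the remainder is derived:

```python
# Split the example's 50.7 days of debt by layer of the SQALE pyramid.
total_debt = 50.7
short_term = 4.0 + 9.8   # Testability + Reliability (lower layers)
long_term = 30.0         # Maintainability (felt when transferring the code)
other_layers = round(total_debt - short_term - long_term, 1)
print(round(short_term, 1), long_term, other_layers)
```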

How critical is my technical debt?

Not all issues found in the code are identical. Some may have a high negative impact on the business, such as security- or reliability-related issues. Following the Technical Debt metaphor, this is the part of the debt with the highest interest. In this category, you will find issues such as logic errors or mismanagement of exceptions.

Other issues are less critical because their presence won’t directly affect the business.

In the example below, the amount of critical debt (that is, the debt related to the “Reliability” and “Security” layers of the pyramid) is 10.4 days, or 20% of the total.



How much effort should I spend to make my code more reliable?

Firstly, if you want your code to be reliable, you should also include in your Quality Model a requirement related to test code coverage. This will ensure the efficiency of your test activities (unit, integration and/or functional tests). This requirement (e.g. an 80% line coverage rate for all files) should be integrated into your SQALE Quality Model under the Reliability characteristic.

As explained in various articles available on this site, in order to ensure the reliability of the code, you should at least solve all the issues related to testability and reliability. So the effort to spend is the sum of the Testability and Reliability debt, which in the SQALE Method is called the SQALE Consolidated Reliability Index (SCRI). In our example it is 13.8 days.

This effort is necessary to improve the reliability of the application, but of course, this is not sufficient. The reliability of your application depends also on other efforts performed on additional activities like peer reviews, beta testing, etc.


How much effort should I spend to make my code more maintainable? (in other words, to reduce the required annual charge to fix bugs and implement Change Requests)

The same logic applies: you must look at the SQALE Consolidated Maintenance Index (SCMI). To reduce future maintenance costs, you should resolve the issues related to testability, reliability, changeability, security, performance and maintainability. In this example, you will need to spend a workload of 50.7 days.
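Both SCRI and SCMI are cumulative sums over the pyramid, from the bottom layer upward. Here is a sketch: the Testability and Reliability figures come from the running example, but the split of the middle layers is my assumption, and your quality model's layer names may differ:

```python
# A SQALE consolidated index is the cumulative debt from the bottom layer
# of the pyramid up to (and including) a given characteristic.
layers = [
    ("Testability", 4.0),
    ("Reliability", 9.8),
    ("Changeability", 3.9),   # this split of the middle layers is hypothetical
    ("Security", 0.6),
    ("Efficiency", 2.4),
    ("Maintainability", 30.0),
]

def consolidated_index(up_to):
    """Sum layer debts from Testability up to the named characteristic."""
    total = 0.0
    for name, debt in layers:
        total += debt
        if name == up_to:
            return total
    raise ValueError(f"unknown characteristic: {up_to}")

scri = consolidated_index("Reliability")      # about 13.8 days
scmi = consolidated_index("Maintainability")  # about 50.7 days
print(round(scri, 1), round(scmi, 1))
```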

Where do I start to repay the technical debt of my code?

There are multiple strategies for setting refactoring priorities.

The most relevant one depends mainly on your context and especially on the budget you can allocate to this activity.

Let’s illustrate 2 cases:

1 – You are far from the delivery date, and so you can allocate a workload representing a large percentage of your total technical debt (at least 60%)

In this case, you need to improve the quality of the code by first making it testable; that is to say, solve issues such as overly complex methods, duplicated code, etc. Then you pay back the debt associated with the next layer up the SQALE pyramid, which is reliability, and so on.


2 – You have very little time. You can’t repay the debt related to testability because it is structural and therefore time-consuming. So you will deliver your application with remaining debt, and it is wise to reduce the criticality of this debt. You will focus your efforts on correcting the critical issues, the ones with the highest potential business impact: the issues related to reliability. In this case you will need 9.8 days.

It should be noted that this last strategy is not optimal, because you may fix potential bugs in pieces of code that should later be refactored for testability reasons; that time would then be lost. We can say that this is the “quick and dirty” way to manage Technical Debt.


Summary

As shown, this pyramid helps to answer many questions related to source code quality. To summarize, the SQALE pyramid helps you to analyze and understand your technical debt on three aspects:

  • Maturity of the debt
  • Severity of the debt
  • Remediation order

Instead of communicating just the total amount of Technical Debt, it is more useful to report its distribution in the form of a SQALE Pyramid. This should be part of good Project Management Dashboards.

P.S. The Pyramid helps to answer many other questions, I covered another one: “How agile is your code?” in a previous post here.


Testability is the mother of ability

Among the particularities of the SQALE method, there is one whose importance is not always well understood. I’ll try to explain it in this post.

The SQALE Quality Model identifies quality characteristics and puts them in a chronological order. The first one, at the bottom, is Testability.

This means that even before you look at the reliability of your code, its performance, its security, its maintainability by third parties, etc., you must first look at its testability and fulfill the associated requirements.
If your code is not testable (that is, it is too complex, too coupled…), you will not be able to test it adequately before delivery. You won’t be able to check and improve its reliability and security. Later, when you make changes and perform corrective maintenance on your application, you won’t be able to test and check your work correctly.

This leads to the conclusion that testability is the foundation upon which all the other quality characteristics rely. This hierarchy does not appear in standards such as ISO 25010, which does not help raise awareness of the importance of this characteristic.

Because all other abilities depend on testability, if you want to improve the overall quality of an application, you must start by improving its testability. That means refactoring its architecture and its internal structure in order to make it completely testable.

Why managers like the Technical Debt concept

Since its introduction by Ward Cunningham, the concept of technical debt has become quite well recognized and is used more and more by project managers to monitor their projects.

What is quite surprising (and also beneficial) is that this rather technical concept is also used and supported by middle and upper managers. I have already mentioned in a previous post that the CIO of a very large bank (30,000+ developers) monitors the technical debt of his complete portfolio on a quarterly basis with the SQALE method.

There are probably many reasons for this growing interest, and each manager will have his own. Here are the ones that, in my opinion, are the most common.

  1. Technical debt is an objective measure of quality. In fact it measures “non-quality” and tells you how far (in terms of days, $…) you are from complying with your “right code” definition. This is a very simple concept. It is easy to understand and facilitates communication.
  2. Consolidation is easy: Consolidation is performed by simple summation. If you have an estimation of the technical debt of each of your applications, then it is easy to get the technical debt for each of your domains and for your complete portfolio.  As an example, this is what is automatically performed by the combination of the views and SQALE plugin of the Sonar platform. If your static analysis does not do it for you, it’s still easy to consolidate the numbers within Excel.
  3. Technical debt density has a useful meaning. It’s easy to calculate the technical debt density of an application, a domain, etc.: you just divide the technical debt of an item by its size (if your analysis tool does not support this feature, again, Excel will do it for you). This allows you to compare the quality of items of different sizes. You will be able to compare projects developed (or maintained) by different subcontractors or in different locations.
  4. Technical debt is estimated in days or dollars, which can easily be compared with other project or portfolio management data. It is meaningful to compare technical debt to other measures like remaining schedule or budget allocated on a project. Technical debt can be correlated to such project data (and many others) providing support to managerial decisions.
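Points 2 and 3 amount to a sum and a division; here is a sketch with invented portfolio figures:

```python
# Point 2: consolidate by simple summation.  Point 3: compare via density.
portfolio = {
    "Billing":   {"debt_days": 120, "kloc": 80},
    "CRM":       {"debt_days": 300, "kloc": 400},
    "Reporting": {"debt_days": 45,  "kloc": 25},
}

total_debt = sum(app["debt_days"] for app in portfolio.values())
densities = {name: app["debt_days"] / app["kloc"] for name, app in portfolio.items()}
worst = max(densities, key=densities.get)  # densest debt, regardless of size
print(total_debt, worst)
```

Note how the smallest application has the worst density: comparing absolute debt alone would have hidden that.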

Technical debt is probably the first code-related measure that fulfills the measurement needs of managers; it also fits well into their favorite tool: Excel. Their interest in this measure is not a passing fashion. Technical debt is becoming part of many management dashboards and will support more and more portfolio management decisions.

Your strategic decisions will depend on the precision of your Technical Debt estimations. Make sure that your estimation model is calibrated to your context.

How agile is your code?

It is sometimes necessary to change the maintenance mode of a legacy application and switch it to an agile mode.

In this case, we must ask ourselves whether the source code of the project contains too much technical debt inherited from years of maintenance. If the inherited debt is too high, it is likely that the code does not lend itself to an agile maintenance mode. How do I know which applications are eligible for a change of maintenance mode, and which are not?

We will see that the SQALE method provides real help for this kind of decision.

What we want to avoid is the poor (or very poor) quality of the application source code hampering the maintenance activities of the team. In that case the maintenance team will be far from reproducing the productivity achieved in other agile projects.

In a SQALE quality model, the first three expected quality characteristics (those shown at the bottom of the SQALE pyramid) are testability, reliability and changeability. An agile team performs cycles where activities of testing, debugging and change keep coming at a high speed. Their velocity depends mainly on their productivity for these 3 activities. The part of the technical debt that corresponds to these activities is thus the main concern. Other parts of debt such as that related to performance or safety will have a very limited impact on the team’s productivity.

In the SQALE method, the debt specific to these three characteristics is called the SCCI (SQALE Consolidated Changeability Index). This index represents the “agile debt” of your code. When you divide this value by the size of the code, you get the density of this debt. This index, called the SCCID (SQALE Consolidated Changeability Index Density), represents the “agility” of your code.
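A sketch of both indices; the layer figures and code size are hypothetical:

```python
# SCCI: debt of the three bottom layers; SCCID: SCCI divided by code size.
layer_debt_days = {
    "Testability": 12.0,
    "Reliability": 8.0,
    "Changeability": 6.0,
    "Security": 1.5,
    "Efficiency": 3.0,
    "Maintainability": 20.0,
}
size_kloc = 65.0

scci = sum(layer_debt_days[c] for c in ("Testability", "Reliability", "Changeability"))
sccid = scci / size_kloc  # "agile debt" density: the agility of the code
print(scci, round(sccid, 2))
```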

If you look at the two SQALE pyramids below (which show the distribution of technical debt according to its impact on the life cycle activities), it is clear that the two projects have similar amounts of technical debt but very different distributions.

In one case (Application A) the portion of the debt relating to the “agility of the code” is relatively low; in the other it is rather high. In the second case, it will probably be beneficial to refactor the code before maintaining it in an agile mode.

It takes some calibration effort to know which threshold should be used within a specific organization and a given context, but this dedicated SQALE index is of obvious interest for this type of decision.

SQALE Pyramid samples issued by Sonar

Obsolescence and Technical Debt

I have read many blogs and articles on Technical Debt. I have also participated in exciting events on the topic. There are at least two major, positive messages that are always raised:

  • Code quality is very important, and all projects and organisations should monitor it.
  • The Technical Debt metaphor is a simple but smart way to monitor this code quality in a way that everybody in the hierarchy can understand.

I have a concern about what should be included in Technical Debt.

If, at a point in time, we analyse the source code of an application, we will surely have findings and room for improvement. Do we have to count everything as Technical Debt? It sounds logical, but if we take a step back, it seems to me that we should differentiate two types of findings.

1st Category: Findings related to violations of good coding/implementation practices, violations of architecture constraints, etc. Examples in this category include:

  • Copy and paste
  • Over complex methods
  • Violation of architecture layers
  • Cyclic dependencies
  • Naming convention
  • Presentation convention

2nd Category: Findings associated with the fact that, since the software was delivered, there has been technological progress. New ways/tools are now available, allowing better stability, changeability, performance, etc. Examples that come to mind are:

  • ESB
  • New framework (or new version)
  • New library (or new version)

From my point of view, the second category should not be counted as Technical Debt; it is just obsolescence.

Obsolescence should be used for managing the application and governing a portfolio. In the balance sheet, this figure will have the same negative effect as the Technical Debt. But to be more precise, it should be in a specific cell dedicated to evaluating the depreciation of the application, not in a “debt” cell.

If we go back to Ward Cunningham’s original quote about Technical Debt, Technical Debt comes from the “not right code”. That means it comes from violations of requirements by the source code. Ward does not include any additional root causes.

If we include in Technical Debt findings whose root causes are linked only to technical progress and obsolescence, Technical Debt will increase over time without any change to the code, and we will attribute unfair debt to developers.

Does obsolescence count as Technical Debt?

What about differentiating Technical Debt and Technical Obsolescence?

What’s your opinion?

What Does “Managing Technical Debt” Mean?

Estimating the value of the Technical Debt of a project is not enough to be able to manage it.

When you have estimated the value of your debt, you have just made a first step. You know where you are, but it does not help you decide where to go and how to get there.

I have tried to describe here what “Managing Technical Debt” means to me personally. This is certainly not a complete inventory, but I hope it will at least contribute to the debate.

In the following lines and as stated by W. Cunningham, I consider Technical Debt as the result of “not right code”.

In my opinion, “Managing Technical Debt” means to be able at least to perform the following:

1) Set project goals related to Technical Debt. Establish quantifiable goals in terms of amount or density, in terms of nature etc. and answer questions such as:

  • What creates Technical Debt?
  • How is the Technical Debt estimated?
  • What is the acceptable Technical Debt (absolute value or density) for the Project?
  • What level or type of debt is acceptable (and not acceptable) from the Technical Team perspective?
  • What level or type of debt is acceptable (and not acceptable) from a Business perspective?

2) Monitor the amount of Technical Debt over time (either the absolute value or the density) and answer questions such as:

  • Has the Technical debt increased during the last day(s)/sprint(s)/version(s)?
  • How much margin do we have related to the goal set for the project?

3) Compare the Technical Debt for different projects or subcontractors and answer questions such as:

  • Which project/subcontractor delivers the least Technical Debt?
  • Regarding Technical Debt density, are we below or above the average of other comparable projects?

4) Analyze the temporal origin of the Technical Debt and answer questions such as:

  • Which part of the current debt has been created during the last day/sprint/version?
  • Which part of the current debt is inherited from legacy code?

5) Analyze the physical origin of the Technical Debt and answer questions such as:

  • Which parts (files, packages, components…) of the project/portfolio have the highest Technical debt (in absolute value, or in density)?

6) Analyze the technical origin of the Technical Debt, which means obtaining information on the different “bad practices” that generated the debt (and then perhaps launching awareness or coaching sessions on specific topics), and answer questions such as:

  • How much of the debt is related to architecture issues?
  • In this amount, how much comes from cyclic dependencies?
  • How much is related to “Exception Handling”?
  • How much is related to “copy and paste” instances?
  • How much is related to insufficient test coverage?

7) Analyze the points you want to address by reducing the Technical Debt and answer questions such as:

  • If I want to preserve/increase the velocity of the project, which part of the debt is concerned and needs to be fixed?
  • If I want to preserve/increase the transferability (capacity to be maintained by a third party) of the project, which part of the debt is concerned and needs to be fixed?

8) Analyze the impact of the Technical Debt from a business perspective (the part of the debt that creates issues or risks for the business) and answer questions such as:

  • Which part of the Technical Debt impacts the security of the application?
  • Which part of the Technical Debt impacts the reliability of the application?
  • Which part of the Technical Debt impacts the performance of the application?

9) Set priorities for reimbursing the Technical Debt. Be able to optimize the results of a partial payback of the debt (this is the typical situation, as it is rare to have a sufficient budget to reimburse all of the debt).

  • Which are the most urgent issues to fix within my code?
  • When I have fixed the most urgent issues and if I have some remaining budget, what are the next issues to fix?
  • Which violations of “right code” are not so costly to fix and will greatly decrease the impact on the business?
  • Which parts (files, packages, components…) have the best ratio from a business impact/remediation cost point of view?
  • I have exactly 14 hours available. What is the most profitable way (from the business point of view) to spend them?

I had initially identified some additional questions but chose not to keep them because they are too dependent on context and need some local feedback and calibration, so they can’t be answered immediately after the deployment of a solution. For example:

If I spend 100 hours to decrease the Technical Debt,

  • How much will I improve my velocity?
  • What improvement will users perceive in the quality of the application?

I consider that if you have put in place a solution that provides answers to all these questions, then you can really say that you “Manage your Technical Debt”.

Technical Debt and Business perspective

A new concept: Non-Remediation Cost

The current version of the SQALE Method Definition Document supports and helps you manage the Technical Debt of an application/project. The new version of the method (which will be publicly released soon) will continue to use the debt metaphor and will help you manage two concepts:
•    The Technical Debt, which is now well understood. (If needed, you will find more detail here. If you want to know everything, there is the complete reference book by Chris Sterling here.) I remind you that the SQALE Quality Index is an objective and precise estimation of the Technical Debt accumulated within a piece of source code.
•    The Non-Remediation Cost, a new concept, which I want to introduce here.

I will introduce the Non-Remediation Cost concept with a simple analogy.
Let’s imagine that you need a new office building. So you write a specifications document, which contains your functional requirements and also your quality requirements. Here are two examples of such quality-related requirements:
•    For security reasons, you require video cameras at strategic locations on each floor.
•    For thermal insulation reasons (or whatever reason), you require that all windows have double glazing.

If, during the building phase, you or the building team discovers some non-conformity with these requirements, the building company will evaluate the remediation cost of the issues in order to understand their relative importance and their impact on the project’s planning and budget. Throughout the building phase, the useful information, and the concept the team will work with, is the Technical Debt: a relevant measure for managing non-conformities during the building phase.

Let’s suppose that, very close to the delivery date, you inspect the building and discover 5 single-glazed windows and 2 missing video cameras. As there is not enough time to fix the 7 issues, you will be obliged to find a compromise. Which information will help you make the smartest decision, the best compromise?
In fact, you will look at the cost of leaving the non-conformities in place and try to answer the following question:
What will be the impact of leaving single-glazed windows versus missing cameras in various places in the building?
At that moment, what counts is the business perspective, that is, the real or potential damage resulting from the non-conformities. The monetization of this damage can be summarized as a cost which is transferred to the client and/or the owner; this is what is called the “Non-Remediation Cost” (you may also call it the “Business Debt”). The Non-Remediation Cost of each issue will be compared to its Technical Debt in order to set remediation priorities.

To summarize the metaphor:
The cost of fixing a non-conforming window is a Technical Debt: the cost that the building team will have to pay to fulfill its commitment.
The extra heating cost (and the total monetization of all other damage) resulting from keeping a non-conforming window is a “Non-Remediation Cost” transferred to the owner (the business) as a result of delivering a non-conformity.

Non-Remediation Cost applied to source code quality

Now, if we apply both concepts to source code:
First, I remind you that within SQALE, source code quality is simply compliance with source-code-related requirements.
The Technical Debt represents the cumulated negative impact of the non-conformities on the real development cost of the project. The interest on that debt is a decrease in development productivity.
Technical Debt will impact the figures of the Development Plan.

The Non-Remediation Cost represents the cumulated negative impact of the same non-conformities on the real business value of the project. The interest on that debt is a decrease in the project’s ROI. Non-Remediation Cost will impact the figures of the Business Plan.

Both concepts and the metaphor apply to all types of development methods (agile or not). Both concepts are the monetization of stated non-conformities.
The Technical Debt represents the technical perspective of the findings; the Non-Remediation Cost (or Business Debt) represents the business perspective of the same findings.

How to use both perspectives

When you just want to monitor source code quality, you just want to monitor “how far” you are from your quality requirements. In that case, you will mainly monitor and analyze your Technical Debt.
If you want to optimize your quality versus your effort, then you will use both pieces of information: the Remediation Cost and the Non-Remediation Cost. You will spend your limited remediation budget on the remediations with the best ROI. That means you will try to decrease your transferred costs (the Non-Remediation Cost) while spending the least remediation effort.
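The trade-off in the second case can be sketched as a greedy selection; the issue names, efforts and costs below are invented:

```python
# Greedy partial payback: within a fixed budget, fix first the issues that
# remove the most Non-Remediation Cost per hour of remediation effort.
issues = [
    # (name, remediation hours, Non-Remediation Cost)
    ("SQL injection risk", 4, 900),
    ("Unhandled exception", 2, 300),
    ("Duplicated block", 6, 120),
    ("Complex method", 8, 200),
    ("Naming violation", 1, 10),
]

budget_hours = 14
plan, removed_nrc = [], 0
for name, hours, nrc in sorted(issues, key=lambda i: i[2] / i[1], reverse=True):
    if hours <= budget_hours:
        plan.append(name)
        budget_hours -= hours
        removed_nrc += nrc
print(plan, removed_nrc)
```

This is a heuristic, not an exact knapsack optimum, but it matches how a limited remediation budget is typically spent in practice.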

The following graph illustrates the usage of both concepts.

I think the new concept simply provides the means to manage remediation priorities the same way you manage feature priorities:
Priorities for the implementation of functionalities are established by taking into account the associated development costs and business values.
By analogy, the remediation priority of non-conformities is established by taking into account the associated Technical Debt and Non-Remediation Cost. The analogy is illustrated below:

In conclusion, the two concepts provide two different perspectives, two different pieces of information, for analyzing and managing the non-conformities of your code. The Technical Debt is a first-level indicator for day-to-day monitoring. The Non-Remediation Cost is a second-level indicator for optimization, compromise and priority setting.
The new version of the SQALE Method Definition Document (coming next month) will support you with the relevant indices and indicators.

Testimony on SQALE

From Dr. Israel Gat, Cutter Consortium Fellow and Director, Agile Practice, who uses SQALE for performing Technical Debt Assessments.

Context over Content

“Context over Content” is my mantra in just about every consulting engagement I carry out these days. You will literally hear me tell my clients something like “Values, principles and practices are, of course, extremely important. However, as far as this engagement is concerned, the only thing that really matters is how we will jointly apply them in your specific context: your needs, your resources, your predicaments, and your constraints.” In the domain of software quality evaluation, I find SQALE (Software Quality Assessment based on Life Cycle Expectations) a great tool for implementing my mantra. It interprets source code analysis in terms of what really matters in the specific client environment. In so doing, it transforms an overwhelming set of measurement data into actionable insights which are meaningful at multiple levels of the firm.

Israel Gat, January 2012

