Social Email Login

Your users are tired of having to register on your site because it takes time, and it’s another username/password to remember. If you have tried to offer the social login feature on your site and realized that it was not that simple to build, then this Social Email Login utility is for you!

Social Email Login is a library built on ASP.NET 4.0 that lets your users log in using their favorite social network. This library was created because the OAuth libraries available on the net today are usually too complicated and often depend on other libraries. Sometimes you just need a simple login to identify the user, and nothing more…

The main social networks are available out of the box: Facebook, Google, Microsoft Live, Yahoo and Twitter. The goal of this project is to have a simple and flexible tool that retrieves the email address of the user who logged in using one of the available social networks, and then uses that email to integrate easily with the ASP.NET Membership provider.

This is a simple tool because it only tries to retrieve the email address and nothing else.
This is a flexible tool because it lets you add service providers very easily.

The available authentication protocols are OAuth 1.0, OAuth 2.0 and OpenID.
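To illustrate the Membership integration, here is a minimal hedged sketch of what the site-side code might look like once a provider has returned an email address. The SignIn helper below is an illustrative assumption, not the library's actual API; only the Membership and FormsAuthentication calls are standard ASP.NET.

```csharp
// Hypothetical glue code: map the email returned by a social provider to an
// ASP.NET Membership account and sign the user in. Only Membership and
// FormsAuthentication are real APIs here; SignIn itself is illustrative.
using System.Web;
using System.Web.Security;

public static class SocialLoginHelper
{
    public static void SignIn(string email)
    {
        string userName = Membership.GetUserNameByEmail(email);
        if (userName == null)
        {
            // First visit: provision a Membership account keyed on the email,
            // with a random password since the social provider authenticates.
            Membership.CreateUser(email, Membership.GeneratePassword(16, 2), email);
            userName = email;
        }
        FormsAuthentication.SetAuthCookie(userName, false); // non-persistent cookie
        HttpContext.Current.Response.Redirect(FormsAuthentication.DefaultUrl);
    }
}
```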

Dependencies:
Social Email Login has only one dependency, the CodeFluent RuntimeClient, a free library available as a NuGet package (http://nuget.org/packages/CodeFluentRuntimeClient/). This library is used mainly for two of its features:

  1. to manipulate the different parts of a URL
  2. to work with JSON (de)serialization

Many JSON utilities exist on the net today, but most of them are overcomplicated and too big. So we decided to use the CodeFluent RuntimeClient library, because it does what we need and works with any type of ASP.NET application. Since the source code is on CodePlex, you can obviously swap this library for another one if you like.

Nuget:
https://nuget.org/packages/SocialEmailLogin/

CodePlex:
http://socialemaillogin.codeplex.com

Website Demo:
http://www.softfluent.com/downloads/socialemaillogin.demo.zip

HELP
To run the demo, you need to edit the web.config:

  • select your SQL database;
  • enter both the consumerKey and consumerSecret keys for each service provider you wish to use (see the sketch after this list). You will need to create an app for each service to retrieve your consumerKey and consumerSecret keys.
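As a purely hypothetical sketch of what this configuration might look like (the connection string name and the appSettings key names below are assumptions; the demo's own web.config shows the exact names it expects):

```xml
<!-- Hypothetical web.config fragment: names and keys are illustrative only. -->
<connectionStrings>
  <add name="ApplicationServices"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=SocialEmailLoginDemo;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
<appSettings>
  <!-- One consumerKey/consumerSecret pair per enabled service provider. -->
  <add key="Facebook.ConsumerKey" value="YOUR_APP_ID" />
  <add key="Facebook.ConsumerSecret" value="YOUR_APP_SECRET" />
  <add key="Twitter.ConsumerKey" value="YOUR_CONSUMER_KEY" />
  <add key="Twitter.ConsumerSecret" value="YOUR_CONSUMER_SECRET" />
</appSettings>
```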

Generating JSON web services from an existing database with CodeFluent Entities

This article posted on CodeProject will show you how to generate a JSON-based web service layer from an existing database using CodeFluent Entities. We will also generate a web client back office by following an “Import wizard”.

A common scenario

Let us say that we are facing the following scenario:

  • We have a database that we want to expose via a JSON-based web service layer providing CRUD (Create, Read, Update and Delete) operations.
  • We also need to build a back office in order to manage and administrate the data coming from our database.
  • We may need, in the future, to access our database in a different way, for example from a smart client, or to expose a SOAP-based web service layer (there are always new ideas).
  • We need to deploy this system as soon as possible.

Let us start; here is what we need to do:

  • Build a data access layer capable of loading data, creating new data, and updating and deleting existing data (and make sure it works).
  • Manage data validation (and make sure it works).
  • Build a JSON-based web service layer:
    • Build every needed service contract and operation.
    • Configure our service contracts to support JSON (see the sketch after this list).
    • Host our services.
    • Make sure it works.
  • Build a web-based client (and make sure it works).
  • Lay the foundations so that any possible evolution and additional architecture can be supported, including mobile access from different smartphone devices.
  • And everything I have missed.
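To give an idea of the plumbing involved in the JSON step alone, here is a minimal hand-written sketch of a JSON-enabled WCF service contract. The Customer type and ICustomerService name are illustrative assumptions, not what CodeFluent Entities generates:

```csharp
// A hand-written, JSON-enabled WCF service contract (illustrative names).
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // WebGet/WebInvoke with WebMessageFormat.Json make the operations JSON-based.
    [OperationContract]
    [WebGet(UriTemplate = "customers/{id}", ResponseFormat = WebMessageFormat.Json)]
    Customer Load(string id);

    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "customers",
        RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    Customer Save(Customer customer);

    [OperationContract]
    [WebInvoke(Method = "DELETE", UriTemplate = "customers/{id}",
        ResponseFormat = WebMessageFormat.Json)]
    void Delete(string id);
}
```

And this is only the contract: hosting, endpoint configuration, and error handling still remain.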

Or… we can use CodeFluent Entities to do the plumbing and be sure that it works.

In the starter wizard, we can see some of the possible built-in architectures that can be generated by CodeFluent Entities, and of course you can design your own architecture by creating a custom CodeFluent Entities project with your relevant set of producers.

[Figure: the CodeFluent Entities starter wizard]

The scenario mentioned here is developed step by step in the full article on CodeProject.

Benefits of Model-First Software Development

CodeFluent Entities White Paper

The objective of this white paper is to describe the software development challenge and clarify its root causes.

The first half of the document explains the market challenge and why this is a tough business issue. This part is widely applicable to anyone interested in software development and does not depend on our offering.

In the second half of the document, we explain how SoftFluent addresses the challenge through its CodeFluent Entities model-first software factory and associated methodology.

Read the CodeFluent Entities White Paper

Learn more about CodeFluent Entities

The mythical "Indian man-month"

Introduction

It has long been established that measuring software development output cannot be reduced to a time-spent unit such as the man-month. As early as 1975, The Mythical Man-Month became a famous book that explained this in different ways, including the observation that adding resources to a late software project can only delay it further. The book was republished in 1995, a confirmation of its accuracy, but also proof that explaining this to non-software people remained fully necessary 20 years later. I have no doubt the message will still need repeating in 2015, as software production remains a widely misunderstood discipline among decision-makers.

A lot of people keep comparing software to physical industry, where the same component needs to be produced thousands or millions of times with the exact same process. In software, source code is produced only once: it is immaterial and can be copied as needed at almost no cost. And even when comparing with industry, one should ask whether a team of 10 guys with shovels would dig a hole faster and at a lower cost than a team of 3 guys with bulldozers. So it is not very hard to understand that every business evolves in its methods and tooling. Software is probably even more concerned than many other businesses, considering the pace of technology evolution.

Emergence of off-shore

Still, since the emergence of the "off-shore" model in the late 1990s, the focus on the hourly cost of developers has peaked, with few voices in the software industry challenging these new software production models.

The reasoning was as simple as the following: most of the cost of software development is the salary of developers, as it is a time-consuming activity, which is true. So if one goes to countries where developer time is less costly, it will cost less in the end. This fully neglects the importance of methodology, in particular the interaction process with users or product management, as well as skills and tooling, but who cares?

At that time, the development of cost-killing methodologies, combined with the power given to purchasing departments and their basic comparison methods, plus the absence of a relevant metric to measure software production (beyond the man-month), made this reasoning the general trend in the industry, leading to what I would now call "the mythical Indian man-month syndrome", with Indian developers being of course cheaper than their higher-salaried counterparts in western countries.

Despite awareness of numerous field failures (one can read this 2004 article as an example and notice the cautiousness of the title), the trend has continued to date. To some extent, skills and part of the production methodology have of course improved in the off-shore countries, but the most important point, the interaction with business users, remains.

20% savings "when it works"

As the president of the R&D focus group of the AFDEL (the French software vendor association), we had a session of experience sharing about off-shore. Not only were many failures mentioned, but the fact that struck me most is that the successful projects reported about 20% savings in the end, once all the hidden costs needed to make it work were included. It was also mentioned that about 2 years of ramp-up were required to make it successful.

Interestingly, I found a CIO magazine article that confirms this observation as early as 2003! The article has the merit of listing all of the costs, as well as explaining this "20% savings in the most favorable case" reality.

Understanding the harsh reality about off-shore should be no surprise; I personally love one comment on this article:

It is very difficult for business users and IT workers residing in the same locale to work out software requirements and successfully execute a project. In fact, some statistics say that around 80% of IT projects fail to meet their goals. Now imagine moving the technical team several thousand miles away, put them on a work schedule that is completely opposite that of their end-users, give them a different native language, and give them a completely different set of cultural mores and norms. With as much sarcasm as I can muster, I must say that none of this seems intuitively likely to increase the odds of success, but it does have the "advantage" of being really cheap.

Paying a low price for something that does not fulfill your need is for sure money badly spent!

Probably more than 100% extra costs in other cases

Now let us come back to field experience: off-shore "per se" is not the solution to the numerous challenges faced in software development. This is why we observe many failures in the field, where we estimate costs to be almost double what they should be, not only in the short term but over the long run.

It is not a matter of whether developers are good or not in some geographies (although there are geographies with more or fewer skills); it is mostly the nature of software development that makes it challenging to work at a distance from users.

Trendy agile methodologies are smart enough to put an emphasis on the proximity of the product management role. We developed the importance of alignment in our series of posts about "Measuring software development performance", which covered 5 key critical dimensions:

  1. Alignment
  2. Productivity
  3. Quality
  4. Debt minimization
  5. Predictability

If the team is large, you might have a chance to land on the 80% side of the spectrum thanks to the "scaling effect", but even so, it is quite challenging to reach. And we have seen many customers moving back from off-shore models and now hiring local resources.

It is also worth mentioning that many failures in the field are simply taboo, and are then perfectly explained away by technical teams, taking advantage of how difficult it is for decision-makers to really measure success. This is quite common in this industry and is of course exacerbated by the complexity of a distant team.

Beyond the costs

Beyond the costs, going too far with off-shore often causes:

  • Skills challenges, as you may not find the best technical solutions,
  • Loss of control, as you may depend on external partners that do not have the same interests as yours,
  • Loss of agility, as you may need months to change even simple business requirements,
  • Increased risks of various kinds, because of the distance, the language, the culture, and sometimes even legal or intellectual property exposure.

In the end, by externalizing too much of your development, you will lose even the skills to evaluate whether you are doing well or not. One could also mention the citizenship dimension, but that is not even my point here.

This is why, even though off-shore is often a failure, few people publicly admit it, leading to a significant market hypocrisy that really contrasts with the discussions experts have among themselves when they share their experience face-to-face.

Seriously, as an informed decision-maker, would you risk losing control for an uncertain potential 20% savings?

I personally would not. We have never even thought of externalizing the R&D of our software, as an example.

As we wrote in a previous post, it is only with the contribution of software engineers that you can make a real long-term difference in the market, as those guys are the ones able to make certain critical choices. Some choices can change the software development cost by an order of magnitude while delivering the same value!

So make sure you keep at least some local skills, or you will expose yourself to losing control at some point in the future.

Daniel COHEN-ZARDI

SoftFluent CEO

Measuring software development performance – Part V: Predictability

Through our previous articles we elaborated on the first 4 performance axes of development teams: alignment, productivity, quality, and debt minimization.

The fifth important element to take into account in order to evaluate development team performance is the predictability of the results.

According to various studies also cited by the AFDEL in its White Paper on innovation: 58% of purchased software is never put into production, only 13% of IT projects are delivered on time, and 30% are purely and simply stopped. Thus, it is clear that the ability to drive and predict software projects remains an issue on this axis.

Obviously, it is very difficult to compare the predictability level of a development team building management applications in a functionally stable market with that of the research and development team of an innovative software company positioned in a technological field close to fundamental research.

Be that as it may, the purpose of any development team is to achieve results, whether they are proofs of concept, software to be marketed, or applications for internal use. In the business applications world, given the current state of the art, it seems to us particularly feasible to obtain consistent results in terms of deadlines and costs, which can be evaluated using relatively simple and limited macroscopic criteria.

For software companies, it is important to be able to formalize a roadmap as well as to release regular versions of it. Although it is well known that these roadmaps are rarely followed to the letter, big gaps can lead to disaster, since roadmaps are often linked to financial forecasts. Below is an example of a traditional roadmap for Microsoft Visual Studio over the past years:

[Figure: Microsoft Visual Studio roadmap over the past years]

The trend towards "Software as a Service", where competition is aggressive, generally requires more regular releases, as "Web" companies (whose business model is based on the Internet, such as business-to-consumer web sites) do. It is then essential to integrate continuous improvements, which in practice means building the roadmap the other way around: with fixed dates and a variable scope of what can "fit" into a given release, generally aligned on a season or semester. Here is an example for the Microsoft Dynamics CRM offer:

[Figure: Microsoft Dynamics CRM release roadmap]

Note that software companies often need to create new products to differentiate themselves from the competition and maintain their market position. We will come back to this fact when talking about the extra dimension of creativity, especially for the teams of a software company with a Research and Development department.

The issue is quite similar for large companies, because they must be able to orchestrate the release of business applications while preparing deployment, training, and user support. Ideally, the development department of a large company should behave like a small software company. As organizations go, this is actually becoming a market trend.

From our experience, many failures are due to a mismatch between the expected result and what is really produced. This discrepancy is often due to a rupture between the design stage and the development stage. Off-shore has contributed to some disasters in that area.

Trying to imitate models from traditional industry, some have forgotten that – contrary to the ‘physical matter’ industry – the implementation phase of software is never a reproduction process performed identically. Each development is unique by nature, since software is intangible and can be replicated at almost zero cost.

This reasoning, transposed into the world of software development, has now shown its limits, hence a retreat from these low-cost models is beginning to emerge… along with problems related to the lack of resources that the model has created.

It is also common, especially with technological breakthroughs, to find teams who lose control of timing, and R&D projects that run several months or years late, with budgets growing in the same proportions and leading to massive deficits. This phenomenon is very marked in software companies, as large waves of investment are sometimes more than a decade apart, implying a major cultural leap that is not assessed by management or by teams.

It is therefore essential to have regular milestones, including deliveries of versions and new features, to avoid the tunnel effect of some major projects. This effect can create huge gaps in the product roadmap.

This is why agile methods are also interesting to prevent the tunnel effect, by maintaining alignment between the holders of the "product" vision and the production teams. Besides, some predictability indicators are built into these methods, such as the percentage of each iteration achieved compared to what was expected at the beginning of the iteration.

Even though this aspect is clearly useful and interesting, it is important to have the right debate here. The predictability to reach goes beyond the work measured at each iteration – usually 2 to 3 weeks – and must be achieved at a more macroscopic level, as mentioned before with the roadmap.

To conclude, note that the issue of predictability is bigger than it seems, because beyond the direct financial consequences of a delay, once trust is lost, the team dynamics rapidly become a vicious circle. Schedules slip, functional managers "load the boat" because they know they will otherwise wait too long for postponed features, and the project becomes more exposed to the risk of a major failure.

CodeFluent Entities for Windows 8 app generator

SoftFluent announces today that CodeFluent Entities and its Visual Studio integrated graphical editor now provide an out-of-the-box Windows 8 generator. It is now possible to generate mobile-ready web services as well as complete Windows Store apps in minutes.

By leveraging CodeFluent Entities, developers can put the burden of keeping up with new technologies on the product, while focusing on developing the features they need for their applications.

Read the full Press Release

Save time for Windows 8 Store apps with CodeFluent Entities

In the next few days, Windows 8 will be released, bringing a set of new features.

Indeed, Windows 8 will come with a brand new user interface. This interface, formerly known as “Metro”, has already been implemented on Windows Phone devices for more than two years, so you may be familiar with it.

This new user interface brings some changes to the Windows UI universe. For instance, the “Start menu” has disappeared and has been replaced by a “Start Screen” where you’ll find all your apps.

Oh wait…apps? Do you mean that my computer has been turned into a smartphone? Will I still be able to use my current software with Windows 8?

Don’t worry! Your good old desktop is still there, and your current software will continue to work. Indeed, your computer hasn’t been turned into a smartphone; rather, to answer customers’ current and future needs and expectations, Microsoft had to provide a user experience that fits both computers and tablets (i.e. mouse/keyboard and touch screens).

Windows 8 Store apps can be downloaded from the “Windows Store”, where you can find free and paid apps which, once purchased, are linked to your Windows Live account.

As disturbing as it is at first look, this new Windows provides a lot of new possible uses for end-users and enterprises. The “Windows Store” is accessible in 120 countries, so people from 120 countries can now buy your apps! A lot of potential customers represents a lot of potential income, and this income can be generated in a lot of different ways (e.g. paid apps, advertising, and in-app purchases).

So far, I’ve been talking mainly about BtoC apps, but you can also develop a Windows 8 Store app dedicated to your own business. For instance, you may have an existing SharePoint server hosting your extranet and want to offer your co-workers more mobility. Windows 8 Store apps give you the ability to provide a new, friendly, mobile, and interactive way to present your existing data. Besides, you won’t have to go through the “Windows Store” to deploy a Windows 8 Store app: you can develop Windows Store apps for your enterprise only and add them to Windows devices you manage through a process called “sideloading”. “Sideloaded” apps don’t have to be certified by or installed through the Windows Store.

Contrary to BtoC apps, enterprise apps depend heavily on business needs and as such are likely to evolve continuously during their lifetime (even during development time!). Integrating new requirements or new technologies into such applications is usually difficult and risky if you didn’t anticipate it properly. CodeFluent Entities was born 7 years ago from this observation and has been designed from the ground up for these kinds of scenarios. CodeFluent Entities is a Visual Studio integrated code generation product, based on a technology- and platform-independent model, and allows continuous code generation to more than 20 target platforms (databases, business layers, UIs, etc.).

CodeFluent Entities is already compatible with Visual Studio 2012, and a “Windows Store producer” (generator) shipped last month. This producer allows you to generate a complete Windows 8 Store application, its relational database, and its JSON web services back-end. A week ago, we published an article showing how to use this new producer.

Measuring software development performance – Part IV : Debt minimization

In our previous articles about software development team performance, we talked about alignment, productivity, and quality.

Today, let us talk about debt minimization.

Debt minimization capability

IT debt is a notion that is getting more and more popular at the analyst level. Gartner estimates IT debt will grow to as much as $1 trillion by 2015.

Up until recently, IT departments have presented the cost of developing applications without really explaining or measuring the induced cost for the future.

But as soon as you develop and deploy an application, you generate recurring maintenance costs that will last as long as the application is used. This cost will disappear only when this application is replaced.

Experience shows that applications always last longer than anticipated, for several reasons. Even when you think of an application as a temporary solution, business changes or budget restrictions may affect the timing of the next version; there might be issues, as with all software projects, causing late delivery; or, possibly, technical disruptions may cause the subsequent application launch to fail.

So delivering an application that will run with the minimal maintenance effort is actually one of the most important elements to consider when measuring performance of a software development team.

About five years ago, in a software vendor meeting, Microsoft shared this slide showing the evolution of the Windows group’s developer teams, split by role. The blue bar indicates developers dedicated to maintenance, the orange bar those dedicated to compatibility, and the yellow bar those focused on innovation. This illustrates the cost of the evolution challenge (especially for an ISV) once a piece of software is used and successful: a lot of your development bandwidth is needed to maintain the legacy.

[Figure: Windows group developer teams by role – maintenance (blue), compatibility (orange), innovation (yellow)]

The Windows slide also illustrates a very important point. Application debt is usually tricky to measure, because most of the associated hidden costs are not included in the pure corrective maintenance cost. Of course, when an application is really buggy, people notice they have a big issue and usually take some action.

But in most cases, a significant part of the real debt is hidden in a bad design that translates into over-sized evolution and maintenance costs, the issue becoming bigger and bigger over time. The slide above does not really indicate whether the innovation part produces as much value as it did in the past. However, one can get some insight from the fact that the effort was mostly part of some pre-Vista work.

Issues are difficult to avoid, because people usually only realize there is a problem once it is too late. It is a bit like the “leaning tower of Pisa”, and the real solution in these cases is to rebuild, which organizations think they cannot afford.

So they will spend much more money maintaining a badly-designed system, but – because in software this is usually less visible than the tower of Pisa – this fact will be hidden in operating costs and attributed to the cost of “unreasonable evolutions” asked for by those “over-demanding business users”.

If your evolution costs look like the non-industrialized curve below, it is time to think about a real modernization of your application:

[Figure: evolution costs over time, industrialized vs. non-industrialized]

In our view, there are two different kinds of systems:

  1. The ones that are functionally very stable. With these systems, even if an individual evolution might be too costly, it is probably good to maintain them for as long as possible, even when the technology is very old. People often launch replacement projects when the technology is no longer supported by vendors; in our view, this is not such an important argument when your system has run for decades. There is little risk that it stops with such a history behind it.
  2. The ones that are still alive because the business requires relatively quick adjustments to the application. With these systems, if you have run into a “heavy debt” situation, it is always better for the long term to re-design and solve the issue. The tricky thing is that you probably need to do this “by pieces” to secure the success of the project; “big-bang” projects often lead to major failures. Throwing away old pieces of code, reducing the size of the code base, and aligning it to more recent technology will save a lot of money rather quickly. But you need to do it the right way, with the proper approach, the right people, and efficient tooling.

The leverage effect of application debt

As your application grows, the legacy you have developed weighs on maintenance, and over-complexity may severely impact the cost of evolution.

Let us take a typical scenario for an application with a 10-year lifecycle. The following table describes a “nominal scenario”, with an initial development of 100 and evolutions over the following years. Maintenance is calculated as 15% of the cumulated past workload.

Nominal scenario

| Cost line | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | Year 6 | Year 7 | Year 8 | Year 9 | Year 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Initial Development | 100 | | | | | | | | | | 100 |
| Evolution | | 30 | 15 | 50 | 25 | 10 | 40 | 25 | 16 | 5 | 216 |
| Maintenance | | 15 | 20 | 22 | 29 | 33 | 35 | 41 | 44 | 47 | 284 |
| Yearly cost | 100 | 45 | 35 | 72 | 54 | 43 | 75 | 66 | 60 | 52 | 600 |

This scenario gives an overall cost of 6 times the initial cost, which is consistent with many projects that live normally and have a certain level of evolution over time to stay aligned with the business.
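For readers who want to check the arithmetic, here is a minimal sketch of the model behind this table, assuming, as stated above, that maintenance in a given year is 15% of the cumulated past workload. Per-year figures in the table are rounded; the totals are computed on unrounded values, which is why Maintenance sums to 284:

```csharp
// Minimal sketch of the nominal cost model: maintenance in a given year is
// 15% of the cumulated past workload (initial development + past evolutions).
using System;

class NominalCostModel
{
    static void Main()
    {
        const double maintenanceRate = 0.15;
        // Year 1 is the initial development; years 2..10 are evolutions.
        double[] workload = { 100, 30, 15, 50, 25, 10, 40, 25, 16, 5 };

        double cumulatedWorkload = 0, total = 0;
        for (int year = 1; year <= workload.Length; year++)
        {
            double maintenance = maintenanceRate * cumulatedWorkload;
            double yearlyCost = workload[year - 1] + maintenance;
            total += yearlyCost;
            cumulatedWorkload += workload[year - 1];
            Console.WriteLine("Year {0}: work {1}, maintenance {2:F1}, cost {3:F1}",
                year, workload[year - 1], maintenance, yearlyCost);
        }
        Console.WriteLine("Total 10-year cost: {0:F0}", total); // ~600, 6x the initial 100
    }
}
```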

Now, let us imagine a team that is not fully optimal but adds a bit of extra complexity, with a classical factor of 20% a year, which is not outrageous. Still, we assume that the complexity compounds over the years and increases the cost of evolutions cumulatively. We still calculate maintenance as 15% of the past workload.

Yearly over-complexity deviation factor: 20%

| Cost line | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | Year 6 | Year 7 | Year 8 | Year 9 | Year 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Complexity deviation | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | |
| Initial Development | 120 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 120 |
| Evolution | 0 | 43 | 26 | 104 | 62 | 30 | 143 | 107 | 83 | 31 | 629 |
| Maintenance cost | 0 | 18 | 24 | 28 | 44 | 53 | 58 | 79 | 95 | 108 | 508 |
| Yearly cost | 120 | 61 | 50 | 132 | 106 | 83 | 201 | 187 | 178 | 139 | 1257 |

The cumulated cost of the project is doubled in the end, and interestingly, the cost of the evolution in year 10 is 6 times what it would be without the burden of this overly complex legacy!

Another observation that might be of interest is what happens when projects start the wrong way. With a starting complexity deviation factor of 30%, even if you react and behave perfectly over the following years, progressively lowering the factor from 1.3 to 1, you will never decrease your yearly maintenance cost enough to align with the previous team.

Yearly over-complexity deviation factor: starting at 30% but going down to zero

| Cost line | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | Year 6 | Year 7 | Year 8 | Year 9 | Year 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Complexity deviation | 1.3 | 1.3 | 1.3 | 1.2 | 1.2 | 1.2 | 1.1 | 1.1 | 1.0 | 1.0 | |
| Initial Development | 130 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 130 |
| Evolution | 0 | 51 | 33 | 132 | 79 | 38 | 167 | 115 | 73 | 23 | 711 |
| Maintenance cost | 0 | 20 | 27 | 32 | 52 | 64 | 69 | 94 | 112 | 123 | 592 |
| Yearly cost | 130 | 70 | 60 | 164 | 131 | 102 | 236 | 209 | 185 | 146 | 1433 |

Projects that start badly never end well; this is also a field observation one can make. When it is really bad, it may be better to just start a new project. Here is a graphical version of the cumulated application costs of the three scenarios above:

[Figure: cumulated application costs for the three scenarios]

Although a bit simplistic, we believe this model is consistent with some of our observations in the field, where complexity translates into cost increases over time.
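For completeness, the deviation scenarios can be reproduced from the same nominal workloads with one extra assumption: each year's workload is multiplied by the cumulated product of the yearly deviation factors seen so far. A sketch for the third scenario (factors going from 1.3 down to 1.0):

```csharp
// Sketch of the over-complexity variant: each year's nominal workload is
// inflated by the cumulated product of the yearly deviation factors, and
// maintenance remains 15% of the cumulated (inflated) past workload.
using System;

class DeviationCostModel
{
    static void Main()
    {
        double[] nominal = { 100, 30, 15, 50, 25, 10, 40, 25, 16, 5 };
        double[] factors = { 1.3, 1.3, 1.3, 1.2, 1.2, 1.2, 1.1, 1.1, 1.0, 1.0 };

        double cumulatedFactor = 1.0, cumulatedWorkload = 0, total = 0;
        for (int y = 0; y < nominal.Length; y++)
        {
            cumulatedFactor *= factors[y];
            double work = nominal[y] * cumulatedFactor;    // inflated development cost
            double maintenance = 0.15 * cumulatedWorkload; // 15% of past workload
            total += work + maintenance;
            cumulatedWorkload += work;
        }
        Console.WriteLine("Total: {0:F0}", total); // ~1433, versus ~600 nominal
    }
}
```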

As we explained in the article about productivity, it is also very important to be able to estimate the real size and complexity of your project. A good estimate is also a way to ensure you develop only the essential elements, because every single line of code that is not strictly necessary will be very costly over the whole application life cycle.

From our observations in the field, we are always surprised to find people are not familiar with simple metrics such as:

  • Number of business entities or tables,
  • Number of screens / pages,
  • Number of reports,
  • Number of lines of code.

For sure, one needs these factual elements (and even better, further details such as business rules, data properties, user interface fields, etc.) to be able to evaluate costs, both for producing the software and for maintaining it and implementing changes.

If the only criterion used to measure the impact of a feature request is the workload estimated by your developers, you may be in trouble: first, you have no way of challenging this estimate, and second, there is a high degree of variability among developer skills.

As always, feel free to comment and share your own experiences.

Daniel COHEN-ZARDI
SoftFluent CEO

CodeFluent Entities supports Visual Studio 2012



SoftFluent announces the release of CodeFluent Entities for Visual Studio 2012 in August. SoftFluent announces today that CodeFluent Entities and its Visual Studio integrated graphical editor will run within the final version of Visual Studio before August 31st, 2012. CodeFluent Entities was updated to follow the look & feel of Visual Studio 2012.

By leveraging CodeFluent Entities, developers can put the burden of keeping up with new technologies on the product, while focusing on developing the features they need for their applications.

Read the full Press Release

Measuring software development performance – Part III: Quality

In our previous articles about software development team performance, we talked about alignment and productivity.

Today, let us talk about quality.

Quality

Quality is a critical pillar of software development and measuring the performance of your team against this criterion is fundamental.

Quality consists of 5 dimensions: reliability, performance, scalability, supportability, and ergonomics/ease of use.

[Figure: the five dimensions of quality]

Quality is obviously a topic "per se", with its own best practices and organization challenges. We developed this topic in the article "Software testing best practices", which became the most visited post on our blog.

So the idea today is not to address the entire broad topic of quality management; instead, we will focus on how to measure the performance of your team along the five dimensions of quality as they relate to software development.

Reliability

Measuring the reliability of an application can be pretty straightforward, even if comparisons are not always easy, because this information is most of the time kept confidential inside companies.

Basically, the best indicator is the number of open bugs and its evolution over time. Of course, the fewer bugs the better. But be careful: a lack of bugs may indicate insufficient testing, or that the application is not being used (except, of course, when you reach this state over the long term). And in general, few bugs means few evolutions.

What is really important to monitor over time when delivering software is:

  • The trend in the number of bugs: it should increase when you deliver a new version of your software and decrease regularly from there;
  • The split of bugs by importance: critical, major, or minor;
  • The consistency of the bugs-per-line-of-code ratio when delivering new software.

An important indicator, which we will talk about later on, is the time needed to correct a bug. If a lot of time is needed to fix a significant number of bugs, it may be a sign of bad design, and those situations are likely to have an impact on reliability.

The consequences of a lack of reliability can be disastrous, especially for software publishers. Unreliability can dramatically increase the support cost of an application. There are real stories of vendors that were killed, or that lost significant business, because of the lack of reliability of a specific version of their software.

Performance

We are all software users. Which of us has never grumbled at a slow application or an unresponsive web site?

It is obvious when we are on the user side, but some developers tend to minimize the importance of implementing a responsive user interface. In the end, whatever the technical reason behind it or the work that is done by the software, users won’t use a solution that is too slow.

Consequently, when coding, you need to choose technical options that provide a good performance experience for your users. Of course, this does not mean doing anything for the sake of performance alone, or everything would be written in assembly language, but it is important to always optimize the components that have a strong impact on performance: remote connections, data access, manipulation of an appropriately reduced data set, and usage of loops (especially nested loops).

If some data processing requires a lot of time, you need to think about asynchronous mechanisms, splitting the data, or other design options to avoid a bad performance experience for your users.

The consequence of a lack of performance will be user non-acceptance of the application, however properly you implement its features. If it is an internal application, lack of performance may cause conflict within the company. If it is a product you sell… then you won’t sell it!

Scalability

People tend to confuse performance with scalability. These are two different aspects of quality, and sometimes they require contradictory design options.

Scalability is the ability of your application to grow without limit, either in the number of users it supports or in the volume of data it can manage. For example, Access is a very well-performing DBMS, but it is not scalable; it doesn’t work well with many users.

With the emergence of the Internet, SaaS, and cloud applications, scalability has often become more important than performance. And it is most of the time a technically challenging issue if you have to deal with a large number of users sharing the same data.

With the diminishing cost of hardware, multiplying the number of machines is clearly less costly than over-optimizing performance with developer time. But simply adding machines does not work; you need to design your application in a way that lets you split the work between those machines and still guarantee consistency.

Measuring the scalability of your applications is a rather costly process. It requires setting up complete bench platforms dedicated to evaluating your application under a heavy load of users and data.

It also requires the appropriate tools and skills to correctly interpret the measures and be able to assess the elements you need depending on the growth of your user base. This process is known as "capacity planning".

A lack of scalability can cause a major business disruption. If the application becomes stuck, then it cannot welcome new users. And there is usually no short term solution if the application has not been designed with scalability in mind.

Supportability

Supportability is the ability of an application to be correctly operated in production.

One key element is the proper management of exceptions and errors, as mentioned in our article "Code instrumentation best practices".

A production-aware application should also be able to deal with unforeseen events such as network failures, bandwidth drops, or hardware issues.
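As a minimal sketch of the kind of exception management this implies (the Instrumentation helper below is illustrative, not a prescribed API), the idea is to capture enough context to find what happened, where, and with which data:

```csharp
// Illustrative instrumentation wrapper: log the operation name and the full
// exception so production issues can be located and reproduced quickly.
using System;
using System.Diagnostics;

public static class Instrumentation
{
    public static T Run<T>(string operation, Func<T> action)
    {
        try
        {
            return action();
        }
        catch (Exception ex)
        {
            // The full exception keeps the type, message and stack trace;
            // the operation name points to the piece of code concerned.
            Trace.TraceError("Operation '{0}' failed: {1}", operation, ex);
            throw; // never swallow: supportability must not hide errors
        }
    }
}
```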

To measure results on this dimension of quality, we recommend tracking the average time to correct a bug, which involves:

  • finding what happened,
  • finding the piece of code concerned,
  • being able to get the data used in the scenario that caused the error,
  • identifying the unforeseen scenario,
  • proposing a valid correction for the code.

The speed at which you can reproduce issues is also a key indicator for an application’s supportability.

Thinking about disaster recovery and how you can restore the application in case of major events is also part of this dimension.

The consequence of having an application low in supportability is the risk of downtime. Beyond the induced costs and the image problem, a discontinuity of service can impact business revenue when the system is down.

Ergonomics and ease of use

Ergonomics and ease of use is also a major pillar of quality.

Measuring this dimension and making comparisons is not so easy, though. People tend either to underestimate the importance of ergonomics or, on the other hand, to expect every application to have the same ease of use as major consumer web sites, which are developed with tens of millions of dollars.

Some applications bear intrinsic complexity from their business domain, which might require very specific knowledge, making it a particular challenge to keep usage simple.

The best indicator of ergonomics is the ability to use and learn an application quickly. If possible, we recommend measuring this with new users. How much time does it take to get them on board using the application? If a long time is needed, how much of it is intrinsic to the complexity of the application, and how much is related to the ergonomics and design?

It is better to measure this with new users than with users accustomed to other systems, whose judgment may be biased by habit and some reluctance to change.

Increased ease of use will lower training costs, improve the productivity of your users, and ease acceptance of the application.

As with performance, a lack of ergonomics may result in user non-acceptance of the application and extra hidden costs.

In conclusion, when measuring the quality of your software development, don’t forget to assess all 5 dimensions; they all contribute to making your software successful.

As always, feel free to comment and share your own experiences.

Daniel COHEN-ZARDI
SoftFluent CEO
