August 19, 2011
A classic discussion I have with some of our customers – many of them software vendors – concerns the opportunity to leverage third-party components and code, or – on the contrary – the risk of introducing dependencies into their application or product.
As always with this kind of question, giving an absolute answer is pointless. A developer may find it obvious that reducing dependencies on external elements gives him more control. But a decision-maker would just as easily understand that leveraging someone else’s effort makes more economic sense than doing everything yourself.
So let us try to give some hints that might help you find your own appropriate answer to the question.
Economics is the first aspect of the question. There are scenarios where it is simply obvious that you should leverage existing components. A database is the obvious example for most applications, as you will not rewrite a database engine. In many cases, it is also relevant to use visual components such as Infragistics, Telerik or DevExpress when you need elegant user interfaces. Still, when developers run into an issue with one of these components, possibly due to misuse, I have seen situations where they tried to redevelop user interface components themselves. Needless to say, this is a bad idea most of the time.
I remember the reasoning of a developer who told me something like: "But the component costs $1,000, and I could do it in a week!". What is wrong with this reasoning?
- In fact, a week of a developer certainly costs more than $1,000. But the biggest part of the mistake is not there.
- Implementing the needed feature will certainly cost more than 5 days, because the developer has underestimated areas such as testing. The first implementation is likely to cost at least 10 days, for a very limited set of features.
- When you consider the whole application lifecycle, studies show that the cost until the shutdown of the application is about 6 times the initial investment, because of evolution and maintenance over the long term. Additional features will likely be needed, and bugs will emerge from the code written. So in this particular case, I suspect the real cost of doing it internally is in the tens of thousands of dollars, not to mention the risk of failing.
- And the component written by the developer is unlikely to match the design and flexibility achieved by professional software component designers.
- Finally, the company will also have lengthened its time to market in the process.
The only advantage in that case is the ability to design something truly customized for the context. So before deciding to implement internally something that already exists as a component, one should probably formalize what is specific about the context to justify it.
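To make the build-versus-buy arithmetic above concrete, here is a minimal sketch. All figures are illustrative assumptions (the license price from the anecdote, an assumed loaded developer-day cost, and the roughly 6x lifecycle multiplier cited above), not real quotes:

```python
# A rough sketch of the build-vs-buy arithmetic. All figures are
# illustrative assumptions, not data from any real project.

COMPONENT_PRICE = 1_000        # one-time license fee ("the component costs $1,000")
DEVELOPER_DAY_COST = 400       # assumed loaded cost per developer-day
NAIVE_ESTIMATE_DAYS = 5        # "I could do it in a week!"
REALISTIC_INITIAL_DAYS = 10    # testing and edge cases roughly double the estimate
LIFECYCLE_MULTIPLIER = 6       # total lifetime cost vs. initial build (studies cited above)

naive_cost = NAIVE_ESTIMATE_DAYS * DEVELOPER_DAY_COST
initial_cost = REALISTIC_INITIAL_DAYS * DEVELOPER_DAY_COST
lifetime_cost = initial_cost * LIFECYCLE_MULTIPLIER

print(f"Component license: ${COMPONENT_PRICE:,}")
print(f"Naive estimate:    ${naive_cost:,}")    # $2,000 - already over the license fee
print(f"Initial build:     ${initial_cost:,}")  # $4,000
print(f"Lifetime cost:     ${lifetime_cost:,}") # $24,000 - tens of thousands, as argued
```

Even under these conservative assumptions, the internal build costs an order of magnitude more than the license over the application's lifetime.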
This said, dependency is another factor to take into consideration. Following the previous argument, and thinking about all the many pieces of software developed over the past decades, one could quickly conclude that building software is just a matter of assembling pieces and components, and that the more you leverage third-party components, the better. Beyond the fact that it is much more complex than that, because of technology diversity and integration costs, there is another factor to consider: dependency. Especially if you are a software vendor, limiting your dependencies to the necessary ones should be a goal in itself. These two posts about "Dependency Avoidance" and "In Defense of Not-Invented-Here Syndrome" explain this well, so I won’t go into more detail here.
Dependencies are also a risk in the sense that your software may be exposed if a third-party component fails – if not now, then as your software evolves. This risk needs to be managed through appropriate validation of the third-party software, both in terms of quality and of the support contracts you can have with the vendor. For example, our own CodeFluent Entities software has no third-party dependency for the tool itself, except the .NET Framework and, for the Modeler version which integrates directly into Microsoft’s IDE, Visual Studio. We used to have a dependency for the licensing piece, and we removed it about a year ago when the vendor became less responsive to our support requests, as we identified this as a risk for us and our customers. It is also relevant to base the decision not only on the current version of the technology but also on its potential evolution: will it let you innovate faster, or will it slow down your own innovation pace?
To analyze your dependencies, including those between your own internal libraries, you can look at the very good NDepend product.
Visualizing your dependencies with NDepend
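To illustrate what dependency analysis means at the most basic level, here is a minimal sketch that lists which modules a piece of Python source code imports, using only the standard library. This is not NDepend (which works on .NET assemblies at a far deeper level); it is just a toy illustration of the idea of extracting a dependency list from code:

```python
# Toy illustration of dependency extraction: list the top-level modules
# a piece of Python source code imports, using only the standard library.
import ast

def list_imports(source: str) -> set[str]:
    """Return the top-level module names imported by the given source code."""
    tree = ast.parse(source)
    deps = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                deps.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps

sample = "import os\nimport json\nfrom collections import defaultdict\n"
print(sorted(list_imports(sample)))  # ['collections', 'json', 'os']
```

Running such an extraction over every module and drawing the edges between them is, in essence, what a dependency graph tool automates, with metrics and visualization on top.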
The integration mode of third parties – binary versus source code integration – is another key element to consider. While integrating third-party open source code is very trendy at the moment, with promoted benefits such as the possibility to fine-tune the code to your own needs if necessary, this is actually a dangerous trap in our view.
Why is this an issue? In fact, the argument of being able to modify the code is not a good one. If I am building cars and buy the engine from a third party, I am probably better off with a contractual guarantee from the engine vendor than with an opportunity to tweak the engine. Choosing the latter, I take full responsibility for making the engine work again after I tweak it, and I do not know where that will lead me, especially if I am not an engine expert. And the fact that the vendor promotes the possibility of making final adjustments yourself might be a sign that the engine was not designed as a finished element that works as long as you respect clear specifications when integrating it into the car you are building.
Furthermore, open source often means that you can start for "free", which also usually means that nobody at management level validates the dependency introduced when an open source library is imported into the code base. In the projects we see in the field, we often find numerous dependencies introduced directly by developers, without strong awareness of the impact. The bottom line of growing your code base by importing external code is an increase in maintenance cost, not to mention the legal implications that developers often overlook (what percentage of open source fans really read and understand EULAs?).
We also see many projects in the field where developers pile up frameworks. This never yields a consistent solution, and even if it might work at a given point in time, it usually turns into an evolution nightmare. As this is a topic in itself, I will probably devote a whole post to it later.
As a conclusion, we think that the key elements to remember are the following:
· Limiting the number of dependencies to the ones that bring you the most value is necessary, especially if you are a software vendor yourself,
· Favoring binary over source code integration helps make sure the added value of the third party is clear and contractual, and ensures you do not get contaminated by potential flaws in the third-party software.
Daniel COHEN-ZARDI, CEO