How to double your money?

In this article I am not talking about doubling your revenue or profits; I am going to focus on doubling your costs. Why? Because nothing is free: if you want to earn more, you need to pay more. And because you have direct control over your costs, not over your earnings. It is like darts or archery: you have control only until you release the arrow.

Suppose you are managing a small racing team. You are spending 1 million on the car and 5 million on the driver. If you double the money on the car to make a better car you will win more races, and if you double the money on hiring a better driver you will win more races too. Which one would you choose? I would choose doubling the money on the car. It will cost you 1 million extra, not 5 million.
Now suppose you are running a production factory. You are spending 1 million on machines and robots and 5 million on workers. If you want to increase your production, where would you double your money? Again, I would double the money on machines and robots. In fact, most car manufacturers spend over 90% of their production budget on equipment.

Well, it is not always that obvious. System and software development is a knowledge industry. People are the most valuable assets, but also the most expensive. It is not unusual for 10% of the costs to be spent on tools (licenses, servers, support and maintenance), while 90% goes to the engineers, architects, testers and project managers. No development manager will approve doubling the costs of tooling. It is considered “overhead”, and overhead should be minimized, not increased. Yet the same manager is willing to spend even more money on better training, process improvement initiatives, metrics programs and new methodologies. Workers in projects are allowed to spend a considerable amount of their time on the implementation and deployment of better processes and tools – time they do not spend on actually developing the system or software.

Why is that? Why is a factory manager willing to spend over 90% on equipment to maximize the productivity of the factory, while a development manager is not willing to go beyond 10%? If people cost 10 times more than tools, an increase of 10% on people would be equivalent to an increase of 100% (a doubling) on tools. Which one would ultimately be more effective for the organization?
I argue that if you spend 90% of your development costs on tools, tool integration and automation, and 10% on the best possible teams that you can buy – people who can fully focus on the intellectual work that cannot be done by tools – the organization will be far more effective in both speed and quality of delivery than any organization with the reverse ratio.

Of course, you should not double the costs of tools and equipment blindly. But in system and software development there seems to be a taboo on doubling the “overhead” costs even if that would be more effective than spending even more money on improving the capabilities of people.

Posted in integration, people, software development, systems

Would you spend 1 FTE more to save 2 FTE?

Suppose you have a software or system development organization of 400 people, working in 4 development programs, each with 5 projects and on average 20 people per project. You are using a heterogeneous tooling landscape of different tools from different vendors to support the projects, such as a requirements management tool (e.g. Doors), an architecture modeling & analysis tool (e.g. Enterprise Architect), integrated edit-build-test IDEs (e.g. Eclipse, Visual Studio), a test management tool (e.g. HP ALM), a source control tool (e.g. Git, SVN, RTC), a defect control tool (e.g. Jira, Bugzilla, Trac) and a Scrum support tool (e.g. Jira Agile, Rally, RTC, TFS).

Of course, the tooling infrastructure did not drop out of thin air. It is the result of evolutionary growth, during which practices evolved as well. Some tools are used in isolation, but at a certain moment the need for integration emerges. Let’s go through some of those integrations:

  • Defect control – Source control
    We need this integration to understand which source code was changed to solve a particular defect. Conversely, it helps understand why a piece of code was modified.
  • Test management – Defect Control
    We need this integration to understand which defect was submitted because a test run failed – or, more precisely, because a particular step of a test case in a test run failed. Conversely, we need to understand which test cases to re-execute when the defect is solved.
  • Requirement management – Test management
    When a test case passes, we need to know which requirements are OK. Conversely, if a test case fails, we need to know which requirements are not OK. In addition, if we want to release our product, we need to understand how many of the requirements are OK – the essential requirements must be 100% OK, while less essential requirements may be less than 100% OK. So we need to integrate requirements with tests.
  • Agile Planning – Requirement management
    Most projects have limited capacity to develop the product or system. We need to plan which requirements will be delivered in the next releases. That requires integration between requirements and plans.
  • Agile Monitoring – Test management
    A burn-down chart visualizes the number of stories that are done over the course of a sprint. A story is done when it passes its acceptance criteria, which typically involves testing. We need the integration between the stories counted as done in the burn-down chart and the status of the tests. Typically, this is a 3-way integration from the agile planning/tracking tool to requirements to tests.
  • Agile Planning – Defect Control
    To know whether you can finish the delivery of a release, you need to understand the amount of open work (e.g. stories) and the amount of open rework (defects).
  • Source control – Requirement management & Architecture modeling
    To implement a piece of code, you need to understand what it is expected to do. In traditional organizations you would have a specification document; in modern organizations the specifications are stored as requirements and architecture models in the corresponding tools. Conversely, if you want to change the requirements or the architecture, you need to understand which source code is affected.
  • Reporting – any tool
    Many tools have limited reporting capabilities, and where reporting capabilities exist, the reports are often only accessible within the tool itself or as a PDF or Excel export. Integration with a reporting or data warehouse tool is an often neglected integration.
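To make the first integration above (defect control – source control) concrete: a common lightweight approach is to reference defect IDs in commit messages and extract the links afterwards. Below is a minimal sketch; the Jira-style key format and the sample commits are assumptions for illustration, not tied to any particular tool.

```python
import re

# Pattern for Jira-style defect keys, e.g. "PROJ-123" (assumed convention).
DEFECT_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def link_commits_to_defects(commits):
    """Build a defect-id -> [commit hashes] mapping from commit messages.

    `commits` is a list of (hash, message) tuples, e.g. taken from `git log`.
    """
    links = {}
    for commit_hash, message in commits:
        for key in DEFECT_KEY.findall(message):
            links.setdefault(key, []).append(commit_hash)
    return links

commits = [
    ("a1b2c3", "PROJ-42: fix null pointer in parser"),
    ("d4e5f6", "Refactor logging (no defect)"),
    ("0718fa", "PROJ-42 PROJ-77: follow-up fix and new guard"),
]
print(link_commits_to_defects(commits))
# {'PROJ-42': ['a1b2c3', '0718fa'], 'PROJ-77': ['0718fa']}
```

The same mapping, read in the other direction, answers the “conversely” question: given a commit, the keys found in its message tell you why the code was modified.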

Now suppose you want to implement those tool integrations. Some integrations are supported by a plugin, adapter, integrator or synchronizer from the vendor of one of the tools. By the way, if you have tool A from vendor AA and tool B from vendor BB, it is not always obvious which vendor provides the integrator and how far that vendor is willing to support integration issues with the other tool (free of charge, as part of the license agreement). So it takes effort to find the integrator, install it, configure it, test it, troubleshoot it and solve problems with the integration. If no vendor offers an out-of-the-box integrator, you might write your own scripts or application to integrate the tools. That takes effort to learn the APIs of the tools, to discuss and define the integration needs, to implement it, to test it, to troubleshoot it and to solve problems.

And then the vendors come with newer versions of the tools, which you need because bugs are fixed and the projects need extra functionality. When upgrading a tool, you may need a new version of the integrator, which you need to reinstall, reconfigure, retest, troubleshoot and fix. And if the integrator belongs to vendor BB while you are upgrading tool A, you might run into issues getting vendor BB to support it.
And in the case of your own integrations, you need to learn the updated APIs of the new tool version (hopefully none of the functions you use are deprecated), redefine the integration needs (to take advantage of the new capabilities), adjust the implementation, retest it, troubleshoot it and solve problems.

And then you want to add a new tool and/or replace an existing tool. You need to reconsider all your integrations with those tools, (re)install or (re)define and (re)implement them, (re)configure them, (re)test them, troubleshoot issues and solve problems.

To make things worse, you need to maintain resources with technical knowledge of and expertise in the integrations, and keep that knowledge up to date. If someone leaves, he or she must be replaced. For integrations made in-house, you need to maintain that knowledge in-house; for vendor-based integrations you may depend on the vendor or external service providers, but the organization-specific knowledge still needs to be maintained in-house. For open-source tools it may be even more difficult.

And to make things even worse, software engineers have a tendency to invent their own tools and integrations in the margins of their work. What starts as retyping information from one tool into another often ends as a “handy” little script that extracts information from one tool and imports it into another. Run it from a crontab on a local computer and you’re done – until another project wants the same thing. Before you know it, your machine is overloaded with processes that are not yours, but that you need to support anyway in case there are issues.
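Such a “handy” little script typically looks like the sketch below: read a CSV export from one tool, rename the fields, and write a JSON import for the other. The field names and the mapping are hypothetical; real exports differ per tool and version, which is exactly why these scripts become a maintenance burden.

```python
import csv
import io
import json

# Hypothetical field mapping between a "tool A" CSV export
# and a "tool B" JSON import (assumed names, for illustration only).
FIELD_MAP = {"Issue ID": "id", "Summary": "title", "State": "status"}

def csv_export_to_import(csv_text):
    """Convert a CSV export from one tool into a JSON import for another."""
    rows = csv.DictReader(io.StringIO(csv_text))
    records = [
        {FIELD_MAP[k]: v for k, v in row.items() if k in FIELD_MAP}
        for row in rows
    ]
    return json.dumps(records, indent=2)

export = """Issue ID,Summary,State
D-101,Crash on startup,Open
D-102,Typo in dialog,Closed
"""
print(csv_export_to_import(export))
```

Harmless on its own; the trouble starts when the export format changes after an upgrade, or when three other projects start depending on the cron job that runs it.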

“How much do maintenance and development of tool integrations cost?”

As stated at the start, we assumed an organization of 400 people. What do you think: would you be able to support, develop and maintain all tool integrations (including maintaining technical knowledge of the APIs and the integrations) for about 10 tools and 400 people with 1 person, or 1 FTE? And how much time do projects spend in the margins working on integrations? Would 0.1% be realistic, which is 2 hours per year per person on average? And how much overhead would the projects lose because an integration does not work properly, is missing or is down, and synchronizations need to be recovered? Would 0.2% be realistic, which is only 4 hours per year per person? Would that be a realistic representation of reality?
Anyway, in that case the total costs would be 1 + (0.001 * 400) + (0.002 * 400) = 2.2 FTE, which is about $250k per year. My estimate is that this is a serious underestimate!
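The back-of-the-envelope calculation can be written out explicitly. The fully loaded cost per FTE below is an assumption (roughly $115k), chosen only to land near the $250k figure; substitute your own rate.

```python
people = 400
dedicated_fte = 1            # central integration support
margin_fraction = 0.001      # ~2 hours/year per person on ad-hoc integration work
overhead_fraction = 0.002    # ~4 hours/year per person lost to broken integrations

total_fte = dedicated_fte + margin_fraction * people + overhead_fraction * people
cost_per_fte = 115_000       # assumed fully loaded yearly cost per FTE

print(round(total_fte, 1))              # 2.2
print(round(total_fte * cost_per_fte))  # 253000, i.e. about $250k per year
```

Note how sensitive the total is to the two per-person fractions: doubling the overhead fraction alone adds another 0.8 FTE, which is why the $250k figure is likely an underestimate.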

Now, if I can deliver the same tool integrations and more, keep them up to date with the latest versions of those tools, better tested and of higher quality, for the cost of 1 FTE (about $120k per year), which solution would you prefer? If a tool integration is missing, I could add it for another $50k with no yearly cost increase; it would still be cheaper than the underestimated current reality.

“Impossible? Unrealistic? Are you mad?”

This is absolutely realistic and possible. The only problem is that I have never met a development manager who is willing to increase the operational overhead costs by $120,000 per year for tool integrations. Why not? Because reduced effectiveness in projects is a hidden cost, counted as “normal” development cost, and because a tool support team of – say – 4 people is already considered “overhead”.

“We’d rather hire an extra engineer than spend an extra $120k per year on tool integrations that we already have or don’t need!”

Posted in configuration management, integration, tools

Flipboard magazines

If you are interested in what I find worth reading on the internet, have a look at my Flipboard magazines.

Posted in blogs, community, configuration management, music, software development

Is the configuration manager going to disappear?

The simple answer is: Yes, configuration managers are going to disappear!

The more comprehensive answer is that configuration management is not going to disappear, but the configuration manager is. We can compare it to the change of the crew in the cockpit of an airplane.

In the early days of aircraft, there was only a single pilot in the cockpit. With the growing need for air transportation of people and goods, airplanes became bigger, requiring more advanced engines, hydraulics, electronics, avionics, navigation, communication and much more. So a second pilot was added. With the growing complexity of airplanes, a flight engineer was added, with thorough technical knowledge of engines, electronics, hydraulics and dynamics; and with the growing complexity of the technology, even a second flight engineer was added.
This trend stopped when computers were introduced. Initially, the computers aided the engineers by replacing gauges and switches with screens and buttons, so one flight engineer was enough again. The next step was that computers not only relayed the information more concisely, with better structure and overview, but were also put in control of complex systems like the engines and hydraulics. The user interface for the flight engineer was simplified, and the responsibility of the engineer moved from deciding both what to do and how to do it, to only what to do; the computer controlled the how. Today, the flight engineer has disappeared completely.

Does that mean that flight engineering has disappeared too? No! Airplanes and flight control have become tremendously complex. There are so many control processes running continuously and simultaneously that it is impossible for human beings to execute them. In fact, flying modern airplanes is not even possible anymore without computer systems.

Now, let’s go back to software and system development. Developing a control system like an airplane is complex – extremely complex! But developing a car is complex too, and even a “simple” device like a phone is very complex. A lot of people have to work together to develop a system and bring it to the market, each of them using and producing a lot of information. The information is in constant flux: content is changing, relationships are changing, structure is changing, and expectations and interpretations are changing constantly.

The traditional role of the configuration manager is to assure that the right data is available to the right people at the right time, with the right status and in the right format. Building a complex system requires a lot of work from a lot of people, and a lot of information to be used and produced. Add to that a growing demand for speed, visibility, traceability, responsiveness to customer changes and a high level of quality, plus collaboration between multiple projects and project teams across multiple sites in various locations and timezones… In other words, configuration management is becoming more important than ever before.

In fact, similar to flying modern airplanes, developing modern systems is not even possible anymore without computer systems. And similar to the flight engineer, the configuration manager role is being replaced by these computer systems: Application Lifecycle Management (ALM) systems. So yes, configuration managers are disappearing. Project managers and quality managers can take control of configuration management through the ALM systems.

But something else is also happening: configuration management is disappearing as a separate discipline. Similar to saving and printing documents in office applications, configuration management is being embedded into the ALM applications that support the various engineering disciplines. For example, requirements management involves managing requirements, which implies unique identifiers, versioning, storage and retrieval (including searching), baselining, status, delivery, and even branching and variant management. The same goes for test management, portfolio management, roadmapping and other disciplines. If we look at Amazon, Facebook, Twitter, phones and tablets, cars, trains, airplanes, booking agencies, street lighting and security systems, healthcare and wellness systems, or even ALM systems, nowhere is configuration management a separate discipline. Isn’t it important then? Yes, it is so important that it is becoming a basic function of every system that manages information, like saving or printing in office applications.
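The idea that identifiers, versioning and baselining become plain built-in functionality can be illustrated with a toy sketch. Everything here is hypothetical (the class, the `REQ-1` item, the baseline name); it only shows how little machinery the embedded-CM core needs.

```python
class VersionedStore:
    """Toy sketch of CM as built-in functionality:
    unique ids, versioning, retrieval and baselining."""

    def __init__(self):
        self._items = {}       # item id -> list of versions (1-based)
        self._baselines = {}   # baseline name -> {item id: version number}

    def save(self, item_id, content):
        versions = self._items.setdefault(item_id, [])
        versions.append(content)
        return len(versions)   # the new version number

    def get(self, item_id, version=None):
        versions = self._items[item_id]
        return versions[-1] if version is None else versions[version - 1]

    def baseline(self, name):
        # Freeze the current version of every item under a baseline name.
        self._baselines[name] = {i: len(v) for i, v in self._items.items()}

    def get_baselined(self, name, item_id):
        return self.get(item_id, self._baselines[name][item_id])

store = VersionedStore()
store.save("REQ-1", "The system shall start within 5 s")
store.baseline("Release-1.0")
store.save("REQ-1", "The system shall start within 3 s")
print(store.get("REQ-1"))                           # latest version
print(store.get_baselined("Release-1.0", "REQ-1"))  # version frozen in the baseline
```

A real ALM system adds status, branching and variant management on top, but the point stands: none of this requires a person called “the configuration manager”.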

So there we have it: configuration managers are disappearing, and configuration management is disappearing as a separate discipline. Does that mean that I will be out of a job soon, and many other CM-ers with me? No! Configuration managers will move into operational engineering or management disciplines (e.g. testing or project management), into strategic process, business and development management disciplines (e.g. development manager, product manager), or into technical or managerial IT roles (e.g. tool expert, IT manager).

Posted in agile, complex systems, configuration management, excellence, large projects, software development, tools, tracking

Software development in industry is lagging behind

I was watching a TV program on the distinction between fake and real. It covered the use of digital technologies to replicate art, and even to create 3D animations of news and other realities. It made me think about system and software development in the embedded systems industry. Why are we developing the most advanced MR and CT scanners, the most advanced and intelligent lighting systems, the most advanced chip production or the most advanced printers, car navigation, coffee machines, or even the most advanced airplanes, but still doing it with technology from the previous century? Why are we applying the methods and technology of the 1990s to make the products of today?
I often have a very hard time convincing my management to use digital technologies that are currently applied in companies like Google, Microsoft and even big blue IBM – and not only management. Software and systems engineers, too, often find it hard to accept collaboration technologies that are very common in our private lives, and to apply them in their professional lives.
Why are so many people still reluctant to adopt agile methodologies, internet technologies like RESTful interfaces, or even the collaboration technologies built on APIs supported by Google, Facebook, Twitter, Amazon and eBay, or found in products like smartphones and tablets? Why are we using non-integrated software and system engineering and project management tools? Why waste so much effort on ignoring or customizing the functions of these tools, and why waste effort on integrating these tools when integrated solutions are available?
Apparently, wasting thousands of man-hours is cheaper than spending a few thousand on improving the effectiveness of professionals. I just don’t understand how this is economically viable. Apparently, so much money is still paid for these systems that we can afford the waste in the innovation industry. Apparently, the competition is also asleep, wasting their time on outdated technology. Apparently, customers have no alternative but to accept that we waste so much time and effort. Apparently, it is still profitable to waste precious innovation effort. Apparently, the large companies are still powerful enough to overpower the more advanced entrepreneurs. But for how long? When will we wake up to the fact that high-tech innovation in large industrial companies is seriously outdated?

Posted in complex systems, configuration management, large projects, software development, tools

Writing less, not more

You have probably noticed that I have exported my blog posts from Blogspot to here, and that I have not blogged very much lately, mainly because I do not finish and publish my posts. So I am going to write less and publish more often.

Posted in blogs

Welcome to my new blog

This is my first post on my WordPress blog.

Posted in blogs