The last few weeks I have been more involved in hardware CM, or more specifically hardware CM for IC (chip) development. The design process for a chip is very similar to the design process for software.
To design an analogue chip, you first model the electronic circuits using a modeling tool, connecting signal and power lines between standard components. This model goes into a simulator where you “test” your design. For a digital chip, modeling is even more similar to software development: you write code in a programming language. A code checker is then able to calculate the output bits based on the input bits. So in both cases, you create source files that describe the system behavior, and you run a build to check the code or model and to generate the files needed to feed the (chip) fab. The difference between hardware (chips) and software is what happens with the build result.
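To make the parallel concrete, here is a minimal sketch (in Python, purely for illustration; real digital designs use an HDL such as VHDL or Verilog) of what that checking step amounts to: a block described as code, from which the output bits are calculated for given input bits.

```python
# A digital block described as code: a ripple-carry adder built from
# one-bit full adders. "Checking" the design means computing the
# output bits from the input bits, as a simulator or checker would.

def full_adder(a: int, b: int, carry_in: int) -> tuple:
    """One-bit full adder: returns (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def ripple_adder(a_bits, b_bits):
    """Add two equal-length little-endian bit vectors.

    Mirrors a chain of full adders: the carry ripples from the
    lowest bit to the highest. Returns (sum_bits, final_carry).
    """
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# "Test" the design: 3 (bits 1,1,0) + 5 (bits 1,0,1) = 8,
# i.e. sum bits all zero with a final carry of 1.
print(ripple_adder([1, 1, 0], [1, 0, 1]))
```

The same source file serves both purposes the post describes: it documents the intended behavior, and a build step can execute it to verify that behavior before anything goes to the fab.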
So I would expect CM for the design process of chips to be similar, or even identical, to CM for the design process of software. Well, in my present experience it is not! CM for chip design is done as we did it for software in the eighties of the previous century, some 25 years ago. There is hardly any parallel development, and the only controlled versions are the current versions (/main/latest) and the baselines (tagged versions).
From a software perspective, the absence of parallel development is hard to imagine. How can chip design do without it, and still develop multiple product variants at the same time? The answer is: reusing block designs. A chip consists of a combination of (more or less) standard blocks. These can be high-level blocks (e.g. an ADC or a Bluetooth controller) or low-level blocks (e.g. a transistor, an amplifier or a FIFO). These blocks are used to compose a chip design. The challenge is to combine as many blocks as possible, on a chip as small as possible, with the best possible performance (e.g. low noise levels, minimum power dissipation, the largest temperature range, etc.). The cost of a chip is determined by its size, but the market value is determined by the functionality (maximum number of blocks) and the quality (best performance).
One of the biggest challenges of chip design is how to place and connect the blocks and still get the chip within spec. This is a 3-D problem – because a chip consists of multiple layers – and almost all changes in the design have an impact on the complete chip. Therefore, one team cannot change one part of the chip while another team works on another part, and consequently there is not much parallel development.
Development of those reusable blocks (e.g. making a smaller ADC that uses less power and has better performance) is done separately from the chip development. These blocks are called IP blocks. The designers of an IP block don’t really care about the placement and connection of the block on the chip; they care about the internals of the block, making the design possible. The only reason they care about other blocks on the (test) chip is interference: for example, what is the effect if we place the ADC next to a power management block?
Since almost all changes within an IP block have an impact on the whole block, there is little use in designing one part of the block in parallel with another part of the same block. And if that is feasible at all, it is more useful to make them separate IP blocks which can be used (and designed) independently.
Now let’s go back to configuration management. We have seen that chip design and IP block design have little use for parallel development. So CM does not need branching or streaming. If there is a need for (for instance) product variants, it is easy enough to just design them as separate product types, reusing the ideas of one product type for the other. It may even be sensible to design them sequentially, where the first product type takes several months to design and the next one only a few weeks after it.
If we look at software development, we may notice that the use of standard blocks is not “common practice”. Designing a software system involves (1) designing the software blocks and (2) integrating them into a system. So what chip design does in two separate processes (usually also separate projects) is combined in a single software project. This is possible because in software you have infinitely more freedom in interconnecting “blocks” and can change the characteristics of a block infinitely faster. If a software designer has a block that “almost” fits in a system, it is often cheaper to change the block itself than to change the system around it to make it fit. In a modular design, changing the block internally has less impact on the whole system than changing the environment of the block. And therefore, software blocks are often developed by separate teams working simultaneously, using a (more or less) fixed interconnection called the interface architecture. Variants of blocks can – and do – exist simultaneously within the same system, where at compile time, link time, install time or run time it is decided which variant of a block is actually used.
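That last point – several variants of a block coexisting behind one fixed interface, with the choice deferred until run time – can be sketched as follows. This is a minimal illustration in Python; the names (`Codec`, `FastCodec`, `CompactCodec`, `make_codec`) are hypothetical, not from any real system.

```python
# Run-time variant selection: two variants of a "block" implement the
# same fixed interface; which one is used is decided only when the
# system asks for it (e.g. from a configuration value).

from abc import ABC, abstractmethod

class Codec(ABC):
    """The fixed interface every variant of the block must implement."""
    @abstractmethod
    def encode(self, text: str) -> bytes: ...

class FastCodec(Codec):
    """Variant 1: plain ASCII, replacing anything it cannot encode."""
    def encode(self, text: str) -> bytes:
        return text.encode("ascii", errors="replace")

class CompactCodec(Codec):
    """Variant 2: UTF-8, handling the full character range."""
    def encode(self, text: str) -> bytes:
        return text.encode("utf-8")

VARIANTS = {"fast": FastCodec, "compact": CompactCodec}

def make_codec(name: str) -> Codec:
    # The variant is bound here, at run time. At compile, link or
    # install time the same decision would instead be baked into
    # the build or the deployed artifact.
    return VARIANTS[name]()

codec = make_codec("compact")
print(codec.encode("hi"))
```

The interface plays the role of the “interface architecture” from the paragraph above: as long as both teams honor `Codec`, the variants can be developed simultaneously and swapped without touching the surrounding system.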
The simultaneous and concurrent existence and design of software blocks, and the dynamics of integrating those blocks into a system, are so much more complex than in chip design that they require a much more complex configuration management approach. The SCM approach requires parallel development in branches and streams, both at block level and at system level. It also requires a lot more flexibility in dealing with integration scenarios, variants and circumstantial differences. And last but not least, chip designs require a manufacturing process that is very expensive to change once it is established, while software is the final product after the build. Consequently, the dynamics of change in software systems are much higher than in chip designs.
In summary, I would say that although the design processes for chips and for software look very similar, the configuration management requirements are of completely different magnitudes.