RTC2GIT – Rational Team Concert to GIT migration (part 2)

RTC export (RTC_export.py)

As I explained in the previous article (part 1), I use the RTC REST API instead of the RTC CLI to export the history from RTC, because the RTC CLI is not reliable. However, the official (documented) REST API is too limited. The RTC Webgui also uses the REST API, but extends it with unofficial, undocumented calls. These (official and unofficial) GET requests return JSON responses containing the information we are looking for. By analyzing the browser traffic, you can determine the exact GET requests (https) and their JSON responses.

Reverse engineer GET request/response of the RTC Webgui

To figure out what the GET requests of the RTC Webgui look like, we open the page in the Webgui that shows the information. For example, log in to RTC and open the page that shows all streams in Firefox, then press CTRL-SHIFT-E (Web Developer > Network), look up and select the GET request with JSON output containing the stream information (in this case it is typically the last GET with a JSON response), find the URL in the Headers section and the JSON response in the Response section.

The advantage of JSON is that Python dictionaries use roughly the same syntax. This way, you can reverse engineer the REST API used by the RTC Webgui and translate it into GET/JSON calls in Python quite easily.

import json
import requests

# Connection settings (placeholders; fill in your own RTC server and credentials)
RTC_URI = 'https://rtc.example.com/ccm'
RTC_USERNAME = 'user'
RTC_PASSWORD = 'password'

#----------------------------------------------------------------------
# Extract information from RTC
class RTC:
    def __init__(self):
        self.session = requests.Session()
        self.login()

    #----------------------------------------------------------------------
    # Login to RTC
    def login(self):
        response = self.session.post("%s/auth/j_security_check" % RTC_URI, data={'j_username': RTC_USERNAME, 'j_password': RTC_PASSWORD})
        assert 'net.jazz.web.app.authfailed' not in response.text, 'Failed to login'

    #----------------------------------------------------------------------
    # Get all project areas from RTC
    def get_all_project_areas(self):
        url = '%s/service/com.ibm.team.process.internal.service.web.IProcessWebUIService/allProjectAreas' % RTC_URI
        response = self.session.get(url, headers={'accept': 'text/json'})
        response = json.loads(response.text)['soapenv:Body']['response']['returnValue']['values']
        return [{'name':v['name'], 'summary':v['summary'], 'ID':v['itemId']} for v in response if not v['archived']]

This example shows that the get_all_project_areas function uses the internal web services and translates the JSON response into a dictionary structure. Only the 'values' data from the JSON response is relevant. The return value of the function is built with a Python list comprehension that collects the relevant fields from the response list. Being new to Python, I find this a very powerful and beautiful mechanism.
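
To illustrate the mechanism with some made-up data (only the structure mirrors the 'values' entries of the response):

values = [
    {'name': 'ProjectArea1', 'summary': 'First project area', 'itemId': '_abc123', 'archived': False},
    {'name': 'OldArea', 'summary': 'Archived project area', 'itemId': '_def456', 'archived': True},
]
# Keep only the non-archived entries and pick the fields we need
areas = [{'name': v['name'], 'summary': v['summary'], 'ID': v['itemId']} for v in values if not v['archived']]
print(areas)  # [{'name': 'ProjectArea1', 'summary': 'First project area', 'ID': '_abc123'}]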

Exporting RTC history

Now that we know how to extract data from RTC using the REST API of the RTC Webgui, we can go into the export logic. First of all, we need to export from RTC what we will later import into a Git repository. There are 2 types of components in a stream:

  1. Components with history
  2. Components without history

Components without history are just a reference to a baseline of the component from another stream. Without history, there is no point in exporting those components. The history of the components with history needs to be exported. These specific components are defined in components_of_interest, and the streams from which the components are exported are defined in streams_of_interest. In my case, I only export from a single stream, to be imported into a single Git repository later.

For example, exporting the history of 4 components from 2 streams, requires the following definition:

streams_of_interest = [
    'streamA',
    'streamB'
    ]
components_of_interest = {
    'streamA': [
        'component1',
        'component2'
        ],
    'streamB': [
        'component3',
        'component4'
         ]
    }

So the outer loops go through the streams of interest and the components of interest:

rtc = RTC()
for stream in rtc.get_all_streams():
    if stream['name'] not in streams_of_interest:
        continue
    for component in rtc.get_components(stream):
        if component['name'] not in components_of_interest[stream['name']]:
            continue
        print("Retrieving history for %s/%s" % (stream['name'], component['name']))
        for changesets in rtc.get_changesets(stream, component):
            for changeset in changesets:
               ...
               changeset['files'] = rtc.get_changeset_detail(stream, component, changeset)
               for file in [f for f in changeset['files'] if 'itemId' in f]:
                   ...
                   rtc.get_file(file['itemId'], file['afterStateId'], file_path)
               ...
               target = os.path.join(output_path, 'changesets', stream['name'], changeset['ID'])
               os.makedirs(target, exist_ok=True)
               with open(os.path.join(target, 'meta.json'), 'w') as fp:
                   json.dump(changeset, fp, indent=4)

For each component, all changesets are exported. Per changeset, I need to know which files have changed. The content of those files is saved (so it can be imported into Git later on), and the metadata of the changeset is saved in a meta.json file.

Snapshots and baselines are exported in a similar way, except that they don't need the double loop and they don't need to save file content.

    for snapshot in rtc.get_snapshots(stream):
        print("Retrieving baselines for snapshot %s/%s" % (stream['name'], snapshot['name']))
        ...
        for baseline in rtc.get_baselines(snapshot):
            if baseline['componentName'] not in components_of_interest[stream['name']]:
                continue
            ...
            target = os.path.join(output_path, 'baselines', stream['name'], baseline['ID'])
            os.makedirs(target, exist_ok=True)
            with open(os.path.join(target, 'meta.json'), 'w') as fp:
                json.dump(baseline, fp, indent=4)
        ...
        target = os.path.join(output_path, 'snapshots', stream['name'], snapshot['ID'])
        os.makedirs(target, exist_ok=True)
        with open(os.path.join(target, 'meta.json'), 'w') as fp:
            json.dump(snapshot, fp, indent=4)

A stream consists of a set of components. A snapshot is a set of baselines for the components of the stream. Since we are only exporting the components_of_interest, the script only saves the baselines for those components. Unfortunately, I did not find a way to show all baselines of the components, only the baselines present in a snapshot of the stream.

It is important to know that a snapshot contains new component baselines only for the components that have changed since the previous snapshot. For the other (unchanged) components, the snapshot contains the same baselines as the previous snapshot. This means that, going through the snapshots, I will find the same baselines over and over again and only a few new ones for the components of interest.
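
Because the same baselines keep returning, a baseline only needs to be written once. A minimal sketch of a helper (my own addition, not part of the original script) that could be used to skip already-saved baselines, reusing the variable names from the snippets above:

import os

def already_exported(output_path, kind, stream_name, uuid):
    # True when the meta.json of a changeset, baseline or snapshot is already on disk
    return os.path.exists(os.path.join(output_path, kind, stream_name, uuid, 'meta.json'))

# Inside the baseline loop:
#     if already_exported(output_path, 'baselines', stream['name'], baseline['ID']):
#         continue  # this baseline was already saved from an earlier snapshot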

Export details

Maximum batch size

In the middle of the changeset retrieval, you see this double ‘for’ loop.

    for changesets in rtc.get_changesets(stream, component):
        for changeset in changesets:

The reason for the double loop is that RTC only returns the changesets in batches of maximum 100 changesets. The inner loop processes the changesets of a batch; the outer loop retrieves the next batch. The get_changesets function uses a neat generator mechanism in Python (yield) to produce these successive batches:

#----------------------------------------------------------------------
# Get change sets from a stream in RTC
def get_changesets(self, stream, component):
    # This function will yield changeset arrays with the batch size
    # The RTC REST api does not support values higher than 100
    batch_size = 100
    last = None

    while True:
        ...
        params = {'n': batch_size, 'path': 'workspaceId/%s/componentId/%s' % (stream['ID'], component['ID'])}
        if last is not None:
            params['last'] = last['ID']

        url = "%s/service/com.ibm.team.scm.common.internal.rest.IScmRestService2/historyPlus" % RTC_URI
        response = self.session.get(url, params=params, headers={'accept': 'text/json'})
        response = json.loads(response.text)['soapenv:Body']['response']['returnValue']['value']['changeSets']

        changesets = response
        if last is not None:
            # The first changeset is the same as the last changeset from the previous batch
            changesets = changesets[1:]

        changesets = [{
            'objecttype':'changeset',
            'date': changeset['changeSetDTO']['dateModified'],
            'label': [reason['label'] for reason in changeset['changeSetDTO']['reasons']] if 'reasons' in changeset['changeSetDTO'] else [],
            'message': changeset['changeSetDTO']['comment'],
            'author': '%s <%s>' % (changeset['changeSetDTO']['author']['name'], changeset['changeSetDTO']['author']['emailAddress']),
            'modified': changeset['changeSetDTO']['dateModified'],
            'added': changeset['changeSetDTO']['dateAdded'],
            'ID': changeset['changeSetDTO']['itemId'],
            'component': component['name'],
            'componentID': component['ID'],
        } for changeset in changesets]

        yield changesets

        # Stop if less than batch size are returned
        if len(response) < batch_size:
            break

        last = changesets[-1]

Yield acts similar to return (i.e. it returns a batch of max 100 changesets), but instead of ending the function (as return would do), processing continues with the while True loop to retrieve the next batch of changesets. This infinite loop is ended by the break. On the outside, calling get_changesets gives an iterable that produces the values of the successive yield statements, i.e. all batches of max 100 changesets. This is called a "generator" in Python.
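
To see the mechanism in isolation, here is a standalone sketch (not part of the migration script) of a generator that yields successive batches of a list:

def batches(items, batch_size=100):
    # Yield successive slices of at most batch_size items
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

for batch in batches(list(range(250))):
    print(len(batch))  # prints 100, 100, 50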

Each exported changeset consists of the following fields, which are stored in a meta.json file in the folder identified by the changeset UUID (an illustrative example follows the list below):

  • 'objecttype' – changeset, baseline or snapshot
  • 'date' – datetime stamp of the changeset, typically latest modification date
  • 'label' – reason of the change, typically the workitem linked to the changeset
  • 'message' – comment provided with the changeset by the developer
  • 'author' – name and email address of the developer
  • 'modified' – date of the latest modification of the changeset
  • 'added' – creation date of the changeset
  • 'ID' – UUID of the changeset
  • 'component' – component modified by the changeset
  • 'componentID' – UUID of the component
  • 'files' – field filled by the get_changeset_detail function
    • 'path' – relative path of the file
    • 'type' – type of change, e.g. added, modified, renamed, deleted
    • 'itemType' – type of object, e.g. File, Folder, SymbolicLink
    • 'beforepath' – relative path of the file before a rename or move
    • 'size' – size of the file
    • 'itemId' – UUID of the file
    • 'afterStateId' – UUID of the version of the file
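
For illustration, a changeset meta.json could look like this; all values (including the date format) are made up, only the field set follows the list above:

{
    "objecttype": "changeset",
    "date": "2019-03-07T14:25:00",
    "label": ["Work item 12345: fix sensor timeout"],
    "message": "Fix sensor timeout",
    "author": "John Doe <john.doe@example.com>",
    "modified": "2019-03-07T14:25:00",
    "added": "2019-03-06T09:12:00",
    "ID": "_changesetUuid",
    "component": "component1",
    "componentID": "_componentUuid",
    "files": [
        {
            "path": "src/example.c",
            "type": "modified",
            "itemType": "File",
            "size": 1234,
            "itemId": "_fileUuid",
            "afterStateId": "_stateUuid"
        }
    ]
}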

Non-chronological history

Another problem I ran into is that the history (of a component) is not chronological. For example, I found a sequence of changesets of a component dated in 2019 with a changeset from 2018 in between. So, I wondered how that is possible.

The reason for the non-chronological history is that the date of a changeset is the latest modification date of the changeset, not the delivery date into the stream. If a changeset was last modified in 2018, but delivered to the stream in 2019, it sits between the changesets created/modified and delivered in 2019.

Although the history itself is not chronological, RTC (i.e. the REST API) returns the history in the correct sequential order: newest changeset first, oldest changeset last. This is the same order as shown in the RTC Webgui. To keep track of this order, I add a sequence number to each changeset. With it, the Git import can reconstruct the right order for the import.

                changeset['seqnr'] = seqnr
                seqnr += 1

Incremental export

Now suppose you have run the export for a number of hours and there is a hiccup in the communication with the RTC server. The script stops. When you restart the script, you don't want it to start over from the beginning and redo what it already did. But how do you know where it left off?

Well, you don't! The script does go over all changesets from the beginning, except that files already exported are not exported again. Instead, it reads the meta.json file and updates the sequence number.

                target = os.path.join(output_path, 'changesets', stream['name'], changeset['ID'])
                json_file = os.path.join(target, 'meta.json')
                if os.path.exists(target):
                    with open(json_file) as fp:
                        changeset = json.load(fp)
                    print("\tAlready exported changeset", changeset['ID'], changeset['date'], "%s/%s" % (stream['name'], component['name']))

                changeset['seqnr'] = seqnr
                seqnr += 1

If the changeset was not exported yet, the files of the changeset are downloaded and saved. Then the meta.json file is written. This ensures that the changesets get the correct sequence number.

                if not os.path.exists(target):
                    os.makedirs(target, exist_ok=True)

                    changeset['files'] = rtc.get_changeset_detail(stream, component, changeset)
                    for file in [f for f in changeset['files'] if 'itemId' in f]:
                        file['seqnr'] = seqnr
                        seqnr += 1

                        file_path = os.path.join(target, component['name'], file['path'].replace("/", "\\"))
                        # Prefix the path with \\?\ to allow paths longer than 255 characters
                        file_path = "\\\\?\\%s" % file_path
                        os.makedirs(os.path.dirname(file_path), exist_ok=True)
                        rtc.get_file(file['itemId'], file['afterStateId'], file_path)

                os.makedirs(target, exist_ok=True)
                with open(os.path.join(target, 'meta.json'), 'w') as fp:
                    json.dump(changeset, fp, indent=4)

Conclusion

Now that we have the history of all components of interest exported from the stream of interest, we can move on to importing it into Git. In principle, this involves replaying the changesets from the exported history and applying the baselines. But in practice, there are a few challenges to overcome. That is something for the next article.


RTC2GIT – Rational Team Concert to GIT migration (part 1)

Introduction

Rational Team Concert (RTC) is a great, extensive and rather complicated configuration management (CM) system. Now, after several years, it is time for us to move on to Git. Git is simpler to use, more straightforward in its approach and cheaper. We have many project areas in RTC, each with many streams with many components, each stream with a long history of changes, baselines and snapshots, some of them with over 5 years of history. To switch to Git, we want to migrate the history from RTC to Git, and we need a tool for that. In this article (series) I will explain how we did it.

On the internet I found 2 solutions:

  1. rtc2git tool based on the RTC CLI (Command Line Interface), implemented in Python. See Github. There is also a version in Java.
  2. rtc2git tool based on the RTC REST API (Application Programming Interface), implemented in Bash scripts. See BitBucket

The CLI-based tool is buggy because the RTC CLI is buggy. This is a known issue at IBM. Istvan Verhas implemented an API-based tool in bash, see this article. I could not find his bash scripts, but in general I find bash scripts rather cryptic and unreadable, especially if they contain a lot of awk and sed commands. So, using an example of an RTC changeset exporter in Python, I decided to build my own RTC2GIT tool in Python.

At first I was a bit reluctant to dive into Python since I had no prior experience with it. I do have some experience with Perl (over 20 years ago). To be honest, Python turns out to be a lot easier to learn than Perl, as long as you set your editor to replace tabs with spaces – Python is quite strict about indentation. Mixing tabs and spaces results in errors that are hardly visible to the human eye.

The real challenge was that the requirements for the RTC2GIT tool evolved over time. Starting with a simple migration of a single component from a single stream, it quickly turned into a complex set of requirements, e.g. multiple components, multiple streams, loadrules, incremental migration (continued development in RTC after migrating the history to Git) and adding branches (RTC streams branched off from another stream become branches in Git). It turned out to be too much for a single article, so I decided to write it down in separate articles. This is the first, introductory article where you will get a general overview.

General concepts

As I wrote in the introduction, I go for an RTC2GIT tool using the RTC REST API and written in Python. Basically, it consists of 2 tools:

  1. RTC_export.py – exporting history from RTC to a folder/file structure on a local disk (data/ folder)
  2. GIT_import.py – importing history from the local disk (data/ folder) into a Git repo

By separating export and import, I am able to perform several trials of the export from RTC without redoing the import, and I can do import runs on all or a selection of the data without redoing the export. Note that an export or import may take several hours for large histories. I have done many export runs on selected components with a small history (so the export is quick), and many imports on a (small) selection of exported data.

The data structure created by the RTC export looks like this:

data/rtc2git/changesets/<streamName>/<UUID>/meta.json
data/rtc2git/changesets/<streamName>/<UUID>/<path>/<file>
data/rtc2git/baselines/<streamName>/<UUID>/meta.json
data/rtc2git/snapshots/<streamName>/<UUID>/meta.json

The <UUID> is the unique identifier of the changeset, baseline or snapshot in RTC. The meta.json file contains meta-information such as the datetime stamp, deliver message, workitem, change type (e.g. "added", "modified", "renamed" or "deleted") and the <path>/<file> of the file or folder that has been changed by the changeset. Using the files on the local disk, the GIT import script is able to reconstruct the changesets into commits on the Git repository, which looks like:

rtc2git/.git/
rtc2git/<path>/<file>

As you know, the .git folder contains the Git repository and <path>/<file> represents the folders and files of the Git workspace. The Git workspace is (re)constructed using the exported changesets from RTC and then committed to the (local) Git repository. More details about GIT import in follow-up articles.

In the next article, I will dive into the details of the RTC export script.


Configuration Management with SAFe

Introduction

Configuration Management is crucial for the success of an enterprise, whether it applies an agile, lean or a more traditional approach. Although CM is crucial, it has been given little attention in SAFe and other agile methodologies. In this article, I make an attempt to describe CM in concise form. Feel free to comment.

Lean-Agile Configuration Management

Lean-Agile Configuration Management is an important enabling function at all levels of the Scaled Agile Framework (SAFe). Traditionally, CM is based on the principles and practices from military traditions as captured in ISO standards [IEEE 828-2012] and CMMI [CMMI for Development, version 1.3], where the focus is on control of assets. This easily leads to blocking or delaying changes, sticking to the plan and contract agreements, and comprehensive documentation. This defies the value statements of the Agile Manifesto. The focus of Lean-Agile Configuration Management is on enabling and supporting interactions, collaboration and change, while maintaining working software throughout the life cycles of development and operation and protecting the assets from loss or damage with a minimum of waste.

A common misunderstanding is that modern CM tools can solve the CM challenges of the organization. It is true that CM tools have improved significantly since the invention of configuration management. But although good CM tools are a prerequisite for good CM practices, in a complex software and systems development environment good tools do not guarantee good practices.

Details

In SAFe, Lean-Agile Configuration Management is a responsibility of the System Team. The System Team assists with building and using the Agile development environment. As the System Team is one of the Agile Teams, the System Team is part of an Agile Release Train (ART). For large enterprises, the System Team may be split off into an infrastructure team at enterprise level, while the development teams of the ARTs take on the responsibility for its use.

Lean-Agile configuration management consists of two primary responsibilities:

  • Manage assets
  • Manage change

The following sections describe the practices to implement these responsibilities.

Manage assets

For managing assets we use the input-process-output model [IPO]. Every process step requires input assets to produce output assets, which in turn are used as input assets in another process step. If the output is not being used or has no value, directly or indirectly, for the benefit of the customer, it is waste. Similarly, input that is not used in the process is waste. Lean-Agile Configuration Management manages both input and output assets.

[Figure: the input-process-output (IPO) model]

Basic asset management practices are:

  • Identification – is the practice of assigning a name or number to an asset. Every asset has a unique identification, for example a file name for a source file or a number for a release. If the same name or number is used within the same development environment, a location path to the source location helps make the identification unique.
  • Version control – is the practice of keeping track of who made changes to the system, when, what was changed and possibly other metadata attributes, e.g. status. A version control system assures that subsequent versions of the assets can be identified, stored, retrieved and protected against loss (deletion) and damage (change). Typically, versions of an asset are identified by a version number.
  • Structure and location – is a practice of grouping assets into a logical hierarchical relationship. File-based assets, e.g. source code or documents, can be organized in a folder structure within the repository. Object-based assets stored in a database, e.g. requirements, test cases or Bills-of-Material (BOM), use a grouping mechanism to structure the atomic assets into composite assets. Non-hierarchical structures can be defined by using ‘labels’ or ‘tags’.
    Using the structure, the location, version and name of an asset or a version can be identified, for example by repository name, version number, folder path and file name, or by Uniform Resource Identifier (URI).
  • Security – is the access control practice to assure that only authorized users are able to access and/or modify the assets. Access permissions can be assigned to individual users, or to user groups. Typically access permissions are create, read, update and delete (CRUD).

More advanced asset management practices are:

  • Collaboration and sharing – assets or versions of assets are shared with other users. The security for those collaborating users may be different from the access permissions of the primary users. For example, a reviewer may require temporary access (for the duration of the review) to one particular version of an asset, without access to any other.
    Another form of collaboration is co-authoring. Co-authoring allows multiple authors to edit the asset simultaneously and interchangeably.
  • Releases and baselines – are defined collections of (a version of) assets. A baseline is a collection of assets serving as a starting point or reference for changes. A release is a collection of assets serving as a final point for a delivery.
  • Branching – is applied as an advanced form of versioning and structure to support parallel development. A branch represents a sequence of versions of the same asset in a different context. For example, a feature branch contains the changes for the particular feature; another feature branch contains the changes for another feature. By merging the feature branches, functionality of different features is combined.
  • Workflow or status control – is a practice to assign a status value (metadata) to an asset to identify the progress through a workflow. For collections of assets (e.g. a release or baseline) or for composite assets (e.g. a component in a bill-of-material), the status of the atomic assets may be different from the status of the composite asset. For example, a user manual may have status “final” while a deployment package containing the user manual has status “in review”.

So far, there is no difference between Agile CM and traditional CM.

Manage changes of assets

In traditional CM we identify two types of work items to manage change:

  • Change Request (CR)
  • Problem Report (PR), a.k.a. bug

In Lean-Agile CM, we have no need for change requests. Changes of plans are managed through backlog items, in the form of Epics or Enablers, or lower level work items such as Stories.

Problem Reports or bugs are not (yet) defined in SAFe, possibly because it is assumed that Built-In Quality sufficiently assures that no bugs leak through to the customer. In practice, defects do leak through and are reported back to the development organization, where they are managed as PRs or bugs. We cannot plan PRs in the iteration planning or the PI planning upfront, because the specific PRs cannot be predicted.

In Lean-Agile CM, we manage the following work item types:

  • Backlog items – for example Epics, Enablers, Features, or Stories. Backlog items are managed through the normal backlog management practices.
  • Defect items – for example Problem Reports, bugs, or issues.

Lean-Agile CM practices for managing change are:

  • Backlog management – handling the backlog items through the normal agile planning processes as defined on the various levels in SAFe, such as portfolio, program or team level.
  • Change Control Board (CCB) – is authorized to take decisions on priority of defect issues. The defect items with high priority may overrule the iteration planning; lower priority items are planned in the iteration planning meeting.

When a high-priority defect item must be handled in the current increment or iteration, it is not an option to put it on the backlog and plan the item in the next program increment planning or iteration planning. A common practice for handling high-priority issues is an 'expedite' or 'emergency' lane. Any issue on the expedite lane shall be handled immediately by the release train, and any resource is at the disposal of the expedite lane.
Other defect items are put on the program or team backlog, where they will be handled in priority order by the teams like any backlog item.

[Figure: Program and Team CCBs]

Priority defect items from the program CCB that need to be solved in the current program increment, must be escalated to the CCBs of the affected teams to assure that they are planned and handled with the appropriate priority in the teams. For that reason, the product owners of the Team CCBs are represented in the Program CCB.

The difference between Agile CM and traditional CM concerning managing changes is that in Agile CM the goal is to respond to change as fast as possible while in traditional CM the goal is to negotiate the impact on the plan, conditions and commitments before accepting a change.

Discussions

Configuration Manager role

With traditional configuration management, the responsibility for CM activities lies with the Configuration Manager. No assets are added to the repository and no changes are applied to any item in the repository without the consent of the configuration manager.

In a Lean-Agile CM context, many of the CM activities are automated or delegated to the users, decision authority for CM is shared and delegated to the agile teams, and the CM tools have powerful security features. Responsibility for managing assets and managing changes to assets lies with the Scrum Master (SM), the Release Train Engineer (RTE) or the Solution Train Engineer (STE).

Backup & restore

Backup and restore are no longer a subject of configuration management. Modern IT systems have evolved into reliable systems where loss and damage due to system failure have been mitigated through redundancy and data-protection measures.

Undoing operations

A misconception about configuration management, in particular version control, is that its purpose is to protect developers from themselves by allowing them to revert changes they have made. To support developers in this, there are two approaches:

  1. Workspace or stages
  2. Progressive reversion

A workspace is a copy of (a part of) the CM repository that is private to the developer, and not under control of configuration management. Until the changes in the workspace are committed to the CM repository, they remain private to the workspace. Most CM tools support workspaces and allow reverting (undoing) changes in the workspace. But when the changes are committed to the CM repository, they cannot be undone anymore.

To undo these changes anyway, we need to make a new change that reverts the change. In fact, the reverting change is an additional change; hence progressive reversion.

Configuration auditing

The purpose of configuration auditing is to verify compliance with specific requirements or standards. Since compliance with requirements or standards is a quality attribute, it is no longer a configuration management responsibility to audit it.

In Lean, it is preferable to prevent quality issues during development rather than correct them afterwards. Compliance with requirements and standards should be considered part of Built-in Quality.


The Agile Manifesto in Dutch

Although a translation of the Agile Manifesto into Dutch already exists (see here), I prefer my own translation. These are not always literal translations from the English, because I try to capture (for myself) the essence.

Manifesto for Agile Software Development

We show that software development can be done better by doing it and by teaching it to others. Because of this, we value:

People and interactions over processes and tools

Working software over complete documentation

Collaboration with the customer over contract negotiations

Responding to change over following a plan

Although the items on the right do have value, we see more value in the items on the left.

Twelve principles behind the Agile Manifesto

We follow these principles:

  1. Our highest priority is a satisfied customer, by quickly and frequently delivering software that adds value for them.
  2. Welcome changes in needs, even late in the development process. Through Agile processes, those changes mean a competitive advantage for the customer.
  3. Deliver working software regularly, every couple of weeks or months, preferably at the shorter interval.
  4. Business people and developers must work together daily, throughout the whole project.
  5. Form teams of motivated people. Give them the working environment and support they need, and trust them to do their work well.
  6. The best way to communicate with or within a team is to be able to look each other in the eye.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers and users must be able to sustain the pace.
  9. Continuous attention to high-quality techniques and sound construction strengthens agility.
  10. Simplicity – the art of leaving things out – is essential.
  11. Self-organizing teams lead to the best ideas, designs, and results.
  12. At regular intervals, the team evaluates how it can do better and adjusts its behavior accordingly.

 


SAFe 4.0 for small release trains

Introduction

SAFe 4.0 states that an Agile Release Train consists of 50 to 125 people. Larger teams should be split up into multiple release trains, and smaller teams should be organized as individual agile teams, not as an agile release train. A second statement in SAFe 4.0 is that the Value Stream level is applicable for very large organizations of many hundreds or thousands of people, with multiple value streams and multiple agile release trains. This article covers two situations that are corner cases of SAFe 4.0: (1) small release trains of less than 50 people, and (2) managing multiple small release trains.

Situation

Suppose a product development organization is about 100 people, developing multiple independent commercial (off-the-shelf) products. One product is a mobile app that will be released to the app store every 2 weeks. The second product is a website that can be released every day, every hour or even continuously. And other products are high-tech devices that can be released no more than 2 or 3 times per year.
Now the question is: should these 100 people be combined into a single agile release train (ART) with multiple cadences, into multiple ARTs some of which are too small for a release train (< 50 people), into one large-enough ART (> 50 people) plus multiple agile teams, or into agile teams without a release train? And the big question is: how to keep everything aligned if you have small ARTs, big-enough ARTs and loose agile teams?

To make the situation even more complicated, suppose the app, website and devices form the components of an integrated system that is also commercialized. In fact, the integrated system is the USP for the individual products. And to go one step further, other vendors may partner up to have their products and systems integrated into this system.
Now again, the question is: how should these 100 people be organized?

Solutions

Option 1: a single ART of 100 people

From a SAFe 4.0 perspective, having a single ART for 100 people is preferred. It is in the range of 50-125 people. However, from a practical point of view it is quite artificial to combine the app teams with the web teams and the devices teams. They are independent teams, with their own planning, their own technology, their own resources. Combining, for instance, the PI planning session and collocating them isn't really improving effectiveness if they work independently.

On the other hand, although being independent products they may be able to interconnect into a system. It is typical for the Internet-of-Things (IoT) that everything is interconnected to become a big system, but that does not necessarily mean that everything should be put in the same release train. With IoT you don’t know upfront which systems will actually be interacting.

Option 2: no ART, many agile teams

The other extreme is to have no release train at all, only singular Scrum teams. Each team has its Product Owner (PO) responsible for the Voice-of-the-Customer. But there may be multiple related Scrum teams. For example, in app development there may be an Android, an iOS and a Windows team, and they need to be aligned on feature and planning level. Also for devices, there may be very similar devices that need to be aligned on feature and planning level.

So if all teams are autonomous Scrum teams, the question is how to organize the alignment between teams that need alignment. Scrum-of-Scrums is an option but that does not take into account the PO and backlog alignment between teams. Of course, you can try to invent an approach, but… hasn’t SAFe already done that?

Option 3: 1 ART for devices, agile teams for the rest

Assuming the device development teams comprise over 50 people, we could make an ART of the devices teams. It provides the program coordination, alignment and synchronization. However, what to do with the rest? The remaining teams that belong together are less than 50 people, so according to SAFe 4.0 they can't form a release train.

So in this situation, we have an ART and a set of Scrum teams, some of them belonging together. And since they belong together, we need some kind of alignment and synchronization between them. Since they are too small for a release train, we end up with 2 approaches for alignment and coordination: SAFe 4.0 ART and some kind of Scrum-of-Scrum like organization.

Option 4: 1 ART for devices, 1 ART for app, 1 ART for web

That leaves us with the ultimate solution of combining all teams that belong together in an ART, yet separating the teams that are independent into different ARTs. The devices development may be large enough to fit the SAFe 4.0 criteria; the other ARTs are smaller than 50 people. And it is a small step to assume that also the devices teams are less than 50 people.

SAFe argues that for agile teams smaller than 50 people the overhead of an ART exceeds the benefits. Yet given that the other options cause a lot of confusion, which is bad for the effectiveness of the teams, I think making multiple small ARTs is the best option.

Dependent independence and IoT

Now that sounds strange. But with IoT technology, everything becomes interconnected, including independent 'things'. The app must be aware of the services that the devices provide, and the devices may become interdependent by responding to the reaction of other devices. For example, weather sensors detect an increase of rain and wind, a weather website predicts a thunderstorm moving towards your house, the app warns you that your sun screens are down, and you may decide to open your sun screens with your app to prevent them from being damaged by the thunderstorm.

In that scenario, the supposed independent products all of a sudden become dependent because they have to know the services that other products provide. The app should have knowledge of weather reports, sun screens and other devices. But does that imply that the development teams should be part of the same ART?

Yes, no, that depends! Suppose the wind sensors and the sun screens are built by our company, but the rain sensors are not. Originally, the wind sensor was controlling the sun screens directly: when the wind is above a particular threshold, the sun screens are opened. That could be a reason to combine the rain sensor team with the sun screens team. But when the rain sensor reports to a weather reporting website independently of the sun screens there is no need to combine them.

SAFe for small ARTs

Defining mini-ARTs

Finally, I have arrived where I want to be. There may be reasons for an organization to define ARTs of less than 50 people. Some people think that when following SAFe, all practices are mandatory. But that is not true! You are allowed to deviate from anything that SAFe defines; what is not allowed is to claim that those deviations are part of the SAFe model. They are not, because Scaled Agile Inc. does not define it that way, and they have the copyright on the SAFe model and the trademark on the term 'SAFe'.

You are allowed to create an Agile Release Train of less than 50 people, and you should if that helps your organization. But you may need to reconsider the "overhead" of an ART. For example, some roles may not need to be full-time jobs, and some events may be shortened. I would not recommend skipping events (e.g. the PI Planning session or the I&A workshop), because SAFe is already quite minimal in the number of formal ceremonies – although at first sight many people may disagree.

Managing mini-ARTs

Although the ARTs may be independent from each other in development activities and functionality, there may be good business reasons to establish alignment and synchronization between the ARTs. For example, the marketing strategy may require that launches of products and services to the market are aligned into a common event for a specific commercial window, e.g. 4 weeks before Christmas. Retailers won't be happy if they want to decorate their shops for Christmas and the products and services are launched at different dates.

SAFe 4.0 defines the Value Stream level in between Portfolio level and Program level. The Value Stream level is intended for “very big and complex organizations”. But as we have seen above, an organization may be medium size and still become rather complex with multiple mini-ARTs. Now the question is: is the Value Stream level suitable for managing multiple mini-ARTs?

No! If you look closely at the Value Stream level, it is primarily suitable for managing big value streams with big features (called “capabilities”) that are implemented in multiple ARTs because the total team is too big to fit a single ART (because a single ART would become unmanageable). The Value Stream level is not suited for managing multiple mini-ARTs that are together small enough for a single ART but that cannot be combined into a single ART.

But with a small change, the Value Stream level can be made suitable. Since the mini-ARTs are independent release trains, with distinct functionality and architecture runways, there is no point in defining Capabilities for functionality that stretches over multiple ARTs. But you can define capabilities (epics, features, stories) for the common, aligned work, such as marketing preparations (photos, videos, advertisements, retailer decorations, etc.). And the pre- and post-planning activities of the PI Planning event could be used to align multiple (mini-)ARTs with the common value stream work.

Conclusion

SAFe claims that an Agile Release Train (ART) should be between 50 and 125 people. In this article, I have shown that mini-ARTs of less than 50 people may be applicable for medium size organizations developing multiple independent products within the same value stream. For managing the individual mini-ARTs, the concepts of the SAFe Program level are nicely applicable. Arguably, this leads to extra overhead, but coordinating the agile teams without using mini-ARTs would lead to more overhead.

Although the ARTs may be independent on features, it may be required to align particular activities across the release trains that have no impact on the functionality of the individual ARTs, for example common marketing activities. The SAFe Value Stream level may be suitable for managing the common activities and assuring that the impact on the work within the ARTs is incorporated in the normal release train activities.

 


“Olifantenpaadjes” or how to change habits


One of the most difficult things in change management is changing people's habits. Habits are like reflexes: act without thinking why and how to do it. Habits save a lot of time and energy. How difficult would walking or driving a car be if we had to think about what to do, why we do it and how we should do it.

When I go to work, I always take the same route. Boring… some people say. But it allows me to think about other things without thinking about where to go. Sometimes I try an alternative route to avoid traffic jams. And if the traffic jam happens every day, the alternative becomes my fixed route, even when traffic is low and there are no traffic jams. Just out of habit.

This is how "olifantenpaadjes" (Dutch) are born. Literally this translates to "elephant paths", but in English they are called "desire paths" or "game trails". Usually desire paths are the result of a shortcut; cutting a corner is shorter and faster. But sometimes the path is longer, e.g. going around a fence over the grass. The same with habits. Habits are usually efficient ways of doing something, but often they are just the easier way, not necessarily more efficient or better.

In change management, changing a way of working is like changing the roads. Deployment of the change involves instructing and training people to follow different roads and routes. But as old habits die hard, people will continue to follow the old routes, which then become "elephant paths". The reason I don't like the term "desire paths" is because people don't desire to follow the path, they just do so out of habit, following the same way as the other elephants go. One elephant going the "right" way won't change the habit of the herd.

So what can you do? You can punish people for not following the roads (or rules), by fining them, taking disciplinary measures or putting it in their evaluations. Or you can instruct and train them until their habits have changed. But since old habits die hard, people will continue to fall back to their familiar "elephant paths".

A better way would be to make use of the natural inclination of people to take the “elephant paths”. Instead of paving the roads where you want people to go, you should build obstacles around the areas you don’t want people to go and allow them to go anywhere else. People will go around the obstacles in the easiest possible way to get to their destination. New elephant paths will grow that way and old ones will be forgotten. Then you can pave those new elephant roads and remove the paving of obsolete paths. You don’t need signs, rules or guidance to stay on the road! People will just do it.

So how does this change change management? First of all, change management should not focus on defining and deploying processes. People do not follow processes, they follow "elephant paths". So instead, change managers should make it easy to realize the dream of success. Secondly, they should pave the "elephant paths" to make it even easier to follow them, and clean up the obsolete paths. And finally, change managers should place obstacles at strategic places in the processes. This may be where an elephant path goes where you'd rather not have it – so people start taking an alternative path – or at places where you absolutely don't want people to go, regardless of whether a path leads there or not (e.g. restricting access to confidential information).


SAFe 3.0 versus SAFe 4.0

When SAFe 4.0 was released, my first question was: which one is better, 3.0 or 4.0? In my opinion, the 3-level model of SAFe 4.0 is definitely an improvement compared to SAFe 3.0.

  • The triangle RTE – System Architect – Product Manager is a much clearer representation of main roles at program level
  • The program backlog is managed by a Kanban. In Jira, we have struggled with using a Scrum board or a Kanban board, but finally the Kanban board seems to work best on program level.
  • Scrum and Kanban on team level. The team backlog is managed by Scrum or Kanban. Scrum teams can have timeboxes (sprints) of 2 or 3 weeks. But some teams have a more continuous process that does not fit in fixed timeboxes, so working with Kanban is more convenient.
  • Incorporation of SAFe values and principles in the poster. Many critics of SAFe claim that SAFe is a single prescribed process, while SAFe proponents claim it is an open framework leaving room for fitting in your own processes and practices. With the SAFe values and principles more prominently depicted, it is emphasized that you have to think yourself about how to implement SAFe, within the framework.
  • Customer. In 3.0, the customer was implicit while in 4.0 it is made more explicit. Still there is much room for interpretation about what the customer actually is.

Things I don’t like about SAFe 4.0 (and to some extent SAFe 3.0):

  • Enterprise Architect. A better name would be “Business Architect”. For an IT organization, it is okay to have an enterprise architect because the products and systems being developed are for the business processes of the enterprise itself. But for a production company, the business is selling products and systems to the customer markets.
  • Product Management. A better name would be "Product Manager". Product management is a function within the enterprise responsible for managing the products in production, marketing, sales, logistics, procurement, etc. A product manager is a role that understands the features of the products or systems, and as such is a partner to the RTE and System Architect.
  • Enablers. Enablers are just a different kind of epic, feature or story. They are written, prioritized and follow the same rules as their respective Epics, Features and Stories. So why not call them Epics, Features and Stories?
  • Quality and problem solving. SAFe defines that you need to build in quality as you go, but practically you cannot – ever – build a perfect system without problems. So somehow, while working on a new iteration you need to cope with problems that emerge in past releases. I am missing that in SAFe 3.0 and still missing it in 4.0.

Now the question arises: if you have built your agile organization around SAFe 3.0, do you need a (major) transformation to adopt SAFe 4.0? No! If you implemented SAFe according to the spirit (and not according to the letter), you have made your own interpretation of each and every element of SAFe that fits your organization. This interpretation is still valid with 4.0 and you don’t need to change anything (to adopt SAFe 4.0).

Of course, with SAFe 4.0 you may have new insights that may lead to a desire to change. In fact, you should change something to improve. If there is nothing left to improve, please let me know.


Implementing SAFe with Jira (part 3)

In part 2 we have looked at estimating and planning of a release of the Agile Release Train. This is mainly on SAFe portfolio and program level. In this part, I will focus more on the challenges of estimating and planning on program and team level. Since Jira does not offer much cross-project support, we will be using the structure plugin. So first, we must get our structure boards right.

Structure plugin (revisited)

Last time, I explained that you can make a structure board with the structure plugin, and populate and locate issues on the structure board with synchronizers, in particular:

  • Filter synchronizer – use a filter query
  • Agile synchronizer (JIRA Agile (Greenhopper) – using Epic links and Subtasks
  • Links synchronizer (Issue Links synchronizer) – using “Implement” links

The filter synchronizer can be configured to remove issues that are outside the filter query results. For example, if you filter on ‘project = PSYS AND status != Closed’ the filter synchronizer removes issues when they are put in Closed status. But, it will also remove issues for other projects than PSYS. And given that the links synchronizer will typically add issues from other Jira projects (e.g. the portfolio project or product development projects), a resync of the filter synchronizer will remove issues added by the links synchronizer. We don’t want that!

My first instinct was to configure the filter synchronizer not to remove issues, only add them. That way, issues added by the agile and links synchronizers remain on the structure board. It worked at first, but then issues of the product teams started popping up on the system structure board. And obviously, they were not removed. So, why did they pop up and how do I get rid of them?

They popped up by "clone and move". Suppose we want to clone a system epic on program level to a product epic of a particular project on team level. Step 1 is to clone the system epic. The result is that 2 identical system epics are on the system structure board. Step 2 is to move the (cloned) system epic to the product project. Jira actually renames the issue id (key). The structure plugin keeps showing the renamed issue id on the structure board because the filter synchronizer is not configured to remove it. So, now the question is how to get rid of it on the structure board.

  • Manual way 1: select the issue and press the ‘x’ button on the structure board.
  • Manual way 2: select all issues and press the ‘x’ button on the structure board, which makes the board empty. Then resync the filter synchronizer and then the agile and links synchronizers to add, sort and locate issues on the structure board
  • Automatic way: extend the filter of the filter synchronizer and configure it to remove issues you don’t want

In Jira it is not possible to define a query to find all issues that the Agile and Links synchronizers add. The query language (JQL) is just too limited for that. But the structure plugin does add the function structure(<structure>,<query>) to find issues on a structure board. Now, if the filter synchronizer adds PSYS issues and the agile and links synchronizers only add sub-issues underneath them, the top issues are always PSYS issues. Any top issue that is not from the PSYS project should be removed. Result:

project = PSYS OR structure("System structure", "ancestor in [project = PSYS]")

In English: all PSYS issues and all sub-issues on the structure board underneath a PSYS issue. If you want multiple projects on the structure board you can use project in (PSYSA,PSYSB,PSYSC) instead.
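
For example, following the same pattern for three projects, the filter would become (a sketch, not tested against a live Jira instance):

project in (PSYSA, PSYSB, PSYSC) OR structure("System structure", "ancestor in [project in (PSYSA, PSYSB, PSYSC)]")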

Jira boards for SAFe

Let’s have a look back: what boards do we have for SAFe implementation in Jira?

  • Portfolio level
    • Portfolio Kanban board – manage the workflow of the portfolio backlog items
    • Portfolio Structure board – visualize the hierarchy from portfolio to products
  • Program level
    • System Kanban board – manage the workflow of the system backlog items
    • System Structure board – visualize the hierarchy from system to products
  • Team level
    • Product Scrum board – manage the workflow in sprints of product backlog items
    • Product Structure board – visualize the hierarchy from product to portfolio

On the portfolio structure board and the system structure board, it is only a downwards hierarchy to the product. On the product structure boards, it is an upward hierarchy to the portfolio, which requires a different filter for the synchronizer:

project = PRDA OR structure("Product A structure", "descendantsOrIssue in [project = PRDA]")

Of course the links synchronizer needs to be configured differently to locate issues in an upwards hierarchy, i.e. adding parents instead of sub-issues.

Orphans

Orphans are backlog items on the system or product backlog that are not related directly or indirectly to an item on the portfolio backlog. Portfolio backlog items represent value to the business. Since an orphan is not related to a portfolio item, it does not add value to the business and therefore working on an orphan is wasted effort. Of course, if links are missing they may seem like orphans but they are not.

To be able to eliminate waste (which is a Lean principle), we strive to link all backlog items to a portfolio backlog item. This includes defects; more on defects later. For example, a story (on team/product level) may be linked to a product epic (through an epic link), which is linked to a product epic of another team (through an "implement" link), which is linked to a system epic (through an "implement" link), which is linked to a business epic (through an "implement" link) on the portfolio backlog. So working on the story adds value to the portfolio backlog item and thus it is not waste.

Finding orphans is quite simple: every backlog item without an ancestor (i.e. parent, grandparent, …) on portfolio level is an orphan. As a query, assuming PBUS is the portfolio project in Jira:

structure("Product A structure", "ancestorOrIssue not in [project = PBUS]")

Estimating and planning

Now that we have structure boards cleaned up of orphans, we can finally start estimating. One of the nice things about the structure plugin is that it can accumulate estimates: original estimate, remaining estimate and logged work. For each estimate it shows 2 numbers: excluding and including the estimates of the underlying items. So when the structure hierarchy is complete, you can nicely see the totals of original estimate, remaining estimate and logged work, assuming the numbers are filled in correctly.

However, in practice the hierarchy is never complete. There are portfolio items that have not yet been refined into system items, and there are system items that are not yet, or only partially, refined into product items. Here is the dilemma:

  • Complete the hierarchy before estimating a portfolio item
  • Estimate the portfolio item before completing the hierarchy

Completing the hierarchy takes a lot of time and effort. You need to do an impact analysis and make technical (architectural) decisions to determine the affected system and product parts. You should do that only for backlog items early on the roadmap. However, if you estimate before the hierarchy is complete, the estimate may be unreliable. So what to do?

Simple answer: estimates on any level (portfolio, program or team level) should take into account the velocity at that level only. The story point estimate is a size estimate for a backlog item relative to other backlog items on the same level. Don’t take underlying estimates or velocities into account! Based on the (recent) past, calculate the velocity (story points per time unit). For the upcoming iteration (e.g. program increment on program level, or sprint on team level), set the planned velocity equal to the velocity of the last iteration. Then adjust:

  • If the velocity over the past iterations shows a trend (e.g. increasing), adjust the planned velocity in the extrapolated direction
  • If there is a significant change in the development staffing (e.g. new people, reduction, outsourcing), technology (e.g. new platform, new OS, new protocols) or another factor that may affect the velocity, estimate the impact on the velocity and adjust the planned velocity accordingly
  • If there are temporary circumstances (e.g. holidays, absence of key people), estimate the impact on the velocity for backlog items high on the backlog and adjust the planned velocity accordingly. Consider delaying the backlog items that are hampered by the temporary circumstances

With a planned velocity and story point estimates (on portfolio, program or team level), you can determine how many of the (top) backlog items you can plan for the next iteration.
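
As an illustration of the arithmetic, here is a small sketch with made-up numbers; the adjustment factor is a judgment call based on the bullets above, not a formula.

# Sketch of velocity-based planning with made-up numbers
past_velocities = [42, 45, 47]            # story points per iteration, most recent last

def planned_velocity(history, adjustment=1.0):
    # Start from the last iteration's velocity and apply a manual adjustment
    # for trend, staffing changes or temporary circumstances
    return history[-1] * adjustment

def plan_iteration(backlog, velocity):
    # backlog: list of (issue key, story points), already sorted by priority
    planned, total = [], 0
    for key, points in backlog:
        if total + points > velocity:
            break
        planned.append(key)
        total += points
    return planned, total

backlog = [('PSYS-101', 13), ('PSYS-102', 8), ('PSYS-103', 20), ('PSYS-104', 5)]
items, total = plan_iteration(backlog, planned_velocity(past_velocities, adjustment=0.9))
print(items, total)                       # ['PSYS-101', 'PSYS-102', 'PSYS-103'] 41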

Why do you need the hierarchical structure anyway? You don't need it for estimation! You do need it for planning. On team/product level, the sorting order of the product backlogs should be consistent with the sorting order on program/system level. And on program/system level, the sorting order of the system backlog should be consistent with the sorting order on portfolio level.

The backlog hierarchy also allows cross-checking the sanity of the velocity and story point estimates. For example, if the program velocity is planned to be reduced by 10% due to holidays but the average team velocity actually drops by 30% during the holidays, you know that you should increase the expected velocity drop for the next holiday season. Similarly, if a backlog item is 2x the size of another backlog item on program level, but its underlying product backlog items are 4x the size, you need to analyze which type of backlog items is over- or underestimated and adjust the estimates accordingly.

Conclusion

In part 1 and part 2, we discussed the 3-layered structure and how the structure plugin supports it. In part 3 (this part), we discussed orphans in the structure and estimating and planning across the portfolio, program and team levels. Important to remember is that story point estimation and velocity planning are independent of the hierarchy. The hierarchy is merely used to ensure consistency between the portfolio, program and team levels in sorting order (planning), story points (estimating) and velocity (planning). I am not sure yet about part 4, so if you have ideas for subjects you would find interesting, feel free to let me know.


An acute heart attack – what is it like?

On 28 September 2015, at a quarter to twelve at night, I suddenly got pain. We had just gone to bed and I was about to fall asleep. As I lay down on my side to get comfortable, the pain came on. Heartburn, or my esophagus, I thought. The pain quickly got stronger, right in the middle of my breastbone. Stupid stomach! Just lie down for a bit, it will pass soon… nothing, it only got worse. My wife called the after-hours GP service anyway to consult. All kinds of questions: tingling in the fingers, pain radiating to the arms or jaw, dizziness, disorientation… nothing. The pain only got more intense, and all I could do was sit hunched over on the edge of my bed. Make that pain stop!! Then suddenly: probably a heart attack, the ambulance is on its way. No, a heart attack, surely not me!? I never have anything wrong with me!
After a few minutes the ambulance arrives. Paramedics come in, quickly take an ECG and sure enough: infarction, on the left. I get 4 pills. In socks and slippers, in short-sleeved pyjamas and shorts, walking down the stairs, through the cold night to the ambulance. A puff of nitro, mint flavour. Calmly to the hospital, no flashing lights because it is close by and the on-call doctor who was paged is not there yet.
At the hospital, to the cardio room; estimated 20 to 30 minutes. An annoying prick in the wrist, widening the artery, which stings a little, catheter in, a slightly scraping feeling in my arm. Right side OK, left side is blocked. The X-ray tube moves from left to right, back, up, down, right past my nose. Light on, off. I hear all kinds of stent sizes being mentioned; no, let's take that other one after all. This will hurt a bit more, and sure enough it does. My legs and arms are trembling; sorry, I really cannot relax anymore. It will be over soon, give him a little something… no, give him a bit more. It helps, but only a little. At 3 o'clock at night it is done. Tired, exhausted, but wide awake. I am taken to a room. There you lie. Thankfully the pain is gone. Now what? My wife is brought in. She has also spent the middle of the night, from half past midnight until 3 o'clock, waiting in suspense. Half an hour, a full hour, yes they are almost done now… and then another hour! Completely wired on adrenaline. It went well, the pain is gone. You can safely go home. I will see you again tomorrow. Bye. Kiss.
There you lie. Socks still on, in your pyjamas, under a small blanket. Wide awake! Every now and then a white coat passes by. Trying to sleep and not sleeping. I notice that I can move normally, without pain. Wonderful! Only the IV and all the wires are a nuisance. Left side, right side, on my back, all no problem with a bit of wriggling. Half past three… four o'clock… a quarter past four… half past five… dozed off for a bit after all. Around 8 o'clock my bed is moved to another room. Tired and clear-headed. It is going to be a heavy day. But the pain in my breastbone is back. Less severe than before, but still. Ring the bell for the nurse. Yes, that happens. It will subside. But it did not subside; it got worse. Not as bad as the night before, but still.
Not until late in the afternoon do I speak to the cardiologist. He tells me that they tried to place 2 stents at a bifurcation of blood vessels in the heart. That did not work, so they decided to place the stent over the bifurcation, closing off one of the branches (which dies off). Closing off that branch is essentially the same as a blockage caused by an infarction, hence the chest pain. If they had told me that a bit earlier, I would not have had to worry so much. After a good day the pain faded away as well.
The next day the physiotherapist comes by to assess what I will need in terms of rehabilitation. I feel fine and can probably get back to work next week, I think. Well, that is not so certain yet; come along to the stairs for a moment. Out of bed, down the hallway past the bank of heart monitors, to the stairs. That goes fine, you see. A few steps up the stairs… 4 steps, out of breath. Sh*t! I had not expected that. Better schedule rehabilitation after all.

In the weeks that followed I kept feeling fine. I did get a "golden 5" of medications (blood thinner, beta blocker, blood pressure reducer, cholesterol reducer, anticoagulant). No driving for a month. Annoying, of course. After that month, off to a CPR course that had already been planned. Everyone curious, of course, and I participate well, but my fitness (for performing CPR) is pretty poor. Still, I am certainly motivated to learn this. In this situation I will hold off on becoming a volunteer first responder for now, although I did seriously consider it.
After three months, back to work part-time. Everyone concerned, and I took it easy for a while too. Climbing stairs goes fine again now, better than before the infarction, when I could not even climb 24 steps without getting out of breath. A good month later, back to work full-time. Completely my old self again, only fitter…


Implementing SAFe with Jira (part 2)

This is part 2 about implementing SAFe with Jira. In part 1 I explained how we implemented the portfolio, program and team backlogs as the 3-layered structure of SAFe in Jira. You may recall that we renamed them to business, system and product levels. In part 2 we will dig more deeply into connecting the layers.

Structure plugin

Since Jira (core) is not able to visualize linked relationships, we use the Structure plugin from ALMworks. On a structure board, you can add Jira issues, sort them and put them in a hierarchy (i.e. an issue nested under its parent issue), manually by drag-and-drop or automatically by synchronizers. A synchronizer interprets the data in Jira (e.g. a filter, Epic link or issue links) and puts the issues in the structure at the right place. Synchronizers work bi-directionally. For example, by adding or removing a link, the synchronizer places the issue in the right place in the structure, and by changing the place of an issue in the structure, the synchronizer adds or removes links in Jira.

So if we have a business epic -> system epic -> product epic -> product story -> subtask relationship (through Epic links and Issue links of the "Implement" link type), the Structure board visualizes the hierarchy as nested issues. By adding columns to the Structure board (through views), you can see, check and even change the field values of a Jira issue. For example, when a release on team level is later than the release on program level, you can easily see this in the FixVersion/s column.

One of the features of the structure plugin is its time tracking capability. The plugin is able to aggregate the planned and actual effort (original estimate, remaining estimate, logged work) over the items in a structure. For example, if an epic contains 4 stories each estimated at 3 days of work, and the epic itself requires an extra 1 day of work, the aggregated estimate of the epic is 13 days of work (4 * 3 days + 1 day).
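
To make the roll-up concrete, here is a minimal sketch with an assumed data shape (a plain dictionary per issue, not the Structure plugin's own API): each issue carries its own estimate plus a list of children, and the total is its own estimate plus the children's totals.

# Sketch: roll up estimates over a nested hierarchy (own estimate + children's totals)
def total_estimate(issue):
    return issue['estimate'] + sum(total_estimate(child) for child in issue.get('children', []))

epic = {'key': 'PRDA-1', 'estimate': 1,                                             # 1 day on the epic itself
        'children': [{'key': 'PRDA-%d' % n, 'estimate': 3} for n in range(2, 6)]}   # 4 stories of 3 days each
print(total_estimate(epic))                                                         # 13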

There is a lot more to say about using the structure plugin and its synchronizers. I may come to that later.

Agile Release Trains (ARTs), Program Increments and Sprints

An Agile Release Train (ART) is planned and organized in iterations called Program Increments (PIs). We define 2 PIs per year, corresponding to the commercial windows in which product marketing goes to market. In fact, we don't even use the term "Program Increment" and call them "windows". Each PI (window) consists of 3 releases, and each release consists of 4 sprints. In Jira, PIs (windows) and releases are defined in the FixVersion field and sprints are defined in the Sprint field.

  • Business backlog items are planned in windows: W1/2015, W2/2015, W1/2016, …
  • System backlog items are planned in releases: 1.1, 1.2, 1.3, 1.4, …
  • Product backlog items are planned in releases (1.1, 1.2, …) and assigned to sprints

As you can see, development and releasing are both on cadence. SAFe says "develop on cadence, release on demand", but since we come from a traditional stage-gating world, releasing on cadence is an old habit. We may change that in the future.

Portfolio/business level and program/system level have no sprints; they use Kanban, which represents a continuous flow of input (new items) and output (closed items). On team/product level we do use sprints with Scrum. Typically, sprints are 2-week time-boxes. The name or numbering of each sprint depends on the team; sprint 6 of one team may not coincide with sprint 6 of another team. But by using separate Jira projects for portfolio, system and each product team, each project can have its own set of version values and sprints. This requires some alignment between system and product level, because we want the release numbers to be the same for all projects, at least for now.

Of course, our world is not ideal. We have teams with different iteration lengths, and we even have teams that do not follow Scrum at all, e.g. hardware development. That is okay as long as they are able to deliver for and integrate with the release.

The Structure board shows the hierarchical relationships from portfolio to system to product level. The FixVersion/s column shows the windows and releases, and the Sprint column shows the sprints, so we can spot inconsistencies easily. The estimated effort column shows the aggregated effort and the story points column shows the aggregated story points, which helps us decide about the release content. And the progress bar shows the aggregated progress of a portfolio item and all its descendants. That way, we can visually check both consistency and progress.

Agile release train (ART) planning

Before a release starts, we have an ART planning session, which we call the "release train workshop". The portfolio backlog in Jira, which contains the business epics on the strategic roadmap for the coming years, specifies the business epics for the coming window. Since we have multiple releases per window, we don't need to do everything in a single release. But the question is: how much can we do in this release?

The first step is to divide the portfolio items (epics in the Portfolio project in Jira) into multiple system items (epics in the System project in Jira). Typical system epics are a minimum viable product (MVP) of the business epic and successive increments.

After putting them in Jira, they need to be estimated: how much capacity does each system epic require from each of the teams? In Jira, you only have one estimate field per epic, which is insufficient to capture the individual estimates of each team. So instead of trying to extend the system epics with extra custom fields, we create epics in the Jira projects of the product teams and link these product epics to the system epic with an "implement" issue link. Through the structure plugin, the product epics show up underneath the system epic, and the estimate column (or story point column) shows the aggregated effort (or size) of the product epics for that system epic.
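
Creating the links can even be scripted. The sketch below uses the standard Jira issue-link REST resource; the server, credentials and issue keys are placeholders, and the exact link type name and inward/outward direction depend on how the "implement" link type is configured in your Jira instance.

# Sketch: link a product epic to the system epic it implements (placeholder server, credentials and keys)
import requests

JIRA_URI = 'https://jira.example.com'
JIRA_USERNAME = 'user'
JIRA_PASSWORD = 'password'

def link_issues(link_type, inward_key, outward_key):
    # Standard Jira issue-link resource
    response = requests.post('%s/rest/api/2/issueLink' % JIRA_URI,
                             json={'type': {'name': link_type},
                                   'inwardIssue': {'key': inward_key},
                                   'outwardIssue': {'key': outward_key}},
                             auth=(JIRA_USERNAME, JIRA_PASSWORD))
    response.raise_for_status()

link_issues('Implement', 'PSYS-42', 'PRDA-7')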

Unfortunately, Jira (or rather, the structure plugin) does not allow us to extract the aggregated data into reports or even queries.

Having these estimates brings us to the next question: will it fit in the release? Accumulating the estimates of all system epics for the release may give the impression that the total capacity of the teams can cope with the total amount of work. Unfortunately, that is not a realistic approach: if one team is overloaded, the release cannot be completed in time anyway. A better metric is the total estimate per team for the release. And since the estimates for a team are all in the same Jira project, it is easy to create a report that shows the totals.
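
One way to get those totals outside Jira's own reports is a small script. A minimal sketch (placeholder project keys, release name and credentials): it sums the original estimates of the epics per team project, where Jira returns timeoriginalestimate in seconds.

# Sketch: total original estimate per team project for one release
import requests

JIRA_URI = 'https://jira.example.com'
JIRA_USERNAME = 'user'
JIRA_PASSWORD = 'password'
TEAM_PROJECTS = ['PRDA', 'PRDB', 'PRDC']
RELEASE = '1.2'

def total_estimate_hours(project, release):
    jql = 'project = %s AND issuetype = Epic AND fixVersion = "%s"' % (project, release)
    response = requests.get('%s/rest/api/2/search' % JIRA_URI,
                            params={'jql': jql, 'fields': 'timeoriginalestimate', 'maxResults': 1000},
                            auth=(JIRA_USERNAME, JIRA_PASSWORD))
    response.raise_for_status()
    seconds = sum(issue['fields'].get('timeoriginalestimate') or 0 for issue in response.json()['issues'])
    return seconds / 3600.0

for project in TEAM_PROJECTS:
    print(project, total_estimate_hours(project, RELEASE), 'hours')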

Knowing the total estimate per team allows us to decide whether the release content fits. Of course, we regularly run into the situation that it does not fit! In those cases, the MVP definitions may be reduced further and the estimates need to be redone. So in practice, the information is not actually entered into Jira until there is sufficient consensus about the feasibility of the release content and the estimates. Jira is not a planning tool and does not support the dynamics of what-if scenarios; I wish it did.

Conclusion

In part 1 I described how we have implemented the 3-layered backlog structure in Jira. In this part 2, I have taken the first step in using this structure to deal with release content and estimation. Part 3 will take us to the next step. If you have ideas about which next step you would like me to focus on, feel free to let me know.
