
Kirk is a software developer who has filled most roles on the software development team. He is the author of Java Design: Objects, UML, and Process (Addison-Wesley, 2002) and he contributed to the No Fluff Just Stuff 2006 Anthology (Pragmatic Bookshelf, 2006). His most recent book, Java Application Architecture: Modularity Patterns with Examples Using OSGi, was published in 2012. Kirk is a DZone Zone Leader and has posted 77 posts at DZone.

Big Teams and Agility

03.06.2009

In Grass Roots Agile, I talked about some of the details surrounding how development teams can increase their agility, and I presented a diagram similar to the one above showing how to measure and manage a system’s tested features. Here, I want to talk a bit more about the macro development process and how I’ve used agile practices on large software development teams. I’ve used this structure with teams of up to roughly 100 developers, and I have no reason to believe it wouldn’t work on larger teams. Here are the general mechanics illustrated by the diagram.

Team Structure

One of the keys to building a big software system is to break it down into a collection of smaller projects. Each team in the diagram (Team 1, Team 2, etc.) represents a group of people working on one of these smaller projects. The smaller projects are organized around coarse-grained units of business functionality, and each team focuses on developing a complete unit of functionality (front to back).

Each of these teams consists of around two to five developers, a business analyst (BA), a tester, a UI designer, and a customer. Depending on the size of the effort, some of these individuals may span teams; a project manager spans all teams. The developers on each team focus on some aspect of technology expertise, such as a JSP expert, a Hibernate expert, and so on. These technology experts form alliances with experts on other teams, which is critical to ensure consistency in how the technology is applied project-wide (i.e., the architecture). The technology experts also spend some time on infrastructure code, such as system-wide error handling utilities and other important architectural aspects. Naturally, you adjust the size of the teams and the roles of team members based on various factors.
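As a concrete illustration of what that infrastructure code might look like, here is a minimal sketch of a system-wide error handling utility. The class and method names are invented for this example; treat it as one possible shape, not a prescription.

// A minimal sketch of shared infrastructure code: a system-wide error
// handler that every team uses instead of ad hoc try/catch logging.
// All names here (SystemErrorHandler, handle) are hypothetical.
import java.util.logging.Level;
import java.util.logging.Logger;

public final class SystemErrorHandler {

    private static final Logger LOG = Logger.getLogger("app.errors");

    private SystemErrorHandler() {}

    // Wraps any low-level failure in a single application exception type so
    // that error reporting stays consistent across all teams' modules.
    public static RuntimeException handle(String context, Throwable cause) {
        LOG.log(Level.SEVERE, "Failure in " + context, cause);
        return new RuntimeException("[" + context + "] " + cause, cause);
    }
}

A team member would write throw SystemErrorHandler.handle("order-persistence", e); rather than inventing module-local error handling, which is exactly the kind of consistency the cross-team expert alliances are meant to protect.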

The Feature Board

The feature board is a virtual concept that represents the stream of requirements flowing from the customer to the development teams. It is the responsibility of the BA to make sure there is an adequate set of requirements on the feature board, to resolve conflicts, and to manage change. Everyone on the individual teams should have the option to participate in discussions surrounding requirements. The feature board could be your typical project room whiteboard, but for large teams that’s probably not going to work, because you likely have a group of geographically dispersed developers.

Instead, the feature board is simply a snapshot of requirements, and each team pulls those requirements and starts working on them when ready. These requirements might come in the form of user stories, use cases, or something else. The format is not as important as the fact that the feature board is organized so that each team can pull the next set of requirements for the part of the system they are working on when they are ready, while allowing project management to see a complete view of the system requirements.
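To make the pull model a bit more concrete, here is a minimal sketch of a feature board as a data structure: one queue of requirements per functional area, with teams pulling when ready and management able to view the whole board. All of the names are invented for illustration.

import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical model of the feature board: a queue of requirements per
// functional area. The BA posts; each team pulls when it has capacity.
public class FeatureBoard {

    private final Map<String, Queue<String>> areas = new ConcurrentHashMap<>();

    // The BA posts a requirement (user story, use case, ...) to an area.
    public void post(String area, String requirement) {
        areas.computeIfAbsent(area, a -> new ConcurrentLinkedQueue<>()).add(requirement);
    }

    // A team pulls its next requirement when ready; null means the BA
    // needs to replenish the board for that part of the system.
    public String pull(String area) {
        Queue<String> queue = areas.get(area);
        return queue == null ? null : queue.poll();
    }

    // Project management sees the complete view of outstanding requirements.
    public Map<String, Queue<String>> snapshot() {
        return areas;
    }
}

Whether the board is a whiteboard, a wiki page, or a tool, the essential properties are the same as in this sketch: teams pull rather than being pushed, and the whole requirements stream stays visible.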

Continuous Flow

One of the challenges with a large team is that it’s virtually impossible to get everyone aligned on the same iteration schedule; there is too much management overhead. This is one of the driving forces behind skeptics’ belief that agile isn’t a good fit for really big projects and teams. Economies of scale lead us to believe we need longer iterations because there is so much more to manage. But that reasoning is flawed, because longer iterations delay risk mitigation and discovery.

In practice, there isn’t a need to have each project team on the same iteration schedule, and in fact, because the teams pull requirements from the feature board when necessary, there isn’t a need for iterations at all. Instead, there is a continuous stream of work that flows from the customers to the development teams. As a team completes a piece of work, they simply release the code to the version control system, build it, and feed it right back to their customer in the form of a functional system.

The Build

With each team releasing code, there is a continuous stream of feature-rich functionality added to the system. Continuous integration and the automated build process hold it all together. Because we are building on a frequent basis, we have a system of checks and balances in place to make sure no breaking changes enter the main product line.

Occasionally, we’ll hit a situation where two teams release incompatible changes. That’s the purpose of the build! When this does happen, the build breaks, the problem is identified, the teams fix the problem, and we’re back on track. This is a central part of the macro process; without the automated build, it all falls apart. Once the build executes successfully, the application can be deployed to an environment where it’s accessible by the customers.
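One way to make those incompatibilities surface in the build is a cross-team integration test that exercises both teams’ modules together. The sketch below assumes JUnit 4 and invents two modules (OrderService owned by one team, PricingService by another, both stubbed inline so the example compiles on its own); if either team releases a change that breaks the contract, this test, and therefore the build, fails immediately.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical cross-team modules, stubbed here so the example is
// self-contained. In a real build these come from each team's release.
class PricingService {
    double unitPrice(String sku) { return "widget".equals(sku) ? 4.50 : 0.0; }
}

class OrderService {
    private final PricingService pricing;
    private double total;
    OrderService(PricingService pricing) { this.pricing = pricing; }
    void addItem(String sku, int quantity) { total += quantity * pricing.unitPrice(sku); }
    double total() { return total; }
}

public class OrderPricingIntegrationTest {

    // If the two teams' contracts drift apart, the build breaks here,
    // which is exactly the early warning the macro process relies on.
    @Test
    public void orderTotalAgreesWithPricingContract() {
        PricingService pricing = new PricingService();
        OrderService orders = new OrderService(pricing);
        orders.addItem("widget", 2);
        assertEquals(2 * pricing.unitPrice("widget"), orders.total(), 0.001);
    }
}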

Closing the Loop

Adopt all the agile processes and practices you want, but increased agility falls apart unless you close the feedback loop. I’ve seen too many teams fail because they stopped just short. Feedback is the crucial piece, and because continuous integration and the automated build mean we always have a working product, we are in a position to close the loop effectively. It’s important to leverage this. Eliciting frequent feedback from customers through weekly demonstrations is one way to close the loop.

We should also frequently execute a variety of tests: not just unit and acceptance tests, but usability tests, performance tests, load tests, and more. Because the build is an automated process, we should incorporate code analysis and inspection tools into it and output the results to a project dashboard that is accessible to the project team. In the end, closing the loop significantly increases project transparency.
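As a minimal sketch of the dashboard idea, a late step in the automated build could collect a few summary numbers and publish them as a static page on the build server. The file name and metrics below are invented; in a real build they would come from the test runner and the analysis tools.

import java.io.FileWriter;
import java.io.IOException;
import java.util.Date;

// Hypothetical build step: summarize test and analysis results into a
// static HTML page that serves as the project dashboard.
public class DashboardPublisher {

    public static void main(String[] args) throws IOException {
        // Hard-coded here for illustration; a real build would parse these
        // from the test and code-analysis reports.
        int testsRun = 1842;
        int testsFailed = 0;
        int analysisWarnings = 17;

        try (FileWriter out = new FileWriter("dashboard.html")) {
            out.write("<html><body><h1>Project Dashboard</h1>"
                    + "<p>Build time: " + new Date() + "</p>"
                    + "<p>Tests run: " + testsRun + ", failed: " + testsFailed + "</p>"
                    + "<p>Static analysis warnings: " + analysisWarnings + "</p>"
                    + "</body></html>");
        }
    }
}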

In Summary

This is a very broad overview of how agile can work on big teams. In fact, I believe agile is more beneficial for large teams than it is for smaller teams. Small teams are inherently more nimble than bigger teams in the first place, and so agile is an obvious and natural fit. But on large teams, it’s not so obvious how agile practices can improve the team’s success. Above is one formula I’ve found that works very well.

Without question, there are numerous micro process mechanics that I haven’t discussed here. For instance, how often should the build run? (I say at least hourly). For really big systems, how do I keep the build running quickly? (You may need staged builds). What tools do I use to make this all happen? (You don’t need to buy anything, actually). And much more. But in general, the macro process above is a good way to get started.
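On the "at least hourly" point, the trigger itself is trivial; a CI server normally provides it, but as a minimal sketch (the build script path is a placeholder), it amounts to little more than this:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal hourly build trigger; real projects would use a CI server,
// but the underlying idea fits in a few lines.
public class HourlyBuild {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                // "./build.sh" is a placeholder for the project's build script.
                Process build = new ProcessBuilder("./build.sh").inheritIO().start();
                int exitCode = build.waitFor();
                System.out.println(exitCode == 0 ? "Build OK" : "BUILD BROKEN");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 1, TimeUnit.HOURS);
    }
}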

From http://techdistrict.kirkk.com/

Published at DZone with permission of its author, Kirk Knoernschild.

Comments

Artur Biesiadowski replied on Fri, 2009/03/06 - 7:13am

"As a team completes a piece of work, they simply release the code to the version control system, build it, and feed it right back to their customer in form of a functional system."

Who is the customer in this case? Certainly not a company that gets an updated version of the system once every few months, where integration takes a few weeks to clear all the legalities.

I'm working for a company with a considerably large program (over 10M lines of C++ code). There are teams and subteams working on small subprojects, but I fail to see how you could do 'small releases to customer' from each of the subteams. Do you really expect a big company to go through integration/testing/whatever every week? What about interdependencies between modules/subprojects? Even if you are careful, there is a chance that your change will break some other remote module in a non-obvious way. You could avoid this by running full testing each time, but what if it takes a week to run all the automated tests? With 20 teams releasing often, how many of those integration tests do you want to run in parallel, and against which versions of the other modules?

In the end, what happens is that every 1-2 months there is a few days' modification freeze, a branch is created, QA starts to test it, only critical bugfixes are made, and the customer gets code that is a few weeks behind the trunk version but reasonably well tested. Which is exactly the opposite of what you are proposing, and I don't see a realistic way to change it for such a big project that has to be delivered to 'serious' outside clients.

Jeroen Wenting replied on Fri, 2009/03/06 - 7:37am

"fail to see how you could do 'small releases to customer' from each of the subteams. "
You can't, period. "Agile" gurus all seem to live up in some cloud where every system is small and no change ever affects any other part of the software; where there is no need for things like QA, customer acceptance testing, shadow environments, etc.; where the error-free completion of a batch of unit tests means the system is perfect and not only compiles and doesn't crash but actually meets customer requirements.

Dan Greening replied on Sun, 2009/03/08 - 1:17pm

Nice approach.

Craig Larman and Bas Vodde's "feature teams" can allow you to ship small releases to customers from subteams (see Choose Feature Teams over Platform Teams). It wasn't clear whether you were saying that your TEAMS specialize, as in a "platform team" approach; if they do, you can have too many organizational dependencies to do short iterations directly to customers. This is the gist of the previous criticisms. It is reasonable, especially for large groups operating in a waterfall world, because it takes enormous organizational effort to recast a large development group into feature teams. The benefits are worth it, I think, but the first step is to get the smaller teams all operating under Scrum or something similar. I bet neither Artur nor Jeroen has Scrum teams.

I agree: as you scale, it becomes less feasible to synchronize iterations. We also gave up on this idea, if only because we had insufficient meeting rooms to handle simultaneous Sprint Planning for many teams. As Kirk points out, strong test automation and continuous integration make multiple unsynchronized groups agile, so everything always works despite "other developers messing with my stuff."

I think Kirk's approach could be characterized as "lean development" at the top, with Scrum/XP at the lower levels. Iterating teams take on work from the feature backlog, but the top-level organization doesn't really operate with an iteration. My group has a different take on this: we have Scrum-like quarterly iterations at the top level. The purpose is to do better resource allocation and product prioritization (we manage 6 products and 200+ engineers in one Enterprise Scrum). It still looks pretty "lean", even with a quarterly iteration. We don't produce "one big product" at the end of a quarter; we just take a big breath.

Would love to commune with others scaling agile to 100 developers or more: dan at greening.org. There are not too many of us thinking at this scale.

Artur Biesiadowski replied on Mon, 2009/03/09 - 9:03am

No, we don't have Scrum teams, but we do have multiple small, mostly independent teams working on features from baskets. Still, there is a release of the product every 1-2 months. It is just done globally, going through the QA department for around two weeks before reaching the customer (by which time, of course, the developers are already working on new things).

You mention quarterly iterations at the top level - I fully agree that this is doable, regardless of Scrum or no Scrum at the lower levels. My question to the original poster was how he imagines skipping those integration iterations by waving a magic Agile wand.

As far as feature teams are concerned - I understand the idea. What I don't understand is how it makes packaging and integrating the application at the customer site faster or easier.

Dan, would your approach work if each of the feature teams worked in 2-week increments and delivered their features/components to the customer independently of each other, in random order and combination? Forget the 3-month top-level iteration; just a bunch of 6-person team leaders sending single .so libraries directly to the customer, saying 'replace it in that directory, such-and-such bugs should be fixed'? I don't think so, but this is exactly what the original poster seems to suggest - and what I was asking him to clarify, because I don't believe it would work, even in a ScrumXpTDDAgileBestThingSinceSlicedBread environment.

Brian Shannon replied on Wed, 2009/03/11 - 1:02pm

Artur,

"My question to the original poster was how he imagines skipping those integration iterations by waving a magic Agile wand."

There are no integration iterations. Things are continuously integrated through the automated build and test system. This is definitely more challenging to implement in a legacy system, but for ones starting from the ground up (or near it), it is well worth the effort.

This is what Kirk is alluding to when he says, "Occasionally, we’ll hit a situation where two teams release incompatible changes. That’s the purpose of the build! When this does happen, the build breaks, the problem is identified, the teams fix the problem, and we’re back on track." This happens on a regular basis, continuously, and not during an 'integration build'. When you are dealing with at most one day's worth of work, finding the point at which the error occurred is fairly straightforward (not always, of course!).

As you probably know, integration is probably the toughest part of the whole process. By reducing it to at most a day's worth of work, you can fix and work out integration problems much more efficiently. You can also make changes to the system with a higher degree of confidence, knowing that you can make the change and then run the barrage of tests to see whether you broke any of your own features or others'.

"Forget the 3-month top-level iteration; just a bunch of 6-person team leaders sending single .so libraries directly to the customer, saying 'replace it in that directory, such-and-such bugs should be fixed'?"

I think Kirk was actually suggesting a 'pull' model for customers, in which they can go and get the latest release when they are ready for it. He alludes to this when he says, "Once the build executes successfully, the application can be deployed to an environment where it’s accessible by the customers."

It seems your comments come from the perspective of dealing with a legacy system that has already atrophied to the point at which any of this seems impossible. If that is the case, you can slowly but surely begin to improve the code base. "Working Effectively with Legacy Code" is a highly regarded book that might guide you in this regard. The main issue with legacy systems and Agile is that you have to start slow. You can't expect Agile to be a magic wand that fixes all of your ills instantly. A tremendous amount of technical debt is built up in a legacy system, and it takes a lot of time, care, and effort to pay off.

Artur Biesiadowski replied on Thu, 2009/03/12 - 7:36am

I think that our difference in understanding boils down to having a perfect automated test system. Indeed, if there were a system that could detect every error in the application, both in individual modules and across the entire platform, and that could run in a reasonable amount of time, I could possibly agree with the statements above.

Unfortunately, in my case, automated tests cover around 25% of functionality and take almost a week to execute on multiple machines in parallel. While I could in theory imagine moving them to 90% coverage (by hiring a few hundred people to work for a few years), the tests would then probably execute for a few weeks... especially as some of them would be things like "Is component ABC behaving properly under heavy load if left alone for 1 week?" ;)

So, from a misunderstanding about the agile release/integration process, it boils down to this: is it possible to have all-encompassing unit/functional/integration/everything automated tests for applications above a certain threshold of complication/size? I think we can agree that it is almost impossible for huge legacy applications without spending effort similar to rewriting them from scratch. For new applications, it is certainly possible at the beginning - what I wonder is whether the costs of test maintenance[1] and the time taken to run all the tests will at some point limit the growth of the application, preventing fully TDD apps from becoming as huge as the legacy monstrosities mentioned before.

[1] - I mean that with a certain change in a module, you pay a constant cost of changing the unit tests for that module - but you might also be forced to correct the integration/functional tests of the modules that depend on it.

