Editor’s note: This is a guest post on Lean Software Development by Corey Ladas. If you don’t know Corey, he is a product development methodologist extraordinaire. If you missed Corey’s previous post, Introduction to Lean Software Development, be sure to check it out. This is a follow-up post for readers who wanted more information on some principles, patterns, and practices that could help support Lean Software Development.
Lean Thinking is a paradigm of production and can’t easily be reduced to a process recipe. The particular form of any Lean process will always depend upon the form of the product that is created by that process. However, any Lean process will realize a few essential principles. If we apply these Lean principles to software development, we may find some practices that express those principles in a way that is useful and sensible for the medium.
One Piece Flow
A central concept in Lean is that planning, executing, and delivering work in small batches minimizes waste. The ideal limit of working in small batches is the single unit. Creating one piece at a time with zero waste is the ideal of one-piece flow.
A Lean goal in software development would be to define a minimum batch size that delivers value to stakeholders. One such approach for product development is the minimum marketable feature (MMF). Not all software development is product development, so “marketable” is not always the right criterion. But every enterprise has some kind of goal that determines value, and this goal can be applied to a minimum stakeholder-valued feature.
There may be many steps involved with the creation of such an atomic work product. Some of the work that is scheduled in association with a product will directly add value to the product. Other work may be incidental or even unnecessary. It is important to distinguish between these categories. Value stream maps describe the sequence of value-adding activities that are necessary to deliver the product. The “flow” in one-piece flow means that these activities should be performed in an uninterrupted sequence from start to finish.
Dividing the work into independent business-valued features enables the staged delivery of those features. Your highest-value customers may receive most of their benefit from some subset of your planned features and would prefer to take earlier delivery of a smaller system. Revenue earned from these customers can fund your ongoing development in an incremental funding model.
An incremental delivery strategy may still be executed against a predetermined plan. An even more adaptive strategy is evolutionary delivery, where frequent deliveries are made in response to evolving market conditions and each delivery is a complete, fully functional system. Many hosted applications and open-source projects fit this model. Tom Gilb has written extensively on engineering practices that enable evolutionary delivery at a high level of rigor and quality.
The extreme interpretation of evolutionary delivery is continuous deployment, where very small incremental improvements are released to production at a high frequency. Flickr, for example, releases new code to production every 30 minutes. IMVU releases every 8 minutes. This level of operation transforms the metaphor of software development from the construction of an object to the refinement of a fluid.
Let the Market Pull Value from the System
If we are very flexible in our delivery capability, then we can respond to market conditions as they evolve. Rather than deliver a feature in a planned release 18 months from now, we can deliver it next month if the market demands. The faster we can plan and deliver a new feature, the more we can allow the market to pull value from the system.
Customers may not have the right understanding to express what they really need. Allowing the market to pull will still require interpretation from a producer. In order to understand what the customer really needs, you should go and see for yourself the need that the customer is trying to satisfy. The methods of Quality Function Deployment allow us to discover and express the voice of the customer in a way that we can translate into detailed and actionable product specifications.
If we deliver work in high-frequency iteration, we will have to find a way to make that work flow smoothly within the development organization. If that value stream includes operations that require the services of specialized or scarce resources, then we will need a way to coordinate the scheduling of those resources. A kanban-controlled workflow regulates the productivity of individual process steps in the value stream, so that each step produces only at the rate that is required by the pull of the market. A kanban system creates an internal market where downstream consumers pull value from upstream producers.
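The pull mechanics described above can be sketched in a few lines of code. This is a minimal, illustrative model (the step names and WIP limits are hypothetical, not from the post): each process step has a work-in-process limit, and a step may only pull a new item from its upstream neighbor when it is below that limit, so downstream demand regulates upstream production.

```python
from collections import deque

class Step:
    """One step in a kanban-controlled value stream."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = deque()

    def can_pull(self):
        return len(self.items) < self.wip_limit

    def pull_from(self, upstream):
        """Pull one item from upstream only if this step has capacity."""
        if self.can_pull() and upstream.items:
            self.items.append(upstream.items.popleft())
            return True
        return False

# Hypothetical workflow: backlog -> develop -> test
backlog = Step("backlog", wip_limit=100)
dev = Step("develop", wip_limit=2)
test = Step("test", wip_limit=1)

backlog.items.extend(["feature-A", "feature-B", "feature-C"])

dev.pull_from(backlog)              # succeeds: develop has capacity
dev.pull_from(backlog)              # succeeds: develop is now at its limit
assert not dev.pull_from(backlog)   # blocked: WIP limit reached
```

Because a blocked step simply stops pulling, upstream steps naturally idle rather than overproduce, which is the internal market the paragraph describes.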
Pull systems enable us to create value just-in-time. Market value does not always come in uniformly-sized chunks and sometimes we can see changes in market trends before they are expressed as specific product demands. We can strike a balance between meeting the demand of the present and anticipating the needs of the future with just-enough-planning. Rolling wave planning scales planning detail according to time, so that we make detailed plans about things that will happen very soon, and more general plans about things that will happen in the future. The further the time horizon, the less specific the plan. Then we must revisit the plan frequently as we learn new things about market conditions.
Real options are an approach to project management analogous to financial options in investment management. A real option allows us to purchase the right to defer a management decision to some future date. When that date arrives, we can exercise the option or allow it to expire. We can then create a planning portfolio of options instead of a portfolio of commitments. Rolling wave planning and staged delivery are examples of options thinking.
Set-based development means that we may pursue multiple competing design alternatives simultaneously, and defer commitment to any particular design alternative until we have more information. This is another example of options thinking. Distributed version control systems enable inexpensive branching and merging and thereby facilitate evolutionary selection of the fittest features and designs. Some well-known large open source projects operate according to a set-based strategy.
Another key principle of Lean thinking is that we should optimize the whole system, as a system. Uncoordinated local optimizations never lead to a system that is optimized as a whole. In order to optimize the system as a whole, it helps if we can see the system as a whole, and this leads us to the practice of visual control. Visual controls are usually created to illuminate the relationship between a local process and the system, in order to provide the operators of the local process with context.
A popular type of visual control in software development is the card wall. If a card wall is used in conjunction with a kanban system, then we might call it a heijunka board or kanban board. A heijunka board is used to implement a process called production leveling, where the parts of the system operate at a consistent rate, without booms and busts, and the flow of work through the system is smooth. A card wall can also be used to implement the practice of the andon signal, which is used to communicate a problem in the development process.
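A card wall of this kind is simple enough to sketch in code. The columns, cards, and the andon flag below are all hypothetical examples; the point is only that the board makes the state of the whole system, and any signalled problem, visible at a glance.

```python
# Hypothetical card wall: columns map to value-stream steps, with WIP
# limits noted in the column names.
board = {
    "To Do": ["feature-C"],
    "Develop (WIP 2)": ["feature-A", "feature-B"],
    "Test (WIP 1)": [],
    "Done": [],
}

def render(board, andon=None):
    """Render the columns top to bottom; flag the column where an
    andon signal has been raised to communicate a problem."""
    lines = []
    for column, cards in board.items():
        flag = "  !! ANDON" if column == andon else ""
        lines.append(f"{column}: {', '.join(cards) or '-'}{flag}")
    return "\n".join(lines)

# A problem discovered in test stops traffic into that column:
print(render(board, andon="Test (WIP 1)"))
```

A physical wall of index cards carries exactly the same information; the value is in its ubiquity, not its implementation.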
Visual controls work best when they are ubiquitous and inescapable, but we can always enforce a minimum shared understanding with a daily stand-up meeting.
Build Quality In
A Lean development process should not allow defective work to move downstream. A policy of one-piece flow enables a policy of 100% inspection, with an aim towards discovery of any defects at the earliest possible moment. This translates directly to software development in the form of design review and code inspection, which are greatly facilitated by small batch size and pull workflow. When a defect is discovered, a Lean process should stop the line and apply root-cause analysis in order to discover the source of the defect. If the root cause is repeatable and preventable, then we should adapt our process to prevent that kind of defect in the future.
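The stop-the-line policy can be illustrated with a toy pipeline (the stage functions and item names are invented for illustration): every work item is inspected after every step, and the first defect halts the entire flow for root-cause analysis instead of letting the item move downstream.

```python
class DefectFound(Exception):
    """Raised to stop the line when inspection finds a defect."""
    pass

def inspect(item):
    if item.get("defective"):
        raise DefectFound(f"stopping the line: {item['name']}")

def flow(items, steps):
    """One-piece flow with 100% inspection after every step."""
    for item in items:
        for step in steps:
            step(item)
            inspect(item)  # never pass defective work downstream
    return "all items delivered"

# Hypothetical stages:
def develop(item):
    item["developed"] = True

def review(item):
    # A review step might mark the item defective.
    if item["name"] == "bad-feature":
        item["defective"] = True

try:
    flow([{"name": "good-feature"}, {"name": "bad-feature"}],
         [develop, review])
except DefectFound as exc:
    print(exc)  # the line stops here; root-cause analysis begins
```

In a real process the "inspection" is a design review or code inspection, and the exception corresponds to pulling the andon cord.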
Jidoka is a philosophy of optimizing the balance of work between people and machines, so that people only do work that people are good at and machines only do work that machines are good at. Lean processes use automation, but always in the service of and under the control of a human operator. A Lean process should prefer simple tools. Over-reliance on complex tools and technology may lead a business to be enslaved by the capability of their technology and disconnected from customer value. Lean tools should be as flexible as the marketplace they serve.
Continuous integration is an automation practice that reduces work-in-process and can greatly facilitate the early discovery of defects. Design-by-Contract instruments source code to self-detect design anomalies. Automated design verification can then be accomplished with static analysis. Specification-based verification strikes a nice balance between work that people are good at (expressing intent) versus work that machines are good at (exhaustive search). Requirements specifications can also be made verifiable in this way.
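Design-by-Contract can be approximated in plain Python with assertions. This is a minimal sketch with an invented example function: the preconditions and postcondition live in the code itself, so a violation self-detects at the earliest possible moment rather than surfacing downstream.

```python
def allocate_budget(total, shares):
    """Split a total across fractional shares (hypothetical example)."""
    # Preconditions: the contract the caller must honor.
    assert total >= 0, "total must be non-negative"
    assert all(s >= 0 for s in shares), "shares must be non-negative"
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"

    result = [total * s for s in shares]

    # Postcondition: the contract this function promises to keep.
    assert abs(sum(result) - total) < 1e-6, "allocation must preserve total"
    return result

allocate_budget(100.0, [0.5, 0.3, 0.2])   # satisfies the contract
# allocate_budget(100.0, [0.5, 0.6])      # would fail the precondition
```

A static analyzer can reason about contracts like these mechanically, which is exactly the person/machine division of labor the paragraph describes.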
The PDSA (or PDCA) cycle is a time-tested approach to continuous improvement. Plan the work that you need to do, Do the work, Study the results of the work you have done, and Act on what you learn in order to improve. The application of Lean principles creates an environment where the PDSA cycle can be sustainable and effective.
Standard work is the foundation of continuous improvement. Standard work in Lean means that the operators of a process reach a consensus about appropriate results of a process and the best known methods to achieve them. A work standard only remains the standard until a better way is discovered, which then becomes the new standard. These standards should never be imposed by an external agent. In software development, such standards may take the form of design rules, coding rules, checklists, inspection procedures, or even the workflow itself. PDSA is the process which drives the evolution of standards.
Throughput metrics provide useful information to discover the causes and effects of production problems. The primary throughput metrics are time-series metrics, so we can associate changes in the metrics with known historical events. Inventory levels are a leading indicator of lead time, which we can use to guide management action. A stable throughput process is predictable and enables estimation of future deliveries based on historical data, instead of guessing.
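The inventory/lead-time relationship the paragraph relies on is Little's Law: average lead time equals average work-in-process divided by average throughput. The numbers below are hypothetical, chosen only to show why rising inventory predicts longer lead times before customers feel them.

```python
def average_lead_time(wip, throughput_per_week):
    """Little's Law: items in process divided by completion rate."""
    return wip / throughput_per_week

# 30 features in process, 5 completed per week -> 6 weeks of lead time
print(average_lead_time(30, 5))

# If WIP climbs to 45 while throughput stays flat, lead time is headed
# for 9 weeks, which is why inventory is a leading indicator that can
# guide management action before delivery dates slip.
print(average_lead_time(45, 5))
```

Tracked as a time series, these same quantities let a stable process estimate future deliveries from historical data rather than guesswork.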
- Corey’s blog: http://www.LeanSoftwareEngineering.com
- Lean Thinking, James Womack and Daniel Jones
- Software by Numbers, Mark Denne and Jane Cleland-Huang
- Principles of Software Engineering Management, Tom Gilb
- Lean Software Strategies, Peter Middleton and James Sutton
- Implementing Lean Software Development, Mary and Tom Poppendieck
- The Principles of Product Development Flow, Donald G. Reinertsen
- Scrumban, Corey Ladas