Editor’s note: This is a guest post on Lean Software Development by Corey Ladas. If you don’t know Corey, he is a product development methodologist extraordinaire. If you missed Corey’s previous post, Introduction to Lean Software Development, be sure to check it out. This is a follow-up post for readers who wanted more information on some principles, patterns, and practices that can help support Lean Software Development.
Lean Thinking is a paradigm of production and can’t easily be reduced to a process recipe. The particular form of any Lean process will always depend upon the form of the product that is created by that process. However, any Lean process will realize a few essential principles. If we apply these Lean principles to software development, we may find some practices that express those principles in a way that is useful and sensible for the medium.
One Piece Flow
A central concept in Lean is that planning, executing, and delivering work in small batches minimizes waste. The ideal limit of working in small batches is the single unit. Creating one piece at a time with zero waste is the ideal of one-piece flow.
A Lean goal in software development would be to define a minimum batch size that delivers value to stakeholders. One such approach for product development is the minimum marketable feature (MMF). Not all software development is product development, so “marketable” is not always the right criterion. But every enterprise has some kind of goal that determines value, and this goal can be applied to a minimum stakeholder-valued feature.
There may be many steps involved with the creation of such an atomic work product. Some of the work that is scheduled in association with a product will directly add value to the product. Other work may be incidental or even unnecessary. It is important to distinguish between these categories. Value stream maps describe the sequence of value-adding improvements that are necessary to deliver the product. The “flow” in one-piece flow means that these activities should be performed in an uninterrupted sequence from start to finish.
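The value-add versus waste distinction can be made concrete by representing a value stream map as data. The following sketch uses invented step names and durations; the point is that classifying each step lets you compute how much of the total lead time actually adds value.

```python
# A minimal sketch of a value stream map as data. The step names and
# hour figures below are made up for illustration; each step is marked
# as value-adding or not, so process efficiency can be computed.

value_stream = [
    # (step, hours, adds_value)
    ("specify feature", 4, True),
    ("wait for approval", 16, False),        # queue time: waste
    ("design & code", 12, True),
    ("wait for test environment", 8, False),  # queue time: waste
    ("test & inspect", 6, True),
    ("deploy", 1, True),
]

total = sum(h for _, h, _ in value_stream)
value_add = sum(h for _, h, v in value_stream if v)

print(f"lead time: {total}h, value-add: {value_add}h "
      f"({100 * value_add / total:.0f}% efficiency)")
```

In this toy stream, roughly half the lead time is queue time, which is exactly the kind of waste that one-piece flow is designed to squeeze out.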
Dividing the work into independent business-valued features enables the staged delivery of those features. Your highest-value customers may receive most of their benefit from some subset of your planned features and would prefer to take earlier delivery of a smaller system. Revenue earned from these customers can fund your ongoing development in an incremental funding model.
An incremental delivery strategy may still be executed against a predetermined plan. An even more adaptive strategy is evolutionary delivery, where frequent deliveries are made in response to evolving market conditions and each delivery is a complete, fully functional system. Many hosted applications and open-source projects fit this model. Tom Gilb has written extensively on engineering practices that enable evolutionary delivery at a high level of rigor and quality.
The extreme interpretation of evolutionary delivery is continuous deployment, where very small incremental improvements are released to production at a high frequency. Flickr, for example, releases new code to production every 30 minutes. IMVU releases every 8 minutes. This level of operation transforms the metaphor of software development from the construction of an object to the refinement of a fluid.
Let the Market Pull Value from the System
If we are very flexible in our delivery capability, then we can respond to market conditions as they evolve. Rather than deliver a feature in a planned release 18 months from now, we can deliver it next month if the market demands. The faster we can plan and deliver a new feature, the more we can allow the market to pull value from the system.
Customers may not have the right understanding to express what they really need. Allowing the market to pull will still require interpretation from a producer. In order to understand what the customer really needs, you should go and see for yourself the need that the customer is trying to satisfy. The methods of Quality Function Deployment allow us to discover and express the voice of the customer in a way that we can translate into detailed and actionable product specifications.
If we deliver work in high-frequency iteration, we will have to find a way to make that work flow smoothly within the development organization. If that value stream includes operations that require the services of specialized or scarce resources, then we will need a way to coordinate the scheduling of those resources. A kanban-controlled workflow regulates the productivity of individual process steps in the value stream, so that each step produces only at the rate that is required by the pull of the market. A kanban system creates an internal market where downstream consumers pull value from upstream producers.
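The kanban-limiting idea above can be sketched in a few lines of code. The station names and WIP limits below are invented; what the sketch shows is the core rule that a station may pull new work only when it has a free kanban slot, so downstream demand regulates upstream production.

```python
# A toy kanban pull system. Stations and WIP limits are hypothetical;
# the rule being demonstrated is that a station pulls from upstream
# only while its work-in-process is under its kanban limit.

from collections import deque

class Station:
    def __init__(self, name, wip_limit):
        self.name, self.wip_limit = name, wip_limit
        self.queue = deque()

    def can_pull(self):
        return len(self.queue) < self.wip_limit

    def pull(self, upstream):
        # Pull one item from upstream only if a kanban slot is free.
        if self.can_pull() and upstream.queue:
            self.queue.append(upstream.queue.popleft())
            return True
        return False

backlog = Station("backlog", wip_limit=100)
dev = Station("dev", wip_limit=2)
test = Station("test", wip_limit=1)
backlog.queue.extend(f"feature-{i}" for i in range(5))

dev.pull(backlog)             # succeeds: dev has capacity
dev.pull(backlog)             # succeeds: dev now at its limit of 2
assert not dev.pull(backlog)  # blocked: WIP limit reached
test.pull(dev)                # downstream pull frees a dev slot
dev.pull(backlog)             # now succeeds again
print([len(s.queue) for s in (backlog, dev, test)])  # → [2, 2, 1]
```

Notice that nothing schedules the stations centrally: the `test` station's pull is what re-enables `dev` to start new work, which is the internal market described above.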
Pull systems enable us to create value just-in-time. Market value does not always come in uniformly-sized chunks and sometimes we can see changes in market trends before they are expressed as specific product demands. We can strike a balance between meeting the demand of the present and anticipating the needs of the future with just-enough-planning. Rolling wave planning scales planning detail according to time, so that we make detailed plans about things that will happen very soon, and more general plans about things that will happen in the future. The further the time horizon, the less specific the plan. Then we must revisit the plan frequently as we learn new things about market conditions.
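The "less specific the plan, the further the horizon" rule of rolling wave planning can be expressed as a simple function. The thresholds and granularity labels here are invented examples, not a prescription.

```python
# A sketch of rolling wave planning: planning detail scales down with
# the time horizon. The week thresholds and labels are hypothetical.

def plan_granularity(weeks_out):
    if weeks_out <= 2:
        return "task-level (hours, named owners)"
    if weeks_out <= 12:
        return "feature-level (story estimates)"
    return "theme-level (rough order of magnitude)"

for horizon in (1, 8, 26):
    print(f"{horizon} weeks out: {plan_granularity(horizon)}")
```

The revisiting step matters as much as the rule: as each wave approaches, its items are re-planned at the next finer granularity using whatever has been learned in the meantime.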
Real options are an approach to project management analogous to financial options in investment management. A real option allows us to purchase the right to defer a management decision to some future date. When that date arrives, we can exercise the option or allow it to expire. We can then build a planning portfolio of options instead of a portfolio of commitments. Rolling wave planning and staged delivery are examples of options thinking.
Set-based development means that we may pursue multiple competing design alternatives simultaneously, and defer commitment to any particular design alternative until we have more information. This is another example of options thinking. Distributed version control systems enable inexpensive branching and merging and thereby facilitate evolutionary selection of the fittest features and designs. Some well-known large open source projects operate according to a set-based strategy.
Visual Control
Another key principle of Lean thinking is that we should optimize the whole system, as a system. Uncoordinated local optimizations never lead to a system that is optimized as a whole. In order to optimize the system as a whole, it helps if we can see the system as a whole, and this leads us to the practice of visual control. Visual controls are usually created to illuminate the relationship between a local process and the system, in order to provide the operators of the local process with context.
A popular type of visual control in software development is the card wall. If a card wall is used in conjunction with a kanban system, then we might call it a heijunka board or kanban board. A heijunka board is used to implement a process called production leveling, where the parts of the system operate at a consistent rate, without booms and busts, and the flow of work through the system is smooth. A card wall can also be used to implement the practice of the andon signal, which is used to communicate a problem in the development process.
Visual controls work best when they are ubiquitous and inescapable, but we can always enforce a minimum shared understanding with a daily stand-up meeting.
Build Quality In
A Lean development process should not allow defective work to move downstream. A policy of one-piece flow enables a policy of 100% inspection, with an aim towards discovery of any defects at the earliest possible moment. This translates directly to software development in the form of design review and code inspection, which are greatly facilitated by small batch size and pull workflow. When a defect is discovered, a Lean process should stop the line and apply root-cause analysis in order to discover the source of the defect. If the root cause is repeatable and preventable, then we should adapt our process to prevent that kind of defect in the future.
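The stop-the-line policy can be illustrated as a pipeline gate. The check functions and change records below are invented; the behavior being demonstrated is that the first failed inspection halts the whole batch, rather than letting the defective item travel downstream.

```python
# A toy "stop the line" gate. The checks and change records are
# hypothetical; the point is that any failed inspection raises
# immediately (the andon signal) instead of passing work downstream.

def has_tests(change):
    return change.get("tests", 0) > 0

def reviewed(change):
    return bool(change.get("reviewer"))

CHECKS = [("missing tests", has_tests), ("unreviewed", reviewed)]

def gate(changes):
    for change in changes:
        for reason, check in CHECKS:
            if not check(change):
                # Andon: signal the problem and stop the line.
                raise RuntimeError(f"line stopped: {reason} in {change['id']}")
    return changes

gate([{"id": "c1", "tests": 3, "reviewer": "ann"}])   # passes the gate
try:
    gate([{"id": "c2", "tests": 0, "reviewer": "bob"}])
except RuntimeError as e:
    print(e)  # → line stopped: missing tests in c2
```

In a real process the `except` branch is where root-cause analysis begins; if the cause is repeatable, a new check is added to `CHECKS` so that class of defect is prevented from then on.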
Jidoka is a philosophy of optimizing the balance of work between people and machines, so that people only do work that people are good at and machines only do work that machines are good at. Lean processes use automation, but always in the service of and under the control of a human operator. A Lean process should prefer simple tools. Over-reliance on complex tools and technology may lead a business to be enslaved by the capability of their technology and disconnected from customer value. Lean tools should be as flexible as the marketplace they serve.
Continuous integration is an automation practice that reduces work-in-process and can greatly facilitate the early discovery of defects. Design-by-Contract instruments source code to self-detect design anomalies. Automated design verification can then be accomplished with static analysis. Specification-based verification strikes a nice balance between work that people are good at (expressing intent) and work that machines are good at (exhaustive search). Requirements specifications can also be made verifiable in this way.
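Design-by-Contract can be sketched in plain Python using assertions. The `transfer` function below is a hypothetical example; its preconditions and postconditions are executable statements of intent, so a contract violation surfaces at the point of the defect rather than somewhere downstream.

```python
# A minimal Design-by-Contract sketch. The transfer function is a
# hypothetical example: preconditions state what the caller must
# guarantee, postconditions state what the function must deliver.

def transfer(balance, amount):
    # Preconditions: the caller must supply a valid withdrawal.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: insufficient funds"

    new_balance = balance - amount

    # Postconditions: the operation delivers what it promised.
    assert new_balance >= 0, "postcondition: balance non-negative"
    assert new_balance + amount == balance, "postcondition: value conserved"
    return new_balance

print(transfer(100, 30))  # → 70
```

Once contracts like these are in the source, static analysis and specification-based tools have something precise to verify against, which is the human-intent/machine-search balance described above.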
Continuous Improvement
The PDSA (or PDCA) cycle is a time-tested approach to continuous improvement. Plan the work that you need to do, Do the work, Study the results of the work you have done, and Act on what you learn in order to improve. The application of Lean principles creates an environment where the PDSA cycle can be sustainable and effective.
Standard work is the foundation of continuous improvement. Standard work in Lean means that the operators of a process reach a consensus about appropriate results of a process and the best known methods to achieve them. A work standard only remains the standard until a better way is discovered, which then becomes the new standard. These standards should never be imposed by an external agent. In software development, such standards may take the form of design rules, coding rules, checklists, inspection procedures, or even the workflow itself. PDSA is the process which drives the evolution of standards.
Throughput metrics provide useful information to discover the causes and effects of production problems. The primary throughput metrics are time-series metrics, so we can associate changes in the metrics with known historical events. Inventory levels are a leading indicator of lead time, which we can use to guide management action. A stable throughput process is predictable and enables estimation of future deliveries based on historical data, instead of guessing.
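The claim that inventory is a leading indicator of lead time follows from Little's Law (lead time = WIP / throughput). The weekly samples below are invented, but they show how rising work-in-process predicts longer lead times before any delivery is actually late.

```python
# Throughput metrics via Little's Law: lead time = WIP / throughput.
# The weekly WIP samples and throughput rate are hypothetical data.

wip_per_week = [8, 10, 14, 18]   # items in process, sampled weekly
throughput = 2.0                 # items completed per week (stable)

for week, wip in enumerate(wip_per_week, start=1):
    lead_time = wip / throughput  # projected weeks to deliver an item
    print(f"week {week}: WIP={wip:2d} -> projected lead time {lead_time:.1f} weeks")
```

Because these are time-series metrics, the week-three jump in WIP can be matched against known events (a holiday, a blocked dependency) and acted on long before customers feel the slower lead time.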
References
- Corey’s blog: http://www.LeanSoftwareEngineering.com
- Lean Thinking, James Womack and Daniel Jones
- Software by Numbers, Mark Denne and Jane Cleland-Huang
- Principles of Software Engineering Management, Tom Gilb
- Lean Software Strategies, Peter Middleton and James Sutton
- Implementing Lean Software Development, Mary and Tom Poppendieck
- The Principles of Product Development Flow, Donald G. Reinertsen
- Scrumban, Corey Ladas
“Flickr, for example, releases new code to production every 30 minutes. IMVU releases every 8 minutes.”
Statements like this, if not couched in the right context, can strike fear into people rather than excitement. For a developer: does Lean all of a sudden mean I only have 8 minutes to implement my next assignment? For a team: do we have to do everything in 8 minutes?
I follow the idea that you’re postulating here, and am an ardent supporter of keeping things simple and delivering small features quickly. With tools and the right automated testing/validation in place the flow can be achieved quite naturally.
It’s not about doing everything in 30 minutes or 8 minutes. It’s about the idea that if everyone is continually producing at their own rates, and the correct systems are in place, then a release can happen at any time with minimal structure or ceremony whenever anyone completes what they happen to be working on. Whether it took them 8 minutes or 8 days to implement, it can be released right now.
Thanks for the post!
Mr. Hericus
Really informative follow up to the Lean Software Development on SourcesOfInsight.
I had questions about feature cost estimation that got answered in your blog :D.
I have always been the evolutionary delivery guy. But the other techniques, especially continuous deployment, might be very useful in my environment.
I had two dumb questions wrt “stop the line” – cross reference it to your blog post on “p&p on s/w engg workflow” – parallel activities. For the sake of argument, consider that a fault occurred on Design Interface. Now, will the Design logic step also stop till a resolution is identified?
Another question. Is the Lean development process from “specify” -> “deploy” or can it be segmented into each task?
Sorry for taking bits n pieces from shapingsoftware and leansoftwareengineering :D.
Great informative article, I could have used this article 2 years ago when I was first starting development of my product; however, fortunately I can avoid future mistakes now :), thanks again!
–Kevin
Hi Praveen,
The stop-the-line case from the parallel workflow example might not stop the “design logic” step immediately. If that workflow segment is kanban-limited, then the second path will continue to process any work that is already underway. If the problem on the first path is not resolved before the second branch completes, then the second branch will stall. This might be a good thing, because the people working on the second branch can use their capacity to try to help the first branch resolve their bottleneck instead of building up new design-in-process inventory that will then have to be burned off at some downstream bottleneck.
The second branch might also stall if it is determined that the root cause was a defect in the specification step that preceded both design steps.
There is no one Lean development process, so the first step is usually Value Stream Mapping to show how people really work in your business.
Fantastic article, Corey! This succinctly states many concepts I have been talking to my management team about, and does so with lean vocabulary. I am going to pass it around widely. The focus on “lean patterns” (that’s the way I view the keywords) is effective.
Can you provide some pointers in the way of metrics to feed the idea to more data-oriented personalities? I agree with the concepts in the article, but the art of persuasion usually requires back-up data, and specifically data from the software industry.
I had never heard of PDSA specifically, but it sounds just like the Six Sigma DMAIC concept. The continuous improvement ideas are also stressed in Human Performance Technology.
Hi James!
PDSA is also known as the Deming cycle, which is an evolution of the earlier PDCA / Shewhart cycle.
You’ll find metrics galore in the new Don Reinertsen book, but the core are your basic throughput metrics: throughput, cycle time, lead time, work-in-process, station availability. “Software by Numbers” also makes a solid economic argument.
We’ve been seeing 2-4x productivity improvements over comparable teams using Scrum-like methods (e.g. http://www.ddj.com/architect/218000215), and even bigger improvements over teams using traditional phase/gate methods. But of course, this stuff is really about scale and enterprise integration. Team performance improvement is mostly a bonus feature.
The most interesting and valuable ideas in Lean are its focus on the simple over the complex. The idea that excess process is wasteful, and even dangerous, shouldn’t need to be said, but it does.
The area where I believe Lean software development practitioners betray Lean’s roots is in specifying particular technologies or patterns. The whole point is that process should be specified at the team or organization level, from the ground up. One example would be declaring that you can’t have Lean without test-driven development.
Hi Ian,
I very much agree with you about the overspecification of tools and practices, and you picked a good example. I like User Stories, and sometimes I use them because sometimes that is an appropriate method. Other times it isn’t. I’m glad I know more than one requirements analysis technique. Right tool for the job. No golden hammers.
Hi Corey,
I guess I am somewhat late to this discussion – but hopefully you will see this and respond. I am still early in my learning of Lean Software Development – and your two posts were very instructive.
However, I am left with a nagging question – how different is all this from Agile or Iterative development – both of which also talk of many of the same things such as delivering in small chunks, delivering continuously for faster market/ customer feedback, etc.?
A secondary question – based on the current capabilities of various software engineering and project management tools, what can software teams actually practice of Lean methods? There are a lot of tools that support Agile development, including the one my company produces – are there tools that explicitly provide Lean related processes, workflows, metrics and measures? Or are Lean practitioners able to do Lean projects using standard/ Agile tools in the market – in other words – is it more to do with the methods than the tools?
Thanks,
Mahesh