Shaping Software: Patterns and Practices for Software Success

Customer Connected Engineering at patterns & practices
by JD, December 24, 2009

This post is a write-up of how we do Customer-Connected Engineering on the Microsoft patterns & practices team.  Our Customer-Connected Engineering process has been a key part of our success and impact in the software industry.  While this write-up is about how patterns & practices implements Customer-Connected Engineering, you might find that you can tailor and adapt some of the principles for your own scenario or context.

Customer Connected Engineering (CCE) at patterns & practices

Customer Connected Engineering (CCE) is a collection of practices for engaging customers during the planning, development, and release of code and narrative guidance. Instead of simply collecting customer requirements up front, or getting feedback after the fact, it’s continuous involvement of customers throughout our life cycle. By involving our customers in the process, we improve transparency and increase the probability of shipping what’s most valuable to our customers. By partnering with customers, we improve our ability to understand end to end scenarios as well as priorities. By shortening the cycles of feedback we also improve our ability to learn, to reflect, and to adapt our deliverables as we clarify the wants and needs of our customers.

At patterns & practices, our approach is optimized for external and largely unknown customers. Internal projects with identifiable stakeholders can also use CCE, but it will take a different form.

Overview

At patterns & practices, we use an approach we call Customer Connected Engineering (CCE). As the name implies, we engage with customers throughout the project. Our customers help us ship better software and deliverables that meet their needs. At the heart of Customer-Connected Engineering is the Customer Advisory Board: a set of customers that helps influence what we build. The Customer Advisory Board helps identify scenarios, prioritize them, comment on designs, test early preview bits, and give timely feedback during planning and development.

In addition to the advisory board, we have an open community that any community member can participate in. After the initial phases of some weeks, we typically start releasing code and guidance “drops” to the community. The community is also where we provide the main support for a deliverable, both during development and after release. Internally, we ensure alignment with product direction through a set of technology stakeholders, typically from the product groups that provide and own the platform technologies. Agile projects assume that the end customer, or a proxy, is engaged in development. CCE ensures customer participation for a group inside Microsoft, like patterns & practices, that targets many end customers.

Customer Connected Engineering Overlay

We use a combination of XP / Scrum for executing projects at patterns & practices. So if you’re doing XP/Scrum most of this isn’t new. The following diagram is an overlay of customer-connected activities on top of our development process:

CustomerConnectedEngineering2

The activities on the left side of the diagram above are core activities in our patterns & practices projects. On the right side are customer-connected activities. Here is a brief description of each:

  • Customer Advisory Board. The Customer Advisory Board is a group of customers that act as a sounding board for the project. This is a smaller set of customers that act as a proxy for the rest of our customer base. We build a Customer Advisory Board to help us stay on track with customer demand in an agile way (see Customer Advisory Board section below).
  • Stories / Scenarios. Customers share stories and scenarios. Stories and scenarios are narratives that capture and share usage scenarios for your product. The scenarios help show requirements in context.
  • Backlog Prioritization. Customers help prioritize by providing input for the product backlog, the sprint backlogs, and iteration planning sessions. The advisory board provides prioritization input on an ongoing basis. For some projects, we open up a survey to the broader community to help with prioritization in the earlier phases, before Vision / Scope.
  • Frequent Delivery. Unless you deliver frequently, what are you asking for feedback on? We deliver frequently to the Customer Advisory Board to give them something concrete to provide feedback on, and also to give broad visibility outside the board. This means being able to ship high-quality code (or wireframe prototypes, or chapter drafts for written guidance) every iteration: always being “done”.
  • Feedback. Customers provide feedback during each iteration and for release. What’s important is that it’s earlier instead of later. It helps us course correct midstream instead of miss the mark at the end of the project.
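
To make the backlog-prioritization activity concrete, here is a minimal sketch of aggregating advisory-board scenario votes into a ranked backlog. This is a hypothetical illustration: the customer names, scenarios, and the `rank_backlog` helper are invented for the example and are not part of the p&p process or tooling.

```python
from collections import defaultdict

def rank_backlog(votes):
    """Aggregate per-customer scenario votes into a ranked backlog.

    votes maps a customer name to a list of (scenario, priority) pairs,
    where priority 1 is that customer's most important scenario.
    """
    total_priority = defaultdict(int)  # lower total = more important
    mentions = defaultdict(int)        # how many customers asked for it
    for prefs in votes.values():
        for scenario, priority in prefs:
            total_priority[scenario] += priority
            mentions[scenario] += 1
    # Rank by breadth of demand first, then by priority strength.
    return sorted(total_priority, key=lambda s: (-mentions[s], total_priority[s]))

votes = {
    "Contoso": [("single sign-on", 1), ("caching", 2)],
    "Fabrikam": [("single sign-on", 2), ("data access", 1)],
}
print(rank_backlog(votes))  # "single sign-on" ranks first: both customers want it
```

In practice the board’s input is qualitative as much as numeric; a tally like this is only a starting point for the prioritization conversation.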

Most people would recognize several items in the CCE column as standard Scrum/XP: stories/scenarios, prioritization, demos, drops, and feedback. What we do that Scrum/XP does not cover is a set of practices that help a Product Owner gather feedback from a user community and aggregate it for the development team: the Customer Advisory Board, CodePlex forums, and advisory board selection and calls.

Why Customer Connected Engineering

Shipping the wrong thing is expensive. Customer Connected Engineering, when done properly, provides more benefit than tax. Some of the benefits include:

  • Customers help you identify relevant scenarios.
  • Customers help you verify, prioritize, rationalize and refine the scenarios.
  • Customers evaluate your deliverables against the scenarios and provide feedback.
  • Customers better understand your trade-offs and have more visibility into your process. This builds trust and increases probability for adoption and usage.
  • Customers become your greatest evangelists.

The benefits of Customer Connected Engineering largely depend on both how engaged your Customer Advisory Board is and how representative they are of your target customer base.

Guiding Principles

One of the ways to successfully adopt a practice is to focus on the principles. The principles help you avoid getting stuck on implementation details. Implementation will vary from project to project, but the core concepts will stay the same. Here are some principles we’ve found to improve Customer Connected Engineering:

  • Set the frame. A frame is how you look at things. We use a frame to anchor discussions, and create something that people can react to. The more thoughtful the frame, the higher the quality feedback we get. We create the frame by figuring out the customers, their needs, and the business goals. We use the frame to help focus feedback and dialogue. For example, one frame could be an architectural overview. Another frame can be your product backlog. Our most common approach is to use stories and themes.
  • Shared problems. The customers we select for the Customer Advisory Board need to have first-hand experience with the problem. They need to care and be involved in the solution.
  • Have an opinion. Our opinion is one piece of the pie. We need to balance the economics, the customer scenarios, the product direction, the product team, the field, support, and community experts. We also need to balance generalizing a solution for more reach against contextualizing a problem so it’s specific enough to be useful. Without an opinion, we would get randomized. We have an opinion so we can rationalize the feedback and priorities from various customers and perspectives. Each customer comes from a different perspective, and it’s our job to frame the feedback and understand those perspectives. We also need to know our own assumptions, so that when people challenge them, we understand why we are changing our opinion. For example, we might have an idea for a user experience; our customers then provide their reaction, which leads us to revisit our design.
  • Synthesize the feedback. Here we step back and look across the scenarios and requirements. We look for common denominators. We prioritize across our highest ROI items.
  • Scenarios are King. Scenarios are the backbone of Customer Connected Engineering. The end-to-end scenarios are one of our most important outcomes. It’s one thing to look at a list of scenarios in a document. It’s another to walk through the stories and scenarios with our customers. Our customers can share their goals and their stories in detail. We suggest having a set of straw-man scenarios before you engage with the advisory board.
  • Transparency. Transparency is letting our customers see inside our process to understand how we do things and how they work. It’s sharing our decision making approach so that customers understand how we make trade-offs. It’s also about sharing design goals as we know them. It’s also about making our customers aware of important changes along the way, instead of at the very end when we ship. It’s opening up the door to the workshop and letting customers watch and participate as we build our deliverables. When they understand why we made a decision or tradeoff, we are more likely to have a satisfied customer, even if they disagreed with a specific decision.
  • Incremental value. This is about finding a way to flow value. As the project progresses, customers should get a sense that we are delivering value along the way.
  • Fail early, fail often. We share releases with our customers so they can share feedback. We don’t want to be surprised when we’re ready to ship. We share early and share often. We use the feedback to improve.
  • Timely feedback. One of our biggest benefits of Customer Connected Engineering is the timely feedback we receive.
  • Stay flexible. We stay responsive to feedback. Acting on the feedback shows our customers that we value their input and that it makes a difference. The more they see the impact, the more they engage.
  • Real world solutions. When we have a working implementation, we have a significant starting point. We try to find working examples of specific customer solutions that solve some of the same scenarios and challenges we’re facing. For example, to speed up our success, rather than chase our competition, we look to working solutions.
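
The “prioritize across our highest ROI items” step in the principles above can be sketched as a simple value-over-cost sort. The item names and numbers here are hypothetical, invented for illustration; real ROI judgments at this stage are far less mechanical.

```python
def prioritize_by_roi(items):
    """Sort candidate items by ROI = estimated value / estimated cost.

    items is a list of (name, value, cost) tuples, where value and cost
    are rough relative estimates (cost must be nonzero).
    """
    return sorted(items, key=lambda item: item[1] / item[2], reverse=True)

candidates = [
    ("input validation guidance", 8, 2),     # high value, cheap
    ("full reference application", 13, 13),  # high value, expensive
    ("deployment checklist", 5, 1),          # modest value, very cheap
]
ranked = [name for name, _, _ in prioritize_by_roi(candidates)]
print(ranked)  # the cheap, high-leverage items float to the top
```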

Customer Advisory Board

When we create our Customer Advisory Board, we want to be selective. The customers we choose need to have deep insight into the problems we’re working on. We search for people who are respected in the community both for their understanding of the technology and for building real-world solutions. We focus on customers that are trying to solve the same challenges, and that have a serious interest in leveraging what we develop or learning from it. We want customers that are “early adopters” and still representative of our main target customer base. Customers that just want to track how we’re doing aren’t going to help. We need customers that will actually run alongside us, taking our work and applying it, so we get specific feedback. We want customers who aren’t shy about pushing back, scrutinizing our backlog, and criticizing our direction and execution.

Selection

We build a board that is representative of our target audience including various customer types:

  • Key contributors – We consider engaging key contributors, if we have found a reference solution that addresses some of the core challenges.
  • System integrators – We consider leveraging system integrators, which can aggregate requirements reflecting multiple customers. We verify that they are still representative of the mainstream.
  • ISVs – We consider leveraging partner ISVs, which may cover extensions beyond what our mainstream customers need.
  • MVPs – We consider engaging MVPs as another source. Often they are early adopters themselves and work with early adopter customers.
  • Customers themselves – large, small, and medium.

Stories / Scenarios

A lot of software projects fail because they miss the scenarios. It’s one thing to imagine or dream up scenarios, it’s another to get them directly from customers and to get them properly articulated in an unambiguous way. A lot of working features don’t necessarily aggregate up into working scenarios, or even the right scenarios. The value of our deliverable can be measured by the problems it solves. Ultimately, we can evaluate our deliverable against actual usage scenarios.

Prioritization

There are many opportunities for our Customer Advisory Board to help us prioritize and make trade-offs throughout the project. For example, we get input when we prioritize our product backlog, when we prioritize our iteration backlog, and when we prioritize stories during iteration planning.

We make it obvious that we have fixed deadlines and limited resources, which means our main variable is scope. This often encourages board members to engage more actively, because it gives them a clear sense of the impact of their feedback.
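
The fixed-deadline, flexible-scope trade-off can be sketched as a capacity cut over the prioritized backlog. This is a hypothetical illustration (the story names, costs, and the `cut_scope` helper are invented), not p&p tooling:

```python
def cut_scope(backlog, velocity, iterations_left):
    """Split a prioritized backlog into in-scope and cut items.

    backlog is a list of (story, cost) in priority order; velocity is the
    cost the team finishes per iteration. Time is fixed, so scope flexes.
    """
    capacity = velocity * iterations_left
    in_scope, cut = [], []
    for story, cost in backlog:
        if cost <= capacity:
            in_scope.append(story)
            capacity -= cost
        else:
            cut.append(story)  # below the cut line; revisit with the board
    return in_scope, cut

backlog = [("logging", 5), ("caching", 8), ("validation", 3), ("wizard UI", 13)]
in_scope, cut = cut_scope(backlog, velocity=4, iterations_left=3)
print(in_scope, cut)  # 12 points of capacity: "caching" and "wizard UI" fall out
```

Showing the cut line explicitly is one way to give board members a concrete sense of how their prioritization changes what ships.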

In Summary

  • Customer Connected Engineering (CCE) is a collection of practices for engaging customers during the planning, development, and release of code and narrative guidance.
  • Customers help ship better deliverables that meet their needs by providing input and feedback throughout the project.
  • Customer-Connected Engineering activities can overlay on top of existing project activities.
  • The Customer Advisory Board is a key part of Customer-Connected Engineering. An effective Customer Advisory Board includes customers that have deep insight into the problems that the project is focused on.
  • Customers help provide the scenarios that guide the work.
  • Customers help prioritize the work.

Acknowledgements

I’d like to thank the following people for their review and contributions:

Ade Miller, Blaine Wastell, Bob Brumfield, Chris Tavares, Don Smith, Eduardo Jezierski, Erwin van der Valk, Eugenio Pace, Francis Cheung, Grigori Melnik, Javed Sikander, John deVadoss, Michael Puleio, Per Vonge Nielsen, Tom Hollander

Customer, Problem, Competition, and Success
by JD, December 15, 2009

This is a simple frame for testing your vision, your pitch for a project, or your proposed solution.  One of my mentors uses it all the time to test the thinking and to make sure the team stays on track.  I’ve adopted it because it’s a great way to stay focused on the basics.  Don’t let the basics get in the way of great results.

The frame is pretty simple to use.  You simply walk the categories and ask questions to explore the thinking:

  1. Who is the customer?
  2. What’s the problem?
  3. What’s the competition?
  4. What does success look like?

Here’s how they help:

  1. Know your customer.  Your customer is a strategic decision.  Asking who the customer is forces you to decide who’s in and who’s out, which helps you figure out what’s relevant and what is not.  Once you know who your customer really is, you can build empathy for relevant customer problems.  It helps you determine whether there is a market and whether you will be relevant to your customer.  It also helps you identify your ultimate test bed.  If your customers aren’t happy, you missed the boat.
  2. Don’t be a solution looking for a problem.  Asking what the problem is forces you to ask whether you are focusing on the right problem.  Is it really a problem?  Do your customers think so?  Is this the next best thing to work on?  Are you playing to your strengths?  Knowing the problem can also help you build customer empathy.
  3. Know what the competition is doing.  You should know what’s been done, and you should be clear on your differentiation.  Are you competing on the problem, the approach, or the implementation?
  4. Know what success looks like.  Asking what success looks like forces you to figure out your tests for success.  In this case, I’ve found it helpful both to be able to draw your vision and to know the key measures that you can evaluate.  The sooner you can draw your vision, the earlier you can beat up the idea to make it better, as well as get people on board.  When you figure out what to measure, it’s important to consider who the opinion leaders are, who the key stakeholders are, and who your key customers are.  Chances are, the tests for success can be very different, especially if your stakeholders lack customer empathy.

It’s a simple frame, but it can help keep you focused on the right things.

Photo by BruceTurner.

Lessons in Software from Eric Brechner
by JD, December 7, 2009

Editor’s note: This is a guest post from Eric Brechner. Eric is the author of the book and blog I.M. Wright’s “Hard Code.”  At Microsoft, Eric is Director of Development Excellence on the Engineering Excellence team.  His group is responsible for improving the people, process, and practices of software development across Microsoft.  Eric has more than 20 years of experience in the software space, including a tour of duty on the Microsoft Office team.

When I first met Eric, several years ago, he struck me as somebody with opinions and insight.  Time and again he impressed me with his words of wisdom and his perspective on everything from software to career and to life.  He always has a good answer to the tough problems, and never fails to make me think.

Without further ado, here’s Eric on his Lessons in Software … 

Rather than focus on software engineering and craft, I’d like to concentrate on admirable attributes of software developers as human beings. These are attributes of people I like to work for, work with, and have working for me.

The attributes fall into two categories: strength and balance. Strength attributes form the foundation of someone’s being. Balance attributes characterize how someone deals with opposing ideals. Clearly, this is going to be a philosophical discussion. Thankfully, it’s also going to be short.

I chose three strength and three balance attributes. I like working with a diverse set of people, so I narrowed these admirable attributes to just the fundamental set that yields an interesting individual I respect.

Strength attributes:

  • Insightful. Smarts alone don’t cut it. There are plenty of smart people. It’s insight that changes the game. Insight drives the breakthrough. Insight directs the decision. Some people focus on getting the answer. That’s boring—there are lots of answers. Great people focus on understanding the problem—the customer, the scenario, the competition, the partners, and their situation. Great people are insightful. I love working with insightful people.
  • Reflective. Being reflective is necessary to being insightful. Reflective people are curious and seek understanding, both of the problem and of themselves. They feel driven to constantly improve every aspect of who they are and what they do. Reflective people also tend to be self-deprecating and have a broad sense of humor. I delight in working with reflective people.
  • Principled. “Whatever,” is something you rarely hear from a person of principle. Principled people do things for a reason. They have integrity, almost by definition. While I may not agree with the principles of a colleague, I respect him or her. You can trust people of principle to stay true to their beliefs and follow through, whatever the pressure or circumstances. I depend upon principled people.

Balance attributes:

  •  Serving and advocating. You must serve your customers, your team, your management, and your company. It’s not about you, it’s about the customer and the business. However, if you are purely selfless, your career and your ideas will go nowhere. You must advocate for yourself and your innovative ideas. Balancing these two demands in a way that promotes your contributions yet is never about you is challenging for most people. Great people serve and advocate with dignity and grace.
  • Execution and slack. You must execute on projects and deliver on commitments. Great ideas are nothing if they never reach the hands of our customers. However, if you are purely tactical without thoughtful strategy, planning, and design, you will make critical mistakes, burn out yourself and your team, and deliver products and services that lack quality, value, and emotional connection. Balancing execution and slack time continues to be one of the great challenges of software development. Great people clearly prioritize their work, commit to difficult but achievable well-defined goals, and jealously protect their slack time.
  • Trust and risk. You must trust your coworkers and staff. You can’t accomplish anything truly impactful alone. However, if you rely on others to help, there is always the possibility that they may misunderstand your instructions and intent or be unable to produce the results you require on time, regardless of what controls and processes you put in place. Balancing trust and risk is the subject of countless management books and theories. Great people care deeply about their coworkers and staff, develop strong trust relationships with integrity and transparency, and rely upon those relationships to inform and adjust an acceptable level of risk.

If these attributes were easy to embody, the world would be a different place. It takes commitment and courage to be insightful, reflective, and principled. It takes thoughtful and unending vigilance to delicately maintain the balance of serving and advocating, execution and slack, and trust and risk.

The right balance at the beginning of a project is often quite different from the appropriate balance at the end. People challenge your principles, doubt your insights, and question your faith in yourself and your team. You must be strong and believe in yourself, yet balanced and dedicated to those you serve. It’s not easy, and that is why I admire people who embody these attributes.


Best Practices at patterns & practices
by JD, November 30, 2009

The Microsoft patterns & practices team has been around since 2000. The patterns & practices team builds prescriptive guidance for customers building applications on the Microsoft platform.  The primary mission is customer success on the platform.  As part of that mission, patterns & practices delivers guidance in the form of reusable libraries, in-tool experiences, patterns, and guides.  To put it another way, we deliver code-based and content-based guidance.

I’ve been a part of the team since 2001.  Along the way, I’ve seen a lot of changes as our people, our processes, and our catalog of products have evolved.  Recently, I took a step back to collect and reflect on our best practices.  Some practices were more effective than others, and we’ve lost some along the way.  To help reflect on and analyze the best practices, I created a map of the key practices organized by discipline.  In this post, I’ll share the map (note that it’s a work in progress).  Special thanks to Ed Jezierski, Michael Kropp, Per Vonge Nielsen, Shaun Hayes, and Tom Hollander (all former patterns & practices team members) for their contributions and insights.

Best Practices by Discipline
The following map lists the key practices used by the patterns & practices team over the years, organized by discipline.

Management Team
  • Milestone Reviews
  • Product Portfolio (correlated with customer & business challenges/opportunities)
  • Team development  (leadership skills, communication skills, … etc.)
  • Backlog
  • Connection with customers and partners
  • Fireside chats
  • Meeting with key stakeholders in the app plat space
  • People review
  • Scorecard management
  • Tracking overall team budget
  • Weekly Status
Architect
  • Articulate the inner (scope) and outer (context) architecture (these involve time)
  • Articulate technical principles – drive technical tradeoffs discussions
  • Be aware of roadmaps of the company, and build trust to make sure they are current
  • Be familiar with competitive tech.
  • Customer connection.
  • Groups’ technical strategy and product model.
  • Know actionable industry trends.
  • Overall design with functional breakdown.
  • Relationship with key influencers in the product groups.
  • Spikes / explorations including new approaches (technology and process)
  • Technical challenges
Development Team
  • Ship running code / guidance at the end of each iteration
  • User Stories
  • XP / Scrum with test-driven-development
  • Continuous build and integration
  • Iterations
  • Retrospectives
Product Management
  • Asset Model
  • Customer Surveys (feature voting, exit polls)
  • Standardized product model (app blocks, factories, guides, etc.)
  • Blogging throughout project (planning, development, release)
  • Case Studies
  • Community Lead
  • Customer Advisory Board
  • Customer Proof Points
  • Own Vision / Scope
  • Portfolio Planning
  • Project dashboard
Program Management
  • 5 customers stand behind it
  • AAD Sessions (Accelerated Analysis and Design)
  • Backlog
  • Exec Sponsor
  • Product owner from M0
  • Quality over scope.
  • Scorecards
Release Checklist
  • Release Checklist
  • Release Mail
Test Team
  • Automated tests
  • Focused on overall quality (functionality is tested by dev)
User Experience Team
  • Authoring Guide
  • Content Spec (Content scenarios and outline)
  • Doc Tool (Template for standardizing content formatting)

Some practices are obvious, while some of the names of the practices might not be.  For example, “Fireside chat” is the name of our monthly team meeting, which is an informal gathering and open dialogue.   I may drill into some of these practices in future posts, if there’s interest and there are key insights to share.

Drive from Quality
by JD, November 11, 2009

My recent road trip was a great reminder of how durable quality is.  As I passed through familiar territory, it was interesting to see how many buildings and places stood the test of time.  Whether it was a business or a building, it was quality that survived in the long run.  Some of the restaurants I remembered were gone, but every restaurant I remembered as high quality was still around.

Competing on Price Fails in the Long Run
Competing on price failed, time and again.  There was no customer loyalty when price was the play.  There was no compelling distinction beyond price.  Chasing the price play meant getting priced out of the market by somebody better or cheaper.  There are only so many corners you can cut before your value is insignificant.  The quality play, on the other hand, is focused on differentiation and distinction in terms of value.  In a global market, where cycles of change are faster, competing on price is a game I just don’t want to play.

Do You Stand Behind Your Work?
One of my most important tests, and it’s a simple gut check, is: do you stand behind your work?  It’s a cutting question.  When your results are something you’re proud of, quality is your game, continuous improvement is your way, and excellence is your bar, you set yourself up for success.  When you can put yourself into your work, the journey becomes as enjoyable as, if not more enjoyable than, the destination.

In times of change and uncertainty, driving from quality is a guiding principle that helps us find our path.

Photo by Cornell University Library.

Vision Scope Examples
by JD, September 24, 2009

On the Microsoft patterns & practices team, we use Vision / Scope as a key milestone.  It’s where we frame the problem, identify the business opportunity, and paint a vision of the solution.  It’s a forcing function to get clarity on the customer, their scenarios, and our scope for the project.  We generally use a “fix time, flex scope” pattern, so this means having a candidate backlog that we prioritize with customers.

On the execution side, we expect to know the team, key partners, the budget, the schedule, and the deliverables.  We also need to know the risks and their mitigations.  At Vision / Scope, the real key is first selling people on the vision, and then selling them on the execution.  It’s basically about answering “why should we do this?” and “why now?”  This can be either about reducing pain or exploiting an opportunity.  It’s also about answering these questions in the context of trade-offs.  When you can tell a compelling story from problem to solution, and show how you’ll get there incrementally with a team people trust, you dramatically increase your odds of getting a “Go” decision and the support you need.

Vision / Scope Baseline
This is my rough sketch of the key pieces I need in my Vision / Scope presentations for success:

Vision / Scope
  • Agenda
  • Problem
  • Vision
  • Approach
  • Prioritized Tests for Success
  • Scope
  • Key Activities
  • Deliverables
Execution
  • Team
  • Budget
  • Schedule
  • Risks
  • Asks
  • Go/No Go
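
As a toy illustration of using the baseline as a checklist, the sketch below flags baseline items that a draft slide deck does not yet cover. The deck contents and the `missing_items` helper are hypothetical, invented for this example, and the baseline item names are lightly normalized from the list above:

```python
# Baseline Vision / Scope items (normalized from the outline above).
BASELINE = [
    "Problem", "Vision", "Approach", "Prioritized Tests for Success",
    "Scope", "Key Activities", "Deliverables", "Team", "Budget",
    "Schedule", "Risks", "Asks", "Go/No Go",
]

def missing_items(slide_titles):
    """Return baseline items that no slide title covers (case-insensitive)."""
    covered = {title.strip().lower() for title in slide_titles}
    return [item for item in BASELINE if item.lower() not in covered]

draft_deck = ["Agenda", "Problem", "Vision", "Approach", "Scope", "Team", "Schedule"]
print(missing_items(draft_deck))  # the execution-side items are still missing
```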

Vision / Scope Examples
Here are some examples of various Vision / Scope slides from over the years:

Example Items
Example 1
  • Problem
  • Vision
  • Approach
  • Prioritized Tests for Success
  • Scope
  • Key Activities
  • Deliverables
  • Team
  • Schedule
  • Budget
  • Asks
  • Go/No Go?
Example 2
  • Vision / Strategy
  • Solution Concept
  • Scope
  • Outcomes
  • Deliverables
  • Scorecard
  • Team
  • Budget
  • Burn Rate
  • Schedule
  • Go/No Go?
Example 3
  • Agenda
  • Customer Proof Points
  • Business and Technical Scenarios
  • Project vision
  • Project Scope
  • Target Customer
  • Development Strategy
  • Project Objectives
  • Go-to-Market Release Strategy
Example 4
  • Agenda
  • Project Justification
  • Team and Extended Teams
  • Project Vision
  • Business and Technology Threats
  • Primary Business Scenario
  • Associated Technical Challenges
  • Quotes from Target Market
  • Top 5 Customer Requests
  • Potential Beta Customers
  • Project Scope
  • Project Deliverables
  • Assumptions
  • Risks
  • Development Strategy
  • Delivery Options
  • Single Release Schedule and Budget
  • Dual Release Schedule
  • Dual Release Budget
  • Go-to-Market Strategy
  • Current Status/next Steps
Example 5
  • Agenda
  • Customer Proof
  • Project vision
  • Business and Technical Scenarios
  • Scope
  • Pre-Release Strategy
  • Go-to-Market Strategy
  • Goals
  • Current Status/Next Steps
Example 6
  • Agenda
  • Project Lifecycle
  • Habits and Practices
  • Scenario-Based Guidance
  • What is a baseline architecture?
  • Reference Architecture Space
  • Baseline Architecture Applied
  • How will customers use it?
  • Vision
  • Strategy
  • Why create this baseline architecture?
  • Target Customer and Business Requirements
  • Customer Scenarios
  • Technical Challenges
  • Deliverables
  • Project Schedule
  • Budget and Resource Allocation
  • Risk and Mitigation
  • Project Team
  • Dev Update
  • Development Velocity
  • Test Deliverables
  • Testing Coverage
  • Bug – Status to Date (Test)
  • Support Strategy
  • Market Distribution
  • Partner Strategy
Example 7
  • Challenges
  • Opportunity
  • Vision and Strategy
  • Scope
  • Feature Prioritization Approach
  • Candidate Scope
  • Scope: Components of the Deliverable
  • Iterative Development Process
  • Staging and Release Strategy
  • Success Metrics
  • Alignment with SC-BAT
  • Team Roles
  • Product Group Feedback
  • Risks
  • Issues
  • Schedule
  • Test Deliverables and Coverage
  • Requests
Example 8
  • Agenda
  • Customer Proof Points
  • Business and Technical Scenarios
  • Project Vision
  • Project Scope
  • Target Customer
  • Development Strategy
  • Project Objectives
  • Go-to-Market Release Strategy
  • Current Status/Next Steps
Example 9
  • Agenda
  • Customer Pain
  • Vision/Strategy
  • Opportunity
  • Solution Concept
  • Scope
  • Deliverables
  • Scorecard
  • EcoSystem
  • Who Are We Working With
  • Team
  • Test Scope
  • Alignment – Relation to Projects/Programs
  • Schedule
  • Budget Ask to M0 + 30 days
  • Total Budget
  • Risks
  • Asks
  • GO / No GO
Example 10
  • Situation
  • Opportunity
  • Vision
  • Goals
  • Guidance Team
  • Guidance Frame
  • Strategy – Program and Project
  • Program
  • Customer Data
  • Customer Scenario
  • Technology Landscape
  • Target Personas
  • Solution Concept: Deliverables
  • Scope – Phase 1a (Preview Release)
  • Candidate Pattern Map
  • Possible Phase 1b Scope
  • Scope
  • Release Strategy
  • Customer Validation Plan
  • Risks and Mitigation Strategy
  • Issues
  • Schedule
  • Budget
  • Technical and Organizational Dependencies
  • Asks

 


Cloud Security Frame (JD, 2009-08-20)
http://shapingsoftware.com/2009/08/20/cloud-security-frame/

Here is a draft of our Cloud Security Frame as part of our early exploration work for our patterns & practices Cloud Security Project.  It’s a lens for looking at Cloud Security.  The frame is simply a collection of Hot Spots.  Each Hot Spot represents an actionable category for information.  Using Hot Spots, you can quickly find pain and opportunities, or key decision points.  It helps us organize principles, patterns, and practices by relevancy.  For example, in this case, we use the Cloud Security Frame to organize threats, attacks, vulnerabilities and countermeasures.

Hot Spots

This is our current set of Hot Spots for our Cloud Security Frame:

  • Auditing and Logging
  • Authentication
  • Authorization
  • Communication
  • Configuration Management
  • Cryptography
  • Exception Management
  • Sensitive Data
  • Session Management
  • Validation

Cloud Security Frame
Here is our draft of the Cloud Security Frame with a description of each Hot Spot category:

  • Auditing and Logging: how security-related events are recorded, monitored, and audited. Examples include: Who did what and when?
  • Authentication: the process of proving identity, typically through credentials, such as a user name and password.
  • Authorization: how your application provides access controls for roles, resources, and operations.
  • Communication: how data is transmitted over the wire. Transport security versus message encryption is covered here.
  • Configuration Management: how your application handles configuration and administration from a security perspective. Examples include: Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings secured?
  • Cryptography: how your application enforces confidentiality and integrity. Examples include: How are you keeping secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong?
  • Exception Management: how you handle application errors and exceptions. Examples include: When your application fails, what does it do? How much information do you reveal? Do you return friendly error information to end users? Do you pass valuable exception information back to the caller? Does your application fail gracefully?
  • Sensitive Data: how your application handles any data that must be protected in memory, over the network, or in persistent stores. Examples include: How does your application handle sensitive data?
  • Session Management: a session is a series of related interactions between a user and your application. Examples include: How does your application handle and protect user sessions?
  • Validation: how your application filters, scrubs, or rejects input before additional processing, and how it sanitizes output. It’s about constraining input through entry points and encoding output through exit points. Message validation refers to how you verify the message payload against a schema, as well as message size, content, and character sets. Examples include: How do you know that the input your application receives is valid and safe? Do you trust data from sources such as databases and file shares?

Threats, Attacks, Vulnerabilities and Countermeasures
Here is our working draft of our threats, attacks, vulnerabilities and countermeasures organized by our Cloud Security Frame:

Auditing and Logging

Vulnerabilities

  • Failing to audit failed logons.
  • Failing to secure audit files.
  • Failing to audit across application tiers.

Threats / Attacks

  • User denies performing an operation.
  • Attacker exploits an application without leaving a trace.
  • Attacker covers his tracks.

Countermeasures

  • Identify malicious behavior.
  • Know your baseline (know what good traffic looks like.)
  • Use application instrumentation to expose behavior that can be monitored.
Authentication

Vulnerabilities

  • Using weak passwords.
  • Storing clear text credentials in configuration files.
  • Passing clear text credentials over the network.
  • Permitting over-privileged accounts.
  • Permitting prolonged session lifetime.
  • Mixing personalization with authentication.

Threats / Attacks

  • Network eavesdropping.
  • Brute force attacks.
  • Dictionary attacks.
  • Cookie replay attacks.
  • Credential theft.

Countermeasures

  • Use strong password policies.
  • Do not store credentials.
  • Use authentication mechanisms that do not require clear text credentials to be passed over the network.
  • Encrypt communication channels to secure authentication tokens.
  • Use HTTPS only with forms authentication cookies.
  • Separate anonymous from authenticated pages.
Authorization

Vulnerabilities

  • Relying on a single gatekeeper.
  • Failing to lock down system resources against application identities.
  • Failing to limit database access to specified stored procedures.
  • Using inadequate separation of privileges.

Threats / Attacks

  • Elevation of privilege.
  • Disclosure of confidential data.
  • Data tampering.
  • Luring attacks.

Countermeasures

  • Use least privilege accounts.
  • Consider granularity of access.
  • Enforce separation of privileges.
  • Use multiple gatekeepers.
  • Secure system resources against system identities.
Configuration Management

Vulnerabilities

  • Using insecure administration interfaces.
  • Using insecure configuration stores.
  • Storing clear text configuration data.
  • Having too many administrators.
  • Using over-privileged process accounts and service accounts.

Threats / Attacks

  • Unauthorized access to administration interfaces.
  • Unauthorized access to configuration stores.
  • Retrieval of clear text configuration secrets.
  • Lack of individual accountability.

Countermeasures

  • Use least privileged service accounts.
  • Do not store credentials in clear text.
  • Use strong authentication and authorization on administrative interfaces.
  • Avoid storing sensitive information in the Web space.
  • Use only local administration.
Cryptography

Vulnerabilities

  • Using custom cryptography.
  • Using the wrong algorithm or a key size that is too small.
  • Failing to secure encryption keys.
  • Using the same key for a prolonged period of time.
  • Distributing keys in an insecure manner.

Threats / Attacks

  • Loss of decryption keys.
  • Encryption cracking.

Countermeasures

  • Do not develop and use proprietary algorithms (XOR is not encryption. Use platform-provided cryptography.)
  • Use the RNGCryptoServiceProvider method to generate random numbers.
  • Avoid key management. Use the Windows Data Protection API (DPAPI) where appropriate.
  • Periodically change your keys.
Exception Management

Vulnerabilities

  • Failing to use structured exception handling.
  • Revealing too much information to the client.

Threats / Attacks

  • Revealing sensitive system or application details.
  • Denial of service attacks.

Countermeasures

  • Use structured exception handling (by using try/catch blocks.)
  • Catch and wrap exceptions only if the operation adds value/information.
  • Do not reveal sensitive system or application information.
  • Do not log private data such as passwords.
Sensitive Data

Vulnerabilities

  • Storing secrets when you do not need to.
  • Storing secrets in code.
  • Storing secrets in clear text.
  • Passing sensitive data in clear text over networks.

Threats or Attacks

  • Accessing sensitive data in storage.
  • Accessing sensitive data in memory (including process dumps.)
  • Network eavesdropping.
  • Information disclosure.

Countermeasures

  • Do not store secrets in software.
  • Encrypt sensitive data over the network.
  • Secure the channel.
Session Management

Vulnerabilities

  • Passing session identifiers over unencrypted channels.
  • Permitting prolonged session lifetime.
  • Having insecure session state stores.
  • Placing session identifiers in query strings.

Threats or Attacks

  • Session hijacking.
  • Session replay.
  • Man-in-the-middle attacks.

Countermeasures

  • Partition site by anonymous, identified, and authenticated users.
  • Reduce session timeouts.
  • Avoid storing sensitive data in session stores.
  • Secure the channel to the session store.
  • Authenticate and authorize access to the session store.
Validation

Vulnerabilities

  • Using non-validated input in the Hypertext Markup Language (HTML) output stream
  • Using non-validated input used to generate SQL queries
  • Relying on client-side validation
  • Using input file names, URLs, or user names for security decisions
  • Using application-only filters for malicious input
  • Looking for known bad patterns of input
  • Trusting data read from databases, file shares, and other network resources
  • Failing to validate input from all sources including cookies, query string parameters, HTTP headers, databases, and network resources

Threats / Attacks

  • Buffer overflows
  • Cross-site scripting
  • Canonicalization attacks
  • Query string manipulation
  • Form field manipulation
  • Cookie manipulation
  • HTTP header manipulation

Countermeasures

  • Validate input: length, range, format, and type
  • Constrain, reject, and sanitize input
  • Encode output
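To make those last countermeasures concrete, here is a minimal Python sketch (my illustration, not part of the frame itself; the names are made up) that constrains input with an allow-list at the entry point and encodes output at the exit point:

```python
import html
import re

# Allow-list pattern: constrain user names to known-good characters
# (letters, digits, underscores; 3-20 characters).
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(name: str) -> str:
    """Reject anything that does not match the allow-list."""
    if not USERNAME_PATTERN.match(name):
        raise ValueError("invalid user name")
    return name

def render_greeting(name: str) -> str:
    """Encode output before it reaches the HTML stream."""
    return "<p>Hello, " + html.escape(name) + "</p>"
```

The idea is simply that input which fails the allow-list never reaches further processing, and anything written to the HTML stream is encoded rather than trusted.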
Vision Scope Template (JD, 2009-08-09)
http://shapingsoftware.com/2009/08/09/vision-scope-template/

How do you convince a team of venture capitalists to bet on you?  There are a lot of ninja techniques, but here I’ll share the fundamentals.

Vision and Scope
At patterns & practices, we use Vision Scope milestones to sell management on how we’ll change the world.  Knowing the vision and scope for a project is actually pretty key.  The vision will motivate you and your team in the darkest of times.  It gets you back on your horse when you get knocked off.  The scope is important because it’s where you’ll usually have to manage the most expectations of what you will and won’t do.

Thinking in Terms of Venture Capitalists
When I do a vision scope, I think of the management team as the venture capitalists (a tip from a friend.)  This helps me get in the right mindset.  I have to convince them that I have the right problem, the right solution, the right customers, the right impact, the right team, the right cost and the right time-frame.  Hmmmm … I guess there’s a lot to get right.  A template helps.  The right slide template helps because it forces you to answer some important questions.

Template for Vision Scope
Here’s the template I used from my last vision scope meeting:

Vision / Scope

  • Agenda
  • Problem
  • Vision
  • Approach
  • Prioritized Tests for Success
  • Scope
  • Key Activities
  • Deliverables

Execution

  • Team
  • Budget
  • Schedule
  • Risks
  • Asks
  • Go/No Go

It’s implicitly organized by problem, solution, deliverables, and execution.  While the slides are important, I found that the real success in vision scope isn’t the particular slides.  It’s buy-in to the vision, rapport in the meeting, and trust in the team to do the job.

What works for you?

Photo by Robert Couse-Baker.

Lessons in Software from Mike de Libero (JD, 2009-07-20)
http://shapingsoftware.com/2009/07/20/lessons-in-software-from-mike-de-libero/

Editor’s note: This is a guest post from Mike de Libero.  Mike has been doing software development for more than 9 years in a variety of settings.  He’s worked as a freelance developer.  He’s also worked on a small team of developers maintaining 30+ programs at one time.  He’s even worked as a security tester on the Microsoft Office team.

I first met Mike through Mark Curphey.  Software security is a small world.  The funny thing about many of the people I meet in software security is that they 1) tend to break things to make things better, 2) like to help, and 3) focus on improvement.  The great thing about Mike is that he’s got a passion for development, and he’s more focused on principles, patterns, and practices, than on a particular technology.  Here are Mike’s top lessons learned in software development …

Top 10 Lessons in Software Development

Here is a summary of my top lessons in software development:

  • Lesson 1. All software is flawed.
  • Lesson 2. Check-in often.
  • Lesson 3. Tests, gotta love them.
  • Lesson 4. Refactor, check-in and repeat.
  • Lesson 5. Coding is easy, humans are tough.
  • Lesson 6. The more eyes on your code the better.
  • Lesson 7. Keep learning and improving.
  • Lesson 8. Simple is beautiful.
  • Lesson 9. Learn software development not coding.
  • Lesson 10. Think about your audience.

Lesson 1. All software is flawed.
Anyone who has written a software program larger than “hello world” knows that there will be bugs.  That is just a fact of software development.  These flaws occur because the way a piece of software is written is a reflection of the developer’s thought processes and the challenges he or she is trying to solve.  Because human thoughts are not logically perfect all of the time, errors will occur.  Also, software development is always a trade-off between time/money and features, leading to items left partially coded or rushed through.  Which leads us to the next lesson…

Lesson 2. Check-in often.
Everyone is using source control, right?  If you are not, start now!  When developing software or doing heavy refactoring, source control is your friend.  The more often you check in a usable piece of code, the easier it is to roll back if you completely screw something up.  If you are on a team that requires a code review before check-in, or that freezes check-ins, set up a private source control server; it is quick and easy.  I always think of source control as an undo button, and it partially frees the developer from the fear of screwing up publicly.  If you use source control and unit tests, almost all fear just goes away.
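As a quick illustration of the undo-button idea (using Git as the example tool, which the post itself doesn’t prescribe), small, frequent check-ins let you roll back a bad change in one command:

```shell
# Create a private repository -- quick and easy, even for solo work.
git init demo
git -C demo config user.email "dev@example.com"
git -C demo config user.name "Dev"

# Work in small steps; commit each usable piece.
echo "step 1" > demo/notes.txt
git -C demo add notes.txt
git -C demo commit -m "Add first usable slice"

# A refactoring goes wrong...
echo "oops, broken change" > demo/notes.txt

# ...so hit undo: restore the last good commit.
git -C demo checkout -- notes.txt
cat demo/notes.txt    # prints "step 1"
```

The more usable slices you commit, the smaller the step you lose when you have to roll back.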

Lesson 3. Tests, gotta love them.
Ahh unit tests… Everyone says you have to use them and that they are the best thing since sliced bread.  I happen to agree for the most part.  When doing greenfield development I make sure unit tests are always written and used.  However, the idea of unit tests as part of the normal development cycle has only become semi-common in the last five years, and good software was built long before then.  Keeping a list of common tests that should be run outside of an IDE is also a great thing to have.  I think the greatest advantage of unit tests is that, as long as they are quick to run, they give the developer a quick sanity check.  If source control is the development undo button, then unit tests are the babysitter that yells at you for doing something you wouldn’t do if your parents were around.

Lesson 4. Refactor, check-in and repeat.
No piece of code is perfect, but it can hopefully become better (and yes, it could get worse) by going back over it and asking, “what can I do to make this shorter, easier to understand, etc.?”  People do this all the time when writing papers, but it doesn’t happen often when writing code.  After each interval, check in the code in case you go too far and remove some needed piece of code.  Depending on the size of the code being refactored, there might be many iterations.
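Here is a tiny, hypothetical example of one such pass, asking “what can I do to make this shorter, easier to understand?” while producing the same result:

```python
# Before: verbose and repetitive.
def total_before(items):
    total = 0
    for item in items:
        if item["taxable"]:
            total = total + item["price"] + item["price"] * 0.10
        else:
            total = total + item["price"]
    return total

# After one refactoring pass: the magic number is named,
# and the loop collapses into a single expression.
TAX_RATE = 0.10

def total_after(items):
    return sum(
        item["price"] * ((1 + TAX_RATE) if item["taxable"] else 1)
        for item in items
    )
```

With each pass, check in, then ask the question again until you stop finding improvements.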

Lesson 5. Coding is easy, humans are tough.
Don’t get me wrong, there are some hard problems in coding but they are not as hard as figuring out what our fellow beings want out of a piece of software.  Humans tend to be fickle and contradict what they say.  On top of that it is very hard to communicate clearly and a barrier exists between “geek-speak” and normal vocabulary.  It becomes extremely difficult to figure out what users are asking for.

Lesson 6. The more eyes on your code the better.
Whenever I go into a new code base, or one I haven’t been into for a few weeks, I always spend a few minutes browsing around looking for things to improve.  When I spot a bad implementation or just a bug, I shoot an email over to the developer and let them know how it could be improved (I also change the implementation to make it better).  I ask for and expect the same thing from any developer I work with.  Why?  Because it keeps us honest and teaches us ways to make our programs better.  Sure, this might not be a sit-down code review, but I find this to work fairly well, at least for smaller teams.  The more formal code reviews are nice too, and have similar goals: higher quality code, bugs found before QA gets the build, and information transfer.  I think it all depends on the environment you are in.

Lesson 7. Keep learning and improving.
This lesson is pretty obvious, but it has to be said.  If you don’t learn and keep improving, you risk becoming a fez, which I doubt anyone wants.  My usual metric is:

  • What new language / technology / technique did I learn in the past month?
  • When looking at my old code, can I easily find better ways to do things?

I think the second metric is very important.  If you can’t think of ways to improve the code – even if it is not feasible in the current code base – then you should be concerned as that is one sign you are not growing.

Lesson 8. Simple is beautiful.
The acronym KISS is awesome, and I try to follow it when developing software, since I also believe that as software grows it becomes more complex.  Whenever I have to fix an issue or write code, I always ask myself, “can it be any simpler?”  Simplicity has many benefits, and not just from a development perspective.  Some of them are:

  • Code is easier to understand
  • Maintenance tends to be easier
  • Simple UIs tend to make programs easier to use

Lesson 9. Learn software development not coding.
Personally, I make a distinction between coding and actual software development.  I think there are far too many people who focus on just the coding portion and not the bigger picture.  Many people can write code, but it seems fewer people can design a system, test a system, write the code, document the architecture, talk to users to figure out requirements, create semi-accurate estimates, help other team members, and know the basics of user interaction when designing a user interface.  There are things all developers should know about coding as well, but I feel improving your coding chops is pretty easy and happens naturally as you develop software.  Learning and improving in regards to software development is not necessary for one to hold down a job as a programmer.

Lesson 10. Think about your audience.
When reading this point, did you immediately jump to the users of the program being created?  If you did, you forgot a few other audiences :).  Whenever you code, I find that there are at least four audiences: the compiler, other programmers (including your future self), attackers/malicious users, and the users of the program.  All of these audiences require different things, and the easiest audience to please is the compiler.  The other audiences take a bit more work.  For example, the attacker audience we really want to piss off instead of please, and pissing off an attacker is fairly easy: just don’t trust input, and properly encode output (note: this won’t protect you from everything in the attacker arsenal, but it will take care of a huge amount).  The developer audience is fairly tough, as we tend to think all other code is stupid, or at least have an opinion about it; this stems from the differences in how each person thinks (at least that is my opinion on it).  Commenting business logic, writing clean and clear code, and keeping it simple usually helps the development audience.  The actual users of the program are an interesting group.  There are many books on usability and design, so I am just going to suggest you pick up a few good books on that matter (if you need any suggestions, feel free to get in touch with me).

Lessons in Software from James Waletzky (JD, 2009-07-06)
http://shapingsoftware.com/2009/07/06/lessons-in-software-from-james-waletzky/

Editor’s note: This is a guest post from James Waletzky. James is a Development Lead at Microsoft and he maintains a blog about software engineering at http://blogs.msdn.com/progressive_development. James has shipped quite a few products and has worked on the Microsoft Engineering Excellence team, where he taught developers about agile and other software engineering practices and consulted with internal product groups to improve their engineering practices.

When J.D. asked me to share my thoughts on some of the top software development lessons I’ve learned throughout my time as a developer, I jumped at the chance. I have had successes and failures, and I have consulted with teams that have shared the same. Below is my list of 10 lessons learned through hard experience. The list is by no means definitive, but it is gleaned from years of development work.

Without further ado…

Ten Software Development Lessons

  • Lesson 1. Keep it simple.
  • Lesson 2. Define ‘done’.
  • Lesson 3. Deliver incrementally and iteratively.
  • Lesson 4. Split scenarios into vertical slices.
  • Lesson 5. Continuously improve.
  • Lesson 6. Unit testing is the #1 quality practice.
  • Lesson 7. Don’t waste your time.
  • Lesson 8. Features are not the most important thing.
  • Lesson 9. Never trust anyone.
  • Lesson 10. Reviews without preparation are useless.

Lesson 1. Keep it simple.
I lost count of the number of over-engineered, over-complicated designs I have seen throughout the past few years.  Software developers are ever in search of the most elegant solution to a problem. Guess what? Complexity causes problems – like prohibiting understanding of the design and code, causing maintainability issues, increasing the likelihood of bugs, generating bloated code, and often causing difficulty in testing. As the age-old adage goes:
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint Exupéry
Build in extensibility only when you need it. Accommodate change in your designs – don’t anticipate it. Keep class definitions small. Follow the Rule of Seven (i.e. 7 +/- 2 rule) when grouping concepts like methods on a class. Measure code complexity and refactor as necessary. There are many other strategies for keeping design and code simple. Use them.

Lesson 2. Define ‘done’.
Have you ever asked a developer how far along they are in their feature development? I am willing to bet that the most typical answer is 90%. Then next week you ask the same question and get the same answer. When they are eventually 100% ‘done’, do you know exactly what that 100% entails from the quality side? How exactly would you define "code complete"? I bet your answer would be different than mine, and different from J.D.’s.
It is important to have a clear, agreed-upon definition of ‘done’ at many different levels, including individual check-ins, component development, feature development, short iterations, milestones, and finally, release. On my team, code is ready for check-in when all unit tests pass, unit tests achieve 80%+ code coverage, code has been reviewed, design docs are in place, the code is free of memory leaks, and several other criteria. Our check-in checklist is arguably our most important tool. In fact, checklists are a great way to track these definitions. The meaning of ‘done’ should become commonplace and a part of your team’s vocabulary. Write it down so there is no confusion.

Lesson 3. Deliver incrementally and iteratively.
Unfortunately, my crystal ball is in the shop being repaired. Until I get it back, it is hard to predict the future and derive a detailed plan that I am sure will hold true for the development of many features. In the absence of that crystal ball, delivering software in a piece-wise fashion helps achieve success. Break your scenarios into pieces and deliver small chunks in short iterations of 2-4 weeks in length. Get feedback early and often. Fold the feedback into the next iteration and incrementally build upon the results of the previous iteration, refactoring as needed to keep the design clean. You will end up with a better result than if you swim down the river and fly over the waterfall.

Lesson 4. Split scenarios into vertical slices.
Assuming you are practicing scenario-based development (which could also easily make this list), it is important to break functionality into chunks to deliver real business value in short iterations.  One method of chunking, assuming a typical architecture of data, logic, and presentation layers, is to deliver the lowest level (data), followed by the middle layer (logic), followed by the user interface (presentation).  But the user does not care about the data layer, and you miss the chance to gain valuable feedback if you deliver this first.  Instead, break things up vertically: deliver an end-to-end scenario with just enough data, logic, and UI to support the scenario.  The feedback you receive will factor into future scenarios, and you adjust the design as you go.  Additionally, you never write code that is not used, and you adhere to the principle of YAGNI, or “You Ain’t Gonna Need It.”

Lesson 5. Continuously improve.
Tightly coupled with delivering software in an iterative fashion is the idea of continuous improvement, often called "Kaizen". Nothing is ever good enough – at least, that is the way you should think. Work to constantly improve your processes, the way the team works together, your tools, and anything else that contributes to your software development. Step back early and often and do a retrospective on the previous iteration, feature delivery, or even past few days of work. What went well? Continue to do those things. What didn’t go so well? Get beyond the symptom to the root cause of why there were issues and come up with actionable ways to fix them. Put those actions into practice in your next iteration.  Always strive to become a high performing team with the world’s best product.

Lesson 6. Unit testing is the #1 quality practice.
I often get asked the following question: if I could change one thing about software development to encourage improved early-development-cycle quality, what would it be? Easy – improved unit testing. Historically at many companies, developers would write the code, run the "happy path" through the debugger, and throw the code over the wall to the test team for validation. Quality would be "tested in". On more recent teams we have been doing much more unit testing, using code coverage as a feedback mechanism, and quality has risen substantially. Additionally, unit tests give you the confidence to refactor your code at any moment in time, leading to cleaner designs and more maintainable code. The icing on the cake is having the tests run as part of a daily build, so you always have quick feedback as to whether functionality is broken. The disadvantages are that unit tests take time to write and you add 50%+ more code to your product, but the investment is worth it.

Lesson 7. Don’t waste your time.
The agile development manifesto values working software over comprehensive documentation. This guideline has proven valuable. Several projects I have experienced went overboard on plans, requirements specifications, designs, test plans, process documentation, release plans, etc. Don’t get me wrong – there is value in these documentation artifacts. The key is to do "just enough". Know the audience for your documentation and do the minimum amount to meet their needs. Any more than that is waste. Every activity in the development cycle should add value to the business, product or end user. Spend your time on activities that count.

Lesson 8. Features are not the most important thing.
Yes, you heard correctly – features are not the most important thing. Of course, if you are writing a v1 product, features are pretty important. However, in today’s software market, quality and fit and finish are just as important as features. The software needs to "just work". Quality attributes such as performance and reliability are huge satisfiers and are expected by customers. Fit and finish, or polish, sets a product apart from competitors. A good example of fit and finish that could have been cut from the Apple development cycle is the rubber-band effect on the list control on the iPhone. When I bought my iPod Touch I flicked that thing over and over because I thought the effect had a significant cool factor. It delighted me. I fell in love with the device. Of course, polish goes hand-in-hand with features and quality attributes – the device must do what I want it to do and not crash while doing it. The point, however, is that fit and finish is very important in today’s software world and should not be neglected.

Lesson 9. Never trust anyone.
Ok, not literally. I am not talking about trusting your teammates – that is extremely important, and if you ask Stephen Covey, "trust is the life-blood of an organization". Here I am talking about trusting code that calls across your boundary (e.g. any public method). I have seen more security vulnerabilities than I can count resulting from a failure to validate input parameters. I have seen more bugs than I can count that could have been prevented by programming defensively. Use assertions liberally in your code to validate internal state.  Use trace statements strategically to dump out debugging state. Assume that some client with bad intentions will call into your code, and handle all the error cases gracefully. One piece of advice that a good friend of mine and contributor to this blog, Corey Ladas, once told me: "write code as if the debugger doesn’t exist". That slight switch in mindset, coupled with a focus on unit tests, will make you a much more efficient developer by reducing your time in the debugger, where you are generally least efficient.
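As a small sketch of this advice (my code, not James’s; the function is hypothetical), validate everything at the public boundary and use assertions to document internal invariants:

```python
def withdraw(balance: float, amount: float) -> float:
    """Public boundary: never trust the caller's input."""
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")

    new_balance = balance - amount

    # Internal invariant: an assertion documents what must always hold,
    # and fails fast in debug runs if the logic above ever breaks.
    assert new_balance >= 0, "balance can never go negative"
    return new_balance
```

The explicit checks handle hostile or careless callers gracefully, while the assertion is for you: it states the invariant so you rarely need the debugger to find out where a bad state came from.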

Lesson 10. Reviews without preparation are useless.
If you ever get invited to a spec review or code review without having seen the document or code prior to the review, just say "no". In this case, you are about to violate lesson #7 and waste your time as well as everyone else’s. Code reviews are a valuable quality control technique that every software development organization should practice. The key to a successful review is receiving the artifact up-front and having that focused alone time to prepare and find issues. The meeting is simply used to gather the feedback and learn from one another. The meeting is not used to find more issues. It pains me to see many hours wasted in useless reviews. Don’t be a victim.
The above list of lessons learned in software development is the tip of the iceberg. There are many more lessons that could be added to this list to make us all more successful. I would love to learn from all of you as well, and hear about your top lessons learned. Care to share?

Additional Resources
There are many resources for each of the lessons in the above list. For a gateway to many good resources, see the Progressive Development blog listed below, as well as many of the other postings on this site.
