On the Microsoft patterns & practices team, we use Vision / Scope as a key milestone.  It’s where we frame the problem, identify the business opportunity, and paint a vision of the solution.  It’s a forcing function to get clarity on the customer, their scenarios, and our scope for the project.  We generally use a “fix time, flex scope” pattern, so this means having a candidate backlog that we prioritize with customers.

On the execution side, we expect to know the team, key partners, the budget, the schedule, and the deliverables.  We also need to know the risks and their mitigations.  At the Vision / Scope, the real key is first selling people on the vision, and then selling them on the execution.  It’s basically about answering “why should we go do this?” and “why now?”  This can be either about reducing pain or exploiting an opportunity.  It’s also about answering these questions in the context of trade-offs.  When you can tell a compelling story from problem to solution, and show how you’ll get there incrementally with a team people trust, you dramatically increase your odds of getting a “Go” decision, and the support you need.

Vision / Scope Baseline
This is my rough sketch of the key pieces I need in my Vision / Scope presentations for success:

Vision / Scope
  • Agenda
  • Problem
  • Vision
  • Approach
  • Prioritized Tests for Success
  • Scope
  • Key Activities
  • Deliverables
  • Team
  • Budget
  • Schedule
  • Risks
  • Asks
  • Go/No Go

Vision / Scope Examples
Here are some examples of various Vision / Scope slides from over the years:

Example Items
Example 1
  • Problem
  • Vision
  • Approach
  • Prioritized Tests for Success
  • Scope
  • Key Activities
  • Deliverables
  • Team
  • Schedule
  • Budget
  • Asks
  • Go/No Go?
Example 2
  • Vision / Strategy
  • Solution Concept
  • Scope
  • Outcomes
  • Deliverables
  • Scorecard
  • Team
  • Budget
  • Burn Rate
  • Schedule
  • Go/No Go?
Example 3
  • Agenda
  • Customer Proof Points
  • Business and Technical Scenarios
  • Project Vision
  • Project Scope
  • Target Customer
  • Development Strategy
  • Project Objectives
  • Go-to-Market Release Strategy
Example 4
  • Agenda
  • Project Justification
  • Team and Extended Teams
  • Project Vision
  • Business and Technology Threats
  • Primary Business Scenario
  • Associated Technical Challenges
  • Quotes from Target Market
  • Top 5 Customer Requests
  • Potential Beta Customers
  • Project Scope
  • Project Deliverables
  • Assumptions
  • Risks
  • Development Strategy
  • Delivery Options
  • Single Release Schedule and Budget
  • Dual Release Schedule
  • Dual Release Budget
  • Go-to-Market Strategy
  • Current Status/Next Steps
Example 5
  • Agenda
  • Customer Proof
  • Project Vision
  • Business and Technical Scenarios
  • Scope
  • Pre-Release Strategy
  • Go-to-Market Strategy
  • Goals
  • Current Status/Next Steps
Example 6
  • Agenda
  • Project Lifecycle
  • Habits and Practices
  • Scenario-Based Guidance
  • What is a baseline architecture?
  • Reference Architecture Space
  • Baseline Architecture Applied
  • How will customers use it?
  • Vision
  • Strategy
  • Why create this baseline architecture?
  • Target Customer and Business Requirements
  • Customer Scenarios
  • Technical Challenges
  • Deliverables
  • Project Schedule
  • Budget and Resource Allocation
  • Risk and Mitigation
  • Project Team
  • Dev Update
  • Development Velocity
  • Test Deliverables
  • Testing Coverage
  • Bug – Status to Date (Test)
  • Support Strategy
  • Market Distribution
  • Partner Strategy
Example 7
  • Challenges
  • Opportunity
  • Vision and Strategy
  • Scope
  • Feature Prioritization Approach
  • Candidate Scope
  • Scope: Components of the Deliverable
  • Iterative Development Process
  • Staging and Release Strategy
  • Success Metrics
  • Alignment with SC-BAT
  • Team Roles
  • Product Group Feedback
  • Risks
  • Issues
  • Schedule
  • Test Deliverables and Coverage
  • Requests
Example 8
  • Agenda
  • Customer Proof Points
  • Business and Technical Scenarios
  • Project Vision
  • Project Scope
  • Target Customer
  • Development Strategy
  • Project Objectives
  • Go-to-Market Release Strategy
  • Current Status/Next Steps
Example 9
  • Agenda
  • Customer Pain
  • Vision/Strategy
  • Opportunity
  • Solution Concept
  • Scope
  • Deliverables
  • Scorecard
  • Ecosystem
  • Who Are We Working With
  • Team
  • Test Scope
  • Alignment – Relation to Projects/Programs
  • Schedule
  • Budget Ask to M0 + 30 days
  • Total Budget
  • Risks
  • Asks
  • GO / No GO
Example 10
  • Situation
  • Opportunity
  • Vision
  • Goals
  • Guidance Team
  • Guidance Frame
  • Strategy – Program and Project
  • Program
  • Customer Data
  • Customer Scenario
  • Technology Landscape
  • Target Personas
  • Solution Concept: Deliverables
  • Scope – Phase 1a (Preview Release)
  • Candidate Pattern Map
  • Possible Phase 1b Scope
  • Scope
  • Release Strategy
  • Customer Validation Plan
  • Risks and Mitigation Strategy
  • Issues
  • Schedule
  • Budget
  • Technical and Organizational Dependencies
  • Asks



Here is a draft of our Cloud Security Frame as part of our early exploration work for our patterns & practices Cloud Security Project.  It’s a lens for looking at Cloud Security.  The frame is simply a collection of Hot Spots.  Each Hot Spot represents an actionable category for information.  Using Hot Spots, you can quickly find pain and opportunities, or key decision points.  It helps us organize principles, patterns, and practices by relevancy.  For example, in this case, we use the Cloud Security Frame to organize threats, attacks, vulnerabilities and countermeasures.

Hot Spots

This is our current set of Hot Spots for our Cloud Security Frame:

  • Auditing and Logging
  • Authentication
  • Authorization
  • Communication
  • Configuration Management
  • Cryptography
  • Exception Management
  • Sensitive Data
  • Session Management
  • Validation

Cloud Security Frame
Here is our draft of the Cloud Security Frame with a description of each Hot Spot category:

  • Auditing and Logging – Auditing and logging refers to how security-related events are recorded, monitored, and audited. Examples include: Who did what and when?
  • Authentication – Authentication is the process of proving identity, typically through credentials, such as a user name and password.
  • Authorization – Authorization is how your application provides access controls for roles, resources, and operations.
  • Communication – Communication encompasses how data is transmitted over the wire. Transport security versus message encryption is covered here.
  • Configuration Management – Configuration management refers to how your application handles configuration and administration of your applications from a security perspective. Examples include: Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings secured?
  • Cryptography – Cryptography refers to how your application enforces confidentiality and integrity. Examples include: How are you keeping secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong?
  • Exception Management – Exception management refers to how you handle application errors and exceptions. Examples include: When your application fails, what does your application do? How much information do you reveal? Do you return friendly error information to end users? Do you pass valuable exception information back to the caller? Does your application fail gracefully?
  • Sensitive Data – Sensitive data refers to how your application handles any data that must be protected in memory, over the network, or in persistent stores. Examples include: How does your application handle sensitive data?
  • Session Management – A session refers to a series of related interactions between a user and your application. Examples include: How does your application handle and protect user sessions?
  • Validation – Validation refers to how your application filters, scrubs, or rejects input before additional processing, or how it sanitizes output. It’s about constraining input through entry points and encoding output through exit points. Message validation refers to how you verify the message payload against a schema, as well as message size, content, and character sets. Examples include: How do you know that the input your application receives is valid and safe? Do you trust data from sources such as databases and file shares?

Threats, Attacks, Vulnerabilities and Countermeasures
Here is our working draft of our threats, attacks, vulnerabilities and countermeasures organized by our Cloud Security Frame:

Auditing and Logging

Vulnerabilities

  • Failing to audit failed logons.
  • Failing to secure audit files.
  • Failing to audit across application tiers.

Threats / Attacks

  • User denies performing an operation.
  • Attacker exploits an application without a trace.
  • Attacker covers his tracks.

Countermeasures
  • Identify malicious behavior.
  • Know your baseline (know what good traffic looks like.)
  • Use application instrumentation to expose behavior that can be monitored.
Authentication

Vulnerabilities

  • Using weak passwords.
  • Storing clear text credentials in configuration files.
  • Passing clear text credentials over the network.
  • Permitting over-privileged accounts.
  • Permitting prolonged session lifetime.
  • Mixing personalization with authentication.

Threats / Attacks

  • Network eavesdropping.
  • Brute force attacks.
  • Dictionary attacks.
  • Cookie replay attacks.
  • Credential theft.

Countermeasures
  • Use strong password policies.
  • Do not store credentials.
  • Use authentication mechanisms that do not require clear text credentials to be passed over the network.
  • Encrypt communication channels to secure authentication tokens.
  • Use HTTPS only with forms authentication cookies.
  • Separate anonymous from authenticated pages.
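To make the “do not store credentials” countermeasure concrete, here is a minimal sketch in Python (the function names are my own, not from the post): store a salted, one-way hash of the password instead of the clear text credential, and compare in constant time.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Return (salt, derived key); store these instead of the clear text password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes, iterations: int = 200_000) -> bool:
    """Re-derive the key from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, key)
```

Even if the credential store is compromised, the attacker gets salted hashes rather than passwords that can be replayed.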
Authorization

Vulnerabilities

  • Relying on a single gatekeeper.
  • Failing to lock down system resources against application identities.
  • Failing to limit database access to specified stored procedures.
  • Using inadequate separation of privileges.

Threats / Attacks

  • Elevation of privilege.
  • Disclosure of confidential data.
  • Data tampering.
  • Luring attacks.

Countermeasures
  • Use least privilege accounts.
  • Consider granularity of access.
  • Enforce separation of privileges.
  • Use multiple gatekeepers.
  • Secure system resources against system identities.
Configuration Management

Vulnerabilities

  • Using insecure administration interfaces.
  • Using insecure configuration stores.
  • Storing clear text configuration data.
  • Having too many administrators.
  • Using over-privileged process accounts and service accounts.

Threats / Attacks

  • Unauthorized access to administration interfaces.
  • Unauthorized access to configuration stores.
  • Retrieval of clear text configuration secrets.
  • Lack of individual accountability.

Countermeasures
  • Use least privileged service accounts.
  • Do not store credentials in clear text.
  • Use strong authentication and authorization on administrative interfaces.
  • Avoid storing sensitive information in the Web space.
  • Use only local administration.
Cryptography

Vulnerabilities

  • Using custom cryptography.
  • Using the wrong algorithm or a key size that is too small.
  • Failing to secure encryption keys.
  • Using the same key for a prolonged period of time.
  • Distributing keys in an insecure manner.

Threats / Attacks

  • Loss of decryption keys.
  • Encryption cracking.

Countermeasures
  • Do not develop and use proprietary algorithms (XOR is not encryption. Use platform-provided cryptography.)
  • Use the RNGCryptoServiceProvider method to generate random numbers.
  • Avoid key management. Use the Windows Data Protection API (DPAPI) where appropriate.
  • Periodically change your keys.
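RNGCryptoServiceProvider is the .NET API; as an illustration of the same “use platform-provided cryptography” advice in Python, the standard library exposes a cryptographically strong random source through the `secrets` module:

```python
import secrets

# Platform-provided CSPRNG: suitable for keys, tokens, and salts.
# Never use the general-purpose random module for security-sensitive values.
key = secrets.token_bytes(32)            # 256 bits of key material
reset_token = secrets.token_urlsafe(32)  # URL-safe token, e.g. for a password reset link
pin = secrets.randbelow(10**6)           # unbiased integer in [0, 10**6)
```

The point is the same on any platform: reach for the vetted, platform-provided primitives rather than rolling your own.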
Exception Management

Vulnerabilities

  • Failing to use structured exception handling.
  • Revealing too much information to the client.

Threats / Attacks

  • Revealing sensitive system or application details.
  • Denial of service attacks.

Countermeasures
  • Use structured exception handling (by using try/catch blocks.)
  • Catch and wrap exceptions only if the operation adds value/information.
  • Do not reveal sensitive system or application information.
  • Do not log private data such as passwords.
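A small sketch of these countermeasures in Python (the `handle_request` helper is invented for illustration): catch exceptions in a structured way, log the details privately, and return only a friendly message with a correlation id.

```python
import logging
import uuid

logger = logging.getLogger(__name__)

def handle_request(operation):
    """Run an operation; log full details privately, reveal only a reference id."""
    try:
        return operation()
    except Exception:
        error_id = uuid.uuid4().hex  # correlates the user-facing message with the log entry
        logger.exception("request failed (error id %s)", error_id)
        # Friendly, non-revealing message for the end user:
        return f"Something went wrong. Reference: {error_id}"
```

The caller sees a graceful failure; the stack trace and system details stay in the server-side log.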
Sensitive Data

Vulnerabilities

  • Storing secrets when you do not need to.
  • Storing secrets in code.
  • Storing secrets in clear text.
  • Passing sensitive data in clear text over networks.

Threats / Attacks

  • Accessing sensitive data in storage.
  • Accessing sensitive data in memory (including process dumps.)
  • Network eavesdropping.
  • Information disclosure.

Countermeasures
  • Do not store secrets in software.
  • Encrypt sensitive data over the network.
  • Secure the channel.
Session Management

Vulnerabilities

  • Passing session identifiers over unencrypted channels.
  • Permitting prolonged session lifetime.
  • Having insecure session state stores.
  • Placing session identifiers in query strings.

Threats / Attacks

  • Session hijacking.
  • Session replay.
  • Man-in-the-middle attacks.

Countermeasures
  • Partition site by anonymous, identified, and authenticated users.
  • Reduce session timeouts.
  • Avoid storing sensitive data in session stores.
  • Secure the channel to the session store.
  • Authenticate and authorize access to the session store.
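As a rough illustration of several of these countermeasures in Python (the in-memory store and function names are assumptions for the sketch): unguessable session identifiers, reduced timeouts, and expiry of prolonged sessions.

```python
import secrets
import time

SESSION_TIMEOUT_SECONDS = 15 * 60  # reduced session lifetime

# session id -> last activity time (an in-memory store, just for the sketch)
sessions: dict[str, float] = {}

def create_session() -> str:
    """Issue an unguessable id; send it in a cookie, never in a query string."""
    sid = secrets.token_urlsafe(32)
    sessions[sid] = time.monotonic()
    return sid

def is_session_valid(sid: str) -> bool:
    """Reject unknown ids and expire prolonged sessions (sliding expiration)."""
    last_active = sessions.get(sid)
    if last_active is None or time.monotonic() - last_active > SESSION_TIMEOUT_SECONDS:
        sessions.pop(sid, None)
        return False
    sessions[sid] = time.monotonic()
    return True
```

In a real system the store would be secured and the identifier carried only over an encrypted channel, per the countermeasures above.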
Validation

Vulnerabilities

  • Using non-validated input in the Hypertext Markup Language (HTML) output stream
  • Using non-validated input used to generate SQL queries
  • Relying on client-side validation
  • Using input file names, URLs, or user names for security decisions
  • Using application-only filters for malicious input
  • Looking for known bad patterns of input
  • Trusting data read from databases, file shares, and other network resources
  • Failing to validate input from all sources including cookies, query string parameters, HTTP headers, databases, and network resources

Threats / Attacks

  • Buffer overflows
  • Cross-site scripting
  • Canonicalization attacks
  • Query string manipulation
  • Form field manipulation
  • Cookie manipulation
  • HTTP header manipulation

Countermeasures
  • Validate input: length, range, format, and type
  • Constrain, reject, and sanitize input
  • Encode output
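A minimal Python sketch of “constrain, reject, and sanitize input” plus “encode output” (the username rule is an invented example): validate against an allow-list of known good input at the entry point, and encode at the exit point.

```python
import html
import re

# Allow-list: describe known good input, rather than looking for known bad patterns.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value: str) -> str:
    """Constrain input by length, range, format, and type; reject everything else."""
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")
    return value

def render_greeting(name: str) -> str:
    """Encode output at the exit point so input can never become markup."""
    return f"<p>Hello, {html.escape(name)}!</p>"
```

Constraining input defends against injection at the entry points; encoding output defends against cross-site scripting at the exit points.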



How do you convince a team of venture capitalists to bet on you?  There are a lot of ninja techniques, but here I’ll share the fundamentals.

Vision and Scope
At patterns & practices, we use Vision Scope milestones to sell management on how we’ll change the world.  Knowing the vision and scope for a project is actually pretty key.  The vision will motivate you and your team in the darkest of times.  It gets you back on your horse when you get knocked off.  The scope is important because it’s where you’ll usually have to manage the most expectations of what you will and won’t do.

Thinking in Terms of Venture Capitalists
When I do a vision scope, I think of the management team as the venture capitalists (a tip from a friend.)  This helps me get in the right mindset.  I have to convince them that I have the right problem, the right solution, the right customers, the right impact, the right team, the right cost and the right time-frame.  Hmmmm … I guess there’s a lot to get right.  A template helps.  The right slide template helps because it forces you to answer some important questions.

Template for Vision Scope
Here’s the template from my last vision scope meeting:

Vision / Scope

  • Agenda
  • Problem
  • Vision
  • Approach
  • Prioritized Tests for Success
  • Scope
  • Key Activities
  • Deliverables


Execution
  • Team
  • Budget
  • Schedule
  • Risks
  • Asks
  • Go/No Go

It’s implicitly organized by problem, solution, deliverables, and execution.  While the slides are important, I found that the real success in vision scope isn’t the particular slides.  It’s buy-in to the vision, rapport in the meeting, and trust in the team to do the job.

What works for you?

Photo by Robert Couse-Baker.


Editor’s note: This is a guest post from Mike de Libero.  Mike has been doing software development for more than 9 years in a variety of settings.  He’s worked as a freelance developer.  He’s also worked on a small team of developers maintaining 30+ programs at one time.  He’s even worked as a security tester on the Microsoft Office team.

I first met Mike through Mark Curphey.  Software security is a small world.  The funny thing about many of the people I meet in software security is that they 1) tend to break things to make things better, 2) like to help, and 3) focus on improvement.  The great thing about Mike is that he’s got a passion for development, and he’s more focused on principles, patterns, and practices, than on a particular technology.  Here are Mike’s top lessons learned in software development …

Top 10 Lessons in Software Development

Here is a summary of my top lessons in software development:

  • Lesson 1. All software is flawed.
  • Lesson 2. Check-in often.
  • Lesson 3. Tests, gotta love them.
  • Lesson 4. Refactor, check-in and repeat.
  • Lesson 5. Coding is easy, humans are tough.
  • Lesson 6. The more eyes on your code the better.
  • Lesson 7. Keep learning and improving.
  • Lesson 8. Simple is beautiful.
  • Lesson 9. Learn software development not coding.
  • Lesson 10. Think about your audience.

Lesson 1. All software is flawed.
Anyone who has written a software program larger than “hello world” knows that there will be bugs.  That is just a fact of software development.  These flaws occur because the way a piece of software is written is a reflection of the developer’s thought processes and the challenges he or she is trying to solve.  Because human thoughts are not logically perfect all of the time, errors will occur.  Also, software development is always a trade-off between time/money and features, which leads to items left partially coded or rushed through.  And that leads us to the next lesson…

Lesson 2. Check-in often.
Everyone is using source control, right?  If you are not, start now!  When developing software or doing heavy refactoring, source control is your friend.  The more often you check in a usable piece of code, the easier it is to roll back if you completely screw something up.  On teams that require a code review before check-in, or that freeze check-ins, set up a private source control server; it is quick and easy.  I always think of source control as an undo button, and it partially frees the developer from the fear of screwing up publicly.  If you use source control and unit tests, almost all fear just goes away.

Lesson 3. Tests, gotta love them.
Ahh, unit tests… Everyone says you have to use them and that they are the best thing since sliced bread.  I happen to agree, for the most part.  When doing greenfield development, I make sure unit tests are always written and used.  However, the idea of unit tests as a part of the normal development cycle has only become semi-common in the last five years, and good software was built long before then.  Keeping a list of common tests that should be run outside of an IDE is also a great thing to have.  I think the greatest advantage of unit tests is that, as long as they are quick to run, they give the developer a quick sanity check.  If source control is the development undo button, then unit tests are the babysitter that yells at you for doing something you wouldn’t do if your parents were around.
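A minimal sketch of the idea using Python’s built-in unittest module (the `slug` function is an invented example): quick tests a developer can run as a sanity check before every check-in.

```python
import unittest

def slug(title: str) -> str:
    """Example function under test: lowercase a title and join its words with dashes."""
    return "-".join(title.lower().split())

class SlugTests(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slug("Hello World"), "hello-world")

    def test_extra_whitespace_is_collapsed(self):
        self.assertEqual(slug("  Top  10  Lessons "), "top-10-lessons")
```

Run the suite with `python -m unittest`; because the tests are fast, they serve as exactly the kind of quick sanity check described here.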

Lesson 4. Refactor, check-in and repeat.
No piece of code is perfect, but it can hopefully become better (and yes, it could get worse) by going back over it and asking, “What can I do to make this shorter, easier to understand, etc.?”  People do this all the time when writing papers, but it doesn’t happen often when writing code.  After each interval, check in the code in case you go too far and remove some needed piece of code.  Depending on the size of the code being refactored, there might be many iterations.

Lesson 5. Coding is easy, humans are tough.
Don’t get me wrong, there are some hard problems in coding, but they are not as hard as figuring out what our fellow beings want out of a piece of software.  Humans tend to be fickle and to contradict what they say.  On top of that, it is very hard to communicate clearly, and a barrier exists between “geek-speak” and normal vocabulary.  It becomes extremely difficult to figure out what users are asking for.

Lesson 6. The more eyes on your code the better.
Whenever I go into a new code base, or one I haven’t been into for a few weeks, I always spend a few minutes browsing around looking for things to improve.  When I spot a bad implementation or just a bug, I shoot an email over to the developer and let them know how it could be improved (I also change the implementation to make it better).  I ask for and expect the same thing from any developer I work with.  Why?  Because it keeps us honest and teaches us ways to make our programs better.  Sure, this might not be a sit-down code review, but I find this works fairly well, at least for smaller teams.  More formal code reviews are nice too, and have similar goals: higher quality code, bugs found before QA gets the build, and information transfer.  I think it all depends on the environment you are in.

Lesson 7. Keep learning and improving.
This lesson is pretty obvious, but it has to be said.  If you don’t learn and keep improving, you risk becoming a fez, which I doubt anyone wants.  My usual metric is:

  • What new language / technology / technique did I learn in the past month?
  • When looking at my old code, can I easily find better ways to do things?

I think the second metric is very important.  If you can’t think of ways to improve the code – even if it is not feasible in the current code base – then you should be concerned, as that is one sign you are not growing.

Lesson 8. Simple is beautiful.
The acronym KISS is awesome, and I try to follow it when developing software, since I also believe that as software grows, it becomes more complex.  Whenever I have to fix an issue or write code, I always ask myself, “Can it be any simpler?”  Simplicity has many benefits, and not just from a development perspective.  Some of them are:

  • Code is easier to understand
  • Maintenance tends to be easier
  • Simple UIs tend to make programs easier to use

Lesson 9. Learn software development not coding.
Personally, I make a distinction between coding and actual software development.  I think there are far too many people who focus on just the coding portion and not the bigger picture.  Many people can write code, but it seems fewer people can design a system, test a system, write the code, document the architecture, talk to users to figure out requirements, create semi-accurate estimates, help other team members, and know the basics of user interaction when designing a user interface.  There are things all developers should know about coding as well, but I feel improving your coding chops is pretty easy and happens as you develop software.  Learning and improving with regard to software development is not necessary for one to hold down a job as a programmer.

Lesson 10. Think about your audience.
When reading this point, did you immediately jump to the users of the program being created?  If you did, you forgot a few other audiences :).  Whenever you code, I find there are at least four audiences: the compiler; other programmers, including your future self; attackers and malicious users; and the users of the program.  All of these audiences require different things, and the easiest audience to please is the compiler.  The other audiences take a bit more work while creating software.  For example, the attacker audience is one we really want to piss off instead of please, and pissing off an attacker is fairly easy: just don’t trust input, and properly encode output (note: this won’t protect you from everything in the attacker arsenal, but it will take care of a huge amount).  The developer audience is fairly tough, as we tend to think all other code is stupid, or at least have an opinion about it; this stems from the differences in how each person thinks (at least that is my opinion).  Commenting business logic, writing clean and clear code, and keeping it simple usually helps the developer audience.  The actual users of the program are an interesting group.  There are many books on usability and design, so I am just going to suggest you pick up a few good books on the matter (if you need any suggestions, feel free to get in touch with me).

James Waletzky

Editor’s note: This is a guest post from James Waletzky. James is a Development Lead at Microsoft, and he maintains a blog about software engineering. James has shipped quite a few products and has worked on the Microsoft Engineering Excellence team, where he taught developers about agile and other software engineering practices and consulted with internal product groups to improve their engineering practices.

When J.D. asked me to share my thoughts on some top software development lessons I’ve learned throughout my time as a developer, I jumped at the chance. I have had successes and failures, and consulted with teams that share the same. Below is my list of 10 lessons I have learned through hard experience. This list is by no means definitive, but is gleaned from years of development experience.
Without further ado…
Ten Software Development Lessons

  • Lesson 1.    Keep it simple.
  • Lesson 2.    Define ‘done’.
  • Lesson 3.    Deliver incrementally and iteratively.
  • Lesson 4.    Split scenarios into vertical slices.
  • Lesson 5.    Continuously improve.
  • Lesson 6.    Unit testing is the #1 quality practice.
  • Lesson 7.    Don’t waste your time.
  • Lesson 8.    Features are not the most important thing.
  • Lesson 9.    Never trust anyone.
  • Lesson 10.    Reviews without preparation are useless.

Lesson 1.   Keep it simple.
I’ve lost count of the number of over-engineered, over-complicated designs I have seen over the past few years. Software developers are ever in search of the most elegant solution to a problem. Guess what? Complexity causes problems – like prohibiting understanding of the design and code, causing maintainability issues, increasing the likelihood of bugs, generating bloated code, and often making testing difficult. From the age-old adage:
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint Exupéry
Build in extensibility only when you need it. Accommodate change in your designs – don’t anticipate it. Keep class definitions small. Follow the Rule of Seven (i.e. 7 +/- 2 rule) when grouping concepts like methods on a class. Measure code complexity and refactor as necessary. There are many other strategies for keeping design and code simple. Use them.

Lesson 2. Define ‘done’.
Have you ever asked a developer how far along they are in their feature development? I am willing to bet that the most typical answer is 90%. Then next week you ask the same question and get the same answer. When they are eventually 100% ‘done’, do you know exactly what that 100% entails from the quality side? How exactly would you define "code complete"? I bet your answer would be different from mine, and different from J.D.’s.
It is important to have a clear, agreed-upon definition of ‘done’ at many different levels, including individual check-ins, component development, feature development, short iterations, milestones, and finally, release. On my team, code is ready for check-in when all unit tests pass, unit tests achieve 80%+ code coverage, code has been reviewed, design docs are in place, the code is free of memory leaks, and several other criteria. Our check-in checklist is arguably our most important tool. In fact, checklists are a great way to track these definitions. The meaning of ‘done’ should become commonplace and a part of your team’s vocabulary. Write it down so there is no confusion.

Lesson 3.    Deliver incrementally and iteratively.
Unfortunately, my crystal ball is in the shop being repaired. Until I get it back, it is hard to predict the future and derive a detailed plan that I am sure will hold true for the development of many features. In the absence of that crystal ball, delivering software in a piece-wise fashion helps achieve success. Break your scenarios into pieces and deliver small chunks in short iterations of 2-4 weeks in length. Get feedback early and often. Fold the feedback into the next iteration and incrementally build upon the results of the previous iteration, refactoring as needed to keep the design clean. You will end up with a better result than if you swim down the river and fly over the waterfall.

Lesson 4.    Split scenarios into vertical slices.
Assuming you are practicing scenario-based development (which could also easily make this list), it is important to break functionality into chunks to deliver real business value in short iterations. One method of chunking, assuming a typical architecture of data, logic, and presentation layers, is to deliver the lowest level (data), followed by the middle layer (logic), followed by the user interface (presentation). But the user does not care about the data layer, and you miss the chance to gain valuable feedback if you deliver it first. Instead, break things up vertically – deliver an end-to-end scenario with just enough data, logic, and UI to support the scenario. The feedback you receive will factor into future scenarios, and you adjust the design as you go. Additionally, you never write code that is not used, and you adhere to the principle of YAGNI, or "You Ain’t Gonna Need It".

Lesson 5.    Continuously improve.
Tightly coupled with delivering software in an iterative fashion is the idea of continuous improvement, often called "Kaizen". Nothing is ever good enough – at least, that is the way you should think. Work to constantly improve your processes, the way the team works together, your tools, and anything else that contributes to your software development. Step back early and often and do a retrospective on the previous iteration, feature delivery, or even past few days of work. What went well? Continue to do those things. What didn’t go so well? Get beyond the symptom to the root cause of why there were issues and come up with actionable ways to fix them. Put those actions into practice in your next iteration.  Always strive to become a high performing team with the world’s best product.

Lesson 6.    Unit testing is the #1 quality practice.
I often get asked the following question: if I could change one thing about software development to encourage improved early-development-cycle quality, what would it be? Easy – improved unit testing. Historically at many companies, developers would write the code, run the "happy path" through the debugger, and throw the code over the wall to the test team for validation. Quality would be "tested in". On more recent teams, we have been doing much more unit testing, using code coverage as a feedback mechanism, and quality has risen substantially. Additionally, unit tests give you the confidence to refactor your code at any moment, leading to cleaner designs and more maintainable code. The icing on the cake is having the tests run as part of a daily build, so you always have quick feedback as to whether functionality is broken. The disadvantages are that unit tests take time to write and you add 50%+ more code to your product, but the investment is worth it.

Lesson 7.    Don’t waste your time.
The agile development manifesto values working software over comprehensive documentation. This guideline has proven valuable. Several projects I have experienced went overboard on plans, requirements specifications, designs, test plans, process documentation, release plans, etc. Don’t get me wrong – there is value in these documentation artifacts. The key is to do "just enough". Know the audience for your documentation and do the minimum amount to meet their needs. Any more than that is waste. Every activity in the development cycle should add value to the business, product or end user. Spend your time on activities that count.

Lesson 8.    Features are not the most important thing.
Yes, you heard correctly – features are not the most important thing. Of course, if you are writing a v1 product, features are pretty important. However, in today's software market, quality and fit and finish are just as important as features. The software needs to "just work". Quality attributes such as performance and reliability are huge satisfiers and are expected by customers. Fit and finish, or polish, on a product sets it apart from competitors. A good example of fit and finish that could have been cut from the Apple development cycle is the rubber-band effect on the list control on the iPhone. When I bought my iPod Touch I flicked that thing over and over because I thought the effect had a significant cool factor. It delighted me. I fell in love with the device. Of course, polish goes hand-in-hand with features and quality attributes – the device must do what I want it to do and not crash while doing it. The point, however, is that fit and finish is very important in today's software world and should not be neglected.

Lesson 9.    Never trust anyone.
Ok, not literally. I am not talking about trusting your teammates – that is extremely important, and if you ask Stephen Covey, "trust is the life-blood of an organization". Here I am talking about trusting code that calls in from outside your boundary (e.g., through any public method). I have seen more security vulnerabilities than I can count resulting from a failure to validate input parameters. I have seen more bugs than I can count that could have been prevented by programming defensively. Use assertions liberally in your code to validate internal state. Use trace statements strategically to dump out debugging state. Assume that some client with bad intentions will call into your code, and handle all the error cases gracefully. One piece of advice that a good friend of mine and contributor to this blog, Corey Ladas, once told me: "write code as if the debugger doesn't exist". That slight switch in mindset, coupled with a focus on unit tests, will make you a much more efficient developer, reducing your time in the debugger, where you are generally least efficient.
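The boundary rule can be sketched in a few lines (a hypothetical example; the names are invented): raise real errors for anything a caller hands you across the public surface, and reserve assertions for internal invariants that should be impossible to violate once the boundary has done its job.

```python
def withdraw(balance, amount):
    """Public entry point: never trust the caller's arguments."""
    if not isinstance(amount, (int, float)):
        raise TypeError(f"amount must be a number, got {type(amount).__name__}")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return _apply_withdrawal(balance, amount)

def _apply_withdrawal(balance, amount):
    # Internal helper: callers are trusted, so invariants become assertions.
    assert amount > 0, "should have been validated at the boundary"
    new_balance = balance - amount
    assert new_balance >= 0, "balance must never go negative"
    return new_balance
```

A malicious or merely careless caller gets a clear, graceful error from `withdraw`; the assertions in `_apply_withdrawal` exist to catch your own bugs during development, not theirs.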

Lesson 10.    Reviews without preparation are useless.
If you ever get invited to a spec review or code review without having seen the document or code prior to the review, just say "no". In this case, you are about to violate lesson #7 and waste your time as well as everyone else’s. Code reviews are a valuable quality control technique that every software development organization should practice. The key to a successful review is receiving the artifact up-front and having that focused alone time to prepare and find issues. The meeting is simply used to gather the feedback and learn from one another. The meeting is not used to find more issues. It pains me to see many hours wasted in useless reviews. Don’t be a victim.
The above list of lessons learned in software development is the tip of the iceberg. There are many more lessons that could be added to this list to make us all more successful. I would love to learn from all of you as well, and hear about your top lessons learned. Care to share?

Additional Resources
There are many resources for each of the lessons in the above list. For a gateway to many good resources, see the Progressive Development blog listed below, as well as many of the other postings on this site.