This post is a write-up of how we do Customer-Connected Engineering on the Microsoft patterns & practices team. Our Customer-Connected Engineering process has been a key part of our success and impact in the software industry. While this write-up is about how patterns & practices implements Customer-Connected Engineering, you might find that you can tailor and adapt some of the principles for your own scenario or context.
Customer Connected Engineering (CCE) is a collection of practices for engaging customers during the planning, development, and release of code and narrative guidance. Instead of simply collecting customer requirements up front, or getting feedback after the fact, it’s continuous involvement of customers throughout our life cycle. By involving our customers in the process, we improve transparency and increase the probability of shipping what’s most valuable to our customers. By partnering with customers, we improve our ability to understand end-to-end scenarios as well as priorities. By shortening the cycles of feedback, we also improve our ability to learn, to reflect, and to adapt our deliverables as we clarify the wants and needs of our customers.
At patterns & practices, our approach is optimized towards external and largely unknown customers. Internal projects with identifiable stakeholders can also use CCE, but it will take a different form.
At patterns & practices, we use an approach we call Customer Connected Engineering (CCE). As the name implies, we engage with customers throughout the project. Our customers help us ship better software and deliverables that meet their needs. At the heart of customer connected engineering is a customer advisory board. The advisory board is a set of customers that helps influence what we build. The customer advisory board helps identify scenarios, prioritize the scenarios, comment on designs, test early preview bits, and give timely feedback during planning and development.
In addition to the advisory board, we have an open community that allows any community member to get involved. After the initial phase of some weeks, we typically start releasing code and guidance “drops” to the community. The community is also the place where we provide the main support for a deliverable, both during development and after it is released. Internally, we ensure alignment with product direction through a set of technology stakeholders, typically from the product groups that provide and own the platform technologies. In agile projects, the assumption is that you have the end customer, or a proxy, engaged in development. CCE ensures customer participation for a group inside Microsoft, like patterns & practices, that targets many end customers.
We use a combination of XP and Scrum for executing projects at patterns & practices, so if you’re doing XP/Scrum, most of this isn’t new. The following diagram overlays customer-connected activities on top of our development process:
The activities on the left side of the diagram below are core activities in our patterns & practices projects. On the right-hand side are customer-connected activities. Here is a brief description of each of the activities:
Most people would say that some of the things in the CCE column are already Scrum/XP: stories/scenarios, prioritization, demos, drops, and feedback. What we do that Scrum/XP do not cover is a set of practices that help a Product Owner gather feedback from a user community and aggregate it for the development team: the Customer Advisory Board, the CodePlex forums, and advisory board selection and calls.
Shipping the wrong thing is expensive. Customer Connected Engineering, done properly, provides more benefit than tax. Some of the benefits include:
The benefits of Customer Connected Engineering largely depend on both how engaged your Customer Advisory Board is and how representative they are of your target customer base.
One of the ways to successfully adopt a practice is to focus on the principles. The principles help you avoid getting stuck on implementation details. Implementation will vary from project to project, but the core concepts will stay the same. Here are some principles we’ve found to improve Customer Connected Engineering:
When we create our Customer Advisory Board, we want to be selective. The customers we choose need to have deep insight into the problems we’re working on. We search for people that are respected in the community both for their understanding of the technology and for building real-world solutions. We focus on customers that are trying to solve the same challenges. We look for customers that have a serious interest in leveraging what we develop or learning from it. We want customers that are “early adopters” and still representative of our main target customer base. Customers that just want to track how we’re doing aren’t going to help. We need customers that will actually run alongside us, taking our work and applying it, so we get specific feedback. We want customers who are not shy about pushing back, who will scrutinize our backlog and criticize our direction and execution.
We build a board that is representative of our target audience including various customer types:
A lot of software projects fail because they miss the scenarios. It’s one thing to imagine or dream up scenarios; it’s another to get them directly from customers and to articulate them in an unambiguous way. A lot of working features don’t necessarily aggregate up into working scenarios, or even the right scenarios. The value of our deliverable can be measured by the problems it solves. Ultimately, we can evaluate our deliverable against actual usage scenarios.
There are a lot of opportunities for our Customer Advisory Board to help us prioritize and make trade-offs throughout the project. For example, we get input when we prioritize our product backlog, when we prioritize our iteration backlog, and when we prioritize stories during iteration planning.
We make it obvious that we have fixed deadlines and limited resources, which means our main variable is scope. This often helps encourage the board members to engage more actively, because it gives them a clear sense of the impact of their feedback.
I’d like to thank the following people for their review and contributions:
Ade Miller, Blaine Wastell, Bob Brumfield, Chris Tavares, Don Smith, Eduardo Jezierski, Erwin van der Valk, Eugenio Pace, Francis Cheung, Grigori Melnik, Javed Sikander, John deVadoss, Michael Puleio, Per Vonge Nielsen, Tom Hollander
This is a simple frame for testing your vision, your pitch for a project, or your proposed solution. One of my mentors uses it all the time to test the thinking and to make sure the team stays on track. I’ve adopted it because it’s a great way to stay focused on the basics. Don’t let the basics get in the way of great results.
The frame is pretty simple to use. You simply walk the categories and ask questions to explore the thinking:
Here’s how they help:
It’s a simple frame, but it can help keep you focused on the right things.
Photo by BruceTurner.
When I first met Eric, several years ago, he struck me as somebody with opinions and insight. Time and again he impressed me with his words of wisdom and his perspective on everything from software to career and to life. He always has a good answer to the tough problems, and never fails to make me think.
Without further ado, here’s Eric on his Lessons in Software …
Rather than focus on software engineering and craft, I’d like to concentrate on admirable attributes of software developers as human beings. These are attributes of people I like to work for, work with, and have working for me.
The attributes fall into two categories—strength and balance. Strength attributes form the foundation of someone’s being. Balance attributes characterize how someone deals with opposing ideals. Clearly, this is going to be a philosophical discussion. Thankfully, it’s also going to be short.
I chose three strength and three balance attributes. I like working with a diverse set of people, so I narrowed these admirable attributes to just the fundamental set that yields an interesting individual I respect.
If these attributes were easy to embody, the world would be a different place. It takes commitment and courage to be insightful, reflective, and principled. It takes thoughtful and unending vigilance to delicately maintain the balance of serving and advocating, execution and slack, and trust and risk.
The right balance at the beginning of a project is often quite different from the appropriate balance at the end. People challenge your principles, doubt your insights, and question your faith in yourself and your team. You must be strong and believe in yourself, yet balanced and dedicated to those you serve. It’s not easy, and that is why I admire people who embody these attributes.
The Microsoft patterns & practices team has been around since 2000. The patterns & practices team builds prescriptive guidance for customers building applications on the Microsoft platform. The primary mission is customer success on the platform. As part of that mission, patterns & practices delivers guidance in the form of reusable libraries, in-tool experiences, patterns, and guides. To put it another way, we deliver code-based and content-based guidance.
I’ve been a part of the team since 2001. Along the way, I’ve seen a lot of changes as our people, our processes, and our catalog of products have changed over time. Recently, I took a step back to collect and reflect on our best practices. Some practices were more effective than others, and we’ve lost some along the way. To help reflect on and analyze the best practices, I created a map of the key practices organized by discipline. In this post, I’ll share the map (note that it’s a work in progress). Special thanks to Ed Jezierski, Michael Kropp, Per Vonge Nielsen, Shaun Hayes, and Tom Hollander (all former patterns & practices team members) for their contributions and insights to the map.
Best Practices by Discipline
The following table is a map of the key practices used by the patterns & practices team over the years.
Some practices are obvious, while some of the names of the practices might not be. For example, “Fireside chat” is the name of our monthly team meeting, which is an informal gathering and open dialogue. I may drill into some of these practices in future posts, if there’s interest and there are key insights to share.
My recent road trip was a great reminder of how quality is durable. As I passed through familiar territory, it was interesting to see how many buildings and places stood the test of time. Whether it was a business or a building, it was quality that survived in the long run. Some of the restaurants I remembered were gone. Every restaurant I remembered that was high quality was still around.
Competing on Price Fails in the Long Run
Competing on price failed, time and again. There was no customer loyalty when it was the price play. There was no compelling distinction beyond price. Chasing the price play meant getting priced out of the market by somebody better or cheaper or you name it. There are only so many corners you can cut before your value is insignificant. On the other hand, the quality play is focused on differentiation and distinction in terms of value. In a global market, where cycles of change are faster, competing on price is a game I just don’t want to play.
Do You Stand Behind Your Work?
One of my most important tests, and it’s a simple gut check, is, do you stand behind your work? It’s a cutting question. When your results are something you’re proud of, and quality is your game, and continuous improvement is your way, and excellence is your bar … you set yourself up for success. When you can put yourself into your work, the journey becomes as enjoyable, if not more so, than the destination.
In times of change and uncertainty, driving from quality is a guiding principle that helps us find our path.
Photo by Cornell University Library.
On the Microsoft patterns & practices team, we use Vision / Scope as a key milestone. It’s where we frame the problem, identify the business opportunity, and paint a vision of the solution. It’s a forcing function to get clarity on the customer, their scenarios, and our scope for the project. We generally use a “fix time, flex scope” pattern, so this means having a candidate backlog that we prioritize with customers.
On the execution side, we expect to know the team, key partners, the budget, the schedule, and the deliverables. We also need to know the risks and their mitigations. At Vision / Scope, the real key is first selling people on the vision, and then selling them on the execution. It’s basically about answering “Why should we go do this?” and “Why now?” This can be either about reducing pain or exploiting an opportunity. It’s also about answering these questions in the context of trade-offs. When you can tell a compelling story from problem to solution, and show how you’ll get there incrementally with a team people trust, you dramatically increase your odds of getting a “Go” decision, and the support you need.
Vision / Scope Baseline
This is my rough sketch of the key pieces I need in my Vision / Scope presentations for success:
Vision / Scope Examples
Here are some examples of various Vision / Scope slides from over the years:
My Related Posts
Here is a draft of our Cloud Security Frame as part of our early exploration work for our patterns & practices Cloud Security Project. It’s a lens for looking at Cloud Security. The frame is simply a collection of Hot Spots. Each Hot Spot represents an actionable category for information. Using Hot Spots, you can quickly find pain and opportunities, or key decision points. It helps us organize principles, patterns, and practices by relevancy. For example, in this case, we use the Cloud Security Frame to organize threats, attacks, vulnerabilities and countermeasures.
This is our current set of Hot Spots for our Cloud Security Frame:
Cloud Security Frame
Here is our draft of the Cloud Security Frame with a description of each Hot Spot category:
| Hot Spot | Description |
| --- | --- |
| Auditing and Logging | Auditing and logging refers to how security-related events are recorded, monitored, and audited. Examples include: Who did what and when? |
| Authentication | Authentication is the process of proving identity, typically through credentials, such as a user name and password. |
| Authorization | Authorization is how your application provides access controls for roles, resources, and operations. |
| Communication | Communication encompasses how data is transmitted over the wire. Transport security versus message encryption is covered here. |
| Configuration Management | Configuration management refers to how your application handles configuration and administration from a security perspective. Examples include: Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings secured? |
| Cryptography | Cryptography refers to how your application enforces confidentiality and integrity. Examples include: How are you keeping secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong? |
| Exception Management | Exception management refers to how you handle application errors and exceptions. Examples include: When your application fails, what does your application do? How much information do you reveal? Do you return friendly error information to end users? Do you pass valuable exception information back to the caller? Does your application fail gracefully? |
| Sensitive Data | Sensitive data refers to how your application handles any data that must be protected in memory, over the network, or in persistent stores. Examples include: How does your application handle sensitive data? |
| Session Management | A session refers to a series of related interactions between a user and your application. Examples include: How does your application handle and protect user sessions? |
| Validation | Validation refers to how your application filters, scrubs, or rejects input before additional processing, or how it sanitizes output. It’s about constraining input through entry points and encoding output through exit points. Message validation refers to how you verify the message payload against schema, as well as message size, content, and character sets. Examples include: How do you know that the input your application receives is valid and safe? Do you trust data from sources such as databases and file shares? |
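To make the Validation Hot Spot concrete, here is a minimal sketch in Python of constraining input through an entry point and encoding output through an exit point. The function names, the whitelist pattern, and the HTML context are illustrative assumptions for this post, not part of the frame itself:

```python
import html
import re

# Illustrative whitelist: letters, digits, and underscores, 3-32 characters.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    """Constrain input at the entry point: reject anything outside the whitelist."""
    if not USERNAME_PATTERN.match(raw):
        raise ValueError("invalid username")
    return raw

def render_greeting(username: str) -> str:
    """Encode output at the exit point so a browser treats it as data, not markup."""
    return "<p>Hello, " + html.escape(username) + "</p>"

name = validate_username("alice_01")
print(render_greeting(name))  # <p>Hello, alice_01</p>
```

The same shape applies at any trust boundary: validate against a whitelist on the way in, and encode for the specific output context on the way out.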
Threats, Attacks, Vulnerabilities and Countermeasures
Here is our working draft of our threats, attacks, vulnerabilities and countermeasures organized by our Cloud Security Frame:
(Table: for each Hot Spot above, the working draft lists Vulnerabilities and Threats / Attacks, along with countermeasures.)
How do you convince a team of venture capitalists to bet on you? There are a lot of ninja techniques, but here I’ll share the fundamentals.
Vision and Scope
At patterns & practices, we use Vision Scope milestones to sell management on how we’ll change the world. Knowing the vision and scope for a project is actually pretty key. The vision will motivate you and your team in the darkest of times. It gets you back on your horse when you get knocked off. The scope is important because it’s where you’ll usually have to manage the most expectations of what you will and won’t do.
Thinking in Terms of Venture Capitalists
When I do a vision scope, I think of the management team as venture capitalists (a tip from a friend). This helps me get in the right mindset. I have to convince them that I have the right problem, the right solution, the right customers, the right impact, the right team, the right cost, and the right time-frame. Hmmmm … I guess there’s a lot to get right. A template helps. The right slide template helps because it forces you to answer some important questions.
Template for Vision Scope
Here’s the template I used from my last vision scope meeting:
Vision / Scope
It’s implicitly organized by problem, solution, deliverables, and execution. While the slides are important, I found that the real success in vision scope isn’t the particular slides. It’s buy-in to the vision, rapport in the meeting, and trust in the team to do the job.
What works for you?
Photo by Robert Couse-Baker.
Editor’s note: This is a guest post from Mike de Libero. Mike has been doing software development for more than 9 years in a variety of settings. He’s worked as a freelance developer. He’s also worked on a small team of developers maintaining 30+ programs at one time. He’s even worked as a security tester on the Microsoft Office team.
I first met Mike through Mark Curphey. Software security is a small world. The funny thing about many of the people I meet in software security is that they 1) tend to break things to make things better, 2) like to help, and 3) focus on improvement. The great thing about Mike is that he’s got a passion for development, and he’s more focused on principles, patterns, and practices, than on a particular technology. Here are Mike’s top lessons learned in software development …
Top 10 Lessons in Software Development
Here is a summary of my top lessons in software development:
Lesson 1. All software is flawed.
Anyone who has written a software program larger than “hello world” knows that there will be bugs. That is just a fact of software development. These flaws occur because the way a piece of software is written is a reflection of the developer’s thought processes and the challenges he or she is trying to solve. Because human thoughts are not logically perfect all of the time, errors will occur. Also, software development is always a trade-off between time/money and features, leading to items left partially coded or rushed through, which leads us to the next lesson…
Lesson 2. Check-in often.
Everyone is using source control, right? If you are not, start now! When developing software or doing heavy refactoring, source control is your friend. The more often you check in a usable piece of code, the easier it is to roll back if you completely screw something up. On teams that require a code review before check-in, or that freeze check-ins, set up a private source control server; it is quick and easy. I always think of source control as an undo button, and it partially frees the developer from the fear of screwing up publicly. If you use source control and unit tests, almost all fear just goes away.
Lesson 3. Tests, gotta love them.
Ahh, unit tests… Everyone says you have to use them and that they are the best thing since sliced bread. I happen to agree, for the most part. When doing greenfield development, I make sure unit tests are always written and used. However, the idea of unit tests as part of the normal development cycle has only become semi-common in the last five years, and good software was built long before then. Keeping a list of common tests that should be run outside of an IDE is also a great thing to have. I think the greatest advantage of unit tests is that, as long as they are quick to run, they give the developer a quick sanity check. If source control is the development undo button, then unit tests are the babysitter that yells at you for doing something you wouldn’t do if your parents were around.
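To make that “quick sanity check” concrete, here is a minimal sketch using Python’s built-in unittest module. The parse_pair function under test is a made-up example for illustration, not code from this post:

```python
import unittest

def parse_pair(text):
    """Parse a 'key=value' string into a (key, value) tuple, trimming whitespace."""
    key, sep, value = text.partition("=")
    if not sep or not key:
        raise ValueError("expected 'key=value', got %r" % text)
    return key.strip(), value.strip()

class ParsePairTests(unittest.TestCase):
    def test_simple_pair(self):
        self.assertEqual(parse_pair("color=blue"), ("color", "blue"))

    def test_whitespace_is_trimmed(self):
        self.assertEqual(parse_pair(" color = blue "), ("color", "blue"))

    def test_missing_separator_raises(self):
        with self.assertRaises(ValueError):
            parse_pair("color")
```

A suite like this runs in milliseconds with `python -m unittest`, which is exactly what makes it usable as a sanity check between edits.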
Lesson 4. Refactor, check-in and repeat.
No piece of code is perfect, but it can hopefully become better (and yes, it could get worse) by going back over it and asking the question, “What can I do to make this shorter, easier to understand, etc.?” People do this all the time when writing papers, but it doesn’t happen often when writing code. After each pass, check in the code in case you go too far and remove some needed piece of code. Depending on the size of the code being refactored, there might be many iterations.
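As a tiny illustration of one pass through that loop (the example is made up, not from the post), the same function before and after asking “what can I do to make this shorter, easier to understand?”:

```python
# Before: verbose index-based loop with repeated lookups.
def total_before(prices):
    total = 0
    for i in range(len(prices)):
        if prices[i] > 0:
            total = total + prices[i]
    return total

# After one refactoring pass: same behavior, half the code, clearer intent.
def total_after(prices):
    return sum(p for p in prices if p > 0)

# Both versions agree, so this pass is safe to check in.
assert total_before([3, -1, 4]) == total_after([3, -1, 4]) == 7
```

The assertion is the safety net for the pass; with it in place, you check in and start the next pass.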
Lesson 5. Coding is easy, humans are tough.
Don’t get me wrong, there are some hard problems in coding but they are not as hard as figuring out what our fellow beings want out of a piece of software. Humans tend to be fickle and contradict what they say. On top of that it is very hard to communicate clearly and a barrier exists between “geek-speak” and normal vocabulary. It becomes extremely difficult to figure out what users are asking for.
Lesson 6. The more eyes on your code the better.
Whenever I go into a new code base, or one I haven’t been into for a few weeks, I always spend a few minutes browsing around looking for things to improve. When I spot a bad implementation or just a bug, I shoot an email over to the developer and let them know how it could be improved (I also change the implementation to make it better). I ask for and expect the same thing of any developer I work with. Why? Because it keeps us honest and teaches us ways to make our programs better. Sure, this might not be a sit-down code review, but I find this works fairly well, at least for smaller teams. The more formal code reviews are nice too, and have similar goals: higher-quality code, bugs found before QA gets the code, and information transfer. I think it all depends on the environment you are in.
Lesson 7. Keep learning and improving.
This lesson is pretty obvious, but it has to be said. If you don’t learn and keep improving, you risk becoming a fez, which I doubt anyone wants. My usual metric is:
I think the second metric is very important. If you can’t think of ways to improve the code – even if it is not feasible in the current code base – then you should be concerned as that is one sign you are not growing.
Lesson 8. Simple is beautiful.
The acronym KISS is awesome, and I try to follow it when developing software, since I also believe that as software grows it becomes more complex. Whenever I have to fix an issue or write code, I always ask myself, “Can it be any simpler?” Simplicity has many benefits, not just from a development perspective. Some of them are:
Lesson 9. Learn software development not coding.
Personally, I make a distinction between coding and actual software development. I think there are far too many people who focus on just the coding portion and not the bigger picture. Many people can write code, but it seems fewer people can design a system, test a system, write the code, document the architecture, talk to users to figure out requirements, create semi-accurate estimates, help other team members, and know the basics of user interaction when designing a user interface. There are things all developers should know about coding as well, but I feel improving your coding chops is pretty easy and happens as you develop software. Learning and improving at software development, by contrast, is not necessary for one to hold down a job as a programmer.
Lesson 10. Think about your audience.
When reading this point, did you immediately jump to the users of the program being created? If you did, you forgot a few other audiences. Whenever you code, I find that there are at least four audiences: the compiler; other programmers, including your future self; attackers and malicious users; and the users of the program. All of these audiences require different things, and the easiest audience to please is the compiler. The other audiences take a bit more work while creating software. For example, the attacker audience we really want to piss off instead of please, and pissing off an attacker is fairly easy: just don’t trust input, and properly encode output (note: this won’t protect you from everything in the attacker’s arsenal, but it will take care of a huge amount). The developer audience is fairly tough, as we tend to think all other code is stupid, or at least have an opinion about it; this stems from the differences in how each person thinks (at least that is my opinion). Commenting business logic, writing clean and clear code, and keeping it simple usually helps the development audience. The actual users of the program are an interesting group. There are many books on usability and design, so I am just going to suggest you pick up a few good books on the matter (if you need any suggestions, feel free to get in touch with me).
Editor’s note: This is a guest post from James Waletzky. James is a Development Lead at Microsoft and he maintains a blog about software engineering at http://blogs.msdn.com/progressive_development. James has shipped quite a few products and has worked on the Microsoft Engineering Excellence team, where he taught developers about agile and other software engineering practices and consulted with internal product groups to improve their engineering practices.
When J.D. asked me to share my thoughts on some top software development lessons I’ve learned throughout my time as a developer, I jumped at the chance. I have had successes and failures, and consulted with teams that share the same. Below is my list of 10 lessons I have learned through hard experience. This list is by no means definitive, but is gleaned from years of development experience.
Without further ado…
Ten Software Development Lessons
Lesson 1. Keep it simple.
I lost count of the number of over-engineered, over-complicated designs I have seen over the past few years. Software developers are ever in search of the most elegant solution to a problem. Guess what? Complexity causes problems: it prohibits understanding of the design and code, causes maintainability issues, increases the likelihood of bugs, generates bloated code, and often makes testing difficult. From the age-old adage:
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint Exupéry
Build in extensibility only when you need it. Accommodate change in your designs; don’t anticipate it. Keep class definitions small. Follow the Rule of Seven (the 7 +/- 2 rule) when grouping concepts like methods on a class. Measure code complexity and refactor as necessary. There are many other strategies for keeping design and code simple. Use them.
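As a sketch of what “measure code complexity” can look like without any particular tool (the rough_complexity helper and its counting rules are my own simplification, not a standard metric implementation), you can approximate a cyclomatic-style count by walking the syntax tree and counting decision points:

```python
import ast

def rough_complexity(source: str) -> int:
    """Rough cyclomatic-style count: 1 plus each decision point found in the AST."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
print(rough_complexity(snippet))  # 3
```

When a number like this creeps up on a function, that is the cue to refactor before it gets worse.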
Lesson 2. Define ‘done’.
Have you ever asked a developer how far along they are in their feature development? I am willing to bet that the most typical answer is 90%. Then next week you ask the same question and get the same answer. When they are eventually 100% ‘done’, do you know exactly what that 100% entails from the quality side? How exactly would you define “code complete”? I bet your answer would be different from mine, and different from J.D.’s.
It is important to have a clear, agreed-upon definition of ‘done’ at many different levels, including individual check-ins, component development, feature development, short iterations, milestones, and finally, release. On my team, code is ready for check-in when all unit tests pass, unit tests achieve 80%+ code coverage, code has been reviewed, design docs are in place, the code is free of memory leaks, and several other criteria. Our check-in checklist is arguably our most important tool. In fact, checklists are a great way to track these definitions. The meaning of ‘done’ should become commonplace and a part of your team’s vocabulary. Write it down so there is no confusion.
Lesson 3. Deliver incrementally and iteratively.
Unfortunately, my crystal ball is in the shop being repaired. Until I get it back, it is hard to predict the future and derive a detailed plan that I am sure will hold true for the development of many features. In the absence of that crystal ball, delivering software in a piece-wise fashion helps achieve success. Break your scenarios into pieces and deliver small chunks in short iterations of 2-4 weeks. Get feedback early and often. Fold the feedback into the next iteration and incrementally build upon the results of the previous one, refactoring as needed to keep the design clean. You will end up with a better result than if you swim down the river and fly over the waterfall.
Lesson 4. Split scenarios into vertical slices.
Assuming you are practicing scenario-based development (which could also easily make this list), it is important to break functionality into chunks in order to deliver real business value in short iterations. One method of chunking, assuming a typical architecture of data, logic, and presentation layers, is to deliver the lowest level (data), followed by the middle layer (logic), followed by the user interface (presentation). But the user does not care about the data layer, and you miss the chance to gain valuable feedback if you deliver in that order. Instead, break things up vertically: deliver an end-to-end scenario with just enough data, logic, and UI to support the scenario. The feedback you receive will factor into future scenarios, and you adjust the design as you go. Additionally, you never write code that is not used, and you adhere to the principle of YAGNI, or “You Ain’t Gonna Need It”.
Lesson 5. Continuously improve.
Tightly coupled with delivering software in an iterative fashion is the idea of continuous improvement, often called "Kaizen". Nothing is ever good enough – at least, that is the way you should think. Work to constantly improve your processes, the way the team works together, your tools, and anything else that contributes to your software development. Step back early and often and do a retrospective on the previous iteration, feature delivery, or even past few days of work. What went well? Continue to do those things. What didn’t go so well? Get beyond the symptom to the root cause of why there were issues and come up with actionable ways to fix them. Put those actions into practice in your next iteration. Always strive to become a high performing team with the world’s best product.
Lesson 6. Unit testing is the #1 quality practice.
I often get asked the following question: if I could change one thing about software development to encourage improved early-development cycle quality, what would it be? Easy – improved unit testing. Historically at many companies, developers would write the code, run the "happy path" through the debugger, and throw the code over the wall to the test team for validation. Quality would be "tested in". On more recent teams we have been doing much more unit testing using code coverage as a feedback mechanism and quality has risen substantially. Additionally, unit tests give you the confidence to refactor your code at any moment in time leading to cleaner designs and more maintainable code. The icing on the cake is having the tests run as part of a daily build, so you always have quick feedback as to whether functionality is broken. The disadvantages are that unit tests take time to write and you add 50%+ more code to your product, but the investment is worth it.
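As a small illustration, here is what "getting beyond the happy path" looks like with Python's standard `unittest` module. The unit under test (`word_count`) is a made-up example; the point is that the tests pin down the edge cases a quick debugger walk-through would miss, and they keep running in every daily build.

```python
import unittest

def word_count(text):
    """The unit under test: counts whitespace-separated words."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(word_count("unit tests build confidence"), 4)

    def test_empty_string(self):
        # The edge case that "testing in" quality tends to miss.
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    # exit=False lets the suite run inside a larger build script.
    unittest.main(argv=["word_count_tests"], exit=False)
```

With a suite like this in place, you can refactor `word_count` freely; a red bar tells you within seconds if you broke something.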
Lesson 7. Don’t waste your time.
The agile development manifesto values working software over comprehensive documentation. This guideline has proven valuable. Several projects I have experienced went overboard on plans, requirements specifications, designs, test plans, process documentation, release plans, etc. Don’t get me wrong – there is value in these documentation artifacts. The key is to do "just enough". Know the audience for your documentation and do the minimum amount to meet their needs. Any more than that is waste. Every activity in the development cycle should add value to the business, product or end user. Spend your time on activities that count.
Lesson 8. Features are not the most important thing.
Yes, you heard correctly – features are not the most important thing. Of course, if you are writing a v1 product, features are pretty important. However, in today’s software market, quality and fit and finish are just as important as features. The software needs to "just work". Quality attributes such as performance and reliability are huge satisfiers and are expected by customers. Fit and finish, or polish, on a product sets it apart from competitors. A good example of fit and finish that could have been cut from the Apple development cycle is the rubber band effect on the list control on the iPhone. When I bought my iPod Touch I flicked that thing over and over because I thought the effect had a significant cool factor. It delighted me. I fell in love with the device. Of course, polish goes hand-in-hand with features and quality attributes – the device must do what I want it to do and not crash while doing it. The point, however, is that fit and finish is very important in today’s software world and should not be neglected.
Lesson 9. Never trust anyone.
Ok, not literally. I am not talking about trusting your teammates – that is extremely important, and if you ask Stephen Covey, "trust is the life-blood of an organization". Here I am talking about trusting code that calls in from outside your boundary (e.g. through any public method). I have seen more security vulnerabilities than I can count resulting from a failure to validate input parameters. I have seen more bugs than I can count that could have been prevented by programming defensively. Use assertions liberally in your code to validate internal state. Use trace statements strategically to dump out debugging state. Assume that some client with bad intentions will call into your code, and handle all the error cases gracefully. One piece of advice that a good friend of mine and contributor to this blog, Corey Ladas, once told me: "write code as if the debugger doesn’t exist". That slight switch in mindset, coupled with a focus on unit tests, will make you a much more efficient developer, reducing your time in the debugger, where you are generally least efficient.
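Here is a small sketch of the distinction in Python. The `transfer` function and its parameters are invented for illustration; what matters is the split between validating untrusted caller input at the public boundary (raise a clear error) and asserting internal invariants that should hold if your own logic is correct.

```python
def transfer(balance_cents, amount_cents):
    """Public boundary: never trust the caller's input."""
    # Validate inputs defensively instead of assuming a well-behaved client.
    if not isinstance(amount_cents, int) or amount_cents <= 0:
        raise ValueError("amount_cents must be a positive integer")
    if amount_cents > balance_cents:
        raise ValueError("insufficient funds")

    new_balance = balance_cents - amount_cents

    # Assertions document and check internal state, not caller input;
    # if this fires, the bug is in our code, not the client's.
    assert 0 <= new_balance < balance_cents, "balance invariant violated"
    return new_balance
```

Written this way, a hostile or buggy caller gets a graceful, well-defined error rather than corrupted state, and the assertion catches our own mistakes long before a debugger session would.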
Lesson 10. Reviews without preparation are useless.
If you ever get invited to a spec review or code review without having seen the document or code prior to the review, just say "no". In this case, you are about to violate lesson #7 and waste your time as well as everyone else’s. Code reviews are a valuable quality control technique that every software development organization should practice. The key to a successful review is receiving the artifact up-front and having that focused alone time to prepare and find issues. The meeting is simply used to gather the feedback and learn from one another. The meeting is not used to find more issues. It pains me to see many hours wasted in useless reviews. Don’t be a victim.
The above list of lessons learned in software development is the tip of the iceberg. There are many more lessons that could be added to this list to make us all more successful. I would love to learn from all of you as well, and hear about your top lessons learned. Care to share?
There are many resources for each of the lessons in the above list. For a gateway to many good resources, see the Progressive Development blog listed below, as well as many of the other postings on this site.