
Definition Of Done Mindset

A fear often associated with self-organizing teams is the potential loss of focus on quality. Indeed, how does an organization ensure it produces a satisfactory level of quality when that responsibility is distributed among teams rather than centralized? Many organizations solve this issue by crafting and abiding by a Definition of Done, or DoD for short. A DoD is to product developers (analysts, designers, marketers, software programmers, testers, etc.) what the National Electrical Code (NEC) is to electricians. Unlike the NEC, however, there is usually no independent association telling an organization what its DoD should contain. Each organization has to craft its own. In this article, we explore the mindset that should be adopted when crafting a DoD.

Before diving in, let’s remember that a Definition of Done is nothing more than a list of feedback loops a team deems necessary to address in its quest for quality. For example, teams that operate in a software product development context could get inspiration from this DoD starter kit.
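
To make the idea concrete, here is a minimal sketch of what such a list of feedback loops might look like for a software team. It is purely illustrative: the criteria and loop names are my own assumptions, not the contents of the referenced starter kit.

```python
# Illustrative sketch only: a DoD expressed as criteria paired with the
# feedback loop that enforces each one. The entries are hypothetical.
definition_of_done = [
    {"criterion": "Code reviewed by at least one other developer",
     "feedback_loop": "peer review"},
    {"criterion": "Automated unit and acceptance tests pass",
     "feedback_loop": "continuous integration"},
    {"criterion": "Increment demonstrated to the Product Owner",
     "feedback_loop": "sprint review"},
    {"criterion": "Usability checked with at least one customer",
     "feedback_loop": "customer demonstration"},
]

for item in definition_of_done:
    print(f'{item["feedback_loop"]:>24}: {item["criterion"]}')
```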

Realistic or Idealistic?

It can be tricky to establish a Definition of Done while product development processes are still evolving. Should a team adopt a DoD that can be met today, even if it compromises on quality a little, or should it adopt a stricter, more thorough DoD that the current reality prevents it from meeting? I have a strong personal preference for the more stringent DoD, the one that aims for perfection.

Presuppositions

At this point, I must make a couple of assumptions explicit. First, I’m assuming that an organization embarking on a Lean Startup or Agile transformation is an organization motivated to change. Presumably, its leaders are dissatisfied with current results and have the willpower to plow through the inevitable obstacles (see Executive Support article for other willpower manifestations). I am thus assuming commitment. We will see in a minute why this is critical.

Second, the stakeholders must recognize that a cure for the current organizational ailments cannot be found in the mere mechanical adoption of an Agile method, be it Kanban, Scrum, XP, or any other. There must be a realization that the method’s chief promise, especially when it comes to quality, is to highlight the problems from which the organization already suffers. Only a change in mindset will allow the organization to solve those issues, not some Agile “pixie dust.” I am thus also assuming intellectual honesty.


Problem Solving

A committed and candid organization wants to be aware of its biggest issues as soon as possible so that it can get on with the arduous task of solving them. Through the prioritization and resolution of its most significant impediments, the organization knows it will get stronger and better able to meet its goals (see article on Agile transformation motivations). Such an organization would, therefore, craft a DoD based on what it needs, not what it can currently achieve. Put another way, the DoD will be measured against perfection, not benchmarked against current capabilities.

Pragmatism

Before going further, I must define what I mean by perfect quality. I am not talking about a blind aggregation of all techniques known to mankind that could possibly drive quality up no matter the cost. Here, perfection aligns with customer expectations and the organization’s economic context. If a given quality dimension isn’t important to customers, the DoD should relax or omit criteria along that dimension. Often, organizations will describe the quality context in a QA Vision or a Quality Management Strategy.

At first, a perfect DoD may be difficult to meet due to existing impediments. It’s even likely that some elements of the DoD cannot be met at all if frequent releases of valuable increments are to continue. That’s OK. Resolving those impediments is presumably valuable (otherwise they wouldn’t be impediments). That value should be estimated, perhaps with a tool like Cost of Delay, and the resolution of the impediments prioritized against other initiatives. In the interim, unmet “Done” criteria should be treated as explicit, temporary exceptions. It is important that the capability gaps remain visible: when they are, team members have frequently surprised me with creative and elegantly simple solutions. Conversely, if a difficult-to-meet criterion is removed from the DoD, a team will soon forget that this problem remains to be solved. Out of sight, out of mind, and all that jazz.
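
As a rough illustration of how such an estimate might be used, the sketch below ranks hypothetical impediments by CD3 (Cost of Delay divided by duration), one common way of applying Cost of Delay. Every impediment name and dollar figure is invented for the example.

```python
# Hedged sketch: ranking hypothetical impediments by CD3
# (Cost of Delay per week divided by weeks of effort to remove it).
impediments = [
    {"name": "No automated regression suite", "cod_per_week": 8000, "weeks_to_fix": 12},
    {"name": "Manual deployment process",     "cod_per_week": 5000, "weeks_to_fix": 4},
    {"name": "No staging environment",        "cod_per_week": 3000, "weeks_to_fix": 2},
]

for imp in impediments:
    imp["cd3"] = imp["cod_per_week"] / imp["weeks_to_fix"]

# Highest CD3 first: the biggest delay-cost reduction per week of effort spent.
for imp in sorted(impediments, key=lambda i: i["cd3"], reverse=True):
    print(f'{imp["name"]}: CD3 = {imp["cd3"]:.0f}')
```

The point is not the arithmetic itself but that impediment removal competes for priority on the same economic footing as feature work.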

In summary, my recommendation for an organization establishing its Definition of Done is to ignore current impediments, however intractable they might seem, and write its ideal DoD. I believe this promotes a level of transparency and candor that cannot be attained when starting with a merely achievable DoD. In other words, I’m asking that teams channel famed American football coach Vince Lombardi who said: “Perfection is not attainable, but if we chase perfection we can catch excellence.”


How To Run Customer Demonstrations

Even with the understanding that customer feedback is critical, organizations are often gun-shy about seeking it on non-production-ready products or services. Their concerns about exposing the insides of the sausage factory frequently revolve around: appearing to lack confidence, committing themselves to a long list of customer requirements, leaning on employees with varying communication skills, opening the door to unbounded customer support requests, impacting sales as customers wait for upcoming offerings, and generally jeopardizing the relationship with participating customers. Can those risks be mitigated? Yes, they can, by managing the demonstration. Below are some examples of what this means to me.

Let Go Of Defensiveness

First, on the fear of appearing hesitant and harming the customer relationship: In innovation, the reality is that we don’t always know what customers want, so we are in fact a bit uncertain. Would customers penalize us for admitting it? That hasn’t been my experience. Customers are all too often treated poorly by the companies they do business with. Most are thus thrilled to be sincerely asked for their input and positively elated when that input makes it into the product or service.

Listening Is Not Agreeing

Second, is hosting a demonstration akin to committing to implementing every suggestion? If left unmanaged, it might be, but this is where customer preparation comes in. Expectations must be set that although feedback will be noted, it will be compiled across a number of customers and considered through the lens of the organization’s strategy. Thus, there is a promise to listen, but not necessarily to agree. In my experience, most customers are OK with that.

No Proxies

Third, should the organization’s doers be the ones presenting, or should that be left to slick sales/marketing types? I believe the right of first refusal should go to the doers. Most doers rarely (if ever) see, talk to, or hear from customers. Therefore, most will relish the opportunity. The impact of this activity on the doers’ understanding of the overall business context and empathy for customer needs is almost magical. No doubt the presentations will be a little rougher, more raw, and less polished, but is that such a bad thing? Probably not! I would bet that many customers are a little tired of being managed, influenced, and sold to by public relations types. Therefore, the rough edges are likely to impart an air of authenticity to the proceedings. That said, it probably does make sense for the Product Owner to open and close the proceedings.

On the above point, there is another side benefit to less polished demos. Research has shown that people are generally reluctant to provide negative feedback if they believe a lot of effort went into the thing being reviewed. Therefore, too much polish is probably harmful to both the quality and the quantity of feedback.

Image by John Cook via Flickr

Transparency

Fourth, might sales be delayed by unveiling future versions of the product? Sure, yet demonstrations are held with only a sample of customers, not the entire customer population. Thus, this potential impact on sales should be quantified and the cost/benefit of the feedback evaluated. Don Reinertsen provides guidance on valuing information. I never advocate doing anything out of inertia; deliberate decisions are much better.
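
For instance, a back-of-the-envelope comparison might look like the sketch below. Every figure in it is a made-up assumption; the only point is that the trade-off between delayed sales and the value of the information can be made explicit rather than decided by inertia.

```python
# Back-of-the-envelope sketch: is the feedback worth the risk of delayed sales?
# All figures below are invented assumptions for illustration purposes.
invited_customers = 10
prob_customer_defers_purchase = 0.2   # chance a participant waits for the next version
avg_deferred_revenue = 15_000         # revenue pushed out per deferring customer
carrying_cost_rate = 0.05             # cost of receiving that revenue one quarter late

expected_cost = (invited_customers * prob_customer_defers_purchase
                 * avg_deferred_revenue * carrying_cost_rate)

# Value of information: expected savings from catching a bad product decision early.
prob_feedback_changes_a_decision = 0.3
cost_of_shipping_the_wrong_feature = 50_000
expected_value = prob_feedback_changes_a_decision * cost_of_shipping_the_wrong_feature

print(f"Expected cost of delayed sales: ${expected_cost:,.0f}")
print(f"Expected value of the feedback: ${expected_value:,.0f}")
```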

In summary, can all risks be taken out of customer demonstrations? No. Does the value of the feedback received exceed the potential cost of the risks? If the demonstration is properly managed, I think the answer is an emphatic “yes.” Customer demonstrations don’t have to be thought of as minefields of risk. They are events that can be prepared and managed for the greatest benefit of all participants. As with most other processes, a clever organization will get better at them with practice. In the comments section below, I invite you to share your tricks for getting the most out of customer demonstrations.

P.S.

A former colleague insists that no demonstration should conclude without the Product Owner asking, “Would you buy this today?” as a way to better assess the value that the customer perceives in the product. I think this is a great practice when the value cannot be marked-to-market through an actual exchange of money. The question will likely garner a lot of Nays, but the eventual Yays will more than make up for it all.

Works cited

John Cook, “Fresh Sausage Class”, Flickr, 11 Mar. 2015. Web. 15 Oct. 2015.