    Written by Luke Hohmann
    on March 10, 2015


    CEO Luke Hohmann on why it's more effective to use the Collaboration Framework Buy a Feature collaboratively than with a single customer.

    Recently, I was asked why it’s more effective to use our Decision Engine platform (a.k.a. Buy a Feature / Buy a Project) collaboratively, rather than using Buy a Feature with a single customer and asking them to allocate their budget to the features they choose.

    It’s not an uncommon question, and it frames Decision Engine and Buy a Feature as an alternative to other choice or preference modeling techniques, notably "Forced Rank." Forced Rank is a good comparison because it helps highlight the differences between our collaborative prioritization engine and more traditional market research techniques.

    Results
    Let’s start by comparing the results generated by both Buy a Feature and Forced Rank. Both techniques uncover the priorities of the participants. However, using Forced Rank or Buy a Feature with a single person will only give you insight into that person’s priorities.

    Buy a Feature, when used collaboratively, gives you insights into two additional and critical pieces of information: The reasons behind the choices and the conditions of acceptance.
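    To make that difference concrete, here is a minimal sketch in Python of the two result shapes. The field names and example items are hypothetical illustrations, not our platform’s actual data model:

        from dataclasses import dataclass, field

        # Forced Rank (or solo Buy a Feature): one person's ordering, and nothing else.
        forced_rank_result = ["Flibble", "Gleeble", "Self-service portal"]

        # Collaborative Buy a Feature: each purchased item also carries the reasons
        # surfaced in negotiation chats and any conditions of acceptance.
        @dataclass
        class PurchasedItem:
            name: str
            total_bid: float                                      # sum of all participants' bids
            reasons: list[str] = field(default_factory=list)      # from negotiation chats
            conditions: list[str] = field(default_factory=list)   # conditions of acceptance

        buy_a_feature_result = [
            PurchasedItem(
                name="Flibble",
                total_bid=120.0,
                reasons=["Helps us solve [problem] by [reason]"],
                conditions=["Only if it fits our current workflow"],  # hypothetical condition
            ),
        ]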

    For example, if you scan the chat logs for a Buy a Feature forum, you'll find that the player chats can be characterized as follows:

    • Social: “Hello”, “How are you?”, etc.
    • Help / Usage: “How do I make a bid?”
    • Negotiations: “Sally, you should buy the Flibble because it will help us solve [problem] by [reason].”
    • Conditions of Acceptance: “OK, I’ll help you buy Flibble, but only if it ….”

    As you can guess, the payload of the negotiation and conditions-of-acceptance chats produces far superior results for analysis. And skilled facilitators can encourage people to state their reasoning by asking them to elaborate while participating in the forum. For example, you may ask a player, “I see you’ve made a partial bid on the Flibble. What can you say to the other participants to motivate them to join you in purchasing the Flibble?”
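    As a rough illustration of how one might mine those chats, here is a minimal keyword-based sketch in Python that separates the high-payload categories from social chatter. The cue words and messages are hypothetical, and this is not our production tooling:

        # Hypothetical cue words for each chat category described above.
        CATEGORY_CUES = {
            "help": ("how do i", "where is"),
            "negotiation": ("you should buy", "it will help us"),
            "condition": ("only if", "i'll help you buy"),
        }

        def categorize(message: str) -> str:
            """Tag a chat message with the first category whose cue it contains."""
            text = message.lower()
            for category, cues in CATEGORY_CUES.items():
                if any(cue in text for cue in cues):
                    return category
            return "social"  # greetings and other low-payload chatter

        chat_log = [
            ("Pat", "Hello! How are you?"),
            ("Pat", "Sally, you should buy the Flibble because it will help us solve it."),
            ("Sally", "OK, I'll help you buy Flibble, but only if it ships this year."),
        ]

        # Keep only the negotiation and conditions-of-acceptance chats for analysis.
        payload = [(who, msg) for who, msg in chat_log
                   if categorize(msg) in ("negotiation", "condition")]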

    Because Buy a Feature allows for solo purchasing, facilitators are also trained to prompt participants who’ve made solo purchases for insights: “Ming, I see you’ve used a big portion of your money to purchase the Gleeble. Why is this so important to you? What impact will purchasing this item have on your work?” (Note that we typically instruct our facilitators to “whisper” these prompts, so that the chats are not visible to other participants and don’t affect their choices.) We’ve found that these prompts generate highly actionable results. And if the facilitator’s prompt is timed immediately after the purchase has been made, we’ve found that people are far more likely to respond, especially because whispers are discreet.

    Self-Reported Behavior vs. Actual Behavior
    A common objection to forum results is that the data is based on self-reported behavior, and respondents often lie. Of course, they don’t mean to lie, but they do. After facilitating thousands of online forums, we’ve found that the online component tends to inhibit this (inadvertent) lying, because it’s easier to call each other’s bluff in a calm and rational way when you’re not face-to-face.

    Higher Quality Negotiation
    We have chat log data suggesting that, for people who are used to negotiating complex priorities face-to-face, the forums produce more “equal” results, because they eliminate body language and other forms of coercive behavior. There is no leaning in, raising of voices, or banging on tables.

    This is probably less important when leveraging the forums for market research, but I recall one online session from last January’s San Jose Budget Games (in which we used the platform to engage citizens in setting budget priorities). One participant was clearly getting frustrated, as evidenced by his behavior in the forum. The facilitator skillfully encouraged this person to practice his arguments via whispers, thereby defusing a situation in which the facilitator might otherwise have had to kick the player out of the forum. (Facilitators have special powers in our system: whisper “help” to the “System” in an online forum and you’ll see them. We’re going to make these more prominent in an upcoming release.)

    Data Quality
    Let’s explore the quality of the data generated in these forums. I’m going to focus on the techniques that are most comparable: solo Buy a Feature (simply giving one person a budget to allocate is a solo version of the game) vs. collaborative Buy a Feature.

    One dimension of data quality concerns the strength of the expressed preferences. When participants are acting alone, they hedge their bets. For example, an individual may think, “OK, I’ve got $100 to allocate. What do I want? A is really important, so I’ll bid $35. And B is also pretty important, so I’ll give it $30. That leaves me with $35. I think C and E are good, too, so I’ll give them each $15. And K also sounds nice, but I only have $5 left. Oh well, let me put $5 on K.”

    The lack of conviction for C, E and K creates less actionable results. Let’s contrast this with what happens when participants are collaborating to purchase an item. Let’s say that in this forum, Sally initially allocates her money to the items as stated above. But the other players disagree with her priorities. They work to convince Sally to reallocate her money. And these final, negotiated results are usually quite different, and far more meaningful.
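    Here is a minimal sketch, in Python with hypothetical prices and budgets, of why those hedged solo bids are weak signals: an item is only purchased when the combined bids reach its price, so thin bids spread across many items fund nothing, while negotiated, pooled bids do:

        # Hypothetical item prices, set so one $100 budget cannot fund them all.
        PRICES = {"A": 60, "B": 55, "C": 40, "E": 40, "K": 30}

        def purchased(allocations: list[dict[str, int]]) -> dict[str, int]:
            """Sum everyone's bids per item; return only the fully funded items."""
            totals: dict[str, int] = {}
            for allocation in allocations:
                for item, amount in allocation.items():
                    totals[item] = totals.get(item, 0) + amount
            return {item: t for item, t in totals.items() if t >= PRICES[item]}

        # Solo: Sally hedges across five items and funds nothing.
        sally_solo = {"A": 35, "B": 30, "C": 15, "E": 15, "K": 5}
        print(purchased([sally_solo]))   # -> {}

        # Collaborative: after negotiation, Sally pools her $100 on fewer items
        # and another participant joins her bids.
        group = [
            {"A": 50, "B": 50},          # Sally, reallocated
            {"A": 20, "B": 10},          # a convinced teammate
        ]
        print(purchased(group))          # -> {'A': 70, 'B': 60}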

    Here are two real-world examples. First, in a series of forums we produced for VeriSign on how to improve tech support, less experienced engineers consistently supported a project that would introduce some self-service capabilities into the platform (like Cisco’s). More experienced engineers talked them out of this during the forums by pointing out inherent security flaws. The side benefit, of course, was critical education for the less experienced employees.

    Second, in a series of forums that we produced for VersionOne, many of VersionOne’s customers initially purchased Item A, but were later convinced by other customers to instead purchase Item B. The reason? While both items were clearly important, A had a workaround. B didn’t. And it was through the conversations of the customers – the wisdom of the tribe – that the workaround was shared, allowing a better set of priorities to be generated.

    Enjoyment
    You’ve probably noticed that when you’ve used Buy a Feature, there have been a few moments of laughter and associated feelings of “enjoyment” (or even joy) when an item was purchased. That emotion is related to a small amount of dopamine being released in your brain, precisely because you were able to overcome a shared challenge by working together as a team.

    We’ve recently started discussions with Steve Martin, the noted social psychologist, to better understand the importance of these phenomena. Steve suggests that these feelings induced through play are especially important in project and portfolio management, because the positive emotions experienced in the forum contribute to teamwork and commitment to implement the project. Note that we’re not entirely sure how this impacts forum participants when they are outside customers engaged in market research, as opposed to internal team members.

    Effective / Impactful Research vs. Statistical Significance
    If you absolutely require statistical significance in your research, I would recommend using Conjoint analysis or a similar market research technique. The reasons run very deep, but the summary is that statistical sampling theory is based on non-collaborative behavior. As we have not yet worked out the mathematical foundations for statistical significance in collaborative forums, I can’t recommend our techniques when statistical significance is required.

    The closest research we’ve found in this domain is from Abbie Griffin, who suggests that you can get something like 75% of your core priorities correct by talking with as few as 31 customers. Our experience is that we start to see very actionable patterns around 5 to 8 forums. That’s more people than Abbie suggests is required, but I think the difference is related to the collaboration in the forums.

    I continue to believe that our techniques generate high-impact, more actionable results, far faster and at significantly lower cost than conjoint analysis. And unlike a survey, participants find the process extremely enjoyable.

    Let us know what you think. 

    Add your comment below.
