Thursday, May 12, 2011

The Danger of Testing in the Wild

I read an interesting article on the UXmatters site, the thrust of which was that usability testing with a mockup that contains errors "can compromise your data collection, complicate your study's logistics, and potentially, impact your study's budget and schedule." It's an excellent point, albeit an academic and idealistic one, but my greater disappointment is the article's myopic focus on the experiment itself.

It's no fault of the authors that the expectation I brought to the article went unfulfilled - the topic I wanted to explore wasn't the one they cared to discuss. But on the bright side, it leaves me with a topic for my notebook - specifically, that testing "in the wild" (as opposed to in the usability lab) is a dangerous proposition.

I've encountered firms that take inordinate pride in doing an insane amount of A:B testing, and I've read books that advocate extensive real-world testing of absolutely everything. I'm generally a fan of the practice - real-world test results trump the maelstrom of opinions and posturing that undermines the design process - but I've seen some truly terrible things hung out on the Web under the premise that it's "just a test."

The point I was hoping the authors of the original article would get around to making is that testing does damage. Tens of thousands of unwitting participants have no idea that what they're seeing on a given day is the "B version" of an A:B test. For them, the page or flow they see is reality, an encounter with a company that shapes their perception of the brand.

To some extent, a lab test has the same effect - the experience of being a test subject shapes the perception of the brand - though the small sample size and the laboratory environment set expectations. There are test sessions where the subject, excited by what he's just seen, eagerly wants to know when it will be available "for real" - and it follows that there must also be sessions where a subject, disappointed by what he has seen, carries away a negative impression.

But testing in the wild is done without any notice or indication - telling users in advance that they are seeing a "beta test" would likely skew results, and telling them afterward could compound rather than mitigate the negative sentiment toward the brand.

That's not to discourage testing as a best practice - it's essential, and far preferable to rolling a change out to your full audience before you can see the results - but it must be done with the expectation that it can be harmful, and with steps taken to mitigate the damage: limiting sample size and test duration, as well as the number of different tests run in a given period. A sketch of what those limits might look like follows.
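As a concrete illustration only - the bucketing scheme, the five-percent cap, the test name, and the dates below are all my own assumptions, not any particular company's practice - the mitigations might look something like this:

    import hashlib
    from datetime import date

    TEST_NAME = "checkout-redesign"   # hypothetical test identifier
    EXPOSURE_CAP = 0.05               # show the B version to at most 5% of users
    END_DATE = date(2011, 5, 26)      # hard stop, regardless of interim results

    def sees_b_version(user_id: str, today: date | None = None) -> bool:
        """Deterministically assign a user, honoring the cap and the end date."""
        today = today or date.today()
        if today > END_DATE:
            return False  # the test is over; everyone gets the A version
        # Hash the user id together with the test name so each test buckets
        # users independently, and a given user's assignment stays stable.
        digest = hashlib.sha256(f"{TEST_NAME}:{user_id}".encode()).hexdigest()
        bucket = (int(digest, 16) % 10000) / 10000.0
        return bucket < EXPOSURE_CAP

The point of the cap and the end date isn't statistical rigor; it's bounding how many people encounter the experimental version, and for how long, so that whatever damage the test does to the brand is contained.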

I'll do a bit more research to see whether much study has been done in this area, but I'm not sanguine that I'll find anything. I expect it's fairly common for companies to test to determine the potential impact on immediate user behavior, but exceedingly difficult to determine the impact of the test itself on brand equity - it's likely written off as collateral damage, and my point in this meditation is to suggest that it should not be.
