I've been involved in a number of projects this year that revolve around refining the design of task flows based on testing in the wild - including champion-challenger tests, A/B tests, and multivariate tests, each of which changes one or more elements of the design of pages and flows to observe the resulting changes in user behavior. In general, I'm enthusiastic about the results, but I do have some reservations.
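To make the mechanics concrete, here is a minimal sketch in Python of how an A/B test splits traffic and records the resulting behavior. The variant names, the conversion signal, and the numbers are all hypothetical and purely illustrative - this is not drawn from any real test, just the general shape of one.

```python
import hashlib
import random

# Hypothetical variants of a single page element (e.g., a checkout button label).
VARIANTS = {"A": "Proceed to checkout", "B": "Buy now"}

def assign_variant(user_id: str) -> str:
    """Bucket a user into variant A or B by hashing their id, so the same
    user consistently sees the same version of the page."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def record_outcome(user_id: str, converted: bool, log: list) -> None:
    """Append the observed behavior (e.g., completed the task flow or not)
    so conversion rates per variant can be compared afterward."""
    log.append({"user": user_id, "variant": assign_variant(user_id), "converted": converted})

# Simulate a batch of visitors; in practice the conversion signal is the
# user's real behavior, not random noise as it is here.
log = []
for i in range(1000):
    record_outcome(f"user-{i}", converted=random.random() < 0.1, log=log)

for name, label in VARIANTS.items():
    subset = [r for r in log if r["variant"] == name]
    rate = sum(r["converted"] for r in subset) / len(subset)
    print(f"Variant {name} ({label!r}): {rate:.1%} conversion over {len(subset)} users")
```

The point of the sketch is only that the mechanism is blind: it tells you which variant produced more of the measured behavior, and nothing about why - which is where the reservations below come in.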
Primarily, I am enthusiastic because testing is the antidote to narcissism. It seems to me that much of what designers propose to do is based on their own assumptions about the way in which users will react. I will grant that designers have the best credentials for making such assumptions - they have studied the theory and principles of design and have witnessed from their own practical experience how the application of that theory succeeds or fails when it is implemented.
It is fairly obvious to designers that business executives and software developers who like to contribute their notions to design very often lack the relevant knowledge and experience, and that the ideas they contribute tend to be arbitrary - but at the same time, designers seem loath to admit the same arbitrariness on their own part. No matter how much knowledge and experience a designer has, his judgment is based on speculation - and however well-grounded that speculation may be, it may not bear out in practice.
Furthermore, it's distressingly common that designers do not receive reliable information about the consequences of their decisions. Freelancers and consultants, in particular, consider their job "done" when the site is launched and never touch base with the client to find out whether the results were positive (and in fairness, they are dissuaded from doing so by clients who take such a follow-up as an offer to do additional work to improve the outcome without additional payment). But even for in-house designers, the news about the outcome seldom makes it back.
As such, uninformed designers learn nothing from the experience and assume everything they did worked out swimmingly. This is the primary value testing brings to user experience: it grounds designers in reality, enabling them to see the results of their decisions in the field and, ultimately, to make better decisions as a consequence. Without this insight, designers easily become lost in the clouds, chasing their artistic visions and stoking their own egos without a much-needed reality check.
At the same time, I have some reservations about the way in which testing is implemented - specifically, in that it is often proposed as a substitute for sound judgment rather than a validation of it. In effect, testing has led to the "spaghetti" approach of throwing a bunch of ideas against the wall to see what sticks. The primary drawback of this approach is that it is unfocused and wasteful. Bad ideas - ones that were not worth testing in the first place - are put through their paces on the chance that they have some merit. A designer's experienced judgment can provide an excellent filter to eliminate obviously flawed ideas without having to test them.
A much more significant problem with this approach is that it damages the esteem in which the brand is held. It's perfectly acceptable to test a prototype in a lab because it involves a small number of participants, each of whom understands perfectly well that what they are seeing is being tested. But when an idea is tested in production, the audience often does not know it is a test (and telling them so would change their behavior) - the result is that the experience they have with a test variant is their real experience of the brand, and any negative impression becomes an indelible memory.
This is a particular problem in the online channel, because our distance from the participant prevents us from gauging the severity of the reaction. We know that they clicked through to the next page, or did not do so, but we cannot accurately assess the impression they took away from the experience.
Said another way, no one in his right mind would sign off on (or even propose) a test for the voice channel in which half of the service representatives greeted the customer with "Good morning, Mister Jones, how may I help you?", another quarter with "What's up, Tommy?", and the last quarter with "What do you want, you bastard?" It is obvious that the third option is unacceptable, and that every customer who called that day would be less than amused.
I'm certain that some secondary research could be drummed up to suggest that customers enjoy playful jibes, or that a case study could be found of a business that had success being rude to its customers, but that doesn't mean it's a good idea for every business - and common sense should tell you that it's not even worth testing.
And while I have not seen an example quite that egregious, I have heard it suggested that firms test every conceivable combination of elements to see which works best, and some very bad ideas have been presented by non-designers who think that something unusual might be a good idea. (Hint: it's unusual for a very good reason.)
The problem as I see it is that firms tend to swing from one extreme to the other - either "don't test anything" (and design by arrogance) or "just test everything" (and exercise no discretion) - neither of which is at all desirable.
Ultimately, it's a matter of finding a sensible middle ground - trusting the experience and judgment of qualified design professionals to provide an expert opinion, and then testing to validate that judgment. Until that happy medium is achieved, a great deal of damage can be done to the user experience of a brand.