Saturday, October 20, 2012

Fashion versus Function


An interesting observation: a decade ago, people took a great deal of pride in owning a new gadget as an object of conspicuous consumption.   They wanted other people to see them using it and to ask them questions about it, as if merely having the gadget made them (seem to be) a person of some importance and esteem.    But the value people take from being a thing-owner has waned: the gadgets that have longevity, that were not abandoned, have had to deliver actual value to the user beyond serving as a token that could be flashed to get attention.

I'm of two minds about this observation: there's something about it that rings true, but at the same time I have the sense it's not quite right ... and what follows is some meditation on that conflict.


Primarily, I have a sense that the implication that this is a cultural change may be wrong, or is at least an idiosyncrasy of the present environment.   That is, I have the strong sense that America is a consumer culture and that distressingly many American consumers can be justly characterized as shallow narcissists - and I don't have a sense that this has changed much over the past few decades.  The advertising message of "be the first kid on your block to own one" is nothing new, and I don't think the (rather contemptible) psychology to which it appeals has changed or diminished over the last decade ... but instead there hasn't really been any "shiny new thing" that can be shown off.  That is, the next truly different gadget that comes along (something physical, that can be seen by other people) will cause a resurgence of conspicuous consumption.

That considered, the core of the statement rings true: for any new product that is capable of being conspicuously consumed, there will be a period during which conspicuous consumption is a driving force in its adoption by the market: people will want to have one for the sake of the attention they get merely by being an object-owner in the presence of non-owners.   But after the product is in wider circulation, to the point at which enough people have one that the mere possession does not confer esteem, there is the reconsideration of whether the product offers any other benefit.

Consider the example of cell phones: in the nineties they were rare enough that anyone seen using one was considered (or hoped to be considered) a person of some importance.   Ten years later, they were so widespread that even children and poor people carried them, and their value as objects of fashion diminished.  Granted, those who joined the owners' club late still expected esteem: speaking loudly on a cell phone in a public place was a desperate and annoying attempt to get other people to pay attention and acknowledge one's importance, in spite of the fact that cell phone ownership no longer garnered any esteem.

About the time that the cell phone had lost its cachet, smart phones came along, and the attention-starved individuals turned to those devices - but that fashion has also run its course.  To my earlier point, nothing else seems to have come along to become the next badge of esteem for owners ... but as soon as it does, I'm confident we'll see the cycle repeat, because the culture has not changed, we just lack a prop to put our collective insecurity on public display.


Thus far, I have been overly focused on the "fashion" and haven't had much to say about "function."   It's much more difficult to assess the value of a device to a user from an outsider's perspective.  That is, I can plainly observe a person using a smart phone in a public place, and the furtive way in which they periodically look about to see if anyone has noticed them using it ... but I can't very well observe whether they are getting any genuine value from whatever it is they happen to be doing at the time.

In retrospect, it was likely possible to make such an observation of the last-generation technology of cell phones because at least one side of the conversation was audible.  It seems to me that a decade ago, whenever you heard someone using a cell phone, there was some substance to the conversation and they were calling someone else for a specific purpose, whereas nowadays, the laggards who still think that owning one makes them seem important are engaged in banal and vapid conversations that, ironically enough, loudly demonstrate how unimportant they are - but that may just be a function of my own selective memory, picking recollections that support my present assessment.

But more to the point: a device must deliver functional value even after its fashion value has worn off, and producers should attend closely to the transition from the fashion phase to the function phase.   That seems to be the critical difference between products that are fads and novelties and those that have real staying power in the market.

It's likely also important to have a clear conception of exactly what that value is.  Mobile computing was originally sold on the notion that it had a functional value - it was for relatively important business-like activities such as checking your flight status, trading stocks, transferring money among accounts, and that sort of thing.  But if you consider the top applications of all time, it's all games and social networking, meaning that the majority of device users aren't using the devices as originally intended.

That's not to say they're not using them at all, as there likely is some genuine value to being able to distract yourself with a game and chitchat with friends, and given the amount of money people will pay to be able to do those things (device cost plus software costs plus monthly service fee over a five-year period), it shouldn't be taken lightly.  

But at that, I may be straying into yet a different topic - for now, the crux of this meditation is that many products begin as a fashion, and peter out if they fail to deliver functional value, and that serious mistakes can be avoided if you are careful to recognize when consumers are trending from the former to the latter.

Tuesday, October 16, 2012

The Need for Granularity

I find it puzzling that certain brands seem to end up on both extremes of survey results regarding customer experience. That is, the same brand will appear on "the best" list for one survey and "the worst" list for another, when both seem to be investigating how the brand is perceived by customers in terms of quality, satisfaction, reputation, and customer experience.

There are likely a number of factors that might cause this to happen: the design of the survey instrument, events recent to when the survey was conducted, and the like. But for the moment, I'm taken with the notion that broad-based surveys that ask customers about their experience with or opinion of a brand in general lack specificity, and as such gather a general impression of a brand that is exceptionally good in some regards and exceptionally bad in others.

It's particularly noxious in that fuzzy and contradictory information leads to fuzzy and contradictory decision-making. The perception is that because a given brand is highly rated, every single thing that it does is spot-on perfect ... when in truth, some of the things it does are really great, and some are really awful, and there needs to be greater granularity in order to understand the customer experience holistically.

Maybe that's thinking as an insider, a UX professional who's been rubbed a bit raw by constant suggestions that one company should imitate the practices of another that is more successful in certain aspects (because they have more customers, or more revenue, let's imitate them without hesitation or consideration). But I also have the sense that it's important as a customer to have a clearer understanding of what to expect from a given firm.

That is to say that "Brand X is great" is a dangerous statement to make, dangerous to accept, and especially dangerous to imitate without understanding what it is that leads to this general impression. I've considered whether it might be melodramatic to use the word "dangerous" in this context - but I think it's fair: if you make such a statement, your reputation may suffer when experience suggests otherwise; if you accept such advice, you may be misled; and if you imitate the practices, you may be doing more harm than good.

While general assessments have some value in spite of their vagueness, it seems to me that a more granular consideration is necessary to yield accurate and useful information: what is it that makes them great? And in the interest of balance, is there anything they do that isn't so great? My sense is that this is more telling: it identifies areas in which firms excel or need improvement, and it may explain the seeming paradox that arises when a given brand ends up on the "best" list of one survey and the "worst" list of another.

It's likely also fair to say that there is not a company out there that is absolutely excellent in every regard. Each firm generally focuses on doing a small number of things extremely well, and by intent or neglect ends up doing other things poorly, and that to have a better sense of what is to be valued/emulated, a more granular analysis is necessary:
  • A retailer may offer the lowest prices, but a poor merchandise selection
  • An electronics manufacturer may provide excellent products, but have deplorable tech support
  • An online video rental service may have a remarkable shipping service, but a clumsy and awkward Web site
  • A restaurant may have excellent cuisine, but a rude and surly wait staff
This may be crossing over into an entirely different consideration of what factors are important to a given customer - the natural conclusion being that a firm should focus on the factors that matter most to its particular market segment - but to stay on the present topic, the difference in this consideration could cause two people to have entirely different overall impressions, or even one person to give seemingly contradictory survey responses depending on the precise nature of the questions asked.

All in all, I think the conflict could be resolved if more information were available: that is, there might be greater agreement among survey results if surveys were more granular. While a brand might find itself on both the best and worst lists when it comes to overall impressions, my sense is that results would be more consistent if surveys asked whether brand X provides a serviceable product, whether the buying process is convenient, whether it provides good after-sale support, and so on.

Even within those more specific categories, there is room for more specific investigation. The "buying process" might involve the ease of finding items, getting questions answered, the checkout process, etc. And "the checkout process" itself can be broken down into smaller actions and various facets. The more granular the assessment, the more accurate the assessment, and the more realistic the expectations a customer has when dealing with a firm.
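To make the granularity point concrete, here is a minimal sketch (in Python, with invented ratings data and facet names) of how an overall satisfaction score can mask a badly rated facet of the buying process:

```python
# Hypothetical granular survey data: each facet of the "buying process"
# is rated separately on a 1-5 scale instead of as one overall score.
ratings = {
    "finding items": [5, 5, 4, 5],
    "getting questions answered": [2, 1, 2, 3],
    "checkout process": [5, 4, 5, 5],
}

def mean(xs):
    return sum(xs) / len(xs)

# Per-facet averages expose the weak spot...
facet_means = {facet: mean(scores) for facet, scores in ratings.items()}

# ...while the single overall average smooths it away.
overall = mean([s for scores in ratings.values() for s in scores])

print(f"overall: {overall:.2f}")
for facet, m in sorted(facet_means.items(), key=lambda kv: kv[1]):
    print(f"  {facet}: {m:.2f}")
```

A respondent answering only "how satisfied are you overall?" would report something near the middling average; the granular breakdown shows one facet is excellent and another is poor.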

I don't expect this information to be disclosed in public surveys: most people who are involved with a brand (customers and insiders alike) merely want to check the list to see how their brand stacks up, and it's hardly worth the effort for a publisher to disclose the full story. It might be available from research firms, for a fee. But most likely, individual firms will wish to keep this information under wraps, disclosing only what is favorable and hiding what is not in order to preserve their esteem.

So ultimately, the lesson to take from this is to seek more granular information where it is available, with the understanding that it likely will be inaccessible, and recognize that a blanket statement or general survey is likely too vague and idiosyncratic to be taken at face value.

Friday, October 12, 2012

Strategy Development for Customer Experience

The question arose in a discussion forum as to how customer experience practitioners should approach strategy, which immediately veered off in the wrong direction, but which led me to ruminate on the topic a bit.  My reaction was that there are existing models for strategy development: Google "problem solving process" or do an image search for diagrams, and there is no shortage of suggestions for the steps to go through to develop a plan (short or long range).

The question implies that a different methodology is in order, but I don't believe that to be so: the existing models are perfectly serviceable, but the manner in which they are undertaken must be tailored to focus on customer experience.

Consider the four basic steps of the problem-solving process:
  1. Recognize the Problem
  2. Develop a Plan
  3. Execute the Plan
  4. Evaluate the Results
A couple of red herrings are immediately apparent: First, this model assumes there is a problem to be solved rather than an opportunity to be seized, but it works equally well in either situation (though pursuing opportunities tends to be more speculative and predictive because it is based on a concept).    Second, there are dozens of similar models with varying numbers of steps, though most of the additional steps arise from subdividing the existing ones ("Develop a Plan" may be broken into sub-steps). I'll avoid those sidetracks to remain focused on the topic of customer experience.

If a plan is to be developed to improve customer experience, or to ensure that an initiative that seems to have nothing to do with customer experience does not have a negative impact on it, the same process can be applied - but the inputs and evaluations along the way must be focused on the customer experience. At the very least, the concerns of the customer must be represented by someone who is qualified and motivated to speak on their behalf; a middle ground can be achieved by including market research; but the best approach would be to include the customer in the strategic discussions directly.

And so, consider how customer focus can be maintained in each step:

1. Recognize the Problem

Problem recognition is often done because the "problem" represents a barrier to the internal interests of the firm.   Instead, consider the problem from the customers' perspective - how is this a problem for them in the course of serving their own interests?

A better approach than speculating what the customer might consider to be a problem would be to leverage traditional market research tools (surveys, interviews, focus groups, and the like) and customer feedback.    Once the core problem is identified, follow-up studies can be done to gather more granular information.

2. Develop a Plan

Developing a plan seems to be an internal matter: decide what the firm will do, using resources it already has and processes that are comfortable or efficient for itself.   However, to be customer-focused, set aside the concerns of the firm and consider instead the concerns of the customer: what will they do, using the resources they have and processes that are comfortable or efficient for them?

Only after that is done should you switch to an internal perspective and consider how you will accommodate the desires and interests of the customer, given your culture and resource constraints.   This may also identify areas in which the firm must change its procedures and obtain additional resources to successfully address the problem.

Where a conflict of interest arises, return to the customer's perspective: if the firm does not do what is necessary to serve their interest, or implements a solution that is an imperfect or an inconvenient fix, will they still be willing to do business with the firm?  Or would it be an opportunity for them to consider a different supplier?  

3. Execute the Plan

I cannot at the moment conceive of how customers would participate in the execution of a plan.  This seems to me an entirely internal step, and I can't think of an approach to involving them that isn't completely bizarre and/or impractical.

4. Evaluate the Results

Evaluating the results means resuming the customer's perspective.   Internally, there is likely greater interest in whether the solution achieved its goals for the business (cost savings or increased revenues) - an interest that cannot, and should not, be entirely ignored.   But the greater question is whether the solution worked for the customer.

To discover this, the same market research tools that were used in the identification phase could be leveraged, to see where the metrics have shifted as a result of the change.   Better yet, observe actual behavior - what people do is far more important than what they say they might do.   While observing customers in the wild seems difficult, it's likely that reporting and monitoring mechanisms can be built into the solution, particularly if it is a technology solution.
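For a technology solution, the built-in monitoring can be as simple as an event log. The sketch below (Python, with invented event names and a made-up search-to-purchase conversion metric) illustrates the idea of instrumenting the solution so that observed behavior, not self-report, drives the evaluation:

```python
import time

# Minimal sketch of baking measurement into a solution: every customer
# action appends an event record that can later feed the same metrics
# used during problem identification. All names here are illustrative.
def log_event(log, customer_id, action, **details):
    log.append({
        "ts": time.time(),
        "customer": customer_id,
        "action": action,
        "details": details,
    })

events = []
log_event(events, "c-123", "search", query="blue widgets")
log_event(events, "c-123", "add_to_cart", sku="W-42")
log_event(events, "c-123", "checkout_complete", total=19.99)

# A behavioral metric: of the customers who searched, how many bought?
searched = {e["customer"] for e in events if e["action"] == "search"}
bought = {e["customer"] for e in events if e["action"] == "checkout_complete"}
conversion = len(searched & bought) / len(searched)
```

The point is not the particular metric but that the data comes from what customers actually did with the solution, which can then be compared against the baseline research gathered in the identification phase.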

***


The running theme to all of this is: if the problem-solving approach does not serve the interests of the customer, it likely has nothing to do with the process itself, but with the degree to which the needs of the customer are considered while the steps are undertaken.   Or said another way: you tend to hit what you're aiming at - and if your aims are entirely internal, you shouldn't be surprised when customers don't seem to be falling in line with your grand scheme.

As usual, I've rattled on a while and considered the matter at a high level and in general terms. This could use a great deal more consideration, but this should do for a survey/introduction to the idea.

Monday, October 8, 2012

Lessons from Brick-and-Mortar Retail

I recently read Emmet Cox's book on Retail Analytics, which focuses almost exclusively on the brick-and-mortar channel. To my way of thinking, that focus is not a drawback: too many times, it is proclaimed that the brick-and-mortar channel is dead (or dying) and the online channel has nothing to learn from it. This is generally the proclamation of people who don't understand the very things they propose should be ignored, and unless we know better, such people can steer us in the wrong direction.

As in many things, there's a great deal to be learned from methods that have been successful in other channels, and a great deal that carries over very well from one channel to another. Whether online or in-store, customers have preferred retailers with whom they shop often. They tend to shop with a certain frequency, buy certain products together, and change their preferences of both items and retailer with surprising predictability (if you know what to monitor).

Online merchants have the ability to perform all of the same kinds of analyses as do traditional ones. Granted, they have a much broader customer base (they do not draw from a limited geographic area), which carries with it certain advantages and disadvantages, but they still collect the same kinds of data (header, detail, and tender) about every transaction.

There is additional data that an online merchant can collect. Unlike in a physical store, monitoring and tracking every movement of a customer from the time they enter to the time they leave is neither impractical nor intrusive in the online channel - and unless you turn off a lot of server logging functions, it is in fact impossible not to collect this data.   Such information can grant additional insight - but it is "additional," not "replacement": knowing the precise number of people who look at an item but decide not to purchase could be helpful (if it drives a decision), but it does not supersede more meaningful metrics.

The problem isn't that retailers in either channel don't have data - the problem is that they don't know what to do with it. There's a great deal of very meaningful information that can be gleaned from the bits of data that are collected that is simply ignored. Or when it is collected, it is viewed with a "Hmmm ... that's interesting" perspective and doesn't guide business decisions.

If a merchant isn't using the simple basket data that is collected as a matter of course with every sales transaction, he's missing a lot. And for online merchants, the additional click-trail data that is gathered as a user browses a site and adds or removes items from their basket is often a distraction from the more meaningful transaction data that is being dismissed as irrelevant simply because the customer is using a different channel.

As such, the analytic techniques used in traditional retail are still valid and well worth considering - and worth getting right, even before considering any of the additional data available by virtue of the online channel.  Ideally, the multi-channel merchant aggregates all data, separates it by channel, and lets channel-specific behaviors become evident. There will be differences, but they will not be altogether incompatible and unrelated.
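As a concrete illustration of the kind of analysis that carries over between channels, here is a minimal sketch in Python (with invented transaction data) of the first step of a traditional market-basket analysis: counting how often pairs of items are purchased together, using nothing more than the detail records every transaction already generates.

```python
from itertools import combinations
from collections import Counter

# Hypothetical transaction detail records: one basket per transaction,
# the same shape whether the sale happened in-store or online.
baskets = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "eggs"},
    {"milk", "eggs", "butter"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Simple "support": the share of transactions containing the pair.
support = {pair: n / len(baskets) for pair, n in pair_counts.items()}
top_pair, top_support = max(support.items(), key=lambda kv: kv[1])
```

Nothing in this computation cares which channel the baskets came from, which is the point: get this right first, then layer on click-trail data if it actually drives a decision.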

Thursday, October 4, 2012

Levels of Price Sensitivity

A casual conversation about wristwatches got me to thinking about customer price sensitivity: I have not seen a resource (nor has searching the Web turned up one) that provides a categorization schema that accurately reflects the way in which consumers consider price.   There is general concern, and general consideration, that the price of a given item be affordable and/or acceptable to the market - but the consideration seems to be binary: customers will or will not buy at a given price.

Ultimately, the decision for the customer is also binary - to buy or not to buy - though a closer examination of the "not to buy" option in regard to the stated price (given that for most purchases customers decide whether a product meets their needs and merits consideration) is a bit less direct: clearly, if a product is initially regarded as unaffordable, the customer must revisit whether fulfilling some needs merits sacrificing others.   But there are also instances in which an affordable price creates hesitation: the price seems too low, and this arouses doubt.

Stepping back for a moment, the manner in which a customer sets the amount they are willing to pay for a given item (which drives the way a seller should seek to price his items), absent a specific example, seems to be ill-defined even for an individual.  That is, in our conversation about wristwatches, there was a vague sense that there is a price range that is expected: how much is "too expensive" depends on finances, how little is "too cheap" touches more on the notion of mistrust: the suspicion that the item is shoddy, a forgery, or even stolen goods.

The question of "how much would you pay for a wristwatch" caused some deliberation - likely because it's a luxury item that some expect to pay a significant price to obtain. Pay too little, and you get a product that is no good; pay too much, and you're wasting money.  (And yes, this touches on the way in which the customer regards an item: some people pay significant amounts for a wristwatch as a status symbol, while others regard it as utterly unimportant and are happy with a cheap disposable - but both are likely extremes, and a distraction from the point at hand.)

This leads to a categorization schema that includes the extremes (too much, too little), the sweet spot (about right), and two categories in-between:
  • Expensive - The consumer in question would not consider an item in this class at all. Though there may be a perception of very high quality, it does not merit the price. (For wristwatches, specifically a men's dress watch, we agreed this price would be $5,000 or more.)
  • Costly - The consumer might be reluctant to pay this much, but would ultimately be willing to do so on the basis that he is getting good value for the price. (We belabored this a bit, but set a range of $1,000 to less than $5,000.)
  • Average - The price seems about right to what the customer expects to pay for the item, and there is no reluctance or hesitation in making the purchase, though there is a somewhat diminished sense of the item's quality. (The range we discussed was $500 to less than $1,000.)
  • Value - The price seems a bit lower than what is expected, and there's more hesitation over quality, but it still seems reasonable that the customer may be getting a good deal on an item of fair quality. (Our range was $100 to less than $500.)
  • Cheap - The price is suspiciously low, and the customer would refuse to purchase the item because their suspicions are that the quality is very poor, or that the watch is a knockoff or possibly even stolen goods. (Anything under $100)
I don't expect those numbers will hold up for the broader market - it was a conversation between two consumers of the same general income bracket, not to mention other geodemographic similarities - but my sense is that a broader survey would still likely find the same categories, with the same rationale, though the dollar amounts would vary according to the income and culture of respondents.
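The schema lends itself to a simple classifier. The sketch below (Python) encodes the dollar thresholds from our wristwatch conversation, which, as noted, are specific to that discussion and would shift with the income and culture of the market segment:

```python
# Price-sensitivity bands from the wristwatch discussion in the post.
# The dollar thresholds are illustrative, not universal: each (upper
# bound, label) pair marks the top of a band, checked in order.
BANDS = [
    (100, "cheap"),      # below $100: suspiciously low, refused
    (500, "value"),      # $100 to <$500: good deal, some hesitation
    (1000, "average"),   # $500 to <$1,000: about what's expected
    (5000, "costly"),    # $1,000 to <$5,000: reluctant but willing
]

def classify_price(price):
    """Map a price to the five-level sensitivity schema."""
    for threshold, label in BANDS:
        if price < threshold:
            return label
    return "expensive"   # $5,000 or more: not considered at all
```

For a different consumer segment, only the `BANDS` table would change; the five categories and their rationale would, per the argument above, stay the same.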

This also feeds back to the three-category system of too little, about right, and too much. But the off-center categories of "costly" and "value" are significant: these are ranges in which the customer would hesitate to purchase, but might ultimately decide to do so after some internal deliberation.

I'm now beginning to meditate about how a marketer or a salesman might work on prospects who find themselves in the quagmire of costly/value and help the customer to sort out their reluctance, hopefully to overcome it, but at the very least to make a firm decision so the buying process can move forward - but that's likely a separate consideration and I've nattered on quite long enough.