Wednesday, May 29, 2013

Rational Solutions for Emotional Needs

The reason-vs-emotion debate has been going on for millennia (literally), and it's generally accepted that both have influence to varying degrees, based on the idiosyncrasies of the people and situations in question.   That outcome seems something of an unenlightened shrug that comes to no firm or useful conclusion regarding how the two come to an agreement on a course of action.    

I'm toying with a notion that seems to have some merit in terms of sorting out reason and emotion in the buying process: consumers tend to be emotional about their needs, but rational in solving them.  Granted, there are instances in which reason seems to be used in identifying needs and emotion does not switch off during the solution process - so this is likely not perfectly true in all instances, but it may be helpful in getting closer to understanding the interplay of the two.

Emotional Needs

The vast majority of needs that consumers seek to serve in the marketplace are entirely emotional - specifically, they are keyed to the emotions of hope and fear in regard to our desires: we hope to gain something pleasurable as a result of purchasing some products, or we fear that we will experience something unpleasant if we fail to purchase others.  It really is that simple.

It may become complex when we attempt to describe or explain our needs.  We tend to rationalize or justify our emotions, to ourselves as well as to others, and the logic becomes highly convoluted - the complication arises because that logic is essentially a veneer over a motivation that has nothing at all to do with reason, but with emotions that we find difficult to explain and uncomfortable to admit.

To be clear: needs exist well in advance of choosing a product to solve them - when we begin considering what products might serve a need, we have transitioned from recognizing needs to evaluating solutions.   It's somewhat difficult to separate the two because we often speak of the "need for clothing" or the "need for food" when in reality, clothing and food are solutions to needs (cold and hunger), so to suggest a need for a thing is already to have jumped to the next step in problem solving.

The need is identified when we sense something (a sensation of mild discomfort) and have an emotional reaction (a fear that this discomfort will continue or worsen unless something is done about it).   After that, the rational mind kicks in.

Rational Solutions

The first step in solving a problem is investigating the causes.   It is logic, rather than emotion, that guides us in doing so.   To continue the previous example, we recognize a sensation of discomfort and experience fear that it will continue or worsen unless we take action.   As such, there is an immediate application of logic to evaluate the nature of the discomfort, and recognize that "I am hungry" and, further, to consider the customary solution to the problem and translate it into a statement such as "I need food."

This is not particularly brilliant, and many species instinctively understand the connection between consuming food, stopping the sensation of discomfort, and alleviating the fear that the discomfort will worsen.   Every animal does this, and would not have survived if it did not, so the logic is very primitive - but it is logic, rather than emotion, that makes these connections.  And in human beings, who are not factory equipped with a complex array of instincts, the rational process is more easily recognized as such.

Reason becomes more apparent when we consider the way in which solutions are defined.  "I need food" would be satisfied by "food" - anything edible that would mitigate the physical sensation would suffice.  "I want a turkey and Swiss cheese sandwich on whole wheat bread with low-fat mayonnaise on the side" represents quite some cogitation (at least the first time it is ordered; after that, we are relying on previous work rather than performing the mental gymnastics again).

There can be a great deal of deliberation about the rational processes that lead us to the identification of a solution, not to mention those we undertake when we seek to effect the solution - but in the context of this short meditation, it should suffice to recognize that our identification of a solution to a need is more rational than emotional.

Cross-Pollination

And so, I'm led to the conclusion that needs are emotional and solutions are rational - but it's a very vague and loose conclusion.    I'm well aware that there is some application of reason to the identification of needs, and some intrusion of emotion into the process of solution - and it's likely that this differs according to the individual, the need, the solution, the context, and various other factors.

But having pondered this notion a while, and having mulled over far more examples than it's feasible to document here, I do have the sense that the general thesis holds: needs are essentially emotional and solutions are essentially rational, and understanding this should help sellers make better choices in the way they interact with buyers in the marketplace.

Saturday, May 25, 2013

The Right to Refuse Service


In general, companies attempt to take all comers.  You could argue that is borne of a sense of fairness to provide service to all people equally, or you could argue that it is borne of a sense of greed to grab as much money as possible, but in either case the results are the same: the firm attempts to serve as many customers as possible, and in doing so makes its products cheap and generic.   I have the sense that's not always a good thing.

I raised the point in a previous meditation that there is an implicit abandonment in market segmentation - that choosing to cater to the specific needs and interests of a narrowly defined segment leads a brand to neglect the needs and interests of the broader market (and vice versa).  But there are also instances in which exclusion is neither passive nor implicit, but a deliberate choice.

It's particularly evident in the luxury segment: a luxury brand does not offer itself cheaply to anyone who wants it, but uses price and distribution as barriers to ensure that its product is only in the hands of a select, elite few.  To have its product in the hands of everyone is to diminish its esteem and its significance as a marker of social distinction.  That is to say that when the lower ranks of society begin to adopt a brand, it loses its appeal to the upper ranks.

This is not merely (or perhaps "not only") a matter of snobbishness; it may in fact be in the interest of customers for firms to refuse service to some individuals who wish to do business with them.  Consider the number of insurance companies that refused to write policies for homeowners in Florida because the state placed strict limits on the premiums they could charge.  For national companies, covering the gap between capped premiums and expected claims in Florida would mean charging higher rates for coverage in states that are not prone to natural disasters - in effect, making people in areas that are not disaster-prone subsidize the cost of insurance for people who reside in areas that are.  In this sense, adopting a "take all comers" approach would be decidedly unfair.
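
To make the cross-subsidy concrete, here is a minimal back-of-envelope sketch; the figures are purely hypothetical, not drawn from any actual insurer's data.

    # Purely hypothetical figures, for illustration only.
    expected_claims_per_fl_policy = 2500    # expected annual payout per Florida policy
    capped_premium_fl = 1800                # maximum premium allowed under the state cap
    florida_policies = 100000

    shortfall = (expected_claims_per_fl_policy - capped_premium_fl) * florida_policies

    other_state_policies = 500000
    surcharge = shortfall / other_state_policies
    print(f"Each out-of-state policyholder pays an extra ${surcharge:.2f} per year")

With those invented numbers, every out-of-state policyholder quietly pays $140 a year toward risks they do not bear.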

There are also instances in which firms choose, or are compelled by law, to discriminate against certain customers.   A responsible firm would not, for example, sell alcohol to children or firearms to a convicted felon on parole.  I don't expect that there could be a rational and acceptable counter-argument to their right, and even responsibility, to refuse service to certain customers.

I do expect that such instances are rare, and that most instances in which firms purposefully choose to exclude certain groups of people represent an unwarranted prejudice - but that does not mean that all instances of refusing service are unwarranted or based on arbitrary or unjust criteria.  In some instances, it may be a good idea, for the efficiency of the firm, the esteem of the brand, and even the interests of the customers themselves for firms to identify segments of the market that the company should refuse to serve.

Tuesday, May 21, 2013

Don't Even Test That


I've lately been engrossed in optimization testing, and have noticed that some design choices that seemed awkward have had highly positive results.  For example, a phrase that seemed a bit stilted and technical was suggested for a button to begin a purchasing flow.  When first it was suggested, I thought "that probably won't work" but in spite of my reservations I argued to include it in an A:B test if for no other reason than to get actual evidence that it wasn't a particularly good idea.   Problem is, that didn't happen.

This clumsy phrase that I would normally have rejected effected a double-digit increase in conversion rates.  And while I am delighted with the results (and happy to gain a halo as the person who fought to keep it in the test), it has given me some angst.  It hasn't undermined my confidence or left me paralyzed with self-doubt, but it has given me the sense that I should be a little more reluctant to trust my intuition and a little more open to testing things that seem a bit awkward or unusual.
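
As an aside, before crediting a lift like that, it's worth checking that the difference isn't mere noise.  Below is a minimal sketch of a two-proportion z-test; the visitor and conversion counts are invented for illustration, not the actual figures from the test.

    # Hypothetical counts standing in for the real A:B test data.
    from math import sqrt
    from statistics import NormalDist

    control_conversions, control_visitors = 300, 10000   # 3.0% baseline conversion
    variant_conversions, variant_visitors = 380, 10000   # 3.8% with the "awkward" phrase

    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)

    # Standard error of the difference under the null hypothesis of no real difference
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

    print(f"lift: {(p2 - p1) / p1:.1%}  z = {z:.2f}  p = {p_value:.4f}")

With those invented counts the lift clears conventional significance thresholds; with a smaller sample, the same percentage lift might not.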

But at the same time, I am cautious of going to the opposite extreme and signing off on spaghetti tests - where you exercise no discretion and test everything that comes to mind to see what happens.  Most of the time, theory bears out in practice and professional instincts refined by decades of experience are reliable - so it's good to trust your gut.  But where do you draw the line between standing in the way of boldfaced stupidity and being too obstinate to try something that seems awkward but just might work?

It's likely I will be meditating on this for quite some time, and while I am likely to reconsider some of my hesitation there are other instances in which I am likely to remain resolute that some ideas are so entirely awful that they should not even be experimented with.

Consider the illustration above, in which a red octagonal sign is used to communicate a speed limit.  That is a truly awful idea and should not even be tested.  People will come to a dead stop on a busy street because they think it is a stop sign, and over time they will start speeding past actual stop signs if the misuse of that color and shape erodes its association with the "stop" imperative.

That example may seem utterly inane, but I get something similar at least half a dozen times every year: "Let's put this information in bold red text so that people will pay attention to it."  Bold red messages are used to call attention to critical errors on most sites everywhere on the Internet - they are in effect stop signs.  And using bold, red text to highlight anything that is not a critical error may get attention for a short time, but over the long run will break the connection, such that people stop paying attention to truly critical errors.  So don't even test that.

Another example, from personal experience, was a clothing store that placed a button beneath the image of a dress shirt that read "select your size" - clicking on that button added the shirt to a shopping cart, where I had to click "edit item" to be able to indicate the size (and it would have shipped a "medium" by default).

In this instance, it was just rotten design likely caused by a bad ecommerce infrastructure that was not built to enable the shopper to select a size before adding an item to the shopping cart.   But it is entirely conceivable that someone might think it a good idea to force customers to add an item to the cart before selecting their size to increase the likelihood of their actually purchasing it.  And it might even work.

Even so, it would be a truly rotten idea that doesn't even merit testing.  There will be many customers who are jarred by the experience, feel a sense of doubt and anxiety, and leave the site immediately.  Worse still, there will be other customers who are not upset, but who don't fiddle about enough to discover the awkward way to select a size, and end up getting shipped the wrong size and having to return the item to the vendor, which is a significant expense and a horrible customer experience.

There are likely other examples of ideas so awful that they should not even be tested - but I don't think it merits the time to come up with a lengthier list, as it would likely not be comprehensive or entirely accurate.  Instead, I think it's more productive simply to adopt the general principle of "trust your instincts but do not be ruled by them."   That is, be willing to try new ideas that seem a bit awkward, but be wary of those that seem thoroughly bad - to an experienced person, the distinction should not be that difficult to make.



Friday, May 17, 2013

The Shopper Economy

Liz Crawford’s recent book considers the way in which the time customers invest in non-purchasing activities that benefit a firm can and should be monetized, with payment made to shoppers for their time, attention, and involvement.   It’s an interesting topic on the surface, and while the book considers the various methods and their effectiveness, I have some qualms that border on, or directly involve, the ethics of such an arrangement.

Attention

Paying customers for their attention is not unheard of, though traditionally it seems to always come with a catch: time-share companies give away “free vacations” if you’re willing to be hounded by salesmen the entire time (unless you’re willing to buy a product you don’t want for the simple privilege of being left alone) and in many instances the incentive isn’t worth the time spent.   But I think that’s a matter of how the program is designed rather than the concept in itself.

In truth, firms have always paid to get the attention of the customers, but they have generally paid others to get it for them, and the subcontractors can often be subversive and dishonest, effectively stealing the time of a person who would rather be doing something else.

The idea of offering someone a specific payment for a specific amount of time (earn 100 points on your credit card’s rewards program for watching a 30-second commercial about a specified product) seems to me much cleaner, in an ethical sense, than the present practices in advertising and marketing.

Participation

Participation is a broader category, the author’s catch-all for doing anything that isn’t covered by the other categories or making an actual purchase.   This, too, is not unheard of – car dealerships are constantly offering some incentive to come in for a test drive; any product give-away is a participation promotion in which the free sample itself is the reward for trying the product; and firms routinely pay people to participate in marketing focus groups (though getting them to buy is usually a secondary concern).

My sense is that none of this rankles, though any participation program could involve rough-handling the customer into making an immediate purchase.   The breadth of the category is a bit hazy in general, and includes many things that could be clean or unclean in an ethical sense.

Advocacy

Advocacy is likely the area in which I have the greatest unease.   People talk about the products they like and the brands with which they identify as an expression of themselves, and they recommend products that could help others as a kind of altruism (though like most altruistic acts, the motive is to gain social esteem for oneself by “fixing” other peoples’ problems for them).  They do all of this without payment.

When a person is motivated to advocate for a product not by genuine interest but because they want to get some sort of reward or incentive, this is disingenuous and invalidates the credibility of the source.   Salesmen do it all the time, but when you are interacting with a salesman, you are aware of their mercenary interest.   What makes personal recommendations more valuable and credible is the reputation of the advocate – that they are not being paid to shill things they don't really value.

There are various workarounds that are suggested, but it all seems rather greasy.  The only exception is a firm that sends a person a thank-you gift after the fact for advocating for them – but even that is a bit grey, as the firm is not giving an incentive to that particular person (the gift was granted after the review was written), but others who hear that someone got a reward for advocating might be motivated to advocate in hope of getting a gift as well.  In those instances, it is the person rather than the firm that is acting in bad faith, but let's not pretend it did not cross the firm's mind that their generosity toward one person would motivate the behavior of others.

Loyalty

Rewarding regular customers as a means to keep them loyal in future doesn’t ruffle me at all.   It’s often been pointed out that firms spend a great deal of budget attracting new customers, and offering them exclusive deals that their current customers are not eligible to receive seems like a clear indication that loyalty is not valued – and it ought to be.

But what strikes me as most odd, and I’ll get into this deeper momentarily, is that participants in loyalty programs are being hoodwinked in a way, because they are paying for their own rewards.   Sometimes it is fairly subtle – the cost to the company of providing a “free” gift is covered by the amount the firm has overcharged you for purchases.   Other times, it is fairly obvious, such as the debit card program that rounded purchases up to the next dollar and “gave” their customers their own change as a reward – such unabashed contempt for the intelligence of consumers is so outrageous that it’s amusing, and depressing that they were able to pull it off.
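
To make the point concrete, here is a minimal sketch of the round-up arrangement; the purchase amount is invented for illustration.

    # Invented purchase amount, for illustration only.
    import math

    purchase = 4.35
    charged = math.ceil(purchase)        # rounded up to the next dollar: $5.00
    reward = charged - purchase          # the $0.65 "reward" is your own change
    print(f"Charged ${charged:.2f}, rewarded with ${reward:.2f} of your own money")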

Who Really Pays?

The question, across all of these instances, becomes one of stewardship.   In essence, a business as an institution provides goods and services to customers, who pay a price that covers the cost of creating the things they want: the variable and fixed expenses of production, plus the cost of capital to finance that production, plus a reasonable profit to the owner of the business.  The marketing expenses of a firm are not related to anything necessary to produce the good, but to attract other customers to buy from the firm and increase its profits.

With that in mind, it’s long been my perspective that businesses don’t pay for anything.   When dim politicos call for taxes on business to be increased, or for business to pay higher wages to their workers, or for business to contribute to charitable causes, the firms must generate the capital to pay those expenses by raising the price of goods.  So ultimately, a demand for “business” to pay for anything is a demand for the current customers to pay a higher price - for anything that a business purchases is paid for out of revenue taken from customers.

And in that sense, a business does not pay shoppers for the activities that are related to purchasing.   The shoppers are paying themselves, because any premium, gift, or payment they receive is ultimately funded by the price they will pay or have paid for the product.   Or more accurately, the customers who actually buy the products are paying potential customers (a person who receives a free sample or whatnot may not ever actually purchase the good) for activities that render the buyers no benefit – that is, unless the firm borrowed the money for the promotion and will pay it back with interest from future revenues from the exact same customers who bought because of such promotions.
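
A minimal sketch of that pass-through, with purely hypothetical figures:

    # Hypothetical figures, for illustration only.
    promotion_payouts = 50000       # rewards paid to shoppers for attention, participation, etc.
    units_sold = 200000             # actual purchases that must recover that cost

    markup_per_unit = promotion_payouts / units_sold
    print(f"Every paying customer contributes ${markup_per_unit:.2f} per unit toward rewards,")
    print("many of which went to people who never bought anything.")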

This troubles me, and the more I think about it, it is likely beyond the scope of the original topic: whether it is paying shoppers directly or paying an advertising firm, the marketing costs represent a burden on customers to pursue prospects – the conversion of which is in the interests of the owners (increased profits) rather than the consumers of the firm (who really don’t care how many units their company sells, so long as it’s enough for them to stay in business).

So all of this thinking has led me to the notion I have still more thinking to do.

Monday, May 13, 2013

Testing Experience Design

I've been involved in a number of projects this year that revolve around refining the design of task flows based on testing in the wild - including champion-challenger tests, A:B tests, and multivariate tests, each of which changes one or more elements of the design of pages and flows to witness the resulting changes in user behavior.   In general, I'm enthusiastic about the results, but I do have some reservations.

Primarily, I am enthusiastic because testing is the antidote to narcissism.  It seems to me that much of what designers propose to do is based on their own assumptions about the way in which users will react.  I will grant that designers have the best credentials to do so - they have studied the theory and principles of design and have witnessed from their own practical experience how the application of this theory succeeds or fails when it is implemented.

It is fairly obvious to designers that business executives and software developers who like to contribute their notions to design are very often lacking in knowledge and experience and that the ideas they contribute tend to be arbitrary  - but at the same time designers seem loath to admit the same arbitrariness on their own part.  No matter the knowledge and experience a designer has, his judgment is based on speculation - and however well-grounded this speculation may be, it may not bear out in practice.

Furthermore, it's distressingly common that designers do not receive reliable information regarding the consequences of their actions.   Particularly for freelancers and consultants, they consider their job to be "done" when the site is launched and never touch base with the client to find out if the results were positive (and in fairness, they are dissuaded from doing so by clients who take it as an offer to do additional work to improve the outcome without additional payment).  But even for in-house designers, the news about the outcome seldom makes it back.  

As such, uninformed designers learn nothing from the experience and assume everything they did worked out swimmingly.  This is the primary value testing brings to user experience: it grounds designers in reality, enabling them to see the results of their decisions in the field and, ultimately, to make better decisions as a consequence.  Without this insight, designers easily become lost in the clouds, chasing their artistic visions and stoking their own egos without a much-needed reality check.

At the same time, I have some reservations about the way in which testing is implemented - specifically, in that it is often proposed as a substitute for sound judgment rather than a validation of it.   In effect, testing has led to the "spaghetti" approach of throwing a bunch of ideas against the wall to see what sticks.   The primary drawback of this approach is that it is unfocused and wasteful.  Bad ideas, those that were not worth testing in the first place, are put through the paces in case they have some merit.   A designer's experienced judgment can provide an excellent filter to eliminate obviously flawed ideas without having to test them.

A much more significant problem with this approach is that it damages the esteem of the brand.   It's perfectly acceptable to test a prototype in a lab because it involves a small number of participants, each of whom well understands that what they are seeing is being tested.   But when an idea is tested in production, the audience often does not know it is a test (and telling them so would change their behavior) - the result is that the experience they have on a test model is their real experience of the brand, and any negative impression becomes an indelible memory.

This is a particular problem in the online channel, because the distance from the participant does not enable us to gauge the severity of the reaction.   We know that they clicked through to the next page, or did not do so, but cannot accurately assess the impression they took away from the experience.

Said another way, no-one in his right mind would sign off on (or even propose) a test for the voice channel in which half of the service representatives greeted the customer with "Good morning, Mister Jones, how may I help you?" another quarter with "What's up, Tommy?" and the last quarter with, "What do you want, you bastard?"   It is obvious that the third option is unacceptable, and that every customer who called that day would be less than amused.

I'm certain that some secondary research might be drummed up to suggest that customers enjoy playful jibes, or that a case study can be found of a business that had success being rude to its customers, but it doesn't mean that it's a good idea for every business - and common sense should tell you that it's not even worth testing.

And while I have not seen an example quite that egregious, I have heard it suggested that firms test every conceivable combination of elements to see which works best, and some very bad ideas have been presented from non-designers who think that something unusual might be a good idea.  (Hint: it's unusual for a very good reason.)

The problem as I see it is that firms tend to swing from one extreme to the other - either "don't test anything" (and design by arrogance) or "just test everything" (and exercise no discretion) - neither of which is at all desirable.

Ultimately, it's a matter of finding a sensible approach - trusting in the experience and judgment of qualified design professionals to provide an expert opinion, and then testing to validate their judgment.   Until that happy medium is achieved, a great deal of damage can be done to user experience of a brand.

Thursday, May 9, 2013

The Experience of Employment Applications


I had the opportunity to consider an employment application from a perspective of user experience - and what I saw there was very disappointing.   Further conversation with the same individual (an HR recruiter at a firm I'll not name) uncovered a much deeper problem: the application-to-onboarding process for this firm was appalling, and I tend to doubt the situation is unique.  What's more, I suspect there is little motivation to address it, as firms do not seem to recognize the value of improving the candidate's experience.

Why is it important?

The perspective of the person who asked me to look over their form was that it is onerous for employment candidates to fill it out.   It's great that she was concerned about it, but it's going to be a tough sell within her company to obtain funding for process improvements that benefit someone else, particularly someone who isn't giving them revenue in the very same process. But in a broader sense, employees give a company all of its revenue - the equipment and systems do not operate themselves - and therefore getting the best employees is critical.   An onerous employment process effectively screens out good candidates.

It's tough to make this point in a down economy, in which many people are looking for work and are willing to jump through hoops to get a job, and I expect the perspective of employers is that they don't need to bother as a consequence. However, consider that highly qualified people are in great demand under any economic conditions, and do not perceive themselves as beggars asking for a handout, willing to supplicate themselves before a reluctant and disdainful benefactor.   Especially if they are well connected through social networks, chances are they get two or three inquiries from recruiters every week - and if they click through to a lengthy application form that they must fill out before the employer will even deign to consider speaking to them, they will lose interest, figuring another opportunity will come along shortly.

I have heard the counterargument that a difficult application process weeds out people who "aren't really motivated to work for us," and I dismiss that as hogwash from bureaucrats who wish to maintain the status quo and avoid putting in any additional effort to be more effective.   At the lowest level of menial employment, perhaps it's important for a candidate to prove they are motivated to do tedious and repetitive data entry - but most firms seek people who are keen on efficiency and eliminating useless ritual, and such people generally despise being put through needless tedium.  In effect, an onerous application process weeds out good candidates rather than bad ones.

The Nature of the Problem

I won't go into the granular details - since it was "free advice" I didn't really pore over it and analyze each field in the form (and she had no budget to hire me as a consultant to do so) - but what I saw there were problems well-known to customer experience professionals, so I provided some general remarks.

  • Redundant Information - The form requires applicants to enter a great deal of information that is already on their resume.  There's software that will parse resumes (even aggregate data from social media profiles) and prefill forms.  Use it.  (A rough sketch of the prefill idea follows this list.)
  • Inappropriate Information - There was a giant red flag on this application form: it asked for the applicant's religious beliefs (denomination).   That's probably a leftover from the 1950s and needs to be gotten rid of.   Immediately.
  • Needless Information - In many instances, there was no clear reason that specific information was being requested, and I had the strong sense that the company probably didn't even consider it when evaluating applicants.   The form could likely be reduced by 25-30% just by removing any questions that weren't necessary.
  • No Explanation - The form was completely devoid of any explanatory information that indicated why the firm was asking questions or what use they would make of the data.   This is a bit imperious ("You must tell us whatever we ask of you") but also likely resulted in people providing useless information because they had to guess what was needed.
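
On the first point, the prefill idea is easy to sketch.  The parse_resume function below is hypothetical - a stand-in for whatever resume-parsing product a firm might license - and the field mapping is invented for illustration.

    # parse_resume is a hypothetical stand-in for a licensed resume-parsing service.
    def prefill_application(resume_text, parse_resume):
        parsed = parse_resume(resume_text)   # e.g. {"name": ..., "email": ..., "employers": [...]}
        return {
            "full_name": parsed.get("name", ""),
            "email": parsed.get("email", ""),
            "work_history": parsed.get("employers", []),
            # The applicant confirms or corrects these values instead of retyping them.
        }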

There were other items in the list I provided her, but these were the most significant - and my sense is that these are very basic things that are woefully neglected.   Addressing them should be fairly simple, and will make a world of difference in making the task less onerous for applicants and the data more useful for the recruiting firm.

A much bigger problem

It occurred to me while reviewing this one form out of context that there is a much broader problem in employee relations - not only during the recruitment and onboarding processes, but also persisting for their entire term of employment at a firm.

The application form is not the only form that a candidate-cum-employee will encounter that asks for the same information - a new recruit is faced with an array of forms (taxes, benefits, and the like) that all request the same information.  And once they are hired, there is a lot of internal paperwork that likewise requests the same information.   Not to mention the array of internal forms to get set up with resources (a company-issued credit card, cell phone, remote network access, etc.) that ask for the same information.    And add to this the various forms that have to be filled out periodically as needed (a travel request, skills profile, e-learning courses, and the like) that once again ask for the very same information.

All in all, the problem evident on the application form is repeated for each form an employee has to fill out - and this is largely an IT problem, because HR systems are often a succotash of various vendor products that don't share data with one another.   A great deal of human effort is required to make up for badly designed systems, and I strongly suspect that if a firm were to add up all the minutes each employee spends providing redundant and/or unnecessary information, the total would well fund the cost of integrating their data systems.
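
A back-of-envelope version of that calculation, with hypothetical figures:

    # Hypothetical figures, for illustration only.
    employees = 5000
    redundant_minutes_per_employee_per_year = 120   # time spent re-entering the same data
    loaded_cost_per_hour = 60                       # fully loaded hourly cost, in dollars

    annual_waste = employees * (redundant_minutes_per_employee_per_year / 60) * loaded_cost_per_hour
    print(f"Redundant data entry costs roughly ${annual_waste:,.0f} per year")   # $600,000 with these numbers

Even if those guesses are off by half, the figure is the right order of magnitude to weigh against an integration project.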

Winding Up

This has been rather a long meditation, and I have the sense that I have barely scratched the surface of "what's wrong with human resources" - I expect someone, indeed quite a few people, might make a career of cleaning up these problems for a firm that cared to have them cleaned up ... so let me stop here, as I think the point is well made that there is a great deal of opportunity to improve employees' morale and their perception of the quality of their employers by applying even the most basic principles of customer experience to the human resources department.

Sunday, May 5, 2013

Efficiency Isn't Innovation



In general, the approach to improving things, products or processes, begins with analyzing the as-is situation and identifying areas in which problems could be fixed or improvements could be made.   That is to say that it begins rooted in present reality and ends with only minor changes.   This is different to, and likely preventative of, true innovation, which requires starting with a blank slate and imagining the possibilities that might exist, independent of what currently does exist.

It is a common but fundamental error to regard anything new through the lens of existing processes.  This results not in innovation, but in efficiency improvements, as firms seek to streamline what they are presently doing rather than considering whether there might be an entirely new way ("new" being the "nova" in "innovation") to achieve the desired goals - or even to change the way in which the goals are defined if doing so is necessary to achieve a better outcome.

In many instances, efficiency improvements are merely automation.   In the early industrial era, automation merely replicated human motion with machines; and in the present era of information technology, automation merely replaces human thought processes with digital ones - but "merely replaces" means that the task remains the same; it is just performed by a different actor.

For example, a computerized accounting system automates the way in which invoices are processed, in that the very same thing is done with databases and spreadsheets that nineteenth-century clerks did with ledgers and quill pens.   The process is made faster, and less labor is required, but the process itself has not changed.

In that sense, replacing a worker with a machine or a clerk with a computer system is not innovative at all: it's doing the same thing more quickly and efficiently, but still doing the same thing.   To innovate requires asking: "What goal are we attempting to achieve by doing things this way ... and is there a different way in which we might achieve it?"

Knowledge of existing business practices is not only unnecessary, but can be harmful.   That's not to say that they can be completely ignored - the inputs and outputs are likely still the same (though one might reconsider whether the inputs or outputs could be improved) - but all the "stuff" in the middle is entirely irrelevant.   So long as the goals of the process are achieved, the rituals by which they are pursued are irrelevant.

As a final note: innovation is not always necessary, and sometimes efficiency improvements are the best that can be done - let's not throw that concept away entirely.  But at the same time, let's not assume that the two are similar or can be accomplished in the same way.  To be innovative in the outcome requires being innovative in the process - and that holds true even when the process is one of defining processes.






Wednesday, May 1, 2013

Reclaiming Our Humanity


I've often heard it said that service is losing its humanity because people are increasingly turning to digital channels to purchase goods and services instead of being served by people.   In effect, when customers research products on the Internet, purchase from a vending machine, make a withdrawal from an ATM, or get support from a mobile device, they are interacting with a sterile mechanism rather than a real human being.

All things considered, I cannot argue that customers interacting with a device or machine have a different sense of interaction than they do when interacting with human service providers – though there are human beings who rival mechanical devices for their coldness and impersonality, as well as instances in which the warmth and friendliness of a person interferes with the simplicity of a task and it is entirely more desirable to interact with a device that does not want to make small talk, ask intrusive questions, or attempt to redefine your goals for you.

While it may be true that interacting with a device is generally less satisfying than interacting with a flesh-and-blood person, I do not think it is necessarily so, nor do I think that the notion of being served "by a machine" is entirely accurate.

Consider this: when you call a company for service, you do not believe that the telephone is having a conversation with you.   You are aware that the device is just a channel through which you are communicating with a human being on the other end of the line, using a device to overcome distance between your physical locations and your physical selves.     Thus, you are receiving service from a person even though there is a device (a network of multiple devices, in fact) between you and the person to whom you are speaking.   I don't expect anyone could argue otherwise.

To go a step further, even when the voice on the telephone is a recorded message and there is no human being who is actually speaking at the same time you are listening, there was a human being speaking at some point, and the recording spans a distance in time in the same way a telephone spans a geographic distance.   Thus, you are being served by a person who anticipated your needs and prepared the service experience in advance.   There is some argument to be made that because the voice is recorded, the experience cannot be adjusted in a truly interactive manner, changing the presentation in response to your reaction in real time.   This is a point of frustration, but little different to having a face-to-face encounter with a pompous and dismissive person who seems to be ignoring your part of the conversation and speaks as from a script.   So I would argue it is not the machine that is the source of frustration, but the lack of foresight of the person who prepared the experience.

And to go further still, what if the voice itself is computer-generated?   I would suggest that while the voice is in no part human, a person wrote the script and programmed the machine to speak those words, and it's little different to conversing with a mute person who uses a device to "speak" for them - it is not the device that emulates the sound of a human voice with which we are speaking, but the person operating the device, providing the words it speaks.  Thus, you are not being served by the device.

Perhaps I've belabored the point  ... but it seems to need belaboring given the frequency with which it is forgotten or ignored: that a machine is a proxy for a human being, and that the customer is not "served by a machine" but served by the people who designed the interactions to be delivered by means of this proxy.

Thus understood, if we find interacting with a machine to be unpleasant, it is not because of the machine, but because of the interaction designer.  They failed to sufficiently consider the needs and desires of the user in designing the way in which their proxy would support interaction.

Or to go off on a tangent, perhaps they were not capable of doing so in the first place.   I have to admit that I am to some degree idealizing the concept of interacting with people - and some people are so unpleasant that interacting with a machine is preferable.   I often pause to wonder at some of the personalities I encounter in the design community - staggeringly many designers are socially inept in face-to-face interactions yet feel themselves to be adept in designing interactions through a proxy.  It would seem that, at best, the interaction they design would be as thoroughly unpleasant as dealing with them in person.   But that's a separate topic.

To return to the point, the machine is a proxy for a human being, a proxy leveraged to overcome the limitations of time and distance.  It is merely a proxy and not a substitute because, until artificial intelligence is perfected and machines are making decisions independently rather than executing commands written by a person, there is and will always be another human being who is responsible for planning the interaction.

This considered, I would make the argument that the machine-human interaction is not necessarily dehumanized or dehumanizing - it is only dehumanized to the degree that the interaction designer has forgotten to apply, or has never been quite adept at applying, his own humanity and has failed to regard the users with the dignity and respect that they (rightly) feel they are owed.

There are some improvements that can be made to the technology to facilitate this - to do a better job of communicating the humanity of the designer - but technology has already progressed to the point that further gains from it can be only marginal.  Instead, it is the designer's adeptness at applying his humanity to his work that remains the serious deficiency, as it ever has been.

And to smooth over a few ruffled feathers, "the designer" is not intended here to mean the individual who attends to the role of designing the interface - they are often the human hands directed by others.   Many can speak to the frustration of not being able to provide a good interaction because they are beholden to clients and sponsors who want things to be cheap and efficient, and who require the person in the role of designer to set aside their humanity and do the job they're being paid to do, to the satisfaction not of the customer but of the authority who signs their paycheck.

That, too, is a transition to another diversion on the nature of corporations, which have become dehumanized and soulless, and who dehumanize and smother the souls of their employees.  While that is an entirely separate argument, it is likely a prerequisite to the present one: even where service is provided by a living, breathing human being, they are disempowered by process and procedure, such that the service experience is rigid, inflexible, and unpleasant and they can do little to deviate from policy to do what clearly needs to be done.   If a company can transform a human employee into an automaton who follows procedure and is prevented from interacting naturally, what hope is there of finding much humanity when the employee is replaced by a machine or an electronic device?

Dragging myself back to topic (again), I would suggest that the main problem with customer experience is indeed its lack of humanity, and that the machine is not to blame, any more than a human who is compelled to read from a script and follow rigidly documented service procedures designed to maximize efficiency.  And the blame is then to be passed along to the administrators who created and insisted on those procedures, and likely to the management who demanded efficiency to the detriment of all other priorities.

If we are to improve the customer experience, we do not need to polish our functional skills or seek to employ better technology - these things help, but only to a small degree.   Our greater challenge is in reclaiming our humanity and respecting the humanity of those whom we serve.  Until we have done so, we will fail.  And until we stop blaming the machine or the employee who carries out our bidding, we will remain focused on the wrong elements of the customer experience.