Friday, February 28, 2014

Five Criteria for Ethical and Effective Leadership

In a seminar on leadership, the moderator asked a curious question of the audience:  “Why do you want to be a leader?”   It was a rhetorical question, and he didn’t spend much time answering it.  However, I sense that this is something anyone who aspires to become a leader (or to practice leadership in any given situation) really ought to ask himself: many people end up being bad leaders precisely because they fail to give this question due consideration.

Even the phrase “bad leaders” is a bit gentle: people who seek to be leaders for the wrong reasons, or those who have authority and use it for the wrong reasons, are not merely “bad” in the sense of being ineffective: they are “bad” in the sense of being unethical: manipulative, deceitful, and damaging.   The question of “why” is therefore of critical importance.

Having meditated on this notion for a while, I’ve come up with five critical criteria of legitimate, effective, and ethical leadership that should be well-defined before attempting to exert influence or authority over others.   There are a handful of others of which I am less confident, but these five seem of particular importance.

First: A Leader Must Have a Legitimate Purpose

Leadership is a technique or a tool that is used for a legitimate end.   One does not employ a hammer merely to engage in the act of hammering – but because driving a nail is necessary to build something, which accomplishes a purpose.   In order to lead effectively, a leader must have a vision of what he is attempting to accomplish and a clear sense that the actions he takes in guiding other people to follow his lead will achieve that outcome.

This would seem to be self-evident, but there are many people who seek positions of leadership simply because they want to be in command – it has to do with a craving for self-esteem and controlling other people, and sets aside the notion that anything is to be accomplished by exercising that control.   People who are already in “leadership positions” are particularly prone to forget this, and attempt to bully or manipulate people simply because they have authority at their disposal and nothing in particular to do with it, and it gratifies them merely to exercise power and see others obey.

Such people do not gain the reputation of being good leaders, or even the reputation of being particularly good people – but are instead perceived as nuisances and impediments to progress who must be avoided or accommodated in deference to their formal authority, but not out of respect for their character.

Second: A Leader Must Need Followers

Leadership is not a solitary activity – by its very nature, leadership requires one person to lead, and one or more others to follow.    Pursuant to that, the question arises: does the leader really need others to follow his direction in order to accomplish his objectives?    Or, said another way, is he merely seeking to manipulate others into doing things he can do, and ought to do, for himself?

I expect there is a strong counter-argument to this: that a leader will often delegate tasks to others that he could do himself because he has better things to do with his time – which is a valid enough reason, when it is in fact a reason rather than an excuse.  A good way to tell the difference is to consider what the leader is doing with his time after handing off tasks to a follower, and whether those tasks he is doing himself do, in fact, require skills that he has that his subordinate does not.

There is also the outdated argument of the esteem of a leader: that it diminishes the status of a leader to be seen doing mundane and unglamorous work.   I would argue that is not a valid argument: there are many instances in which high-ranking officers (even the CEO) have been seen doing mundane work, and it has not hurt their esteem but instead won a great deal of respect from their employees and the public.   With that in mind, it is not so much esteem as ego that causes narcissistic types to delegate unpleasant tasks to others while they themselves have nothing better to do.

Third: A Leader Must Consider the Welfare of His Followers

There is in the present day the notion of “servant leadership” and the principle that people are most motivated to act in their own interest, and least motivated to act when someone else gets all the benefits of their efforts.   I don’t think it’s realistic to go to the extreme of insisting that a leader must always serve the welfare of his followers, but it is likely reasonable to suggest that a leader should at least consider the welfare of his followers.

Controlling and manipulative individuals fail to consider the welfare of their followers, and are in it only to gain benefits for themselves.  This is very obvious in commercial settings, in which managers demand employees work overtime without any additional compensation so that the team can achieve performance goals that earn the manager (and no one else) esteem and financial rewards.   But it is also evident in non-commercial relationships, in which a person maneuvers other people into doing things that benefit no one but himself.

Consider leadership in the nonprofit sector, in which a leader must coordinate the work of those who give their support voluntarily, without compensation, because they feel a personal stake in the outcome.  Whether the work benefits the volunteers personally or serves others whom they wish to help, the volunteer believes that the fruit of his efforts will go to someone who truly deserves it.  And if a volunteer is misused or abused, he will walk.

Where there is formal or commercial leadership, individuals are often motivated by some other reward: namely a paycheck, or avoiding the harm that will befall them if they fail to comply with the demands of leaders.   Often both.  And it is in those situations that leaders most quickly forget the principles of motivation, and lead in ways that demotivate their followers – forgetting, often until it is too late, that even a paid employee will eventually walk off the job when he’s had enough of being mistreated.

Fourth: A Leader Must Include His Followers in Decisions

The old-school approach to leadership was in the nature of “one mind, many hands,” in which leaders made all decisions and delegated tasks to followers, often without explaining to them what the tasks were meant to accomplish or why they were being done.   It was a very self-centered and exclusive approach to leadership that gave the person in command an inflated sense of self-importance.

In the present day, most leaders don’t do all the thinking, but instead engage workers who contribute not only their labor but also their expertise to the task.  From a functional perspective, it’s critical to success because a single person (even a designated leader) is not an expert in everything, and cannot see everything even if he does have expertise.   Where a task requires expertise that only the leader has, then it’s arguably an instance in which he should not be leading, but doing the task.

But more to the point, people resent being relegated to the role of drones who carry out orders without thought or question.   Some people actually prefer to work in that manner, and there are instances in which the old school of leadership is still germane (mass manufacturing, for example, requires many people to perform routine tasks without thinking).

So perhaps this is only germane to those situations in which workers contribute their knowledge – but I would argue that even when workers merely lend a hand, they are more efficient and effective if they understand how the work they do contributes to the accomplishment of a goal – and if given an opportunity, some could contribute ideas that would lead to a more efficient process and/or a more effective outcome.

Fifth: A Leader Leads Only When Leadership is Necessary

The final criterion that came to mind was that a leader must lead only when it is necessary.   This is likely implicit in some of the other criteria (1, 2, and 3 at least), but it pertains not only to whether leadership is necessary at all, but to how much a leader should be involved in the activities of his followers.

The micromanager, an all too common instance of the obsessive-compulsive control freak, seeks to involve himself in many instances where leadership and control are not necessary – hovering over a person who knows what to do and how to do it and constantly henpecking them, often giving them bad advice and preventing them from being efficient and effective.

Good leaders know when to provide guidance, and when to keep their teeth together and watch from a respectful distance, trusting in their followers to know what to do or speak up if they do not.   Bad leaders, or controllers, seek to feed their ego and need for esteem by constantly inflicting themselves on people who do not need their guidance.


The key point of this meditation is the need for leaders to ask themselves why they want to lead – and whether they need to lead in a given situation.   I have the distinct sense that the vast majority of problems with ineffective or bad leaders could be resolved if they paused for a moment to consider those things – whether they are leading for the right reasons, or merely bossing others around to gratify their need for status and control.

Tuesday, February 25, 2014

Gamification: Games, Puzzles, Toys, and Amusements

I’ve lately been deeply concerned by some of the things I’ve seen being done (or heard being proposed) under the aegis of “gamification.”  It’s more than a passing feeling of disappointment, more of a long-term concern because if gamification is done poorly, it will likely be rejected as worthless by the entire customer experience profession – and in doing so we will be throwing away a great deal of potential because of the misuse of the term and thoroughly egregious misuse of the concept itself.

Gamification is the inclusion of game-like components in non-game situations: namely, the buying, using, and servicing behaviors of consumers of non-entertainment products.   Done properly, gamification can make some of the less enjoyable processes in these experiences more entertaining, or at least less onerous, and that’s a great thing.   Done poorly, gamification is a nuisance and a distraction that can make the customer experience far less enjoyable, even irritating, and that’s a very bad thing.

The primary problem is that gamification is being used to do silly and pointless things: any wacky idea that someone wants to toss into the mix, including those that contribute nothing and even detract from the experience, is labeled “gamification” – and when these wacky ideas fail, which is in some instances inevitable, gamification will get a bad reputation.

To that end, I’ve pulled together some of my research on game design, in an attempt to explore the nature of “game” and three game-like phenomena (puzzles, toys, and amusements): what each of these items is, how it might work, and how it might be relevant to customer experience.

I can tell already that this is going to be a long meditation, but hopefully worthwhile to anyone who is interested in understanding the concept better, using it appropriately, and identifying instances where gamification is simply a mask to cover utter stupidity that will do more harm than good.


Games

Games are generally regarded as non-serious play activities, but a game is a heavily structured kind of activity that has a number of essential features:
  • A game has a specific objective the player is attempting to accomplish
  • The player is given certain equipment or other resources 
  • The player also brings his own mental, physical, and (sometimes) material resources to the game
  • There are rules and procedures that require or prohibit certain actions
  • The player has some latitude to choose his own actions within the boundaries of rules and procedures
  • When the objective is achieved, the game is over

These are the features common to all games – if any is lacking, then the activity is not a game, but something else.   There are other features such as a defined playing space (board or field), elements of random chance, competition between players, etc. but these are present in some games and absent from others.

The similarity between a game and a serious activity can be seen in those very same features.   A person who is saving for retirement has a specific objective (amass a certain amount of wealth by a certain time), is provided various resources (kinds of investments), has some level of skill (financial savvy), must play by rules and procedures (the SEC provides quite a lot), may choose his own actions (investment strategy), and plays until the objective is achieved.

Any serious activity that has those qualities may be treated as analogous to a game, and gamification leverages the various elements of game experience that correspond – which are largely those that provide feedback as to the player’s progress toward success.   For example, a game provides a score to give the player a sense of progress, and so do non-game scenarios (e.g., the growing balance of a retirement account).     Doing gamification well requires an experience designer to recognize these similarities and determine when game-like elements are supportive of a non-game behavior.
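The retirement-savings analogy above can be sketched in code.  This is a minimal illustration of my own (the class and method names are hypothetical, not from any gamification library): the “score” is simply feedback on the player’s progress toward a defined objective.

```python
# A minimal sketch of treating a savings goal as a game-like activity:
# an objective, player actions within rules, and feedback on progress.
# All names here are illustrative, not an established API.

class SavingsGoal:
    """Models the game-like structure of a serious activity."""

    def __init__(self, target: float):
        self.target = target      # the objective the "player" must reach
        self.balance = 0.0        # the current state of play

    def contribute(self, amount: float) -> None:
        # A "move": the player chooses actions within the rules.
        if amount <= 0:
            raise ValueError("contributions must be positive")
        self.balance += amount

    def score(self) -> int:
        # The feedback element: percent progress toward the objective.
        return min(100, round(100 * self.balance / self.target))

    def is_won(self) -> bool:
        # The "game" is over when the objective is achieved.
        return self.balance >= self.target


goal = SavingsGoal(target=10_000)
goal.contribute(2_500)
print(goal.score())    # prints 25
print(goal.is_won())   # prints False
```

The point of the sketch is only that the feedback element (the score) maps cleanly onto a non-game quantity (the account balance), which is what makes the analogy usable at all.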

There’s also the notion that games can be repeated, and that the game is a little different each time it is played.   This, too, is analogous to real-life experiences, though players tolerate far less variation in real-world experiences (we expect that our grocery shopping will be the same from one week to the next) – yet even there, too much consistency makes the task onerous.


Puzzles

A puzzle has some of the elements of a game, but with a few key differences:
  • The rules and procedures are far more constraining
  • The player is more willing to accept the possibility of failure
  • Once a puzzle is solved once, it loses its entertainment value
  • The outcome of a puzzle is binary: it is solved or it is not solved

These differences should help to identify real-life situations in which a person takes more of a “puzzle” mentality than a “game” mentality – and in fact, many commercial experiences are more like puzzles than games: buying your first car is filled with excitement and uncertainty, but when you’re doing it for the fifth time, it’s rather a boring necessity.

A given puzzle is generally not repeated – it takes a simple and tedious mind to be amused by the fact that doing the same things multiple times achieves the same outcome.   However, puzzles often provide more long-term engagement when they are provided in a sequence, such that each puzzle teaches the player skills that he can leverage in attempting to solve the next puzzle in a series, which is a little more difficult or presents new parameters for added challenges.

Because solving a puzzle is an accomplishment, there are intrinsic psychological rewards for having been successful, and a certain esteem that is generated by the ability to succeed where others struggle or fail.   Web sites that provide a series of puzzles often give players ranks or badges that recognize this fact.

Admittedly, I am not quite as confident in my analysis of puzzles.   It seems to me there’s something lacking, though I can’t quite figure out what it is.   It is certainly a different species of leisure activity, and I have a sense that the differences I’ve noted are valid, though perhaps not comprehensive.


Toys

Toys represent an entirely different kind of leisure activity, with distinctly different qualities than puzzles or games:
  • A toy allows the player to explore the capabilities of an object
  • There is no objective for playing with a toy
  • There are no rules for playing with a toy

A good example of a toy is the yoyo, a disk on a string that the player experiments with to see what he is capable of doing with it.   If the player defines a desired outcome (a specific trick he would like to be able to do with his yoyo), the toy becomes a puzzle.   If he provides a context such that all the various requirements of “game” are addressed, then the toy becomes a piece of equipment used in a game.

Toying behavior can also be highly relevant to serious experiences, largely as a method of exploring options before an action is taken.   An investor might use a simulation to project the future returns of an investment he is considering or to compare several.  A customer shopping for a car might use a web site to see what his vehicle would look like with various colors, styles, and features.

The difference between gamification and toyification (and please let someone find a less clumsy term for the latter) is the seriousness of the endeavor.  It seems to me that gamification is applicable when an individual is performing an actual task, whereas toyification is exploring options without committing to an action (at least, not at the moment).

Likely the most important factors to keep in mind when considering using a toy in a non-play activity are the amount of latitude the user must be given in order for him to find the toy interesting (a constrained toy becomes a boring task), that a toy depends on the interest and creativity of the player (you cannot make a toy that will entertain a dull and unimaginative person), and the fact that, no matter how entertaining the toy, players will eventually lose all interest in toying with it.


Amusements

I felt it necessary to mention amusements here because it seems to me that many people (particularly designers) don’t appreciate that amusements are not games, puzzles, or toys, but have one distinct quality that causes them to be excluded:
  • An amusement is experienced passively
A cartoon is a good example of an amusement: the “player” merely experiences the program as designed.  He cannot take any action to influence the way that the cartoon is presented (play, pause, forward, and reverse alter the rate at which it is experienced, but do not change the experience).   And once an amusement has been experienced, there’s very little value in repeating the experience.

My sense is that it’s worthwhile to consider amusements because many of the instances in which I have seen gamification go horribly wrong are the result of someone attempting to insert a random amusement into a task flow.  That’s not to say that amusements are not of some value in the context of a serious task, merely that it is not the same thing as gamification.

The greatest problem in using amusements effectively is that a person who is performing a serious task is not very interested in being distracted by an amusement.   They may benefit from instruction or information that will enable them to make better decisions, and this information can be delivered in an amusing way – but when the amusement value surpasses the informational value, then it has become counterproductive and annoying.

Gamification Done Right

My intent in exploring the four separate categories of game, puzzle, toy, and amusement is to suggest to designers that it is important to consider which is the best “fit” for any given task situation in which the user’s interest or motivation may be bolstered by a game-like component.   Simply considering which of the four is most analogous to a non-game task can be helpful in avoiding some dreadful mistakes (such as providing a game feature to an interaction that is more analogous to a toy).
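As a rough sketch of that “fit” question, the four categories can be reduced to a few distinguishing attributes.  The attributes and function below are my own illustrative simplification of the distinctions drawn above, not an established taxonomy:

```python
def classify_activity(has_objective: bool, player_acts: bool,
                      repeatable: bool) -> str:
    """Rough sketch of the game/puzzle/toy/amusement distinction.

    Illustrative simplification: a passive experience is an amusement;
    an active one with no objective is a toy; an objective that loses its
    appeal once achieved marks a puzzle; an objective that rewards replay
    marks a game.
    """
    if not player_acts:
        return "amusement"   # experienced passively, as designed
    if not has_objective:
        return "toy"         # open-ended exploration of capabilities
    if not repeatable:
        return "puzzle"      # solved once, then exhausted
    return "game"            # an objective plus replay value


# e.g., the fifth car purchase resembles a puzzle more than a game
print(classify_activity(has_objective=True, player_acts=True,
                        repeatable=False))   # prints puzzle
```

A designer need not use code for this, of course; the value is in forcing the question of which attributes the real-world task actually has before bolting a game element onto it.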

Another critical consideration, which is much more difficult to make general observations about, is whether a given idea is supportive of or distracting from the task the user is attempting to perform.   This requires consideration of the idiosyncratic nature of the task and the proposed game element, with an eye toward whether interacting with the element better enables the user to perform the task or provides them with encouragement and motivation to remain engaged.

That will likely require a great deal more consideration, and I hope to get back to it in a later post – but for now, my sense is that simply understanding the nature and qualities of these four basic elements will serve to support better design decisions regarding gamification.

Friday, February 21, 2014

User Experience Design as Applied Psychology

User experience design is often relegated to the cosmetic, and while developing an aesthetically pleasing interface with which users will wish to interact is certainly within the demesne of user experience design, it is not by any means the full extent of what a user experience designer does, which falls into the realm of psychology (specifically the cognitive and behavioral branches).

It doesn't surprise me that non-practitioners fail to consider this, but it does surprise me that many practitioners do not seem to consider it, or to give it adequate attention.  It's entirely possible to develop a successful interaction without intimate knowledge of the basic principles, merely by doing things that seem to work without really understanding the reasons.  But pausing to consider the basic principles - and better yet study them intently with an eye toward their application - enables a designer to make better choices, and to understand the reasons that one tactic is more likely to succeed than another.

With that in mind, consider the following outline of some of the areas in which understanding the principles of psychology can improve a designer's skills.  It is by no means comprehensive nor exhaustive, but a good starting point for anyone who's not considered the way in which design derives from psychology.


Perception

Studies of perception seek to determine how people become aware of specific things.   In a basic sense, it means filtering through all of the sensory information that bombards a person in every waking moment to notice and give attention to what is most important.   Much of this happens unconsciously, and the mechanisms by which an individual becomes aware and gives attention to only a few specific things are learned or conditioned abilities that are only partially selective.

Perception is critical to design because you must understand how people notice things in order to design an interface that enables a user to accomplish a task easily.    If the user doesn't perceive some element of an interface or doesn't give it adequate attention, he will be unable to interact with it in order to accomplish a task.   Knowledge of the principles of perception can help a designer ensure that users "see" what is important.


Identification

Identification follows perception, in that merely noticing something is not sufficient to provoke an individual into taking an action.   They must identify it - realize what it is and what it does - in order to know what to do with it.  This may happen by virtue of recognition (matching something new to a memory of something similar) or intuition (figuring out what something is or does by a process of reason).

The importance of identification to design is clear: if a user is unable to identify the function of an element, they will be unable to interact with it, so the designer must leverage visual cues that enable the user to realize at a glance how they may interact.   In essence, you cannot claim that your design is "intuitive" if you do not fully understand what "intuition" means or how it operates.


Assessment

Once a person can interact with something, they will then assess whether they should want to interact with it.  The first step in doing so is a process of assessment, in which an individual not only recognizes that something can be acted upon, but that there is some consequence to acting upon it - that is, they should expect that something is going to happen as a result of taking action.

When a user remarks that he noticed something in a layout but didn't think that it would "do anything" if he interacted with it, then the designer has failed to provide sufficient visual cues to support the user's process of assessment.   The net result is failure, no better than if the user hadn't seen the element at all.   Understanding the principles of assessment can help to avoid such problems.


Motivation

Motivation considers a person's desire to take a given action.   That is, once he has noticed something and recognized that something can be accomplished by interacting with it, he must then affirm that the consequences of interacting deliver a benefit that is worth the effort.   In essence, a person must be motivated to do something, or they will opt not to do it.

The core of design is in motivating users to behave in certain ways, and to interact with our designs (which is, in effect, to interact with our companies) in a way that will bring about a mutual benefit.   This is also the area in which many designs fail terribly: it is clear to the user what they must do when they encounter an interface, but they simply do not want to do it, and so they choose not to.  A better understanding of motivation is essential to designing an interface that accomplishes its goals.


Encouragement

Encouragement is slightly different from motivation: whereas motivation involves giving a person incentive to begin a task, encouragement helps them to maintain the momentum to see it through to the end.   Meanwhile discouragement, the evil twin of encouragement, saps a user's motivation and leads them to abandon a task without completing it, as the difficulty they encounter after the initial decision to act leads them to question whether the factors that motivated them still hold true.

Failure to provide encouragement is another major area in which many designers fail: the task-flows they design provide a path to success that becomes increasingly difficult with each step, such that users quit the journey, abandon the shopping cart, and leave the site.   Understanding the factors that maintain encouragement and avoid discouragement is necessary to keeping the user on task until the task is done.


Evaluation

Evaluation occurs after a task is completed, at which time the individual reconsiders whether the effort they undertook is worth the benefit they ultimately received.   If the evaluation is positive, they will be more inclined to be motivated and sustain motivation when they encounter a need or opportunity to perform the same task in the future.

This is a sorely neglected area of user experience design, and customer experience in general: once the company has made the first sale, they count on the customer to return to purchase again, neglecting to consider that if the evaluation is negative, the customer will be less likely rather than more to continue doing business with them.   Much of evaluation depends on the functional consequences, but design can enhance or detract from the positive sentiment of a customer, making the critical difference between whether they are a one-time customer or become a loyal and regular patron.

Much More to Be Said

While this "quick" overview has merely skimmed the surface of the way in which experience design can be more effective if it is based on the principles of psychology, I hope it is at least sufficient to drive home the point that the two are not only related, but one and the same.   There's a great deal more to be learned, and learning it has a great deal of benefit to the user experience designer.

Tuesday, February 18, 2014

Why do smart people do stupid things?

If you're reading this, you're probably a smart person and you've probably done some mortifyingly stupid things.  I'm not merely pandering to your narcissism by suggesting you're smart: stupid people don't study to gain information because they think they already know everything they need to know to make sound decisions - that's what keeps them stupid.

Neither should it be taken as an insult that I assume you have done some mortifyingly stupid things, because you have, and so have I, and so has everyone.  It's inevitable.   If you think you've never done anything stupid, then you're wrong, and you're probably very stupid, and I take back the bit about your being a smart person.

Dumb or Stupid

Before investigating the reasons smart people do stupid things, it's necessary to draw a distinction between dumbness and stupidity.   While the words are used interchangeably, the difference is quite substantial:
  • Dumb - Having a lack of information or insight.   You do something dumb because you don't know better.
  • Stupid - Having a lack of wisdom or common sense.  You do something stupid when you knew better, but did it anyway.
Dumbness is forgivable, because making a dumb decision means we acted on the best information we had at the time, and did not recognize until afterward that had we known more, we would have made a different decision.   Dumb is easily remedied by information, and per my earlier point a smart person studies to gain the information he needs to not be dumb, to the best of his ability - but it is inevitable that we will make dumb mistakes because however much we know, there is always a great deal we don't.

Stupidity, on the other hand, is inexcusable, because making a stupid decision means that we made a conscious choice to ignore information that was available to us.  We clung stubbornly to a bad idea in the face of clear evidence, and proceeded expecting that things would work out in spite of the evidence.   Stupid isn't easily cured - and some people have to make a great many stupid mistakes before they develop the patience and humility to accept that their first idea isn't always flawless and that they must invest adequate time in exploring alternatives.

The reason we do dumb things is self-evident: we don't know better, and that should suffice.  The reasons we do stupid things are harder to pin down, because it defies all reason that a person should undertake a course of action he knows to be likely to fail.   I don't think I have all the answers to that, but in the remainder of this article I propose three of the most likely causes.

We Haven't the Resources

One common reason for making a stupid mistake, and likely the easiest to accept, is that we did not have the resources to do something smart, so we did the best that we could with what we had available.  And that seems honorable.

"Resources" could pertain to the materials, equipment, and capital necessary to implement a better solution, but it could also pertain to the amount of time we were given to come to a decision and then to implement it.

Lacking the resources is very common, and it also seems easy to accept because resource constraints are external - we had to count on someone else to provide what was really needed, and the fact that we didn't get it is their fault rather than our own.

But I'll leave you with this: how hard did you argue with that person to get the resources that were necessary?   Did they insist on resource constraints after you conveyed to them that the outcome would be compromised, or did you simply assume they would be inflexible?

We Haven't the Patience

Impatience is another common cause of stupid mistakes - it is similar to lacking the resource of time, but the key difference is that impatience is not something imposed externally; it arises internally.   We wish to "do something" - and do it fast - rather than insisting on investing the time to do something smart and effective.

Part of this is related to neuroscience: the human brain is wired for efficiency and resists expending more effort than is necessary on a mental task.   For that reason we form a hypothesis quickly, based on knowledge and experience, and are inclined to act on it immediately so that we can focus our minds on something else.

Smart people are particularly prone to this form of stupidity precisely because they are smart, and very often their knee-jerk reaction is sufficient to accomplish their goals.  Stupidity arises when they act quickly and do not pause to consider that "very often" is not "always," nor explore the idiosyncrasies of the present issue that make it significantly different from challenges they have successfully overcome in the past.

We Haven't the Humility

Arrogance is another cause of stupid mistakes, and another to which smart people are highly susceptible.   Smart people tend to make good decisions, which work out well, and their experience leads them to generalize: to think that any idea that occurs to them is a smart idea simply because they are smart people.

A smart person is quick-witted and can devise a solution in very little time.  The problem arises when we cling to that initial decision in the face of all evidence to the contrary: if facts arise that disagree with our conclusion, then there must be something wrong with those facts, because we take it for granted that our initial conclusion was right.

But this is not intelligence - it is arrogance, and precisely the kind of arrogance that gets us into serious trouble because we fail to consider, and even pointedly ignore, evidence that not only contradicts our assumptions, but that would lead us in a better direction if only we were willing to alter, or perhaps abandon, our plans.

It is much easier to see this in other people than it is to see in ourselves, because we can readily recognize when someone else is ignoring important details or seeking to avoid the embarrassment of admitting their knee-jerk reaction was dead wrong.  There is no easy way to turn that perception inward, and realize when we are the ones who are being arrogant and stubborn.   The best I can offer is to be aware of your own level of aggressiveness - the harder you are fighting for a conclusion, the greater the chances are that it's an acutely stupid one.

We Think Other People are Stupider

Next of kin to arrogance is a lack of confidence in and respect for others.  The subtle but significant difference is that arrogance comes from within, spawned from an ego that has become all-consuming, whereas dismissiveness comes from without, from the premature or poor assessment we have made of other people.

Rationally, we recognize that significant decisions are made by groups rather than individuals because they bring a broader array of information and experience to bear, overcoming individual weakness with collective strength.  Unfortunately, we are quicker to recognize when another person is weaker than ourselves than we are to recognize when they are stronger.

This, too, arises from the brain's attempt to be efficient: to make an assessment once and apply it consistently thereafter rather than having to reassess something every time.   If Jake has said something stupid in the past, then he is a stupid person, and then anything he has to say in future will also be stupid - and we shouldn't pay his ideas any serious attention.

It's likely fair to say that we make the same mistake even when we think of someone as being smart.  If Jake has made smart decisions in the past, he is a smart person, and we should consider anything he has to say in future as being smart - without pausing to consider that it might not be so.

Becoming Less Stupid

If the four problems I have identified are the cause of much stupidity, then the ways to become less stupid seem clear:
  1. To recognize constraints, openly discuss their consequences, and argue for their removal.
  2. To invest sufficient time in exploring alternatives, recognizing that a good decision is better than a fast one in most instances.
  3. To resist the urge to fall in love with our first idea, in spite of our track record of success, and explore options objectively.
  4. To focus on the merit of an idea regardless of who suggested it, realizing that stupid people can have smart ideas and vice-versa.
Each of these things seems straightforward enough, but is difficult to do in the heat of the moment, when we are focused on making a prompt decision.

More importantly, these four points are not a comprehensive list of all the reasons that smart people make stupid mistakes.  I have the sense that they cover a large majority of the problem - but to take my own advice, I must recognize that this is merely what has occurred to me at the moment: it's likely necessary not to fall in love with my first idea, to invest more time exploring alternatives, and to listen to other perspectives on the problem, all in order to arrive at a better conclusion.

Friday, February 14, 2014

Industrial Management Meets the Knowledge Economy

An interesting theory: the reason many long-standing firms struggle to remain competitive in the present-day economy is that their paradigm of organizational management remains stuck in the industrial era.   That seems a bit glib to me, but not entirely inaccurate.

In the industrial era, and likely throughout history up to that point, management was largely a matter of spreading the work of one mind to many hands.   This is illustrated in the "pin factory" example of the division of labor - in which each worker was shown a simple and specific task (draw the wire, cut the wire, sharpen the tip, etc.) that he would perform exactly as demonstrated, over and over, to create a standardized product.   He did not need to solve problems or to think, merely to follow procedures and perform to standards - and if any problem arose, to call over the foreman, who was the only worker on the floor authorized to make decisions.

Such jobs still exist in the present day, but are becoming far fewer in the domestic (US) economy because the manufacturing tasks of mass-production factories have mostly been shipped overseas.  However, the mental and problem-solving work - deciding what products to make, how to make them, how to market them, etc. - is what remains in most manufacturing organizations.    In that economy, the worker does not follow procedures, but must perform knowledge-based tasks that involve problem-solving, which is a completely different kind of work.

In such an environment, there is no standard procedure for accomplishing tasks - there is no standard procedure for discovering the solution to a problem, and if one is defined it is only a rough approximation of what is actually done.   Once a solution is discovered and a blueprint is crafted for the work that must be done to effect it, it can be (and often is) handed off to the drudges who perform tasks.   But even "the drudges" do not follow the same pattern in the way that the laborer who sharpens a pin does the same thing every time; they must apply knowledge and skills to effect an outcome that is described, by way of a process that often is not.

Hence, the problem with older management techniques being used in the modern economy is that they simply were not designed for the kind of work that is done anymore ... yet few seem to realize this or, if they do, to act upon it consistently.   Unfortunately, it seems that many who recognize this have only snake-oil to offer by way of a solution - misty and nebulous suggestions of techniques that seem to work, without a systematic examination of the reasons they do work in some instances (and fail miserably in others).

I'm not really sure where this was going ... or if it's gotten anywhere ... but there it is.

Tuesday, February 11, 2014

The Conquest of Happiness

I recently re-read Bertrand Russell's Conquest of Happiness, a book that is largely about developing a functional personal philosophy, but one which I hoped would help me in my current attempt to understand the source of customer satisfaction (as being something other than the minimization of nuisance).   I was largely wrong about that - it's a good read for personal development, but doesn't seem to have much applicability to customer experience.   There are only a few ideas that are worth considering in that regard.

The Cycle of Happiness

One problem with discussing happiness is that it's a vague word, much like "love," in that it means different things to different people at different times.   It's imprecise and is bandied about without much consideration - any time we are not experiencing pain or distress, we are "happy."   And in our role as consumers, we say that we are "happy" with any good or service that is not causing dissatisfaction at the moment - or even one with which we are wholly dissatisfied, if we suspect that the person who's asking has some intent to sell us something even worse.

Instead, consider that there is a cycle of happiness that pertains to any change in a person's situation, but which seems especially germane to the purchase of a good or service:

  • Elation - Follows a change, and is a very intense feeling of satisfaction with having recently achieved something positive, or a feeling of relief at having escaped something negative.
  • Contentment - Follows rather quickly after elation.  The new sensation is no longer new, and we grow accustomed to the situation after having made a change (by way of a purchase).  The intensity of emotion has decreased, but it is still positive.
  • Indifference - May proceed from contentment.  The emotions we feel about a given situation do not necessarily switch from positive to negative quickly, but fade from positive to neutral.
  • Dissatisfaction - May proceed from contentment or indifference, or may set in immediately after elation has faded.   In this instance, a neutral emotional state has degraded, or a negative state has set in suddenly.  The dissatisfied person may eventually seek to make another change to restore himself to happiness or contentment.
I do not expect this is a cycle so much as a flowchart.  A change may elicit immediate dissatisfaction without elation or contentment.  And while I can accept that elation is short-lived, it seems entirely possible for a state of contentment to be indefinite, never degrading to indifference or dissatisfaction.
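Since the model reads more like a flowchart than a cycle, it can be sketched as a small directed graph. This is only my own reading of the states above (the names and transitions are assumptions drawn from the list, not a definitive model):

```python
# A sketch of the happiness "flowchart" as a directed graph.
# State names and transitions are my own reading of the text, not a formal model.
TRANSITIONS = {
    "change": ["elation", "dissatisfaction"],       # a change may disappoint immediately
    "elation": ["contentment", "dissatisfaction"],  # elation is short-lived
    "contentment": ["contentment", "indifference", "dissatisfaction"],  # may persist indefinitely
    "indifference": ["indifference", "dissatisfaction"],
    "dissatisfaction": ["change"],                  # the dissatisfied seek another change
}

def can_follow(current: str, candidate: str) -> bool:
    """True if `candidate` is a plausible next state after `current`."""
    return candidate in TRANSITIONS.get(current, [])
```

Note that contentment lists itself as a successor, capturing the point that it may never degrade, and that dissatisfaction can follow a change directly, capturing the case where elation is skipped entirely.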

So all in all, I don't have a sense that I can rest on this perspective on happiness - it seems a step in the right direction, and helps grant some greater and much-needed specificity to "happiness" - but is not so well baked-out that I can stop meditating on the matter.

Happiness and Expectations

One perspective that isn't really that new is that happiness arises as a result of expectations - specifically, that the experience is as good as or better than expected.

"Meeting customer expectations" is a common phrase in the industry, and my concern is that this is all most firms seek to do - to meet the minimum requirements for a customer to be satisfied enough not to return the product or initiate a lawsuit.  Or to use the same terms as above, meeting expectations creates contentment, or sometimes mere indifference, rather than elation.

I could likely gnaw at that a bit more, or more aptly allow it to gnaw on me - but my sense is that I've likely given it enough consideration, given the degree to which it is already acknowledged.


Russell defines two paths to avoid unhappiness: effort and resignation.   Those who choose the path of effort undertake action to change their external environment to achieve satisfaction or happiness.  Those who take the path of resignation undertake a more internal effort, to adjust their attitude and become resigned to accept that something that makes them unhappy cannot be changed, and it's best not to dwell upon it.

In terms of customer experience, many firms mistake resignation for happiness.   A customer who is resigned to accept a product they find less than delightful, or even satisfactory, is mistaken for a loyal customer who is enthusiastic about a brand - when they are really a person who is resigned to accepting mediocrity until they find a more attractive alternative.   The question of "why do loyal customers defect?" is likely answered "because they were never happy with you in the first place."

This also speaks to the previous notion of customer expectations - and the way in which some firms seek to lower expectations rather than elevate the quality of their product.   That seems rather weak to me, and while I can't deny it is a valid way of avoiding dissatisfaction, it leaves a provider in a precarious position, likewise vulnerable to the discovery of alternatives.

Unhappiness as Opportunity

And finally, the incidence of unhappiness creates an opportunity for those who can grant happiness.   This notion also extends to a more general consideration but seems particularly germane to commercial interests - in that a product that offers a promise of happiness is attractive to consumers.

But that alone is only half the struggle to get a person to accept change: the prospect of obtaining a thing or attaining a state must not only promise happiness, but must be achievable with a level of effort and inconvenience that does not counterbalance the happiness that will be earned by its achievement.

My sense is that this is where customer experience is presently focused, and it seems to me rather short-sighted.   It is assumed that the outcome is desirable, and effort is devoted to making the path to it less onerous - but people do not undertake to do things because they are easy to do; they do them because they anticipate a positive outcome once they are done.

With this in mind, customer experience practitioners (and producers in general) are missing a great many opportunities to appeal to potential customers, and often treat ease of acquisition as a substitute for the benefit of ownership.


Admittedly, this has been a bit random, and I'm piecing together a few stray bits from Russell's philosophy of happiness and stretching them a bit to fit the interests of a customer experience practitioner.   To return to my earlier point, I don't have a sense the exploration was sufficient, but a step in the right direction - or said another way, I am neither happy nor particularly discontented, but perhaps a wan shade of contentment or indifference.

Friday, February 7, 2014

The Myth of Mobile Migration

Mobile, like the Internet, replicates some features from previous media: people still read, listen to audio, watch video, send emails and text messages, and perform other tasks that were previously done in older channels.   It's implied that mobile will evolve away from this in time, and those who are enthusiastic about the mobile channel routinely proclaim that it will become the "one and only" channel for which the user will forsake all others.

I’ve heard this very same argument before – when the Internet first came along, people assumed that every newspaper and magazine in the world would go out of business overnight.   That didn’t happen then, and it’s not going to happen soon.  While many print publications have folded or migrated entirely to the new medium, many have not - and fanboys don't seem to want to pay attention to this, or even acknowledge it.

I expect that some amount of traffic will shift to the new channel, but what proportion of that traffic will shift is entirely debatable.    People will still read books on paper, and people will still watch television on a television set, and I don’t believe that either of those behaviors is going away just because a new channel has emerged with me-too capabilities for doing those same tasks.   It is not enough simply to have the ability to do something with a gadget – the gadget must be well-suited to the task and must perform it in a manner that is as good as or better than previous methods of accomplishing the same task.

Simply stated, mobile devices are not very good at delivering the content of older channels.   Mobile has unique advantages – chiefly its portability, storage capacity, and digital capabilities such as search and bookmarking – but in terms of content delivery, the moment in which the user consumes the content, it is still severely lacking.  The claims of enthusiasts that mobile will get better in the future are merely hubris and puffery – the alleged future capabilities of the channel are less of a factor than the present behavior of actual consumers, who even now do not choose to use mobile for all the things it is presently capable of doing.

Consumers do not adopt a new technology simply because it is capable of doing things – they only do so when technology does things better than their present way of doing them.   In the rare instances that technology enables a completely new task, it is usually better than “nothing” – though the choice not to do something at all remains entirely valid – but in most instances a new technology merely supplants an existing way of accomplishing a goal, and to become the method of choice means that it must be better in the aggregate, not just in certain ways, than other methods.

The specific things that mobile does better and worse than alternative methods depend entirely on the activity in question – both the task and the context in which a user performs it.   As such, it’s difficult to draw generalizations, but I can easily provide a few examples of the tasks at which mobile fails, and is unlikely to overcome:

  • Mobile fails to deliver content to multiple users.   Television and video consumption are shared experiences in that people get together to watch and interact with one another.   And while it is theoretically possible to synchronize a video program among several devices so that people can see the same thing at the same time, the social experience is still not the same.
  • Mobile fails to deliver content in sufficient quantity.   The size of a printed magazine or newspaper enables the user to see a few thousand words of content at a time, allowing them to browse and read at a brisk pace.   Because of small screen size and low resolution, mobile devices cannot do this, but force the user to read a few sentences at a time before scrolling or refreshing the page.
  • Mobile fails to deliver content in sufficient complexity.  Mobile is narrowly focused, allowing the user to consume a single thread of content, and does not permit complex consumption.  A person who is studying history may have a textbook, a dictionary, an atlas, a timeline, and multiple reference works open at the same time, moving quickly from one to the other.  Mobile forces the user to pay attention to one thing at a time and does not facilitate switching between sources, at least not with a single device.

Granted, mobile enthusiasts can suggest ways in which these limitations might be overcome – but it requires speculating about features and capabilities that do not presently exist, or proposing labor-intensive techniques that provide a partial work-around that is highly inconvenient and unsatisfactory.   Most people are not enthusiastic about mobile, and rather than sacrificing their experience to accommodate the channel, they will simply not adopt it.

It’s likely worthwhile to explicitly state, for the binary-minded who seek to pronounce mobile as entirely good or bad, that there are also a number of ways in which the mobile channel is almost universally better than traditional methods of content delivery, so to give it equal time, here are a few examples of the capabilities mobile brings to the party:

  • Mobile is portable.   The most obvious advantage of mobile is that it can be taken along for the ride everywhere a user goes, and studies show this is actually happening.  For many users, the mobile device is on a bedside table when they sleep, picked up as soon as they wake, and carried with them until they return to bed.   While other technologies have been made portable, they are not actually ported by users, except on occasion.
  • Mobile has vast capacity.  Granted, the device storage capacity of a smartphone tends to be rather paltry, but its connection to a network makes a universe of content available to the user.  This is simply not possible using traditional media: a person can carry a book or a magazine with them, but not an entire library, nor a suitcase full of CDs and DVDs to have a selection of music and video.
  • Mobile is bite-sized.   While not all mobile content is well designed for the way people use the channel, there is a considerable amount that is very well designed in that it makes it very easy to find a “bite” of content without having to search or scan the entire device.   Even with the Internet, there are often several steps to get to the very thing you want – mobile is designed to serve it quickly: a single paragraph of text, a ten-second video clip, etc. are much easier to locate on mobile than in traditional media.

At the risk of generalizing, I suggest that the applications that are extremely popular on the mobile platform are those geared toward leveraging the unique or superior aspects of the channel to deliver an experience that cannot be had using other media.   And it is in these areas that mobile excels.

That said, an isolated assessment of capabilities is not sufficient.   When it comes to predicting adoption, consumer behavior is critical – that is, mobile may be capable of doing something, but it is not until users actually begin to use it for that purpose that the feature becomes meaningful.  Granted, there are statistics that suggest that mobile has high adoption rates, but I strongly suspect those are rigged to aggrandize the channel.  For example, enthusiasts spin the statistics to suggest users view ten times as many videos on mobile as they do on television, but this conveniently omits the fact that it was ten thirty-second videos (five minutes total viewing time) compared to “just one” program or movie on a television screen (thirty to 180 minutes of viewing time).  So the claim may be technically true, but it is entirely misleading.
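A quick back-of-the-envelope calculation makes the distortion plain (the figures below are the illustrative numbers from the paragraph above, not real survey data):

```python
# Comparing video *counts* versus actual viewing *time*, using the
# article's illustrative figures (not real measurements).
mobile_videos, mobile_minutes_each = 10, 0.5  # ten thirty-second clips
tv_programs, tv_minutes_each = 1, 30          # one program, at the low end of 30-180 min

mobile_total = mobile_videos * mobile_minutes_each  # 5.0 minutes
tv_total = tv_programs * tv_minutes_each            # 30 minutes

print(f"Videos watched:  mobile {mobile_videos} vs TV {tv_programs}")
print(f"Minutes watched: mobile {mobile_total} vs TV {tv_total}")
# Mobile "wins" 10:1 by count, yet TV delivers six times the viewing
# time even at the low end of the 30-180 minute range.
```

Counting sessions flatters mobile; counting minutes reverses the conclusion, which is exactly why the headline statistic misleads.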

Ultimately, my sense is that mobile will become part of the user’s diet, but never a staple food.   It will become one way that people read or watch video, but it seems highly unlikely to become the only way or even the preferred way, nor even widely adopted for a specific task (e.g., a person may watch a two-hour movie on mobile when it is inconvenient to use a television, but when both are available they will choose television as the better medium).    A complete switch is unlikely to occur until mobile overcomes its limitations … which it presently does not, and in some ways it seems inconceivable that it ever will.

Tuesday, February 4, 2014

Misapplying Generational Marketing

I've become a bit concerned lately about the way in which generational marketing (Cam Marsten et al.) is being misapplied.   It seems to me that, in so doing, the value that the theory proposes to offer is diminished or altogether discarded, and that many are still engaging in demographic stereotyping.

A quick refresher: generational marketing departed from age-based demographics by segmenting the population into specific generations (silent, boomers, generation X, and millennials - though that likely needs updating, as the silent generation is dying off and the post-millennial generation will soon come of age) based roughly on the years in which people were born.   But more importantly, generational marketing considered the traits of these age groups according to the events that occurred at certain points in their lifespans.

In particular, generational marketing suggests that people do not happen to behave a certain way because they are of a given age group, but that they behave a certain way because of their life experience during a given stage of societal evolution - and more importantly, that these influences do not change as individuals age.   That is, a person's generation does not change simply because he gets older: members of Generation X do not abandon their culture and adopt the traits and characteristics of Boomers simply because they are over forty.

That is a significant difference, and the entire value of generational marketing: the realization that the person who is in his twenties today will retain many of his behaviors throughout the rest of his life.   He will not, upon turning forty years old, become just like the people who are that age today.   To assume so is to discard the lessons of generational marketing.

In times in which culture and society is stagnant, it is relatively safe to assume that a young man will become just like his father as he ages, because his life experience will be largely the same.   A forty-year-old cotton farmer in 1510 lived very much the same life as a forty-year-old cotton farmer in 1490.   But given the rate and degree at which our present society has changed, peoples' lives do not follow the same course as their predecessors: a forty-year-old accountant in 2010 works, acts, thinks, and behaves in a very different manner than a forty-year-old accountant did back in 1990.

Perhaps it's just a clinging to traditional ways of doing business - the wistful notion that society has more stability and continuity than it presently does - that leads people to stuff the square peg of generational marketing into the round hole of age-based demographics.   Those stereotypes proffer that people of a given physical age act in specific ways simply because of the number of years they have been alive, pointedly ignoring the life events they experienced while they aged.

There may remain some characteristics that are in fact age-related: the task of raising a child was done much the same way in 1990 as it was in 2010, so the experience of "being a father" is largely similar.  While the trappings of the environment are likely to have made the tasks somewhat different, the basics are generally the same from one generation to the next.  I'd even go so far as to say that the general concerns of a parent in rearing a child have not changed much over centuries or millennia in most cultures.

And I do believe that in a few decades the current era of rampant change will have run its course and society will fall back into a repetitive lull in which generational differences are minimized, and one generation is very much like the one before it.  But in the present era, it is simply not so - and to discard the bulk of theory around generational marketing to reduce it to age-based demographics is counterproductive.