Sunday, August 29, 2010

Filtering the Internet

I received an e-mail response to my previous post, in which I conceded that the internet has become cluttered with expressions of less-than-knowledgeable opinions and that a solution was needed to sort the wheat from the chaff. The message suggested that I check out del.icio.us and other "social bookmarking" sites, in which people help to identify "good" sites.

I have looked into these sites in the past, and I've got to say: I'm not a fan of social bookmarking. While it's good for entertainment value in that it catches the "cool" things I may not have been able to find on my own, it doesn't provide much in the way of sorting out authoritative and useful sources of information.

This goes back to some of the most basic theories of knowledge: popular opinion does not constitute expertise. If social media is a cacophony of random opinions from people with little to no expertise, the way to find a reliable source is not to allow even less knowledgeable people to vote on which of them is "right."

That's not to say I don't see the potential value of social bookmarking as a solution to the clutter - merely that, in its present incarnation, it falls far short of this potential.

Just as there are trusted and untrustworthy sources of information on the Internet, so must there be trusted and untrustworthy referrers of information. As such, the notion of social bookmarking merely adds another layer of confusion: you must now find a trustworthy referrer in order to find a trustworthy source of information.

And I don't think we're quite "there" yet.

Wednesday, August 25, 2010

A Convention of Imbeciles

I may have noted before that the social tools of Web 2.0 do not represent new capabilities, but merely provide an easier way to do things that have been possible all along (for example, it was possible, albeit more difficult, to create and maintain a personal Web site before blogging came along). But what has been the result of that?

I read a blog entry that left me with mixed feelings about the more serious issue facing the Web: that the abundance of content that has arisen from Web 2.0 is not necessarily better, just more, and the effect has been more detrimental than beneficial.

The author's thrust was that an "average" person with a blog is like an idiot with a bullhorn, who can now say idiotic things very loudly, as if the volume of his message makes it somehow less idiotic. And moreover, that the emergence of social media has given a bullhorn to every idiot, transforming the Internet into a cacophony of dunces, all attempting to out-shout one another - and in the process, the information that comes from reliable sources is drowned out.

I can't entirely disagree with that. While I don't accept the opinion that the average person is an idiot at all times, I have to concede that even intelligent people can be idiotic at times (present company included), especially when they choose to act in an asinine manner to call attention to themselves, or have an overpowering urge to express (or repeat) an opinion without having much knowledge or understanding of it.

I would also maintain that, in spite of the ocean of uninformed babble, the emergence of social media technology has also drawn out a lot of expertise from individuals who are both thoughtful and well-informed, but who thus far have been unable to leverage the Internet because they lacked technical sophistication.

The author's argument in that regard was that the necessity of technical knowledge - at the very least, to be able to compose and upload a Web page - was a kind of idiot filter. If you could not figure out how to perform those simple tasks, then you're probably not a very intelligent person anyway, and it's just as well that the world will never hear from you.

And again, I can't completely disagree with that notion, but would counter that technical expertise has very little to do with subject-matter expertise for the vast majority of topics. A physicist who does not know how to develop a Web page is still an expert in physics, even more so than the person who does know how to develop a web page but knows nothing about the subject at all.

Seen that way, the early Internet prior to social media was no less a cacophony of dunces: there were fewer of them, but those who were speaking did not have any special qualification to discuss the topics at hand, any more than the masses of people who can now communicate due to the ease of use.

The conclusion I'm aiming toward (in the usual wending way) is that knowledge of how to create a Web site is a lot like the knowledge of how to type a letter - it's a clerical task. What distinguishes idiot from expert is not knowing how to use a typewriter or how to compose and upload a Web page, but the knowledge of the subject being discussed.

And while I'll concede that social media has handed out bullhorns to idiots, it has also handed them to very knowledgeable people, who have contributed valuable information to the Internet.

What's lacking now is the ability to sort and filter through the din. You can't stop the idiots from shouting, but you can choose which sources you listen to and rely upon. This problem is also nothing new, and I hope and expect that within the next decade, a solution will be found.

Saturday, August 21, 2010

Handling 404 Errors

I had previously noted the lack of hard evidence about the ability of 404 error pages to salvage the user experience – but skipped any prescriptive information.

And so, here are some of the practices I’ve used in the past:

1. Paying Attention

Before the problem can be addressed, it must be understood. My approach has been to use an .htaccess file to redirect users to a CGI script that logs five bits of data (page requested, referring page, user-agent, IP address, and date stamp), then to aggregate that data and consider it alongside other reports. Without this step, finding 404 errors is very difficult.
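
By way of illustration (the script described here isn’t posted anywhere), a bare-bones version of such a logging script might look something like the Python sketch below. It assumes the site runs Apache with an ErrorDocument directive pointing 404s at the script; the log path and field layout are placeholders, not a description of any particular setup.

    #!/usr/bin/env python3
    # Sketch of a 404-logging CGI script. Assumes an Apache directive such as
    #   ErrorDocument 404 /cgi-bin/log404.py
    # points missing-page requests here. Paths and field layout are illustrative.
    import datetime
    import os

    LOG_PATH = "/var/log/site/404.log"  # placeholder location

    def main():
        requested = os.environ.get("REDIRECT_URL", "")    # page requested
        referrer = os.environ.get("HTTP_REFERER", "")     # referring page
        agent = os.environ.get("HTTP_USER_AGENT", "")     # user-agent
        address = os.environ.get("REMOTE_ADDR", "")       # IP address
        stamp = datetime.datetime.now().isoformat()       # date stamp

        with open(LOG_PATH, "a") as log:
            log.write("\t".join([stamp, requested, referrer, agent, address]) + "\n")

        # Serve a plain "not found" page; the later sections refine this response.
        print("Status: 404 Not Found")
        print("Content-Type: text/html")
        print()
        print("<html><body><h1>Page not found</h1></body></html>")

    if __name__ == "__main__":
        main()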

2. Sorting It Out

The data in those logs are cleaned and sorted into three heaps, each of which requires a different remedy:

The first heap is hacker traffic. There are people (or most often, spiders) that will comb a site looking for backdoors into maintenance programs that can be used to gain access to the site. For example, a handful of systems use the address http://yourhost/admin/ as an administrative login, and hackers regularly comb sites looking for that address. (An unrelated tip: if you can help it, don’t put admin logins on the public site, or at the very least put them in locations that aren’t so easy to guess.)

The second heap is internal 404 errors. While there are various causes for this (a bad reference in your own HTML code, for instance), the ones of greatest interest are where a user has visited one of your pages, clicked a link to get to another page, and run into a dead end. More on that later.

The third heap is external 404 errors. This occurs when someone is on a Web site that links to yours, and the user gets a 404 error when they click through because the link is bad (there’s a typo, the file has moved, etc.). These are the most difficult to address, but are likely the most important.
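
To make the sorting concrete, here is a rough Python sketch of how logged requests might be split into those three heaps, assuming the tab-separated log format from the sketch above. The probe patterns and domain name are only examples.

    # Sketch: sort logged 404s into the three heaps described above.
    # Patterns, domain, and log location are illustrative examples.
    PROBE_PATTERNS = ("/admin", "/administrator", "/wp-login", "/phpmyadmin")
    MY_DOMAIN = "www.example.com"

    def classify(requested, referrer):
        if any(requested.lower().startswith(p) for p in PROBE_PATTERNS):
            return "hacker"      # someone fishing for a back door
        if MY_DOMAIN in referrer:
            return "internal"    # a bad link on one of your own pages
        return "external"        # a bad link on someone else's site (or no referrer)

    heaps = {"hacker": [], "internal": [], "external": []}
    with open("/var/log/site/404.log") as log:
        for line in log:
            stamp, requested, referrer, agent, address = line.rstrip("\n").split("\t")
            heaps[classify(requested, referrer)].append((requested, referrer, address))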

3. Hacker Traffic

My approach to dealing with hacker traffic is to serve them as little content as possible. My standard 404 “redirect” script (not available online just now, as I haven’t taken the time to clean it up) serves up a blank page whenever there’s a file request that looks like someone attempting to find a back door.

I’ve been chided for this once or twice by those who suggest it’s possible to turn a hacker into a customer by serving up some promotional content, but I’m not convinced that’s a good idea. Most of this traffic likely comes from programs that don’t bother to read the content – so it’s just wasted bandwidth. And even if it’s a real person, the kind of individual who attempts to hack into your Web site will likely try to take advantage of your business in other ways, so I don’t see the need to put out a welcome mat.

When I notice that a lot of this traffic comes from a specific IP address or user agent, I modify the access permissions on my site to block their access altogether. Again, there’s the argument that a given user may be a hacker one day and a customer the next, but my previous answer holds. The one problem worth considering is that an IP address may be dynamic, such that a legitimate customer might be using it at a later time. That’s valid, and worth considering, but it’s a separate matter to decide what level of nefarious behavior merits banishing a remote address – sometimes, it’s entirely warranted.
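
A sketch of that "serve them as little as possible" behavior, in the same hypothetical Python CGI script: if the request matches a known probe pattern, or the address is on a blocklist, the response is an empty page. (Blocking at the server level, as described above, would normally be done in the server’s access configuration; the in-script blocklist here is just to illustrate the idea, and the patterns and address are placeholders.)

    # Sketch: blank responses for apparent back-door probes and blocked addresses.
    import os

    PROBE_PATTERNS = ("/admin", "/administrator", "/wp-login", "/phpmyadmin")
    BLOCKED_ADDRESSES = {"203.0.113.7"}  # example of a repeat offender

    def handle_probe():
        requested = os.environ.get("REDIRECT_URL", "").lower()
        address = os.environ.get("REMOTE_ADDR", "")
        if address in BLOCKED_ADDRESSES or any(requested.startswith(p) for p in PROBE_PATTERNS):
            # No promotional content, no hints about the site structure,
            # and as little wasted bandwidth as possible.
            print("Status: 404 Not Found")
            print("Content-Type: text/html")
            print()
            return True
        return False  # fall through to the friendlier handling described below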

4. Internal 404 Errors

Data pertaining to internal 404 errors is fairly simple to tidy up, in that it comes from a bad link within your own Web site, and you should be able to clean up your own house with minimal effort by using the data in the log file to identify the exact page and link that’s causing the problem.

It’s worth noting that there are maintenance utilities that can be used to keep your site error-free in this regard, but they tend to choke when a site exceeds a few thousand pages of content, and they’re not very good at finding errors in pages whose content is dynamic. In these instances, a log file really helps.

Ideally, you shouldn’t have any internal errors, and should be able to clean them up in short order if they do arise. My practice has been to set up a maintenance script that would e-mail me this report twice a day so that I could react promptly. Most times, the report came back empty, which is the goal.
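
A sketch of that kind of twice-daily report, again assuming the tab-separated log format used earlier; the domain, addresses, and mail host are placeholders, and the scheduling (via cron or similar) is left out.

    # Sketch: e-mail a report of internal 404s (bad links on your own pages).
    import smtplib
    from email.message import EmailMessage

    MY_DOMAIN = "www.example.com"

    def internal_errors(log_path="/var/log/site/404.log"):
        rows = []
        with open(log_path) as log:
            for line in log:
                stamp, requested, referrer, agent, address = line.rstrip("\n").split("\t")
                if MY_DOMAIN in referrer:  # the broken link is on one of your own pages
                    rows.append("%s (linked from %s)" % (requested, referrer))
        return rows

    def send_report(rows):
        msg = EmailMessage()
        msg["Subject"] = "Internal 404 report: %d broken links" % len(rows)
        msg["From"] = "webmaster@example.com"
        msg["To"] = "webmaster@example.com"
        msg.set_content("\n".join(rows) or "No internal 404 errors - which is the goal.")
        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)

    send_report(internal_errors())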

5. External 404 Errors

Data pertaining to external 404 errors is harder to deal with, because the “broken links” are on other people’s Web sites and you have no ability to address them directly.

Fortunately, it’s been my experience that the majority of site operators are attentive to the problem of broken links, and will generally tidy up promptly if you send them a polite e-mail with specific information so that they can easily find and repair the link on their site.

However, not all are as prompt or conscientious as you’d prefer them to be, so there are two ways to deal with the problem yourself:

The best (but most labor intensive) method is to visit the other site to find the bad link, determine what they meant to link to, and set up your 404 error script to redirect the user to the appropriate content (and tweak the analysis program to flag it in future, as it’s been dealt with and should no longer distract you).
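
In practice, that redirect can be as simple as a small table mapping the bad inbound paths (taken from the log) to the content the other site meant to link to. A hypothetical Python sketch, with made-up paths:

    # Sketch: redirect known-bad inbound links to the content they meant to reach.
    import os

    REDIRECT_MAP = {
        "/articles/404-handling.htm": "/articles/404-handling.html",  # typo on a referring site
        "/old/pricing.html": "/products/pricing.html",                # file has moved
    }

    def try_redirect():
        requested = os.environ.get("REDIRECT_URL", "")
        target = REDIRECT_MAP.get(requested)
        if target:
            # A permanent redirect, so well-behaved agents learn the new location.
            print("Status: 301 Moved Permanently")
            print("Location: " + target)
            print()
            return True
        return False  # fall back to the general-purpose page described next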

The easiest and least effective method is to create a single custom 404 error message that attempts to provide the user with a link to what they were seeking (not just a cute/funny error message). A general-purpose link to the site’s home page, site map, or search engine is better than nothing, but largely insufficient for the user’s needs. You can use the path/file name to get a fairly good idea of what they were searching for and provide a link (either run a search query and return matching pages, or keep a list of expected problems).
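
A rough sketch of that "best guess" approach: pull keywords out of the requested path and link to pages whose titles share them. The page list here stands in for whatever index or search facility a real site would already have.

    # Sketch: a custom 404 page that guesses at the content the user wanted.
    import os
    import re

    PAGE_TITLES = {  # stand-in for a real site index or search back end
        "/articles/404-handling.html": "Handling 404 Errors",
        "/articles/net-neutrality.html": "Net Neutrality",
        "/products/pricing.html": "Product Pricing",
    }

    def suggestions(requested_path):
        words = set(re.findall(r"[a-z0-9]+", requested_path.lower()))
        scored = []
        for path, title in PAGE_TITLES.items():
            overlap = words & set(re.findall(r"[a-z0-9]+", title.lower()))
            if overlap:
                scored.append((len(overlap), path, title))
        scored.sort(reverse=True)
        return [(path, title) for _, path, title in scored]

    def render_404_page():
        requested = os.environ.get("REDIRECT_URL", "")
        print("Status: 404 Not Found")
        print("Content-Type: text/html")
        print()
        print("<h1>That page could not be found</h1>")
        links = suggestions(requested) or [("/", "Home page"), ("/sitemap.html", "Site map")]
        print("<p>Perhaps you were looking for one of these:</p>")
        for path, title in links:
            print('<p><a href="%s">%s</a></p>' % (path, title))

    render_404_page()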

When to Bother

As a final note to all of this, it raises the question of “why bother?” The purist might argue that this level of effort should be undertaken for every site you build, but I would disagree.

I don’t invest this level of effort in most of the personal or frivolous sites I operate, because they get little traffic and I’m not making any income from them. That’s not to say I turn a blind eye to 404 errors completely, merely that there aren’t that many and there’s no return on investment, so I check the logs about once a month just to tidy up.

On the other hand, when a site gets a significant amount of traffic (I draw the line at 100,000 unique visitors per month) and generates significant revenue, then there is certainly value in attempting to salvage those visitors who have arrived at a dead-end.

Tuesday, August 17, 2010

Custom 404 Error Pages

A handful of blogs I read have caught news of a site that throws a custom 404 error page that plays on a trendy Internet meme. Their take seems to be that a cute or funny error page can take some of the sting out of running into a dead end – and while I’d agree with that notion, it’s probably not the best way to rescue a lost visitor.

One of the problems is that there is not much in the way of research into the topic of 404 errors. I’ve done some digging about, and can find no source that provides hard numbers indicating the number of visitors who leave a site entirely after encountering a 404 error, but my sense is it’s probably very high.

And since there are no figures for the number of users who bail, there are likewise no numbers to substantiate the value of providing a custom 404 error page – in terms of the decrease in percentage of users who bail after providing a custom error page, or using one tactic versus another.

In part, that may be because disclosing the information is embarrassing to the site operator – that 404 errors are thrown at all implies (sometimes rightly) that a problem exists, and the site operator hasn’t been attentive to it.

Another part of the problem is that content management and traffic analysis software tend to sweep these under the rug. I’ve seen at least two packages that purposefully ignore 404 errors when considering the “last page viewed” by a site visitor, and most of them separate error statistics from regular traffic, making it impossible to follow the click-stream of a user who has encountered such an error.
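
If the analysis package won’t cooperate, one rough workaround is to go back to the raw access log and reconstruct the click-stream by hand. A crude Python sketch, assuming Apache’s common log format and using the IP address as a stand-in for a real session identifier:

    # Crude sketch: what did each visitor do after hitting a 404?
    import re
    from collections import defaultdict

    # host, identd, user, [timestamp], "METHOD path protocol", status, bytes
    LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "\S+ (\S+) [^"]*" (\d{3})')

    sessions = defaultdict(list)
    with open("access.log") as log:
        for line in log:
            match = LINE.match(line)
            if match:
                ip, stamp, path, status = match.groups()
                sessions[ip].append((stamp, path, status))

    for ip, hits in sessions.items():
        for i, (stamp, path, status) in enumerate(hits):
            if status == "404":
                later = [p for _, p, _ in hits[i + 1:]]
                print(ip, "hit a 404 at", path, "and then requested:", later or "nothing (bailed)")
                break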

In the absence of any evidence, I expect that a high percentage of users leave a site after encountering a 404 error. While a “funny” error page might make the experience less jarring to the user, I don’t expect that it has a significant effect on that number. There may, however, be tactics that help redirect the user to the content they were seeking, and I expect those have some effect in getting the user out of the cul-de-sac while keeping them on site.

But in the end, all of this is just speculation in the absence of evidence: these seem reasonable notions, but there is no proof.

Friday, August 13, 2010

The One Best Way

In conversations with others in my profession, I've been struck by how distressingly many of them subscribe to the notion that there can be only one best way to accomplish anything, and that our goal in designing user experience is to discover it and force it upon as many people as possible. This is, I think, a very dangerous assumption that leads many designers down the wrong road, and causes a great deal of frustration for consumers.

The metaphor I've been using lately is the notion of obtaining a meal. There are various levels of service that can be provided:
  • A supermarket sells all the ingredients necessary, but it is up to the customer to decide what to buy and prepare the meal. This is too often dismissed as old-fashioned and not very user-friendly.
  • Arguably, a higher level of service can be provided by selling a "kit" that contains all the necessary ingredients, so that the customer can grab one package and go. This saves them the effort of consulting a cookbook, making a list, and finding the individual items they need.
  • Even better, a company can prepare the food and seal it in a vacuum pouch, so that the customer can buy the item, toss it in the microwave to warm it up, and have the meal they want with a minimum of fuss and bother.
  • Better still, the customer could phone in the order and pick up the item at the deli counter, hot and ready to eat.
  • To go a step further, a delivery truck could arrive at the customer's location and hand over a packaged meal, complete with disposable plates and plastic utensils to save the washing up.
  • Or better still, the food could be ground into a slurry, loaded into a caulking gun, and the delivery service could tube-feed the customers to save them the trouble of chewing the food.
It might seem that I went a little too far with that last one, but it's not as ridiculous as it might seem: it would be a great service for people who are disabled and unable to feed themselves. And I expect there are some people who, while capable of feeding themselves, would welcome the convenience and denounce anyone who failed to take advantage of such a service as old-fashioned. You never know what's going to catch on, when it comes to trends in consumer preferences.

The point I'm getting at is that there are various levels of service you can provide, in any industry, and that while you can arrive at the easiest, cheapest, and most effortless way for a customer to obtain the benefit they seek, not all customers will want that.

Some customers prefer to follow their own recipe, buy the individual ingredients, and prepare their own meal. And while it's more labor-intensive to do things any other way, not all customers care to be tube-fed. What's more, even a single customer might want different levels of service at different times. A customer who usually prefers the convenience of a microwave-ready meal might sometimes prefer to dine in a restaurant, and at other times, they may prefer to do things the "hard" way for the sake of quality.

In the end, there is no "one best way" for all customers, at all times - and I would argue (and have argued) that the approach to user experience design should accommodate the various methods by which customers care to be served. The approach of finding the "best" way, to the exclusion of all other options, will alienate the customers who prefer a different level of service - and drive them to a competitor who is more flexible in giving customers the freedom to choose for themselves.

Monday, August 9, 2010

Net Neutrality

The problem of bandwidth has cropped up again, as it periodically does. If anything, I'm surprised it's taken this long: I expected that the explosion of content that arose when "Web 2.0" enabled individuals to dump exabytes of text, photos, and video on the Internet would have jammed the network long before now.

The core conflict is between the companies that own the networks and the companies that use those wires to transmit data. The latter camp has been more successful than usual in getting the media and certain members of the public spun up by representing the problem as a civil-rights issue rather than a technical problem, under the banner of "net neutrality."

Naturally, the propaganda has overwhelmed the facts, which are a bit complicated and far less interesting than a shouting-match among flamboyant buffoons. And naturally, those of us in the industry are being asked, "what's all this noise about net neutrality?"

And so ...

While the metaphor of "information superhighway" is a bit outdated and hackneyed, it's a good way to visualize the problem of traffic on the Internet, which sends data packets across networks in much the same way as vehicles travel across the network of roadways. In that way "bandwidth" can be understood as the capacity of the roads, and "congestion" as being akin to a traffic jam.

The challenge, since the beginning, has been in building out the roads to accommodate the traffic. Traffic can change in an instant, whereas it takes a significant amount of time and money to expand a roadway - and while the network providers have generally been attentive to the problem, there are times when the change in traffic patterns has taken place more quickly than expected, and the roads have become jammed.

When traffic jams occur, traffic is diverted from the main highways: motorists discover that they can make their trip faster if they exit the main highway and cut across residential streets, which, in time, become jammed up with "highway traffic," such that the people who live in the neighborhood cannot back out of their own driveways due to a stream of traffic traveling across residential streets that were not designed to accommodate a constant flow of 18-wheelers.

The solution for the residents is to close neighborhood roads to freeway traffic - in effect, to put up a "no trucks" sign and limit the traffic through their neighborhood to the residents and certain companies that are making deliveries to the neighborhood. With such a measure in place, the residents are happy, but the trucking companies who want to cut through their neighborhood to avoid the freeway traffic are stymied.

And in the case of "net neutrality," the trucking companies are claiming that they should have a right to drive their trucks through the neighborhood, and have spun up the residents of other neighborhoods by claiming that their inability to drive across the residential streets prevents them from delivering certain goods to the market - and therefore, that the "no trucks" restriction has harmed everyone by preventing certain goods from reaching the marketplace.

Naturally, there's more to it than the metaphor will allow. When it comes to the Internet, even the superhighways are owned by private companies, the trucking companies pay for the right to use certain roads, the companies that maintain the roads are either unable to provide them with more capacity or unwilling to do so at a price customers are willing to pay, etc. You can easily stretch the metaphor to the breaking point, but I think it highlights some of the fundamental issues:

Primarily, while Internet congestion is a problem that ultimately affects users, the front lines of the battle are drawn between companies: specifically, the companies that own the networks, and the companies that want to send more traffic than the network can presently accommodate, without paying to upgrade the system.

Second, that "freedom of expression" has very little to do with the matter. To return to the analogy, the network providers don't care about the cargo inside the trucks, merely that there are too many vehicles on the road, and their restrictions are based on volume, giving preference to their own high-volume customers who pay for the use of their roads.

Third, that Internet congestion is temporary. Eventually, the lanes will be widened, traffic will flow, and everyone will be happy once again ... until even the expanded capacity is being consumed and there is yet again a need to expand. Seems to happen every so often, and I think it may be a while before it's possible to predict fluctuations with any degree of accuracy.

And finally, be very cautious about the causes you support during times of crisis - because the restrictions and regulations put in place to cure a "temporary" problem will be even more damaging to your freedom and well-being in the long run. Witness the effects of the legislation put in place to address the Great Depression, or the terrorist incidents of the early 21st century - and the long-term effects of ill-conceived solutions to short-term problems should be very clear.

Thursday, August 5, 2010

Needs-Based vs. Solution-Based

Some notes from a discussion on the differences between needs-based and solution-based strategy:

In theory, needs-based strategy begins by examining a market segment and identifying a common set of needs, then defining a solution that will satisfy the needs (at a price consumers will be willing to pay), whereas solution-based strategy begins with a solution (product or service) and then seeks to identify needs in the marketplace to which that solution can be applied.

In practice, it can be difficult to distinguish between the two, as it's highly speculative as to whether a company that sells a given product began with the notion of the solution or an examination of the need. Several examples were discussed, and in each instance, the discussion ended in stalemate.

It's also not a given that needs-based strategy is the better approach. It certainly is the newer of the two, and there's often the notion that "new" is somehow better. But upon detailed consideration it seems reasonable that a solutions-based approach can be successful (in that there is a coincidence of solution and need) and a needs-based approach can fail miserably (if the company lacks the expertise to deliver a satisfactory solution).

Neither is it a given that the choice is necessarily exclusive to one or the other. A solution-based company can identify additional needs for its product, or make modifications to its existing product to suit different or changing needs. The argument that it is in a better position to do so than a company that may be aware of needs, but lacks experience in providing the solution, holds some merit.

Sunday, August 1, 2010

Learning from Customer Defection

I chanced upon an article on the topic of customer defection - didn't notice the original publication or date - and posted reading notes. My sense is that it may be a decade or so old, because some of the concepts were fashionable in the late 1990's, and have since fallen from favor, but the general topic remains valid.

The author's sense is that companies pay lip-service to the notion of customer loyalty and have a vague sense that it's important to retain customers, but it's difficult to put into practice. Traditional marketing (and hence the performance metrics for "success") is geared toward gaining new customers, and some degree of "churn" is taken for granted - the solution to which is simply to ramp up new-customer acquisition to outpace the attrition rate.

Aside from the common notion that it costs more to attract a new customer than to retain an old one, it's noted that even a minor improvement in retention has a major impact on the bottom-line performance of a firm, due to the profitability of a customer who makes more purchases over a longer period of time while requiring less in the way of constant "incentives" to continue doing business with the firm.

The author suggests some practices and metrics to address the problem of defection, and to leverage the knowledge of "lost customers" to improve service and prevent others from leaving. I don't think it's a sure-fire approach, or a quick cure, or even very comprehensive, but it's an interesting perspective that merits further consideration.