Thursday, May 27, 2010

Moderation and Censorship

I've noticed a spike lately in blog postings about Facebook's "censorship" of users. It's a perennial issue, and Facebook isn't the only company to come under fire from bloggers, for whom freedom of expression is sacrosanct - until the moment that someone begins spamming their own blog. Funny how that works.

But more to the point, there is not yet a foolproof system of moderation for online forums and communities. Though some progress has been made in that area, the problem is inherently difficult to address - and while I don't have a good solution to share, I wanted to capture some thoughts about the solutions that have been tried:

1. No Moderation

The Internet began as an open system, without any moderation whatsoever. Even before the Web, there were bulletin boards (USENET) and chat forums (IRC) where anyone could show up and post anything they liked.

That didn't go over so well: unscrupulous advertisers, juvenile pranksters and attention-seekers, and those who sought to abuse the system to choke off any conversation that didn't suit their liking flooded the boards with postings that the vast majority of other users found annoying. It wasn't long before USENET began to provide moderated threads and IRC added features to set up private forums and enable room creators to boot users who misbehaved.

I don't think anyone could successfully argue that having no moderation at all is a viable solution. It's an attractive theory, but has never, to my knowledge, been effective in practice for any forum with a substantial membership, as there are always a few miscreants in the woodwork who simply have to be dealt with if the forum is to serve its intended purpose for the majority of users.

The greater the body of users, the larger the number of miscreants (and the greater its attractiveness to them), until the latter completely corrupt and pollute the conversation threads, rendering them useless. I've not seen statistics on this, but from my own experience, I'd guess that the line is drawn at about 10,000 users (though sometimes much sooner, depending on the forum and its topic).

2. Individual Filtering and Blocking

The next step up from no moderation at all is providing the user with the ability to filter and block. To disambiguate: "filtering" is hiding messages based on their content, and "blocking" is hiding messages based on the identity presented by the sender. (To avoid clutter, the single term "filtering" will be used hereafter.)
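To make the distinction concrete, here is a minimal sketch of the two mechanisms (in Python; the senders, messages, and word list are all invented for illustration):

```python
# Per-user filtering vs. blocking, in miniature.
# All names and messages here are invented for illustration.

messages = [
    {"sender": "troll42", "text": "BUY CHEAP WATCHES NOW"},
    {"sender": "alice",   "text": "Has anyone read the new draft?"},
    {"sender": "bob",     "text": "BUY my mixtape"},
]

blocked_senders = {"troll42"}       # blocking: keyed on WHO sent it
filtered_words  = {"buy", "cheap"}  # filtering: keyed on WHAT it says

def visible(msg):
    if msg["sender"] in blocked_senders:
        return False                         # hidden by identity
    words = set(msg["text"].lower().split())
    return not (filtered_words & words)      # hidden by content

for msg in messages:
    if visible(msg):
        print(msg["sender"], ":", msg["text"])  # only alice's post survives
```

Note that each user maintains their own copies of these two lists; centralized moderation, in effect, consolidates them at the community level.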

This solution predates moderation, in that IRC users could "mute" an obnoxious participant and USENET users could "killfile" posts. It's a democratic approach that allows each individual to decide what information to bother reading - and if there was any problem with the filters hiding desired messages or failing to hide undesired ones, the user had only himself to blame.

The drawbacks to this approach are inefficiency and ineffectiveness. Each user must attend to the task of filtering. Worse, those who wish to abuse a forum are elusive: they may change their login ID or alter the content of their messages to get past the filters, and they will do both repeatedly, making effective filtering an ongoing and tedious task that, ultimately, is not very accurate.

Moderation by a central source is, in this way, merely the consolidation of blocking and filtering, assigning the task to a moderator who has the interest, time, and expertise to clean up the detritus for the benefit of the community. But naturally, when a task is delegated to another person, not everyone is pleased with the way they go about doing it. This is at the heart of the problem, and it is this very quality that continues to make it difficult to solve.

3. Operator Moderation

The first form of moderation is "operator" moderation - by which I mean that a human being, either the owner of the forum or appointed by the owner of the forum, tends to the task of filtering content for the participants.

This form of moderation relies on human judgment, which is both its strength (in that human beings can more accurately identify information that doesn't fit the purpose or rules of a forum) and weakness (in that not all humans agree, that human judgment is subject to error, it may be applied inconsistently, and humans have a limited capacity to process information).

Moreover, objections to moderation are at their most vehement when a human moderator is involved: accusations of prejudice, intention to suppress, etc. are given more credence when a person is making the decisions as to what others are "allowed" to post in a forum. And, in fact, moderation is a form of censorship, as it involves an estimation (or decision) of whether a given message is something the community wants to see or the owner wishes to have published on his forum.

If it weren't for the litigious nature of contemporary society, forum operators might find this option more attractive - but human moderation puts the operator in a precarious position whenever someone whose post has been blocked cries "foul" and threatens a civil rights lawsuit, or when others who object to a given post threaten to alert the authorities about the kind of information the operator is "allowing" to be published on the forum he operates.

4. Automated Moderation

To be precise, automated moderation is computer-assisted moderation: software is used to detect and filter unwanted messages and, when a given user has sent more than his quota of nuisance messages, to block or disable the account.

But automated moderation brings problems of its own: chiefly, that computers are very faithful, but not very smart at all. A system can be automated to block abusive messages ("John is an ass") but will erroneously block inoffensive ones ("John is an assistant principal" or even "No one should say that John is an ass"), and due to the complexities of language, the limitations of computer logic, and the inventiveness of miscreants, no reliable solution has been found.
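The false-positive problem is easy to reproduce. The sketch below (Python; the banned-word list is invented) shows a naive substring filter tripping over the examples above, and shows that even matching whole words only partly helps:

```python
import re

# A naive substring-based filter, illustrating false positives.
# The banned-word list is invented for illustration.
BANNED = ["ass"]

def naive_block(text):
    """Block if any banned string appears anywhere in the text."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

def word_block(text):
    """Block only on whole-word matches, using regex word boundaries."""
    lowered = text.lower()
    return any(re.search(r"\b" + re.escape(word) + r"\b", lowered)
               for word in BANNED)

print(naive_block("John is an ass"))                  # True: correctly blocked
print(naive_block("John is an assistant principal"))  # True: false positive
print(word_block("John is an assistant principal"))   # False: whole-word match fixes this one
print(word_block("No one should say that John is an ass"))  # True: still blocks the rebuttal
```

Word boundaries rescue "assistant," but no amount of pattern matching can tell an insult from a quotation or a denial of one - that requires understanding meaning, not patterns.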

Chiefly, this is because computers treat language as a jumble of letters and are unable to derive its meaning. While technology and techniques have advanced beyond mere pattern-matching in text to attempt to derive a sense of the ideas the text conveys, software remains even less competent at that task than it is at the simplest forms of pattern matching.

Also, automated moderation does not escape the problem of operator bias. While software can be consistent and objective in applying the criteria for filtering content, a human being must program and/or configure the logic that is applied by the system, so it (rightly) does not avoid accusations of bias. If anything, it makes bias more efficient and, by virtue of its own glitches and limitations, makes that bias seem more arbitrary and draconian.

5. Crowdsourcing Moderation

A recent fad in moderation is "crowdsourcing," which combines the various techniques listed above: there is no initial attempt at moderation, then individual users set their filtering preferences (or flag specific messages), and the system attempts to amalgamate these individual preferences into an automated filter.

This would seem to be an evolutionary leap, combining the best qualities of all previous forms of moderation, but it often combines all of the worst qualities as well: it requires redundant efforts of each user, depends on the users to be objective and fair in their filtering criteria, and uses their input to generate automated rules that may cause innocuous content to be blocked.

In addition to these problems, crowdsourced moderation creates a problem of its own: ganging. A group of individuals can band together to take control of a forum by collectively flagging posts from the same individuals, such that these participants control the content available to the majority. This "ganging" effect may be the result of an intentional conspiracy to silence a particular person or point of view, or it may be an unintentional side effect of the same individuals acting independently, without any conscious decision to gang together.
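The mechanics of ganging can be seen in a small sketch. Assume, hypothetically, a forum that hides any post once a fixed number of users have flagged it (the threshold and user IDs below are invented):

```python
# Threshold-based crowdsourced moderation, and how a small
# coordinated group can abuse it. Threshold and IDs are invented.

FLAG_THRESHOLD = 3  # hide a post once this many distinct users flag it

def is_hidden(flags):
    """flags: the set of user IDs who flagged the post."""
    return len(flags) >= FLAG_THRESHOLD

gang = {"u1", "u2", "u3"}  # three users acting in concert

# An inoffensive post flagged only by the gang is hidden for everyone:
print(is_hidden(gang))           # True
# ...while a genuinely spammy post flagged by two independent users survives:
print(is_hidden({"u9", "u27"}))  # False
```

Raising the threshold blunts small gangs, but it also means more redundant flagging is needed before anything is hidden - the tuning trade-off at the heart of crowdsourced moderation.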

Arguably, that's merely a matter of fine-tuning: calibrating the threshold so that a small number of users can't skew the filtering algorithm, yet not requiring so many users to agree that the efficiency of crowdsourcing is undermined. But the same argument has been made of the other forms of moderation, and after decades of effort, the problem has yet to be solved.


In the end, this post has been just another examination of the problems from a guy who doesn't have any solutions, and is seeking to better understand the phenomenon - but that's largely the point: some problems are inherently difficult, and there is no simple solution. Still, it seems that a better understanding of them makes them a bit easier to swallow, and provides a better defense against those who constantly seem to be spinning up panic or outrage at a situation that, in spite of the best efforts of many people, has yet to be effectively addressed.

Sunday, May 23, 2010

Persuasive Technology

I've added study notes on Dr. Fogg's book, Persuasive Technology: Using Computers to Change What We Think and Do. While the title seems alarmist, the book provides an interesting and well-researched perspective on the current and future direction of computer-based technology as an agent of influence, which is worth consideration by those who design and use technology products.

Where commercial interests are concerned, it's fairly easy to recognize the clumsy attempts of marketers to generate interest in buying product (though their methods are evolving to become more sophisticated and subtle), but focusing on advertising alone overlooks the vast majority of the attempts to influence the user. Implicit in every communication is the desire of the publisher for users to "believe this idea," and implicit in every application is the desire of the manufacturer for users to "use this functionality, in this way, toward this purpose."

As such, users do not recognize the constant barrage of attempts to use technology to subtly adjust their way of thinking, nor the behavioral changes that it precipitates. What's more, those who develop, design, and produce technology products are largely unaware of their implicit purpose in this regard, and the result is a clumsy, ineffective, and often misguided product that fails to accomplish its goals for user and operator alike, and may even do unintentional harm.

The author's focus is largely on theory and practice. While he periodically strays into ethical territory, the majority of the book is an objective examination of the topic: what persuasion is, how it works, and how it can be used to effectively accomplish its goals (regardless of whether the intentions, methods, or outcomes can be assessed as "good" or "evil").

It's also worth noting that nothing about this book is revolutionary: the theories of persuasion were laid down millennia ago by Aristotle in the Rhetoric, and psychological research into human behavior and motivation is several decades old. But I'm not aware of any other theorist who has devoted the time to applying these old concepts to new media in an orchestrated or systematic manner, and the author concedes that research in this specific area is thin. Given that achieving "success" on the Web depends on the ability to exert influence over the user, it can be expected that interest and research in this area will increase, and that the methods by which users are subjected to influence tactics will rapidly evolve in the near future.

Friday, May 21, 2010

User Irresponsibility

I went off on a bit of a rant in my reaction to a blog posting about privacy issues and Facebook - which is getting to be rather a habit lately and I probably need to work on it - and am reposting it here to maintain the information in my own repository.

In the original post, the author seems to be implying that Facebook is to be held responsible for users who indiscreetly post information about themselves, and that they are purposefully trying to make things difficult by having an unnecessarily complex interface (he refers to the need to go into "advanced" settings) and an incomprehensible privacy policy (he indicates that it's longer than the U.S. Constitution).

That rankles a bit, because from a perspective of UX design, we go to great lengths to make things as easy and understandable as we can, yet users seem to bumble along without bothering to read instructions or heed warnings and, when they find they've made a mistake, seek to blame the site operator rather than learn from their own mistakes.

It's a side effect of our litigious culture, I suppose, but I sense that the long-term effects may be severely detrimental. But rather than paraphrase myself further or add to an already-lengthy rant, here was my response:

The question you ask – how many users bother to check their settings – is the thorn in everyone’s side when technology is adopted by the masses.

It’s not just highly sophisticated technologies, but even simple ones: for the longest time, there was this running joke about people whose VCRs were constantly flashing “12:00” because they never bothered to figure out how to set the clock … and 10% of the audience laughed at the notion that people could be so stupid, while the other 90% laughed because, back at home, their VCR was flashing, and they felt a little awkward and embarrassed by it.

And in most cases, it’s not because the manufacturers of VCRs made it difficult to set the clock – a child could figure out how to do it within half a minute – it was just that people didn’t bother. I don’t think the Facebook privacy “issues” are much different – the settings are easy to access and modify. But people don’t bother.

There are sites that catalog thousands of instances where people indiscreetly posted information that they probably shouldn’t have – and much like the VCR joke, 10% are laughing at the carelessness of others and the other 90% are a bit nervous because they may be doing the same. But unlike the VCR joke, being careless with social media isn’t a private embarrassment, but a public one.

In the end, what disturbs me most of all is that the carelessness of the user has the potential to become a liability for those who provide these services. And it’s not that Facebook is evil or is trying to make things unnecessarily complex – it’s about as easy as it could possibly be, but people just don’t bother.

No matter what Facebook does, how easy or obvious they make things, there will be people who simply won’t bother – and Web usability will bear out that even if you give them a “big red warning,” they’ll just click OK without bothering to read it – and worse, will get litigious when they seek someone else to blame for the consequences of their own disregard.

If these problems continue, and if lawsuits side with the faction that wants to hold Facebook liable, the only choice will be to shut down the service or disable it to the point where it’s not useful at all.

That may seem a bit melodramatic from a short-run perspective of specific complaints and incidents … but over the long term, it’s a serious threat to the availability of technology in general.

If it’s not feasible to provide a technology unless you can render it safe in the hands of users who won’t bother to read instructions or heed warnings without risking liability, then most companies won’t.

My hope is that incidents such as the one you describe are used to illustrate to users the importance of paying attention – but in the “not-my-fault” climate of our current culture, I despair that may not be the outcome.

This impromptu rant could probably use a bit more development, but I think that further consideration will uphold the fundamental theses of the argument.

Sunday, May 16, 2010

Service Oriented Architecture

Every so often, I feel the need to peek behind the curtain to see what the IT guys are up to. I generally regret having done so, as the technology behind the screen has very little to do with the user's experience of interacting with a Web site. Even when I get a sense of what's going on back there, I feel none the wiser for it. And this is no exception: having just read the Service Oriented Architecture Field Guide, I'm left with an overwhelming sense of "so what?"

It's not that the book wasn't well written or failed to explain SOA in terms a layman can understand - it's actually quite good in both regards - just that I don't see that it matters at all whether the information systems that drive the functionality are cantankerous juggernauts with redundant capabilities or nimble services that enable the developers to patch together applications from mix-and-match components. If it works, it works, and I don't need to know the reasons why.

If I were to try to salvage some value from the time invested, I'd say that it's another reminder of one of the basic principles of user experience: that the user approaches a device with the desire to accomplish a task or fill a need, and they are not particularly interested in the nuts-and-bolts of how the value is delivered. But the more I think on it, the more it seems to me that few Web sites do that anymore, and that's a Good Thing.

Thursday, May 13, 2010

The Demise of Retail

I've been thinking a bit about the retail industry lately - I doubt that the conclusion I've drawn is unique, but I figured it was worth hammering out and stowing the notes on my blog:

Since 1993, when the Internet first went dot-com, futurists have been proclaiming the death of retail. Naturally, they were dismissed, and the fact that many retailers are still going strong has been presented as evidence that their predictions are misguided.

But the more I think about it, the more I am beginning to believe that the predictions were well reasoned: though the decline of the retail industry has been very slow over the past two decades, it may accelerate in the not-too-distant future, and it may take a few generations for retail to become virtually extinct - which I sense to be inevitable.

Two things have brought this back to mind: First, there is the spread of service-oriented architecture in information systems design, breaking down data so it may be re-integrated by a custom application, which will in effect enable manufacturers to add their products to a "store" of infinite size. Second, there is the iPhone application that enables individuals to scan a barcode in a retail store and locate the same item online, often at a much better price. As the first becomes increasingly inclusive and the second increasingly ubiquitous, they will create a shopping "platform" that renders the retail model of distribution largely moot.

Retailers have already suffered losses to the online medium, as they cannot compete on price for the exact same item. The retailer must add to the price charged to customers in order to cover their own operational expenses and turn a profit - and so must every intermediary (wholesaler and broker) between the manufacturer and the retail outlet. What has saved retailers, thus far, is the greed of the manufacturers, most of whom have decided to price their goods at the same level as retail outlets and pocket the difference. Increased competition among manufacturers will put an end to price-matching, as all it takes is one or two large manufacturers to break ranks, sacrificing per-unit profit for the competitive advantage they will gain, and the rest of their sector will be motivated to fall in line to defend their market share.

A greater obstacle is consumer preference. Presently, consumers in the market for certain goods have a sense of unease about purchasing products without a hands-on examination, and are wary of the return policies of online retailers, in the belief (often well-founded) that they are not as simple as they are alleged to be. Retailers are currently capitalizing on this preference, relying on the belief that the holistic "customer experience" of shopping at their physical locations counterbalances the difference in price, and indeed this may be longer-lived than many have prophesied.

However, the psychological attachment to an in-store experience is learned. Shoppers today wish to handle merchandise because they have become accustomed to doing so by their years of shopping in physical stores when there was no suitable alternative channel, and are uncomfortable with the online channel due to its novelty. But this behavior is changing, and each generation of shoppers will have diminishing first-hand experience with physical stores and increasing comfort with the online channel. It may take a generation or two (or more), but the market will inevitably be weaned of its addiction to physicality and the value it places on the "experience" of visiting a retail store.

Some retail formats are already virtually extinct. Consider this: when is the last time you walked into a record store? The very name is an anachronism, in that vinyl "records" are seldom available except in antique stores or specialty retailers, and even the delivery channel has gone digital, through MP3 downloads. Meanwhile, Amazon has made bookstores (outside of airports and other locations where a book is a convenience purchase) virtually extinct; devices such as the Kindle, the iPad, and their imitators are threatening to do the same for printed books; and Netflix has crushed its competition in the video entertainment industry and is going increasingly digital.

Admittedly, books, music, and video are "special" categories of merchandise, in that they can be delivered digitally - but there are an increasing number of product categories that consumers no longer purchase in a physical location, but prefer to order online. Virtually the only categories that have not suffered significant losses have been perishables (such as groceries) and fashion (clothing and jewelry) - but again, as consumers break their habit of "handling the merchandise" and manufacturers adopt more accommodating return policies, even these categories will begin to leak, and eventually hemorrhage, market share to the online medium.

That's not to say that retail will become totally extinct, but I see a rapid devolution in the next decade or so, occurring in two phases:

First, stores will devolve into showrooms with limited inventory on hand for customers who simply must take immediate possession of an item. In some ways, stores already are becoming showrooms, for any customer with an iPhone (though it's debatable whether an individual who enters a store to view an item, then purchases from another source, is really a "customer" after all), and in order to remain competitive, they must slash their overhead to be able to compete on cost, such that any premium they charge for immediacy of possession is an acceptable trade-off to their market.

Second, there will likely be increased efficiencies and economies of scale in the logistics sector, such that the cost of short-term delivery (overnight, or a few days at the most) is less than the premium charged by physical retail outlets, shifting the scales in the market's assessment of the value of immediate possession. In all, there are very few products for which consumers are unwilling or unable to wait until the next day to possess.

Neither do I expect retailers to go gentle into that good night: they can be expected to take defensive measures, such as removing the barcodes from products, responding aggressively to customers who are seen using price-comparison software in their stores, or even attempting to block wireless communications within their premises (though it would take some lobbying with the FCC to be able to implement such a solution), but in the end, this is not an effective and permanent solution, merely a way of attempting to stave off the inevitable by making other media less convenient (and meanwhile degrading the "experience" they are attempting to rely upon as a competitive advantage).

In the end, the only thing that seems uncertain is the length of time: exactly how long it will take retail to die. I can't prophesy an answer ... a few more decades, or perhaps a few more generations ... but I have a definite sense that its demise is coming, slow and certain.

Tuesday, May 11, 2010

Why Is Business Writing So Awful?

I read Jason Fried's article "Why is Business Writing So Awful" on the Inc. magazine Web site. He makes a very good point about the way in which business writing is unnecessarily complex, packed with industry buzzwords, and completely lacking in personality. I'd agree with all of this, but I think he completely overlooks the main reason that the text that appears on business sites, and a good many others, is terrible:


The examples that Fried provides, even most of the ones he considers to be "good" writing, all convey information that the reader is highly unlikely to be interested in reading in the context of the task they are presumably trying to accomplish (I concede that, since the examples are taken out of context, there's a chance a person might want to know a bit about the company's history or corporate practices - but I think the chance is slim). It's not enough for writing to be concise, and it's not enough for the company to find a funny and clever way of stating things. The writing must serve the needs of the reader.

In user experience design, a great deal of attention has been paid to the needs and desires of the user. Web sites that force you through an unnecessarily long process that requires you to do unnecessary tasks and provide unnecessary information are far less successful than those that consider what the user wants and needs to do, and requires of them the minimum amount of effort to succeed.

The same principle should be applied to the writing on a Web site - and it is sorely neglected, even by companies that seem to be focused on the user in other regards: consider what the user needs to know, in the context of what they are doing at the time, and inflict upon them the minimum amount of information to succeed.

This will eliminate a vast majority of the long passages of boring text on Web sites (which people have learned that they don't need to read anyway) - and what's left will be information that the user will be genuinely interested in reading.

At that point, per Fried's thesis, you should still hire an editor to make it clear and concise, but copy editing is spit-and-polish - alone, it can't make "bad" text good, only less bad.

Sunday, May 9, 2010

Successful Business Intelligence

I recently added reading notes for Cindi Howson's book on Successful Business Intelligence, a topic that has been in vogue for a few years but has not been widely recognized as an effective investment. The author's take is that the poor performance of BI deployments is largely the fault of business users, IT professionals, and even BI consultants who simply don't "get it."

After reading the book, I have a better grasp of the potential value behind the hype, as well as a sense of the various approaches to analysis that can provide a business with a measurement of the efficiency of present operations and a gauge of proposed changes, but I didn't get a clear sense of how BI can be applied effectively.

I have a sense that's partly because a "successful" BI deployment confers competitive advantage, and the author can't be expected to disclose privileged information about the companies with which she has consulted. It's also likely that the book is intended as a marketing tool, designed to communicate the potential value of hiring a BI specialist while stopping just short of the mark where a reader would be given sufficient information to proceed competently without hiring a consultant.

Taken all for all, I'd say it's a fairly good introduction to the topic, but that it would require further study to gain the perspective necessary to apply the discipline in an effective manner.

Tuesday, May 4, 2010

User Experience as Snake Oil

I read a posting on another blog* that seemed to me to present the concept of "user experience" as something new and different - a rival to, and improvement upon, design and usability. My reaction was that this perspective not only understated the value of user experience itself, but was counterproductive to having the three disciplines work in unison toward a common goal.

The more I consider the matter, the more it seems to me that user experience isn't something "new and different," but something that's quite old and familiar, at least to companies that are driven by the belief that their success as a business relies upon satisfying the needs of their customers.

This principle applies to any product or service, not merely to Web sites and digital services. The products that people genuinely love are designed to fill a need, completely and effortlessly, and seem to be engineered perfectly for that purpose. The products that people begrudgingly accept are those that provide only partial satisfaction, and are engineered to be manufactured in a cost-effective manner.

But between the two lies a dangerous area: the products that people hate. These are not products that are clearly bad - consumers are merely indifferent to those (they don't buy them, and don't care) - but products that seem attractive at first blush, yet are later discovered to have missed the mark. In the psychology of motivation, this is called cognitive dissonance. In common parlance, it's called being deceived, tricked, and lied to.

And that wraps me back around to the topic of user experience. The great danger in user experience today is the attempt to create a false sense of quality about a product that isn't really designed with the needs of the user in mind. The advertising is catchy, or there's some novel feature that seems keen at first blush, but which loses its glamour in a very short time.

While this can be effective in generating short-term "success," it does not imbue the product with real quality that will ensure a longevity of demand and, ultimately, the longevity of the company that produces it. Said another way, a snake-oil salesman should know better than to come back to the same town twice.

And so I wonder, with some degree of concern, if the hucksters of the user experience brand will eventually poison the profession for us all by using deceptive tactics to generate short-term rewards, then slink off before the users discover that they've been gypped - meanwhile, discrediting the profession, and those who seek to apply the principles of user experience to create lasting value, to the benefit of our employers and customers alike.

* I considered linking to that post, but have decided not to do so. The ideas I explore in the remainder of this entry have nothing to do with the original article (I don't think that it was the author's intention to position UX as snake oil), it just got me thinking in this direction, and by free-association, I ended up in a much darker place that has nothing to do with that blog, and I expect the author would (rightly) take exception to the implication that it might.