Stuck in the Middle with Neil

Oy. Neil McAllister is at it again, saving the online world by describing how Mom & Pop shops can compete with the Amazons of the world. With retail giants like Tesco and even Sears building out application programming interfaces (APIs) that will allow people to buy mattresses and microwave ovens with their mobile phones (srsly. ed.), he claims that small businesses are more vulnerable than ever.

(You know, I once thought Fatal Exception was a quirky title for a column, but now I realise it’s just an accurate description of the cognitive processes of its author.)

McAllister writes:

Ask any company that hosts an open source software project how many outsiders actually commit code changes on a regular basis and you’re likely to hear a discouraging figure.

His conclusion is that low uptake makes opening APIs a high-risk activity. That’s as may be, but isn’t it equally possible that these organisations aren’t successful because they’re doing it wrong?

Unless I have some kind of moral ownership stake in the project (such as I might have if I maintained a Linux software package, for example), what incentive do I have to invest my time? I understand the reasons for it, but many large businesses today are notoriously unreliable when it comes to strategy. Driven as they are by quarterly returns, and subject to the whims of an increasingly sociopathic class of managers schooled by MBA culture to abstract all decisions into monetary terms, why in the hell should I, the lowly FOSS developer, want to hitch my wagon to their star?

(More accurately, they’re asking me to hitch my horse to their wagon, without giving me any say on the destination or even the route.)

There are a few organisations who really get how community relations and management work, but they are a tiny minority. The overwhelming majority baulk when they come to the realisation that FOSS means sharing ownership and control.

None of this is news to us geeks. What gets me riled up about this article is that someone who should know better spends his time chiding FOSS processes for being inappropriate to business status quo instead of explaining to business how they’ve got to adapt to a new set of circumstances.

The reason McAllister doesn’t want to say that is because he’s holding out for a new set of actors in the online world: Middlemen who build out standardised (but presumably proprietary) API and data management services for small and medium businesses so they can keep up with the Amazons and Tescos of the world without having to build their own data infrastructure.

McAllister is, in other words, trying to reinvent the Distributor in an environment that was invented precisely to remove the need for intermediaries. My only response is to apply an aphorism from another age of commercially appropriated social phenomena: ‘You’ve come a long way, baby.’

Again With the Micro-Payments

Rex Sorgatz posted a quick-and-dirty re-think of how micro-payments could be made to work in a present-day web-browsing scenario. Again, I question the premise of the problem that micro-payments purport to solve.

My fundamental objection to online payment is that most people won’t pay for something of unknown value. Speaking for myself (and a few others I know), the moment a website starts putting obstacles between me and the content I want to access, it’s easier for me to move on than it is to leap whatever interface hurdles are barring my path.

That’s because:

  1. I refuse to buy something sight unseen. In the material world, I can at least take a look at the package and compare with a few competing products before I pull out my wallet. On the Web, I can’t really know whether something is worthwhile until I’ve had a look. For a bit of writing of less than 5000 words, that means I need to see most – if not all – of it before I decide what it’s worth to me. For a short video, that means all of it. (The mere idea of a trailer for a 15 minute video makes me shudder.)
  2. The whole point of micro-payment is that the amount is ‘throw-away’ money: increments so small that we don’t even have to think about them. Forcing someone through the UI equivalent of a toll booth creates an impediment that’s out of scale with the benefit.
  3. As I mentioned before: Online payment is not really payment, it’s reward. So much comes free with the price of admission (i.e. an Internet connection) that the only way we can assess the value of content is in the context of a gift economy. Think of it as a pay-as-you-exit performance, or busking, if you like. Modulo a few stingy, poorly socialised freeloaders, anyone who really enjoyed the show will happily toss a few coins into the hat. But not before they’ve seen the show.

To sum up: It’s best to leave interface and program flow issues alone until we’ve established the proper intellectual framework. Conceptualising a rewards system generates very different results than a payment system. Given that reward and payment systems are both easily circumvented, the only thing we can rely on is the visitor’s goodwill. Place a little box at the exit, allow people to click right past it if they want, and you’ll never have any complaints about access to data.

More to the point, everyone who gives, gives gladly. This is more than just a moral point. The importance of goodwill from one’s website visitors cannot be overstated. Remember: karma comes first, reward later, when it comes to online success. In fact, karma is the primary reward. Cash is just a symbolic representation of the goodwill people feel toward you.

P.S. If we’re honest with ourselves, we can accept that others’ failure to give us money is not an interface failure, nor is it a failure in their judgement. For better or for worse, if people aren’t willing to give money of their free will, then the failing is ours, not theirs.

I suspect that some manifestation of the Endowment Effect underlies most efforts to control access to online content. It’s irrational in the online context, but it’s human nonetheless to say, “I worked hard to produce this. I have a right to be paid for it.”

Those of us who have more or less grown up online have fewer reservations about the benefits of sharing content without precondition, and I suspect such expectations will become the norm for at least a significant subset of society before long.

On Privacy

Slashdot recently reported the release of a document analysing privacy issues in a number of major browsers. One of the findings was that the Flash plugin on all platforms and browsers was terribly insecure. One of the commenters had this to say:

“Privacy issues aside, I’ve never had any trouble with Flash.”

To which I replied:

I like your logic: Aside from a single tile, the space shuttle Columbia’s last mission went flawlessly.

Seriously, though: you’ve underlined the single greatest problem in computer security today – what we don’t see can hurt us. I’ve written about this at greater length elsewhere, but to put it simply, privacy is the battleground of our decade.

The struggle to come to terms with privacy will manifest itself in the legal, moral and ethical arenas, but it arises now because of technology and the cavalier approach that the vast majority of people take to it.

People consistently underestimate the ramifications of our ability to transmit, access and synthesise vast amounts of data using technology, for the simple reason that, as far as they’re concerned, they are sitting in the relative privacy of their own room with nothing but the computer screen as an intermediary.

On the consumer side of things, this creates what Schneier calls a Market for Lemons in which the substance of the product becomes less valuable than its appearance. As long as we have the illusion of security, we don’t worry about the lack of real protection.

On the institutional side, we see countless petty abuses of people’s privacy. There is often nothing stopping a low-level employee from browsing such data simply out of prurient interest. In fact, this kind of abuse happens almost every time comprehensive surveillance is conducted. In a famous example, low-level staffers in the US National Security Agency would regularly listen in on romantic conversations between soldiers serving in Iraq and their wives at home. The practice became so common that some even created ‘Greatest Hits’ compilations of their favourites and shared them with other staffers.

They would never have done so[*] had the people in question been in the room, but because the experience is intermediated by an impersonal computer screen, which can inflict no retribution on them, their worst instincts get the better of them.

When discussing software in the 21st Century, we cannot ever treat privacy as just one incidental aspect of a greater system. Privacy defines the system. An argument that starts by throwing it aside in the first subordinate clause carries little weight in anything that follows.

[*] On consideration, that’s not strictly true. History shows that surveillance societies are perfectly practicable even without significant automation. The East German Stasi are but one example. The critical factor in such cases is of course that the state sanctioned, encouraged, even required this behaviour of its citizens. So let me qualify my statement:

They would never have taken this unsanctioned action had they had any sense that they were being subjected to similar – or any – scrutiny.

Becoming Digital

[This week’s Communications column for the Vanuatu Independent.]

In 1995, Nicholas Negroponte, founder of MIT’s Media Lab, published a seminal book of essays titled Being Digital. At the core of his work was his division of all things into atoms or bits. Just as the atom is the basic particle of matter in modern physics, the bit is the basic particle of data in modern computing. All the material things in the world are composed of atoms. Increasingly, all of our ideas, learning, communications and stories are expressed in digital form.

As all technological fortune-tellers do, Negroponte gets some things very right and others very wrong. I’m not writing a book review, though, so I’m not going to enumerate each little quirk and quibble. He did get one big lesson right, and we need to learn it.

Developing nations everywhere share a common set of problems. The most obvious and common of them is a simple lack of capacity to begin taking advantage of the things that people in developed nations take for granted: instantaneous communications and the ability to access, gather and store vast amounts of information about every single aspect of humanity, no matter how trivial.

Whether we want to peek at Brad and Angelina’s twins or carbon date Eva de Naharon, we can do so via digital technology. Negroponte puts it quite simply: Everything that can be stored as bits will be stored as bits. Lack of resources, planning and understanding mean that in many parts of the developing world, most local knowledge can’t or won’t survive the transition.


Steaming Piles

I give up. I can’t support OpenOffice Write any more, and it’s nobody’s fault but its own. For anything more than simple tasks, the application is terrible. Its only saving grace is that Microsoft Office has its own brand of polished turd, named Word. Collectively, they are racing to the bottom of a decade-long decline in usability.

No, that’s too generous. The thing is, they’re at the bottom. They are useless for any but the most trivial tasks, and the most trivial tasks are better accomplished elsewhere, anyway.

Yes, I’m ranting. Let’s put this into a proper context:

I hate word processors. For any but the simplest tasks, their interfaces are utterly ridiculous. I haven’t liked a word processing interface since WordPerfect circa version 5, and if I had my own way, I’d author all my documents in either emacs or vi, depending on the circumstances.

Why do word processors suck so badly? Mostly, it’s because of the WYSIWYG approach. What You See Is What You Get, besides being one of the most ghastly marketing acronyms to see the light of day in the digital era, is ultimately a lie. It was a lie back in the early 1990s when it first hit the mainstream, and it remains a lie today. The fact of the matter is that trying to do structuring, page layout and content creation all at the same time is a mug’s game. Even on a medium as well understood as paper, it’s just too hard to control all the variables and still have a comprehensible interface.

But the real sin that word processors are guilty of is not that they’re trying to do WYSIWYG – okay, it is that they’re trying to do WYSIWYG, but the way they go about it makes it even worse. Rather than insisting that the user enter data, structure it and then lay it out, they cram everything into the same step, short-circuiting each of those tasks and in some cases rendering them next to impossible to achieve.

Learning how to write, then structure, then format a document (or even just doing each through its own interface) is easier to accomplish than the all-in approach we use today. For whatever reason, though, we users are deemed incapable of creating a document without knowing what it’s going to look like right now, and for our sins, that’s what we’ve become. And so we are stuck with word processors that are terrible at structuring and page layout as well as being second-rate text authoring interfaces. They do nothing well, and many things poorly, in no small part because of the inherent complexity of trying to do three things at once.

It doesn’t help that their technical implementation is poor. The Word document format is little better than a binary dump of memory at a particular moment in time. For our sins, OpenOffice is forced to work with that as well, in spite of having the much more parse-worthy ODF at its disposal these days.

There’s no changing any of this, of course. The horse is miles away, and anyway the barn burned down in the previous millennium. The document format proxy war currently underway at the ISO is all the evidence I need to know that I’ll be dealing with stupid stupid stupid formatting issues for years to come. I will continue to be unable to properly structure a document past about the 80th percentile, which is worse than not at all. I will continue to deal with visual formatting as my only means to infer context and structure, leaving me with very little capacity to do anything useful with the bloody things except to print them out and leave them on someone’s desk.

Maybe I’ll just stop using them at all. Maybe I’ll just start doing everything on the web and never print again.

I’m half serious about this, actually. At least on the Web, the idea that content and presentation are separate things isn’t heresy. At least on the Web, I can archive, search, contextualise, comment, plan, structure and collaborate without having to wade through steaming piles of cruft all the time.

At least on the Web, I can choose which steaming piles I step into.

I’m going to start recommending people stop using Word as an authoring medium. There are far better, simpler tools for every task, and the word processor has been appropriate for exactly none of them for too long now. Sometimes you have to destroy the document in order to save it.


UPDATE: How wrong could I be about the severity of this threat? Very wrong, apparently. I haven’t confirmed it yet, but it’s hard to imagine how this week’s mass server hack could have happened without tools like the one described below. I’ll write more about this in this week’s column….

Heh, cute:

Cult of the Dead Cow Announces Goolag Vulnerability Search Engine.

Once you get past the Chinese porn silliness, there’s a real story here:

Google’s effectiveness as a search engine also makes it an effective… well, search engine. Common website weaknesses are exposed by search engines such as Google, and anyone can access them by using specially crafted queries that take advantage of Google’s advanced searching capabilities. As the cDc press release indicates, there are approximately 1,500 such searches published and readily accessible on the Internet. And now the cDc has built a(n a)cutely satirical web front end and is offering a downloadable desktop search application for Windows, giving script kiddies the world over something else to do with their time.
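For the uninitiated, these ‘specially crafted queries’ are just ordinary Google searches built from the engine’s long-documented advanced operators. A minimal sketch of how one might be assembled – the operators are real, but the values here are illustrative inventions, not taken from the Goolag list:

```python
from urllib.parse import urlencode

# A few of Google's advanced search operators, all publicly documented.
# The values are illustrative, not drawn from any real 'dork' collection.
operators = {
    "intitle": '"index of"',   # directory listings left world-readable
    "filetype": "log",         # stray log files exposed to the crawler
    "site": "example.com",     # confine the search to one (hypothetical) domain
}

# Combine the operators into a single query string.
query = " ".join(f"{op}:{val}" for op, val in operators.items())

# The finished search URL. Paste the query into any Google search box
# and the engine does the 'scanning' for you -- no tools required.
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)
```

Which is the whole point: the barrier to entry here is not technical skill, it’s merely knowing which strings to type.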

What effect has this had on website security? It’s difficult to tell. The principle of using Google as a scanning tool has been common knowledge since at least 2006, but according to Zone-H, who record large numbers of website defacements every year, the only significant increase in website attacks since then was the result of an online gang war between various Russian criminal factions, back in 2006. Ignoring that anomalous rise in activity, the rate of attack actually fell slightly in 2007 compared to recent years, relative to the number of active websites.

Zone-H’s latest report proves only that the percentage of insecurely configured websites scales on a roughly linear basis with the number of available websites, and that the choice of technology has almost no bearing on the likelihood of a successful attack. Indeed, most exploits are simple attacks on inherent weaknesses: guessing admin passwords or copying them when they’re sent in cleartext, misconfigured shares and unsafe, unpatched applications. Attacks requiring any amount of individual effort are not very common at all. Man-in-the-middle attacks rated only fifth place in the list of common exploits, representing only 12% of the total. But researchers have elsewhere noted that cross-site-scripting attacks are on the rise, and are being used mostly by spammers to enlarge their botnets.

The lesson here is fairly obvious: Making simple mistakes is the easiest way to expose yourself to attack. And search tools like Goolag make finding those mistakes remarkably easy. You won’t be targeted so much as stumbled across. Given the recent rise in the number of websites being used to inject malicious software into people’s computers, spammers and other online criminals appear to have a strong incentive to use even the less popular websites to ply their trade.

Your choice of technology won’t save you, either. Most popular web servers are fairly secure these days, and though not all server operating systems are created equal, the big ones have improved markedly. But the same cannot be said of the applications and frameworks that run on them. The old adage still applies: ease of use is universal. When you make things easy for yourself and your users, you are liable to make things easy for other, less welcome guests as well.

The lesson for the average website owner: Do the simple things well. Don’t waste your time trying to imagine how some intrepid cyber-ninja is going to magically fly across your digital alligator moat. Just make sure your systems are well-chosen and properly patched, pay attention to access control and treat authentication seriously. Statistically, at least, this will drop your chances of being pwned to nearly nil, or close enough as makes no never mind.

Web Standards – A Rant

It’s very common on Slashdot and other, er, technical fora, to see people make assertions like the following:

IE extensions [of existing standards] have proven to be a very good thing for the web overall. It has always been IE that has pushed the limits of dynamic web pages through the inclusion of similar extensions (primarily for the development of Outlook Web Access) which have given birth to the technologies that fuel AJAX and other modern web techniques.

What an interesting viewpoint. I couldn’t disagree more.

The ‘Embrace and Extend’ strategy on which Microsoft has relied since about 1998 is designed to be divisive and ultimately to support Microsoft’s one interest: by hook or by crook, to land everyone on the Microsoft platform. They worked with little or no support or cooperation from any other body[*] and more often than not used their position to subvert the activities of others. They published competing specifications and duplicated functionality through their own proprietary implementations.

Now before we go any further, it’s important to remember that this strategy was dressed up nicely, spoken about politely in marketing euphemisms and was seldom openly disparaging of competing technologies. It is also important to note that very few of the people actually responsible for the creation and fostering of standards ever felt anything but frustration and animosity toward these efforts to subvert the process. I’ve seen such luminaries as Lawrence Lessig and Sir Tim Berners-Lee stand up in public fora and state in absolutely unambiguous terms that ‘this MS technology is the single biggest threat faced by the web today.’ (WWW Conference, Amsterdam, 2000, for those who care.)

It’s true that there are some who have argued for accommodation, and while they’ve achieved short-term gains (RSS and SOAP, for example), the recent announcement of MS-only implementations and extensions of these standards offers further evidence that MS’ intentions are anything but benevolent.

Now, some may trot out the sorry old argument that a corporation’s job is to profit and damn the ethical/legal torpedoes, but the fact is that to most of the people working in standards, this is not the goal. Believe it or not, most of us actually care about the community, and feel that the way things are implemented is just as important as what gets done. So feel free to act as apologist for the soulless corporate machine if you must, but please, don’t pretend that that’s the only way things can be made to work.

Microsoft (and Netscape in its time) are not only guilty of skewing standards in their favour. They’re also guilty of something far more insidious: the infection of the application space with software designed to lock people into their proprietary approach to things. Often enough, the design is fatally compromised in the process. The example cited above, Outlook Web Access, is a prime example of how to break things in the name of lock-in.

Here’s a quick summary of just some of the ways in which Outlook Web Access, which encapsulates email access inside HTTP and passes it through ports 80/443 by default, is technically broken:

  • Caching proxy servers might or might not do the right thing – behaviour here is undefined
  • Traffic/network analysis is subverted
  • Security is compounded, as activity patterns have to be checked on more, not fewer ports (think about it)
  • Likewise, security audits are far more difficult, as traffic has to be disambiguated
  • Security is subverted, users can simply tunnel high volume traffic through to (at least) the DMZ with no guarantee that it’s being inspected (i.e. no one catches that the traffic is neither going to the web nor the Exchange server; each one assumes it’s going to the other and that it’s ‘okay’. Same goes with large volumes of outgoing information.)
  • Deliberate bypassing of firewall policies, promoting insecure configurations (e.g. pushing things through ports 80 and 443 as a matter of informal policy, reducing the firewall to an ornament)
  • Buggier software due to additional complexity
  • Non-standard, meaning (little or) nothing else will support it
  • Promotes software lock-in, which has cost and management implications
  • Promotes monoculture, which has cost, management and *security* implications
  • Protocols exist for this purpose already

That last point is the key. Why on earth would MS build an entirely new way to get one’s email when secure IMAP and POP3 already exist? Microsoft doesn’t particularly care about doing things better; they just want to make sure that their customers do things differently. Quality is seldom a concern, and as a result, it’s usually a casualty.
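The point is easy to demonstrate: retrieving mail over the standard protocols is a few lines of code in virtually any language, on a well-known port (993) that firewalls and traffic analysis can account for – exactly the property the HTTP-encapsulated alternative throws away. A minimal sketch using Python’s standard imaplib; the host, user and password are placeholders, not a real server:

```python
import imaplib

def fetch_unseen_headers(host, user, password, mailbox="INBOX"):
    """Return the raw headers of unseen messages via IMAP over SSL.

    Plain, standard IMAP on port 993: any client, proxy, firewall
    or audit tool knows exactly what this traffic is.
    """
    with imaplib.IMAP4_SSL(host) as conn:   # TLS from the first byte
        conn.login(user, password)
        conn.select(mailbox, readonly=True)
        status, data = conn.search(None, "UNSEEN")
        headers = []
        for num in data[0].split():
            # Fetch only the header block, without marking the message read.
            status, msg = conn.fetch(num, "(BODY.PEEK[HEADER])")
            headers.append(msg[0][1])
        return headers

# Hypothetical usage -- any standards-compliant IMAP server will do:
# for hdr in fetch_unseen_headers("mail.example.com", "me", "secret"):
#     print(hdr.decode(errors="replace"))
```

Every mail client ever written can speak this protocol; only Outlook can speak Outlook Web Access.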

[*] It’s true that they were – and remain – members of such organisations as the World Wide Web Consortium.