Steaming Piles

I give up. I can’t support OpenOffice Writer any more, and it’s nobody’s fault but their own. For anything more than simple tasks, the application is terrible. Its only saving grace is that Microsoft Office has its own brand of polished turd, named Word. Collectively, they are racing to the bottom of a decade-long decline in usability.

No, that’s too generous. The thing is, they’re at the bottom. They are useless for any but the most trivial tasks, and the most trivial tasks are better accomplished elsewhere, anyway.

Yes, I’m ranting. Let’s put this into a proper context:

I hate word processors. For any but the simplest tasks, their interfaces are utterly ridiculous. I haven’t liked a word processing interface since WordPerfect circa version 5, and if I had my own way, I’d author all my documents in either emacs or vi, depending on the circumstances.

Why do word processors suck so badly? Mostly, it’s because of the WYSIWYG approach. What You See Is What You Get, besides being one of the most ghastly marketing acronyms to see the light of day in the digital era, is ultimately a lie. It was a lie back in the early 1990s when it first hit the mainstream, and it remains a lie today. The fact of the matter is that trying to do structuring, page layout and content creation all at the same time is a mug’s game. Even on a medium as well understood as paper, it’s just too hard to control all the variables and still have a comprehensible interface.

But the real sin that word processors are guilty of is not that they’re trying to do WYSIWYG – okay it is that they’re trying to do WYSIWYG, but the way they go about it makes it even worse. Rather than insisting that the user enter data, structure it and then lay it out, they cram everything into the same step, short-circuiting each of those tasks, and in some cases rendering them next to impossible to achieve.

Learning how to write, then structure, then format a document (or even just doing each through its own interface) is easier to accomplish than the all-in approach we use today. For whatever reason, though, we users are deemed incapable of creating a document without knowing what it’s going to look like right now, and for our sins, that’s what we’ve become. And so we are stuck with word processors that are terrible at structuring and page layout as well as being second-rate text authoring interfaces. They do nothing well, and many things poorly, in no small part because of the inherent complexity of trying to do three things at once.

It doesn’t help that their technical implementation is poor. The Word document format is little better than a binary dump of memory at a particular moment in time. For our sins, OpenOffice is forced to work with that as well, in spite of having the much more parse-worthy ODF at its disposal these days.
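‘Parse-worthy’ is not an exaggeration, either: an ODF document is just a zip archive of well-specified XML, something any scripting language can pick apart in a few lines. A minimal sketch (the filename is a placeholder):

    import zipfile

    # An .odt file is a zip archive; content.xml holds the document body.
    with zipfile.ZipFile("example.odt") as odf:
        print(odf.namelist())                 # content.xml, styles.xml, meta.xml, ...
        body = odf.read("content.xml").decode("utf-8")
        print(body[:200])                     # raw XML, ready for any parser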

There’s no changing any of this, of course. The horse is miles away, and anyway the barn burned down in the previous millennium. The document format proxy war currently underway at the ISO is all the evidence I need to know that I’ll be dealing with stupid stupid stupid formatting issues for years to come. I will continue to be unable to properly structure a document past about the 80th percentile, which is worse than not at all. I will continue to deal with visual formatting as my only means to infer context and structure, leaving me with very little capacity to do anything useful with the bloody things except to print them out and leave them on someone’s desk.

Maybe I’ll just stop using them at all. Maybe I’ll just start doing everything on the web and never print again.

I’m half serious about this, actually. At least on the Web, the idea that content and presentation are separate things isn’t heresy. At least on the Web, I can archive, search, contextualise, comment, plan, structure and collaborate without having to wade through steaming piles of cruft all the time.

At least on the Web, I can choose which steaming piles I step into.

I’m going to start recommending people stop using Word as an authoring medium. There are far better, simpler tools for every task, and the word processor has been appropriate for exactly none of them for too long now. Sometimes you have to destroy the document in order to save it.

Stop Bad Errors

I recently upgraded to Ubuntu 8.04, which comes with the most recent beta of Firefox 3.0. The new version of Firefox has a number of interesting features, not the least of which is a set of measures to reduce drive-by infection of PCs.

If they wander from the beaten path, people now see a big red sign warning them about so-called ‘Attack Sites’ – websites that are reported to have used various means to infect visiting systems with malicious software:

The graphic is fairly well done, but interestingly, there’s no obvious way to over-ride the warning and go to the site anyway. Not that one would want to, but it does raise the bar for circumventing this anti-rube device while raising questions about who gets to decide what’s bad and what’s good.

The ‘Get Me Out Of Here!’ button smacks of Flickr-style smarminess, sending (in my humble opinion) the wrong kind of message. Either be the police constable or be my buddy, but don’t try to be both. That’s just patronising.

I followed the second button to see how the situation would be explained to the curious. I was brought to a page providing a less-than-illuminating statement that the site in question had been reported to be infected by so-called ‘badware’.

The StopBadware.org service tracks websites whose content has been compromised, deliberately or not, and provides data about these sites to the public in order to protect Internet users from drive-by infection. With sponsorship from Google, Lenovo, Sun, PayPal, VeriSign and others, the service is obviously viewed in the corporate community as a necessary and responsible answer to the issue of malware infection.

At the time of this writing, the StopBadware database listed over a quarter of a million websites as infected.

The report page itself was a less-than-stellar example of information presentation, especially for a security-related topic. In the top left corner is a colour-coded circle with three states:

  • Safe: No StopBadware partners are reporting badware behavior on this site.
  • Caution: One or more StopBadware partners are reporting badware behavior on this site.
  • Badware: StopBadware testing has found badware behavior on this site.

So the difference between red and yellow here is not one of degree; it’s based on who reported it. Not only is this useless as a threat measurement, it sends the wrong message to people using the service, implying that there’s a distinction to be made between what StopBadware finds out for themselves and what their partners find. By treating the sources differently, they’re inadvertently drawing a line between gospel and rumour, suggesting that some sources are less reliable than others.

The report page for the domain in question is populated using the GET method, meaning that you can plug any domain name right into the address bar (if you know the URL components) and get a report on it. Unfortunately, it never occurred to the good people at StopBadware that some might want to use this capability to check the status of an arbitrary domain. (Amusingly, this method also circumvents the captcha on the ‘official’ report page.)
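Because the page is just a GET request, checking a domain can also be scripted in a couple of lines. A sketch of the idea; the endpoint and parameter name below are placeholders, not StopBadware’s actual URL components:

    import urllib.parse

    def report_url(domain: str) -> str:
        # Hypothetical endpoint and query parameter, for illustration only.
        return "https://example.org/reports/lookup?" + urllib.parse.urlencode(
            {"reportname": domain}
        )

    print(report_url("mydomain.example"))   # paste into the address bar, or fetch it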

When I checked the status of my own domain, I was informed that, in effect, I’d recently stopped beating my wife:

Google has removed the warning from this site.

It’s interesting when you’re faced with a sentence in which nearly every word is wrong. Google has removed the site? Where am I? Isn’t this StopBadware? Removed the warning from this site? There never was one. And even if there was a warning at one point in time, people don’t need to be told that. This message is a bit like saying, ‘So-and-so is a great guy! He doesn’t drink at all any more.’

I applaud the StopBadware service and the concept, and I look forward to the day when someone actually does a bit of usability research for them.

P.S. Could we please do something about the term ‘badware’? It’s almost sickeningly patronising. Some might argue that terms like ‘virus’, ‘trojan’ and ‘malware’ are too arcane, but I say we should just pick one and stick with it, regardless of how accurate it actually is.

People know and (ab)use the term ‘virus’, so why don’t we get the geek-stick out of our lexical butt and just use it? It’s a virus. You’ve got a virus. Who cares what it is or how you got it. You got a virus and now your computer needs to be treated before you can use it safely again. Now, how hard was that?

Fix This and Tell Me When You're Done

[First written in February of 2004. I’m reposting it here for posterity, and because it came up in conversation earlier today. There’ve been a few serious attacks against expats recently, including a murder and a particularly brutal rape. The perception among some is of a sudden uptick in violent crime. I recounted this story to suggest that plus ça change, plus c’est la même chose.]

The attack happened last Monday in the afternoon. It didn’t last long, but it left her with a concussion and a broken collarbone.

She was in her apartment, had been for a little while. She settled herself down at her laptop to write up some workshop notes. She heard a noise from the front bedroom, empty now because her friend had left precipitately after no one listened to her fears. She stood, not sure whether to investigate or flee. A man appeared in the doorway, and knocked her down hard as she started to scream. The broken bone immobilised her, so all she could do was scream as loud as she could. Her assailant fled within seconds.

And nobody came.

Walk Like a Dinosaur

Michael Krigsman’s most recent entry in the IT Project Failures blog is an interesting, colourfully-illustrated and upside-down look at the relationship between IT and traditional business.

His question, based on numerous similar postulations, is whether IT is becoming extinct. His answer (you knew it was a rhetorical question, right?) goes like this:

Since the days of punch cards, IT has believed itself to be guardian of precious computing resources against attacks from non-technical barbarians known as “users.” This arrogant attitude, born of once-practical necessity in the era of early data centers, reflects inability to adapt to present-day realities. Such attitudes, combined with recent technological and social changes, are pushing IT to share the fate of long-extinct dinosaurs.

The arguments he offers in support of this thesis are all valid to some degree, and all supportive of what he’s positing, but he somehow manages to miss the point that matters most to business:

Monolithic, top-down, IT-as-bureaucracy approaches are being subverted by recent changes in technology and services, but so too is business in general.


Gooooolag

UPDATE: How wrong could I be about the severity of this threat? Very wrong, apparently. I haven’t confirmed it yet, but it’s hard to imagine how this week’s mass server hack could have happened without tools like the one described below. I’ll write more about this in this week’s column….


Heh, cute:

Cult of the Dead Cow Announces Goolag Vulnerability Search Engine. Once you get past the Chinese porn silliness, there’s a real story here:

Google’s effectiveness as a search engine also makes it an effective… well, search engine. Common website weaknesses are exposed by search engines such as Google, and anyone can access them by using specially crafted queries that take advantage of Google’s advanced searching capabilities. As the cDc press release indicates, there are approximately 1500 such searches published and readily accessible on the Internet. And now the cDc has built a(n a)cutely satirical web front end and is offering a downloadable desktop search application for Windows, giving script kiddies the world over something else to do with their time.
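For the curious, these ‘specially crafted queries’ are what are usually called Google dorks: ordinary searches built from Google’s documented operators (site:, inurl:, intitle:, filetype:). The patterns below are generic illustrations of the genre, not entries from the cDc’s published list:

    import urllib.parse

    # Generic examples of dork-style queries; a tool like Goolag simply automates
    # running hundreds of these, optionally scoped to a chosen domain with site:.
    dorks = [
        'intitle:"index of" "parent directory" backup',
        'inurl:phpinfo.php "PHP Version"',
        'filetype:sql "INSERT INTO" password',
    ]
    for q in dorks:
        print("https://www.google.com/search?" + urllib.parse.urlencode({"q": q}))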

What effect has this had on website security? It’s difficult to tell. The principle of using Google as a scanning tool has been common knowledge since at least 2006, but according to Zone-H, who record large numbers of website defacements every year, the only significant increase in website attacks since then was the result of an online gang war between various Russian criminal factions, back in 2006. Ignoring that anomalous rise in activity, the rate of attack actually fell slightly in 2007 compared to recent years, relative to the number of active websites.

Zone-H’s latest report proves only that the percentage of insecurely configured websites scales roughly linearly with the number of available websites, and that the choice of technology has almost no bearing on the likelihood of a successful attack. Indeed, most exploits are simple attacks on inherent weaknesses: guessing admin passwords or copying them when they’re sent in cleartext, misconfigured shares and unsafe, unpatched applications. Attacks requiring any amount of individual effort are not very common at all. Man-in-the-middle attacks ranked only fifth in the list of common exploits, representing just 12% of the total. But researchers have elsewhere noted that cross-site scripting attacks are on the rise, and are being used mostly by spammers to increase the size of their botnets.

The lesson here is fairly obvious: Making simple mistakes is the easiest way to expose yourself to attack. And search tools like Goolag make finding those mistakes remarkably easy. You won’t be targeted so much as stumbled across. Given the recent rise in the number of websites being used to inject malicious software into people’s computers, spammers and other online criminals appear to have a strong incentive to use even the less popular websites to ply their trade.

Your choice of technology won’t save you, either. Most popular web servers are fairly secure these days and though not all server operating systems are created equal, the big ones have improved markedly. But the same cannot be said of the applications and frameworks that run on them. The old adage that ease of use is universal still applies. When you make things easy for yourself and your users, you are liable to make things easy for other, less welcome guests as well.

The lesson for the average website owner: Do the simple things well. Don’t waste your time trying to imagine how some intrepid cyber-ninja is going to magically fly across your digital alligator moat. Just make sure your systems are well-chosen and properly patched, pay attention to access control and treat authentication seriously. Statistically, at least, this will drop your chances of being Pwned to nearly nil, or close enough as makes no never mind.

Idea: Personal Navajo

Instead of exposing the painful ritual of public/private key exchange, software developers should be using metaphors of human trust and service.

A ‘translator’ service, for example. The user ‘invents’ an imaginary language, then decides who among her friends is allowed to speak it with her. She then instructs her ‘translator’ (e.g. her own personal Navajo) to convey messages between herself and her friend’s translator.

(Only the personal Navajos actually need to speak this ‘language’ of course. As far as the two correspondents are concerned, the only change is that they’re sending the message via the ‘translator’ rather than directly, but even that is a wafer-thin bit of functionality once the channel is established and the communications process automated.)

Quick encryption, well understood, and easy to implement. Most importantly, you don’t have to explain encryption, public and private keys, or any other security gobbledygook to someone who really doesn’t want – and shouldn’t need – to hear it.
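For what it’s worth, the machinery the ‘translator’ would be hiding is already small. A minimal sketch, assuming the PyNaCl library; the class and names here are illustrative only, not a real implementation:

    from nacl.public import PrivateKey, Box   # pip install pynacl

    class Translator:
        """Holds a keypair so its owner never has to see one."""
        def __init__(self):
            self._key = PrivateKey.generate()

        @property
        def public_key(self):
            return self._key.public_key        # safe to hand to a friend's translator

        def channel(self, other_public_key):
            # The shared 'invented language': a box only these two translators speak.
            return Box(self._key, other_public_key)

    alice, bob = Translator(), Translator()
    to_bob = alice.channel(bob.public_key)
    to_alice = bob.channel(alice.public_key)

    ciphertext = to_bob.encrypt(b"meet me at six")
    print(to_alice.decrypt(ciphertext))         # b'meet me at six'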

Update: Of course, the greatest weakness to this idea is if Microsoft were to create an implementation of this and name it Bob.

Ghost in the Machine

In the most recent RISKS mailing list digest, Peter Neumann includes a brief article by Adi Shamir describing a method of exploiting minor faults in math logic to break encryption keys in a particular class of processor.

Titled Microprocessor Bugs Can Be Security Disasters, the article makes an interesting argument. In fairly concise terms, Shamir outlines an approach that quickly circumvents much of the hard work of breaking private keys, no matter how long the key. He uses the RSA encryption method in his example, probably out of humility. With even my limited knowledge of mathematics, I was able to follow the broad strokes of the approach.

Put most simply, if you know there is a math flaw in a particular kind of processor, then you can exploit that by injecting ‘poisoned’ values into the key decryption process. By watching what happens to that known value, you can infer enough about the key itself that you can, with a little more math, quickly break the private key.

And of course, once you’ve got someone’s private key, you can see anything that it’s been used to encrypt.
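
Shamir’s note concerns a deliberately known multiplier bug, but the flavour of this class of attack is easiest to see in its better-known cousin, the RSA-CRT fault attack described by Boneh, DeMillo and Lipton: a single wrong multiplication during signing is enough to let an observer factor the modulus. A toy sketch, with deliberately tiny primes standing in for real key sizes:

    from math import gcd

    # Toy RSA key (real keys use 2048-bit or larger moduli).
    p, q = 61, 53
    N = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))            # private exponent

    def sign_crt(m, faulty=False):
        """Sign m using the CRT optimisation; optionally corrupt one half."""
        s_p = pow(m, d % (p - 1), p)
        s_q = pow(m, d % (q - 1), q)
        if faulty:
            s_p ^= 1                             # one flipped bit: the 'buggy multiplier'
        h = (pow(q, -1, p) * (s_p - s_q)) % p    # CRT recombination
        return s_q + q * h

    m = 1234
    s_bad = sign_crt(m, faulty=True)
    # The attacker needs only the message, the faulty signature and the public key:
    factor = gcd(pow(s_bad, e, N) - m, N)
    print(factor, N // factor)                   # recovers q and p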

This is in some ways a new twist on a very old kind of attack. Code breakers have always exploited mechanical weaknesses in encryption and communications technology. During the Second World War, code breakers in the UK learned to identify Morse code transmissions by the radio operator’s ‘hand’ – the particular rhythm and cadence that he used. This sometimes gave them more information than the contents of the communications themselves. Flaws in the Enigma coding machines allowed the Allies to break the cipher some time before Alan Turing and his colleagues got their ‘Bombe’ machines working efficiently:

One mode of attack on the Enigma relied on the fact that the reflector (a patented feature of the Enigma machines) guaranteed that no letter could be enciphered as itself, so an A could not be sent as an A. Another technique counted on common German phrases, such as “Heil Hitler” or “please respond,” which were likely to occur in a given plaintext; a successful guess as to a plaintext was known at Bletchley as a crib. With a probable plaintext fragment and the knowledge that no letter could be enciphered as itself, a corresponding ciphertext fragment could often be identified. This provided a clue to message keys.
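
The ‘no letter enciphers to itself’ property is simple enough to put to work. A minimal sketch of the elimination step in crib dragging; the ciphertext here is made up for illustration:

    def possible_offsets(ciphertext: str, crib: str):
        """Offsets where the crib could align: no letter may match its ciphertext."""
        hits = []
        for offset in range(len(ciphertext) - len(crib) + 1):
            window = ciphertext[offset:offset + len(crib)]
            if all(c != p for c, p in zip(window, crib)):
                hits.append(offset)
        return hits

    # Any offset where 'HEILHITLER' would have a letter 'encrypt' to itself is ruled out.
    print(possible_offsets("QFZWRWIVTYRESXBFOGKUHQBAISE", "HEILHITLER"))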

These days, computing processors and encryption are used in almost every aspect of our lives. The risks presented by this new class of attack are outlined in fairly plain English by Shamir:

How easy is it to verify that such a single multiplication bug does not exist in a modern microprocessor, when its exact design is kept as a trade secret? There are 2^128 pairs of inputs in a 64×64 bit multiplier, so we cannot try them all in an exhaustive search. Even if we assume that Intel had learned its lesson and meticulously verified the correctness of its multipliers, there are many smaller manufacturers of microprocessors who may be less careful with their design. In addition, the problem is not limited to microprocessors: Many cellular telephones are running RSA or elliptic curve computations on signal processors made by TI and others, FPGA or ASIC devices can embed in their design flawed multipliers from popular libraries of standard cell designs, and many security programs use optimized “bignum packages” written by others without being able to fully verify their correctness. As we have demonstrated in this note, even a single (innocent or intentional) bug in any one of these multipliers can lead to a huge security disaster, which can be secretly exploited in an essentially undetectable way by a sophisticated intelligence organization.

I’m surprised that I haven’t seen much concern voiced about this class of attacks. Maybe I just hang out with an insufficiently paranoid crowd….

Stars in Their Eyes

In an online discussion recently, I defended the XO laptop by mentioning how impressed people were when I conducted demonstrations of the hardware and software. If the XO is such a mediocre piece of hardware, “why,” I asked, “do people walk away with stars in their eyes?”

I went on to say that in my experience, I’d never seen any technological device more appropriate to the particular task of providing a useful learning environment for children in remote and/or underdeveloped areas.

This was met with a particularly vehement explosion of outrage, accompanied by accusations that I was “happy because there’s a new toy in the block, to help [me] with [my] ideologically-motivated occupation.”

I confess to an impish desire to agree with that accusation.


Web Standards – A Rant

It’s very common on Slashdot and other, er, technical fora, to see people make assertions like the following:

IE extensions [of existing standards] have proven to be a very good thing for the web overall. It has always been IE that has pushed the limits of dynamic web pages through the inclusion of similar extensions (primarily for the development of Outlook Web Access) which have given birth to the technologies that fuel AJAX and other modern web techniques.

What an interesting viewpoint. I couldn’t disagree more.

The ‘Embrace and Extend’ strategy on which Microsoft has relied since about 1998 is designed to be divisive and ultimately to support Microsoft’s one interest: by hook or by crook, to land everyone on the Microsoft platform. They worked with little or no support or cooperation from any other body[*] and more often than not used their position to subvert the activities of others. They published competing specifications and duplicated functionality through their own proprietary implementations.

Now before we go any further, it’s important to remember that this strategy was dressed up nicely, spoken about politely in marketing euphemisms and was seldom openly disparaging of competing technologies. It is also important to note that very few of the people actually responsible for the creation and fostering of standards ever felt anything but frustration and animosity toward these efforts to subvert the process. I’ve seen such luminaries as Lawrence Lessig and Sir Tim Berners-Lee stand up in public fora and state in absolutely unambiguous terms that ‘this MS technology is the single biggest threat faced by the web today.’ (WWW Conference, Amsterdam 2000, for those who care).

It’s true that there are some who have argued for accommodation, and while they’ve achieved short-term gains (RSS and SOAP, for example), the recent announcement of MS-only implementations and extensions of these standards offers further evidence that MS’ intentions are anything but benevolent.

Now, some may trot out the sorry old argument that a corporation’s job is to profit and damn the ethical/legal torpedoes, but the fact is that to most of the people working in standards, this is not the goal. Believe it or not, most of us actually care about the community, and feel that the way things are implemented is just as important as what gets done. So feel free to act as apologist for the soulless corporate machine if you must, but please, don’t pretend that that’s the only way things can be made to work.

Microsoft (and Netscape in its time) are not only guilty of skewing standards in their favour. They’re also guilty of something far more insidious: the infection of the application space with software designed to lock people into their proprietary approach to things. Often enough, the design is fatally compromised in the process. The example cited above, Outlook Web Access, is a prime example of how to break things in the name of lock-in.

Here’s a quick summary of just some of the ways in which Outlook Web Access, which encapsulates email access inside HTTP and passes it through ports 80/443 by default, is technically broken:

  • Caching proxy servers might or might not do the right thing – behaviour here is undefined
  • Traffic/network analysis is subverted
  • The security problem is compounded, as activity patterns have to be checked on more, not fewer, ports (think about it)
  • Likewise, security audits are far more difficult, as traffic has to be disambiguated
  • Security is subverted: users can simply tunnel high-volume traffic through to (at least) the DMZ with no guarantee that it’s being inspected (i.e. no one catches that the traffic is neither going to the web nor the Exchange server; each one assumes it’s going to the other and that it’s ‘okay’. Same goes with large volumes of outgoing information.)
  • Deliberate bypassing of firewall policies, promoting insecure configurations (e.g. pushing things through ports 80 and 443 as a matter of informal policy, reducing the firewall to an ornament)
  • Buggier software due to additional complexity
  • Non-standard, meaning (little or) nothing else will support it
  • Promotes software lock-in, which has cost and management implications
  • Promotes monoculture, which has cost, management and *security* implications
  • Protocols exist for this purpose already

That last point is the key. Why on earth would MS build an entirely new way to get one’s email when secure IMAP or POP3 already exist? Microsoft doesn’t particularly care about doing things better, they just want to make sure that their customers do things differently. Quality is seldom a concern, and as a result, it’s usually a casualty.
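
For contrast, this is roughly what ‘protocols exist for this purpose already’ looks like in practice: standard IMAP over TLS on its own well-known port, where a firewall or an audit can see the traffic for what it is. The host and credentials here are placeholders:

    import imaplib

    # IMAP over TLS on port 993: standard, widely supported, and easy to audit.
    with imaplib.IMAP4_SSL("mail.example.com", 993) as imap:
        imap.login("user@example.com", "app-password")
        imap.select("INBOX", readonly=True)
        status, data = imap.search(None, "UNSEEN")
        print(status, data)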

[*] It’s true that they were – and remain – members of such organisations as the World Wide Web Consortium.