The Pacific Economic Survey

Earlier this week, Australia unveiled the Pacific Economic Survey here in Port Vila. Present for the event were delegates from across the Pacific region, including Melanesia and Polynesia, as well as senior politicians from Australia. AusAID’s chief economist was also there to present the findings.

The report is the first of a series of annual surveys that will provide an overview and update of economic developments in the Pacific island region and Timor-Leste. It collates and summarises public data on various aspects of the region’s national economies, performs some comparative and collective analysis on the results, then provides a few basic recommendations.

The theme for this year’s report was Connectivity. The survey focuses on aviation, shipping and telecommunications. It argues that liberalisation, more input from the private sector, and a cooperative regional approach to the problems inherent in improving connectivity are keys to improving Pacific economies.

The findings in the area of telecommunications do much to validate the Government of Vanuatu’s market liberalisation strategy and provide every encouragement to expand upon it. The report addresses some potential pitfalls that might be encountered, primarily where access to technical expertise is concerned. And that is where it risks missing the boat.

Continue reading

Power and Politics – a Sketch

Chief Vincent Boulekone with Duncan Kerr

I had the privilege this week of being asked to take some photographs at the Vanuatu unveiling of the Pacific Economic Survey. The event was attended by two Australian Parliamentary Secretaries and by a number of fairly senior individuals in Vanuatu. The photos I took will be collected here.

I was proudest of the photo above. It’s of two veteran politicians whose approach and presentation could hardly be further apart.

Continue reading

Walk Like a Dinosaur

Michael Krigsman’s most recent entry in the IT Project Failures blog is an interesting, colourfully illustrated and upside-down look at the relationship between IT and traditional business.

His question, based on numerous similar postulations, is whether IT is becoming extinct. His answer (you knew it was a rhetorical question, right?) goes like this:

Since the days of punch cards, IT has believed itself to be guardian of precious computing resources against attacks from non-technical barbarians known as “users.” This arrogant attitude, born of once-practical necessity in the era of early data centers, reflects inability to adapt to present-day realities. Such attitudes, combined with recent technological and social changes, are pushing IT to share the fate of long-extinct dinosaurs.

The arguments he offers in support of this thesis are all valid to some degree, and all supportive of what he’s positing, but he somehow manages to miss the point that matters most to business:

Monolithic, top-down, IT-as-bureaucracy approaches are being subverted by recent changes in technology and services, but so too is business in general.

Continue reading

Gooooolag

UPDATE: How wrong could I be about the severity of this threat? Very wrong, apparently. I haven’t confirmed it yet, but it’s hard to imagine how this week’s mass server hack could have happened without tools like the one described below. I’ll write more about this in this week’s column….


Heh, cute:

Cult of the Dead Cow Announces Goolag Vulnerability Search Engine.

Once you get past the Chinese porn silliness, there’s a real story here:

Google’s effectiveness as a search engine also makes it an effective… well, search engine. Common website weaknesses are exposed by search engines such as Google, and anyone can access them by using specially crafted queries that take advantage of Google’s advanced searching capabilities. As the cDc press release indicates, there are approximately 1500 such searches published and readily accessible on the Internet. And now the cDc has built a(n a)cutely satirical web front end and is offering a downloadable desktop search application for Windows, giving script kiddies the world over something else to do with their time.
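For the curious, here’s roughly what these queries look like. The patterns below are generic illustrations of the genre rather than anything taken from the published lists, and the little script is just my own sketch of how you might assemble them – it is not the cDc’s tool:

    # 'Dork' queries lean on Google's documented operators (site:, inurl:,
    # intitle:, filetype:) to surface things that were never meant to be public.
    from urllib.parse import urlencode

    dorks = [
        'intitle:"index of" "backup"',      # open directory listings
        'filetype:log inurl:error',         # server error logs left in the webroot
        'inurl:phpinfo.php "PHP Version"',  # forgotten diagnostic pages
    ]

    for query in dorks:
        # Scoping with site: turns a global sweep into an audit of a single domain.
        print("https://www.google.com/search?" + urlencode({"q": "site:example.com " + query}))

Nothing here is exotic. The whole point is that Google has already done the crawling for you.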

What effect has this had on website security? It’s difficult to tell. The principle of using Google as a scanning tool has been common knowledge since at least 2006, but according to Zone-H, who record large numbers of website defacements every year, the only significant increase in website attacks since then was the result of an online gang war between various Russian criminal factions, back in 2006. Ignoring that anomalous rise in activity, the rate of attack actually fell slightly in 2007 compared to recent years, relative to the number of active websites.

Zone-H’s latest report proves only that the percentage of insecurely configured websites scales on a roughly linear basis with the number of available websites, and that the choice of technology has almost no bearing on the likelihood of a successful attack. Indeed, most exploits are simple attacks on inherent weaknesses: guessing admin passwords or copying them when they’re sent in cleartext, misconfigured shares and unsafe, unpatched applications. Attacks requiring any amount of individual effort are not very common at all. Man-in-the-middle attacks rated only fifth place in the list of common exploits, representing only 12% of that total. But researchers have elsewhere noted that cross-site-scripting attacks are on the rise, and are being used mostly by spammers to increase the size of their bot nets.

The lesson here is fairly obvious: Making simple mistakes is the easiest way to expose yourself to attack. And search tools like Goolag make finding those mistakes remarkably easy. You won’t be targeted so much as stumbled across. Given the recent rise in the number of websites being used to inject malicious software into people’s computers, spammers and other online criminals appear to have a strong incentive to use even the less popular websites to ply their trade.

Your choice of technology won’t save you, either. Most popular web servers are fairly secure these days and though not all server operating systems are created equal, the big ones have improved markedly. But the same cannot be said of the applications and frameworks that run on them. The old adage that ease of use is universal still applies. When you make things easy for yourself and your users, you are liable to make things easy for other, less welcome guests as well.

The lesson for the average website owner: Do the simple things well. Don’t waste your time trying to imagine how some intrepid cyber-ninja is going to magically fly across your digital alligator moat. Just make sure your systems are well-chosen and properly patched, pay attention to access control and treat authentication seriously. Statistically, at least, this will drop your chances of being Pwned to nearly nil, or close enough as makes no never mind.
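To make that concrete, here’s a minimal sketch of a self-audit along those lines. The paths and markers are invented for illustration and are nowhere near exhaustive, and it assumes the Python requests library; the principle is simply to go looking for your own obvious mistakes before somebody else stumbles across them:

    import requests

    SITE = "https://example.com"   # your own site -- never anyone else's

    # A few of the 'simple mistakes' discussed above.
    obvious_mistakes = {
        "/phpinfo.php": "PHP Version",   # forgotten diagnostic page
        "/.git/config": "[core]",        # exposed version-control metadata
        "/backup.sql":  "INSERT INTO",   # database dump left in the webroot
    }

    for path, marker in obvious_mistakes.items():
        response = requests.get(SITE + path, timeout=5)
        if response.status_code == 200 and marker in response.text:
            print("exposed: " + SITE + path)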

Idea: Personal Navajo

Instead of exposing the painful ritual of public/private key exchange, software developers should be using metaphors of human trust and service.

A ‘translator’ service, for example. The user ‘invents’ an imaginary language, then decides who among her friends is allowed to speak it with her. She then instructs her ‘translator’ (e.g. her own personal Navajo) to convey messages between herself and her friend’s translator.

(Only the personal Navajos actually need to speak this ‘language’ of course. As far as the two correspondents are concerned, the only change is that they’re sending the message via the ‘translator’ rather than directly, but even that is a wafer-thin bit of functionality once the channel is established and the communications process automated.)

Quick encryption, well understood, and easy to implement. Most importantly, you don’t have to explain encryption, public and private keys, or any other security gobbledygook to someone who really doesn’t want – and shouldn’t need – to hear it.
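For what it’s worth, here’s a rough sketch of the metaphor in code. It leans on PyNaCl’s Box construction purely as an example, and the class name and methods are invented for the purpose; the point is only that the private key never leaves the translator:

    from nacl.public import PrivateKey, Box

    class Translator:
        """Holds the keys so its owner never has to see them."""

        def __init__(self):
            # Generating the keypair is the 'invention' of the private language.
            self._key = PrivateKey.generate()

        def introduce(self):
            # All a friend's translator ever receives is the public half.
            return self._key.public_key

        def convey(self, message, to):
            # Seal a message for another translator; nonces are handled internally.
            return Box(self._key, to.introduce()).encrypt(message)

        def interpret(self, sealed, sender):
            return Box(self._key, sender.introduce()).decrypt(sealed)

    # Alice and Bob each hire a personal Navajo; neither ever touches a key.
    alice, bob = Translator(), Translator()
    sealed = alice.convey(b"meet me at the nakamal", to=bob)
    print(bob.interpret(sealed, sender=alice))   # b'meet me at the nakamal'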

Update: Of course, the greatest weakness of this idea would be if Microsoft were to create an implementation of it and name it Bob.

Network Neutrality: Not Negotiable

Someone asked:

I’m curious what the[…] community thinks… what if a company such as Comcast were to offer two plans:

1. $30/mo – The internet as we know it today without any preference to content providers, advertising, etc
2. $15/mo – An internet where some content providers get preference, subsidizing the lower monthly bill.

If companies offered a choice would we still care?

Effectively, it would be no choice at all. It would, in fact, be disastrous.

The effects described in George Akerlof’s 1970 paper, The Market for ‘Lemons’, come into play in such a scenario. In a nutshell, the paper argues that certain markets (like used cars) favour the sale of ‘lemons’ over quality. The reason is that it’s easier to simply wax and buff a lemon (and rely on the buyer’s ignorance) than it is to do the right thing and service it properly before re-selling.

The reason this approach works is that buyers can’t see what’s under the hood and, generally speaking, wouldn’t know what to look for even if they could. So instead of paying well for quality, they tend to buy the cheapest item, regardless of its condition. The same is true of Internet service. People just don’t know what’s possible. Worse still, they don’t have the ability to recognise whether they’re getting what they’re supposed to or not.
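A toy version of Akerlof’s argument makes the unravelling easy to see. The numbers here are invented purely for illustration:

    # Buyers can't observe quality, so they will only pay the average value of
    # whatever is on offer -- and the quality sellers are the first to drop out.
    value_to_buyer = {"quality": 3000, "lemon": 1000}
    cost_to_seller = {"quality": 2500, "lemon": 500}

    offered = set(value_to_buyer)
    while True:
        blind_price = sum(value_to_buyer[c] for c in offered) / len(offered)
        still_selling = {c for c in offered if blind_price >= cost_to_seller[c]}
        if still_selling == offered:
            break
        offered = still_selling

    print(offered)   # {'lemon'} -- only the lemons are left on the market

The quality seller needs 2,500 to make it worth his while, but blind buyers will only pay the 2,000 average, so he leaves; once he’s gone, only the lemons remain.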

So if the telcos were to foist a divided offering on their customers, they could rely on ignorance to invoke a market for ‘lemons’. People see no extra value in buying the better service, so they flock en masse to the cheaper one. The telco then discontinues the more expensive one, citing lack of consumer interest.

Minimum operating standards such as Network Neutrality were put into place to protect consumers and the market itself. Absent Net Neutrality, the potential for abuse of control over traffic by carriers is far too great. No compromise is possible in this regard, because degradation of Net Neutrality is a degradation of the market itself.

Ghost in the Machine

In the most recent RISKS mailing list digest, Peter Neumann includes a brief article by Adi Shamir describing a method of exploiting minor faults in math logic to break encryption keys in a particular class of processor.

Titled Microprocessor Bugs Can Be Security Disasters, the article makes an interesting argument. In fairly concise terms, Shamir outlines an approach that quickly circumvents much of the hard work of breaking private keys, no matter how strong the encryption. He uses the RSA key encryption method in his example, probably out of humility. With even my limited knowledge of mathematics, I was able to follow the broad strokes of the approach.

Put most simply, if you know there is a math flaw in a particular kind of processor, then you can exploit it by injecting ‘poisoned’ values into the key decryption process. By watching what happens to those known values, you can infer enough about the key itself that, with a little more math, you can quickly break the private key.

And of course, once you’ve got someone’s private key, you can see anything that it’s been used to encrypt.
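Shamir’s own construction targets a buggy multiplier with carefully chosen inputs, but the flavour of the thing is easiest to see in the older and simpler RSA-CRT fault attack of Boneh, DeMillo and Lipton, where a single wrong intermediate value hands over the whole key. Here’s a toy sketch with deliberately tiny numbers (the key sizes are absurd on purpose, and Python 3.8+ is assumed for the modular inverses):

    from math import gcd

    # A toy RSA key, built the usual way.
    p, q = 61, 53
    N = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

    m = 42                                  # message to be signed

    # Correct CRT signature: work modulo p and q separately, then recombine.
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)
    q_inv = pow(q, -1, p)
    s = (s_q + q * ((q_inv * (s_p - s_q)) % p)) % N
    assert pow(s, e, N) == m                # the signature verifies

    # Now the 'poisoned' run: the half computed modulo p comes out wrong,
    # as it would if the multiplier mangled one particular product.
    s_p_bad = (s_p + 1) % p
    s_bad = (s_q + q * ((q_inv * (s_p_bad - s_q)) % p)) % N

    # The attacker needs only the faulty signature and the public key:
    # s_bad still matches m modulo q but not modulo p, so the gcd spits out q.
    factor = gcd((pow(s_bad, e, N) - m) % N, N)
    print(factor, N // factor)              # 53 61 -- the private key is gone

One bad intermediate value, one gcd, and the private key falls out. That, writ large and hidden in the silicon, is the disaster Shamir is pointing at.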

This is in some ways a new twist on a very old kind of attack. Code breakers have always exploited mechanical weaknesses in encryption and communications technology. During the Second World War, code breakers in the UK learned to identify Morse code transmissions through the radio operator’s ‘hand’ – the particular rhythm and cadence that he used. This sometimes gave them more information than the contents of the communications themselves. Flaws in the Enigma coding machines allowed the Allies to break the device some time before Alan Turing’s electromechanical ‘Bombe’ machines were working efficiently:

One mode of attack on the Enigma relied on the fact that the reflector (a patented feature of the Enigma machines) guaranteed that no letter could be enciphered as itself, so an A could not be sent as an A. Another technique counted on common German phrases, such as “Heil Hitler” or “please respond,” which were likely to occur in a given plaintext; a successful guess as to a plaintext was known at Bletchley as a crib. With a probable plaintext fragment and the knowledge that no letter could be enciphered as itself, a corresponding ciphertext fragment could often be identified. This provided a clue to message keys.

These days, computing processors and encryption are used in almost every aspect of our lives. The risks presented by this new class of attack are outlined in fairly plain English by Shamir:

How easy is it to verify that such a single multiplication bug does not exist in a modern microprocessor, when its exact design is kept as a trade secret? There are 2^128 pairs of inputs in a 64×64 bit multiplier, so we cannot try them all in an exhaustive search. Even if we assume that Intel had learned its lesson and meticulously verified the correctness of its multipliers, there are many smaller manufacturers of microprocessors who may be less careful with their design. In addition, the problem is not limited to microprocessors: Many cellular telephones are running RSA or elliptic curve computations on signal processors made by TI and others, FPGA or ASIC devices can embed in their design flawed multipliers from popular libraries of standard cell designs, and many security programs use optimized “bignum packages” written by others without being able to fully verify their correctness. As we have demonstrated in this note, even a single (innocent or intentional) bug in any one of these multipliers can lead to a huge security disaster, which can be secretly exploited in an essentially undetectable way by a sophisticated intelligence organization.

I’m surprised that I haven’t seen much concern voiced about this class of attacks. Maybe I just hang out with an insufficiently paranoid crowd….

Reality Check

Jason Hiner at TechRepublic has written an article entitled “How Microsoft beat Linux in China and what it means for freedom, justice, and the price of software.” He contends that Microsoft’s ‘victory’ over Linux in China is total.

But what kind of a victory are we talking about here? Well, they gave away access to their crown jewels, the source code:

“In 2003, Microsoft began a program that allowed select partners to view the source code of Windows, and even make some modifications. China was one of 60 countries invited to join the program.”

They cut prices drastically:

“Microsoft got serious about competing on price by offering the Chinese government its Windows and Office software for an estimated $7-$10 per seat (in comparison to $100-$200 per seat in the U.S., Europe, and other countries).”

And they caved completely on piracy and so-called Intellectual Property enforcement:

“Microsoft’s initial strategy was to work to get intellectual property laws enforced in China, but that was an unmitigated disaster. Microsoft realized that it was powerless to stop widespread piracy in China, so it simply threw up the white flag.”

So what exactly did Microsoft win, again? This article is rife with untested assumptions. Let’s establish a bit of context here before going too far.

Continue reading

NSA for Dummies

There’s been a lot of discussion recently about the NSA eavesdropping programme, which reportedly has been surveilling US citizens without first getting a warrant. In one of these discussions, someone asked:

What’s the worst case scenario? How big could it be?

That’s a really good question. It occurs to me that no one has really attempted to address this yet in layman’s terms, so here goes….

Continue reading