Wednesday, July 23, 2008

The déjà vu Java Card 3.0

Soon, Java Card 3.0 should be widely available to developers. Looking at the list of new features and specifications, I can't help having a déjà vu feeling, and I am not talking about the Classic Edition here.

Here it goes:
Multithreading
Network Communications
Access Control


When was the last time you saw such an announcement of combined technologies? In my view, it was around 1993, when Microsoft introduced Windows for Workgroups. At that time there was a lot of talk about the "peer-networking capabilities as a security risk" (http://www.byte.com/art/9402/sec10/art3.htm), and today things aren't that different (http://javacard.vetilles.com/2007/12/17/countdown-which-security-in-java-card-3/).

The other analogy is the relatively low recognition of the importance of the multithreading feature. A Google search for "Java Card" + security returned 259,000 hits, while a search for "Java Card" + multithreading returned only 639 hits!
But think about it: today's acceptance of computer applications is driven by multithreading, not by security or raw power. Multithreading is the key. On the desktop, users would no longer accept having to switch between applications: email and RSS feeds keep running, and downloads have to happen silently in the background. On the server side things aren't much different; multithreading is key to scalability and performance. Users are willing to compromise (e.g. the Internet before broadband) as long as they can do several things at once.

So am I snubbing Java Card 3.0? Not at all. I just believe that the most powerful technology in Java Card 3.0 is multithreading, and it seems overlooked. It allows the other technologies (web applications, transactions, inter-application communications…) to deliver their full potential. We could see a revolution there similar to what the Internet was to the PC back in the mid-1990s, and then to B2C and B2B applications.
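
To make the point concrete, here is a minimal, hypothetical sketch in plain Java of what multithreading buys an on-card application: a housekeeping task runs in the background while requests keep being served. The class and method names are invented for illustration, and the actual Java Card 3.0 Connected Edition threading API may differ in its details.

// Minimal, hypothetical sketch: a card application that keeps serving
// requests while a housekeeping task runs concurrently. Plain Java SE
// threads are used here for illustration only.
public class ConcurrentCardApp {

    // Background task: e.g. refreshing a cached ticket or audit log.
    private final Thread housekeeping = new Thread(() -> {
        while (!Thread.currentThread().isInterrupted()) {
            refreshCache();
            try {
                Thread.sleep(5_000);            // wake up every 5 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });

    public void start() {
        housekeeping.setDaemon(true);
        housekeeping.start();                   // runs alongside request handling
    }

    // Foreground path: called for each incoming request, never blocked
    // by the housekeeping work above.
    public String handleRequest(String request) {
        return "processed: " + request;
    }

    private void refreshCache() {
        // placeholder for the background work
    }

    public static void main(String[] args) {
        ConcurrentCardApp app = new ConcurrentCardApp();
        app.start();
        System.out.println(app.handleRequest("balance-inquiry"));
    }
}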

So are multithreading, communications and access control enough to make a smart card revolution? Probably not; a killer app is required, just as the web browser and the search engine were for the web. And by the way, such a killer app and its usage are likely to suck up most of the resources that a moment ago seemed unlimited.

Java Card 3.0 has the potential to bring a breath of fresh air to a confused card industry. But more effort has to be made to make it sexy: the best new concept in the world won't make money unless people know it's interesting and become enthusiastic about it.

So what will make you say hello to Java Card?

Wednesday, July 16, 2008

The Sisyphean Challenge of Security

Recently, the news broke that NXP is suing Radboud University Nijmegen (in the Netherlands) to block publication of a research paper, "A Practical Attack on the MIFARE Classic".

So what's the big deal? MIFARE was always known to be very weak; a brute-force attack would take only days. So have the researchers done more damage? Well, they have exposed two flaws:
  • Through reverse engineering, they have identified a new vulnerability in the MIFARE card and its cryptographic algorithm that makes the attack work even faster.
  • NXP adopted a security-by-obfuscation approach for the MIFARE card, which led to a poor design and to the vulnerability above.
The second flaw is embarrassing, but there is nothing new here. History shows that for as long as people have invented cryptographic mechanisms, other people have found ways (sometimes inventive ways) to break them.

A very good example is the work by Paul Kocher on Simple and Differential Power Analysis (SPA/DPA). In the mid-1990s, people looked at the smart card as a practical medium to securely store secret and private keys and to perform cryptographic operations with them. Such keys are stored in unreadable memory, and the cryptographic operations rely on practically unbreakable algorithms (e.g. RSA's security is based on the difficulty of factoring large numbers into prime factors). Neither of those criteria was successfully challenged at the time (nor are they today, but that doesn't mean they won't be tomorrow). Yet the combination of the two was surprisingly insecure. By carefully measuring the power consumption (or its variation) during cryptographic operations, the researchers, thanks to their knowledge of the algorithms involved, managed to discover characteristics of the key and eventually recover the key itself.
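
To see why power consumption can betray a key, consider a sketch (my own illustration, not taken from the paper) of naive square-and-multiply modular exponentiation in Java: the extra multiplication performed whenever a key bit is 1 is exactly the kind of data-dependent work whose power signature SPA/DPA can pick up. The numbers in main are small toy values chosen only for illustration.

import java.math.BigInteger;

public class NaiveModExp {

    // Left-to-right square-and-multiply: the branch below depends on a key bit,
    // so a 1-bit costs one extra multiplication -- exactly the data-dependent
    // behaviour that SPA/DPA exploits. Real implementations use constant-time,
    // blinded algorithms instead.
    static BigInteger modExp(BigInteger base, BigInteger key, BigInteger modulus) {
        BigInteger result = BigInteger.ONE;
        for (int i = key.bitLength() - 1; i >= 0; i--) {
            result = result.multiply(result).mod(modulus);   // square: always performed
            if (key.testBit(i)) {                            // branch on a secret key bit...
                result = result.multiply(base).mod(modulus); // ...extra multiply only for 1-bits
            }
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(3233);  // toy modulus (61 * 53)
        BigInteger d = BigInteger.valueOf(2753);  // toy private exponent
        BigInteger c = BigInteger.valueOf(855);   // toy ciphertext of the message 123
        System.out.println(modExp(c, d, n));      // prints 123
    }
}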

The reality is that you should never underestimate your opponent. In 1587, this cost Mary, Queen of Scots her life. While plotting the murder of Queen Elizabeth, she sent encrypted messages to her supporters. Those messages were intercepted and decrypted, and Mary, Queen of Scots was beheaded.

Events like this led Auguste Kerckhoffs to state, as far back as 1883, that "any ciphering method can be known to the enemy and the security of the whole system depends only on the choice of key". Today, this principle is still valid. Essentially it means that the design of a cryptographic system should be published and subjected to peer review; the fewer secrets in the cryptographic system, the higher its chances of being secure. Delaying the publication of such a design often only delays the identification of flaws and increases both the designer's embarrassment and the cost of fixing them. Unfortunately, it is only a matter of time, as theoretical and technical advances are made, before even the most scrutinized cryptographic systems reveal flaws.

So this is a never-ending quest, but the cryptographer should, like Sisyphus, take comfort that "The struggle itself towards the heights is enough to fill a man's heart."

1- P. Kocher, J. Jaffe, B. Jun, "Differential Power Analysis," Advances in Cryptology – CRYPTO '99 Proceedings, Lecture Notes in Computer Science Vol. 1666, M. Wiener, ed., Springer-Verlag, 1999.
2- Auguste Kerckhoffs, "La cryptographie militaire", Journal des sciences militaires, vol. IX, pp. 5–38, January 1883, and pp. 161–191, February 1883: "any encryption method is known to the enemy, and the security of the system depends only on the choice of keys."
3- Albert Camus, Le Mythe de Sisyphe, Paris, Gallimard, 1942: "The struggle itself towards the heights is enough to fill a man's heart. One must imagine Sisyphus happy."

Tuesday, July 8, 2008

Adaptogens for Software Systems?

A general problem of Card Systems is that their high level of complexity hinders their adoption. This complexity comes from the high number of components and technologies employed (databases, application servers, web-based interfaces for users and systems, cryptographic and security mechanisms, imaging and printing…). For the system to work as a whole, each component and technology must work both individually and when integrated into one single system.

In fact complication, rather than complexity, would be a better term to describe the situation, as the root cause of the problem is the number of interactions arising from the aggregation of many components and technologies. Complexity would rather apply to components that are more than the linear sum of their properties. In Card Systems, security or biometric mechanisms are intrinsically complex, since you cannot comprehend their whole behaviour without complete knowledge of all their properties and the rules that govern them, e.g. the factoring problem in public-key (PK) cryptography. But many components (databases, application servers, printers…) are straightforward enough and can be described in simple, albeit lengthy, terms.

So, having made this clear distinction between complexity and complication, I will now take the artistic license of using complexity in place of complication.

This complexity problem is nothing new; it is faced by any large modern distributed computing system. Organisations employ large-scale computer networks to deploy their systems. Those systems perform a wide variety of tasks, ranging from database processing and workflow to presenting web content and running a helpdesk… Often, if one task aborts unexpectedly, the whole system grinds to a halt or stops performing correctly.

So how do we tackle this problem? The industry has come up with various solutions, which we will review briefly before looking at what further steps we could take.

Looking at the hardware and software stacks, we first find fault-tolerant systems that address hardware issues. Here redundancy is the key word. UPS (uninterruptible power supply) powered computers run on multiple CPUs (central processing units), rely on RAID (Redundant Array of Independent Disks) systems to store data, and use dual network cards to access the network. All those hardware components are hot-swappable, making the system tolerant to hardware faults.

However, in any computer system there is much more than hardware that can go wrong. Taking an analogy with the human body, it is great to have 2 kidneys, 2 lungs, 2 eyes, 2 ears, 2 hands… but you also need mechanisms to regulate your heartbeat, to control your temperature, to alert you when you need food… Similarly, in a computer you also need mechanisms to load-balance processing requests and to control I/O activity.

IBM, in its effort to tackle the skill shortage within the IT industry, has come up with the concept of the autonomic system. Right now, autonomic technology from IBM and other vendors builds on mechanisms designed for fault-tolerant systems to provide more resilient and responsive IT infrastructures. This includes monitoring applications, supporting cluster-based deployment with automatic node switch-over, and managing basic network and database parameters. Such innovations are steps in the right direction. For example, in today's world there is no reason why a database application should fail because a tablespace filled up and didn't extend automatically. As Alfred North Whitehead said, "Civilization advances by extending the number of important operations which we can perform without thinking about them." So it is with computer systems, it seems; a more exact paraphrase would be "Computer systems advance by extending the number of important operations which they can perform without a human thinking about them." This is similar to the reflex system in the human body: you don't need to engage the brain to pull your hand away from a hot surface before you get burnt, or to raise your heart rate in response to an extra effort.
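
As a rough illustration of such a reflex, here is a minimal sketch of a watchdog that grows a storage resource before it fills up, with no human in the loop. The Storage interface, the threshold and the increment are invented for the example; a real autonomic manager would talk to the database or the operating system.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical "reflex" for a storage resource: check utilisation
// periodically and extend the resource before it fills up, without
// waking a human administrator.
public class AutonomicStorageWatchdog {

    interface Storage {
        double utilisation();        // fraction of capacity in use, 0.0 to 1.0
        void extend(long bytes);     // grow the underlying resource
    }

    private static final double THRESHOLD = 0.85;       // act before it is full
    private static final long   INCREMENT = 64L << 20;  // grow by 64 MB

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void watch(Storage storage) {
        scheduler.scheduleAtFixedRate(() -> {
            if (storage.utilisation() > THRESHOLD) {
                storage.extend(INCREMENT);               // the reflex: no human involved
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}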

Still, there is more to be done: computer systems need to evolve to consistently adapt to their ever-changing environment. So I am very hopeful about initiatives such as Spring Dynamic Modules for OSGi(tm). As with the human body, a computer system should have the ability to take on new skills, update existing ones and get rid of bad old habits. The ability to dynamically add, remove, and update modules in a running system goes a long way toward this goal.
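
For a flavour of what that looks like, here is a minimal sketch of an OSGi bundle activator: the framework can install, start, stop, update or uninstall this module while the rest of the system keeps running. The GreetingService interface and its implementation are invented for the example.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// A module ("bundle") that can be added, updated or removed at runtime.
// The framework calls start()/stop() at each lifecycle transition, so the
// bundle publishes its service when it appears and withdraws it when it
// goes away.
public class GreetingActivator implements BundleActivator {

    private ServiceRegistration<GreetingService> registration;

    @Override
    public void start(BundleContext context) {
        // Publish the service; other bundles discover it through the registry.
        registration = context.registerService(
                GreetingService.class, new GreetingServiceImpl(), null);
    }

    @Override
    public void stop(BundleContext context) {
        // Withdraw the service cleanly; consumers are notified and can rebind.
        registration.unregister();
    }
}

// Hypothetical service interface and implementation used for the example.
interface GreetingService {
    String greet(String name);
}

class GreetingServiceImpl implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}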

These mechanisms are important, as they provide a way to adapt to radical changes in the environment. However, what still seems to be lacking is the ability for a software system to counter adverse user interactions, or unexpected requests or responses from other systems. Today's answer lies in the initial design, which can be validated through negative testing. But we haven't yet engineered anything like adaptogens, which are non-toxic metabolic regulators that can enhance metabolic homeostasis during stress, or, in simpler terms, increase the body's resistance to adverse physical, chemical, or biological stressors by raising non-specific resistance to such stress. Unlike the Dynamic Modules discussed in the previous paragraph, such mechanisms should work without a human thinking about them.