systems fail, and these are mirrored by the
main ways in which cryptographic systems fail. This is
unsurprising, since computer security relies heavily on
cryptography. Things can go wrong because:
• the underlying design is flawed (e.g. a defective cipher),
• the implementation is incorrect (e.g. insufficient key material is used; see the sketch after this list),
• the system is used wrongly (e.g. users write down their
PINs).
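
To make the second failure mode concrete, here is a minimal Python sketch (not from the source; the scheme and names are hypothetical). The cryptographic primitive itself is sound, but because the 256-bit key is derived from a four-digit PIN, the effective keyspace is only 10,000 values and can be exhausted in milliseconds:

    # Hypothetical "insufficient key material" flaw: a 256-bit key is
    # derived from a 4-digit PIN, so the effective keyspace is 10,000
    # values, not 2**256. Only the standard library is used.
    import hashlib

    def key_from_pin(pin):
        # The primitive (SHA-256) is fine; the input has only ~13 bits
        # of entropy, which is the real weakness.
        return hashlib.sha256(pin.encode()).digest()

    def brute_force(target_key):
        # Trying every possible PIN is trivial on any machine.
        for n in range(10_000):
            pin = "%04d" % n
            if key_from_pin(pin) == target_key:
                return pin
        return None

    if __name__ == "__main__":
        stolen_key = key_from_pin("4821")  # attacker obtains the derived key
        print(brute_force(stolen_key))     # recovers the PIN: 4821

The point is that no weakness in the cipher is needed: the implementation simply never had enough key material to protect.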
In a seminal paper about the failure of cryptosystems [5],
Ross Anderson shows that problems in implementation and
use seem to be the main reasons for failure, rather than weak
cryptography.
With hindsight this is perhaps obvious: implementation and use are the two aspects in which human error is most likely and in which rigorous peer review is hardest. In the latter case, human error can effectively be guaranteed by cheating or misleading users.
Of course, this means that systems capable of providing safe on-line commerce may still fail in unexpected ways.
Q. But if a system is vulnerable because it doesn’t deal well
with inadvertent or unexpected use, doesn’t that mean the
design is wrong?
A. Perhaps it does. But the PC, together with its operating system, is designed to be a flexible, general-purpose tool that can be
adapted to many tasks, such as word processing, browsing the
Internet, watching movies, making art, designing buildings
and searching for extraterrestrial life. Users are generally free
to add and remove any software they like at any time in order
to enjoy this flexibility.
When you carry out commerce on-line, for