Modern society and modern economies rely on infrastructures for communication, finance, energy distribution, and transportation. These infrastructures depend increasingly on networked information systems. Attacks against these systems can threaten the economic or even physical well-being of people and organizations. Information systems are widely interconnected via the Internet, which is becoming the world's largest public electronic marketplace while remaining accessible to untrusted users, so attacks can be waged anonymously and from a safe distance. If the Internet is to provide the platform for commercial transactions, it is vital that sensitive information (such as credit card numbers or cryptographic keys) is stored and transmitted securely.
Developing secure software systems correctly is difficult and error-prone. Many flaws and possible sources of misunderstanding have been found in protocol or system specifications, sometimes years after their publication. For example, the observations in (Lowe 1995) were made 17 years after the affected well-known protocol had been published in (Needham, Schroeder 1978). Many vulnerabilities in deployed security-critical systems have been exploited, sometimes leading to spectacular attacks. For example, as part of a 1997 exercise, an NSA hacker team demonstrated how to break into U.S. Department of Defense computers and the U.S. electric power grid system, among other things simulating a series of rolling power outages and 911 emergency telephone overloads in Washington, D.C., and other cities (Schneider 1999).
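The flaw Lowe found illustrates how subtle such errors can be. In the (simplified) Needham-Schroeder public-key protocol, A sends {Na, A} encrypted under B's public key, B replies with {Na, Nb} under A's key, and A returns {Nb} under B's key; because B's reply omits B's own identity, an intruder who is contacted by A can replay A's messages to B and impersonate A. The following sketch models the attack symbolically; the `enc`/`dec` helpers, agent names, and nonce strings are illustrative assumptions, not part of the cited publications.

```python
# Symbolic sketch of Lowe's man-in-the-middle attack on the
# Needham-Schroeder public-key protocol. Encryption is modeled
# abstractly: a ciphertext under key k is the pair (k, payload),
# readable only by the holder of k's private half.

def enc(pub_key, payload):
    return (pub_key, payload)

def dec(priv_of, ct):
    key, payload = ct
    assert key == priv_of, "cannot decrypt: wrong key"
    return payload

# Honest agent A starts a session with the intruder I.
msg1 = enc("I", ("Na", "A"))            # A -> I    : {Na, A}_pkI

# I decrypts A's message and replays it to B, masquerading as A.
na, a = dec("I", msg1)
msg1b = enc("B", (na, a))               # I(A) -> B : {Na, A}_pkB

# B responds to "A" with its own nonce. The flaw: B's identity
# is absent from this message.
na2, _ = dec("B", msg1b)
msg2 = enc("A", (na2, "Nb"))            # B -> A    : {Na, Nb}_pkA

# I cannot read msg2, but forwards it unchanged to A. A sees its
# own nonce Na and believes the reply came from I.
na3, nb = dec("A", msg2)
assert na3 == "Na"
msg3 = enc("I", (nb,))                  # A -> I    : {Nb}_pkI

# I decrypts Nb and completes the run with B, impersonating A.
(nb2,) = dec("I", msg3)
msg3b = enc("B", (nb2,))                # I(A) -> B : {Nb}_pkB
(nb3,) = dec("B", msg3b)

# B now believes it shares the secret nonces Na, Nb with A,
# but the intruder knows both.
attack_succeeded = (nb3 == "Nb")
print(attack_succeeded)
```

Lowe's fix is equally small: including B's identity in the second message, {Na, Nb, B}_pkA, lets A detect that the reply did not come from its intended peer I.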
This difficulty has several causes. Firstly, security requirements are intrinsically subtle, because they have to take into account the interaction of the system with motivated adversaries that act independently. Thus some security mechanisms, such as security protocols, are notoriously hard to design correctly, even for experts. Also, a system is only as secure as its weakest part or aspect.
Secondly, risks are very hard to calculate, because failure occurrence rates reinforce themselves over repeated system executions. The occurrence of a failure (that is, a successful attack) at system execution time dramatically increases the likelihood that the failure will recur during any following execution of a system using the same error-prone part of the design. For some attacks (for example, against web sites), this problem is aggravated by the existence of a mass communication medium that is currently largely uncontrolled and enables fast distribution of exploit information (again, the Internet).
Thirdly, many problems with security-critical systems arise from the fact that their developers, who employ security mechanisms, do not always have a strong background in computer security. This is problematic since, in practice, security is compromised most often not by breaking dedicated mechanisms such as encryption or security protocols, but by exploiting weaknesses in the way they are used (Anderson 2001).
Thus it is not enough to ensure correct functioning of the security mechanisms used; they cannot be "blindly" inserted into a security-critical system, but the overall system development must take security aspects into account (Anderson 1994). In the context of computer security, "an expansive view of the problem is most appropriate to help ensure that no gaps appear in the strategy" (Saltzer, Schroeder 1975). In other words, "those who think that their problem can be solved by simply applying cryptography don't understand cryptography and don't understand their problem" (a remark that B. Lampson and R. Needham attribute to each other). Building trustworthy components does not suffice, since the interconnections and interactions of components play a significant role in trustworthiness (Schneider 1999).
Lastly, while functional requirements are generally analyzed carefully in systems development, security considerations often arise only after the fact. Adding security as an afterthought, however, often leads to problems (Gasser 1988, Anderson 2001). Also, security engineers get less feedback about the secure functioning of their developments in practice, since security violations are often kept secret for fear of harm to a company's reputation.
It has remained true over the last 25 years that "no complete method applicable to the construction of large general-purpose systems exists yet" (Saltzer, Schroeder 1975) that would ensure security, in spite of very active research and many useful results addressing particular subgoals (Schneider 1999). Ad hoc development has led to many deployed systems that do not satisfy relevant security requirements. Thus a sound methodology supporting secure systems development is needed.