Information Systems Trustworthiness
Interim Report
-----------------------------------------------------------------
Committee on Information Systems Trustworthiness
Computer Science and Telecommunications Board
Commission on Physical Sciences, Mathematics, and Applications
National Research Council

National Academy Press
Washington, D.C. 1997
-----------------------------------------------------------------
Copyright (c) 1997 by the National Academy of Sciences. All rights reserved.
-----------------------------------------------------------------

Contents

1 INTRODUCTION
 * 1.1 About This Project
 * 1.2 Constituents of Trustworthiness
 * 1.3 Organization of This Interim Report

2 THE TECHNICAL LANDSCAPE: MONOCULTURE, NETWORKS, AND MOBILE CODE
 * 2.1 Technology Trends
   * 2.1.1 Processing
   * 2.1.2 Communication
   * 2.1.3 Software
 * 2.2 Plausible Scenarios
 * 2.3 Monoculture Dominance?
 * 2.4 Risks of Homogeneity
 * 2.5 Mobile Code

3 SYSTEMS OF SYSTEMS
 * 3.1 A Minimum Essential Information Infrastructure
 * 3.2 Models and Control for Systems of Systems

4 SOME EXTANT TECHNOLOGIES FOR INDIVIDUAL DIMENSIONS OF TRUSTWORTHINESS
 * 4.1 Cryptography
   * 4.1.1 What Makes for a Winning Solution?
 * 4.2 Firewalls
   * 4.2.1 Firewalls and Policies
   * 4.2.2 Positioning Firewalls
 * 4.3 Security Models
   * 4.3.1 New Security Models
 * 4.4 Exploiting Massive Replication
 * 4.5 Increasing the Quality of Software Systems
   * 4.5.1 Formal Methods
 * 4.6 Hardware Support
   * 4.6.1 Tamper-resistant Technology
   * 4.6.2 Hardware Random Number Generators

5 NON-TECHNICAL REALITIES

APPENDIXES
A Workshop 1: Networked Infrastructure--Participants and Agenda
B Workshop 2: End Systems Infrastructure--Participants and Agenda

----------

NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance. This report has been reviewed by a group other than the authors according to procedures approved by a Report Review Committee consisting of members of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine.
The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Bruce Alberts is president of the National Academy of Sciences.

The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. William A. Wulf is president of the National Academy of Engineering.

The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Kenneth I. Shine is president of the Institute of Medicine.

The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy's purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Bruce Alberts and Dr. William A. Wulf are chairman and vice chairman, respectively, of the National Research Council.

Support for this project was provided by the Defense Advanced Research Projects Agency and the National Security Agency. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

Additional copies of this report are available from:
Computer Science and Telecommunications Board
National Research Council
2101 Constitution Avenue, NW HA 560
Washington, DC 20418
202/334-2605
http://www2.nas.edu/cstbweb

Copyright 1997 by the National Academy of Sciences. All rights reserved.
Printed in the United States of America

----------

COMMITTEE ON INFORMATION SYSTEMS TRUSTWORTHINESS
STEPHEN D. CROCKER, CyberCash Inc., Co-chair
FRED B. SCHNEIDER, Cornell University, Co-chair
STEVEN M. BELLOVIN, AT&T Labs Research
MARTHA BRANSTAD, Trusted Information Systems Inc.
J. RANDALL CATOE, MCI Telecommunications Inc.
CHARLIE KAUFMAN, Iris Associates Inc.
STEPHEN T. KENT, BBN Corporation
JOHN C. KNIGHT, University of Virginia
STEVEN McGEADY, Intel Corporation
RUTH R. NELSON, Information System Security
ALLAN M. SCHIFFMAN, Terisa Systems Inc.
GEORGE A. SPIX, Microsoft Corporation
DOUGLAS TYGAR, Carnegie Mellon University

Special Advisor
W. EARL BOEBERT, Sandia National Laboratories

Staff
MARJORY S. BLUMENTHAL, Director
JERRY R. SHEEHAN, Program Officer
LESLIE M. WADE, Research Assistant (through March 28, 1997)
LISA L. SHUM, Project Assistant (as of April 1, 1997)

-----------------------------------------------------------------

COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD
DAVID D. CLARK, Massachusetts Institute of Technology, Chair
FRANCES E. ALLEN, IBM T.J. Watson Research Center
JEFF DOZIER, University of California at Santa Barbara
SUSAN L. GRAHAM, University of California at Berkeley
JAMES GRAY, Microsoft Corporation
BARBARA J. GROSZ, Harvard University
PATRICK HANRAHAN, Stanford University
JUDITH HEMPEL, University of California at San Francisco
DEBORAH A. JOSEPH, University of Wisconsin
BUTLER W. LAMPSON, Microsoft Corporation
EDWARD D. LAZOWSKA, University of Washington
BARBARA H. LISKOV, Massachusetts Institute of Technology
JOHN MAJOR, Motorola
ROBERT L. MARTIN, Lucent Technologies Inc.
DAVID G. MESSERSCHMITT, University of California at Berkeley
CHARLES L. SEITZ, Myricom Inc.
DONALD SIMBORG, KnowMed Systems Inc.
LESLIE L. VADASZ, Intel Corporation

MARJORY S. BLUMENTHAL, Director
HERBERT S. LIN, Senior Staff Officer
PAUL D. SEMENZA, Program Officer
JERRY R. SHEEHAN, Program Officer
JULIE C. LEE, Administrative Assistant
LISA L. SHUM, Project Assistant
SYNOD P. BOYD, Project Assistant

-----------------------------------------------------------------

COMMISSION ON PHYSICAL SCIENCES, MATHEMATICS, AND APPLICATIONS
ROBERT J. HERMANN, United Technologies Corporation, Co-chair
W. CARL LINEBERGER, University of Colorado, Co-chair
PETER M. BANKS, Environmental Research Institute of Michigan
LAWRENCE D. BROWN, University of Pennsylvania
RONALD G. DOUGLAS, Texas A&M University
JOHN E. ESTES, University of California at Santa Barbara
L. LOUIS HEGEDUS, Elf Atochem North America Inc.
JOHN E. HOPCROFT, Cornell University
RHONDA J. HUGHES, Bryn Mawr College
SHIRLEY A. JACKSON, U.S. Nuclear Regulatory Commission
KENNETH H. KELLER, University of Minnesota
KENNETH I. KELLERMANN, National Radio Astronomy Observatory
MARGARET G. KIVELSON, University of California at Los Angeles
DANIEL KLEPPNER, Massachusetts Institute of Technology
JOHN KREICK, Sanders, a Lockheed Martin Company
MARSHA I. LESTER, University of Pennsylvania
THOMAS A. PRINCE, California Institute of Technology
NICHOLAS P. SAMIOS, Brookhaven National Laboratory
L.E. SCRIVEN, University of Minnesota
SHMUEL WINOGRAD, IBM T.J. Watson Research Center
CHARLES A. ZRAKET, MITRE Corporation (retired)

NORMAN METZGER, Executive Director

----------

1 INTRODUCTION

Our nation's infrastructures are undergoing a profound change. Networked information systems are becoming critical to the daily operation of increasingly large segments of government, industry, and commerce. Moreover, in responding to the needs of subscribers, critical infrastructures like the electric power utilities and public switched telephone network are increasing their dependence on computers and communications networks [1].

But this growing dependence on networked computers is accompanied by increased risk. First, the infrastructure becomes vulnerable to new forms of attacks--attacks that may not require physical penetration of a specific site or system by the perpetrator [2]--and the number of targets is increased. Second, the use of extremely complex technologies always presents risks.
For example, software systems today are rarely free of defects and are notoriously difficult to configure and operate. Finally, the interconnection of previously isolated infrastructures enables the propagation of attacks and failures from one to the other. In short, our nation's infrastructures could well evolve into an interdependent system of fragile and vulnerable subsystems. Understanding how to ensure that they will operate reliably is thus vital.

1.1 ABOUT THIS PROJECT

The Computer Science and Telecommunications Board (CSTB) study on information systems trustworthiness was initiated at the request of the Defense Advanced Research Projects Agency (DARPA) and the Information Systems Security Research Joint Technology Office (a collaboration among DARPA, the National Security Agency (NSA), and the Defense Information Systems Agency). It aims to elucidate a research agenda and program of technical activities for strengthening the reliability of information systems and thus enhancing our society's ability to depend on them. Among the questions to be addressed are the following:

* What technical problems must be solved?
* What is the relative importance of solving these problems?
* What technical solutions are available today to solve the various problems? What impedes the deployment of those solutions?
* What technical areas are ripe for further exploration, because they promise new or more effective solutions or because they will give rise to new problems?

This interim report of the Committee on Information Systems Trustworthiness satisfies a requirement from the sponsors and invites interim comments. It is intended only to frame the technical issues the committee is considering; the final report will expand on these, as well as present conclusions and recommendations and supporting tutorial material. Although the discussion in this interim report illuminates some potential research areas for future exploration, the list presented is not comprehensive; moreover, some of the listed items are highly speculative or may have unreasonable cost/benefit ratios. The final report also will discuss the DARPA-funded research program in survivable systems, the NSA R2 research program, and the Information Systems Security Research Joint Technology Office, since these are the primary sources for federal research funds in the discipline. The technical analysis and recommendations that will be provided in the final report will be derived, in part, from examination of trends and prospects for the commercial systems marketplace.

The term "trustworthiness" is used in this study as a single label encompassing all of the attributes a system must have so that society can depend on the system's operation for its critical infrastructures. Trustworthiness thus encompasses a broad range of distinct and quite different properties, the understanding and explanation of which are goals of this study. Necessarily, a trustworthy system must produce outputs that relate correctly to users' inputs (so-called "functional correctness"), but other elements of trustworthiness imply satisfaction of requirements that take into account the possible behavior of parties other than users and the possibility that the computing platform may experience failures. Thus, a trustworthy system must survive acts of malice as well as random failures.
At issue in implementing trustworthiness are elements traditionally addressed by software engineering, computer and communications security, and fault tolerance, as well as system safety and survivability. Trustworthiness is a holistic property--a property of an entire system--and achieving it requires more than just assembling components that are themselves trustworthy.

Many of the individual properties constituting trustworthiness are already being studied by one or another subdiscipline of computer science. But because the properties and the approaches to realizing them are not independent, the matter of combining approaches must receive careful attention. The distinct approaches now employed for each dimension of trustworthiness will likely be important elements of an overall solution, but perfecting any one element will not be sufficient; panaceas are improbable. Progress appears to require understanding where the biggest problems lie (and why they should be considered the biggest) and developing strategies for improving trustworthiness that weigh the probability of a problem occurring, the probability that a candidate approach will actually remedy it, and the trade-offs in cost and benefit among alternative approaches--including, given the consequences of more limited action, the option of simply allowing a system to incur anticipated problems.

Because the problems that underlie achieving trustworthiness are so closely intertwined with technology, and because their solutions are often technology-based, the committee was asked to focus on technical issues. Of course, the viability of a technical solution frequently depends on a larger non-technical context, which can include public policy, procedural aspects of how systems are used, the education and training of systems designers, administrators, and users, and so on. Other non-technical concerns revolve around the nature of the evolving national information infrastructure (NII): it is federated (no one entity is in charge), is subject to partial and uneven regulation, operates largely on the basis of de facto or separate industry-specific standards, and all too often resembles a kind of gold rush, in which considerations of market share, and hence time to market, predominate. Some of these factors may be addressed in more detail in the final report; here, they are simply enumerated to help frame the technical issues. This project is distinguished from other related activities (e.g., Defense Science Board studies, President's Commission on Critical Infrastructure Protection) by its emphasis on technical issues and associated needs for research.

In addition to the expertise and deliberations of the committee, this report reflects input from two workshops that the committee organized. The first focused on the trustworthiness of networked infrastructure. It examined the perspectives of suppliers of network-related infrastructure components and services, the views of network customers of different kinds, and the implications of network-related technologies. The second workshop concentrated on new technologies that potentially could have significant impacts on system trustworthiness, either by contributing to the problem (e.g., mobile code) or by contributing to the solution (e.g., formal methods). A third workshop, anticipated in fall 1997, will address technology transfer and other concerns relating to the prospects for trustworthiness in commercial systems.
1.2 CONSTITUENTS OF TRUSTWORTHINESS

A key constituent of trustworthiness is assurance that design and implementation flaws are not present. As systems become ever more complicated, gaining this assurance becomes more difficult. The popular press now documents systems failing with alarming regularity: outages of significant portions of the public switched telephone network, banks that temporarily lose track of their assets, and so on. These failures, though often blamed on "computers," are, in fact, design and implementation flaws; it is the designers and implementors of those systems who are at fault [3]. A study concerned with trustworthiness cannot ignore issues related to software and hardware correctness, and the committee is examining trends and research directions to identify possible approaches to facilitating elimination of design flaws and implementation flaws.

Another constituent of trustworthiness is system security, which must be discussed relative to a set of perceived threats. System security can be compromised when a threat--often a motivated, capable adversary [4]--mounts specific attacks, an attack being some means of exploiting a flaw in the system. Such a flaw, an artifact of the design, the implementation, or the operation of the system, is called a vulnerability if it can be used to cause some characteristic of the system to be violated. For example, the 1988 R.T. Morris Internet Worm (RTM Worm) [5], which succeeded in bringing down large numbers of Internet sites, implemented a multipronged attack that exploited several vulnerabilities, including:

* a bug in the UNIX "finger" service daemon program that permitted an outsider to inject his own code;
* a compile-time configuration error that caused sendmail to have certain debugging features enabled, one of which allowed for unprotected remote execution;
* the fact that many users pick bad passwords and that encrypted versions of these passwords are stored by UNIX in a publicly readable file, making the passwords easy to guess; and
* the transitive nature of trust between the connected systems [6].

Analyses of the RTM Worm and of other incidents suggest that growth in an installed base of systems having minimal security, proliferation of associated vulnerabilities, and dissemination of knowledge about how to attack systems despite progress in countermeasures all combine to present a picture of escalating problems and responses. Appendix A of the 1996 Defense Science Board report on information warfare [7] gives an unclassified assessment of threats to the defense information infrastructure as well as the national and global information infrastructures. This range of threats is reproduced in Box 1.
+---------------------------------------------------------------+
BOX 1
Defense Science Board Assessment of Threats

Threats can be partitioned usefully as follows, depending on the resources and motivation of the adversary:

* Incompetent, inquisitive, or unintentional blunders;
* Hackers driven by technical challenge;
* Disgruntled employees or customers seeking revenge;
* Crooks interested in personal financial gain or stealing services;
* Organized crime operations interested in financial gain or covering criminal activity;
* Organized terrorist groups or nation-states trying to influence U.S. policy by isolated attacks;
* Foreign espionage agents seeking to exploit information for economic, political, or military purposes;
* Tactical countermeasures intended to disrupt specific U.S. military weapons or command systems;
* Multifaceted tactical information warfare applied in a broad, orchestrated manner to disrupt a major U.S. military mission; and
* Large organized groups or major nation-states intent on overthrowing the United States.
_______________
SOURCE: Defense Science Board, Office of the Under Secretary of Defense for Acquisition and Technology. 1996. Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D). Defense Science Board, Washington, D.C., November, p. A3.
+---------------------------------------------------------------+

Implicit in each category of the threats listed in Box 1 is a level of resources or expertise that the adversary can bring to bear in planning and conducting an attack. More relevant to the present study--and something that does not receive extensive treatment in the literature--are efforts simply to characterize classes of attacks on today's and near-future systems and to predict the costs of mounting these attacks (or of responding to them, through defenses or recovery). Because it is reasonable to presume that relatively inexpensive attacks are more likely, the prudent course would be to address those first. The final report will address such comparative considerations.

A third dimension of trustworthiness is fault tolerance. The concern here is with guaranteeing that a system continues to operate despite random failures of hardware and low-level support software. Unlike the previous two dimensions of trustworthiness, fault tolerance is an area where significant progress is visible--both in understanding classes of failures that must be tolerated and in devising mechanisms to tolerate those failures. Moreover, a healthy market for fault-tolerant computing systems means that such systems are available commercially today.

1.3 ORGANIZATION OF THIS INTERIM REPORT

Technical approaches to enhancing trustworthiness not only must be grounded in fundamental science, but also must be tempered with a vision of the environment in which the solutions will be deployed.
Section 2 of this interim report describes sample scenarios for the near-term future of computing and networking, along with some trends that are likely to have an impact on the trustworthiness of networked computing systems. Section 3 addresses the technical implications that follow from having interconnected systems of infrastructures rather than a collection of separate and isolated infrastructures. The so-called minimum essential information infrastructure (MEII), a concept advanced by some as a response to the concerns that motivate this report, is discussed there. Section 4 surveys some individual technologies that concern single elements of information system trustworthiness and that have received significant attention from technologists in government, industry, and academia. The final report will assess all technologies that seem promising to the committee and will discuss interactions among technologies, cost, and other elements of a holistic approach to ensuring system trustworthiness. In the final report, the committee expects to lay out the software engineering landscape, relate practice and people/management aspects to tools/techniques, and reach conclusions regarding certain segments (e.g., mobile code, formal methods). Finally, Section 5 outlines some non-technical realities that must be confronted in deploying new technologies for trustworthiness. Of particular interest to the committee are assumptions that drive commercial supply and demand, because commercial hardware and software are forecast to be a prominent feature in the computing landscape, for both private and government computing. Cost and public policy issues will be discussed more fully in the final report.

NOTES

[1] If anything, the current climate of deregulation and privatization will accelerate the dependence by infrastructures on computers and computer networks, because networked computers support enhanced and flexible subscriber services. They also enable more cost-effective operation of the infrastructures themselves.

[2] Moreover, broader dependence on computers and networked communications may alter the number and distribution of personnel on which a system depends. In some cases bribery or trickery would no longer be required to subvert the system. In other cases, the set of people against whom attacks might be directed becomes larger.

[3] See Neumann, Peter G. 1995. Computer-Related Risks, ACM Press, New York. See also the RISKS digest edited by Neumann for examples of unintended consequences of system design and implementation aspects.

[4] Because trustworthiness includes fault-tolerance properties as well as security properties, the term "threat" in this report includes natural processes, such as those responsible for device failures. Most people would not consider Mother Nature a "motivated, capable adversary."

[5] See Spafford, Eugene H. 1989. "Crisis and Aftermath," Communications of the ACM 32(6):678-687.

[6] There was also a serious bug in the implementation of the Worm itself, which caused it to multiply essentially without bounds on each system it infected. It was this inadvertent effect--the serious overload of many of the hosts on the Internet--that caused the damage and also caused the Worm to be discovered.

[7] Defense Science Board, Office of the Under Secretary of Defense for Acquisition and Technology. 1996. Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D). Defense Science Board, Washington, D.C., November, p. A3.
----------

2 The Technical Landscape: Monoculture, Networks, and Mobile Code

2.1 TECHNOLOGY TRENDS

Progress in basic computing and communication capabilities not only drives the performance and price improvements that attract new users and uses, but also shapes system vulnerabilities. Enhancements of several kinds are likely in the near future.

2.1.1 Processing

It is generally acknowledged that for the next decade or so, the underlying capability of semiconductors will continue to develop at approximately the rate articulated by Moore's Law: the number of transistors that can be placed on the same size semiconductor doubles every 18 months. As a practical matter, this has meant the doubling of microprocessor power every 18 to 24 months, and that trend is expected to continue. Disk speed, storage capacity, and network input/output performance have also been improving rapidly, although not at the same rate as processor speed.

Improved hardware and architecture mean that aggregate speed and storage improvements could provide software developers with a virtually clean sheet of paper, accompanied by no loss of functionality, every 5 years. With that come new opportunities to appeal to new dimensions of customer needs and to make different decisions on how capacity will be used. However, backwards compatibility and continued dependence on legacy software represent constraints and inhibit the introduction of new technology, particularly at lower levels of the system (i.e., hardware architecture and operating systems).

Whether future additional hardware performance can be applied effectively to the problem of improving software quality is not clear. Enhanced computing capacity often has been devoted to providing more powerful (but resource-intensive) programming languages and environments. This approach has met with some success, for example, in supporting uniform high-level graphical user interfaces, something that could be expected to broaden the user base (and market size) for software. The newer languages and environments also support general schemes for reuse of software components, freeing developers to direct efforts elsewhere and enabling the use of toolkits that enhance the power of the abstractions with which programmers work.

Some characteristics that commodity computers are predicted to have in 5 to 6 years are as follows:

* 320-megabyte (MB) RAM with multi-gigabyte-per-second (GB/s) memory bandwidth (compared with today's 32-MB RAM with 320-MB/s memory bandwidth);
* 40-GB capacity disk with 60-MB/s transfer rate and 2.5-ms latency (compared with today's 4-GB disk with 6-MB/s transfer rate and 2.5-ms latency); and
* Gigabit-per-second (Gb/s) network interface (compared with today's 100-Mb/s network interface).

In addition, new system architectures may give rise to additional computational power for some applications. For example, parallel processing architectures, already in use in many high-performance computers, and dual- and quad-processor servers (also in common use) increase the available aggregate computational power for programs that can exploit the parallel computing structure or that run many tasks simultaneously. Multiprocessor systems composed of commodity hardware enable both a broadening capability to harness large numbers of processors that may be distributed--as illustrated by press accounts of concerted efforts to break certain cryptographic systems--and a broadening dependence by all kinds of users on significant computational resources.
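To make the scale of such distributed efforts concrete, consider an exhaustive search of the 56-bit key space used by the data encryption standard (DES) discussed later in this chapter. The per-machine trial rate and the number of cooperating machines below are illustrative assumptions, not measurements of any actual effort:

\[
2^{56} \approx 7.2 \times 10^{16} \ \text{keys}, \qquad
\frac{7.2 \times 10^{16}\ \text{keys}}{10^{4}\ \text{machines} \times 10^{6}\ \text{keys/s per machine}}
\approx 7.2 \times 10^{6}\ \text{s} \approx 83\ \text{days}
\]

to try every key (about half that, on average, before the correct key is found). The point is not the particular numbers but the scaling: the aggregate search rate grows linearly with the number of commodity machines that can be recruited.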
In 1997, vendors will demonstrate multiprocessor PC engines that support terabyte databases and are capable of processing 1 billion transactions per day. Software is moving to effectively exploit multiprocessor systems that have as many as 16 processors; large servers are being replaced by clusters of processors (non-shared-memory architectures running loosely coupled operating systems), and there are numerous 4- and 8-processor servers on the market already.

2.1.2 Communication [8]

Network bandwidth has also shown impressive growth. Today, 100-Mb/s "Fast Ethernet" cards can be purchased for under $100 in small lots; they will rapidly displace conventional Ethernet for local area network communications in most installations. For wide area links, DS1 rates (1.5 Mb/s) are now the norm, and DS3 rates (45 Mb/s) are the norm for Internet Service Provider (ISP) backbones. Some ISPs have deployed OC-3 (155 Mb/s) backbones and are planning for OC-12 (600 Mb/s) in the very near future. Switching gear has improved commensurately.

The current limit for wide area communications bandwidth--at least for residential and small business use--is defined by the capacity of the local loop. With use of commercial modem technology, transmission speeds over the local loop are limited. Modems that operate at 56 kb/s are emerging, while the products more common today work at 28.8 or 33.6 kb/s. There are faster modes of operation over local loops, such as Integrated Services Digital Network (ISDN) service, which can operate at 128 kb/s in the mode normally installed in homes. Even faster is the 1.5-Mb/s T1 service, which is used in leased lines, where the loop is not connected to the traditional switching equipment used for making telephone calls, but instead is permanently connected between two points. There are new technology alternatives for sending at higher rates over the local loop, such as asymmetric digital subscriber line (ADSL), which can send at several megabits per second toward the home, and at a fraction of a megabit back toward the center. However, ADSL and its variants (HDSL, VDSL, and BDSL) can operate only over loops of limited length, and not all of the copper pairs in place today are expected to work successfully with approaches such as ADSL. (The variants of the digital subscriber line technology differ in the length of the wire they can utilize and the speed they achieve.) Cable systems, which use another kind of modem (a "cable modem"), provide a local loop alternative, especially where cable plant has been upgraded to hybrid fiber-coaxial cable technology. However, the current systems and system plans for hybrid fiber/coax are relatively asymmetric in their use of that bandwidth [9]. The emergence of vigorous competition in the local service arena may yield surprising and rapid changes in the availability of higher-speed digital service. Of particular note are recent announcements of wireless residential service.

History suggests that the local area network connection will improve in step with the computer it connects. But the performance of the network in aggregate will be slower: the smaller the network, the closer it will be to the latest technology. The owner of the "last mile" to the customer LAN will vary, based on circumstances and competitive environment: wealthier customers will have more options (e.g., broadcast cable, subsidized analog telephone loop, wireless), earlier [10].
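To put these local-loop rates in perspective, consider the time needed to move a 10-megabyte file (80 megabits), a size chosen purely for illustration, at several of the rates just described, ignoring protocol overhead:

\[
\frac{80\ \text{Mb}}{0.0288\ \text{Mb/s}} \approx 2800\ \text{s}\ (\approx 46\ \text{min}), \qquad
\frac{80\ \text{Mb}}{0.128\ \text{Mb/s}} \approx 625\ \text{s}, \qquad
\frac{80\ \text{Mb}}{1.5\ \text{Mb/s}} \approx 53\ \text{s}, \qquad
\frac{80\ \text{Mb}}{6\ \text{Mb/s}} \approx 13\ \text{s}
\]

for a 28.8-kb/s modem, basic-rate ISDN, a T1 line, and an assumed 6-Mb/s ADSL downstream rate, respectively. This spread of roughly two orders of magnitude is one reason the local loop, rather than the backbone, defines the practical bandwidth limit for residential users.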
A surprise may come from satellite systems, several of which are being planned; these (and perhaps other wireless technologies) may enhance local service for a relatively broad clientele. Significant parts of the wide area network will be slow to change, owing to the higher installed base that would have to be replaced and the long amortization periods that telephone companies have used, even though the cost/performance of the digital subsystems used in building networks will continue to improve at the rate of mass-market commodity computing hardware--10-fold increases every 5 years. Some of the Internet service providers, however, amortize their plants over relatively shorter time frames, and they are able to upgrade their networks much more rapidly. In addition, improvements made possible by fiber, satellite communications, and cellular telephone services over the next 10 years will likely deliver individual voice communication to the entire world's population and at competitive end-user costs [11].

Besides the nature of the communications facilities and basic services (e.g., bandwidth, quality of service), questions arise about the overall architecture of communications and information systems. Where functionality is located affects system vulnerabilities and the prospects for effecting countermeasures: which architecture is most sensible tends to vary over time with technology. As client systems become more powerful, more functions are moved to them, until some fundamental change occurs that increases server functionality beyond what is available at a reasonable cost to the client, resulting in functions being migrated to the server. The cycle then begins anew. The personal computer and Internet models have presumed intelligence on the edges of the network, whereas telephony has presumed intelligence within the network (and relatively non-intelligent devices at the periphery). Support for mobility and affordability are among the drivers for additional alternatives.

Most computer networks today are organized hierarchically. PCs and workstations are connected to each other and to servers. A local area network supports most of the communication, but communication to hosts that do not reside on the local area network is handled by wider-area networks. Various network-wide services are supported locally and by servers throughout the network, arranged in a hierarchy. If a local server cannot satisfy a request, servers at higher levels of the hierarchy become involved. The opposite trend is "peer-to-peer" computing, where any number of personal computers operate in concert on a network, sharing mass storage and communication, each acting as a server for the others. In the Internet model, end-user systems communicate directly with remote servers (e.g., Web browsers and clients, Telnet and FTP (file transfer protocol) sessions). Both the Internet model and peer-to-peer computing rely on the development and mass deployment of a significant software infrastructure that is not yet fully understood.

Computer terminals are still in widespread use, too, displaying the output of programs that run on central computing systems. Airline reservation systems today are an example, although many now do use PCs as terminals. A hybrid scheme has been proposed and may develop in which what formerly were terminals become "network computers" that rely on central servers only for heavy-duty processing and long-term mass storage.
The development of a significant market for this model of computing would shift the balance from widespread distribution of PCs to more powerful and centralized servers.

Embedded computers deployed in everything ranging from household appliances to heavy machinery in factories may also drive network architectures, particularly for local area networks. These embedded computers will be used as intelligent controllers, to enhance functionality or reduce costs. These functions usually do not require high communications bandwidths or remote access to significant processing capacity. Communications latencies and quality-of-service guarantees, however, are a concern in networks for this domain, since time delays become critical in performing some control functions.

2.1.3 Software

Software, of course, is critical for harnessing processing and communication technology. Today, providing acceptable performance remains a key concern to programmers, and developers continue to employ programming languages, like C, that are not far removed from assembly language. The use of such languages grants the developer direct control of the hardware--control that is then exploited to tune how an application executes. However, choosing to exercise this degree of direct control requires sacrificing assurance techniques, such as strong typing (which allows a compiler to detect certain types of programming errors automatically), and higher-level data and control abstractions. The quest for high performance also can lead to use of fragile programming techniques, such as violation of abstraction boundaries (e.g., reliance on storage layout or instruction timings). When these boundaries are ignored, a program works only if the abstraction happens to be implemented in a particular way; changes to that implementation, which should be transparent to the program, no longer are.

Increased demands for new applications software will be hitting an industry that is only marginally more productive now than it was 20 years ago. Some of these increased demands may be qualitatively different, because they involve requirements for high assurance or other dimensions of trustworthiness. Whereas in the past growth in new applications has been shaped by limits in computing and communications, in the future it may be shaped by limits in programming productivity and quality. There appears to be a limit on the complexity that can be mastered by most developers, and this is probably the root cause of the present software productivity and quality limits. Significant gains would be possible only if new approaches were developed for managing complexity in large software systems. Those gains would involve both new technology and better training of and practice by software engineers, a topic deferred to the final report.

2.2 PLAUSIBLE SCENARIOS

Improvements in processing, communications, and software technology are only one factor that will shape the computing and communications fabric of the future. Other important technical factors include interoperability (including compatibility with legacy code and, more importantly, data) and ease of use. Market forces also will have an impact, since they determine what products and services are available at commodity prices. The committee is considering four possible scenarios for future computing platforms.

* Increased homogeneity. Current market forces prevail during the forecast period, and a single processor manufacturer and software provider become dominant.
Minor players come and go, introducing new technology as older technologies become obsolete. As new technologies are accepted, they are assimilated by the basic platform.

* Retardation. Deployment of computer technology slows markedly. A dramatic loss of trust in the national information infrastructure (NII) might be a cause for this slowdown, but also plausible are technical limitations that constrain computing or communication. The environment would look, at least initially, much like today's. It could be argued that this scenario is unstable and would lead to one of the following two alternatives.

* Increased heterogeneity. As technology continues to mature, more companies are able to compete, and the installed computing platforms become more heterogeneous as newer technologies gain market share. Other manufacturers and architectures gain market share. The dominant operating system becomes hidden under successive layers of "middleware," increasing the variety of software systems.

* Paradigm shift. A new computer and/or software technology is introduced and quickly displaces the current one. Historically, such shifts have occurred every decade: the personal computer and, later, the Java-based network computer in the 1990s; the desktop workstation in the 1980s; the shared minicomputer in the 1970s; and the shared mainframe in the 1960s.

The committee is considering four scenarios for evolution of the communications fabric, which is somewhat independent of the computing platform:

* Slow change. For technical, regulatory, or market reasons, deployment of high-speed digital communications continues at its current slow pace. The market thus evolves along current lines: slow-speed access from the home, and higher-speed but congested access to businesses.

* Retardation. For regulatory or market reasons, growth in overall communications capacity is retarded by high prices, poor service, or low customer interest.

* Increased heterogeneity. A variety of different communications capabilities become available at vastly different prices and with different characteristics. Not all capabilities are available in all locales.

* Paradigm shift. A new communication technology is introduced and the market moves rapidly to embrace it.

In all four communication scenarios, a key theme is that control does not rest with a single authority. Perhaps there are lessons in the experiences of the public switched telephone networks as they evolve in a world of diminishing or nonexistent centralized administrative and technical control.

2.3 MONOCULTURE DOMINANCE?

Current computing platforms, as well as communications infrastructure and software, are largely homogeneous. (See Box 2.) Computing platforms are quite uniform in the operating system they run and the instruction-set architecture they support. Secondary characteristics--display, network interfaces, disks--are made uniform by adherence to either government or manufacturers' standards (e.g., the VGA graphics interface or the IDE and SCSI disk interfaces) or are presented to application software as common interfaces by operating systems software in the form of device drivers and hardware adaptation layers.

The communications infrastructure today is also homogeneous. Local area networks are typically Ethernets or Token Rings, although some increased diversity is being introduced by asynchronous transfer mode (ATM) and the various high-speed Ethernets.
Today's wide area networks are constructed from hubs and routers, most of which are sold by a small number of manufacturers (Cisco Systems and Bay Networks, for example, dominate the router market). And the software that controls these networks is also homogeneous at multiple levels. A single stack of protocols manages the Internet, and all these Internet protocol implementations descend from a few. That the core Internet Protocol (IP) works well over a diverse set of network technologies is itself a factor favoring homogeneity.

+---------------------------------------------------------------+
BOX 2
Processor Homogeneity

In 1997, a significant majority of computer systems sold (85 percent of personal computers and servers by unit volume) contained some version of Intel's "x86" microprocessor (manufactured either by Intel Corporation or one of a small number of others) to implement an IBM-compatible PC architecture. When deployed as personal computers, a significant majority are running a version of the Microsoft Windows operating system. Less than 15 percent of personal computers are a variant of the architecture designed and sold by Apple Computer; a small percentage are variant architectures made by Sun Microsystems, Silicon Graphics, Digital Equipment Corporation, and others. Many among this last group of systems run versions of the UNIX operating system.
+---------------------------------------------------------------+

In fact, one can argue that having a homogeneous rather than a heterogeneous hardware and software base is not an accident:

* Homogeneity is advantageous for the sale and use of popular software. A larger market gives providers of hardware and software incentives for entry, and providers can also exploit economies of scale.

* Homogeneity provides a low-cost solution to ensuring interoperability. Enormous leverage results when computers can communicate and share data, especially in ways that are not anticipated when the computers are procured or the data is created [12].

* Homogeneity supports more efficient transfer of skills within organizations, effectively lowering the cost of computerizing additional functions. Also, homogeneity leads to increased skill lifetimes, because a skill is likely to remain useful even after computing platforms are upgraded.

* Homogeneity enables aggregations of resources to strengthen design, implementation, and testing.

This bias toward homogeneity is important for interpreting the effects of rapid innovation and change in information technology.

2.4 RISKS OF HOMOGENEITY

The similarity intrinsic in the component systems of a homogeneous collection implies that these component systems will share vulnerabilities. A successful attack on one system is then likely to succeed on other systems as well--the antithesis of what is desired for implementing trustworthiness. With a monoculture, the effort in designing an attack is leveraged, because such an attack can have applicability beyond any single system. Moreover, today's monoculture is based on hardware and software that were not designed with security in mind; these systems are easily compromised.

For purposes of discussion, it is useful to view a computing system as a set of interfaces and their implementations. Two computing systems can be characterized as similar based on the similarity of their interfaces and interface implementations.
Attacks on a system involve its interfaces and succeed because of vulnerabilities associated with those interfaces (because of either their specifications or their implementations). Identical interfaces that are supported by distinct implementations are less likely to share (implementation) vulnerabilities; similarly, systems that do not share identical interfaces throughout (say, because their internal structure is different) are less susceptible to common attacks. Thus, attacks that exploit TCP/IP flaws will work on any system implementing that protocol suite. Attacks that exploit UNIX flaws will work on any host running UNIX, whether it is from Sun, SGI, IBM, or Intel. And Word viruses work equally well on Macintosh computers and PCs.

Clearly, there is at least some tension between homogeneity and trustworthiness. There are compelling reasons to have a technological monoculture (as detailed in the previous subsection), but some attributes of trustworthiness benefit from diversity. On the other hand, a widely used trustworthy operating system might be superior to a variety of nontrustworthy operating systems; diversity, per se, is not sufficient to ensure trustworthiness.

This picture is further complicated by the role of technical standards. Standards enable interoperability of components, which can lead to larger markets for compliant components. Moreover, the extensive discussion, review, and analysis by experts and stakeholders associated with standards setting increases the likelihood (but does not guarantee) that design flaws will have been detected and eradicated by the time a standard emerges. Standards thus can enable less-educated consumers to defer to others (the standards body) for technological choices, although the limitations of standards should be understood by consumers. Complying with guidelines supplied by the government or an authoritative independent standards-setting organization--such as the data encryption standard (DES; NIST's FIPS 46), American National Standards Institute (ANSI) standards, or those that may result from the conglomerate Information Infrastructure Standards Panel (IISP)--or attainment of a rating under the Trusted Computer System Evaluation Criteria (TCSEC; also known as the Orange Book) provides both third-party validation of a choice of technology and potential relief from liability [13].

Using a standard thus reduces one form of risk (design vulnerabilities) at the possible expense of increased exposure:

* The existence of the standard may provide an adversary with detailed technical information that simplifies discovering flaws.

* Attacks that exploit vulnerabilities can be reused in the variety of settings where the standards prevail.

* It is easier to mount attacks against many representatives of a single standard than against representatives of different standards.

The value of standards setting is illustrated by the data encryption standard (DES), whose presence and widespread adoption clearly benefited all concerned. Yet security experts consider it anomalous, given other experiences with standards. (Box 3 expands on this.) For example, the recently published compromise of the standardized Cellular Message Encryption Algorithm, which affects numbers of any kind entered via the keypad, illustrates the risk of treating standards as indicators of assurance [14].
+---------------------------------------------------------------+
BOX 3
Cryptographic Challenges

The design and implementation of secure cryptographic algorithms, as well as protocols that make use of such algorithms, have proven to be quite difficult. Over the last 20 years (the interval during which public interest in cryptography has grown substantially), there have been many examples of missteps:

* Symmetric and public-key cryptographic algorithms and one-way hash functions developed by respected members of the academic and commercial cryptographic community all too often have succumbed to cryptanalysis within a few years after being introduced. Examples include the Merkle-Hellman trapdoor knapsack public-key algorithm, some versions of the FEAL cipher, and the MD4 hash algorithm.

* Authentication and key management protocols have suffered a similar fate, as they have been shown to be vulnerable to various sorts of attacks that undermine the security they were presumed to provide. Examples include the original Needham-Schroeder key management protocol and the various protocols that were intended to repair its flaws.

These experiences emphasize the need for cryptographic algorithm standards and security protocol standards that have been carefully developed and vetted. Because implementations of security technology represent a major source of vulnerabilities, there is also a need for high-assurance implementations of this technology. This latter need has sometimes been met through the use of government or third-party evaluation programs for hardware or software components supporting cryptography or cryptographic protocols (e.g., in connection with the FIPS 140-1 and ANSI X9.17 standards).

As an example, consider the data encryption standard (DES). The DES was developed initially by IBM and submitted as a Federal Information Processing Standard (FIPS) in the mid-1970s. Even though the design of DES was public, the algorithm met with considerable skepticism from some members of the (largely academic) cryptographic community because the design principles were not disclosed and because of concerns over the key size. Over time, as this community developed improved cryptanalytic methods, DES actually came to be viewed as a well-designed algorithm. DES became widely used, promoting interoperability among a number of security products and applications. DES hardware (and, later, software) was evaluated and certified by the National Institute of Standards and Technology, providing independent assurance of the implementations.
+---------------------------------------------------------------+

2.5 MOBILE CODE

Whatever else the future promises, the committee is convinced that mobile code--programs that move from one processor in a network to another--raises questions that require careful scrutiny. Java "applets" and Microsoft's ActiveX controls are two examples that have attracted much attention in both the popular press and the technical literature. The committee's second workshop devoted two sessions to discussing mobile code.

Mobile code is actually not a new phenomenon:

* An early ARPANET request for comments (RFC) [15] discusses something that today would be labeled mobile code.

* The PostScript language, which has enjoyed over a decade of widespread use for representing printed documents, is a surprisingly powerful language.
PostScript files sent to a printer are actually programs; although they are intended generally to produce printed pages, they sometimes can have undesired and occasionally serious effects.

* Microsoft Word documents can include macros that, when executed, have effects far beyond the document they accompany. When MS Word documents are exchanged using e-mail, the macros are transferred from one system to another, often unbeknown to sender or receiver. For example, the "Concept" virus, one of the most pervasive of the PC viruses today, is an MS Word macro.

* Since their emergence in 1993, graphical user interface (GUI)-oriented Web browsers have encouraged the growth of helper "app[lication]s"--programs that augment the browser's functionality. One typically needs a helper app to view images in a particular format, and browser users are encouraged to download apps as they need them to view a new data type.

* Floppy disks (and other portable media) provide physical mobility that can create a major problem by distributing viruses.

In fact, anyone who has ever downloaded software from a network bulletin board or loaded software from a floppy disk has been a recipient of mobile code, with all the risks implied therein.

The fundamental technical problem that mobile code raises is how to execute imported software in a way that protects the rest of the system from unauthorized processing activities by that software. This problem, like mobile code itself, is not new; it was studied during the 1970s by the operating systems community [16]. Numerous operating system designs for implementing the relevant protection have been proposed, although few have been implemented and even fewer actually offered commercially.

Mobile code as now supported on the Internet is particularly insidious. Users do not, in general, choose or know that they are executing mobile code, and so an attack can be invisible to the victim. Furthermore, mobile code enables large numbers of machines to be assaulted at once. Finally, firewalls (see Section 4) do not provide protection against many mobile code attacks. Firewalls are designed on the assumption that attacks originate from the outside, not, as a rule, from malicious insiders. Also, a Java applet, behaving within the strict limits of the Java security model, can penetrate a firewall that is itself behaving properly (it is intrinsically difficult for a firewall to determine what mobile code will do).

Perhaps most troubling to the committee is that currently proposed approaches to security for mobile code do not stand up to detailed scrutiny. The vendors generally have been responsive in fixing individual bugs in their implementations. But the fundamental approaches from which those implementations have been derived are far--both philosophically and functionally--from implementing the sort of controls that seem to be required. Specifically, it appears that fine-grained access control with a very good user interface will be required to allow users to suitably constrain the execution of mobile code in personal computing environments. Neither the "sandbox" [17] approach of Java nor the code signing of ActiveX provides an adequate basis for such access control:

* ActiveX is based on the expectation that mobile code will be signed. But there is no evidence that signed code meets expected business models or user expectations. Will the thousands of software developers providing code over the Internet today sign their code?
If so, how can users tell which are the "good" software developers, signatures, or pieces of code? Moreover, one compromised certificate/private-key pair could be used by malicious individuals to sign any number of mobile programs. And even if the existence of such compromised credentials were discovered, recovery would involve relying on the weakest part of any public-key system: revocation. This is not revocation across some closed community, but revocation across the entire Internet, whose population consists, by and large, of technically unsophisticated users. Moreover, it is likely that enough prospective vendors of ActiveX code modules will be certified so that there will be many opportunities to introduce malicious code using this route, for example, through poor security management at any one of the vendors that results in signing of "bad" code [18]. The real problem with signed code is that it does not scale. Even upgrading a signed ActiveX module is a problem (especially relating to revocation). * Java involves a combination of a virtual machine and a language definition. Java, per se, is a modern, object-oriented programming language. For mobile code use, it is compiled into a "byte code" that can be downloaded to user machines as part of Web pages; the user never sees the Java source. An interpreter executes the byte code, but only after assorted security checks are performed. Confinement [19] is implemented in conjunction with a Java "security manager," which is itself Java byte code. The security manager defines safe versions of the network and file input-output primitives. The class and name inheritance mechanisms of Java ensure that only these entry points are accessible to the downloaded code. A Java "byte code verifier" and Java "class loader" ensure that the downloaded code conforms to the semantics of a legal Java program and thus cannot evade the language-based strictures. Unfortunately, the conditions for satisfying these requirements are very complex. The constraints have not been formally verified, and a number of bugs have already been found in the implementation of the verifier. While the details vary, most of these bugs provide ways out of the language-based confinement "sandbox" [20]. Use of mobile code in active networks is also being contemplated and has recently attracted DARPA support and interest. The term "active nets" refers to technology that allows subscribers to send packets containing executable code to network infrastructure elements, e.g., routers. The intent is to allow applications to install custom-tailored network services--such as a custom multicast service for multimedia image distribution--in a flexible and efficient fashion. However, the notion of active networks raises serious concerns about the trustworthiness of networks. The static code that now executes in routers and switches is already vulnerable, due to a combination of design, implementation, and management vulnerabilities. The operating systems currently in routers and switches are not especially robust and have not been designed to coexist with potentially hostile code, i.e., active network packets. Thus, developing operating system security technology is key to the feasibility of implementing active networks. At the second workshop, there was a debate about how security-oriented functionality for mobile code should be partitioned among language design, load-time checking, and run-time checking. There was agreement about neither the problems nor how they should be handled.
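To make the run-time-checking end of that spectrum concrete, the sketch below shows the flavor of a fine-grained, policy-driven reference monitor that mediates the resource requests of untrusted code. It is written in Python purely for illustration; the policy format, origin names, and guarded operations are hypothetical and are not the actual Java or ActiveX mechanisms discussed above.

    # Minimal sketch of run-time mediation for mobile code (illustrative only).
    class PolicyViolation(Exception):
        pass

    # Hypothetical per-origin policy: what code from a given origin may touch.
    POLICY = {
        "applet.example.org": {
            "read_file":  {"/tmp/applet-scratch"},   # only its own scratch area
            "write_file": {"/tmp/applet-scratch"},
            "connect":    {"applet.example.org"},    # only back to its origin host
        },
    }

    def check(origin, operation, target):
        # Run-time check performed before every sensitive operation.
        allowed = POLICY.get(origin, {}).get(operation, set())
        if target not in allowed:
            raise PolicyViolation("%s: %s on %s denied" % (origin, operation, target))

    def guarded_open(origin, path, mode="r"):
        check(origin, "write_file" if "w" in mode else "read_file", path)
        return open(path, mode)

    try:
        guarded_open("applet.example.org", "/etc/passwd")   # refused by the policy
    except PolicyViolation as exc:
        print("blocked:", exc)

Any real mechanism must also solve the problems identified above: authoring and distributing such policies, and presenting them to users through a very good interface.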
The workshop also revealed evidence of a trend toward increasing functionality for Java by relaxing restrictions that are imposed today. Technological solutions for controlling some of the security problems raised by mobile code are not implausible. Success in building reasonably priced, fine-grained security mechanisms would go a long way toward limiting the effects of mobile code on hosts. Code-signing schemes, now used for ActiveX and in Netscape and Internet Explorer browsers and proposed for Java, depend on the development and deployment of a suitable public-key infrastructure (including, as noted above, capability for revocation). Exploiting language semantics and theorem-proving, the cornerstone of Java's security implementation, depends on a science base that is far from fully developed but is arguably worth investigating. It is unlikely that any single technology will solve all the security problems that mobile code introduces [21]. A host's integrity must be preserved, but mobile code introduces new vectors for attack and new methods for transmitting information. Thus, the committee plans to study carefully the extent to which mobile code qualitatively changes the security picture. NOTES [8] In the section on processing, bus, transfer, and interface speeds were stated in terms of bytes per second, as is the convention in such discussions. In this section, communication speeds are given as bits per second. In most circumstances today, one byte per second (B/s) is the same as 8 bits per second (b/s). [9] Estimates today range between 3 percent and 8 percent for the portion of the cable plant in the United States that is capable of handling two-way traffic. Because many of the older cable systems are cramped for channel space, data access will need to compete with more traditional forms of entertainment. Where two-way cable systems do exist, subscribers should see effective data rates of several megabits per second, but as deployed this may involve on the order of 27 Mb/s downstream and only about 2 to 3 Mb/s upstream--supporting comparatively low speeds for return traffic (Kagan Associates, April 30, 1996). [10] Today, the DirectPC service from Hughes Corp. provides 400 kb/s to a home for U.S.$30/month. [11] For example, AT&T has announced wireless home service as an alternative to the local telephone company. [12] IP and presentation encodings enable such leverage despite processor heterogeneity. [13] Technology transfer and avoidance of at least some known problems lie behind past government efforts to promulgate guidelines and criteria for trusted systems--the so-called TCSEC and more recent international harmonized criteria that build on the U.S. TCSEC and comparable efforts overseas. Lack of widespread adoption of such guidelines and criteria appears to relate at least as much, and probably more, to non-technological aspects (e.g., distrust of or limited communication with government sponsors of these programs, delays associated with compliance testing, little market demand) as to issues of technical compliance (e.g., difficulty in satisfying the standard). [14] Markoff, John. 1997. "Code Set Up to Shield Cellular Calls Breached," The New York Times, March 20 [n.p., on-line version]. Schneier, Bruce, et al. 1997. "Telecommunications Industry Association algorithm for digital telephones fails under simple cryptanalysis," press release posted at http://www.counterpane.com/cmea.html, March 20. [15] "Decode-Encode Language (DEL)," RFC 5, J. Rulifson, June 2, 1969.
[16] However, it would appear that the problem is not considered by all to be entirely solved. A session at the committee's second workshop included discussions of two recently developed methods for solving this protection problem--software fault-isolation and proof-carrying code. Each reduces run-time costs by performing program analysis before execution commences. [17] The term sandboxing was introduced by Robert Wahbe et al. to refer to a mechanism that restricts accesses by a program to only certain regions of memory by modifying the object code of that program before execution is started. See Wahbe, Robert, Steven Lucco, Thomas E. Anderson, and Susan L. Graham. 1993. "Efficient software-based fault isolation." Proceedings of the 14th ACM Symposium on Operating Systems Principles, Operating Systems Review 27, December, pp. 203-216. [18] Worse is the possibility that the vendor could sign code that interprets an input stream, and that that code could have a subtle vulnerability that was exploited by a malicious or malfunctioning input stream whose authorship was anonymous even to the weak code-signing Authenticode mechanism. This has actually occurred with the Shockwave vulnerability: the Shockwave interpreter is signed by a reputable vendor, but that does not matter--it is the input that contains the attack that can kill you. [19] Boebert, W.E., and R.Y. Kain. 1996. "A Further Note on the Confinement Problem," Proceedings of the IEEE 1996 International Carnahan Conference on Security Technology. IEEE, New York, pp. 198-203. [20] McGraw, Gary, and Edward Felten. 1996. Java Security: Hostile Applets, Holes, and Antidotes. John Wiley and Sons, New York. [21] See, for example, Feigenbaum, Joan, and Peter Lee. 1997. "Trust Management and Proof-Carrying Code in Secure-Mobile-Code Applications," DARPA Workshop on Foundations for Secure Mobile Code, March 26-28. ---------- 3 Systems of Systems The trustworthiness of a communications infrastructure can be crucial to the trustworthiness of a system built using that fabric. A 1995 National Security Telecommunications Advisory Committee report asserts that 90 percent of the U.S. government's communications capabilities are being provided by public networks [22]. There are both economic and pragmatic forces behind this change. First, government policy today is emphasizing greater use of commercial products and services. Second, the public networks have grown dramatically, both in their geographic scope and functionality. Third, much of the special information technology available to the government has been transferred to the public sector or is obsolete. Constructive steps may be possible to promote aspects of NII design and management that enhance trustworthiness as required for government use [23]. Today's Internet, the likely predecessor of a key element in tomorrow's NII, is a good place to look for the challenges that must be faced. For certain users, the services provided by the Internet today are perfectly adequate. But users seeking to implement trustworthy systems will find the Internet seriously flawed [24]. Current conditions will be affected by a movement to quality-of-service routing, which will give the Internet more of the flavor of circuit-switched networks, as well as by other changes in Internet protocols (some intended to increase security). Box 4 outlines some of the Internet's vulnerabilities. Ironically, survivability was a paramount concern for an Internet predecessor, the ARPANET.
Survivability there was achieved by diversity and replication of interconnection paths and by employing protocols that routed traffic around congestion, failed links, and crashed switches [25]. There has been some increase in structure and hierarchy in moving from the ARPANET to the Internet, but Internet design and culture, and the involvement of a very diverse set of players, favor minimal control and structure. A technical challenge warranting the committee's study is clear: how to augment the structure (which includes the architecture, but also aspects of operation and management) with the mechanisms to enable Internet use in building trustworthy systems. Some improvements in trustworthiness are likely to happen naturally, in response to commercial interest in using the Internet for electronic commerce. But the problems of supporting higher levels of trustworthiness remain. +---------------------------------------------------------------+ BOX 4 Internet Vulnerabilities End-system Vulnerabilities ------------------------------------------------------------- * Services that rely on network addresses for authentication ------------------------------------------------------------- * Plaintext passwords ------------------------------------------------------------- * Eavesdropping on data transfers ------------------------------------------------------------- * Buggy code ------------------------------------------------------------- * Consumption of resources Routing System Vulnerabilities ------------------------------------------------------------- * Traffic black holes ------------------------------------------------------------- * Rerouting to permit eavesdropping ------------------------------------------------------------- * Rerouting to permit impersonation ------------------------------------------------------------- * Rerouting to enable connection hijacking ------------------------------------------------------------- * Route trashing for economic gain (e.g., make competitors look bad) Domain Name System Vulnerabilities ------------------------------------------------------------- * Mail misdirection ------------------------------------------------------------- * Misdirection of other connections ------------------------------------------------------------- * Services that rely on network names for authentication ------------------------------------------------------------- * Organization information leakage +---------------------------------------------------------------+ 3.1 A MINIMUM ESSENTIAL INFORMATION INFRASTRUCTURE The notion of a "minimum essential information infrastructure" (MEII) has been proposed as an alternative to hardening the entire NII to provide a communications infrastructure for implementing trustworthy systems [26]. It is a concept concerned, in large part, with the availability dimension of trustworthiness. As a subset of the NII, the MEII would have sufficient capability to permit essential services to continue despite failures and attacks. But "minimum" and "essential" turn out to be matters of degree and are defined largely by the application. Box 5 gives a taxonomy of applications that might be supported by an MEII. Clearly, what constitutes "essential" depends on context. For example, losing water or power for a day in one city is troublesome, but losing it for a week is unacceptable, as is having it out for even a day for an entire state. 
What constitutes a "minimum" also depends on context: a hospital has different minimum information needs for "normal" operation (e.g., patient health records, billing and insurance records) than it does during a civil disaster. Finally, the properties that should be preserved by an MEII depend on the customer: local law enforcement agents may not require secrecy in communications when handling a civil disaster but would in day-to-day crime fighting. +---------------------------------------------------------------+ BOX 5 Taxonomy of Applications to Be Supported by a Minimum Essential Information Infrastructure Military. Short-term strategic communications and information management needs of the Armed Forces as required to operate national defense systems, gather intelligence, and conduct operations against hostile powers. Non-military federal government. Communications and information needs of the federal government to communicate with the military and local governments, to coordinate civil responses to natural disasters, and to direct national law enforcement against internal threats, terrorists, and organized crime. National information and news. Infrastructure required to communicate national issues rapidly to the U.S. public. Current examples include national radio and television networks (both broadcast and cable) and the national emergency broadcast program and national newspapers.* National power and telecommunications services. Communications required to operate electric power distribution grids and the public switched telephone network at a moderate level allowing non-military communication. National economy. Communications required to operate public and private banking systems, stock exchanges, and other economic institutions; the concept may also extend to social service programs, which include income distribution components. Local government. Communications and information management needs of state and municipal governments to coordinate civil responses to natural disasters, to communicate with federal authorities, and to direct local law enforcement, fire, and health and safety personnel. Local information and news. Infrastructure required to communicate local information to a local area rapidly. Current examples include local television, radio, and newspapers. Non-government civil. Communications and information management needs of civil institutions, such as the Red Cross, hospitals, ambulance services, and other critical and safety-related civil institutions. Local power and telecommunications. Communications required to operate local power grid and telephony network at a restricted level. Local economic and mercantile. Communication infrastructure required to operate local banks, markets, stores, and other essential mercantile infrastructure. Transportation. Communications infrastructure needed to manage air traffic, signaling and control infrastructure for controlling railroads, and infrastructure for automobile traffic signaling and control of congestion in cities. ______________ * The value of communications with the public in coping with disasters is discussed in some detail in Computer Science and Telecommunications Board, National Research Council, 1996, Computing and Communications in the Extreme: Research for Crisis Management and Other Applications, National Academy Press, Washington, D.C. 
+---------------------------------------------------------------+ The notion of a single MEII has not stood up to the committee's analysis, but a family of MEIIs may be sensible. Such MEIIs probably would share certain characteristics. First, an MEII should degrade gracefully, shedding less essential functions if necessary to preserve more essential functions. For example, low-speed communications channels might remain available after high-speed ones are gone; recent copies of data might, in some cases, be used in place of the most current data [27]. Second, an MEII should, to the extent possible, be able to function after being disconnected from other elements of the infrastructure. An example is the public switched telephone network, which is backed up by battery power and so can continue operating for a few hours after a power failure, even when telephone company emergency generators are not functioning. The Internet, on the other hand, has many interdependent subsystems; it is not always easy to restart them independently. For example, a router that tries to download a configuration file from a remote server depends on the availability of neighboring routers. Domain Name Service (DNS) servers are (in essence) general-purpose computers; these often cannot boot properly if routers are down. Other machines, including many network operations center workstations, will not boot if the DNS is down. As this sketch suggests, an MEII must be designed with restart and recovery in mind. It should be possible to restore the operation of an MEII, starting from nothing if necessary. One problem with hardening predetermined subsets of the NII to form MEIIs is the incompatibilities that inevitably would be introduced as the non-hardened subset is upgraded to exploit new technologies. In fact, the hardened subset might be prevented from taking advantage of the full capabilities of the NII because of these incompatibilities. Limiting the range of NII resources that might be used seems imprudent, especially in the event of an attack, because the full diversity of the NII might be the most effective way to resist it. This insight suggests an alternative to having multiple MEIIs as a strategy for maintaining essential services: organization of the NII so that it has a spectrum of possible operating modes. At one end of the spectrum, resource utilization is optimized. At the other end--entered in response to an attack--routings are employed that may be suboptimal but are more trustworthy because they are diverse and replicated. In this more conservative mode, packets are duplicated or fragmented [28] using technology that is effective for communicating information even when a significant fraction of the NII is compromised [29]. Thus, the MEII becomes an operating mode of the entire NII rather than a specific subset of the NII. For a multimode MEII implementation to be viable, the NII must possess some degree of diversity. Thus, there may well come a point at which hardening NII components by replicating a single system known to be secure should give way to design goals driven by diversity. Other technical problems must be addressed as well. First, anticipating the occurrence of an attack is a prerequisite to making an operating-mode change that then constitutes a defense. Tools for monitoring the global status of the network also become important, since a coordinated attack might be recognizable only by observing a significant fraction of the network.
This represents an interesting aspect of the intrusion detection problem. Yet a third strategy for implementing an MEII is use of a service broker that would monitor the status of the communications infrastructure. A service broker would sense problems and provide information to restore service dynamically, interconnecting islands of unaffected parts of the NII. For example, it might be used to draft into priority service unaffected parts of the NII that normally operate as private intranets. 3.2 MODELS AND CONTROL FOR SYSTEMS OF SYSTEMS Many of our large-scale infrastructures (electric power distribution, telecommunications networks, and so on) are managed with help from models of various kinds. In these systems, delays associated with adding or reallocating capacity are typically much longer than the time scale on which demands for that capacity may change; it takes time to "spin up" a generator or to negotiate the purchase of additional power or bandwidth. Better management made possible by using models to inform day-to-day operations can mean that less capacity needs to be kept in reserve, leading to efficiency and cost savings [30]. Moreover, having the models is invaluable for coping with failures or other extraordinary events. An unaided human operations staff could not identify plausible responses to such discontinuities quickly enough, given the complexity of the constraints. On the other hand, attention must be paid to ensuring that the model receives reliable inputs, since attackers could corrupt the inputs to the models and achieve their desired results without directly attacking the system. When systems are connected, either because they are linked by dependencies or because they communicate explicitly, the result is often a system that exhibits surprising behavior, including instabilities. This can happen because the interconnections create new control channels, and these channels may affect system behavior. Consequently, the model for an aggregate system is not just a combination of models for each of its components. Electric power distribution is an infrastructure in which this phenomenon has been very visible. The Northeast power blackout of 1965 was the result of a failure cascading through the electric power transmission grid. Industry-wide reforms ensued, but the United States still experiences regional power outages that can be traced to relatively minor events propagating through the grid in surprising ways [31]. Effective models--a tractable control theory--for systems of systems do not exist. Moreover, the recent trend toward reduction in spare on-line capacity by electric utilities means that sharp changes in demand are no longer damped out close to their sources, and these impulses therefore can propagate throughout the transmission grid. Prohibiting the interconnection of systems is a potential solution, but typically not a realistic one for both economic and political reasons. The lack of a technological basis for predicting or controlling the behavior of large systems of systems thus concerns the committee [32], not only because of the particular systems that are already interconnected but also because of what could be interconnected. For example, the stock market crash in 1987 is widely attributed to the advent of programmed trading. The crash led the New York Stock Exchange to forbid programmed trading whenever the market swings by more than a certain threshold.
Yet, even with these "circuit breakers" in place, market activity must be--and is--closely observed by a centralized authority in order to prevent another systemically introduced instability. As financial functions migrate to the Internet (e.g., electronic commerce and check clearing), the possibility of a serious disruption to the financial system will increase, and so will the consequences of our failure to understand how these functions interact to create dangerous situations. In many cases, once the economy adapts to high-speed delivery of information, delays may cause panic, instability, or even greater load on the system. These situations seem likely to arise at times of national crisis and, moreover, have the potential to create national crises. New theories seem to be needed for reasoning about systems of systems, and that will require new research. NOTES [22] An Assessment of the Risk to the Security of Public Networks, Dec. 1995, National Security Telecommunications Advisory Committee (NSTAC), NSIE report, page 2. Also, note that until recently, critical (essential) and sensitive (private) U.S. government systems were independent and isolated from publicly accessible systems, having their own communications capabilities that were leased or owned and that met government specifications for the various dimensions of trustworthiness (e.g., quality of service, survivability, security). For example, packet-switched networks handling classified government data typically were constructed using dedicated routers or switches connected by link-encrypted circuits. The switches and routers, as well as the computers attached to these networks, were physically protected and electrically isolated from public telecommunications networks, e.g., the Internet and the public switched telecommunications network (PSTN). The systems were custom designed and the facilities were dedicated to a specific use. Government systems for handling sensitive data were typically totally isolated for reasons of security; critical systems included arrangements to use public systems, like the PSTN, for backup. [23] One of the issues that has surfaced in NII policy discussions, for example, has been the nature of and need for a possible "emergency lane on the information highway," intended in part to meet crisis management requirements. [24] For example, the routing protocols and the associated execution environments exhibit design vulnerabilities, e.g., unauthenticated exchanges of routing data. The implementation of these protocols and the associated software in routers exhibit numerous implementation vulnerabilities, e.g., susceptibility to flooding attacks effected at lower protocol layers. And management of these routing systems is vulnerable to misconfiguration, inadequate filtering of offered connectivity data, and social engineering attacks. Although BGP-5 is expected to add authentication and integrity checking to inter-router messages, there are still no provisions for secure authorization mechanisms. Such mechanisms would provide a basis for determining which portions of the address space a routing domain/autonomous system (and its BGP routers) was authorized to represent in inter-routing domain advertisements. Research is just getting under way on means to provide such authorization mechanisms as an adjunct to exterior routing protocols. [25] By contrast, the PSTN and X.25 networks at that time and for many years afterward simply dropped connections (calls) in the event of a failure.
[26] The MEII extends availability concerns previously centered on governmental infrastructure to elements of the civil infrastructure. There are compelling reasons to believe that hardening the entire NII would be intractable: to begin with, constructing secure systems is notoriously difficult and expensive, especially for a system of such large (national) scale. Further, the NII is an evolving entity that grows in response to demand and changes in technology. Even defining the boundaries of the NII as a system, which would be prerequisite to hardening the entire system, is difficult, and it is hard to imagine a central authority that could mandate the hardening of the system. See, for example, RAND Corporation, 1996, An Exploration of Cyberspace Security R&D Investment Strategies for DARPA: The Day After . . . in Cyberspace II, Washington, D.C. It can be downloaded from http://www.rand.org/publications/MR/MR797/index.html. See also Defense Science Board, Office of the Under Secretary of Defense for Acquisition and Technology, 1996, Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D), Defense Science Board, Washington, D.C., November; and Robert H. Anderson (Head of the Information Sciences Group, RAND Corporation), June 25, 1996, testimony before the Permanent Subcommittee on Investigations, Senator Bill Roth, Chairman, Government Affairs Committee, U.S. Senate. [27] Applications that depend on a gracefully degrading MEII must themselves be able to function in the full spectrum of resource availability that such an MEII might provide. [28] See, for example, Rabin, M.O. 1989. "Dispersal of Information for Security, Load Balancing, and Fault Tolerance," Journal of the ACM 36(2):335-348. [29] Note that this multimode scheme implements resistance to attacks using techniques that have traditionally been used for supporting fault tolerance, something that seems especially attractive because a single mechanism is then being used to satisfy multiple requirements for trustworthiness. On the other hand, single mechanisms do present a common failure mode risk. [30] An unintended consequence of depending on these models is the replacement of senior operations staff by workers who lack--and will never acquire--in-depth knowledge about the domain. This puts at risk the infrastructure being controlled, since there is less expertise available to cope with problems not handled by the model. [31] According to an article in the January 1997 IEEE Spectrum (pp. 24-25), the July 2, 1996, disturbance in the Western Systems Coordinating Council (WSCC) system stemmed from a tree shorting a power line running to a power plant in Idaho. Then, as protective measures were taken and generators tripped, cascading outages soon downed all three of the main California-Oregon trunk transmission lines and formed at least five islands (disconnected subregions) within the grid system. Nearly 2 million customers had their service interrupted. [32] Such difficulties in prediction and control naturally raise questions about modeling and simulation tools, which the committee will consider in its ongoing work. ---------- 4 Some Extant Technologies for Individual Dimensions of Trustworthiness Trustworthiness encompasses a broad collection of properties, and, consequently, the committee is studying a wide-ranging collection of technologies.
Some of those are surveyed in this section to illustrate elements and underpinnings of the committee's ongoing analysis, and to elicit comments and other inputs at this intermediate stage of its deliberations. Differences in the depth of coverage are not intended to convey any sense of the relative importance of these technologies; they are simply an artifact of the timing of this report. 4.1 CRYPTOGRAPHY Cryptography can be employed for implementing a variety of different security services in computer and network environments. Specifically, it can be used to: * implement confidentiality for stored data and data in transit; * authenticate the identity of users and computers (processes), especially in network environments; * authenticate the source and the integrity of data, including software downloaded from network servers or data stored on such servers; and * support non-repudiation, a service often viewed as essential to the broad use of electronic commerce [33]. If cryptography had been incorporated aggressively into the designs of operating systems and network protocols in the 1970s and 1980s, there would be widespread use of challenge-response styles of authentication, wider use of public-key cryptography, and general encryption of TCP, Telnet, and FTP streams in the Internet (subject to the computational cost of performing encryption in real time). But there are limits to what can be accomplished by cryptography. Cryptographic mechanisms used to verify the integrity of data do not prevent its destruction, for example. Moreover, many security problems simply do not have cryptographic solutions. An analysis of all CERT (Computer Emergency Response Team; based at Carnegie Mellon University) advisories issued in 1996 and the latter half of 1995 reveals that of 37 security vulnerabilities described, at most 2 could have been prevented by adequate use of cryptographic technology. On the other hand, an analysis of all CERT advisories shows that half of the problems could have been prevented by suitable use of cryptography. Analysis based only on more recent CERT reports may be more representative of current problems or may just be skewed by other factors. Also, neither set of analyses takes into consideration how often the vulnerabilities cited are exploited. One might even argue that adding cryptography to a system that is insecure can have the effect of making easy attacks harder and the hard attacks more cost-effective for the attacker. One example, discussed in Section 2.5 ("Mobile Code"), is the reliance on digital signatures to determine whether a particular piece of software is "certified" as being safe to execute. If the digital signature is being checked by a system that is largely insecure, then a successful attack on the checking mechanism can destroy the protection afforded by the signature. Perhaps the greatest technical impediment to the widespread successful use of cryptography is key management. Whenever cryptographically protected data is exchanged between two (or more) parties, keys must also be distributed between (or among) the participating parties. When symmetric cryptography (e.g., DES) is employed, this problem is usually addressed through manual distribution of secret keys or through the use of a key distribution center (KDC; e.g., "Kerberos" [34]). In either case, confidentiality, integrity, and authenticity must be provided for the keys. Manual distribution is difficult and expensive, especially because up to order N^2 keys are needed for a community of size N.
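The quadratic cost of purely manual distribution is easy to see with a short calculation; the sketch below (Python, for illustration only) counts the pairwise secret keys needed if every pair of parties in a community of size N shares its own key.

    # Illustrative only: number of pairwise secret keys if every pair of parties
    # in a community of size N shares its own symmetric key -- order N^2 growth.
    def pairwise_keys(n):
        return n * (n - 1) // 2

    for n in (10, 100, 1000, 10000):
        print(n, pairwise_keys(n))
    # 10 parties need 45 keys; 1,000 parties already need 499,500;
    # and 10,000 parties need 49,995,000.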
A KDC-based system can support a community of size N with a bootstrapping phase of manual key distribution of order N, followed by transition to an automated mode of operation. A KDC typically serves a single community under the administrative control of one entity. To extend the scope of coverage to multiple communities, KDCs need to exchange keys, and this usually requires initial, manual key distribution of order N^2, for N KDCs. Moreover, a KDC has access to keys that, if compromised, would undermine the security of all of the subscribers within the community served by the KDC. Thus, even automated key management technology based on symmetric cryptography tends not to scale well, and concerns remain about the security of KDCs. Asymmetric or public-key cryptography (e.g., RSA or Diffie-Hellman) has the potential to significantly mitigate the scaling and trust problems associated with symmetric key management systems. Public-key key management systems rely on the use of certificates, digitally signed data structures used to bind public keys to identity (and perhaps to authorize access to information). Certification authorities (CAs) vouch for the accuracy of these bindings and transform the problem of public-key distribution into an order-N problem. Thus, using CAs avoids some of the security problems associated with KDCs. The compromise of a CA still can undermine subscriber security. Such compromises are likely to be detectable, and their impact can be mitigated through the use of appropriate protocols; they generally do not affect the security of transactions that have taken place previously (in contrast, a compromised KDC may hold the keys used for old transactions). The major technical issues associated with the use of public-key cryptography can be categorized as public-key management, private-key management, and recovery from key loss and compromise. Public-key management involves reliably communicating public keys to the parties who need them. A system for doing so is called a public-key infrastructure (PKI). A PKI embodies the set of policies, procedures, and mechanisms needed to create and revoke certificates. Various sorts of PKIs seem likely to arise, including both closed (proprietary) systems and public systems. The former are now being deployed to facilitate electronic commerce for existing client bases, e.g., systems supporting financial services customers and credit card customers and merchants. In these systems, the certificates typically identify a user only in a narrow context and are not usable outside that context. Such PKIs parallel the hard-copy credentials that most of us accumulate in everyday life (e.g., credit cards, employee identification cards, professional society memberships). There also is a nascent public certification authority industry, issuing certificates to users and to organizations to facilitate a wide range of applications. The CAs in this context do not "own" the name spaces in which they certify users or organizations, but rather accept collateral evidence of the accuracy of the identity assertions provided by the would-be certificate holders. An open question is whether the U.S. and state governments have a role to play here, acting as CAs for populations in much the same way that they issue drivers' licenses and passports in the hardcopy environment. However, it is worth noting that there is no single form of identification credential that universally, unambiguously, and meaningfully identifies people in the physical world.
Thus it is unlikely that any individual certificate issued by any single CA will prove appropriate for all transactions in the NII context. Private-key management is the set of procedures for making a private key available for use by its owner with protection against either disclosure or unauthorized use. The securest way to manage private keys is to use a hardware token that can perform cryptographic operations using the private key but that will not disclose it. With such a device, a private key can be kept safe from disclosure even if its user or a server is running untrustworthy software. Without cryptographic capability, protecting the key against unintended and unauthorized use in the presence of untrustworthy software is a much harder problem. Protecting users' private keys in portable tokens (e.g., "smart cards") is not currently practical in many contexts because the required hardware interfaces to the tokens are not ubiquitous. Recovery from key loss and compromise is perhaps the most difficult problem in the operation of large-scale cryptographic systems, and additional research would be useful. Ideally, one should be able to revoke a compromised key immediately upon notification of its compromise. This capability requires communication between the point of notification and every end point making use of the public key. Such communication can be done either by posting certificate revocation lists (CRLs) or by having a verifier check with an on-line trusted agent. In either case, the requirement for reliable and timely communication reduces the scalability of public-key cryptography. If a user forgets his password or loses or destroys his token, it is vital that there be a rapid way to get the user "back on line." But mechanisms for key recovery introduce new points of attack within the system, and thus reduce the advantages of a public-key-based system. In spite of these challenges, it is generally recognized that systems incorporating public-key cryptography are the most appropriate technology for securing a wide range of applications, and that deploying a PKI should be a top priority. PKIs must manage the keys of both users and automated trusted entities, such as computers or even processes on computers. Although customers of a bank today typically confirm the bank's authenticity by checking the sign on the door, users working over a computer network must be assured that they are connecting to authenticated services and must themselves be authenticated to the server. While the appropriate technology for protecting private keys on servers differs from that for protecting the identity of people (e.g., portability is not important for computers), common mechanisms can serve for the management of associated certificates [35]. In both cases, the important challenge is to develop procedures for reliably determining the identity of the requester and delivering that information and the public key to the certifier. 4.1.1 What Makes for a Winning Solution? Despite significant technical advances over the last two decades, relatively little use is made of cryptography in commercial and personal computer and network environments. The DOD Fortezza card, an element of the Multilevel Information Systems Security Initiative (MISSI) that industry as well as researchers supported by DOD agencies have been encouraged by DOD to use, is an attempt to provide a cryptographic building block in support of system security for unclassified but sensitive government uses. It has met with mixed success. 
Because it is a hardware-based solution, concerns about cost and interface compatibility may limit its adoption, even in government circles. As a core cryptographic module, it offers reasonable performance and good security, but ease of use (see criteria described below) ultimately depends more on application software than on cryptographic module designs per se. It is expected that Fortezza will find significant deployment in conjunction with the Defense Message System (DMS), where it provides the basis for secure e-mail, and a Fortezza-enabled version of the Netscape browser also is available. If cryptography is to be widely used, it must be inexpensive and easy to use, and must not impose significant performance burdens. These criteria for success raise technical issues. For example, one way to satisfy the cost and ease-of-use criteria would be to embed cryptography into applications in a way that is transparent to users. The IBM/Lotus Notes product succeeds at doing this; however, Notes uses a non-standard certificate format, and it is but one closed application. Current versions of World Wide Web browser software also make use of cryptography for server and client authentication, and offer data confidentiality and integrity in a very simple and largely transparent fashion via the Secure Socket Layer (SSL), which provides for an encrypted and optionally authenticated channel. Generally, the authentication is one-way--it authenticates a Web server to the client. Client-side certificates are available but are rarely used. But browsers are not the only means of communication in the Internet, and the security offered by the SSL protocol is constrained due to its position in the protocol stack. Despite the widespread use of these encryption products, which is encouraging, there are many contexts not addressed by the security offered in these products (e.g., e-mail and multicast real-time communication). Moreover, these products merely secure communication using cryptography, and that does not make the applications secure in a larger sense. Attacks that exploit vulnerabilities in the underlying operating systems can still undermine the security afforded by these products. 4.2 FIREWALLS Firewalls are a common point-defense for computer networks [36]. That is, they are a mechanism that is deployed at the boundary between a trusted enclave and an untrusted computer network. The firewall examines traffic entering or leaving the enclave and permits only authorized communications to transit the firewall. Traffic addressed to exit or enter the enclave is controlled based on specified criteria [37]. Computers inside the enclave are thus not subject to direct attack from outside. In theory, firewalls should not be necessary. If a single computer can be hardened against attack, in principle all computers can be; and, if all computers on a network are hardened, then there is no need for an additional, external defense. However, most computer systems are penetrable. In practice, firewalls are an effective and relatively inexpensive approach to improving security for an enclave. Moreover, hardening of computers is rarely simple. Many systems need to run protocols for which suitable protections are unavailable.
For example, even when cryptographic authentication could have been provided, vendors have often chosen to rely on authentication based on network addresses, thus leaving end users no say in the matter. Users then have no choice but to rely on outboard protective measures such as firewalls. A more subtle issue is that security problems are often due to "buggy" software. The best cryptography in the world will not protect a service that has a hacker at one end of the connection and software with a back door at the other end; rather, one ends up with a secure, authenticated connection to someone who is about to penetrate the system. In the large, the computer industry has been unable to produce software that satisfies its requirements. Consequently, there is no choice but to block external access to vulnerable services. Furthermore, since any service might suffer from such weaknesses, prudence often dictates offering only minimal services to the outside world. Users of general-purpose computers expect to find a whole range of services available to them, ranging from e-mail to directory servers to internal World Wide Web hosts. Firewalls permit such services to be offered to insiders, while attempting to deny access to outsiders. Note that use of system privileges by network services is rarely at issue. Computer security is often best accomplished by keeping intruders out of the system in the first place, because today's machines are much weaker at resisting inside attacks. Thus, permitting an attacker any sort of user-level access to a machine is often tantamount to compromising the total system. Seen in this light, a firewall is not a network security mechanism. Rather, it is a network solution to flawed host software. Finally, firewalls are often deployed to implement a defense in depth. Even if a system is believed to be secure, with proper authentication and presumably reliable software, a firewall can provide a layer of insurance. 4.2.1 Firewalls and Policies Another way to view firewalls is as representing a single point for policy control over a network. A firewall applies controls over all packets entering and leaving an enclave. By using a single machine, or perhaps a small, controlled set of machines, namely firewalls, it is easy to ensure that a consistent security policy is being applied; all of the necessary files are at the fingertips of the firewall's administrators. In a network of machines, there will often be no way to verify each machine's configuration short of physically being present at its console, an impractical requirement in an organization with hundreds or thousands of machines [38]. Policies that can be enforced at the firewall are limited to restrictions on inbound and outbound traffic. For example, a policy that required logging of all outbound mail could be implemented by permitting only an authorized mail gateway to talk to the outside; all other machines would relay their mail via this gateway. Similarly, proper authentication could be demanded of anyone who wanted to export a file via the network. There are limits to how effective these restrictions can be. First, sufficiently determined employees can often bypass restrictions. A user who, for example, cannot talk directly to a Web server--a policy implemented by restricting outbound access to port 80 (the standard port for HTTP requests)--might then set up a Web proxy server that listens on port 8000 (a commonly used port number for the local Web proxy) on some outside machine.
Second, care must be exercised in deciding just what protocols are allowed to pass through the firewall. If too many protocols are allowed through, then the firewall will be undermined because any of these protocols might have a fatal flaw. Dial-up lines, administrative access points, and maintenance ports inside the perimeter of trustworthiness, even if nominally secret, can be a very serious vulnerability, because they are used to control the network. Indeed, there is substantial evidence that the hacker community is aware of this vulnerability and is actively seeking out such points. 4.2.2 Positioning Firewalls Firewalls are a perimeter defense, making them sensible mechanisms only in settings where a well-defined border exists. Networks with unstructured connectivity cannot be protected by firewalls. For example, a corporate network with many links to suppliers, customers, and joint-venture partners is a poor candidate for a firewall; too much traffic would end up bypassing the checkpoint. Also, Internet and other network service providers, which tend to have the bulk of their equipment outside of such perimeters, may derive less direct value from a firewall. It would be unusual for a Web server to be protected by a firewall: the most vulnerable point is the Web server software itself, and, by definition, that must be exposed to the outside. Similarly, routers--the primary infrastructure of most Internet service providers--must be reachable, more or less by definition. These constraints provide guidance about where a firewall might sensibly be positioned. The network to be protected must be under the control of the firewall administrator, to guard against unknown outside connections. The firewall must serve a reasonably small user community, to lessen the chances of disloyalty. And the firewall must protect a community that has a reasonably consistent security policy, or too many different services will be allowed to pass through. Accordingly, well-placed firewalls demarcate the boundaries between different security domains. These boundaries will be largely independent of geography: that two organizations share the same building does not mean that they do not need a firewall between them. The overall security policy should dictate the network topology, rather than the other way around--even though such network topologies are likely to be more expensive than topologies set entirely by geography. Firewalls are not a panacea for network security woes. There are a number of different ways in which a firewall can fail to protect an enclave. Most obviously, a firewall does not protect against inside attacks. While the exact figures are open to some debate, there is little doubt that many computer security problems originate with trusted employees. A firewall cannot help in this situation. The second obvious way a firewall can fail is when connectivity to the outside bypasses the firewall. This may occur via an authorized link to some outside organization, through an unprotected modem pool, or perhaps by a careless employee dialing out to an Internet service provider. Similarly, protocols that are allowed through the firewall can be weak points. Some protocols must be allowed to transit the firewall--if nothing is to be allowed through, an air gap will be a cheaper solution--but these protocols must be carefully limited and monitored. Developing policies to be enforced by the firewall is probably the most important factor in the success or failure of a firewall.
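As a concrete (and deliberately simplified) illustration of the kind of policy a firewall can enforce, the sketch below expresses the "mail only via an authorized gateway" policy mentioned earlier as a set of ordered packet-filter rules. The rule format, the enclave addresses, and the gateway address are hypothetical; real products use their own configuration languages and match on many more fields. This conveys only the flavor of such a rule set, written in Python.

    from ipaddress import ip_address, ip_network

    ENCLAVE      = ip_network("192.0.2.0/24")   # hypothetical internal network
    MAIL_GATEWAY = ip_address("192.0.2.25")     # only host permitted to send mail out

    RULES = [
        # (direction, predicate, action) -- the first matching rule wins
        ("out", lambda p: p["dport"] == 25 and p["src"] == MAIL_GATEWAY, "permit"),
        ("out", lambda p: p["dport"] == 25,                              "deny"),    # force mail via gateway
        ("out", lambda p: p["dport"] == 80,                              "permit"),  # outbound Web
        ("in",  lambda p: p["dport"] == 25 and p["dst"] == MAIL_GATEWAY, "permit"),  # inbound mail to gateway only
        (None,  lambda p: True,                                          "deny"),    # default: deny everything else
    ]

    def filter_packet(packet):
        p = dict(packet, src=ip_address(packet["src"]), dst=ip_address(packet["dst"]))
        direction = "out" if p["src"] in ENCLAVE else "in"
        for rule_direction, predicate, action in RULES:
            if rule_direction in (None, direction) and predicate(p):
                return action
        return "deny"

    # An inside workstation speaking SMTP directly to the outside is refused;
    # the authorized mail gateway is allowed through.
    print(filter_packet({"src": "192.0.2.99", "dst": "203.0.113.7", "dport": 25}))  # deny
    print(filter_packet({"src": "192.0.2.25", "dst": "203.0.113.7", "dport": 25}))  # permit

Note that everything not explicitly permitted is denied by the final rule; deciding which protocols to allow through is exactly the judgment the preceding paragraphs caution about.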
More subtle vulnerabilities arise because of the level at which the firewall operates. Any firewall operates at some level of the protocol stack. Attacks directed at a higher-level protocol cannot be stopped by the firewall; the firewall is transparent to any such messages. For example, consider a firewall implemented by a router-based packet filter. Such a filter would be unaware of, and hence unable to protect against, weaknesses in the Simple Mail Transfer Protocol (SMTP) because SMTP is at a higher level than the routing protocol. Similarly, an SMTP-level relay could not deal with attacks based on mail headers. A fundamental limitation of firewall technology is that a firewall cannot protect against attacks involving messages that the firewall cannot itself understand. Not only do higher-level protocols provide a means of hiding attacks in this way, but the use of encrypted messages by lower-level protocols can also hide attacks. It is obviously impossible for a firewall to inspect the contents of an encrypted packet. The usual solution is to decrypt messages at the firewall. In some cases, multiple levels of cryptographic protection must be used, with an outer layer permitting passage through the firewall and the inner layer providing end-to-end encryption. The "rules" for making protocols firewall-friendly include the following: (1) use outbound calls only; (2) favor configurable mechanisms for indirection (for example, the X11 windowing protocol can be passed through a firewall easily, because it is easy to redirect an application to use a firewall proxy instead of the genuine server; mail can be redirected via MX records, which allows easy firewall processing); (3) avoid in-band setup of secondary channels, such as the FTP data channel, because such call setup is hard for a firewall to detect and process; (4) make it easy to distinguish different services at a low level (for example, multiplexing different protocols (e.g., TCPMUX) or using dynamic port numbers (remote procedure calls) causes problems); and (5) favor circuit-based protocols, which are easier to handle than datagram-based ones. At the limit, to work for any application, a firewall would have to reproduce the functionality of the application that it is guarding and then interpret all messages destined for that application, evaluate what their effect would be if executed on the end-user system, and determine whether that effect is consistent with the security policy. The firewall's pass or suppress decision would then depend on the exact effect the packet would have on the application being guarded. Not surprisingly, such a firewall might approach the size and complexity of the application it is guarding. There would be less confidence in the ability of such a large firewall to satisfy its requirements and provide the expected protection. Thus, a key to the success of firewalls is the existence of checks that can be coded simply despite the complexity of the applications that they guard. It is interesting to speculate on practical methods for augmenting firewalls with application semantics. For example, it might be possible to organize network applications in a way that enables structures affecting security to be easily extracted and moved or replicated in a firewall. Such structures would include mechanisms for authentication and for use and release of privileges.
An improved network programming paradigm could eliminate the need for some checks in the firewall if, for example, the paradigm itself guaranteed prevention of array overflows and other vulnerabilities that firewall filtering now screens for. In another vein, modern firewalls often supply "proxies" to relay particular services. Can this be done more generically? Is it safe to do it at all? 4.3 SECURITY MODELS Specifications for system functionality prescribe desired behaviors. Security specifications and policies constrain access to system resources (information, processing, and communications) so that they are allocated fairly and as authorized. Authentication of users and their access requests is crucial for these specifications to be effective. Examples of security properties include: * confidentiality properties that specify limits on access to information, * integrity properties that specify limits on modification of information, and * availability properties that specify limits on the extent of disruption of service. Additional limits are specified by other important properties, such as authenticity or reliability. For many applications, confidentiality is not the most important security property. For example, in military command-and-control systems, protection against denial-of-service attacks is extremely important. Electronic commerce requires high integrity of communications, processing, and authentication. Historically, most of the effort in computer security has focused on control of user access to information. This control was to be provided by a reference monitor [39] encapsulating the security-relevant functionality of the computing system in a small amount of highly trustworthy software. The Trusted Computing Base specified by the TCSEC is a generalization of this approach. This model was designed for centralized access control in a single operating system environment. It does not address other security requirements, including denial of service and the integrity of applications processing. There have been some approaches that go beyond this type of access control, but they have not been widely used or incorporated into standards. Computer applications and computer systems have evolved as hardware has grown more cost-effective and software has become a commodity. Today, it is networked computers, and not shared centralized time-sharing systems, that deliver a majority of the computing cycles--although there are other kinds of shared systems, such as file and compute servers. The well-understood security models are inadequate to address today's concerns and are being increasingly ignored by system developers and users. There are no well-developed models that address the current problems, and developers of new systems do not have the intellectual tools to implement the security properties they require. A security policy typically attempts to characterize acceptable system behaviors in a way that can be efficiently enforced by the computing system being governed by that policy. All too often, however, there is a gap between the security properties desired and what can be checked mechanically, i.e., by a computing system. Usually, automatically enforced security policies are based on the syntax or format of data; security policies based on semantics or data content (e.g., "libelous, inflammatory, treasonous") are impossible to implement automatically. Most often, in automatically enforceable security policies, some type of attribute tag is associated with the data or with the name associated with the data.
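The sort of mechanically checkable, tag-based rule just described can be stated in a few lines. The sketch below (Python, illustrative only; the labels and their ordering are hypothetical) shows a purely syntactic comparison of attribute tags, of the kind a system can enforce without understanding what the data means.

    # Illustrative tag-based check: flows are judged solely by comparing attribute
    # tags, never by inspecting the content of the data itself.
    LEVELS = {"unclassified": 0, "sensitive": 1, "secret": 2, "top secret": 3}

    def may_flow(source_label, destination_label):
        # Permit a flow only if the destination is at least as restrictive as the source.
        return LEVELS[destination_label] >= LEVELS[source_label]

    print(may_flow("top secret", "secret"))      # False: a potential leak, so the flow is refused
    print(may_flow("sensitive", "top secret"))   # True: "upward" flows are allowed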
A security policy based on those checks, then, will be only an approximation of the security policy that is desired. And the fidelity of that approximation will determine the extent to which the security policy is useful and usable. For example, privacy properties are often approximated in terms of potential information flow. This model presumes that most of the software in the system is not trustworthy and may leak any information it can read into any data it can write. To prevent this leakage, the security policy might stipulate: If a program reads from an object containing top-secret data, then that program is prevented from writing to an object that can be read by users having a lower level of clearance. This policy is known to be overconstraining. Summary data, for instance, often has a lower secrecy classification than the source data from which it is derived, and such flows should be permitted, but are not by this policy. The policy also forbids network acknowledgments of receipt of data from a lower security environment and complicates reliable communications. Finally, the policy can allow leakage of sensitive information, because an output that results from data processing may be more sensitive than any of the input data, for example as the result of data fusion or signal processing. In short, a simple information flow policy that is independent of program and data semantics cannot distinguish which flows are safe and which are not, and it is not necessarily a good representation of the desired privacy properties of the system. Implicit in most security policies is a set of assumptions about the system and the application. It is important not to overlook the fact that these are only assumptions. Attackers will not, and a system that uses a security policy based on flawed assumptions could have design vulnerabilities. For example, access control policies are limited by the effectiveness of the method that is used to authenticate users. Access checks are useless when a subject is misidentified. Similarly, and this is the basis of many attacks, if a user can subvert a program that is executed with system privileges, that user can circumvent access control checks. While the assumptions made by a security policy do not change from context to context, whether those assumptions are realistic might change. Networks, for example, allow new types of covert channels and enable an attacker to reside far from the computer under attack and to be aided by considerable computational resources--a different world from the centralized time-sharing systems of the 1970s with terminals hardwired to a host. Networks also open the possibility of communication among systems having inconsistent security policies (perhaps because they do not share goals). Among the most commonly cited security policies today are the access control policies defined in the TCSEC, which was first published by DOD in 1983 and revised in 1985. Because of its historic role and the fact that it is often misunderstood, key TCSEC tenets are sketched briefly here. Along with other things, TCSEC defines two access control policies. They are both based on a model involving subjects, which correspond to active entities such as users and programs, and objects, which correspond to containers. 1. Discretionary Access Control (DAC). The creator of an object defines the rights for all subjects to that object. Thereafter, rights are changed by subjects that have appropriate rights and execute operations. 2. Mandatory Access Control (MAC). 
Objects are tagged with a label (e.g., sensitive, secret, top secret), as are subjects. A subject may not write to an object that has a lower tag value nor read from an object with a higher tag value. Both MAC and DAC are limited in their applicability because they are based on the subject/object model. This model does not distinguish between actual users (people) and the software acting on their behalf. Therefore, such a model cannot protect users against malicious actions of software they invoke or use. The purpose and usefulness of MAC and DAC are in limiting access to information or, more specifically, to the information containers (objects) managed by the access control system. DAC is needed and useful for many applications to support users' rights and privacy. But DAC is subject to Trojan horse attacks--programs running on behalf of one subject but that store data in objects that are accessible to another subject, the attacker. MAC was developed to counter Trojan horse attacks by implementing the information flow constraints discussed above. Note that, whereas attacks of this kind were once considered esoteric, the advent of mobile code changes that. However, even MAC offers only limited protection against attacks from mobile code or other code that has not been certified. Specifically, MAC prevents only one type of offensive action that such code may take (i.e., data being stored in objects accessible to an attacker); it does not, for example, prevent corruption or destruction of data. DAC, which is supported in both UNIX and Windows/NT, is the most widely used access control policy, although it does not address the integrity concerns addressed in the Clark-Wilson model [40], which are considered particularly relevant for business applications. MAC was developed with military security levels and classifications in mind. It is appropriate for situations in which users with different clearance levels share a processing system and protection of classified data from unauthorized users is the primary concern. But in military command-and-control systems and many business situations, the protection afforded by MAC is not suitable [41]; there may not even be a natural label scheme for these systems. Moreover, even in military settings, the constraints on information flow enforced by MAC may be too restrictive. Some information flow is generally necessary for a system to work. This may be a small amount, such as an acknowledgment of receipt of a packet, or it may be larger, as when command information is sent out to troops from a classified higher-level system. The solution in MAC-based systems is to mediate this flow through trusted processes that can violate the information flow constraints. This has the useful effect of localizing flows, but it can still pose security risks, since the trusted processes may not be able to distinguish between information leaks and safe releases of information, and they may themselves be vulnerable to attack. 4.3.1 New Security Models Information flows and subject-object relationships seem to be a useful basis for security models when the subjects and objects in a system are all controlled by a single authority. Are they still sensible when the system involves multiple authorities? 
Since the Internet lacks a central authority, with hosts each implementing potentially different policies under different authorities, the obvious issue is the extent to which an even more complex NII requires new security models and will give rise to new types of security policies. Information must be protected within the host systems connected to a computer network. This involves controlling access to each host system's information and processing resources. Information must also be protected as it passes between host systems. It must be possible to preserve message privacy and integrity, and to perform source authentication on a message, even though that message has traversed network territory that is controlled by neither the sender nor the receiver. In fact, the sender and receiver will themselves likely be under different administrative authorities. Thus, a sender cannot necessarily trust a receiver to implement the sender's security policy. These are notions for which the traditional models may not be suitable, because there may no longer be a single goal structure shared by the participating hosts. But there may well be some dimensions of commonality between subsets of the hosts, and these commonalities may drive the need for new types of security models and policies, albeit having forms that perhaps differ from the well-studied ones.

Finally, network-wide services, implemented by collections of sites that are each operating with different policies, are already integral to the functioning of the NII. Routing and naming in the Internet today are examples of this--no authority is responsible for all the routing tables or the entries in all the name servers. The integrity of these services and their availability affect the operation of the Internet. But stating such guarantees might well take a fundamentally different form than is usual for security properties, being statistical in nature rather than absolute. And while there is experience in specifying and implementing stochastic guarantees for service (e.g., dial-tone guarantees provided by telephone companies), there is no similar experience with stochastic guarantees for integrity. Techniques from fault-tolerant system design may be applicable, but this remains to be seen.

4.4 EXPLOITING MASSIVE REPLICATION

Much of the discussion surrounding increasing various dimensions of trustworthiness is related to hardening of individual system components. However, it may be possible to build networked computer systems that exhibit increased resilience by virtue of their group behavior. The use of replication and voting to implement fault tolerance is an example of this style of solution and has given rise in the last 15 years to an impressive body of work concerned with distributed protocols to solve the Byzantine Agreement problem [42] and other subproblems of replication management [43]; the resilient operating mode described in Section 3.1 for an MEII is another. In both, the functioning of specific single components according to their requirements no longer dominates, but trustworthiness of the aggregate system is nevertheless achieved. Achieving progress in networked systems depends on recognizing that composition of elements into larger systems often has unanticipated effects. Today, trustworthiness of the aggregate, in contrast to trustworthiness of individual components, is what the electric power transmission grid, the public switched telephone network, and the Internet strive to provide.
Thus, it is a form of service that is applicable in contexts of interest to the committee. The style of solution has its origins in natural and social systems. Without any central organizing authority, fish school and animals herd in order to defend against predator species. The failure of an individual element (a fish or a zebra) in the face of an attack does not unduly diminish the group's chances of survival. It may be possible to build networked computer systems that rely on group behavior in order to increase their resilience against attack and error. A system that relies on a consensus decision to change a routing table may be more resilient than one that does not, because an attacker would need to subvert not just an individual router but the entire consensus group. Moreover, in an NII, different components would likely be controlled by separate administrative authorities, making the attacker's task even harder. As another example, an aggregate in which each element checks for correctness in the output of a number of other elements and cuts off communication to an apparently malfunctioning system may (with a correct set of rules) resist element failure more vigorously than a system that does not. If such systems can be built, they will undoubtedly require: * a communication infrastructure that allows for scalable group communication; * methods for understanding, predicting, and specifying group behavior in networked computers (including understanding the vulnerabilities introduced by having one party talk to others to make its own decisions as, for example, is addressed by Byzantine Agreement protocols); and * a better understanding of what behaviors best resist attack, and are least vulnerable to common-mode attacks. Simple and flexible rules might be all that is needed, with metaphors and observations about the nature of nature--flocking birds, human immunological systems, crystalline structures in physics--informing research and development in methods for organizing and managing networks of computers and the information they contain. But this remains to be seen, because networks of components that follow complex rules--as is likely to be the case for networked computers--may not exhibit the desirable robust behaviors but instead might converge, becoming synchronized in unintended ways. It is worth noting that some researchers in artificial intelligence and robotics have arrived separately at a research agenda for a type of distributed intelligent software popularly known as "autonomous intelligent agents." The agents include simple rules that govern their interaction with data and other agents. The actions and behaviors of these systems arise not from deterministic programming, but from the complex interactions of the individual elements (agents) with one another within the environment of the network. The systems act collectively, emulating a larger and more reliable system. Despite the promise of these technologies, however, related work has tended to ignore security, reliability, unforeseen interactions, and trustworthiness overall. 4.5 INCREASING THE QUALITY OF SOFTWARE SYSTEMS A great deal of computer science research is concerned with software quality--the design and implementation of software that satisfies its requirements. Despite these efforts, the expense of demonstrating that realistic software systems satisfy their requirements remains high. 
Too often, commercial software is difficult to use, does not work as intended, crashes frequently, and is hard to modify in response to changed user needs and hardware. The development failures that have occurred with systems such as the Internal Revenue Service's tax-system modernization [44], the Advanced Automation System for next-generation air traffic control, and the Confirm computerized travel reservation system in the private sector give a clear indication that even where market pressures are not a factor, software development can be overwhelming. And unfortunately, the design and implementation vulnerabilities inherent in poor-quality software contribute to the ease with which such systems are successfully attacked.

Not all techniques to improve the quality of software are impractical. Some, like the use of high-level languages, strong typing, coding standards, object-oriented design, and testing based on various coverage metrics, are effective and not prohibitively expensive. In addition, there is some indication that evaluation of the development process itself, rather than the products of that process, pays dividends. The Software Engineering Institute's Capability Maturity Model (CMM) [45] provides a basis for ranking organizations by properties of their development process, and there is some evidence that organizations that are highly ranked will produce higher-quality software than their lower-ranked peers. But a high ranking does not guarantee the quality of developed software.

While the committee does not plan to survey the software engineering literature or thoroughly study this research area, an attempt will be made to understand why modern software theory and practice are not more widely used in the software development industry. The final report will address such tensions and trade-offs, as well as such basic questions as the following:

* Is limited use of known good methods a consequence of poor practitioner training?

* Do the methods, often developed in research settings, fail to account for one or another significant aspect of the industrial software development scene?

* Do the methods attack problems that are relevant to software development?

* Does the software development community accurately perceive what the key problems really are?

The hope in asking these questions is to understand research directions that might lead to improved technology for enhancing software quality. Software whose quality is critical for enhancing the trustworthiness of the NII may have special characteristics, and the committee will also attempt to identify those characteristics so that promising avenues for research can be discussed.

4.5.1 Formal Methods

One controversial subarea of software engineering, often justified as being cost-effective for high-assurance systems, is the use of formal methods. When specifications can be represented in a language having a formal semantics, a variety of techniques become available for gaining assurance about the properties that system behaviors will exhibit. These techniques are called formal methods and are based on formal logic. Usually, a set of logical formulas, called a specification, is written to describe expected system behavior. Analysis of the specification then allows properties of a system to be inferred. The analysis may be as simple as type-checking the formulas, to gain confidence that the specification is meaningful, or checking the specification to make sure that all cases have been enumerated.
Or, the analysis may be as complex as mathematical theorem proving, to enable concluding that all behaviors satisfying the specification will also satisfy some property of interest. Two general approaches underlie the various forms of analysis: model checking and theorem proving. In both, determining whether a specification satisfies a property is transformed into determining whether a formula of some logic is valid. Sometimes this determination is made with the help of mechanical tools; at other times it suffices to perform calculations by hand, analogous to the "back of the envelope" calculations commonplace in many engineering disciplines. In model checking, the validity of a formula is established by enumerating all possible scenarios (in an extremely efficient way). Performing such an enumeration is sensible only when finite state spaces are involved, although recent advances in model-checking algorithms allow this state space to be quite large. Further developments promise to enable working with specifications involving infinite-sized state spaces exhibiting certain structure. Model checking has enjoyed considerable commercial use in supporting hardware design, specifically for analyzing chips [46]. Semiconductor designers and manufacturers have, over the past 5 years, been making substantial investments to exploit this area of formal methods, and today this sector of the industry is responsible for hiring a sizable fraction of the annual Ph.D. production in formal methods. In addition, communications protocols, although frequently not implemented in hardware, are invariably finite-state and therefore are amenable to analysis based on model checking. Lucent's Bell Laboratories, for example, now markets a model-checking tool that its researchers developed. It remains to be seen whether model checking will have the same impact in this domain as it has had in hardware. The theorem-proving school of formal methods is based on proving theorems in some logic in order to deduce that a formula is valid. Software tools have been implemented to assist in theorem proving. Some of these (called proof checkers) check texts purported to be proofs, thereby serving as skeptical assistants and ensuring that details have not been overlooked; other tools are able to help construct parts of proofs by completing sufficiently simple (low-level) subproofs or by applying heuristics. Theorem proving does not suffer from the finite-sized state space restriction of model checking, but does require facility with logic by its users. For this reason alone, industrial application of the approach has been limited. Theorem proving nevertheless has been employed in verifying properties of hardware and software and even systemwide safety properties. Also, key management and authentication protocols have been successfully analyzed using theorem proving, and flaws have been found. These successes have led to the construction of theorem-proving tools designed especially for cryptographic protocols [47]. Discussions at the committee's second workshop, which brought together a sampling of developers and users of formal methods, suggest that the ultimate role of the technology remains uncertain, but that there is a better understanding today than a decade ago about which applications can benefit from formal methods technologies, which cannot, and why [48]. The committee will strive to understand this possible progress as well as to identify where further technical developments are likely to have an impact. 
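To convey the flavor of model checking, the following sketch (written in Python purely as an illustration; the two-process lock protocol and its encoding are hypothetical and not drawn from any tool discussed in this report) enumerates every reachable state of a tiny finite-state system and checks a mutual-exclusion property in each one. Production model checkers handle vastly larger state spaces with far more efficient representations, but the underlying idea--exhaustive analysis of a finite-state model against a stated property--is the same:

    # Exhaustive state-space exploration of a toy two-process lock protocol.
    from collections import deque

    def replace(procs, i, value):
        return procs[:i] + (value,) + procs[i + 1:]

    def successors(state):
        procs, lock = state                      # per-process states plus lock status
        for i, p in enumerate(procs):
            if p == "idle":
                yield (replace(procs, i, "waiting"), lock)
            elif p == "waiting" and lock == "free":
                yield (replace(procs, i, "critical"), i)    # acquire the lock
            elif p == "critical":
                yield (replace(procs, i, "idle"), "free")   # release the lock

    def mutual_exclusion(state):
        procs, _ = state
        return sum(p == "critical" for p in procs) <= 1

    initial = (("idle", "idle"), "free")
    seen, frontier = {initial}, deque([initial])
    while frontier:                              # breadth-first enumeration of reachable states
        s = frontier.popleft()
        assert mutual_exclusion(s), f"property violated in state {s}"
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)

    print(f"checked {len(seen)} reachable states; mutual exclusion holds in every one")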
4.6 HARDWARE SUPPORT Supporting some security properties would be simplified if special-purpose hardware features were available. These features are being investigated by the committee. Of course, the actual impact of specialized hardware on system trustworthiness will depend on the extent to which that hardware is embraced by the marketplace. The features being investigated by the committee fall into two categories: tamper-resistant technology and hardware random number generators. 4.6.1 Tamper-resistant Technology Recently, manufacturers have announced the availability of "tamper-resistant" devices. These devices, which are intended to comply with Federal Information Processing Standard 140-1 (level 2 and above), erase the sensitive data in their memories whenever they are improperly used or opened. Advocates of the technology suggest that such devices can implement functionality that otherwise would have to be performed in physically secured areas. In a network, there would likely be relatively few such physically secure areas, and network communication failures could render them inaccessible. Nevertheless, hosts with tamper-resistant devices might be able to tolerate such interruptions in communication. 4.6.2 Hardware Random Number Generators Section 4.1 discusses the importance of cryptography in solving the trustworthiness problem. Integral to many cryptographic protocols is the generation of random numbers [49], which are then used as keys or nonces. For example, the National Institute of Standards and Technology's digital signature standard is critically dependent on the availability of good random numbers. Unfortunately, the vast majority of software-implemented pseudo-random number generators are vulnerable to so-called guessing attacks, in which, by observing timing or past values, an adversary predicts partial or complete values that the generator will produce in the future [50]. A question, then, is the extent to which inclusion of true random number generators in off-the-shelf processors makes sense--both from the commercial and security perspectives. NOTES [33] Note that cryptography, by itself, cannot implement non-repudiation; there must also be a way to establish the time at which a transaction occurs. [34] Steiner, J.G., B.C. Neuman, and J.I. Schiller. 1988. "Kerberos: An Authentication Service for Open Network Systems," Usenix Conference Proceedings, Dallas, Texas, February, pp. 191-202. [35] Certificates need not be issued only to individuals; certificates also can be issued to devices and to processes. The motivation for these latter certificates is the same as for personal certificates, i.e., identification and authentication, typically as input to an access control decision. Such certificates are beginning to be issued now, e.g., for use with routers as part of improved router-to-router authentication. In these instances, the CA may be local (e.g., the organization operating the router), though "outsourcing" of this CA function is also a viable alternative being pursued by some vendors (e.g., Cisco and VeriSign). [36] Cheswick, William R., and Steven M. Bellovin. 1994. Firewalls and Internet Security, Addison-Wesley, Don Mills, Ontario. [37] Different types of firewalls have been developed. Some examine traffic at the transport-level protocols, assessing permission to enter or exit the enclave based on the identity of the sending or receiving host. 
Other firewalls examine traffic at the application level, permitting access controls based on application type as well as the identity of the sending or receiving host. [38] Use of several firewalls in the same enclave sometimes can increase security, depending on how these firewalls differ and are positioned. [39] Anderson, J.P. 1972. Computer Security Technology Planning Study, ESD-TR-73-51, Vol. I, AD-758 206, ESD/AFSC, Hanscom AFB, Bedford, Mass., October. [40] In 1987, Clark and Wilson examined whether the standard DOD lattice model is capable of representing the security concepts important in commercial settings. Two main control rules, commonly used in financial systems, require that the two people authorizing an action must be unrelated to each other. "Being unrelated" is not representable within a lattice model, and thus different sorts of models are required. Clark, D.D., and D.R. Wilson. 1987. "A Comparison of Commercial and Military Computer Security Policies," Proceedings of the 1987 IEEE Symposium on Security and Privacy, IEEE Computer Society, Oakland, Calif., April 27-29, pp. 184-194. [41] An example of a business situation in which MAC would be suitable is its use for implementing the separation of a mergers and acquisitions department from the general trading and brokerage department of a financial firm. [42] The Byzantine Agreement problem is concerned with disseminating a value among a collection of processors in a distributed system. Initially, the value is known to (only) one processor; a protocol that solves the problem must ensure that all non-faulty processors agree on that value. The problem is tricky because some of the processors (including the one initially holding the value) might be faulty, and faulty processors can exhibit arbitrary behavior. See Lamport, L., R. Shostak, and M. Pease. 1982. "The Byzantine Generals Problem," ACM TOPLAS 4, 3 (July), pp. 382-401. [43] See, for example, Mullender, Sape (ed.). 1993. Distributed Systems. Addison-Wesley, Don Mills, Ontario. [44] Computer Science and Telecommunications Board, National Research Council. 1996. Continued Review of the Tax Systems Modernization of the Internal Revenue Service. National Academy Press, Washington, D.C. [45] The Capability Maturity Model for Software was developed by the Software Engineering Institute at Carnegie Mellon University as a model and method for assessing the software development capabilities of an organization. See Paulk, Mark, et al. 1993. A Capability Maturity Model for Software. CMU/SEI-91-TR-24, ADA 263403. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pa., February. [46] This example received considerable discussion at the committee's second workshop. [47] See, for example, Millen, J. 1984. "The INTERROGATOR: A Tool for Cryptographic Protocol Security," IEEE Symposium on Security and Privacy, pp. 134-141, May. Also, Brackin, S. 1997. "An Interface Specification Language for Automatically Analyzing Cryptographic Protocols," Proceedings of the 1997 Symposium on Network and Distributed System Security, pp. 40-51, February. [48] For example, see Clarke, Edmund M., and Jeanette M. Wing.
1996. "Formal Methods: State of the Art and Future Directions," Technical Report, Number CMU-CS-96-178, Carnegie Mellon University, Pittsburgh, Pa, September. [49] Randomness Recommendations for Security, RFC 1750, Donald Eastlake, Jeffrey I. Schiller, and Stephen D. Crocker. Premised on the fact that choosing random numbers for use in cryptographic systems is much more difficult than for other applications, this paper describes some of the pitfalls and ways to avoid them. [50] A well-publicized flaw in the Netscape SSL implementation was an example of this problem. ---------- 5 Non-Technical Realities Technologies rarely make the transition from laboratory to marketplace unaided. As a result, research planning should be mindful of commercial and political realities, as well as purely scientific questions. For information systems trustworthiness, one of those realities is the commercial off-the shelf (COTS) marketplace. Two classes of customer are influential: 1. Retail end users, predominantly in organizations (business, government, and so on) but increasingly acting as individuals (e.g., members of households), and 2. Wholesale or value-adding users (e.g., Internet service providers, financial service organizations), who connect end users to others via communications and information services. For commercial customers, the primary consideration is functionality; demand for computing and communications derives from this. R&D also reflects perceived desires and willingness to pay for features. Although many have requested more secure systems and solutions, when given an opportunity to choose between new functionality and support for trustworthiness, the market invariably has selected for features [51]. The significant dollar costs for implementing trustworthiness, the time-to-market delays, and the impacts of such features on other functionality and ease of use contribute to this trade-off, as does the historically low (reported) incidence of problems. Most users have been spared from serious attack, and there is no popular concern for either the risk of catastrophe (analogous to an airplane crash or worse) or the risk that many distinct parties may be injured (analogous to traffic accidents). Even when an appreciation for the risks exists, there is evidence that some users prefer a variety of coping mechanisms (e.g., business planning that assumes a certain degree of loss, as seen in the credit card and cable television industries; insurance for business continuation) to investing in the systems, training, and other elements of a robust technical approach to avoiding or containing the risk directly. The committee will obtain further input on the nature of the marketplace with an eye toward predicting how supply and demand are likely to change given the scenarios for the future outlined in Section 2 and the costs that can be anticipated for supporting enhanced trustworthiness properties. Of particular concern is expected demand for more integrated solutions, since COTS solutions traditionally have favored add-ons (e.g., firewalls). Such add-ons historically have enjoyed only limited success, but wholesale redesign of systems in place is probably not realistic. In addition to user experiences and needs, public policy can and does affect the COTS marketplace. 
Technology development for trustworthiness is shaped by public policy to a greater degree than many other aspects of information technology--although the integration of different aspects of computing and communications into a multitechnology, multiservice information infrastructure opens the prospect of broader public policy influence in the future. Assessing the influence of public policy is confounded by a history of uneasy relations between the government and the private sector in the area of computer security. That history leads to strongly held perceptions and concerns, not all of which are backed up by reality. Although a detailed consideration of public policy issues is outside the scope of this interim report, the committee notes the recurrence of the following issues: Computing and communications are being integrated into more infrastructures on which the public critically depends. This trend raises questions about extending national security concerns to include privately held resources, especially in the current climate of diminishing direct public-sector control. Recent government actions, such as the establishment of the President's Commission on Critical Infrastructure Protection, the parallel investigations under the aegis of the ongoing National Security Telecommunications Advisory Committee, the consideration of NII security issues by the Information Infrastructure Task Force [52] and the Office of Management and Budget's Office of Information and Regulatory Affairs, the Defense Science Board study cited above [53], and other DOD examinations of information warfare, attest to uncertainty about the scope and definition of relevant problems and solutions [54]. These activities also reflect a lack of consensus on where responsibility lies within the public sector. Perhaps such responsibility must be broadly distributed, as the history of the transportation infrastructure might suggest [55]. Questions that are philosophical as much as practical include the following: How much damage does society have to sustain (or expect) before the government intervenes? Are there special concerns relating to "survivability" that must be treated separately from other aspects of trustworthiness? Telecommunications illustrates this problem of national security concerns extending to privately held resources. The U.S. economy and the operation of the government itself depend on the communications infrastructure of the country. However, this infrastructure is run almost entirely by private companies. Although the dichotomy exists for conventional telephone networks, it is more pronounced for the Internet. Almost all of the Internet is privately run. Even military sites purchase their connectivity from the private sector. While this provides higher performance at a lower cost, it leaves open the question of whether or not government operations would be hindered if these networks malfunction or are compromised. During the so-called "sniffing incidents" [56] of a few years ago, many government and military sites were penetrated. But the root problem was machines belonging to Internet service providers (ISPs) being subverted and transformed into eavesdropping stations. Similar concerns persist today. For example, the most effective way to prevent "SYN flooding" [57] attacks requires all ISPs to install certain address filters on the borders of their networks. But no single party is in a position to do this. 
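As an illustration of the kind of border address filter mentioned above, the following sketch (in Python, for exposition only; the address prefixes are hypothetical) shows the check an Internet service provider might apply to traffic leaving its network: a packet is forwarded only if its claimed source address falls within the provider's own customer address space, which removes the forged source addresses on which SYN flooding and similar attacks rely. The point is less the few lines of code than the deployment problem noted above: the check helps only if essentially all providers apply it at their borders.

    # Illustrative source-address ("ingress") filter at a provider's border.
    from ipaddress import ip_address, ip_network

    # Hypothetical customer address space owned by this provider.
    CUSTOMER_PREFIXES = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

    def permit_outbound(src_ip: str) -> bool:
        # Forward a departing packet only if its claimed source lies inside
        # the provider's own address space; otherwise treat it as forged.
        addr = ip_address(src_ip)
        return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

    print(permit_outbound("198.51.100.7"))   # True: legitimate customer address
    print(permit_outbound("192.0.2.1"))      # False: spoofed source, dropped at the border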
The enduring national security mission continues to give rise to programs and policies that foster development and use of technologies and practices for trustworthiness (the request for this project is an obvious illustration). These, in turn, influence the commercial prospects for certain technologies. Debates over cryptography policy epitomize this tension, which is simply noted at this time as a factor industry points to as partly shaping supply and demand for relevant COTS options [58]. Recent controversies in this area have added public policy motivations relating to law enforcement, which suggests that the larger area of public safety (however defined) may further influence--either by reinforcing or by complementing the influence of national security--the terms and conditions under which COTS options are considered. The government history of designing systems for enhancing computer and communications security and other aspects of trustworthiness, as well as the more closely held government history of studying how systems may be attacked or fail, implies a considerable (if highly focused) reservoir of expertise in the government. Some of this has been shared--witness the Orange Book and related publications. But as discussed in Section 4.3, that work captures only a military classification security model and it does not deal well with today's networks. Cryptographic protocols are another area where the government might be helpful. The civilian sector is gradually coming to terms with just how hard it is to find errors in such protocols; joint work on validation tools and principles would be valuable. Whether and how that expertise is shared will affect the know-how that shapes both COTS options and research within the technical community. At the same time, the different product demand, rapidity of technology change, organizational policy context, and overall experience base in the private sector raise questions about the value of technology transfer in the reverse direction. Again, the request for this project is indicative of such value. The committee will consider shifts in relative expertise, including whether there are areas (e.g., computer networking) where the private sector may dominate and what that implies for the processes by which greater trustworthiness is fostered. The enhancement and integration of information infrastructure may well drive new initiatives relating to information policy. The privacy of personal and organizational information and the protection of intellectual property are concerns motivating consideration of new public policy options [59]. Already, these concerns have motivated private efforts to develop rights management technologies, and those efforts may be accelerated, slowed, or redirected as a result of policymaking. Preliminary readings from the committee's second workshop suggest that rights management efforts are not likely to lead the trustworthiness development community, but they may well give rise to new demands on the NII and a new dependence on it. The pervasiveness of information technology and its growing entrenchment in all aspects of life raise questions about dividing lines between public and private rights and responsibilities. 
Common to all kinds of public policy relating to trustworthiness will be decisions about where individuals are responsible for their own protection and where--because of potential consequences and/or because, as with pharmaceuticals, the products are too complex for most individuals to make their own appraisals--the government should intervene. Mobile code, for example, raises the prospect of system penetration and manipulation without a trail of evidence. Changes in evidentiary expectations raise questions about the framework for liability and tort laws, about business insurance expectations for best practices, about auditing and reporting requirements, and about joint public and private decision making on where standards are needed and whether they should be wholly private or some combination of public and private. The resolution of questions of rights and responsibilities will affect the kinds (and costs) of products vendors can offer and the kinds people will demand. Impacts will depend on the nature and focus of the intervention--whether, for example, any possible government action is focused on product design, product performance, or the preparation and performance of people designing, developing, and implementing information technology--all areas on which many have speculated and argued. The incorporation of concepts relating to system safety into "trustworthiness" underscores the interactions between technology development processes and the public policy framework.

NOTES

[51] Acquisitions by government, including procurement for logistics use by the military, also demonstrate a preference for new functionality, perhaps motivated by the (as yet unsubstantiated) belief that trustworthiness can be retrofitted externally at a lesser total project cost, given the savings and leverage of using COTS hardware and software. One might expect that the relative magnitude of government purchasing would influence the design and choice of what gets marketed, thereby creating a market for trustworthy systems. In fact, while government spending accounted for 30 percent of the market in the 1970s, experts believe it may reach only 5 percent today. [52] See, for example, the overview created for the IITF's Technology Policy Working Group: Cross-Industry Working Team. 1996. A Process for Information Technology Security Policy: An XIWT Report on Industry-Government Cooperation for Effective Public Policy. Corporation for National Research Initiatives, Reston, Va., March. Available on-line from http://www.cnri.reston.va.us. The IITF's Information Policy Committee has addressed policy and security policy principles and other issues. [53] Defense Science Board, Office of the Under Secretary of Defense for Acquisition and Technology. 1996. Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D). Defense Science Board, Washington, D.C., November. [54] The flurry of activity attests also to recognition that the fundamental messages of a previous CSTB report, Computers at Risk: Safe Computing in the Information Age (National Academy Press, Washington, D.C., 1991), have become more widely understood over time, as increases in network deployment and use have raised concerns. [55] Even in the days prior to the combustion engine, rules of the road were required to prevent head-on collisions between horse-drawn conveyances.
In the United States, the states have imposed laws on the use of the highway infrastructure; some of these laws are inherited from the federal level; others are peculiar to each state. Users must comply with these laws or lose driving privileges; third parties (e.g., automobile manufacturers) wishing to sell to these users must comply with these laws. A variety of monitoring and enforcement mechanisms have grown up around this infrastructure. The infrastructure itself is substantially redundant (but with differing qualities of service for the redundant links) so that there can be no single point of catastrophic failure. [56] In sniffing incidents, hosts on the Internet have been subverted and turned into eavesdropping stations. See ftp://ftp.cert.org/pub/cert_advisories/CA-94%3A01.network.monitoring.attacks. [57] SYN flooding is a denial-of-service attack. The attacker sends fake SYNs--TCP open request messages--from forged, non-existent host addresses to a particular service on a target host. Most implementations permit only a certain number of partially open connections; since the putative host address is non-existent, the open sequence will never complete, thus clogging the queue. See ftp://ftp.cert.org/pub/cert_advisories/CA-96.21.tcp_syn_flooding. [58] Computer Science and Telecommunications Board, National Research Council. 1996. Cryptography's Role in Securing the Information Society. National Academy Press, Washington, D.C. [59] For a discussion of emerging concerns relating to medical data privacy, see Computer Science and Telecommunications Board, National Research Council. 1997. For the Record: Protecting Electronic Health Information. National Academy Press, Washington, D.C.

----------

Appendix A Workshop 1: Networked Infrastructure

Participants*

Wendell Bailey, National Cable Television Association
Michael Baum, VeriSign Inc.
Steven M. Bellovin, AT&T Research
Barbara Blaustein, National Science Foundation
Marjory S. Blumenthal, Computer Science and Telecommunications Board
Earl Boebert, Sandia National Laboratories -- Information Systems Trustworthiness
Martha Branstad, Trusted Information Systems
Blaine Burnham, National Security Agency
William E. Burr, National Institute of Standards and Technology
David Carrel, Cisco Systems Inc.
J. Randall Catoe, MCI Telecommunications Inc.
Stephen N. Cohn, BBN Corporation
Stephen D. Crocker, CyberCash Inc.
Dale Drew, MCI Telecommunications Inc.
Mary Dunham, Directorate of Science and Technology, Central Intelligence Agency
Roch Guerin, IBM T.J. Watson Research Center -- Quality of Service and Trustworthiness
Michael W. Harvey, Bell Atlantic
Chrisan Herrod, Defense Information Systems Agency -- Defense Information Infrastructure (DII): Trustworthiness, Issues and Enhancements
G. Mack Hicks, Bank of America
Stephen R. Katz, Citibank, N.A.
Charlie Kaufman, Iris Associates Inc.
Stephen T. Kent, BBN Corporation
Alan J. Kirby, Raptor Systems Inc. -- Is the NII Trustworthy?
John Klensin, MCI Communications Corporation
John C. Knight, University of Virginia
Gary M. Koob, Defense Advanced Research Projects Agency
Steven McGeady, Intel Corporation
Douglas J. McGowan, Hewlett-Packard Company
Robert V. Meushaw, National Security Agency
Ruth R. Nelson, Information System Security
Michael D. O'Dell, UUNET Technologies Inc.
Hilarie Orman, Defense Advanced Research Projects Agency
Radia Perlman, Novell Corporation -- Information Systems Trustworthiness
Frank Perry, Defense Information Systems Agency
Elaine Reed, MCI Telecommunications Inc.
Robert Rosenthal, Defense Advanced Research Projects Agency
Margaret Scarborough, National Automated Clearing House Association
Richard C. Schaeffer, National Security Agency
Richard M. Schell, Netscape Communications Corporation
Allan M. Schiffman, Terisa Systems Inc.
Fred B. Schneider, Cornell University
Henning Schulzrinne, Columbia University -- The Impact of Resource Reservation for Real-Time Internet Services
Basil Scott, Directorate of Science and Technology, Central Intelligence Agency
Mark E. Segal, Bell Communications Research -- Trustworthiness in Telecommunications Systems
George A. Spix, Microsoft Corporation
Douglas Tygar, Carnegie Mellon University
Leslie Wade, Computer Science and Telecommunications Board
Abel Weinrib, Intel Corporation -- QoS, Multicast and Information System Trustworthiness
Rick Wilder, MCI Telecommunications Inc.
John T. Wroclawski, Massachusetts Institute of Technology

__________________________
*Some participants developed materials for this workshop, which are listed by title. Other participants shared materials at this workshop that are published elsewhere.

Agenda

Monday, October 28, 1996

7:30 a.m. Continental breakfast
8:00 Welcome and Overview (Stephen Crocker) What is trust? What is complexity? What are your problems composing networked infrastructure?
8:15 Session 1 (George Spix and Steven McGeady) How are we doing? Is the NII trustworthy . . . and How Do We Know It? Tell us a story: What failed and how was it fixed? What do you believe is today's most critical problem? What is your outlook for its resolution? What is tomorrow's most critical problem? What are you doing to prepare for it? What is your highest priority for 5-10 years out? Is complexity a problem and why? Is interdependence a problem and why?
Overview
Panelists: Earl Boebert, Sandia National Laboratories; Dale Drew, MCI Telecommunications Inc.
8:45 Panel 1 - Suppliers and Toolmakers (George Spix and Steven McGeady)
Panelists: David Carrel, Cisco Systems Inc.; Alan Kirby, Raptor Systems Inc.; Douglas McGowan, Hewlett-Packard Company; Radia Perlman, Novell Corporation
9:45 Break
10:00 Panel 2 - Delivery Vehicles (George Spix and Steven McGeady)
Panelists: Wendell Bailey, National Cable Television Association; Michael Harvey, Bell Atlantic; Michael O'Dell, UUNET Technologies Inc.
11:00 Panel 3 - Customers (George Spix and Steven McGeady)
Panelists: Chrisan Herrod, Defense Information Systems Agency; Mack Hicks, Bank of America; Stephen Katz, Citibank; Margaret Scarborough, National Automated Clearing House Association
12:30 p.m. Lunch
1:30 p.m.
Session 2 (Steven Bellovin) Given increasing complexity, why should we expect these interconnected (telco, cableco, wireless, satellite, other) networks and supporting systems to work? How do these systems interoperate today in different businesses and organizations? How will they interoperate tomorrow--how is the technology changing, relative to context? Do they have to interoperate or can they exist as separate domains up to and into the customer premises?
Panelists (plus Session 1 participants): Elaine Reed, MCI Telecommunications Inc.; Frank Perry, Defense Information Systems Agency
2:30 Break
2:45 Session 3 (Allan Schiffman) What indications do we have that quality of service differentiated by cost is a workable solution? What is the intersection of QOS and trustworthiness? What are the key technical elements? How are QOS targets met today across networks and technologies? What are the trustworthiness tradeoffs of multi-tier, multi-price QOS compared to best-effort?
Panelists: Roch Guerin, IBM T.J. Watson Research Center; Henning Schulzrinne, Columbia University; Abel Weinrib, Intel Corporation; Rick Wilder, MCI Telecommunications Inc.; John Wroclawski, Massachusetts Institute of Technology
4:00 Break
4:15 Session 4 (Stephen Kent) The role of public key infrastructures in establishing trust: tackling the technical elements. How is success defined in the physical world? What are your current challenges (technical, business, social)? How can national-scale PKIs be achieved? What technology is needed to service efficiently users who may number from several hundred thousand to tens of millions? What is your outlook? What are the hard problems? What topics should go on federal or industrial research agendas? If multiple, domain-specific PKIs emerge, will integration or other issues call for new technology?
Panelists: Michael Baum, VeriSign Inc.; William Burr, National Institute of Standards and Technology; Stephen Cohn, BBN Corporation
5:30 Reception and Dinner

Tuesday, October 29, 1996

7:30 a.m. Continental breakfast
8:00 Recap of Day One (George Spix)
8:45 Session 5 (Steven McGeady) What is the current status of software trustworthiness and how does the increasing complexity of software affect this issue? Tell us a story: What failed and how was it fixed? What do you believe is today's most critical problem? How will it be resolved? What is tomorrow's most critical problem? What are you doing to prepare for it? What happens when prophylaxis fails? How do you compare problem detection, response, and recovery alternatives? How can we implement safety and reliability as components of trust, along with security and survivability? Is distribution of system elements and control an opportunity or a curse?
What are the key technical challenges for making distributed software systems more trustworthy? When will all human-to-human communication be mediated by an (end-user programmable or programmable-in-effect) computer? Do we care, from the perspective of promoting trustworthy software? Should this influence research investments?
Panelists: John Klensin, MCI Telecommunications Inc.; Richard Schell, Netscape Communications Corporation; Mark Segal, Bell Communications Research
10:00 Break
10:30 Continue discussion, Session 5
11:30 Hard problems in terms of timeframe, cost, and certainty of result. Summary of definitions--trustworthiness, complexity, compositional problems. What are our grand challenges? Discussion, revision; feedback from federal government observers
12:00 Adjourn

----------

Appendix B Workshop 2: End Systems Infrastructure

Participants*

Martin Abadi, Systems Research Center, Digital Equipment Corporation -- Formal, Informal, and Null Methods
Steven M. Bellovin, AT&T Research
Matt Blaze, AT&T Research
W. Earl Boebert, Sandia National Laboratories
Martha Branstad, Trusted Information Systems
Ricky W. Butler, NASA Langley Research Center -- Formal Methods: State of the Practice
Shiu-Kai Chin, Syracuse University -- Highly Assured Computer Engineering
Dan Craigen, Odyssey Research Associates (Canada) -- A Perspective on Formal Methods
Stephen D. Crocker, CyberCash Inc.
Kevin R. Driscoll, Honeywell Technology Center
Cynthia Dwork, IBM Almaden Research Center
Edward W. Felten, Princeton University -- Research Directions for Java Security
Li Gong, JavaSoft Inc. -- Mobile Code in Java: Strength and Challenges
Constance Heitmeyer, U.S. Naval Research Laboratory -- Formal Methods: State of Technology
Charlie Kaufman, Iris Associates Inc.
Stephen T. Kent, BBN Corporation
Rohit Khare, World Wide Web Consortium -- Rights Management, Copy Detection, and Access Control
John C. Knight, University of Virginia
Paul Kocher, Cryptography consultant -- Position Statement for Panel 4
Robert Kurshan, Bell Laboratories Inc. -- Algorithmic Verification
Peter Lee, Carnegie Mellon University
Karl N. Levitt, University of California at Davis -- Intrusion Detection for Large Networks
Steven Lucco, Microsoft Corporation
Teresa Lunt, Defense Advanced Research Projects Agency
Leo Marcus, Aerospace Corporation -- Formal Methods: State of the Practice
John McHugh, Portland State University -- Formal Methods for Survivability
John McLean, U.S. Naval Research Laboratory -- Formal Methods in Security
Steven McGeady, Intel Corporation
Dejan Milojicic, The Open Group Research Institute -- Alternatives to Mobile Code
J Strother Moore, University of Texas at Austin -- Position Statement on the State of Formal Methods Technology
Ruth R. Nelson, Information System Security
Clifford Neuman, Information Sciences Institute, University of Southern California -- Rights Management, Copy Detection, and Access Control
Elaine Palmer, IBM T.J. Watson Research Center -- Research on Secure Coprocessors
David L. Presotto, Bell Laboratories Inc.
Joseph Reagle, Jr., World Wide Web Consortium
Robert Rosenthal, Defense Advanced Research Projects Agency
John Rushby, SRI International -- Formal Methods: State of Technology
Allan M. Schiffman, Terisa Systems Inc.
Fred B. Schneider, Cornell University
Margo Seltzer, Harvard University -- Dealing with Disaster: Surviving Misbehaved Kernel Extensions
Jerry Sheehan, Computer Science and Telecommunications Board
George A. Spix, Microsoft Corporation
Mark Stefik, Xerox Palo Alto Research Center -- Security Concepts for Digital Publishing on Trusted Systems
Vipin Swarup, MITRE Corporation -- Mobile Code Security
Douglas Tygar, Carnegie Mellon University
Leslie Wade, Computer Science and Telecommunications Board
Bennet S. Yee, University of California at San Diego

__________________________
*Some participants developed materials for this workshop, which are listed by title. Other participants shared materials at this workshop that are published elsewhere.

Agenda

Wednesday, February 5, 1997

7:30 a.m. Continental breakfast available in the Refectory
8:30 Welcome and Overview (Fred Schneider)
8:45 Panel 1 (Douglas Tygar) Mobile Code: Java
Matt Blaze, AT&T Research; Edward W. Felten, Princeton University; Li Gong, JavaSoft Inc.; David L. Presotto, Bell Laboratories Inc.
10:15 Break
10:30 Panel 2 (Douglas Tygar) Mobile Code: Alternative Approaches
Peter Lee, Carnegie Mellon University; Steven Lucco, Microsoft Corporation; Dejan S. Milojicic, The Open Group Research Institute; Margo Seltzer, Harvard University; Vipin Swarup, MITRE Corporation
12:00 p.m. Lunch in Refectory
1:00 Panel 3 (Allan Schiffman) Rights Management, Copy Detection, Access Control
Cynthia Dwork, IBM Almaden Research Center; Rohit Khare (accompanied by Joseph Reagle, Jr.), World Wide Web Consortium; Clifford Neuman, USC/Information Sciences Institute; Mark Stefik, Xerox Palo Alto Research Center
2:30 Break
2:45 Panel 4 (Stephen Crocker) Tamper Resistant Devices
Paul C. Kocher, Cryptography consultant; Elaine Palmer, IBM T.J. Watson Research Center; Bennet S. Yee, University of California at San Diego
4:15 Break
4:30 Continue discussion
5:30 Reception and Dinner

Thursday, February 6, 1997

7:30 a.m. Continental breakfast
8:30 Introductory Remarks (Fred Schneider)
8:45 Panel 5 (Fred Schneider) Formal Methods: State of the Technology
Constance L. Heitmeyer, U.S. Naval Research Laboratory; Robert Kurshan, Bell Laboratories Inc.; J Moore, Computational Logic Inc. and University of Texas at Austin; John Rushby, SRI International
10:15 Break
10:30 Panel 6 (John Knight) Formal Methods: State of the Practice
Ricky W. Butler, NASA Langley Research Center; Dan Craigen, Odyssey Research Associates (Canada); Kevin R. Driscoll, Honeywell Technology Center; Leo Marcus, Aerospace Corporation
12:00 p.m. Lunch in the Refectory
1:00 Panel 7 (Marty Branstad) Formal Methods and Security
Martin Abadi, Digital Equipment Corporation, Systems Research Center; Shiu-Kai Chin, Syracuse University; Karl N. Levitt, University of California at Davis; John McHugh, Portland State University; John McLean, U.S. Naval Research Laboratory
2:30 Concluding discussion
3:00 Adjourn

----------

End of Document