DEPARTMENT OF DEFENSE COMPUTER SECURITY INITIATIVE: A STATUS REPORT AND R&D PLAN
Document Type:
Collection:
Document Number (FOIA)/ESDN (CREST):
CIA-RDP91-00280R000100110014-6
Release Decision:
RIPPUB
Original Classification:
U
Document Page Count:
149
Document Creation Date:
December 23, 2016
Document Release Date:
April 16, 2013
Sequence Number:
14
Case Number:
Publication Date:
March 1, 1981
Content Type:
MISC
File:
Attachment | Size
---|---
 | 8.93 MB
Body:
DEPARTMENT
OF
DEFENSE
COMPUTER SECURITY INITIATIVE:
A STATUS REPORT
AND
R&D PLAN
MARCH 1981
Prepared by the Chairman
Computer Security Technical Consortium
Information Systems Directorate
Assistant Secretary of Defense
Communications, Command, Control, and Intelligence
TABLE OF CONTENTS

Section                                                      Page

1      Executive Summary                                        1
1.1    Computer Security Initiative Activities                  3
1.2    Coordination of DoD R&D Activities                       6
2      Background                                               8
2.1    Nature of the Problem                                    8
2.2    Technology Development History                          22
2.3    Trusted OS Fundamentals                                 24
2.4    System Security Vulnerabilities                         29
2.5    Trusted System Environments                             35
2.6    Verification Technology Overview                        37
3      Computer Security Initiative Status                     40
3.1    The Evaluated Products List                             40
3.2    The Trusted Computing Base                              46
3.3    Technology Transfer Activities                          65
3.4    Trusted Computer System Evaluation                      85
3.4.1  Evaluation Criteria                                     85
3.4.2  Evaluation Center                                       88
3.4.3  Evaluation Process                                      89
3.4.4  Summary                                                 93
3.5    Current Evaluation Efforts                              94
3.6    Status Summary                                         103
4      R&D Activities                                         104
4.1    Trusted Operating Systems                              104
4.1.1  Current Research                                       104
4.1.2  Status of Industry Efforts                             108
4.1.3  Future Directions                                      111
4.2    Trusted Applications                                   112
4.2.1  Current Research                                       112
4.2.2  Future Directions                                      118
4.3    Specification and Verification Technology              121
4.3.1  System Components                                      121
4.3.2  Security Properties                                    123
4.3.3  Current Research                                       124
4.3.4  R&D Plan                                               135
REFERENCES                                                    142
SECTION 1

EXECUTIVE SUMMARY

Effective 1 Jan 1981 the Director of the National Security
Agency was assigned the responsibility for the evaluation of
computer security for the Department of Defense (figure
1-1). This function, to be called the DoD Computer Security
Evaluation Center, will be responsible for advising all
elements of the DoD on matters concerning the security of
computer systems, in particular when the integrity of the
hardware and software of a system must be relied upon. The
Center will be an independent Program Management Office,
separate from all other functions at NSA but drawing upon
specialized skills as needed.
This report serves three purposes. The first is as a status
report on the DoD Computer Security Initiative, an OSD-level
effort which since mid-1978 has significantly advanced the
level of understanding, both technical and managerial, of
computer security and has led to the establishment of the
DoD Computer Security Evaluation Center at NSA. The second
purpose is to summarize computer security R&D in the DoD and
to project where future research should be directed. The
third is to serve as a point of departure and technical
guide for the Computer Security Evaluation Center.
On 1 June 1978 ASD(C3I) formed the DoD Computer Security
Technical Consortium and began the Computer Security
Initiative. The goal of the Initiative is to establish
widespread availability of trusted* computer systems. There
are three major activities of the Initiative seeking to
advance this goal: (1) coordination of DoD R&D efforts in
the computer security field, (2) identification of
consistent and efficient evaluation procedures for
determining suitable environments for the use of trusted
computer systems, and (3) encouragement of the computer
industry to develop trusted systems as part of their
standard product lines. This report is a summary of the
activities of the Initiative since June 1978 and an
* A "trusted" computer system is one that employs sufficient
hardware and software integrity measures to allow its use
for simultaneously processing multiple levels of classified --
and/or sensitive information.
THE DEPUTY SECRETARY OF DEFENSE
WASHINGTON, D.C. 20301

JAN 2 1981
MEMORANDUM FOR SECRETARIES OF THE MILITARY DEPARTMENTS
CHAIRMAN, JOINT CHIEFS OF STAFF
DIRECTOR, DEFENSE ADVANCED RESEARCH
PROJECTS AGENCY
DIRECTOR, DEFENSE COMMUNICATIONS AGENCY
DIRECTOR, DEFENSE INTELLIGENCE AGENCY
DIRECTOR, DEFENSE INVESTIGATIVE SERVICE
DIRECTOR, DEFENSE LOGISTICS AGENCY
DIRECTOR, DEFENSE MAPPING AGENCY
DIRECTOR, DEFENSE NUCLEAR AGENCY
DIRECTOR, NATIONAL SECURITY AGENCY
DIRECTOR, WWMCCS SYSTEM ENGINEERING
SUBJECT: DOD Computer Security Evaluation Center
Although your comments in response to Dr. Dinneen's
memorandum of November 13 indicate some concern about working
relationships within the proposed Evaluation Center, there
is no disagreement or doubt regarding the need. Therefore,
the proposal made by the Director, National Security Agency
to establish a Project Management Office is approved.
Effective January 1, 1981, the Director, National Security
Agency is assigned the responsibility for Computer Security
Evaluation for the Department of Defense.
Please provide the name of your representative for
computer security matters to ASD(C3I). The individual
chosen for this task should be empowered to work in your
behalf to develop and coordinate the charter and
implementing directives for the Center. I expect this
working group to identify necessary personnel and fiscal
resources.
CC:
ASD(C3I)
ASD (Comptroller)
DUSD(Policy Review)
W. Graham Claytor, Jr.
Figure 1-1
assessment of the progress to date in achieving widespread
availability of trusted computer systems.
The computer manufacturers are making substantial progress
in improving the integrity of their products, as can be seen
by a review of section 3 of this report. Most of the
incentive for this comes from a strong need to build more
reliable and easily maintainable products, coupled with a
significant increase in the computer science understanding
of how to produce more reliable hardware and software. This
trend was well established before the efforts of the
Initiative and can be expected to continue at an
accelerating pace. But the existence of an organized effort
on the part of the government to understand the integrity
measures of industry-developed computer products will have a
strong influence on the evolution of the industry's
integrity improvement measures.
If the government can establish consistent evaluation
criteria, the efforts of the Initiative to date have shown
that industry will evolve its systems in accordance with
those criteria, and the government can then expect to be
able to purchase high-integrity computer products in the
same manner it purchases standard ADP systems today,
without the high additional costs of special-purpose
development and maintenance. This is the philosophy being
pursued by the Initiative: to influence the evolution of
highly reliable commercial products to enable their use in
sensitive information handling applications, and to obtain
sufficient understanding of the integrity of individual
products to determine suitable environments for their use.
This report is organized in the following manner. The
remainder of this section summarizes the major activities of
the Initiative since June 1978. Section 2 gives background
on the general nature of the computer security problem and
some technical details helpful in understanding the trusted
system evaluation process. Section 3 describes the current
status of the Initiative, including: (1) a description of
the Evaluated Products List concept, (2) a description of
the Trusted Computing Base (TCB) concept, (3) current draft
evaluation criteria, (4) a proposed evaluation process, and
(5) the status of current Initiative evaluation efforts.
Section 4 describes ongoing R&D, plans, and industry
implications in the areas of trusted operating systems,
trusted applications, and verification technology.
1.1 COMPUTER SECURITY INITIATIVE ACTIVITIES
Figure 1-2 illustrates the overall activities of the
Initiative. There are three main efforts being pursued in
parallel.

[Figure 1-2: Overall activities of the Computer Security Initiative, 1978-1982. Three parallel efforts: an Education Phase (public seminars/workshops); a Specification Phase (draft evaluation criteria and Trusted Computing Base specification, with industry coordination, review, and enhancement); and an Evaluation Phase progressing from informal evaluations (KSOS-11, KVM, KSOS-6, UNIVAC) to formal evaluation of industry-submitted systems, culminating in an "Evaluated Products List".]

The eventual outcome of this work is the
establishment of a consistent and systematic means of
evaluating the integrity of industry- and government-
developed computer systems. This outcome will be
accomplished when the Initiative has reached the Formal
Evaluation of industry-developed systems represented in the
lower right of the figure. Before this can happen, the
evaluation process must be formalized, criteria for
evaluation established, and an Executive Agent identified to
carry out the evaluations. The vertical dotted line
represents the accomplishment of this formalization. Prior
to this, in the Specification Phase (Part II on the figure),
draft evaluation criteria and specifications for a "Trusted
Computing Base" (TCB) are being developed (see section 3 of
this report). These draft documents are being distributed
for comment to the DoD through the Consortium and to
industry through the Education Phase efforts described
below. In order to ensure that the draft criteria and
specifications are realistic and feasible, the Initiative
has been conducting, at the invitation of various computer
manufacturers, evaluations of several potential industry
trusted systems. (Section 3.5 describes present efforts.)
These informal evaluations are performed by members of the
Consortium, governed by OSD General Counsel approved legal
limitations and non-disclosure agreements. They are
conducted as mutually beneficial technical discussions with
the manufacturers and are serving a vital function in
illustrating the feasibility of such an evaluation process
and the industry's strong interest and willingness to
participate.
The other major part of the Initiative's efforts as
represented on figure 1-2 is the Education Phase. The goal
of this effort is twofold: (1) to transfer technology to
the computer manufacturers on how to develop trusted
computer systems and (2) to demonstrate to the general
computer user community that trusted computers can be built
and successfully employed in a wide variety of applications.
The method of accomplishing these goals is through public
seminars. Three such seminars have been held at the
National Bureau of Standards in July 1979, January 1980, and
November 1980. These seminars were attended by 300-350
people representing all the major computer manufacturers,
over 50 computer system user organizations, and over 25
Federal and State organizations. The seminars have
generated a great deal of interest in the development of
trusted computer systems. In addition, frequent
participation in national-level conferences such as the
National Computer Conference (1979 and 1980) has helped to
establish the viability of the trusted computer concept.
There are three major efforts in the DoD computer security
R&D program. The first is the development and demonstration
of trusted operating systems. Included in these efforts are
the Kernelized Secure Operating System (KSOS), which went
into initial test site evaluation during the fall of 1980,
and the Kernelized VM/370 System (KVM/370), which will be
installed at two test sites by the first quarter of 1981.
Also included in this activity are the hardware and security
kernel development efforts on the Honeywell Secure
Communications Processor (SCOMP). All of these efforts
began as DARPA programs with joint funding from many
sources. Through the efforts of the Initiative,
arrangements have been made, starting in Oct 1980, for the
Navy to assume technical and contractual responsibility for
the KSOS and SCOMP efforts and for the Air Force to assume
similar responsibility for the KVM/370 effort. These
efforts are essential for the demonstration of trusted
computer systems to the DoD and also serve as examples and
incentives for the manufacturers to produce similar systems.
The second major R&D activity is the development of
applications of trusted computer systems. These include the
various guard-related information sanitization efforts
(e.g., ACCAT GUARD, FORSCOM GUARD), trusted front-end
systems (e.g., COINS Trusted TAS, DCA COS-NFE), trusted
message system activities (e.g., DARCOM Message Privacy
Experiments), and a recently started effort in trusted data
base management systems.
The third R&D thrust is the establishment of a verification
technology program to advance the state of the art in
trusted system specification and verification. The first
phase of this program (FY80-FY83) includes major competitive
procurement activities to broaden our experience in using
current program verification technologies. This effort is
being undertaken to better understand the strengths and
weaknesses of these systems in order to better specify our
requirements for future improved systems, which will be
developed in the second phase of the program (FY83-FY86).
The Air Force has a major effort in this area beginning in
FY81. The Navy is initiating an R&D effort to integrate
several existing technologies into a package for the
specification and verification of applications like the
various Guard systems now under development.
A significant factor in the progress of the DoD R&D
activities in the past year has been the actions taken in
response to recommendations of the Defense Oversight
Committee's report, "Employing Computer Security Technology
to Combat Computer Fraud." The Committee's report
recommended that the Services establish long-term programs
in computer security R&D and that specific sums be allocated
by each Service in FY79, 80, and 81 while these long-term
programs are being established. The FY80 funds recommended
by the Committee were provided by March 1980 and have been
instrumental in keeping ongoing efforts underway and
providing the resources needed to establish the new
application and verification technology development efforts.
SECTION 2
BACKGROUND
The Defense Science Board Task Force on Computer Security
described the nature of the computer security problem in a
report entitled "Security Controls for Computer Systems"
dated February 1970 [WARE70]. That description remains
valid today and is reprinted here in part to set the context
for this report.
2.1 NATURE OF THE PROBLEM
2.1.1 The Security Problem
"The wide use of computers in military and defense
installations has long necessitated the application of
security rules and regulations. A basic principle
underlying the security of computer systems has
traditionally been that of isolation--simply removing the
entire system to a physical environment in which
penetrability is acceptably minimized. The increasing use
of systems in which some equipment components, such as user
access terminals, are widely spread geographically has
introduced new complexities and issues. These problems are
not amenable to solution through the elementary safeguard of
physical isolation.
"In one sense, the expanded problems of security provoked by
resource-sharing systems might be viewed as the price one
pays for the advantages these systems have to offer.
However, viewing the question from the aspect of such a
simplistic tradeoff obscures more fundamental issues.
First, the security problem is not unique to any one type of
computer system or configuration; it applies across the
spectrum of computational technology. While the present
paper frames the discussions in terms of time-sharing or
multiprogramming, we are really dealing not with system
configurations, but with security; today's computational
technology has served as catalyst for focusing attention on
the problem of protecting classified information resident in
computer systems.
"Secondly, resource-sharing systems, where the problems of
security are admittedly most acute at present, must be
designed to protect each user from interference by another
user or by the system itself, and must provide some sort of
"privacy" protection to users who wish to preserve the
integrity of their data and their programs. Thus, designers
and manufacturers of resource-sharing systems are concerned
with the fundamental problem of protecting information. In
protecting classified information, there are differences of
degree, and there are new surface problems, but the basic
issues are generally equivalent. The solutions the
manufacturer designs into the hardware and software must be
augmented and refined to provide the additional level of
protection demanded of machines functioning in a security
environment.
2.1.2 Types of Computer Systems
"There are several ways in which a computer system can be
physically and operationally organized to serve its users.
The security controls will depend on the configuration and
the sensitivity of data processed in the system. The
following discussion presents two ways of viewing the
physical and operational configurations.
2.1.2.1 Equipment Arrangement and Disposition
"The organization of the central processing facilities for
batch or for time-shared processing, and the arrangement of
access capabilities for local or for remote interaction are
depicted in figure 2-1. Simple batch processing is the
historical and still prevalent mode of operation, wherein a
number of jobs or transactions are grouped and processed as
a unit. The batches are usually manually organized, and for
the most part each individual job is processed to completion
in the order in which it was received by the machine. An
important characteristic of such single-queue, batched,
run-to-completion systems which do not have an integrated
file management system for non-demountable, on-line memory
media is that the system need have no "management awareness"
from job to job. Sensitive materials can be erased or
removed from the computer quickly and relatively cheaply,
and mass memory media containing sensitive information can
be physically separated from the system and secured for
protection. This characteristic explains why solution to
the problem we are treating has not been as urgent in the
past.
"In multiprogramming, on the other hand, the jobs are
organized and processed by the system according to
algorithms designed to maximize the efficiency of the total
system in handling the complete set of transactions. In
local-access systems, all elements are physically located
within the computer central facility; in remote-access
systems, some units are geographically distant from the
central processor and connected to it by communication
lines.
[Figure 2-1: Equipment arrangement and disposition: local-access and remote-access configurations for batch, multiprogramming, and time-shared processing, with the difficulty and complexity of security controls increasing toward remote-access time-sharing.]

[Figure 2-2: Levels of user capability: Type I, file query through limited application programs; Type II, programming via interpretation; Type III, programming via limited languages and checked-out compilers; Type IV, full programming capability, including new languages and new compilers. User capability and the difficulty and complexity of security controls increase from Type I to Type IV.]
2.1.2.2 User Capabilities
"Another way of viewing the types of systems, shown in
figure 2-2, is based on the levels of computing capability
available to the user.
"File-query Systems (Type I) enable the user to execute only
limited application programs embedded in the system and not
available to him for alteration or change. He selects for
execution one or more available application programs. He
may be able to couple several of these programs together for
automatic execution in sequence and to insert parameters
into the selected programs.
"Interpretive systems (Type II) provide the user with a
programming capability, but only in terms of input language
symbols that result in direct execution within the computer
of the operations they denote. Such symbols are not used to
construct an internal machine language program that can
subsequently be executed upon command from the user. Thus,
the user cannot obtain control of the machine directly,
because he is buffered from it by the interpretive software.
"Compiler systems (Type III) provide the user with a
programming capability, but only in terms of languages that
execute through a compiler embedded in the system. The
instructions to the compiler are translated by it into an
assembly language or basic machine language program.
Program execution is controlled by the user; however, he has
available to him only the limited compiler language.
"Full programming systems (Type IV) give the user extensive
and unrestrained programming capability. Not only can he
execute programs written in standard compiler languages, but
he also can create new programming languages, write
compilers for them, and embed them within the system. This
gives the user intimate interaction with and control over
the machine's complete resources--excepting of course, any
resources prohibited to him by information-protecting
safeguards (e.g., memory protection, base register controls,
and I/O hardware controls).
"In principle, all combinations of equipment configurations
(figure 2-1) and operational capabilities (figure 2-2) can
exist. In practice, not all the possible combinations have
been implemented, and not all the possibilities would
provide useful operational characteristics.
2.1.3 Threats to System Security

"By their nature, computer systems bring together a series
of vulnerabilities. There are human vulnerabilities
throughout; individual acts can accidentally or deliberately
jeopardize the system's information protection capabilities.
Hardware vulnerabilities are shared among the computer, the
communication facilities, and the remote units and consoles.
There are software vulnerabilities at all levels of the
machine operating system and supporting software; and there
are vulnerabilities in the organization of the protection
system (e.g., in access control, in user identification and
authentication, etc.). How serious any one of these might
be depends on the sensitivity (classification) of the
information being handled, the class of users, the
computational capabilities available to the user, the
operating environment, the skill with which the system has
been designed, and the capabilities of potential attackers
of the system.
"These points of vulnerability are applicable both in
industrial environments handling proprietary information and
in government installations processing classified data.
This Report is concerned directly with only the latter; it
is sufficient here to acknowledge that the entire range of
issues considered also has a "civil" side to which this work
is relevant.
"The design of a secure system must provide protection
against the various types of vulnerabilities. These fall
into three major categories: accidental disclosures,
deliberate penetrations, and physical attack.
"Accidental Disclosure, A failure of components, equipment,
software, or subsystems, resulting in an exposure of
information or violation of any element of the system.
Accidental disclosures are frequently the result of failures
of hardware or software.- Such failures can involve the
coupling of information from one user (or computer program)
with that of an other user, the "clobbering" of information
(i.e., rendering files or programs unusable), the defeat or
,circumvention ofsecurity_measures, or, unintended change_in
security status of users, files, or terminals. Accidental'
disclosures may also occur by improper actions of machine
operating or maintenance personnel without deliberate
-intent.
"Deliberate Penetration. A. deliberate and covert attempt to
(1) obtain information contained in the system, (2) cause
the system to operate to the advantage of the threatening
party, or (3) manipulate the system so as to render it
unreliable or unusable to the legitimate operator.
Deliberate efforts to penetrate secure systems can either be
active or passive. Passive methods include wire tapping and
monitoring of electromagnetic emanations. Active
infiltration is an attempt to enter the system so as to
obtain data from the files or to interfere with data files
or the system.
"One method of accomplishing active infiltration is for a
legitimate user to penetrate portions of -thesystem-for
which he has the authorization. The design problem is one
of preventing access to files by someone who is aware of the
access control mechanisms and who has the knowledge and
desire to manipulate them to his own advantage. For
example, if the access control codes are all four-digit
numbers, a user can pick any four-digit number, and then,
having gained access to some file, begin interacting with it
in order to learn its contents,
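As an aside, the arithmetic behind this example is easy to make concrete: a four-digit code space holds only 10,000 candidates, so a trivial program can exhaust it. A minimal sketch follows; `try_code` is a hypothetical stand-in for whatever probe the system exposes, not an interface from the report.

```python
# Illustrative sketch: exhausting a four-digit access-code space.
# try_code is a hypothetical stand-in for the single probe an
# intruder can make against the system's access-control check.

def brute_force(try_code):
    """Return the first accepted code, or None if none works."""
    for n in range(10_000):          # only 10^4 candidates exist
        code = f"{n:04d}"            # "0000" through "9999"
        if try_code(code):           # one probe per candidate
            return code
    return None

# Toy demonstration against a target whose code is "4217".
print(brute_force(lambda c: c == "4217"))   # -> 4217
```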
"Another class of''active;infiltration techniques involves
the exploitation of trap-door entry points in the system
that by-pass the control. facilities and permit direct access
to files. Trap-door entry points often are created
deliberately during the design and development stage to
simplify the insertion of authorized program changes by
legitimate system programmers, with the intent of closing
the trap-door prior to operational use. Unauthorized entry
points can be created by a system programmer who wishes to
provide a means for bypassing internal security controls and
thus subverting the system. There is also the risk of
implicit trap-doors that may exist because of incomplete
system design--i.e., loopholes in the protection mechanisms.
For example, it might be possible to find an unusual
combination of system control variables that will create an
entry path around some or all of the safeguards.
"Another potential mode of active infiltration is the use of
a special terminal illegally tied into the communication
system. Such a terminal can be used to intercept
information flowing between a legitimate terminal and the
central processor, or to manipulate the system. For
example, a legitimate user's sign-off signal can be
intercepted and cancelled; then, the illegal terminal can
take over interaction with the processor. Or, an illegal
terminal can maintain activity during periods when the
legitimate user is inactive but still maintaining an open
line. Finally, the illegal terminal might drain off output
directed to a legitimate terminal and pass on an error
message in its place so as to delay detection.
"Active infiltration also can be by an agent operating
within the secure organization. This technique may be
restricted to taking advantage of system protection
inadequacies in order to commit acts that appear accidental.
but which are disruptive to the system or to its users, or
which could result in acquisition of classified information.
At the other extreme, the agent may actively seek- to obtain
removable files or to create trap doors that can be
exploited at a later date. Finally, an agent might be
placed in the organization simply to learn about the system
and the operation of the installation, and to obtain ?what
13
Declassified in Part - Sanitized Copy Approved for Release 2013/04/16 : CIA-RDP91-00280R000100110014-6
Declassified in Part - Sanitized Copy Approved for Release 2013/04/16: CIA-RDP91-00280R000100110014-6
pieces of information come his way without any particularly
covert attempts on his part at subversion.
"In passive subversion, means are applied to monitor
information resident within the system or being transmitted
through the communication lines without any corollary
attempt to interfere with or manipulate the system. The
most obvious method of passive infiltration is the wire tap.
If communications between remote terminals and the central
processor are over unprotected circuits, the problem of
applying a wire tap to the computer line is similar to that
of bugging a telephone call. It is also possible to monitor
the electromagnetic emanations that are radiated by the
high-speed electronic circuits that characterize so much of
the equipment used in computational systems. Energy given
off in this form can be remotely recorded without having to
gain physical access to the system or to any of its
components or communication lines. The possibility of
successful exploitation of this technique must always be
considered.
"Physical Attack. Overt assault against or attack upon the
physical environment (e.g., mob action) is a type of
vulnerability outside the scope of this Report.
2.1.4 Areas of Security Protection

"The system designer must be aware of the points of
vulnerability, which may be thought of as leakage points,
and he must provide adequate mechanisms to counteract both
accidental and deliberate events. The specific leakage
points touched upon in the foregoing discussion can be
classified in five groups: physical surroundings, hardware,
software, communication links, and organizational (personnel
and procedures). The overall safeguarding of information in
a computer system, regardless of configuration, is achieved
by a combination of protection features aimed at the
different areas of leakage points. Procedures, regulations,
and doctrine for some of these areas are already established
within DoD, and are not therefore within the purview of the
Task Force. However, there is some overlap between the
various areas, and when the application of security controls
to computer systems raises a new aspect of an old problem,
the issue is discussed. An overview of the threat points is
depicted in figure 2-3.

[Figure 2-3: Overview of computer network vulnerabilities: radiation and taps on the processor, communication lines, and switching center; crosstalk; files exposed to theft, copying, and unauthorized access; hardware failures of protection circuits; software failures of protection features, access control, and bounds control; systems programmers able to disable protective features, provide "ins," or reveal protective measures; operators able to replace the supervisor or reveal protective measures; maintenance hazards such as stand-alone utility programs and attached recorders; and user-side weaknesses in identification, authentication, and subtle software modifications.]
2.1.4.1 Physical Protection
"Security controls applied-to safeguard the physical
equipment apply not only to the computer equipment itself
and to its terminals, but also to such removable items as
printouts, magnetic tapes, magnetic disc packs, punch cards,
etc. Adequate DoD regulations exist for dissemination,
control, storage, and accountability of classified removable
items. Therefore, security measures for these elements of
the system are not examined in this Report unless there are
some unique considerations. The following general
guidelines apply to physical protection.
(a) The area containing the central computing complex and
associated equipment (the machine room or operational
area) must be secured to the level commensurate with
the most highly classified and sensitive material
handled by the system.

(b) Physical protection must be continuous in time,
because of the threat posed by the possibility of
physical tampering with equipment and because of the
likelihood that classified information will be stored
within the computer system even when it is not
operating.

(c) Remote terminal devices must be afforded physical
protection commensurate with the classification and
sensitivity of information that can be handled
through them. While responsibility for instituting
and maintaining physical protection measures is
normally assigned to the organization that controls
the terminal, it is advisable for a central authority
to establish uniform physical security standards
(specific protection measures and regulations) for
all terminals in a given system to insure that a
specified security level can be achieved for an
entire system. Terminal protection is important in
order to:

- Prevent tampering with a terminal (installing
intelligence sensors);

- Prevent visual inspection of classified work
in progress;

- Prevent unauthorized persons from trying to
call and execute classified programs or obtain
classified data.
"If parts of the computer system (e.g., magnetic disc files,
copies of printouts) contain unusually sensitive data, or
must be physically isolated during maintenance procedures,
it may be necessary to physically separate them and
independently control access to them. In such cases, it may
be practical to provide direct or remote visual surveillance
of the ultra-sensitive areas. If visual surveillance is
used, it must be designed and installed in such a manner
that it cannot be used as a trap-door to the highly
sensitive material it is intended to protect.
2.1.4.2 Hardware Leakage Points

"Hardware portions of the system are subject to malfunctions
that can result directly in a leak or cause a failure of
security protection mechanisms elsewhere in the system,
including inducing a software malfunction. In addition,
properly operating equipment is susceptible to being tapped
or otherwise exploited. The types of failures that most
directly affect security include malfunctioning of the
circuits for such protections as bounds registers, memory
read-write protect, privileged mode operation, or priority
interrupt. Any hardware failure potentially can affect
security controls; e.g., a single-bit error in memory.
"Both active and passive penetration techniques can be used
against hardware leakage points. In the passive mode, the
intervener may attempt to monitor the system by tapping into
communications lines, or by monitoring compromising
emanations. Wholly isolated systems can be physically
shielded to eliminate emanations beyond the limits of the
secure installation, but with geographically dispersed
systems comprehensive shielding is more difficult and
expensive. Currently, the only practical solutions are
those used to protect communications systems.
"The problem of emanation security is covered by existing
regulations; there are no new aspects to this problem
raised by modern computing systems. It should be
emphasized, however, that control of spurious emanations
must be applied not only to the main computing center, but
to the remote equipment as well.
"Although difficult to accomplish, the possibility exists
that covert monitoring devices can be installed within the
central processor. The problem is that the computer
hardware involved is of such complexity that it is easy for
a knowledgeable person to incorporate the necessary
equipment in such a way as to make detection very difficult.
His capability to do so assumes access to the equipment
during manufacture or major maintenance. Equipment is also
vulnerable to deliberate or accidental rewiring by
maintenance personnel so that installed hardware appears to
function normally, but in fact by-passes or changes the
protection mechanisms.
"Remote consoles also present potential-radiation
vulnerabilities. Moreover, there is a possibility that
recording devices might be attached to a console to pirate
information. Other remote or peripheral equipment can
present dangers. Printer ribbons or platens may bear
impressions that can be analyzed; removable storage media
(magnetic tapes, disc packs, even punch cards) can be
stolen, or at least removed long enough to be copied.
"Erasure_ standards for magnetic media are not within the
scope of this Task Force to review or establish. However,
system designers should be aware that the phenomena of
retentivity in magnetic materials is inadequately
understood, and is a threat to system security.
2.1.4.3 Software Leakage Points

"Software leakage points include all vulnerabilities
directly related to the software in the computer system. Of
special concern are the operating system and the
supplementary programs that support the operating system,
because they contain the software safeguards. Weaknesses
can result from improper design, or from failure to check
adequately for combinations of circumstances that can lead
to unpredictable consequences. More serious, however, is
the fact that operating systems are very large, complex
structures, and thus it is impossible to exhaustively test
for every conceivable set of conditions that might arise.
Unanticipated behavior can be triggered by a particular user
program or by a rare combination of user actions.
Malfunctions might only disrupt a particular user's files or
programs; as such, there might be no risk to security, but
there is a serious implication for system reliability and
utility. On the other hand, operating system malfunctions
might couple information from one program (or user) to
another; clobber information in the system (including
information within the operating system software itself); or
change the classification of users, files, or programs.
Thus, malfunctions in the system software represent
potentially serious security risks. Conceivably, a clever
attacker might establish a capability to induce software
malfunctions deliberately; hiding beneath the apparently
genuine trouble, an on-site agent may be able to tap files
or to interfere with system operation over long periods
without detection.
"The security safeguards provided by the operating system
software include access controls, user memory bounds
control, etc. As a result of a hardware malfunction,
especially a transient one, such controls can become
inoperative. Thus, internal checks are necessary to
insure that the protection is operative. Even when this is
done, the simultaneous failure of both the protection
feature and its check mechanism must always be regarded as a
possibility. With proper design and awareness of the risk,
it appears possible to reduce the probability of undetected
failure of software safeguards to an acceptable level.
"Probably the most serious risk in system software is
incomplete design, in the sense that inadvertent loopholes
exist in the protective barriers and have not been foreseen
by the designers. Thus, unusual actions on the part of
users, or unusual ways in which their programs behave, can
induce a loophole. There may result a security breach, a
suspension or modification of software safeguards (perhaps
undetected), or wholesale clobbering of internal programs,
data, and files. It is conceivable that an attacker could
mount a deliberate search for such loopholes with the
expectation of exploiting them to acquire information either
from the system or about the system--e.g., the details of
its information safeguards.
2.1.4.4 Communication Leakage Points
"The communications linking the central processor, the
switching center and the remote terminals present a
potential vulnerability. Wiretapping may be employed to
steal information from land lines, and radio intercept
equipment can do the same to microwave links. Techniques
for intercepting compromising emanations may be employed
against the communications equipment even more readily than
against the central processor or terminal equipment. For
example, crosstalk between communications lines or within
the switching central itself can present a vulnerability.
Lastly, the switch gear itself is subject to error and can
link the central processor to the wrong user terminal.
2.1.4.5 Organizational Leakage Points

"There are two prime organizational leakage points,
personnel security clearances and institutional operating
procedures. The first concerns the structure,
administration, and mechanism of the national apparatus for
granting personnel security clearances. It is accepted that
adequate standards and techniques exist and are used by the
cognizant authority to insure the reliability of those
cleared. This does not, however, relieve the system
designer of a severe obligation to incorporate techniques
that minimize the damage that can be done by a subversive
individual working from within the secure organization. A
secure system must be based on the concept of isolating any
given individual from all elements of the system to which he
has no need for access. In the past, this was accomplished
by denying physical access to anyone without a security
clearance of the appropriate level. In resource-sharing
systems of the future, a population of users ranging from
uncleared to those with the highest clearance levels will
interact with the system simultaneously. This places a
heavy burden on the overall security control apparatus to
insure that the control mechanisms incorporated into the
computer systems are properly informed of the clearances and
restrictions applicable to each user. The machine system
must be designed to apply these user access restrictions
reliably.
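As an aside, the check such a mechanism must apply reliably can be stated compactly as a dominance test between a user's clearance and an object's classification. The sketch below is an illustration only; the level names and the no-read-up rule are assumptions borrowed from later multilevel security practice, not the report's specification.

```python
# Minimal sketch of a clearance check: the system, informed of each
# user's clearance, permits a read only when that clearance dominates
# the object's classification. Levels here are illustrative.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_read(user_clearance: str, object_classification: str) -> bool:
    """True when the user's level dominates the object's level."""
    return LEVELS[user_clearance] >= LEVELS[object_classification]

assert may_read("SECRET", "CONFIDENTIAL")          # read down: allowed
assert not may_read("CONFIDENTIAL", "TOP SECRET")  # read up: denied
```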
"In.-some installations,, it.may..he-feasible to reserve
certain terminals for highly classified or highly Sensitive
or. restricted work, while -other terminals are used
exclusively for less sensitive operation. .ConVersely, in
some installations any terminal can be used to any --degree of
classification or sensitivity, depending on the clearance?
and needs of the user at the given moment. In either of
these cases., the authentication-and.verification mechanisms
built, into the machine system can be relied upon only to the
degree that ?the data on. personnel and. on operational
characteristics provided it by the security apparatus are
?
accurate.
"The second element of organizational leakage points
concerns institutional operating procedures. The
consequences of inadequate organizational procedures, or of
their haphazard application and unsupervised Use, can be
just as sevete as any other malfunction. Procedures include
the insertion of clearance and status information into. the
security checking mechanisms of the machine system , the
methods of authenticating users and. of receipting for
Classified information, the scheduling of computing
operations and maintenance periods, the provisions for
storing and keeping track of removable storage media, the
handling of printed machine output and reports, the
monitoring and control of machine-generated records for the
-security-apparatus, and all other functions whose purpose is
to insure reliable but unobtrusive operation from a security
control viewpOint. Procedural shortcomings represent an
area of potential weakness that can be exploited or
manipulated, and which can provide an agent with innumerable
opportunities .for system subversion. Thus, the installation
operating procedures have the dual function of providing
overall management efficiency and of providing the
administrative bridge between the security control apparatus
and the computing system and its users.
"iThe'Tatk FOtde-h-dt-ho'Spedifia-COMMentt-tO 'k with
respect to personnel security issues, other than to note
that control of the movement of people must include control
over access to remote terminals that handle classified
information, even if only intermittently. The machine room
staff must have the capability and responsibility to control
the movement of personnel into and within the central
computing area in order to insure that only authorized
.individuals operate equipment located there, have access to .
-removable storage media, and have access to. any machine
parts not ordinarily open to casual inspection.
2.1.4.6 Leakage Point Ecology
"In dealing with threats to system security, the various
leakage points cannot be considered only individually.
Almost any imaginable deliberate attempt to exploit
weaknesses will necessarily involve a combination of
factors. Deliberate acts mounted against the system to take
advantage of or to create leakage points would usually
require both a system design shortcoming, either unforeseen
or undetected, and the placement of someone in a position to
initiate action. Thus, espionage activity is based on
exploiting a combination of deficiencies and circumstances.
A software leak may be caused by a hardware malfunction.
The capability to tap or tamper with hardware may be
enhanced because of deficiencies in software checking
routines. A minor, ostensibly acceptable, weakness in one
area, in combination with similar shortcomings in seemingly
unrelated activities, may add up to a serious potential for
system subversion. The system designer must be aware of the
totality of potential leakage points in any system in order
to create or prescribe techniques and procedures to block
entry and exploitation.
"The security problem of specific computer systems must be
solved on a case-by-case: basis employing the best judgment
of a team consisting of system programmers, technical,
hardware, and communications specialists, and security
experts."
2.2 TECHNOLOGY DEVELOPMENT HISTORY

Much has been learned about methods of assuring the
integrity of information processed on computers since the
emergence of operating systems in the early 1960s. Those
early efforts were primarily concerned with improvements in
the effective use of the large computer centers that were
then being established. Information protection was not a
major concern since these centers were operated as large
isolated data banks. There were many significant hardware
and software advances in support of the new operating system
demands. Many of these changes were beneficial to the
interests of information protection, but since protection
was not an essential goal at that time, the measures were
not applied consistently and significant protection flaws
existed in all commercial operating systems [TANG80].
In the late 1960s, spurred by activities such as the Defense
Science Board study quoted in the previous section, efforts
were initiated to determine how vulnerable computer systems
were to penetration. The "Tiger Team" system penetration
efforts are well known. Their complete success in
penetrating all commercial systems attempted provided
convincing evidence that the integrity of computer systems
hardware and software could not be relied upon to protect
information from disclosure to other users of the same
computer system.
By the early 1970s penetration techniques were well
understood. Tools were developed to aid in the systematic
detection of critical system flaws. Some detected
mechanical coding errors, relying on the sophistication of
the user to discover a way to exploit the flaws [ABBO76];
others organized the search into a set of generic conditions
which, when present, often indicated an integrity flaw
[CARL71]. Automated algorithms were developed to search for
these generic conditions, freeing the "penetrator" from
tedious code searches and allowing the detailed analysis of
specific potential flaws. These techniques have continued
to be developed to considerable sophistication. In addition
to their value in searching for flaws in existing software,
these algorithms are useful as indicators of conditions to
avoid in writing new software if one wishes to avoid the
flaws that penetrators most often exploit.
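To illustrate the flavor of such pattern-directed searching, a toy sketch follows. The generic conditions and tooling in the cited studies differ; the patterns below are invented placeholders, not those of [ABBO76] or [CARL71].

```python
# Toy sketch of a pattern-directed flaw search: scan source text for
# generic conditions that often signal an integrity flaw, and report
# candidates for detailed manual analysis. The patterns here are
# invented placeholders, not those of the cited studies.

import re

GENERIC_CONDITIONS = {
    "unvalidated length argument": re.compile(r"\bcopy\w*\s*\([^)]*\blen\b"),
    "shared scratch file": re.compile(r"/tmp/\w+"),
}

def scan(source: str):
    """Yield (line_number, condition_name) pairs worth a closer look."""
    for number, line in enumerate(source.splitlines(), start=1):
        for name, pattern in GENERIC_CONDITIONS.items():
            if pattern.search(line):
                yield number, name

for hit in scan("buf = copybytes(p, len)\nlog = open('/tmp/log', 'w')"):
    print(hit)   # (1, 'unvalidated length argument'), (2, 'shared scratch file')
```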
These penetration aids are, however, of limited value in
producing high integrity software systems. While they could
be used to reveal certain types of flaws, they could not
assure the analyst that exploitable flaws of other types did
not remain.
In the early 1970s the Air Force Electronic Systems Division
(ESD) conducted in-depth analyses of the requirements for
secure systems [ANDE72]. The concepts which emerged from
their efforts are today the basis for most major trusted
computer system developments. The basic concept is a
Reference Monitor which mediates the access of all active
system elements (people or programs), referred to as
subjects, to all system elements containing information
(files, records, etc.), referred to as objects. All of the
security relevant decision making functions within a
conventional operating system are collected into a small,
primitive but complete operating system referred to as the
Security Kernel. The security kernel is a specific
implementation of the reference monitor in software and
hardware. The three essential characteristics of this
kernel are that it be:

complete (i.e., that all accesses of all subjects to
all objects be checked by the kernel);

isolated (i.e., that the code that comprises the
kernel be protected from modification or interference
by any other software within the system);

correct (i.e., that it perform the function for which
it was intended and no other function).
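The reference monitor idea can be made concrete with a short sketch. The subject/object model and the single mediation point come from the description above; the class shape, names, and example policy are illustrative assumptions, not a real kernel interface.

```python
# Illustrative sketch of a reference monitor: every access by a
# subject (person or program) to an object (file, record, etc.) is
# checked at one small choke point. Names and the example policy
# are assumptions for the sketch.

class ReferenceMonitor:
    def __init__(self, policy):
        # policy is a function (subject, obj, mode) -> bool
        self.policy = policy

    def check(self, subject, obj, mode):
        # Complete mediation: deny unless the policy explicitly allows.
        return bool(self.policy(subject, obj, mode))

# Example policy: a subject may access only objects it owns.
monitor = ReferenceMonitor(lambda s, o, m: o.get("owner") == s)
print(monitor.check("alice", {"owner": "alice"}, "read"))   # True
print(monitor.check("bob",   {"owner": "alice"}, "write"))  # False
```

The sketch shows only the completeness structure; in a real security kernel the isolation and correctness properties are what make this single check meaningful.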
Since these Air Force studies, considerable effort has gone
into building security kernels for various systems. The
reference monitor concept was the basis for work by MIT,
MITRE, and Honeywell in restructuring the Multics operating
system [SCHR77]. MITRE and UCLA have built prototype
security kernels for the PDP-11 minicomputer
[WOOD77, POPE79]. System Development Corporation (SDC) is
building a security kernel for the IBM VM/370 operating
system [GOLD79]. Ford Aerospace and Communications
Corporation is implementing the Kernelized Secure Operating
System [MCCA79, BERS79] based on the Secure UNIX prototypes
of UCLA and MITRE. AUTODIN II, the DCA secure packet
switching system, is employing this technology in the packet
switching nodes. The Air Force SACDIN program (formerly
called SATIN IV) is also employing this technology.
2.3 TRUSTED OPERATING SYSTEM FUNDAMENTALS

An operating system is a specialized set of software which
provides commonly needed functions for user-developed
application programs. All operating systems provide a well
defined interface to application programs in the form of
system calls and parameters. Figure 2-4 illustrates the
relationship between the operating system and application
software. The operating system interfaces to the hardware
through the basic machine instruction set and to
applications software through the system calls which
constitute the entry points to the operating system.
Applications programs (e.g., A, B, and C) utilize these
system calls to perform their specific tasks.

[Figure 2-4: The operating system between the hardware and application programs A, B, and C, with well-defined interfaces at the machine instruction set and at the system calls.]
A trusted operating system patterned after an existing
system is illustrated in figure 2-5. The security kernel is
a primitive operating system providing all essential
security relevant functions, including process creation and
execution and mediation of primary interrupt and trap
responses. Because of the need to prove that the security
relevant aspects of the kernel perform correctly, great care
is taken to keep the kernel as small as possible. The
kernel interface is a well defined set of calls and
interrupt entry points. In order to map these kernel
functions into a specific operating system environment, the
operating system emulator provides the nonsecurity software
interface for user application programs which is compatible
with the operating system interface in figure 2-4. The
level of compatibility determines what existing single
security level application programs (e.g., A, B, C) can
operate on the secure system without change.

[Figure 2-5: A trusted operating system in layers: hardware, security kernel, and operating system emulator, each separated by a well-defined interface, with application programs on top.]
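The division of labor between the emulator and the kernel can be sketched as follows. All class and call names here are illustrative assumptions, not the actual KSOS or KVM/370 interfaces.

```python
# Sketch of the layering described above: user programs call the
# familiar operating-system interface; the emulator maps each call
# onto the security kernel's primitives, and only the kernel can
# actually grant an access. All names are illustrative.

class Kernel:
    """Small, verifiable core: the only code that grants access."""

    def attach(self, process, segment, mode):
        if not self.mediate(process, segment, mode):
            raise PermissionError("kernel denied access")
        return ("handle", id(segment), mode)

    def mediate(self, process, segment, mode):
        return True   # stand-in for the real security policy

class OSEmulator:
    """Unprivileged layer presenting a conventional system-call
    interface (open, etc.) on top of the kernel."""

    def __init__(self, kernel):
        self.kernel = kernel
        self.directory = {}   # path -> segment bookkeeping

    def open(self, process, path, mode):
        segment = self.directory.setdefault(path, object())
        # The emulator keeps the directory; the kernel alone decides
        # whether this process may attach the segment.
        return self.kernel.attach(process, segment, mode)

emulator = OSEmulator(Kernel())
print(emulator.open("process-A", "/data/report", "read"))
```

The design point the sketch tries to capture is that the emulator can be arbitrarily rich without being trusted, since nothing it does can bypass the kernel's check.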
Dedicated systems often do not need or cannot afford the
facilities or environment provided by a general purpose
operating system, but they may still be required to provide
internal protection. Because the security kernel interface
is well defined and provides all the primitive functions
needed to implement an operating system, it can be called
directly by specialized application programs which provide
their own environment in a form tailored for efficient
execution of the application program. Examples of this type
of use are dedicated data base management and message
handling systems.
Figure 2-6 illustrates the relationship between two typical
computer systems connected by a network. Each system is
composed of an operating system (depicted by the various
support modules arrayed around the outside of each box) and
application programs (e.g., A, Q, and R in the inner area of
the boxes). The dotted path shows how a terminal user on
System I might access File X on System II. Working through
the terminal handler, the user must first communicate with
[Figure 2-4: Application programs A, B, and C above the operating
system and hardware, with a well defined interface at each boundary]

[Figure 2-5: A trusted operating system: application programs,
operating system emulator, security kernel, and hardware, each
layer separated by a well defined interface]
an application program (A), which will initiate a network
connection with the remote computer through the network
interface software. On System II an application program or
a system utility (Q) is initiated on the user's behalf to
access File X using the file system. Program Q could
perform a data base update or retrieval for the user, or it
could arrange to transfer the file across the network to the
local computer for processing.

When this scenario is applied in a secure environment, the
two systems are placed in physically secure areas and, if
the network is not secure, encryption devices are installed
at the secure interface to the network as shown in figure
2-6.
Figure 2-7 illustrates the function of the security kernel
in the above scenario. Because the kernel resides directly
on the hardware (figure 2-5) and processes all interrupts,
traps and other system actions, it is logically imposed
between all "subjects" and "objects" on the system and can
perform access checks on every event affecting the system.
It should be noted that depending on the nature of the
hardware architecture of the system, the representation of
the kernel may have to include the various I/O device
handlers. The DEC PDP-11, for example, requires that all
device handlers be trusted and included in the kernel since
I/O has direct access to memory. The Honeywell Level 6 with
the Security Protection Module option (under development)
does not require trusted device drivers since I/O access to
memory is treated the same way as all other memory accesses
and can be controlled by the existing hardware mechanisms.
[Figure 2-6: Two computer systems connected by a network, with
encryption devices at the secure interfaces to the network]

[Figure 2-7: The security kernel logically imposed between all
subjects and objects on the system]
2.4 SYSTEM SECURITY VULNERABILITIES

Protection is always provided in relative quantities.
Complete security is not possible with today's physical
security measures, nor will it be with new computer security
measures. There will always be something in any security
system which can fail. The standard approach to achieving
reliable security is to apply multiple measures in depth.
Traditional locks and fences provide degrees of protection
by delaying an intruder until some other protection
mechanism, such as a roving watchman, can discover the
attempted intrusion. With computer systems this "delay
until detected" approach won't always work. Once an
intruder knows about a security flaw in a computer system,
he can generally exploit it quickly and repeatedly with
minimal risk of detection.
Research on the security kernel approach to building trusted
operating systems has produced a positive change in this
situation. While absolute security cannot be achieved, the
design process for trusted computer systems is such that one
can examine the spectrum of remaining vulnerabilities and
make reasonable judgments about the threats he expects to
encounter and the impact that countermeasures will have on
system performance.
A caution must be stated that the techniques described here
do not diminish the need for physical and administrative
security measures to protect a system from unauthorized
external attack. The computer security/integrity measures
described here allow authorized users with varying data
access requirements to simultaneously utilize a computer
facility. This capability relies upon the existing physical
and administrative security measures rather than replacing
them.
The nature of traditional physical and administrative
security vulnerabilities encountered in the operation of
computers with sensitive information is well understood.
Only users cleared to the security level of the computer
complex are allowed access to the system. With the advent
of trusted computer systems allowing simultaneous use of
computers by personnel with different security clearances
and access requirements, an additional set of security
vulnerabilities comes into play. Table 2-A describes one
view of this new vulnerability spectrum as a series of
categories of concern. None of these concerns was serious
in previous systems because there was no need or opportunity
to rely on the integrity of the computer hardware and
software.
The first category is the Security Policy which the system
must enforce in order to assure that users access only
Table 2-A Operating System Security Vulnerabilities

Software (Installation Independent):

  Security Policy
    Function: Establish security relationship between all system
      users and resources (e.g., DoD Security Policy)
    Vulnerability Resolution: Review
    Relative Security Risk: Moderate

  System Specification
    Function: Establish policy relationship for each system module
      (e.g., Parnas I/O assertions)
    Vulnerability Resolution: For each module, establish security
      assertions which govern activity
    Relative Security Risk: High

  High Order Language Implementation
    Function: Transform System Specification provisions for each
      module into an HOL (e.g., Fortran, PASCAL, C)
    Vulnerability Resolution: Manual or interactive validation that
      the HOL obeys the system specification
    Relative Security Risk: High

  Machine Language Implementation
    Function: Transform HOL implementation into binary codes which
      are executed by hardware
    Vulnerability Resolution: Compiler testing
    Relative Security Risk: Moderate

Hardware (Installation Dependent):

  Hardware Instruction Modules
    Function: Perform machine instructions (e.g., ADD instruction)
    Vulnerability Resolution: Testing; redundant checks of security
      relevant hardware
    Relative Security Risk: Low, except for security related hardware

  Circuit Electronics
    Function: Perform basic logic functions which comprise
      instructions (e.g., AND, OR functions)
    Vulnerability Resolution: Maintenance testing
    Relative Security Risk: Low

  Device Physics
    Function: Perform basic electromagnetic functions which comprise
      basic logic functions (e.g., electron interaction)
    Vulnerability Resolution: Maintenance testing
    Relative Security Risk: Very Low
authorized data. This policy consists of the rules which
the computer will enforce governing the interactions between
system users. There are many different policies possible,
ranging from allowing no one access to anyone else's
information to full access to all data on the system. The
DoD security policy (table 2-B) consists of a lattice
relationship in which there are classification levels,
typically Unclassified through Top Secret, and compartments
(or categories) which are often mutually exclusive groupings
[BELL74]. With this policy a partial ordering relationship
is established in which users with higher personnel security
clearance levels can have access to information at lower
classification levels, provided the users also have a "need
to know" the information. The vulnerability concern
associated with the security policy is assuring that the
policy properly meets the total system security
requirements.
The second general concern is the System Specification
Level. Here the function of each module within the system
and its interface to other modules is described in detail.
Depending upon the exact approach employed, the system
specification level may involve multiple abstract
descriptions. The vulnerability here is to be able to
assure that each level of the specification enforces the
policy previously established.
The next vulnerability concern is the high level language
implementation. This category constitutes the actual module
implementation represented in a high order language (HOL)
such as EUCLID or PASCAL. This vulnerability involves the
assurance that the code actually obeys the specifications.
The next concern on the vulnerability list is the machine
code implementation, which includes the actual instructions
to be run on the hardware. The step from HOL implementation
to machine code is usually performed by a compiler, and the
concern is to assure that the compiler accurately transforms
the HOL implementation into machine language.
The next level of concern is that the hardware modules
implementing the basic instructions on the machine perform
accurately the functions they represent. Does the ADD
instruction perform an ADD operation correctly and nothing
else? Finally, the last concerns include the circuit
electronics and the more fundamental device physics itself.
Do these elements accurately perform in the expected manner?

As can be seen by analyzing this vulnerability spectrum,
some of the areas of concern are more serious than others.
In particular, relatively little concern is given to circuit
electronics and device physics since there is considerable
confidence that these elements will perform as expected.
There is a concern with hardware modules, though in general
TABLE 2-B DoD Security Policy

I. Non-discretionary (i.e., levels established by national policy
   must be enforced).

   Classification levels:  Top Secret, Secret, Confidential,
                           Unclassified
   Compartments:           A, B, C (mutually exclusive groupings)

   Partially Ordered Relationship:

     Top Secret > Secret > Confidential > Unclassified
     Compartments A, B, C are mutually exclusive

   Example:

     A user in Compartment B, level Secret, can have access to all
     information at Secret and below (e.g., Confidential and
     Unclassified) in that compartment, but no access to
     information in Compartments A or C.

II. Discretionary, "Need to know" (i.e., levels established
    "informally").
most nonsecurity relevant hardware failures do not pose a
significant vulnerability to the security of the system and
will be detected during normal operations of the machine.
The security relevant hardware functions can be subjected to
frequent software testing to insure (to a high degree) that
they are functioning properly. The mapping between HOL and
machine code implementation is a serious concern. The
compiler could perform improper transformations which would
violate the integrity of the system. This mapping can be
checked in the future by verification of the compiler
(presently beyond the state-of-the-art). Today we must rely
on rigorous testing of the compiler.
The selection of the security policy which the system must
support requires detailed analysis of the application
requirements, but it is not a particularly complex process
and can be readily comprehended, so the level of concern is
not too high for this category.
The system specification and HOL implementations are the two
areas which are of greatest concern, both because of the
complex nature of these processes and the direct negative
impact that an error in either has on the integrity of the
system. Considerable research has been done to perfect both
the design specification process and methods for assuring
its correct HOL implementation [POPE78b, MILL76, FEIE77,
WALK79, MILL79]. Much of this research has involved the
development of languages and methodologies for achieving a
complete and correct implementation [ROUB77, AMBL76, HOLT78].
As stated earlier, this vulnerability spectrum constitutes a
set of conditions in which the failure of any element may
compromise the integrity of the entire system. In the high
integrity systems being implemented today, the highest risk
vulnerability areas are receiving the most attention.
Consistent with the philosophy of having security measures
in depth, it will be necessary to maintain strict physical
and administrative security measures to protect against
those lower risk vulnerabilities that cannot or have not yet
been eliminated by trusted hardware/software measures. This
will result in the continued need to have cleared operation
and maintenance personnel and to periodically execute
security checking programs to detect hardware failures.
Over the next few years, as we understand better how to
handle the high risk vulnerabilities, we will be able to
concentrate more on the lower risk areas and consequently
broaden the classes of applications for which these systems
will be suitable.
Computer system security vulnerabilities constitute paths
for passing information to unauthorized users. These paths
can be divided into two classes: direct (or overt) and
indirect (or covert) channels [LAMP73, LIPN75]. Direct paths
grant access to information through the direct request of a
user. If an unauthorized user asks to read a file and is
granted access to it, he has made use of a direct path. The
folklore of computer security is filled with case histories
of commercial operating systems being "tricked" into giving
direct access to unauthorized data. Indirect or covert
channels are those paths used to pass information between
two user programs with different access rights by modulating
some system resource, such as a storage allocation. For
example, a user program at one access level can manipulate
his use of disk storage so that another user program at
another level can be passed information through the number
of unused disk pages.
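That disk-page channel is easy to simulate. In the following
sketch (a simulation only; the page counts, the threshold, and the
encoding are invented), the sender leaks one bit per agreed time
slot by grabbing or releasing disk pages, while the receiver only
ever asks how many pages remain free, a query any program may
legitimately make:

    #include <stdio.h>

    static int free_pages = 1000;           /* shared, observable resource */

    /* Sender modulates its own disk usage in each time slot. */
    static void sender_emit(int bit)  { free_pages = bit ? 900 : 1000; }

    /* Receiver never touches the sender's data, only the pool size. */
    static int receiver_sample(void)  { return free_pages < 950; }

    int main(void)
    {
        const int secret[8] = { 0, 1, 0, 0, 0, 0, 0, 1 };   /* 'A' = 0x41 */
        int ch = 0;
        for (int i = 0; i < 8; i++) {       /* one bit per time slot */
            sender_emit(secret[i]);
            ch = (ch << 1) | receiver_sample();
        }
        printf("receiver decoded: '%c'\n", ch);  /* no direct read occurred */
        return 0;
    }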
Unauthorized direct access information paths can be
completely eliminated by the security kernel approach since
all objects are labeled with access information and the
kernel checks them against the subject's access rights
before each access is granted. The user who is interested
only in eliminating unauthorized direct data access can
achieve "complete" security using these techniques. Many
environments in which all users are cleared and only a
"need-to-know" requirement exists can be satisfied by such
a system.
Indirect data paths are more difficult to control. Some
indirect channels can be easily eliminated; others can never
be prevented. (The act of turning off the power to a system
can always be used to pass information to users.) Some
indirect channels have very high bandwidth (memory to memory
speeds); many operate at relatively low bandwidth.
Depending upon the sensitivity of the application, certain
indirect channel bandwidths can be tolerated. In most cases
external measures can be taken to eliminate the utility of
an indirect channel to a potential penetrator.
The elimination of indirect data channels often affects the
performance of a system. This situation requires that the
customer carefully examine the nature of the threat he
expects and that he eliminate those indirect paths which
pose a real problem in his application. In a recent
analysis, one user determined that indirect path bandwidths
of approximately teletype speed are acceptable while paths
that operate at line printer speed are unacceptable. The
assumption was that the low speed paths could be controlled
by external physical measures. With these general
requirements to guide the system designer it is possible to
build a useful trusted system today.
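The force of the bandwidth argument is easy to check with
arithmetic. The rates below are our assumptions, roughly 10
characters per second for a teletype-speed path and 1,000 for a
line-printer-speed path, not figures from the analysis cited
above:

    #include <stdio.h>

    int main(void)
    {
        const double document = 250000.0;  /* chars in a sizable document */
        const double teletype = 10.0;      /* chars/second, assumed       */
        const double printer  = 1000.0;    /* chars/second, assumed       */
        printf("teletype-speed leak: %.1f hours\n",
               document / teletype / 3600.0);
        printf("printer-speed leak:  %.1f minutes\n",
               document / printer / 60.0);
        return 0;
    }

A leak that takes most of a working day per document is far easier
to counter with external physical measures than one that finishes
in a few minutes.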
2.5 TRUSTED SYSTEM ENVIRONMENTS
The applications for which trusted operating systems will be
used and the environments in which they will operate cover a
wide spectrum. The most sensitive practical environment
encompasses highly sensitive intelligence information on a
system with unclassified users. AUTODIN II is employing
security kernel technology to operate a packet switched
network in such an environment. A minimally sensitive
environment in which a trusted system might be placed
involves unclassified information where individual
need-to-know or privacy must be maintained. There are a
large number of environments between these two that have
differing degrees of sensitivity.
The type of application for which the trusted system will be
used influences the concern for the integrity of the system.
For example, while AUTODIN II does not employ full code
verification or fault resistant hardware, it is being used
for an application which offers the user few opportunities
to exploit weaknesses within the packet switch software.
Thus it can be used in a much higher-risk environment than
can a general-purpose computer system. A general-purpose
programming environment offers many more opportunities to
exploit system weaknesses. The combination of the
sensitivity of information being processed relative to the
clearances of the users and the degree of user capability
afforded by a particular application are the primary factors
in determining the level of concern required for a
particular system.
There are examples of multilevel systems that have been
approved which provide significant data points in the
environment/application spectrum. Honeywell Multics,
enhanced by an "access isolation mechanism," is installed as
a general-purpose timesharing system at the Air Force Data
Services Center in the Pentagon in a Top Secret environment
with some users cleared only to the Secret level. Multics
has the best system integrity of any commercial operating
system available today. While it does not have formal
design specifications as described in the previous section,
the system was designed and structured with protection as a
major goal. Formal development procedures were not used
because the system was developed before these techniques
were available. In spite of this, after a very thorough and
careful review, the Air Force determined that the benefit of
using this system exceeded the risk that a user might
attempt to exploit a system weakness, given that all users
have at least a Secret clearance.
There have been several other examples where current
technology, enhanced by audit procedures and subjected to
rigorous testing, has been approved for use in limited
sensitivity applications.
The degree to which one must rely on technical features of a
system for integrity depends significantly on the
environment that the system will operate in and the
capabilities that a user has to exploit system weaknesses.
There has been some study of the range of sensitivities for
different applications and environments [ADAM79]. Section
3.1 describes a way of combining these application and
environment concerns with the technical measures of system
integrity.
2.6 VERIFICATION TECHNOLOGY OVERVIEW
The security kernel approach to designing trusted computing
systems collects the security relevant portions of the
operating system into a small primitive operating system.
In order to have confidence that the system can be trusted,
it is necessary to have confidence that the security kernel
operates correctly. That is, one must have confidence that
the security kernel enforces the security policy which the
system is supposed to obey.
Traditional means such as testing and penetration can and
should be used to uncover flaws in the security kernel
implementation. Unfortunately, it is not possible to test
all possible inputs to a security kernel. Thus, although
testing may uncover some flaws, no amount of testing will
guarantee the absence of flaws. For critical software, such
as a security kernel, additional techniques are needed to
gain the necessary assurance that the software meets its
requirements. Considerable research has been devoted to
techniques for formally proving that software operates as
intended. These techniques are referred to as software
verification technology or simply verification technology.
In the case of a security kernel, the critical aspect of its
operation is the enforcement of a security policy. The
ultimate goal of a verification is to prove that the
implemented security kernel enforces the desired security
policy. There are five main areas of concern in relating
the security policy to the implemented security kernel: the
security policy itself, system specification, high order
language implementation, compiler, and hardware. The
following paragraphs discuss the way in which verification
addresses each of these areas.
2.6.1 Security Policy
DoD has established regulations covering the handling of
classified information (e.g., DoD Directive 5200.28).
However, in order to prove that a security kernel enforces a
security policy, it is necessary to have a formal
mathematical model of the security policy. It is not
possible to prove that the model is correct since the model
is a formal mathematical interpretation of a
non-mathematical policy. Fortunately, mathematical models
of security have existed since 1973, when Bell and LaPadula
formulated a model of multilevel security [BELL74].
Various models of multilevel security have been used since
1973, but they have all been derived from the original
Bell-LaPadula model. Since this model has been widely
disseminated and discussed, one can have confidence that the
model correctly reflects the non-mathematical DoD
regulations. In the case of software with security
requirements different from those of a security kernel, a
specialized model is needed, and thorough review is required
to determine that the model guarantees the informal
requirements.
2.6.2 System Specification

In practice the gap between the mathematical model of
security and the implemented security kernel is too great to
directly prove that the kernel enforces the model. A
specification of the system design can be used to break the
proof up into two parts:
a) Show the system specification obeys the model.
b) Show the kernel code correctly implements the
specification.
Step a) is called Design Verification. Step b) is called
Implementation or Code Verification.
To be useful for verification, the meaning of the system
specification must be precisely defined. This requires that
a formally defined specification language be used. Formal
specification languages, associated design and verification
methodologies, and software tools to help the system
designer and verifier have been developed by several
organizations. Since a specification typically hides much
of the detail which must be handled in an implementation,
design verification is significantly easier than code
verification. The design verification usually requires the
proof of a large number of theorems, but most of these
theorems can be handled by automatic theorem provers. There
are several methodologies available today that work with
existing automatic theorem provers. Verification that a
formal design specification obeys a security model has been
carried out as part of the AUTODIN II, SACDIN, KSOS, and
KVM/370 programs. Design verification can be useful even if
no code verification is done. Traditional techniques can
give some confidence that the code corresponds to the
specification, and design verification will uncover design
flaws, which are the most difficult to correct.
2.6.3 HOL, Compiler, Hardware

After the system specification has been verified to obey the
security model, the remaining problem is to show that the
kernel implementation is consistent with its specification.
The gap from specification to object code is too great for
current verification methodologies to prove that object code
is consistent with a specification. However, work has been
devoted to developing techniques for proving that the HOL
implementation of a system is consistent with its
specification. The implementation for a system is much more
detailed than a specification, and more attributes must be
shown to be true to support the top-level design assertions.
Thus, verification that the code is consistent with its
specification is much more difficult than verification of
the design properties of the specification. Usually many
theorems must be proved for code verification. Even with
automatic theorem provers the verification requires
significant human and computer resources. Recent work in
verification technology has developed code verification to
the point that it is now feasible to attempt code
verification on some small systems. To date, code
verification has been done only for example systems.

To complete the verification one would have to consider the
compiler and hardware. At present, it is beyond the state
of the art to formally prove that production compilers or
hardware operate as specified. However, since the compiler
and hardware will probably be used on many systems, flaws in
their operation are more likely to be revealed than flaws in
the code for a new system. The software is the area where
there is the greatest need for quality assurance effort.
2.6.4 Summary

Verification is useful for increasing one's confidence that
critical software obeys its requirements. An example of
critical software where verification can be useful is a
security kernel. Verification does not show that a system
is correct in every respect. Rather, verification involves
proving consistency between a mathematical model, a formal
specification, and an implementation. Verification that a
formal specification is consistent with a mathematical model
of security has been demonstrated on several recent systems.
Verification of consistency between a specification and a
HOL implementation is on the verge of becoming practical for
small systems, but has not yet been demonstrated except for
example systems. Verification of consistency between the
HOL and machine language is not practical in the near
future. (Verification is discussed in more detail in
section 4.3.)
SECTION 3

COMPUTER SECURITY INITIATIVE STATUS

The goal of the Computer Security Initiative is to establish
widespread availability of trusted computer systems. There
are three major activities of the Initiative seeking to
advance this goal: (1) coordination of DOD R&D efforts in
the computer security field, (2) identification of
consistent and efficient evaluation procedures for
determining suitable environments for the use of trusted
computer systems, and (3) encouragement of the computer
industry to develop trusted systems as part of their
standard product lines. This section describes the
Initiative activities in support of 2 and 3 above. (Section
4 addresses item 1.)
3.1 THE EVALUATED PRODUCTS LIST

Section 1-1101 of the Defense Acquisition Regulations (DAR,
formerly called the Armed Services Procurement Regulations,
or ASPR) defines a procedure for evaluating a product prior
to a procurement action. This procedure establishes a
Qualified Products List (QPL) of items which have met a
predefined government specification. This procedure can be
used when one or more of the following conditions exist:

(i) The time required to conduct one or more of the
examinations and tests to determine compliance
with all the technical requirements of the
specification will exceed 30 days (720 hours).
(Use of this justification should advance product
acceptance by at least 30 days (720 hours).)

(ii) Quality conformance inspection would require
special equipment not commonly available.

(iii) It covers life survival or emergency life saving
equipment. (See 1-1902(b)(ii).)
Whenever any of these conditions exists, a Qualified Products
List process may be established. Under these regulations, a
specification of the requirements that a product must meet
is developed and widely distributed. Any manufacturer who
believes his product meets this specification may submit his
product for evaluation by the government. If the product is
determined to meet the specification, it is entered on a
Qualified Products List maintained by the government agency
performing the evaluation.
Any agency or component seeking to procure an item which
meets the QPL specification can utilize the QPL evaluation
in its procurement process in lieu of performing its own
separate evaluation. The QPL process allows the efficient
and consistent evaluation of complex products and the
general availability of the evaluation results to all DOD
procurement organizations.

There is a provision of the QPL process described in the DAR
that requires all products considered as part of a
particular government RFP to be already on the QPL prior to
issuance of the RFP. If a manufacturer believes that his
product meets the government specification but the
evaluation has not been completed at the time of issuance of
the RFP, that product will be disqualified from that
procurement action. This provision has been viewed by many
as anti-competitive and has been a deterrent to the wide use
of the QPL process.
The Special Committee on Compromising Emanations (SCOCE) of
the National Communications Security Board has established a
modified QPL process for the evaluation of industry devices
which meet government standards for compromising emanations
(NACSEM 5100). Under the provisions of their Preferred
Products List (PPL), a manufacturer supplies the government
with the results of tests performed either by himself or one
of a set of industry TEMPEST evaluation laboratories which
indicate compliance with the NACSEM 5100 specification.
Upon affirmative review of these test results, the product
will be entered on the TEMPEST Preferred Products List. Any
manufacturer may present the results of the testing of his
product to the government at any time, including during the
response to a particular RFP.
The evaluation of the integrity of industry developed
computer systems is a complex process requiring considerable
time and resources, both of which are in short supply. A
QPL-like process for disseminating the results of these
evaluations is essential. Under these circumstances, a
small team of highly competent government computer science
and system security experts will perform the evaluation of
industry submitted systems, and the results of their
evaluations will be made available to any DOD organization
for use in their procurement process, eliminating the
inefficiency and inconsistency of duplicate evaluations.
As described in section 3.4.1, there are many technical
features which influence the overall integrity of a system.
Some of these features are essential for protecting
information within a system regardless of the type of
application or the environment. However, many of these
features may not be relevant in particular applications or
environments, and therefore it may be
reasonable to approve systems for use in some environments
even with known deficiencies in certain technical areas.
For example, in an environment where all users are cleared
to a high level and there is a need-to-know requirement, it
may be reasonable to employ a system which has not
completely eliminated all indirect data paths (see section
2.4) on the premise that a high degree of trust has
already been placed in the cleared users and they are not
likely to conspire with another user to attempt to exploit a
complex indirect channel to obtain information for which
they have no need-to-know. Similar arguments can be made
for systems processing information of a low level of
sensitivity. Since indirect paths require two conspiring
users, they are difficult to use and in most cases are not
worth the risk of being detected.
Thus, systems with certain technical features should be
usable for applications of a particular type in environments
of a particular type. It is possible to describe classes of
those integrity features required for different application
and risk environments. If there is a process (as described
in section 3.4) for evaluating the integrity of various
trusted systems, then an "Evaluated Products List" (EPL) can
be constructed matching products to these protection classes
(and, thus, to certain application and risk environments).

It appears that the technical integrity measures can be
categorized into a small set of classes (six to nine) with
considerable consistency in determining into which class a
particular system will fit. Figure 3-1 is an example of an
Evaluated Products List, consisting of six classes ranging
from systems about which very little is known and which can
be used only in dedicated system-high environments (most of
the commercial systems today) to systems with technical
features in excess of the current state-of-the-art. The
environments are described in terms of the sensitivity of
the information and the degree of user capability.
The Evaluated Products List includes all computer systems
whose protection features have been evaluated. The first
class implies superficial protection mechanisms. A system
in this class is only suitable for a system-high
classification installation. Most modern commercial systems
satisfy at least the requirements of Class I. As one
progresses to higher classes, the technical and assurance
features with respect to system protection are significantly
strengthened and the application environment into which a
system may be placed can be of a higher sensitivity.

In discussing the Evaluated Products List (EPL) concept with
various communities within the defense department and the
intelligence community, it has become clear that, while the
Class 1
  Technical features: May have some protection; login authentication
  Examples: Most modern commercial systems
  Possible environments: Dedicated mode

Class 2
  Technical features: Mandatory data security; penetration testing;
    auditing
  Examples: Mature "enhanced" operating systems
  Possible environments: Benign need-to-know environments

Class 3
  Technical features: Top-level specification of TCB; clearly
    identified and protected TCB; top-down design; testing based
    on TLS
  Examples: Multics
  Possible environments: AFDSC TS-S

Class 4
  Technical features: Formal top-level specifications; design
    verification; test generation from formal TLS; limited covert
    path provisions
  Examples: KSOS-6, KSOS-11, KVM
  Possible environments: Limited user programming, TS-S-C

Class 5
  Technical features: Verified implementation; test case generation
    from LLS; extended covert path provisions
  Possible environments: Full user programming, TS-S-C

Class 6
  Technical features: Object code analysis; hardware specs
  Possible environments: Full user programming, TS-S-C-U

Figure 3-1 EVALUATED PRODUCTS LIST
technical feature evaluation process is understood and
agreed upon, the identification of suitable application
environments will differ depending upon the community
involved. For example, the Genser community may decide that
the technical features of a Class IV system are suitable for
a particular application, whereas the same application in
the intelligence community may require a Class V system. As
a result, the EPL becomes a matrix of suitable application
environments (figure 3-2), depending upon the sensitivities
of the information being processed. In addition to the
intelligence community and the Genser community, there are
the interests of the privacy and the financial communities
and the non-national security communities, whose
requirements frequently are less restrictive than those of
the national security communities.
The successful establishment of an Evaluated Products List
for trusted computing systems requires that the computer
industry become cognizant of the EPL concept and of computer
security technology, and that a procedure for evaluating
systems be formulated. Section 3.2 (below) discusses the
focus of operating system protection requirements, the
Trusted Computing Base. Section 3.3 describes the
Initiative's technology transfer activities. Section 3.4
presents a proposed process for trusted system evaluation,
and section 3.5 summarizes current, informal system
evaluation activity.
[Figure 3-2: Evaluated Products List matrix relating protection
classes to suitable application environments for the different
communities]
3.2 THE TRUSTED COMPUTING BASE

A significant prerequisite to achieving the widespread
availability of commercial trusted systems is the definition
of just what the requirements for a trusted system are.
Security kernel prototypes had been built over the years,
but they were specific to particular hardware bases or
operating systems. In order to present the basic concept of
a security kernel and trusted processes in a general manner
that would apply to a wide range of computer systems and
many applications, a proposed specification for a Trusted
Computing Base (a kernel and trusted processes) was prepared
by Grace Nibaldi of The MITRE Corporation [NIBA79a]. The
specification describes the concept of a Trusted Computing
Base (TCB) and discusses TCB requirements. The rest of this
section describes the Trusted Computing Base, and is
excerpted from [NIBA79a]. (We have preceded the section
numbering used in [NIBA79a] by TCB. Thus, Nibaldi's section
3.1 appears below as TCB.3.1.)
TCB.1 Scope

In any computer operating system that supports
multiprogramming and resource sharing, certain mechanisms
can usually be identified as attempting to provide
protection among users against unauthorized access to
computer data. However, experience has shown that no matter
how well-intentioned the developers, traditional methods of
software design and production have failed to provide
systems with adequate, verifiably correct protection
mechanisms. We define a trusted computing base (TCB) to be
the totality of access control mechanisms for an operating
system.

A TCB should provide both a basic protection environment and
the additional user services required for a trustworthy
turnkey system. The basic protection environment is
equivalent to that provided by a security kernel (a
verifiable hardware/software mechanism that mediates access
to information in a computer system); the user services are
analogous to the facilities provided by trusted processes in
kernel-based systems. Trusted processes are designed to
provide services that could be incorporated in the kernel
but are kept separate to simplify verification of both
kernel and trusted processes. Trusted processes also have
been referred to as "privileged," "responsible,"
"semi-trusted," and "non-kernel security-related (NKSR)" in
various implementations. This section documents the
performance, design, and development requirements for a TCB
for a general-purpose operating system.
In this section, there will be no attempt to specify how any
particular aspect of a TCB must be implemented. Studies of
present-day computer architectures [SMIT75, TANG78] indicate
that in the near term a significant amount of software will
be needed for protection regardless of any support provided
by the underlying hardware. In future computer
architectures, more of the TCB functions may be implemented
in hardware or firmware. Examples of specific hardware and
software implementations are given merely as illustrations,
and are not meant to be requirements.

This specification is limited to computer hardware and
software protection mechanisms; not covered are the
administrative, physical, personnel, communications, and
other security measures that complement the internal
computer security controls. For more information in those
areas, see DOD Directive 5200.28, which describes the
procedures for the Department of Defense.
(Section 2 of the TCB specification contains references.
They have been included in the references for this report
rather than being included here as TCB.2.)
TCB.3 General Requirements

TCB.3.1 System Definition

A TCB is a hardware and software access control mechanism
that establishes a protection environment to control the
sharing of information in computer systems. Under hardware
and software we include implementations of computer
architectures in firmware or microcode. A TCB is an
implementation of a reference monitor, as defined in
[ANDE72], that controls when and how data is accessed.

In general, a TCB must enforce a given protection policy
describing the conditions under which information and system
resources can be made available to the users of the system.
Protection policies address such problems as undesirable
disclosure and destructive modification of information in
the system, and harm to the functioning of the system
resulting in the denial of service to authorized users.
Proof that the TCB will indeed enforce the relevant
protection policy can only be provided through a formal,
methodological approach to TCB design and verification, an
example of which is discussed below. Because the TCB
consists of all the security-related mechanisms, proof of
its validity implies the remainder of the system will
perform correctly with respect to the policy.

Ideally, in an implementation, policy and mechanism can be
kept separate so as to make the protection mechanisms
flexible and amenable to different environments, e.g.,
military, banking, or medical applications. The advantage
here is that a change in or reinterpretation of the required
policy need not result in rewriting or reverifying the TCB.
In the following sections, general requirements for TCB
design and verification are discussed.
TCB.3.2 Protection Policy

The primary requirement on a TCB is that it support a
well-defined protection policy. The precise policy will be
largely application and organization dependent. Four
specific protection policies are listed below as examples
around which TCBs may be designed. All are fairly general
purpose, and when used in combination, would satisfy the
needs of most applications, although they do not
specifically address the denial of service threat. The
policies are ordered by their concern either with the
viewing of information--security policies--or with
information modification--integrity policies; and by whether
the ability to access information is externally
predetermined--mandatory policies--or controlled by the
processor of the information--discretionary policies:

1. mandatory security (used by the Department of
Defense--see DoDD 5200.28), to address the
compromise of information involving national
security;

2. discretionary security (commonly found in general
purpose computer systems today);

3. mandatory integrity; and

4. discretionary integrity.
In each of these cases, "protection attributes" are
associated with the protectable entities, or "objects"
(computer resources such as files and peripheral devices
that contain the data of interest), and with the users of
these entities (e.g., users, processes), referred to as
subjects. In particular, for mandatory security policy, the
attributes of subjects and objects will be referred to as
"security levels." These attributes are used by the TCB to
determine what accesses are valid. The nature of these
attributes will depend on the applicable protection policy.
See Nibaldi [NIBA79b] for a general discussion on policy.
See Biba [BIBA75] for a discussion of integrity.
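As an illustration of the discretionary side (policy 2 above), the
sketch below attaches protection attributes to an object and
checks a subject's request against them. The fixed-size access
control list is an illustrative encoding of our own, not a TCB
requirement:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct { char subject[16]; bool read, write; } acl_entry_t;
    typedef struct { acl_entry_t acl[4]; int n; } object_t;

    /* Grant only what the object's own attributes name; the default
       for an unlisted subject is no access at all. */
    static bool discretionary_check(const object_t *o, const char *subj,
                                    bool want_write)
    {
        for (int i = 0; i < o->n; i++)
            if (strcmp(o->acl[i].subject, subj) == 0)
                return want_write ? o->acl[i].write : o->acl[i].read;
        return false;
    }

    int main(void)
    {
        object_t file = { .acl = { { "alice", true, true },
                                   { "bob",   true, false } },
                          .n = 2 };
        printf("bob read:  %d\n", discretionary_check(&file, "bob", false));
        printf("bob write: %d\n", discretionary_check(&file, "bob", true));
        return 0;
    }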
TCB.3.3 Reference Monitor Requirements
As stated above, a TCB is an implementation of a reference
monitor. The predominant criteria for a sound reference
monitor implementation are that it be
1. complete in its mediation of access to data and
other computer resources;
2. self-protecting, free from interference and
spurious modification; and
3. verifiable, constructed in a way that enables
convincing demonstration of its correctness and
infallibility.
TCB.3.3.1 Completeness

The requirement that a TCB mediate every access to data in
the computer system is crucial. In particular, a TCB should
mediate access to itself--its code and private data--thereby
supporting the second criterion for self-protection. The
implication is that on every action by subjects on objects,
the TCB is invoked, either explicitly or implicitly, to
determine the validity of the action with respect to the
protection policy. This includes:

1. unmistakably identifying the subjects and objects
and their protection attributes, and

2. making it impossible for the access checking to be
circumvented.

In essence, the TCB must establish an environment that will
simultaneously (a) partition the physical resources of the
system (e.g., cycles, memory, devices, files) into "virtual"
resources for each subject, and (b) cause certain activities
performed by the subjects, such as referencing objects
outside of their virtual space, to require TCB intervention.
TCB.3.3.1.1 Subject/Object Identification

What are the subjects and objects for a given system, and how
are they brought into the system and assigned protection
attributes? In the people/paper world, people are clearly
the subjects. In a computer, the process has commonly been
taken as a subject in security kernel-based systems, and
storage entities (e.g., records, files, and I/O devices) are
usually considered the objects. Note that a process might
also behave as an object, for instance if another process
sends it mail (writes it). Likewise, an I/O device might be
considered to sometimes act as a subject, if it can access
any area of memory in performing an operation. In any case,
the policy rules governing subject/object interaction must
always be obeyed. The precise breakdown for a given system
will depend on the application. Complete identification of
subjects and objects within the computer system can only be
assured if their creation, name association, and protection
attribute assignment always take place under TCB control,
and no subsequent manipulations on subjects and objects are
allowed to change these attributes without TCB involvement.
Certain issues remain, such as (a) how to associate
individual users and the programs they run with subjects,
and (b) how to associate all the entities that must be
accessed on the system (i.e., the computer resources) with
objects. TCB functions for this purpose are described in
TCB.4, "Detailed Requirements."
TCB.3.3.1.2 Access Checking

How are the subjects constrained to invoke the TCB on every
access to objects? Just as the TCB should be responsible for
generating and unmistakably labelling every subject and
object in the system, the TCB must also be the facility for
enabling subjects to manipulate objects, for instance by
forcing every fetch, store, or I/O instruction executed by
non-TCB software to be "interpreted" by the TCB.
Hardware support for checking on memory accesses exists on
several machines, and has been found to be very efficient.
This support has taken the form of descriptor-based
addressing: each process has a virtual space consisting of
segments of physical memory that appear to the process to be
connected. In fact, the segments may be scattered all over
memory, and the virtual space may have holes in it where no
segments are assigned. Whenever the process references a
location, the hardware converts the "virtual address" into
the name of a base register (holding the physical address of
the start of the segment, the length of the segment, and
the modes of access allowed on the segment), and an offset.
The content of the base register is called a descriptor.
The hardware can then abort if the form of reference (e.g.,
read, write) does not correspond to the valid access modes,
if the offset exceeds the size of the segment, or if no
segment has been "mapped" to that address. The software
portion of the TCB need merely be responsible for setting up
the descriptor registers based on one-time checks as to the
legality of the mapping.
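The descriptor check just described fits in a few lines. The
field layout below is an assumed one; actual descriptor formats
vary from machine to machine:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool     mapped;     /* is any segment assigned here?     */
        unsigned base;       /* physical address of segment start */
        unsigned length;     /* segment size in words             */
        unsigned modes;      /* permitted access: 1=read, 2=write */
    } descriptor_t;

    static descriptor_t dtable[4];   /* per-process, set up by the TCB */

    /* "Hardware" translation: returns a physical address, or -1 to
       signal an aborted reference. */
    static long translate(unsigned seg, unsigned offset, unsigned mode)
    {
        if (seg >= 4 || !dtable[seg].mapped)    return -1; /* hole in space */
        if (offset >= dtable[seg].length)       return -1; /* past the end  */
        if ((dtable[seg].modes & mode) != mode) return -1; /* wrong mode    */
        return (long)dtable[seg].base + offset;
    }

    int main(void)
    {
        dtable[1] = (descriptor_t){ true, 0x4000, 0x100, 1 }; /* read-only */
        printf("read  seg1+0x10: %ld\n", translate(1, 0x10, 1));
        printf("write seg1+0x10: %ld\n", translate(1, 0x10, 2)); /* aborted  */
        printf("read  seg2+0x00: %ld\n", translate(2, 0x00, 1)); /* unmapped */
        return 0;
    }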
Access checking in I/O has been aided by hardware features
in a variety of ways. In one line of computers, devices are
manipulated through the virtual memory mechanism: a process
accesses a device by referencing a virtual address that is
subsequently changed by hardware into the physical address
of the device. This form of I/O is referred to as "mapped
I/O" [TANG78]. Other methods of checking I/O are discussed
in section TCB.4.1.2.
TCB.3.3.2 Self-Protection

Following the principle of economy of mechanism [SALT75],
the TCB ideally protects itself in the same way that it
protects other objects, so the discussion on the
completeness property applies here as well. In addition,
not uncommonly, computer architectures provide for
multiple protection "domains" of varying privilege (e.g.,
supervisor, user). Activities across domains are limited by
the hardware so that software in the more privileged
domains might affect the operations in less privileged
domains, but not necessarily vice versa. Also, software not
executing in a privileged domain is restricted, again by the
hardware, from using certain instructions, e.g.,
manipulate-descriptor-registers, set-privilege-bit, halt,
and start-I/O. Generally only TCB software would run in the
most privileged domain and rely on the hardware for its
protection. (Of course, part of the TCB might run outside
of that domain, e.g., as a trusted process.) Clearly, if in
addition to the TCB, non-TCB or untrusted software were
allowed to run in the privileged region, TCB controls could
be subverted and the domain mechanism would be useless.
TCB.3.3.3 Verifiability

The responsibility given to the TCB makes it imperative that
confidence in the controls it provides be established.
Naturally, this applies to TCB hardware, software, and
firmware. The following discussion considers only software
verification. Techniques for verifying hardware correctness
have tended to emphasize exhaustive testing, and will no
doubt continue to do so. Even here, however, the trend is
toward more formal techniques of verification, similar to
those being applied to software. One approach is given in
[FURT78]. IBM has done some work on microcode verification.

Minimizing the complexity of TCB software is a major factor
in raising the confidence level that can be assigned to the
protection mechanisms it provides. Consequently, two
general design goals to follow, after identifying all
security relevant operations for inclusion in the TCB, are
(a) to exclude from the TCB software any operations not
strictly security-related, so that one can focus attention on
those that are, and (b) to make as full use as possible of
protection features available in the hardware. Formal
techniques of verification, such as those discussed in the
next section, are promoted in TCB design to provide an
acceptable methodology upon which to base a decision as to
the correctness of the design and of the implementation.
TCB.3.3.3.1 Security Model
Any formal methodology for verifying the correctness of a
TCB must start with the adoption of a mathematical model of
the desired protection policy. A model encompassing
mandatory seCurity:, and to some ei't-eht the discretionary
security and integrity policies was developed by Bell and
LaPadula [BELL73]. Biba (BIBA75] has shown how mandatory
integrity is the dual of security and, consequently may be
modeled similarly. There are five axioms of the model. The
primary two are the, simple security condition and the
*-property (read star-property). The simple security
condition states that a subject cannot observe an object
unless the security level of the subject, that is, the
protection attributes, is greater than or equal to that of
the object. This axiom alone might be sufficient if not for
the threat of non-TCB software either accidentally or
intentionally copying information into objects at lower
security levels. For this reason, the *-property is
included. The *-property states a subject may only modify
an object if the security level of the subject is less than
or equal to the security level of the object.
The simple security condition and the *-property can be
circumvented within a computer system by not properly
classifying the object initially or by reclassifying the
object arbitrarily. To prevent this, the model includes two
additional axioms: the activity axiom guarantees that all
objects have a well-defined security level known to the TCB;
the tranquility axiom requires that the classifications of
objects not be changed.
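As an illustration, the following sketch renders the four axioms just
described in present-day Python. It is only a toy: the linear
ordering of levels, the names, and the ReferenceMonitor class are
invented for this example, and the published model actually uses a
lattice of classification/category pairs rather than a simple total
order.

    # Toy rendering of the mandatory security axioms.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2,
              "TOP SECRET": 3}

    def dominates(a, b):
        """True if level a is greater than or equal to level b."""
        return LEVELS[a] >= LEVELS[b]

    class ReferenceMonitor:
        def __init__(self):
            self.object_level = {}          # activity axiom: every
                                            # object has a known level

        def create(self, obj, level):
            self.object_level[obj] = level  # fixed thereafter
                                            # (tranquility axiom)

        def may_observe(self, subject_level, obj):
            # Simple security condition: no "read up."
            return dominates(subject_level, self.object_level[obj])

        def may_modify(self, subject_level, obj):
            # *-property: no "write down."
            return dominates(self.object_level[obj], subject_level)

Thus a Secret subject may observe a Confidential object (may_observe
returns true) but may not modify it (may_modify returns false), which
is exactly the copying threat the *-property is meant to stop.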
The model also defines what is called a "trusted subject"
that may be privileged to violate the protection policy in
some ways where the policy is too restrictive. For
instance, part of the TCB might be a "trusted process" that
allows a user to change the security level of information
that should be declassified (e.g., has been extracted from a
classified document but is itself not classified). This
action would normally be considered a tranquility or
*-property violation, depending on whether the object
containing the information had its security level changed or
the information was copied into an object at a lower
security level.
TCB.3.3.3.2 Methodology
A verification methodology is depicted in figure 3-3. In
this technique, the correspondence between the
implementation (here shown as the machine code) and the
protection policy is proven in three steps: (a) the
properties of a mathematical model of the protection policy
are proven to be upheld in a formal top-level specification
of the behavior of a given TCB in terms of its input,
output, and side effects; (b) the implementation of the
specifications in a verifiable programming language
(languages such as Pascal, Gypsy, Modula, and Euclid, for
FIGURE 3-3
which verification tools either exist or are currently being
planned [GOOD78b]) is shown to faithfully correspond to the
formal specifications; and finally (c) the generated machine
code is demonstrated to correctly implement the programs.
The model describes the conditions under which the subjects
in the system access the objects. With this approach, it
can be shown that the machine code realizes the goals of the
model, and as a result, that the specified protection is
provided.
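Step (a) of this chain can be suggested with a small example. The
sketch below (in Python; the levels, subjects, objects, and function
names are all invented) states a top-level specification of a "read"
operation and then checks, by brute force over a tiny finite state,
that every read the specification permits also satisfies the model's
simple security condition. Actual efforts used formal specification
languages and theorem provers rather than exhaustive checking; this
only conveys the idea of proving a specification against the model.

    from itertools import product

    LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]
    RANK = {lvl: i for i, lvl in enumerate(LEVELS)}

    subject_level = {"ops_user": "SECRET", "analyst": "TOP SECRET"}
    object_level  = {"logfile": "CONFIDENTIAL", "warplan": "TOP SECRET"}
    acl           = {"logfile": {"ops_user"}, "warplan": {"analyst"}}

    def spec_allows_read(subj, obj):
        """Top-level specification of 'read': a discretionary check
        (the ACL) plus the mandatory level check."""
        return (subj in acl[obj] and
                RANK[subject_level[subj]] >= RANK[object_level[obj]])

    def simple_security(subj, obj):
        """The model's simple security condition alone."""
        return RANK[subject_level[subj]] >= RANK[object_level[obj]]

    # "Proof" by exhaustion: the specification never permits a read
    # that the model forbids.
    assert all(simple_security(s, o)
               for s, o in product(subject_level, object_level)
               if spec_allows_read(s, o))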
Where trusted subjects are part of the system, a similar
correspondence proof starting with an additional model of
the way in which the trusted subject is allowed to violate
the general model becomes necessary. Clearly, the more
extensive the duties of the trusted subject, the more
complex the model and proof.
TCB.3.3.3.3 Confinement Problems
The TCB is designed to "confine" what a process can access
in a computer system. The discussion above centers around
direct access to information. Other methods exist to
compromise information that are not always as easily
detected or corrected. Known as "indirect channels", they
exist as a side-effect of resource-sharing. This manner of
passing information may be divided into -"storage" channels
and "timing" channels. Storage channels involve shared
control variables that can be influenced by a sender and
read by a receiver, for instance when the fact that the
system disk is full is returned to a process trying to
create a file. Storage channels, however, can be detected
using verification techniques. Timing channels also involve
the use of resources, but here the exchange medium is time;
these channels are not easily detected through verification.
An example of a timing channel is where modulation of
scheduling time can be used to pass information.
In order to take advantage of indirect channels, at least
two "colluding" processes are needed, one with direct access
to the information desired, and a second one to detect the
modulations and translate them into information that can be
used by an unauthorized recipient.
slowed by introducing noise, for instance by varying the
length of time certain operations take to complete, but
performance would be affected.
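The "system disk is full" example above can be made concrete. The
following self-contained simulation (all names invented) shows a
high-level sender and a low-level receiver sharing nothing but the
pool of free disk blocks, yet moving one bit per period between them:

    # Simulation of a storage channel through a shared disk-block pool.
    class Disk:
        def __init__(self, blocks):
            self.free = blocks

        def create_file(self, blocks):
            if blocks > self.free:
                return False          # the visible "disk full" status
            self.free -= blocks
            return True

        def delete_file(self, blocks):
            self.free += blocks

    disk = Disk(blocks=10)
    secret_bits = [1, 0, 1, 1]        # held by the high-level sender
    received = []

    for bit in secret_bits:
        filled = 0
        if bit:                       # sender: exhaust the disk for a 1
            filled = disk.free
            disk.create_file(filled)
        ok = disk.create_file(1)      # receiver: probe with one block
        received.append(0 if ok else 1)
        if ok:
            disk.delete_file(1)       # receiver removes its probe
        disk.delete_file(filled)      # sender resets for the next bit

    assert received == secret_bits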
Storage channels are related to the visibility of control
information: data "about" information, for example, the
names of files not themselves directly accessible, the
length of an IPC message to another user, the time an object
was last modified, or the access control list of a file. It
is often the case that even the fact that an object with
certain protection attributes exists is information that
must be protected. Even the name of a newly created object
such as a file can be a channel if this name is dependent on
information about other files, e.g., if the name is derived
from an incremental counter, used only to generate new file
names. This type of channel can often be closed by making
the data about legitimate information as protected as the
information itself. However, this is not always desirable:
for instance, in computer networks, software concerned only
with the transmission of messages, not with their contents,
might need to view message headers containing message
length, destination, etc.
Systems designers should be aware of confinement problems
and the threats they pose. Formal techniques to at least
identify and determine the bandwidth of the channels, if not
completely close them, are certainly of value here. Ad hoc
measures may be necessary in their absence.
TCB.3.4 Performance Requirements
Since the functions of the TCB are interpretive in nature,
they may be slow to execute unless adequate support is
provided in the hardware. For this reason, in the examples
of functions given below, hardware implementations
(including firmware/microcode), as opposed to software, are
stressed, with the idea that reasonable performance is only
accomplished when support for the protection mechanisms
exists in hardware. Certainly, software implementations are
not excluded, and due to the malleability of software, are
likely more susceptible to appreciable optimization.
TCB.4 Detailed Requirements
The kinds of functions that would be performed by a TCB are
outlined below. Those listed are general in nature: they
are intended to support both general-purpose operating
systems and a variety of dedicated applications that, due to
potential size and complexity, could not easily be verified.
The functions can be divided into two general areas:
software interface functions, operations invoked by
programs, and user interface functions, operations invoked
directly by users. In terms of a security kernel
implementation, the software interface functions would for
the most part be implemented by the kernel; the user
interface functions would likely be carried out in trusted
processes.
TCB.4.1 Software Interface Functions
The TCB acts very much like a primitive operating system.
The software interface functions are those system calls that
user and application programs running in processes on top of
the TCB may directly invoke. These functions fall into three
categories: processes, input/output, and storage.
In the descriptions that follow, general input, output, and
processing requirements are stated. Output values to
processes in particular could cause confinement problems
(i.e., serve as indirect channels) by relating the status
of control variables that are affected by operations of
other processes. Likely instances of this are mentioned
wherever possible.
TCB.4.1.1 Processes
Processes are the active elements in the system, embodying
the notion of the subject in the mathematical model.
(Processes also behave as objects when communicating with
each other.) By definition, a process is "an address space,
a point of execution, and a unit of scheduling." More
precisely, a process consists of: code and data accessible as
part of its address space; a program location at which, at
any point during the life of the process, the address of the
currently executing instruction can be found; and periodic
access to the processor in order to continue. The role of
the TCB is to manage the individual address spaces by
providing a unique environment for each process, often
called a "per-process virtual space," and to equitably
schedule the processor among the processes. Also, since
many applications require cooperating processes, an
inter-process communication (IPC) mechanism is required as
part of the TCB.
TCB.4.1.1.1 Create Process
A create process function causes a new per-process virtual
space to be established with specific program code and an
identified starting execution point. The identity of the
user causing the process to be created should be associated
with the process, and depending on the protection policy in
force, protection attributes should be assigned, such as a
security level at which the process should execute in the
case of mandatory security.
TCB.4.1.1.2 Delete Process
A delete process function causes a process to be purged from
the system, and its virtual space freed. The process is no
longer considered a valid subject or object. If one process
may delete another with different protection attributes, an
indirect channel may arise from returning the fact of the
success or failure of the operation to the requesting
process.
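A skeletal rendering of these two functions appears below. The
attribute handling and the single generic error code are the points
of interest; everything else (the names, the integer levels, and the
assumed policy that the deleter's level must dominate the target's)
is invented for the sketch:

    # Sketch of create/delete process with protection attributes.
    import itertools

    class ProcessTable:
        def __init__(self):
            self._ids = itertools.count(1)
            self.processes = {}                # pid -> (owner, level)

        def create_process(self, owner, level):
            """Establish a new process and bind the creating user's
            identity and security level to it."""
            pid = next(self._ids)
            self.processes[pid] = (owner, level)
            return pid

        def delete_process(self, caller_level, pid):
            """Purge a process.  One generic error is returned whether
            the pid is unknown or the caller's level does not dominate
            the target's, narrowing the indirect channel noted above."""
            entry = self.processes.get(pid)
            if entry is None or caller_level < entry[1]:
                return "error"
            del self.processes[pid]
            return "ok"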
TCB.4.1.1.3 Swap Process
A swap process function allows a process to become blocked
and consequently enable others to run. A TCB implementation
may choose to regularly schedule other processes to execute
after some fixed "time-slice" has elapsed for the running
process. If a TCB supports time-slicing, a swap function
may not be necessary. In order to address a denial of
service threat, this should not be the only process blocking
operation: certain I/O operations should cause the process
initiating the operation to be suspended until the operation
completes.

For example, the hardware could support such an operation
through mechanisms that effect fast process swaps with the
corresponding change in address spaces. An example of such
support is a single "descriptor base" register that points
to descriptors for a process's address space, only modifiable
from the privileged domain. The swap would be executed in
little more than the time required for a single "move"
operation.

As was mentioned above, the "scheduling" operation in itself
may contribute to a timing channel that must be carefully
monitored.
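The blocking and time-slicing behavior described here can be
summarized in a few lines. The round-robin sketch below is purely
illustrative (the process names, tick count, and run stub are
invented); a real TCB would perform the swap itself through the
hardware mechanism described above:

    # Round-robin scheduling with blocked processes skipped.
    from collections import deque

    class Process:
        def __init__(self, name):
            self.name = name
            self.blocked = False      # e.g., waiting on an I/O operation

    def run(proc, ticks):
        print("running %s for %d ticks" % (proc.name, ticks))

    ready = deque([Process("editor"), Process("compiler"),
                   Process("spooler")])
    TIME_SLICE = 3                    # ticks before a forced swap

    def schedule():
        """Run the next unblocked process for one time slice."""
        for _ in range(len(ready)):
            proc = ready.popleft()
            if proc.blocked:
                ready.append(proc)    # still suspended; try the next one
                continue
            run(proc, TIME_SLICE)     # the fast hardware swap occurs here
            ready.append(proc)
            return proc
        return None                   # nothing runnable: processor idles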
TCB.4.1.1.4 IPC Send
A process may send a message to another process permitted to
receive messages from it through an IPC send mechanism. The
TCB should be guided by the applicable protection policy in
determining whether the message should be sent, based on the
protection attributes of the sending and receiving process.
The TCB should also insure that messages are sent to the
correct destination.
An indirect channel may result from returning the success or
failure of "queuing" the message to the sending process,
because the returned value may indicate the existence of
other messages for the destination process, as well as the
existence of the destination process. This may be a problem
particularly where processes with different protection
attributes are involved (even if the attributes are
sufficient for actually sending the message). If such a
channel is of concern, a better option might be to only
return errors involving the message itself (e.g., message
too long, bad message format). Clearly, there is a tradeoff
here between utility and security.
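One way to realize that tradeoff is sketched below, extending the
process-table sketch given earlier with a message queue (the size
limit, the names, and the assumed policy that the receiver's level
must dominate the sender's are all inventions of this example):

    # IPC send that reports only errors about the message itself.
    MAX_MESSAGE = 4096

    def ipc_send(table, queues, sender_level, dest_pid, message):
        if len(message) > MAX_MESSAGE:
            return "message too long"   # safe: about the message only
        entry = table.processes.get(dest_pid)
        if entry is not None and entry[1] >= sender_level:
            queues.setdefault(dest_pid, []).append(message)
        # Whether the destination exists, or whether anything was
        # actually queued, is deliberately not reported back.
        return "accepted"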
TCB.4.1.1.5 IPC Receive
A process may receive a message previously sent to it
through an IPC receive function. The TCB must insure that
in allowing a process to receive the message, the process
does not violate the applicable protection policy.
TCB.4.1.2 Input/Output
Depending on the sophistication of the TCB, I/O operations
may range from forcing the user to take care of low level
control all the way to hiding from the user all device
dependencies, essentially by presenting I/O devices as
simple storage objects, such as described below. Where I/O
details cannot be entirely hidden from the user, one could
classify I/O devices as devices that can only manipulate
data objects with a common protection attribute at one time
(such as a line printer), and those that can manage data
objects representing many different protection attributes
simultaneously (such as disk storage devices). These two
categories can be even further broken down into devices that
can read or write any location in memory and those that can
only access specific areas. These categories present
special threats, but in all cases the completeness criteria
must apply, requiring that the TCB mediate the movement of
data from one place to another, that is, from one object to
another. To resolve this problem, all I/O operations should
be mediated by the TCB.
Some computer architectures only allow software running in
the most privileged mode to execute instructions directing
I/O. As a result, if only the TCB can assume privileged
mode, TCB mediation of I/O is more easily implemented.
In the first category, if access to the device can be
controlled merely by restricting access to the memory object
which the device uses, the problem becomes how to properly
assign the associated memory to a user's process, and no
special TCB I/O functions are necessary. However, if
special timing requirements must be met to adequately
complete an I/O operation, quick response times may only be
possible by having the TCB service the device, in which case
special TCB I/O functions are needed.

When the device can contain objects having different
protection attributes, the entire I/O operation will involve
not only a memory object, but also a particular object on
the device having the requisite protection attributes. TCB
mediation in such a case is discussed under "Storage
Objects."
TCB.4.1.2.1 Access Device
The access device function is a directive to the TCB to
perform an I/O operation on a given device with specified
data. The operations performed will depend on the device:
terminals will require read and write operations at a
minimum. The TCB would determine if the protection
attributes of the requesting process allow it to reference
the device in the manner requested.
This kind of operation will only be necessary when mapped
I/O is not possible.
TCB.4.1.2.2 Map Device
The map device operation makes the memory and control
associated with a device correspond to an area in the
process' address space. As in the case of the "access
device" function, a process must have protection attributes
commensurate with that of the information allowed on the
device to successfully execute this operation. This
operation may not be possible if mapped I/O is not available
in the hardware.
TCB.4.1.2.3 Unmap Device
The unmap device function frees a device mapped in the
address space of a process.
TCB.4.1.3 Storage Objects
The term "storage objects" refers to the various logical
storage areas into which data is read and written, that is,
areas that are recognized as objects by the PCB. Such
objects may take the form of logical files or merely
recognizable units of a file such as a fixed-length block.
These objects may ultimately reside on a long-term storage
device, or only exist during the lifetime of the process, as
required. Where long-term devices have information with
varied protection attributes, as discussed in the previous
section, TCB mediation results in virtualizing the device
into recognizable objects each of which may take on ?
different protection attributes. The operations on storage
'objeCtS'indlude-dteatiOn-, and-the-direct.access-----
involved in reading and writing.
TCB.4.1.3.1 Create Object
The create object function allocates a new storage object.
Physical space may or may not be allocated, but if so, the
amount of space actually allocated may be a system default
value or specified at the time of creation.
As mentioned above, naming conventions for storage objects
such as files may open an undesirable indirect channel. If
the names are (unambiguously) user-defined or randomly
generated by the TCB, the channel can be reduced.
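In modern terms, the randomly generated alternative might look like
the few lines below (the name length and the helper function are
invented; any unpredictable, collision-checked generator would do):

    # Random object names: a new name reveals nothing about how many
    # objects other users have created, unlike an incremented counter.
    import secrets

    existing = set()

    def new_object_name():
        while True:
            name = secrets.token_hex(8)   # unpredictable, not sequential
            if name not in existing:      # still must be unique
                existing.add(name)
                return name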
TCB.4.1.3.2 Delete Object
The delete object function removes an object from the system
and expunges the information and any space associated with
it. The TCB first must verify that the protection
attributes of the process and object allow the object to be
deleted. Indirect channels in this case are similar to
those for "delete process." The fact of the success or
failure of the operation may cause undesirable information
leakage.
TCB.4.1.3.3 Fetch Object
The fetch object function makes any data written in the
object available to the calling process. The TCB must
determine first if the protection attributes of the object
allow it to be accessed by the process. This function may
be implemented primarily in hardware, by mapping the
physical address of the object into a virtual address of the
caller, or in software by copying the data in the object
into a region of the caller's address space.
TCB.4.1.3.4 Store Object
The store object function removes the object from the active
environment of the calling process. If the object is mapped
into the caller's virtual space, this function will include
an unmap.
TCB.4.1.3.5 Change Object Protection Attributes
A protection policy may dictate that subjects may change
some or all of the protection attributes of objects they can
access. Alternatively, only trusted subjects might be
allowed to change certain attributes. The TCB should
determine if such a change is permitted within the limits of
the protection policy.
TCB.4.2 User Interface Functions
The TCB software interface functions address the operations
executable by arbitrary user or applications software. The
user interface functions, on the other hand, include those
operations that should be directly invokable by users. By
localizing the security-critical functions in a TCB for
verification, it becomes unnecessary for the remaining
software running in the system to be verified before the
system can be trusted to enforce a protection policy. Most
applications software should be able to run securely by
merely taking advantage of TCB software interface
facilities. Applications may enforce their own protection
requirements in addition to those of the TCB, e.g., a data
base management system may require very small files be
controlled, where the granularity of the files is too small
to be feasibly protected by the TCB. In such a case, the
application would still rely on the basic protection
environment provided by the TCB. When users need
capabilities beyond that normally provided to general
applications, such as the ability to change the owner of a
file object, direct contact with the TCB is required.

In kernel-based systems, the user interface functions are
commonly implemented as trusted processes. Moreover, these
trusted processes rely on the equivalent of the software
interface functions for support.
These functions fall into three categories: user services,
operations and maintenance, and administration.
TCB.4.2.1 User Services
Certain operations may be available to users as part of a
standard set of functions a user may wish to perform. Three
are of interest here: authentication of the user to the
system and of the system to the user, modification of
protection attributes, and special I/O.
TCB.4.2.1.1 Authentication
The act of "logging in", of identifying oneself to the
system and confirming that the system is ready to act on the
behalf of the requester, is critical to the protection
mechanisms, since all operations and data accesses that
subsequently occur will be done in the name of this user.
Consequently, identification and authentication mechanisms
that play a part in validating a user to the system should
be carefully designed and implemented as part of the TCB.
Likewise, the system must have some way of alerting the user
when the TCB is in command of terminal communications,
rather than untrusted software merely mimicking the TCB.
For example, the TCB might signal to the user in a way that
non-TCB software could not, or a special terminal button
could be reserved for users to force the attention of the
TCB, to the exclusion of all other processes.
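The reserved-button alternative is easy to state in code. In the
sketch below (the key code, class, and messages are invented), the
interrupt handler runs inside the TCB, so untrusted software never
sees the secure-attention key and cannot forge the login dialogue
that follows:

    # Trusted path via a reserved "secure attention" key.
    SECURE_ATTENTION = 0x07       # illustrative reserved key code

    class TerminalLines:
        def __init__(self):
            self.owner = {}       # terminal -> software holding the line

        def interrupt(self, terminal, keycode):
            if keycode == SECURE_ATTENTION:
                # Exclude all other processes; only the TCB's own
                # authentication dialogue may now use this terminal.
                self.owner[terminal] = "TCB-login"
                print("%s: TCB holds the line; begin login" % terminal)
            else:
                # Ordinary keystrokes go to the current holder.
                print("%s: key 0x%02x -> %s"
                      % (terminal, keycode, self.owner.get(terminal)))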
TCB.4.2.1.2 Access Modification
Access modification functions allow a user to securely
redefine the protection attributes of objects he/she
controls, particularly in the case of discretionary policy.
Also included here are operations that allow a user to
select the protection attributes to be assumed while using
the system, where the attributes may take on a range of
values. For example, a user with a security level of Top
Secret may choose temporarily to operate as if Unclassified
in order to update bowling scores.
Many factors must be considered in implementing such an
operation, particularly if implemented in a process. The
user must have some way of convincing himself that the
object for which the protection attributes are being changed
is indeed what is intended. For instance, the user might be
allowed to view a file to confirm its contents before
changing its security level. Another issue involves the
synchronization problem resulting from other processes
possibly accessing the object at the instant the access
modification is attempted. The TCB should prevent such a
change from occurring unless the object were "locked," or
temporarily made inaccessible to other processes, until the
operation was complete; access by the other processes should
also be re-evaluated on completion.
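The locking discipline just described reduces to a small amount of
code. The sketch below (the names and the re-evaluation callback are
invented) makes the object inaccessible while its level changes and
then re-checks every process that had it open:

    # Lock an object during access modification; re-evaluate holders.
    import threading

    class GuardedObject:
        def __init__(self, level):
            self.level = level
            self.lock = threading.Lock()
            self.holders = set()          # processes with current access

        def change_level(self, new_level, still_permitted):
            with self.lock:               # no access during the change
                self.level = new_level
                # Drop any holder whose access is no longer permitted.
                self.holders = {p for p in self.holders
                                if still_permitted(p, self.level)}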
TCB.4.2.1.3 Special I/O
I/O functions not covered in the software interface
functions due to their specialized nature are: (a) network
communications, and (b) spooling, e.g. to a line printer or
mailer. The ramifications of both of these areas are too
extensive to adequately cover here. The reader is referred
to [KSOS78].
TCB.4.2.2 Operations/Maintenance
In the operations and maintenance category fall those
functions that would normally be performed by special users,
the system operators, in running and maintaining the system.
Examples of such operations are system startup and shutdown,
backup and restore of long-term storage, system-wide
diagnostics, and system generation.
TCB.4.2.2.1 Startup/Shutdown
The security model discussed above assumes that an
initial secure state is attained and that subsequent
operations on the system obey the protection policy and do
not affect the security of the system. This characteristic
of a TCB can be said to be true regardless of the protection
policy and security model employed. A "startup", or
bootstrap, operation addresses the initialization of the
system and the establishment of the protection environment
upon which subsequent operations are based. The model
itself, or the formal specifications of a specific design,
can address what the characteristics of all secure states
are, and hence the requirements for the initial secure
state. Consequently, programs that create this state can be
well-defined. Since it is the operator who must execute the
necessary procedures that initialize the system, TCB
functions interfacing the operator must be trusted to do
what the operator specifies.
Shutdown procedures are equally crucial in that an arbitrary
suspension of system activities could easily leave the
system in an incomplete state, making it difficult to resume
securely (for instance, if only half of an updated password
file is moved back to disk). One must, for instance, write
all memory-resident tables out to disk, where necessary.
TCB.4.2.2.2 Backup/Restore
To allow for recovery from unpredictable hardware failure,
and consequently the arbitrary suspension mentioned above,
"checkpoints" may be taken of a given state of the storage
system, for instance, by copying all files from disk to some
other medium, such as magnetic tape. In the event of system
failure, the state of files at some earlier time can be
recovered. The backup function must operate on the system
in a consistent state, and accurately reflect that state;
the restore function must reliably rebuild from the last
completely consistent record it has of a secure state. Note
that the backup system requires an especially high level of
trust since it stores protection attributes as well as data.
TCB.4.2.2.3 Diagnostics
Diagnostics of both hardware and software integrity can
thwart potentially harmful situations. In particular,
hardware diagnostics attempt to signal when problems arise,
or, when something has already gone wrong, they try to aid
the technician in pinpointing where the problem is.
Diagnostics written in software typically access all areas
of memory and devices, and consequently, if run during
normal operation of the rest of the system, require tight
TCB controls. If possible, they should be relegated to user
programs and limited to specific access spaces during the
course of their operation. However, in such a case it would
be impossible to test the security-critical hardware, such
as descriptor registers if present. Such software, for
on-line diagnosis, must be included in the TCB, and limited
to operator use.
TCB.4.2.2.4 System Generation
System generation deals with creating the program modules in
executable form that can subsequently be loaded during
system startup. It is included here for completeness,
although there is no intention to require that editors,
compilers, loaders, and so forth, be verified to correctly
produce the code that is later verified correct. Correct
system generation is an area that is clearly vulnerable, and
procedures must be established to ensure that the master
source is not intentionally corrupted.
TCB.4.2.3 Administration
The administration and overall management of a system, both
in terms of daily operations and security operations, may be
relegated to a user, or users, other than the system
operator. Functions in support of system administration
include, but are not limited to, updating data bases of users
and their valid protection attributes, and audit and
surveillance of protection violations.
TCB.4.2.3.1 User Data Base Updates
A typical user data base would contain at a minimum the
names of valid users, their authentication data (e.g.,
password, voice print, fingerprints), and information
relating to the protection attributes each user may take on
while using the system. TCB functions must be available to
an administrator to allow updates to the data base in such a
way that the new information is faithfully represented to
the user authentication mechanism.
TCB.4.2.3.2 Audit and Surveillance
Audit facilities capture and securely record significant
events in the system, including potential protection
violations, and provide functions to access and review the
data. Surveillance facilities allow for real-time
inspection of system activities. Audit and surveillance
mechanisms provide an additional layer of protection. They
should be implemented as part of a TCB not only because they
require access to all activities on the system as they
occur, but also since if they are not themselves verified to
be correct and complete, flagrant violations might go
undetected.
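A minimal shape for such a facility is sketched below (the record
fields, names, and administrator test are invented). The essential
points are that the records live inside the TCB, that recording
cannot be bypassed by untrusted software, and that review is itself
an audited, protected operation:

    # Skeletal TCB audit log.
    import time

    class AuditLog:
        def __init__(self):
            self._records = []            # reachable only within the TCB

        def record(self, user, event, outcome):
            self._records.append({
                "time": time.time(),
                "user": user,
                "event": event,           # e.g., "login", "read", "denied"
                "outcome": outcome,
            })

        def review(self, requester, is_administrator):
            """Return the records only to an administrator; log the
            attempt either way."""
            if not is_administrator(requester):
                self.record(requester, "audit-review", "denied")
                return []
            self.record(requester, "audit-review", "granted")
            return list(self._records)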
(End of the TCB extract.)
3.3 TECHNOLOGY TRANSFER ACTIVITIES
Once the requirements have been defined, a second
significant prerequisite to achieving the widespread
availability of commercial trusted systems is the transfer
of the computer security technology, developed largely under
the auspices of the DoD, to industry. This technology,
although available (in part) in the open literature, had
never been presented in a cohesive and detailed manner. To
stimulate technology transfer, the Initiative sponsors a
series of computer security seminars aimed at Consortium
members, the computer industry, and general computer users.
Consortium members are also actively involved in nation-wide
conferences and workshops addressing computer security and
computer systems in general. Descriptions of some of these
activities follow in chronological order.
3.3.1 The MITRE Computer Security Workshop
During the week of 29 January to 2 February 1979, MITRE
Corporation personnel conducted a computer security workshop
for DoD personnel. The workshop involved eight general
lectures, five different technical workshop groups, and four
guest lecturers.
The goal of the workshop was to bring together for the first
time all of the technology, background, and experience
necessary for a DoD program manager to understand the
state-of-the-art in computer internal security controls
(e.g., the security kernel), but the material included
traditional concepts (e.g., periods processing with color
changes) as well.
There were 53 registered attendees from DoD and related
'agencies, including one person from the Canadian Department
of National Defense. Among the agencies and services
represented were: NSA, DCA, DIA, WSEG, DoDCI, USAF, ESD,
RADC, SAC, DSD, DP6, DARCOM, NESC, NOSC, USMC, and CINCPAC.
The general lectures were presented on the following topics:
Introduction, History, and Background
Operating Systems and Security Kernel Organization
Mathematics of Secure Systems
Specification of Verifiably-Secure Systems
Secure Computer System Developments (KSOS, SCOMP,
KVM/370)
Design and Verification of Secure Systems
Secure Systems: Experience, Certification, and
Procurement
Secure Systems: Present and Future
The guest lecturers were:
Mr. Stephen T. Walker
Staff Assistant, OUSDRE/C3I
"DOD Computer Security Initiative"
Mr. Steven B. Lipner
ADH, MITRE Corporation
"The Evolution of Computer Security Technology"
Mr. Clark Weissman
Chief Technologist, System Development Corporation
"System Security Analysis/Certification: Methodology
and Results"
Prof. Jerome Saltzer
Professor of Computer Science and Engineering, MIT
"Security and Changing Technology"
Technical Workshops were given on the following topics:
Basic Principles
Security Kernel/Non-Kernel Security-Related Software
Design
Secure System Verification
Secure Computer Environment
Capability Architecture
3.3.2 1979 National Computer Conference
On June 4-7, 1979, the 1979 National Computer Conference was
held. An entire session of the conference was devoted to
the Initiative. Seven technical papers were prepared for
the session, which was chaired by Mr. Stephen T. Walker,
OUSDRE/C3I. The papers appear in the proceedings of the
conference. The papers, and their authors, are as follows:
"Applications for. Multilevel Secure Operating .Systems,"
John P. L. Woodward, The MITRE Corporation.
"The Foundations of a Provably Secure Operating System
(PSOS)", Richard J. Feiertag and Peter G.- Neumann., SRI
International.
"A,Security Retrofit of VM/370," B. b. Gold,
R. R. Linde, R. J. Peeler, M. Schaefer, J. F. Scheid,
and P. D. Ward,. System Development Corporation..
"KSOS - The Design of a Secure Operating System,"
E. 3. McCauley and P. J. DrongOwski, Ford Aerospace and
Communications Corporation.
"UCLA Secure UNIX," Gerald J. Popek, t1ark Kampe,
Charles. S. Kline, Allen Stoughton, Michael Urban, and
Evelyn J. Walton, UCLA.
"KSOS - Development Methodology for a Secure Operating
System," T. A. Berson and G. L. Barksdale, Jr., Ford
Aerospace and Communications Corporation.
"KSOS - Computer Network Applications,"
M. A. Padlipsky, K. J. Biba, and R. B. Neely, Ford
Aerospace and Communications Corporation.
3.3.3 1979 Summer Study on Air Force Computer Security
During June and July 1979, the Air Force Office of
Scientific Research sponsored a summer study on Air Force
computer security issues. The Initiative provided extensive
support and assistance to the study. Following is a
significant portion of the Executive Summary written by
Dr. J. Barton DeWolf and Paul A. Szulewski, editors of the
study report [DEWO79].
The study was held at the Charles Stark Draper Laboratory,
Inc. (CSDL), with some sessions at Hanscom Air Force Base
in Bedford, MA, and at the MITRE Corporation in Bedford, MA.
The objectives of the study were to evaluate current
research and development in relation to Air Force
requirements for multilevel secure computer systems, to
identify critical research issues, and to provide guidance
and recommendations for future research and development
emphasis. To this end, over 150 attendees representing
academic, industrial, civilian government, and military
organizations participated from June 18 through July 13 in
an intensive technology review and evaluation.
The summer study was divided into the following nine
sessions, of up to 3 days each.
(1) Air Force Computer Security Requirements.
(2) Security Working Group Meeting.
(3) Trusted Operating Systems.
(4) Verification Technology.
(5) Secure Data-Base Management.
(6) Secure Systems Evaluation.
(7) Secure Distributed Systems and Applications.
(8) Air Force Computer Security Policy.
(9) Summary and Research Recommendations.
Although all the sessions shared a common format, each
individual session chairperson was responsible for the
specific form and content of his or her session.
Participants, in general, prepared only slides to supplement
their oral presentations. Typically, each session began
with short presentations by each of the participants, which
served to provide an overview of the technology and to
stimulate ideas and discussion. The presentations were
followed by discussion periods, in which questions of
interest were addressed. Certain participants knowledgeable
in the pertinent areas of computer security under discussion
were selected to summarize the sessions in detail. These
session summaries form the body of this report. The
remainder of this executive summary highlights key findings
and recommendations.
In the keynote presentation on the opening day, Major
General Robert Herres described the multilevel security
problem as a dilemma:
"...on the one hand, we must maintain the
security and integrity of our sensitive
information, but on the other hand, we must be
able to respond quickly to rapidly changing
situations, especially during times of crisis or
war. And this means we must process and
distribute information rapidly among many people
at different levels of command, and possessing a
variety of clearances and 'needs to know'.

"We cannot let security considerations throttle
our operational responsiveness, but we also
cannot jeopardize sources of intelligence
information, war plans, actions, or sensitive
information by having some unknown hole in our
security which could be exploited by some
individual or group, quite undetectably."
The Requirements Session emphasized the need for solutions
to problems arising from the sharing of sensitive
information in computer systems. Presentations were made by
representatives of the Defense Communications Agency (WWMCCS
program), ESD (OASIS program), the Military Airlift Command,
the Air Force Weapons Laboratory, the Defense Mapping
Agency, and the Rome Air Development Center (EAIS program;
other tactical programs). In general, it was found that
requirements had not changed significantly from those
reported in the 1972 study, but that the trend towards
distributed processing and computer networks was adding a
new dimension of urgency and complexity to the problem. The
presentations described current modes of processing
classified information as combinations of dedicated, system
high, and periods processing. These modes entail numerous
inefficiencies which include the following:
(1) Waste of computer resources.
(2) Overclassification of information.
(3) Overclearing of users.
(4) Excessive number of clearances.
(5) Duplication of hardware and software.
(6) Reliance on cumbersome, costly, and time-consuming
manual procedures for review and declassification of
information.
There was also widespread concern regarding the cost of
converting or adapting existing software and data for use
with new hardware or operating systems, though efficiency
gains resulting from the use of multilevel secure systems
would tend to offset the conversion costs. The impact
(including the cost impact) of computer security
requirements on the accomplishment of Air Force mission
objectives has not been fully analyzed.
The Working Group Session discussed topics which were
covered in greater detail in the other sessions; therefore,
a separate summary is not included herein.
The Trusted Operating Systems Session brought together a
panel of 12 practitioners--persons actively involved in the
design and development of trusted systems--to discuss their
experiences and views on system architecture, hardware
support, and development methodologies. Most recent trusted
system development activity has followed the kernelized
operating system approach recommended by the 1972 ESD
planning study. In this approach, software specifications
for the security management portion of the operating system
(i.e., the kernel) are proven to be in conformance with a
mathematical model of the security policy. This approach
has been successful in producing several prototype
implementations of trusted operating systems, with a number
of production versions nearing completion. However,
opportunities to develop applications programs on these
systems have been very limited, and experience is badly
needed. In the past, operating system penetration studies
have been useful in demonstrating protection mechanism
weaknesses, and future studies will be needed on the new
generation of trusted systems. In general, panel members
felt that such studies would show them to be far more secure
than their predecessors. With respect to hardware support
for trusted systems, the panel felt that, although the
situation has been improving, several areas were in need of
research emphasis. These included the following.
(1) Hardware-mapped input/output (I/O).
(2) Protection of logical objects.
(3) Unified device interfaces.
(4) Multiple domains (more than two).
(5) Fast domain switching.
The Verification Technology Session served to emphasize the
essential role that formal specification and verification
have played in the development of trusted systems. As
mentioned previously, formal verification or proof
techniques have been used to show the correspondence between
the kernel specifications and the mathematical security
model. Current specification and verification approaches
and tools are limited in capability, however, and (for the
most part) have not been used to show the correspondence
between the code and the specifications. Furthermore,
current verification systems are usable only by a small
community of educated designers; and there is a need both to
make the tools easier to use and to enlarge the user
community. Despite these limitations, verification
technology has matured to the point where it is desirable to
attempt verification through the code level on limited-scale
real applications, such as clear-core routines and labeling
utilities. It is also desirable to develop methods to
verify that firmware and hardware have been implemented in
accordance with their specifications. One of the highlights
of this session was an on-line demonstration of the Gypsy
and AFFIRM verification systems.
The Secure Data-Base Management (DBM) Session dealt with a
challenging applications area in need of future research and
development emphasis. Security technology for the DBM
problem is still in its infancy. To date, the limited
experience in the application of trusted operating system
technology to DBM issues suggests that several problems need
attention. A critical issue is whether current mathematical
security models are adequate for multilevel data bases.
Data-base constructs not well addressed in current models
include multilevel objects and multilevel relations,
aggregation and inference, and data objects whose
sensitivity is content-dependent. Also, the support provided
in some trusted operating system designs may not be adequate
for DBM applications. To be useful for DBM, the operating
system should support access control on finer granularity
data objects than most current systems support (e.g.,
files). The user interface to the data base is another area
of concern.
The Secure System Evaluation Session addressed the need to
establish a DoD secure system approval process--a critical
element of the computer security initiative recently
undertaken within the DoD. The session focused on
(1) The technical evaluation and categorization of
trusted systems.
(2) The characteristics of threat environments and
applications.
Seven levels of protection were proposed for evaluating
trusted systems. The threat environment was characterized
in terms of processor coupling, user/data exposure,
developer/user trust, and user capability. The session
provided evidence that a workable evaluation process could
be established, and that a consensus could be reached
matching threat environments with a desired level of
protection. A key assumption throughout the session was
that limiting the user's capability (e.g., use of function
keys, transaction processing in a nonprogramming
environment) significantly reduces the security risk. Since
the security requirements of such systems are not well
understood, this is an area recommended for future research.
The Secure Distributed Systems and Applications Session
discussed approaches to providing multilevel secure computer
network services. The presentations included discussion of
SACDIN, the KSOS network connection, the military message
experiment, and several other systems. It appears that the
trend towards distributed systems can benefit system
effectiveness, but it exposes information to additional
security risks, such as vulnerable communication channels
and incorrect user authentication.
Some approaches are emerging for the use of encryption in
secure networks, but more work is needed in this area. Key
areas for future research include the following.
(1) Design methodologies.
(2) Policy issues.
(3) Communications protocols for multilevel secure
networks.
The Air Force Computer Security Policy Session dealt with current
DoD policy as set forth in DODD 5200.28, as implemented in
Air Force Regulation 300-8. Current computer security
policy often inhibits the operational capability of
Automatic Data Processing (ADP) systems, as was emphasized
during the session on requirements. The problem will be
alleviated as multilevel secure computing systems become
widely available.
current policy more adequately took into account the degree
of risk in various operational environments. Low-risk
environments could then utilize less costly rules and
procedures. As was pointed out on several occasions during
the summer study, current policy needs to be extended to
cover other data-processing issues: fraud, privacy, data
integrity, declassification, aggregation, sanitization, and
denial of service. An informal statement of current policy
on these issues would assist the development of formal
mathematical models.
To summarize, in the last few years, the field of computer
security has made significant progress towards the goal of
trusted computing systems for multilevel-secure
applications. The following research and development goals
were generated by the group in the final session.
(1) Continued support for ongoing trusted operating
system projects (e.g., KSOS, KVM/370).

(2) Increased support for future applications to be
hosted on these systems.

(3) Research to improve hardware support for trusted
operating systems with emphasis on hardware-mapped
I/O, protection of logical objects, unified device
interfaces, multiple domains, and fast domain
switching.

(4) Verification-methodology research with a focus on
practical, real applications of limited scale.

(5) Research to improve the hardware support for
verification and to improve verification system
support tools.

(6) Research to develop methods to verify that hardware
and firmware have been implemented in accordance
with their specifications.

(7) Research to identify trusted operating system
enhancements needed to support data-base management
applications.

(8) Improved technology transfer between academic,
industrial, civilian government, and military
domains in the computer security field.
(9) Standardization of terms for the ADP security
community.

(10) Research to define the security requirements of
limited-capability systems.

(11) Research to concentrate on design methodologies,
policy issues, and communications protocols for
trusted distributed processing architectures.

(12) Development of approaches to detect and control
indirect communication channels (timing channels).

(13) Continued research on encryption approaches and
their relation to kernel technology and capability
architectures.

(14) Research to extend formal (mathematical) policy
models to cover the problems of fraud, privacy,
multilevel data bases, data integrity,
declassification, aggregation, sanitization, and
denial of service.

(15) Development of methods to evaluate security risk in
ADP systems in terms of threat identification and
quantification of loss.
The Air Force needs multilevel secure systems. The
technology is at hand. An active and ongoing research and
development program is needed to make the technology widely
available and useful over a broad range of applications.
(End of Summer Study executive summary.)
3.3.4 July 1979 Industry Seminar
On 17 and 18 July 1979, the Initiative conducted its first
industry technical seminar at the National Bureau of
Standards in Gaithersburg, Maryland. The 280 attendees were
drawn almost equally from computer manufacturers, system
houses, and government agencies. The objective of this
seminar was to acquaint computer system developers and users
with the status of the development of "trusted" ADP systems
within the DoD and the current planning for the evaluation of
the integrity of commercial implementations of these
systems. The seminar presented an overview of a number of
topics essential to the development of "trusted" ADP
systems. Much of the material presented was of a technical
nature intended for computer system designers and software
system engineers. However, the sophisticated computer user
in the Federal government and in private industry should
have found the seminar useful in understanding security
characteristics of future systems.
July 17, 1979
PROGRAM
SEMINAR ON
DEPARTMENT OF DEFENSE
COMPUTER SECURITY INITIATIVE
... to achieve the widespread availability
of trusted computer systems ...
8:30 am Registration at National Bureau of Standards
9:15 Opening Remarks - James H. Burrows, Director
Institute for Computer Sciences and
Technology
National Bureau of Standards
9:30 Keynote Address - "Computer Security Requirements in the
DoD"
Honorable Gerald P. Dinneen
Assistant Secretary of Defense for
Communications, Command, Control
and Intelligence
10:00 - "Computer Security Requirements Beyond
the DoD"
Dr. Willis Ware
Rand Corporation
10:30 Coffee Break
10:45 - DoD Computer Security Initiative Program
Background and Perspective
Stephen T. Walker
Chairman, DoD Computer Security
Technical Consortium

11:30 - Protection of Operating Systems
Edmund Burke
MITRE Corporation

1:00 pm Lunch

2:00 - Kernel Design Methodology
LtCol Roger Schell, USAF
Naval Post Graduate School

3:15 Break

3:30 - Formal Specification and Verification
Peter Tasker
MITRE Corporation

4:30 Adjourn
FIGURE 3-4
July 18, 1979
9:00 am
Secure System Developments
Kernelized Secure Operating System
(KSOS)
Dr. E. J. McCauley
Ford Aerospace and Communications
Corporation
Kernelized VM-370 Operating System
(KVM)
Marvin Schaefer
System Development Corporation
11:00
Coffee Break
11:15
- Secure Communications Processor
Matti Kert
Honeywell Corporation
12:00
- Secure System Applications
John P. L. Woodward
MITRE Corporation
1:00 pm
Lunch
2:00
- DoD Computer Security Initiative
Stephen T. Walker
3:30
Adjourn
FIGURE 3-4 CONCLUDED
Figure 3-4 shows the program for the seminar.
3.3.5 January 1980 Industry Seminar
On 15-17 January 1980, the second Initiative-sponsored
industry seminar was held at the National Bureau of
Standards. This seminar was a key part of the task to
transfer the computer security technology to industry
(especially the computer manufacturers) and to those who
will be using and buying trusted computers. There were 300
attendees from industry and government along with about 30
participants.

The seminar was organized into three sessions: a general
introductory and keynote session on 15 January; a policy and
requirements session on 16 and 17 January; and a parallel
session on Trusted Computing Base (TCB) design on 16 and 17
January. The first of the two parallel sessions provided
in-depth discussions of policy issues as they apply to
multilevel secure computer systems, an analysis of
applications of such systems within the DoD and beyond, and
a presentation of the Trusted Computing Base concept. The
TCB session, intended for operating system developers and
sophisticated computer science technical experts, provided a
detailed analysis of the Trusted Computing Base concept,
which is the emerging generalized basis upon which high
integrity operating systems may be evaluated, followed by
discussions by the principal designers of the major DoD
trusted system developments relating their systems to the
TCB concept.

Figure 3-5 is a copy of the seminar program.
3.3.6 November 1980 Industry Seminar
On 18-20 November 1980, the Initiative conducted its third
industry seminar at the National Bureau of Standards. This
was the latest in the series of seminars to acquaint
computer system developers and users with the status of
trusted ADP system developments and evaluation. There were
380 people registered for the seminar.

The first day of the seminar included an update on the
status of the Initiative and presentations by five computer
manufacturers on the trusted system development activities
within their organizations. Following these presentations
was a panel discussion on "How can the government and the
computer industry solve the computer security problem?"

The second day of the seminar opened with a discussion of
the technical evaluation criteria that have been proposed as
a basis for determining the relative merits of computer
systems. The discussion of the assurance aspects of those
PROGRAM

Second Seminar on the Department of Defense Computer Security Initiative
National Bureau of Standards
Gaithersburg, Maryland

January 15, 1980 Red Auditorium

9:30 am "The Impact of Computer Security in the Intelligence
Community"
Dr. John Koehler
Deputy Director for Central Intelligence for
Resource Management

"The Impact of Computer Security in the Department
of Defense"
Dr. Irwin Lebow
Chief Scientist
Defense Communications Agency

"The Impact of Computer Security in the Federal
Government"
Mr. James Burrows
Director, Institute for Computer Science and
Technology
National Bureau of Standards

BREAK

"The Impact of Computer Security in the Private
Sector"
Mr. Ed Jacks
General Motors Corporation

1:00 pm "Status of the DoD Computer Security Initiative"
Mr. Stephen T. Walker
Chairman, DoD Computer Security Technical
Consortium
FIGURE 3-5
January 15, 1980
(Continued)

2:00 pm "Computer Security Impacts on Near Term Systems"
Mr. Clark Weissman
System Development Corporation

"Computer Security Impacts on Future System
Architectures"
Mr. Ed Burke
MITRE Corporation

BREAK

A "discussion" of what the computer manufacturers
would like/should expect to hear from government
users about trusted computer systems

Dr. Theodore M. P. Lee
UNIVAC Corporation

Mr. James P. Anderson
James P. Anderson Company

4:30 pm ADJOURN
January 16-17, 1980 TWO PARALLEL SESSIONS
SESSION I
January 16, 198,0
9:;,..5 am
Gneral Session - Red Auditorium
"Policy Issues Relating to Computer Security"
Session Chairman: Robert Campbell
Advanced Information Management, Inc.
Mr. Cecil Phillips
Chairman, Computer Security Subcommittee
DCI Security Committee
Mr. Eugene Epperly
Counterintelligence & Security Policy Directorate
Office of the Secretary of Defense
Pentagon
Mr. Robert Campbell
Advanced Information Management, Inc.
Mr. Phillip R. Manuel
Phillip R. Manuel and Associates
Dr. Stockton Gaines
RAND Corporation
1:00 pm LUNCH
FIGURE 3-5 CONTINUED
January 16, 1980
(Continued)
2:00 pm "User Requirements and Applications"
Session Chairman: Dr. Stockton Gaines
RAND Corporation
Mr. Larry Bernosky
WWMCCS System Engineering Office
LtCol Cerny
Federal Republic of Germany Air Force
BREAK
Dr. Tom Berson
SYTEK Corporation
Mr. Mervyn Stuckey
U.S. Department of Housing and Urban Development
4:00 pm ADJOURN
January 17, 1980
SESSION I
9:15 am "User Requirements and. Applications" (continued)
Dr. Von Der Brueck
IABG, Germany
Mr. John Rehbehn
Social Security Administration
Mr. William Nugent
Library of Congress
Mr. Howard Crumb
Federal Reserve Bank of New York
BREAK
"Trusted Computing Base Concepts"
Mr. Peter Tasker
MITRE Corporation
1:00 pm LUNCH
2:00 pm GENERAL DISCUSSION and WRAPUP
Mr. Stephen T. Walker
FIGURE 3-5 CONTINUED
PROGRAM

November 18, 1980  Red Auditorium
9:15 Opening Remarks
Seymour Jeffries
Institute for Computer Sciences & Technology
National Bureau of Standards
DOD Computer Security Initiative
Stephen T. Walker, Chairman
DOD Computer Security Technical Consortium
INDUSTRY TRUSTED SYSTEM ACTIVITIES
Paul A. Karger
Digital Equipment Corporation
10:45 Break
11:00 INDUSTRY TRUSTED SYSTEM ACTIVITIES - Continued
Irma Wyman
Honeywell
Viktors Berstis
IBM
Jay Jonekait
TYMSHARE, Inc.
Theodore M. P. Lee
Sperry-Univac
1:00 Lunch
PANEL:.--"How-CantheGoverment-and"the"Corriputer?'
Industry Solve the Couputer Security Problem?"
Theodore M. P. Lee, Sperry-Univac
James P. Anderson, Consultant
William Eisner, Central Intelligence Agency
Steven B. Lipner, Mitre Corporation
Marvin Schaefer, System Development Corporation
3:00 Break
3:15 PANEL - Continued
4:30 Adjourn
FIGURE 3-6
November 19, 1980  Red Auditorium

9:00 "Quality Assurance and Evaluation Criteria"
Grace H. Nibaldi
Mitre Corporation
9:50 "Specification and Verification Overview"
William F. Wilson
Mitre Corporation
10:45 Break
SPECIFICATION AND VERIFICATION SYSTEMS
11:00 "FDM: A Formal Methodology for Software Development"
Richard Kemmerer
System Development Corporation
12:00 "Building Verified Systems with Gypsy"
Donald I. Good
University of Texas
1:00 Lunch
SPECIFICATION AND VERIFICATION SYSTEMS - Continued
2:00 "An Informal View of EDM's Computational Model"
Karl N. Levitt,
SRI International
3:00 Break
3:15 "AFFIRM: A Specification and Verification System"
Susan L. Gerhart
USC Information Sciences Institute
4:15 Adjourn
FIGURE 3-6 CONTINUED
November 20, 1980  Red Auditorium

9:00 "An Overview of Software Testing"
Mary Jo Reece
Mitre Corporation
THE EXPERIENCES OF TRUSTED SYSTEM DEVELOPERS
"Update:on KSOS"
J-bhmNagibn.
Ford Aerospace and Communications Corporation
10:45 Break
11:00 "KVM/370"
Marvin Schaefer
System Development Corporation
12:00 "Ketnelized Secure Operating System (KSOS-6)"
Charles H.. BonneaU
Honeywell
1:00 Lunch
2:00 PANEL: "Where Would You Put Your Assurance ?Dollars?"
Panelists: Developers, Researchers, & Testers
3:00 Break
3:15 PANEL - Continued
4:15 Adjourn
FIGURE 3-6 CONCLUDED
3.4 TRUSTED COMPUTER SYSTEM EVALUATION
This section proposes an evaluation process by which trusted
computer system developments may be reviewed and evaluated
under the Initiative. The result of applying the process
will be a list of products that have undergone evaluation
and thus are eligible for use in applications requiring a
trusted system. This list of systems has been designated an
evaluated products list (see section 3.1). Trotter and
Tasker have documented the proposed evaluation process for
trusted computer systems [TROT80]. This section contains a
condensation of that paper.
There are three prime elements of the evaluation process:
the TCB provides the requirements; evaluation criteria have
been proposed and are being coordinated with industry; and a
plan has been advanced for a government-wide evaluation
center. The TCB was described in section 3.2. The
subsections below discuss the criteria and the center, and
then present the proposed evaluation process.
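
To make the process concrete, the sketch below (a hypothetical
illustration, not drawn from the report or from [TROT80]) shows what
a single entry on the resulting evaluated products list might record,
tying together the three elements; every field and name is an
assumption.

    from dataclasses import dataclass, field

    # One hypothetical entry on an evaluated products list: the
    # TCB-derived requirements, an assigned criteria level, and the
    # evaluation center's disposition.
    @dataclass
    class EvaluatedProduct:
        vendor: str
        product: str
        criteria_level: int    # one of the seven proposed protection levels
        tcb_summary: str       # how the product's TCB meets the requirements
        status: str            # e.g., "under evaluation" or "listed"
        caveats: list[str] = field(default_factory=list)

    # Example: a product part-way through the proposed process.
    entry = EvaluatedProduct("ExampleCorp", "ExampleOS/1", criteria_level=3,
                             tcb_summary="security kernel plus trusted processes",
                             status="under evaluation")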
3.4.1 Evaluation Criteria
An important requirement of an evaluation program, from the
viewpoint of both the manufacturer and the government, is
that the evaluation be consistent for all manufacturers and
all products. To achieve this, a detailed set of evaluation
criteria is needed that will allow both the protection value
of architectural features and the assurance value of
development and validation techniques to be considered. In
addition, it is necessary that the criteria be independent
of architecture so that innovation is not impeded. Three
evaluation factors have been defined, and various degrees of
rigor for each factor have been incorporated into seven
hierarchical protection levels representing both systemwide
protection and assurance that the protection is properly
implemented. The evaluation criteria address two aspects of
a system considered essential: completeness (is the policy
adequate) and verifiability (how convincingly can the system
be shown to implement the policy).
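
The sketch below makes the structure of the factors and levels
concrete. It assumes, purely for illustration, that a system's
overall level is bounded by its weakest factor; the actual level
definitions are those of [NIBA79b], and all names here are
hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FactorRatings:
        policy: int      # completeness: is the stated policy adequate?
        mechanism: int   # prevention, detection, and recovery features
        assurance: int   # rigor of development and validation evidence

    def protection_level(r: FactorRatings) -> int:
        """Assumed rule: a system rates no higher than its weakest factor."""
        return min(r.policy, r.mechanism, r.assurance)

    # A system strong on mechanism but weakly assured rates low overall.
    print(protection_level(FactorRatings(policy=5, mechanism=6, assurance=2)))  # 2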
The proposed evaluation criteria are summarized here, and
are documented in detail by Nibaldi [NIBA79b]. It should be
emphasized that these criteria are preliminary and are
undergoing review.
3.4.1.1 Factors
There are three prime evaluation factors: policy, mechanism,
and assurance. These factors are shown in figure 3-7, and
are briefly described below. They are fully described and
developed in [NIBA79b].
Policy

Mechanism
    Prevention
    Detection
    Recovery

Assurance
    Development Phases
        Design
        Implementation
    Validation Phases
        Testing
        Verification
        Operation/Maintenance

Figure 3-7. EVALUATION FACTORS
Policy

A protection policy specifies under what conditions
information stored in the computer and computer resources
might be shared, typically placing controls on the
disclosure and/or modification of information. If there is
a clear, concise statement (and hence, understanding) of the
protection policy a trusted system purports to obey, then an
evaluator of the system can better determine (through
testing or other forms of validation) if the system enforces
the stated policy. In fact, formal methods of design
verification depend on precisely stated policy "models" to
make rigorous mathematical proofs of correctness.
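
As an illustration of the kind of precisely stated policy "model" on
which such proofs rest, the sketch below encodes a lattice-style
multilevel policy in the Bell-LaPadula tradition; the markings,
names, and rules shown are assumptions for the example, not part of
the proposed criteria.

    from dataclasses import dataclass

    # Illustrative multilevel policy check; all markings and names are
    # assumptions for this example.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    @dataclass(frozen=True)
    class Label:
        level: str             # hierarchical classification
        categories: frozenset  # non-hierarchical compartments

    def dominates(a: Label, b: Label) -> bool:
        """a dominates b iff a's level is at least b's and a's categories cover b's."""
        return LEVELS[a.level] >= LEVELS[b.level] and a.categories >= b.categories

    def read_allowed(subject: Label, obj: Label) -> bool:
        # Simple security property: no "read up."
        return dominates(subject, obj)

    def write_allowed(subject: Label, obj: Label) -> bool:
        # *-property: no "write down" by untrusted subjects.
        return dominates(obj, subject)

    analyst = Label("SECRET", frozenset({"NATO"}))
    report = Label("CONFIDENTIAL", frozenset({"NATO"}))
    assert read_allowed(analyst, report) and not write_allowed(analyst, report)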
Mechanism

The mechanisms that actually enforce the protection policy
may include both hardware and software. To be effective,