DOD COMPUTER SECURITY EVALUATION CENTER

Document Number (FOIA)/ESDN (CREST): CIA-RDP83M00914R001800060009-2
Release Decision: RIFPUB
Original Classification: K
Document Page Count: 150
Document Creation Date: December 20, 2016
Sequence Number: 9
Content Type: MEMO
File: CIA-RDP83M00914R001800060009-2.pdf (8.4 MB)
Approved For Release 2007/06/01: CIA-RDP83M00914R001800060009-2
JAN 2 1981
MEMORANDUM FOR SECRETARIES OF THE MILITARY DEPARTMENTS
CHAIRMAN, JOINT CHIEFS OF STAFF
DIRECTOR, DEFENSE ADVANCED RESEARCH
PROJECTS AGENCY
DIRECTOR, DEFENSE COMMUNICATIONS AGENCY
DIRECTOR, DEFENSE INTELLIGENCE AGENCY
DIRECTOR, DEFENSE INVESTIGATIVE SERVICE
DIRECTOR, DEFENSE LOGISTICS AGENCY
DIRECTOR, DEFENSE MAPPING AGENCY
DIRECTOR, DEFENSE NUCLEAR AGENCY
DIRECTOR, NATIONAL SECURITY AGENCY
DIRECTOR, WWMCCS SYSTEM ENGINEERING
SUBJECT: DOD Computer Security Evaluation Center
Although your comments in response to Dr. Dinneen's
memorandum of November 13 indicate some concern about working
relationships within the proposed Evaluation Center, there
is no disagreement or doubt regarding the need. Therefore,
the proposal made by the Director, National Security Agency
to establish a Project Management Office is approved.
Effective January 1, 1981, the Director, National Security
Agency is assigned the responsibility for Computer Security
Evaluation for the Department of Defense.
Please provide the name of your representative for
computer security matters to ASD(C3I). The individual
chosen for this task should be empowered to work in your
behalf to develop and coordinate the charter and
implementing directives for the Center. I expect this working
group to identify necessary personnel and fiscal resources.
W. Graham Claytor, Jr.
cc: ASD(C3I)
ASD(Comptroller)
DUSD (Policy Review)
Figure 1-1
assessment of the progress to date in achieving widespread
availability of trusted computer systems.
The computer manufacturers are making substantial progress
in improving the integrity of their products, as can be seen
by a review of section 3 of this report. Most of the
incentive for this comes from a strong need to build more
reliable and easily maintainable products coupled with a
significant increase in the computer science understanding
of how to produce more reliable hardware and software. This
trend was well established before the efforts of the
Initiative and can be expected to continue at an
accelerating pace. But the existence of an organized effort
on the part of the government to understand the integrity
measures of industry developed computer products will have a
strong influence on the evolution of the industry's
integrity improvement measures.
If the government can establish consistent evaluation
criteria, the efforts of the Initiative to date have shown
that the industry will evolve their systems in accordance
with those criteria and the government can then expect to be
able to purchase high integrity computer products in the
same manner they purchase standard ADP systems today,
without the high additional costs of special purpose
development and maintenance. This is the philosophy being
pursued by the Initiative, to influence the evolution of
highly reliable commercial products to enable their use in
sensitive information handling applications and to obtain
sufficient understanding of the integrity of individual
products to determine suitable environments for their use.
This report is organized in the following manner. The
remainder of this section summarizes the major activities of
the Initiative since June 1978. Section 2 gives background
on the general nature of the computer security problem and
some technical details helpful in understanding the trusted
system evaluation process. Section 3 describes the current
status of the Initiative, including: (1) a description of
the Evaluated Products List concept, (2) a description of
the Trusted Computing Base (TCB) concept, (3) current draft
evaluation criteria, (4) a proposed evaluation process, and
(5) the status of current Initiative evaluation efforts.
Section 4 describes ongoing R&D, plans, and industry
implications in the areas of trusted operating systems,
trusted applications, and verification technology.
1.1 COMPUTER SECURITY INITIATIVE ACTIVITIES
Figure 1-2 illustrates the overall activities of the
Initiative. There are three main efforts being pursued in
parallel. The eventual outcome of this work is the
establishment of a consistent and systematic means of
evaluating the integrity of industry and government
developed computer systems. This outcome will be
accomplished when the Initiative has reached the Formal
Evaluation of industry developed systems represented in the
lower right of the figure. Before this can happen, the
evaluation process must be formalized, criteria for
evaluation established and an Executive Agent identified to
carry out the evaluations. The vertical dotted line
represents the accomplishment of this formalization. Prior
to this, in the Specification Phase (Part II on the figure),
draft evaluation criteria and specifications for a "Trusted
Computing Base" (TCB) are being developed (see section 3 of
this report). These draft documents are being distributed
for comment to the DoD through the Consortium and to
industry through the Education Phase efforts described
below. In order to ensure that the draft criteria and
specifications are realistic and feasible, the Initiative
has been conducting, at the invitation of various computer
manufacturers, evaluations of several potential industry
trusted systems. (Section 3.5 describes present efforts).
These informal evaluations are performed by members of the
Consortium, governed by OSD General Counsel approved legal
limitations and non-disclosure agreements. They are
conducted as mutually beneficial technical discussions with
the manufacturers and are serving a vital function in
illustrating the feasibility of such an evaluation process
and the industry's strong interest and willingness to
participate.
The other major part of the Initiative's efforts as
represented on figure 1-2 is the Education Phase. The goal
in this effort is twofold: (1) to transfer technology to
the computer manufacturers on how to develop trusted
computer systems and (2) to demonstrate to the general computer
user community that trusted computers can be built and
successfully employed in a wide variety of applications.
The principal method of accomplishing these goals is through
public seminars. Three such seminars have been held at the
National Bureau of Standards in July 1979, January 1980, and
November 1980. These seminars were attended by 300-350
people representing all the major computer manufacturers,
over 50 computer system user organizations and over 25
Federal and State organizations. The seminars have
generated a great deal of interest in the development of
trusted computer systems. In addition, frequent
participation in national-level conferences such as the
National Computer Conference (1979 and 1980) has helped to
establish the viability of the trusted computer concept.
There are three major efforts in the DoD computer security
R&D program. The first is the development and demonstration
of trusted operating systems. Included in these efforts are
the Kernelized Secure Operating System (KSOS), which went
into initial test site evaluation during the fall of 1980,
and the Kernelized VM/370 System (KVM/370), which will be
installed at two test sites by the first quarter of 1981.
Also included in this activity are the hardware and security
kernel development efforts on the Honeywell Secure
Communications Processor (SCOMP). All of these efforts
began as DARPA programs with joint funding from many
sources. Through the efforts of the Initiative,
arrangements have been made, starting in Oct 1980, for the
Navy to assume technical and contractual responsibility for
the KSOS and SCOMP efforts and for the Air Force to assume
similar responsibility for the KVM/370 effort. These
efforts are essential for the demonstration of trusted
computer systems to the DoD and also as examples to the
manufacturers as incentives to produce similar systems.
The second major R&D activity is the development of
applications of trusted computer systems. These include the
various guard-related information sanitization efforts (e.g.
ACCAT GUARD, FORSCOM GUARD), trusted front-end systems (e.g.
COINS Trusted TAS, DCA COS-NFE), trusted message system
activities (e.g. DARCOM Message Privacy Experiments), and a
recently-started effort in trusted data base management
systems.
The third R&D thrust is the establishment of a verification
technology program to advance the state of the art in
trusted system specification and verification. The first
phase of this program (FY80-FY83) includes major competitive
procurement activities to broaden our experience in using
current program verification technologies. This effort is
being undertaken to better understand the strengths and
weaknesses of these systems in order to be able to better
specify our requirements for future improved systems which
will be developed in the second phase of the program
(FY83-FY86). The Air Force has a major effort in this area
beginning in FY81. The Navy is initiating an R&D effort to
integrate several existing technologies into a package for
the specification and verification of applications like the
various Guard systems now under development.
A significant factor in the progress of the DoD R&D
activities in the past year has been the actions taken in
response to recommendations of the Defense Oversight
Committee's Report, "Employing Computer Security Technology
to Combat Computer Fraud." The Committee's report
recommended that the Services establish long-term programs
in computer security R&D and that specific sums be allocated
by each service in FY79, 80, and 81 while these long-term
programs are being established. The FY80 funds recommended
by the Committee were provided by March 1980 and have been
instrumental in keeping ongoing efforts underway and
providing the resources needed to establish the new
application and verification technology development efforts.
SECTION 2
BACKGROUND
The Defense Science Board Task Force on Computer Security
described the nature of the computer security problem in a
report entitled "Security Controls for Computer Systems"
dated February 1970 [WARE70]. That description remains
valid today and is reprinted here in part to set the context
for this report.
2.1 NATURE OF THE PROBLEM
2.1.1 The Security Problem
"The wide use of computers in military and defense
installations has long necessitated the application of
security rules and regulations. A basic principle
underlying the security of computer systems has
traditionally been that of isolation--simply removing the
entire system to a physical environment in which
penetrability is acceptably minimized. The increasing use
of systems in which some equipment components, such as user
access terminals, are widely spread geographically has
introduced new complexities and issues. These problems are
not amenable to solution through the elementary safeguard of
physical isolation.
"In one sense, the expanded problems of security provoked by
resource-sharing systems might be viewed as the price one
pays for the advantages these systems have to offer.
However, viewing the question from the aspect of such a
simplistic tradeoff obscures more fundamental issues.
First, the security problem is not unique to any one type of
computer system or configuration; it applies across the
spectrum of computational technology. While the present
paper frames the discussions in terms of time-sharing or
multiprogramming, we are really dealing not with system
configurations, but with security; today's computational
technology has served as catalyst for focusing attention on
the problem of protecting classified information resident in
computer systems.
"Secondly, resource-sharing systems, where the problems of
security are admittedly most acute at present, must be
designed to protect each user from interference by another
user or by the system itself, and must provide some sort of
"privacy" protection to users who wish to preserve the
integrity of their data and their programs. Thus, designers
and manufacturers of resource-sharing systems are concerned
with the fundamental problem of protecting information. In
protecting classified information, there are differences of
degree, and there are new surface problems, but the basic
issues are generally equivalent. The solutions the
manufacturer designs into the hardware and software must be
augmented and refined to provide the additional level of
protection demanded of machines functioning in a security
environment.
2.1.2 Types of Computer Systems
"There are several-ways in which a computer system can be
physically and operationally organized to serve its users.
The security controls will depend on the configuration and
the sensitivity of data processed in the system. The
following discussion presents two ways of viewing the
physical and operational configurations.
2.1.2.1 Equipment Arrangement and Disposition
"The organization of the central processing facilities for
batch or for time-shared processing, and the arrangement of
access capabilities for local or for remote interaction are
depicted in figure 2-1. Simple batch processing is the
historical and still prevalent mode of operation, wherein a
number of jobs or transactions are grouped and processed as
a unit. The batches are usually manually organized, and for
the most part each individual job is processed to completion
in the order in which it was received by the machine. An
important characteristic of such single-queue, batched,
run-to-completion systems which do not have an integrated
file management system for non-demountable, on-line memory
media is that the system need have no "management awareness"
from job to job. Sensitive materials can be erased or
removed from the computer quickly and relatively cheaply,
and mass memory media containing sensitive information can
be physically separated from the system and secured for
protection. This characteristic explains why a solution to
the problem we are treating has not been as urgent in the
past.
"In-multiprogramming, on the other hand, the jobs are
organized and processed by the system according to
algorithms designed to maximize the efficiency of the total
system in handling the complete set of transactions. In
local-access systems, all elements are physically located
within the computer central facility; in remote-access
systems, some units are geographically distant from the
central processor and connected to it by communication
lines.
Figure 2-1 (central processing organized for batch or for
time-shared operation, with local or remote access; security
controls grow in difficulty and complexity across these
arrangements)

Figure 2-2 (levels of user capability, ordered by difficulty
and complexity of security controls: Type I file query with
limited application programs; Type II programming via
interpretation; Type III programming via checked-out
compilers; Type IV full programming capability, including
new languages and new compilers)
2.1.2.2 User Capabilities
"Another way of viewing the types of systems, shown in
figure 2-2, is based on the levels of computing capability
available to the user.
"File-query Systems (Type I) enable the user to execute only
limited application programs embedded in the system and not
available to him for alteration or change. He selects for
execution one or more available application programs. He
may be able to couple several of these programs together for
automatic execution in sequence and to insert parameters
into the selected programs.
"Interpretive systems (Type II) provide the user with a
programming capability, but only in terms of input language
symbols that result in direct execution within the computer
of the operations they denote. Such symbols are not used to
construct an internal machine language program that can
subsequently be executed upon command from the user. Thus,
the user cannot obtain control of the machine directly,
because he is buffered from it by the interpretive software.
"Compiler systems (Type III) provide the user with a
programming capability, but only in terms of languages that
execute through a compiler embedded in the system. The
instructions to the compiler are translated by it into an
assembly language or basic machine language program.
Program execution is controlled by the user; however, he has
available to him only the limited compiler language.
"Full programming systems (Type IV) give the user extensive
and unrestrained programming capability. Not only can he
execute programs written in standard compiler languages, but
he also can create new programming languages, write
compilers for them, and embed them within the system. This
gives the user intimate interaction with and control over
the machine's complete resources--excepting of course, any
resources prohibited to him by information-protecting
safeguards (e.g., memory protection, base register controls,
and I/O hardware controls).
"In principle, all combinations of equipment configurations
(figure 2-1) and operational capabilities (figure 2-2) can
exist. In practice, not all the possible combinations have
been implemented, and not all the possibilities would
provide useful operational characteristics.
2.1.3 Threats To System Security
"By their nature, computer systems bring together a series
of vulnerabilities. There are human vulnerabilities
throughout; individual acts can accidentally or deliberately
jeopardize the system's information protection capabilities.
Hardware vulnerabilities are shared among the computer, the
communication facilities, and the remote units and consoles.
There are software vulnerabilities at all levels of the
machine operating system and supporting software; and there
are vulnerabilities in the organization of the protection
system (e.g., in access control, in user identification and
authentication, etc.). How serious any one of these might
be depends on the sensitivity (classification) of the
information being handled, the class of users, the
computational capabilities available to the user, the
operating environment, the skill with which the system has
been designed, and the capabilities of potential attackers
of the system.
"These points of vulnerability are applicable both in
industrial environments handling proprietary information and
in government installations processing classified data.
This Report is concerned directly with only the latter; it
is sufficient here to acknowledge that the entire range of
issues considered also has a "civil" side to which this work
is relevant.
"The design. of a secure system must provide protection
against the various types of vulnerabilities. These fall
into three major categories: accidental disclosures,
deliberate penetrations, and physical attack.
"Accidental Disclosure. A failure of components, equipment,
software, or subsystems, resulting in an exposure of
information or violation of any element of the system.
Accidental disclosures are frequently the result of failures
of hardware or software. Such failures can involve the
coupling of information from one user (or computer program)
with that of another user, the "clobbering" of information
(i.e., rendering files or programs unusable), the defeat or
circumvention of security measures, or unintended change in
security status of users, files, or terminals. Accidental
disclosures may also occur by improper actions of machine
operating or maintenance personnel without deliberate
intent.
"Deliberate Penetration. A deliberate and covert attempt to
(1) obtain information contained in the system, (2) cause
the system to operate to the advantage of the threatening
party, or (3) manipulate the system so as to render it
unreliable or unusable to the legitimate operator.
Deliberate efforts to penetrate secure systems can either be
active or passive. Passive methods include wire tapping and
monitoring of electromagnetic emanations. Active
infiltration is an attempt to enter the system so as to
obtain data from the files or to interfere with data files
or the system.
"One method of accomplishing active infiltration is' for a
legitimate user to penetrate portions of the system for
which he has no authorization. The design problem is one
of preventing access to files by someone who is aware of the
access control mechanisms and who has the knowledge and
desire to manipulate them to his own advantage. For
example, if the access control codes are all four-digit
numbers, a user can pick any four-digit number, and then,
having gained access to some file, begin interacting with it
in order to learn its contents.
"Another class of active infiltration techniques involves
the exploitation of trap-door entry points in the system
that by-pass the control facilities and permit direct access
to files. Trap-door entry points often are created
deliberately during the design and development stage to
simplify the insertion of authorized program changes by
legitimate system programmers, with the intent of closing
the trap-door prior to operational use. Unauthorized entry
points can be created by a system programmer who wishes to
provide a means for bypassing internal security controls and
thus subverting the system. There is also the risk of
implicit trap-doors that may exist because of incomplete
system design--i.e., loopholes in the protection mechanisms.
For example, it might be possible to find an unusual
combination of system control variables that will create an
entry path around some or all of the safeguards.
"Another potential mode of active infiltration is the use of
a special terminal illegally tied into the communication
system. Such a terminal can be used to intercept
information flowing between a legitimate terminal and the
central processor, or to manipulate the system. For
example, a legitimate user's sign-off signal can be
intercepted and cancelled; then, the illegal terminal can
take over interaction with the processor. Or, an illegal
terminal can maintain activity during periods when the
legitimate user is inactive but still maintaining an open
line. Finally, the illegal terminal might drain off output
directed to a legitimate terminal and pass on an error
message in its place so as to delay detection.
"Active infiltration also can be by an agent operating
within the secure organization. This technique may be
restricted to taking advantage of system protection
inadequacies in order to commit acts that appear accidental
but which are disruptive to the system or to its users, or
which could result in acquisition of classified information.
At the other extreme, the agent may actively seek to obtain
removable files or to create trap doors that can be
exploited at a later date. Finally, an agent might be
placed in the organization simply to learn about the system
and the operation of the installation, and to obtain what
pieces of information come his way without any particularly
covert attempts on his part at subversion.
"In passive subversion, means are applied to monitor
information resident within the system or being transmitted
through the communication lines without any corollary
attempt to interfere with or manipulate the system. The
most obvious method of passive infiltration is the wire tap.
If communications between remote terminals and the central
processor are over unprotected circuits, the problem of
applying a wire tap to the computer line is similar to that
of bugging a telephone call. It is also possible to monitor
the electromagnetic emanations that are radiated by the
high-speed electronic circuits that characterize so much of
the equipment used in computational systems. Energy given
off in this form can be remotely recorded without having to
gain physical access to the system or to any of its
components or communication lines. The possibility of
successful exploitation of this technique must always be
considered.
"Physical Attack. Overt assault against or attack upon the
physical environment (e.g., mob action) is a type of
vulnerability outside the scope of this Report.
2.1.4 Areas of Security Protection
"The system design must be aware of the points of
vulnerability, which may be thought of as leakage points,
and he must provide adequate mechanisms to counteract both
accidental and deliberate events. The specific leakage
points touched upon in the foregoing discussion can be
classified in five groups: physical surroundings, hardware,
software, communication links, and organizational (personnel
and procedures). The overall safeguarding of information in
a computer system, regardless of configuration, is achieved
by a combination of protection features aimed at the
different areas of leakage points. Procedures, regulations,
and doctrine for some of these areas are already established
within DoD, and are not therefore within the purview of the
Task Force. However, there is some overlap between the
various areas, and when the application of security controls
to computer systems raises a new aspect of an old problem,
the issue is discussed. An overview of the threat points is
depicted in figure 2-3.

Figure 2-3 (overview of computer system threat points:
radiation, crosstalk, and taps affecting the processor,
files, communication lines, and switching center; hardware
failures such as disabled protection circuits;
maintenance-man access and attachment of recorders; operator
subversion, such as replacing the supervisor or revealing
protective measures; systems programmer actions, such as
disabling protective features or providing "ins"; software
failures of protection features, access control, bounds
control, identification, and authentication; subtle software
modifications; user actions such as theft, copying, and
unauthorized access; improper connections and cross coupling
at the switching center; remote consoles)

2.1.4.1 Physical Protection

"Security controls applied to safeguard the physical
equipment apply not only to the computer equipment itself
and to its terminals, but also to such removable items as
printouts, magnetic tapes, magnetic disc packs, punch cards,
etc. Adequate DoD regulations exist for dissemination,
control, storage, and accountability of classified removable
items. Therefore, security measures for these elements of
the system are not examined in this Report unless there are
some unique considerations. The following general
guidelines apply to physical protection.
(a) The area containing the central computing complex and
associated equipment (the machine room or operational
area) must be secured to the level commensurate with
the most highly classified and sensitive material
handled by the system.
(b) Physical protection must be continuous in time,
because of the threat posed by the possibility of
physical tampering with equipment and because of the
likelihood that classified information will be stored
within the computer system even when it is not
operating.
(c) Remote terminal devices must be afforded physical
protection commensurate with the classification and
sensitivity of information that can be handled
through them. While responsibility for instituting
and maintaining physical protection measures is
normally assigned to the organization that controls
the terminal, it is advisable for a central authority
to establish uniform physical security standards
(specific protection measures and regulations) for
all terminals in a given system to insure that a
specified security level can be achieved for an
entire system. Terminal protection is important in
order to:
Prevent tampering with a terminal (installing
intelligence sensors);
Prevent visual inspection of classified work
in progress;
Prevent unauthorized persons from trying to
call and execute classified programs or obtain
classified data.
"If parts of the computer system (e.g., magnetic disc files,
copies of printouts) contain unusually sensitive data, or
must be physically isolated during maintenance procedures,
it may be necessary to physically separate them and
independently control access to them. In such cases, it may
be practical to provide direct or remote visual surveillance
of the ultra-sensitive areas. If visual surveillance is
used, it must be designed and installed in such a manner
that it cannot be used as a trap-door to the highly
sensitive material it is intended to protect.
2.1.4.2 Hardware Leakage Points
"Hardware portions of the system are subject to malfunctions
that can result directly in a leak or cause a failure of
security protection mechanisms elsewhere in the system,
including inducing a software malfunction. In addition,
properly operating equipment is susceptible to being tapped
or otherwise exploited. The types of failures that most
directly affect security include malfunctioning of the
circuits for such protections as bounds registers, memory
read-write protect, privileged mode operation, or priority
interrupt. Any hardware failure potentially can affect
security controls; e.g., a single-bit error in memory.
"Both active and passive penetration techniques can be used
against hardware leakage points. In the passive mode, the
intervener may attempt to monitor the system by tapping into
communications lines, or by monitoring compromising
emanations. Wholly isolated systems can be physically
shielded to eliminate emanations beyond the limits of the
secure installation, but with geographically dispersed
systems comprehensive shielding is more difficult and
expensive. Currently, the only practical solutions are
those used to protect communications systems.
"The problem of emanation security is covered by existing
regulations; there are no new aspects to this problem
raised by modern computing systems. It should be
emphasized, however, that control of spurious emanations
must be applied not only to the main computing center, but
to the remote equipment as well.
"Although difficult to accomplish, the possibility exists
that covert monitoring devices can be installed within the
central processor. The problem is that the computer
hardware involved is of such complexity that it is easy for
a knowledgeable person to incorporate the necessary
equipment in such a way as to make detection very difficult.
His capability to do so assumes access to the equipment
during manufacture or major maintenance. Equipment is also
vulnerable to deliberate or accidental rewiring by
maintenance personnel so that installed hardware appears to
function normally, but in fact by-passes or changes the
protection mechanisms.
"Remote consoles also present potential radiation
vulnerabilities. Moreover, there is a possibility that
recording devices might be attached to a console to pirate
information. Other remote or peripheral equipment can
present dangers. Printer ribbons or platens may bear
impressions that can be analyzed; removable storage media
(magnetic tapes, disc packs, even punch cards) can be
stolen, or at least removed long enough to be copied.
"Erasure standards for magnetic media are not within the
scope of this Task Force to review or establish. However,
system designers should be aware that the phenomenon of
retentivity in magnetic materials is inadequately
understood and is a threat to system security.
2.1.4.3 Software Leakage Points
"Software leakage points include all vulnerabilities
directly related to the software in the computer system. Of
special concern are the operating system and the
supplementary programs that support the operating system
because they contain the software safeguards. Weaknesses
can result from improper design, or from failure to check
adequately for combinations of circumstances that can lead
to unpredictable consequences. More serious, however, is
the fact that operating systems are very large, complex
structures, and thus it is impossible to exhaustively test
for every conceivable set of conditions that might arise.
Unanticipated behavior can be triggered by a particular user
program or by a rare combination of user actions.
Malfunctions might only disrupt a particular user's files or
programs; as such, there might be no risk to security, but
there is a serious implication for system reliability and
utility. On the other hand, operating system malfunctions
might couple information from one program (or user) to
another; clobber information in the system (including
information within the operating system software itself); or
change classification of users, files, or programs. Thus,
malfunctions in the system software represent potentially
serious security risks. Conceivably, a clever attacker
might establish a capability to induce software malfunctions
deliberately; hiding beneath the apparently genuine trouble,
an on-site agent may be able to tap files or to interfere
with system operation over long periods without detection.
"The security safeguards provided by the operating system
software include access controls, user identification,
memory bounds control, etc. As a result of a hardware
malfunction, especially a transient one, such controls can
become inoperative. Thus, internal checks are necessary to
insure that the protection is operative. Even when this is
done, the simultaneous failure of both the protection
feature and its check mechanism must always be regarded as a
possibility. With proper design and awareness of the risk,
it appears possible to reduce the probability of undetected
failure of software safeguards to an acceptable level.
"Probably the most serious risk in system software is
incomplete design, in the sense that inadvertent loopholes
exist in the protective barriers and have not been foreseen
by the designers. Thus, unusual actions on the part of
users, or unusual ways in which their programs behave, can
induce a loophole. There may result a security breach, a
suspension or modification of software safeguards (perhaps
undetected), or wholesale clobbering of internal programs,
data, and files. It is conceivable that an attacker could
mount a deliberate search for such loopholes with the
expectation of exploiting them to acquire information either
from the system or about the system--e.g., the details of
its information safeguards.
2.1.4.4 Communication Leakage Points
"The communications linking the central processor, the
switching center and the remote terminals present a
potential vulnerability. Wiretapping may be employed to
steal information from land lines, and radio intercept
equipment can do the same to microwave links. Techniques
for intercepting compromising emanations may be employed
against the communications equipment even more readily than
against the central processor or terminal equipment. For
example, crosstalk between communications lines or within
the switching center itself can present a vulnerability.
Lastly, the switch gear itself is subject to error and can
link the central processor to the wrong user terminal.
2.1.4.5 Organizational Leakage Points
"There are two prime organizational leakage points,
personnel security clearances and institutional operating
procedures. The first concerns the structure,
administration, and mechanism of the national apparatus for
granting personnel security clearances. It is accepted that
adequate standards and techniques exist and are used by the
cognizant authority to insure the reliability of those
cleared. This does not, however, relieve the system
designer of a severe obligation to incorporate techniques
that minimize the damage that can be done by a subversive
individual working from within the secure organization. A
secure system must be based on the concept of isolating any
given individual from all elements of the system to which he
has no need for access. In the past, this was accomplished
by denying physical access to anyone without a security
clearance of the appropriate level. In resource-sharing
systems of the future, a population of users ranging from
uncleared to those with the highest clearance levels will
interact with the system simultaneously. This places a
heavy burden on the overall security control apparatus to
insure that the control mechanisms incorporated into the
computer systems are properly informed of the clearances and
restrictions applicable to each user. The machine system
must be designed to apply these user access restrictions
reliably.
"In some installations, it may be feasible to reserve
certain terminals for highly classified or highly sensitive
or restricted work, while other terminals are used
exclusively for less sensitive operation. Conversely, in
some installations any terminal can be used to any degree of
classification or sensitivity, depending on the clearance
and needs of the user at the given moment. In either of
these cases, the authentication and verification mechanisms
built into the machine system can be relied upon only to the
degree that the data on personnel and on operational
characteristics provided it by the security apparatus are
accurate.
"The second element of organizational leakage points
concerns institutional operating procedures. The
consequences of inadequate organizational procedures, or of
their haphazard application and unsupervised use, can be
just as severe as any other malfunction. Procedures include
the insertion of clearance and status information into the
security checking mechanisms of the machine system, the
methods of authenticating users and of receipting for
classified information, the scheduling of computing
operations and maintenance periods, the provisions for
storing and keeping track of removable storage media, the
handling of printed machine output and reports, the
monitoring and control of machine-generated records for the
security apparatus, and all other functions whose purpose is
to insure reliable but unobtrusive operation from a security
control viewpoint. Procedural shortcomings represent an
area of potential weakness that can be exploited or
manipulated, and which can provide an agent with innumerable
opportunities for system subversion. Thus, the installation
operating procedures have the dual function of providing
overall management efficiency and of providing the
administrative bridge between the security control apparatus
and the computing system and its users.
"The Task Force has no specific comments to make with
respect to personnel security issues, other than to note
that control of the movement of people must include control
over access to remote terminals that handle classified
information, even if only intermittently. The machine room
staff must have the capability and responsibility to control
the movement of personnel into and within the central
computing area in order to insure that only authorized
individuals operate equipment located there, have access to
removable storage media, and have access to any machine
parts not ordinarily open to casual inspection.
2.1.4.6 Leakage Point Ecology
"In dealing with threats to system security, the various
leakage points cannot be considered only individually.
Almost any imaginable deliberate attempt to exploit
weaknesses will necessarily involve a combination of
factors. Deliberate acts mounted against the system to take
advantage of or to create leakage points would usually
require both a system design shortcoming, either unforeseen
or undetected, and the placement of someone in a position to
initiate action. Thus, espionage activity is based on
exploiting a combination of deficiencies and circumstances.
A software leak may be caused by a hardware malfunction.
The capability to tap or tamper with hardware may be
enhanced because of deficiencies in software checking
routines. A minor, ostensibly acceptable, weakness in one
area, in combination with similar shortcomings in seemingly
unrelated activities, may add up to a serious potential for
system subversion. The system designer must be aware of the
totality of potential leakage points in any system in order
to create or prescribe techniques and procedures to block
entry and exploitation.
"The security problem of specific computer systems must be
solved on a case-by-case basis employing the best judgment
of a team consisting of system programmers, technical,
hardware, and communications specialists, and security
experts."
2.2 TECHNOLOGY DEVELOPMENT HISTORY
Much has been learned about methods of assuring the
integrity of information processed on computers since the
emergence of operating systems in the early 1960s. Those
early efforts were primarily concerned with improvements in
the effective use of the large computer centers that were
then being established. Information protection was not a
major concern since these centers were operated as large
isolated data banks. There were many significant hardware
and software advances in support of the new operating system
demands. Many of these changes were beneficial to the
interests of information protection but since protection was
not an essential goal at that time, the measures were not
applied consistently and significant protection flaws
existed in all commercial operating systems [TANG80].
In the late 1960s, spurred by activities such as the Defense
Science Board study quoted in the previous section, efforts
were initiated to determine how vulnerable computer systems
were to penetration. The "Tiger Team" system penetration
efforts are well known. Their complete success in
penetrating all commercial systems attempted provided
convincing evidence that the integrity of computer systems
hardware and software could not be relied upon to protect
information from disclosure to other users of the same
computer system.
By the early 1970s penetration techniques were well
understood. Tools were developed to aid in the systematic
detection of critical system flaws. Some detected
mechanical coding errors, relying on the sophistication of
the user to discover a way to exploit the flaws [ABBO76];
others organized the search into a set of generic conditions
which when present often indicated an integrity flaw
[CARL75]. Automated algorithms were developed to search for
these generic conditions, freeing the "penetrator" from
tedious code searches and allowing the detailed analysis of
specific potential flaws. These techniques have continued
to be developed to considerable sophistication. In addition
to their value in searching for flaws in existing software,
these algorithms are useful as indicators of conditions to
avoid in writing new software if one wishes to avoid the
flaws that penetrators most often exploit.
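
To make the notion of a generic-condition search concrete,
the following Python fragment is a minimal sketch under
invented assumptions: the pattern names are hypothetical,
and it is not the algorithm of [ABBO76] or [CARL75], only an
illustration of flagging source lines that match conditions
often associated with integrity flaws.

    import re

    # Hypothetical generic conditions; the actual catalogs of
    # [ABBO76] and [CARL75] were considerably richer.
    GENERIC_CONDITIONS = {
        "unbounded copy": re.compile(r"\bstrcpy\s*\("),
        "unchecked user pointer": re.compile(r"\*\s*user_ptr"),
    }

    def scan_source(lines):
        """Flag each line matching a generic condition so the
        analyst can concentrate on detailed analysis of specific
        potential flaws rather than tedious code searches."""
        findings = []
        for lineno, text in enumerate(lines, start=1):
            for name, pattern in GENERIC_CONDITIONS.items():
                if pattern.search(text):
                    findings.append((lineno, name, text.strip()))
        return findings

    # Example: two suspect lines of system code.
    print(scan_source(["strcpy(buf, arg);", "x = *user_ptr;"]))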
These penetration aids are, however, of limited value in
producing high integrity software systems. While they could
be used to reveal certain types of flaws, they could not
assure the analysts that exploitable flaws of other types
did not remain.
In the early 1970s, the Air Force Electronic Systems Division
(ESD) conducted in-depth analyses of the requirements for
secure systems [ANDE72]. The concepts which emerged from
their efforts are today the basis for most major trusted
computer system developments. The basic concept is a
Reference Monitor which mediates the access of all active
system elements (people or programs) referred to as
subjects, to all system elements containing information
(files, records, etc.) referred to as objects. All of the
security relevant decision making functions within a
conventional operating system are collected into a small
primitive but complete operating system referred to as the
Security Kernel. The security kernel is a specific
implementation of the reference monitor in software and
hardware. The three essential characteristics of this
kernel are that it be:
complete (i.e., that all accesses of all subjects to
all objects be checked by the kernel)
isolated (i.e., that the code that comprises the
kernel be protected from modification or interference
by any other software within the system)
correct (i.e., that it perform the function for which
it was intended and no other function).
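
These three properties can be made concrete with a small
sketch. The Python fragment below is illustrative only; the
class and function names are invented rather than drawn from
any system cited in this report. It shows the completeness
property, with every access funneled through a single check;
isolation and correctness depend on hardware protection and
formal proof and cannot be expressed in a few lines of code.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Subject:        # an active element: a person or program
        name: str
        clearance: int    # 0=Unclassified .. 3=Top Secret

    @dataclass(frozen=True)
    class Object:         # an information container: file, record
        name: str
        classification: int

    def reference_monitor(subj, obj, mode):
        """Mediate EVERY access of every subject to every object."""
        if mode == "read":     # no reading above one's clearance
            return subj.clearance >= obj.classification
        if mode == "write":    # no writing down to a lower level
            return subj.clearance <= obj.classification
        return False

    # All requests must pass through the monitor; none may bypass it.
    analyst = Subject("analyst", clearance=2)      # Secret
    plan = Object("ops-plan", classification=3)    # Top Secret
    assert not reference_monitor(analyst, plan, "read")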
Since these Air Force studies, considerable effort has gone
into building security kernels for various systems. The
reference monitor concept was the basis for work by MIT,
MITRE and Honeywell in restructuring the Multics operating
system [SCHR77]. MITRE and UCLA have built prototype
security kernels for the PDP-11 minicomputer
[WOOD77,POPE79]. System Development Corporation (SDC) is
building a security kernel for the IBM VM/370 operating
system [GOLD79]. Ford Aerospace and Communications
Corporation is implementing the Kernelized Secure Operating
System [MCCA79,BERS79] based on the Secure UNIX prototypes
of UCLA and MITRE. AUTODIN II, the DCA secure packet
switching system, is employing this technology in the packet
switching nodes. The Air Force SACDIN program (formerly
called SATIN IV) is also employing this technology.
2.3 TRUSTED OPERATING SYSTEM FUNDAMENTALS
An operating system is a specialized set of software which
provides commonly needed functions for user developed
application programs. All operating systems provide a well
defined interface to application programs in the form of
system calls and parameters. Figure 2-4 illustrates the
relationship between the operating system and application
software. The operating system interfaces to the hardware
through the basic machine instruction set and to
applications software through the system calls which
constitute the entry points to the operating system.
Applications programs (e.g., A, B, and C) utilize these
system calls to perform their specific tasks.
A trusted operating system patterned after an existing
system is illustrated in figure 2-5. The security kernel is
a primitive operating system providing all essential
security relevant functions including process creation and
execution and mediation of primary interrupt and trap
responses. Because of the need to prove that the security
relevant aspects of the kernel perform correctly, great care
is taken to keep the kernel as small as possible. The
kernel interface is a well defined set of calls and
interrupt entry points. In order to map these kernel
functions into a specific operating system environment, the
operating system emulator provides the nonsecurity software
interface for user application programs which is compatible
with the operating system interface in figure 2-4. The
level of compatibility determines what existing single
security level application programs (e.g., A, B, C) can
operate on the secure system without change.
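
The layering of figure 2-5 might be sketched as follows.
This is a hypothetical illustration, not code from KSOS,
KVM/370, or SCOMP, and the class and method names are
invented; the point is that the emulator implements a
familiar interface entirely in terms of kernel primitives,
so that all security decisions remain inside the small,
verifiable kernel.

    class SecurityKernel:
        """A primitive but complete set of security relevant
        functions, kept small so correctness can be verified."""
        def __init__(self):
            self._segments = {}

        def segment_create(self, name, level, data=b""):
            self._segments[name] = (level, data)

        def segment_read(self, name, clearance):
            level, data = self._segments[name]
            if clearance < level:  # the kernel mediates every access
                raise PermissionError("kernel denied access")
            return data

    class OSEmulator:
        """Presents a conventional operating system interface to
        unchanged single-security-level application programs,
        implemented entirely with kernel calls."""
        def __init__(self, kernel, clearance):
            self._kernel = kernel
            self._clearance = clearance

        def read_file(self, filename):
            # A legacy-style file read mapped onto a kernel call.
            return self._kernel.segment_read(filename, self._clearance)

    kernel = SecurityKernel()
    kernel.segment_create("msg", level=1, data=b"confidential traffic")
    print(OSEmulator(kernel, clearance=2).read_file("msg"))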
Dedicated systems often do not need or cannot afford the
facilities or environment provided by a general purpose
operating system, but they may still be required to provide
internal protection. Because the security kernel interface
is well defined and provides all the primitive functions
needed to implement an operating system, it can be called
directly by specialized application programs which provide
their own environment in a form tailored for efficient
execution of the application program. Examples of this type
of use are dedicated data base management and message
handling systems.
Figure 2-4 (application programs A, B, and C calling the
operating system through a well defined interface of system
calls; the operating system in turn runs on the hardware)

Figure 2-5 (a trusted operating system: application programs
call the operating system emulator through a well defined
interface, and the emulator in turn calls the security
kernel, which resides on the hardware, through a second well
defined interface)

Figure 2-6 illustrates the relationship between two typical
computer systems connected by a network. Each system is
composed of an operating system (depicted by the various
support modules arrayed around the outside of each box) and
application programs (e.g., A, Q, and R in the inner area of
the boxes). The dotted path shows how a terminal user on
System I might access File X on System II. Working through
the terminal handler, the user must first communicate with
an application program (A) which will initiate a network
connection with the remote computer through the network
interface software. On System II an application program or
a system utility (Q) is initiated on the user's behalf to
access File X using the file system. Program Q could
perform a data base update or retrieval for the user or it
could arrange to transfer the file across the network to the
local computer for processing.
When this scenario is applied in a secure environment, the
two systems are placed in physically secure areas and, if
the network is not secure, encryption devices are installed
at the secure interface to the network as shown in figure
2-6.
Figure 2-7 illustrates the function of the security kernel
in the above scenario. Because the kernel resides directly
on the hardware (figure 2-5) and processes all interrupts,
traps and other system actions, it is logically imposed
between all "subjects" and "objects" on the system and can
perform access checks on every event affecting the system.
It should be noted that depending on the nature of the
hardware architecture of the system, the representation of
the kernel may have to include the various I/O device
handlers. The DEC PDP-11, for example, requires that all
device handlers be trusted and included in the kernel since
I/O has direct access to memory. The Honeywell Level 6 with
the Security Protection Module option (under development)
does not require trusted device drivers since I/O access to
memory is treated the same way as all other memory accesses
and can be controlled by the existing hardware mechanisms.
Figure 2-6 (two computer systems connected through a network,
with encryption devices at the secure interfaces to the
network)

Figure 2-7 (the security kernel of each system logically
imposed between all subjects and objects)
2.4 SYSTEM SECURITY VULNERABILITIES
Protection is always provided in relative quantities.
Complete security is not possible with today's physical
security measures, nor will it be with new computer security
measures. There will always be something in any security
system which can fail. The standard approach to achieving
reliable security is to apply multiple measures in depth.
Traditional locks and fences provide degrees of protection
by delaying an intruder until some other protection
mechanism such as a roving watchman can discover the
attempted intrusion. With computer systems this "delay
until detected" approach won't always work. Once an
intruder knows about a security flaw in a computer system,
he can generally exploit it quickly and repeatedly with
minimal risk of detection.
Research on the security kernel approach to building trusted
operating systems has produced a positive change in this
situation. While absolute security cannot be achieved, the
design process for trusted computer systems is such that one
can examine the spectrum of remaining vulnerabilities and
make reasonable judgments about the threats he expects to
encounter and the impact that countermeasures will have on
system performance.
A caution must be stated that the techniques described here
do not diminish the need for physical and administrative
security measures to protect a system from unauthorized
external attack. The computer security/integrity measures
described here allow authorized users with varying data
access requirements to simultaneously utilize a computer
facility. This capability relies upon the existing physical
and administrative security measures rather than replacing
them.
The nature of traditional physical and administrative
security vulnerabilities encountered in the operation of
computers with sensitive information is well understood.
Only users cleared to the security level of the computer
complex are allowed access to the system. With the advent
of trusted computer systems allowing simultaneous use of
computers by personnel with different security clearances
and access requirements, an additional set of security
vulnerabilities comes into play. Table 2-A describes one
view of this new vulnerability spectrum as a series of
categories of concern. None of these concerns was serious
in previous systems because there was no need or
opportunity to rely on the integrity of the computer
hardware and software.
The first category is the Security Policy which the system
must enforce in order to assure that users access only
authorized data.

TABLE 2-A -- The Vulnerability Spectrum

Security Policy
System Specification
High Level Language Implementation
Machine Code Implementation
Hardware Modules
Circuit Electronics
Device Physics

This policy consists of the rules which
the computer will enforce governing the interactions between
system users. There are many different policies possible
ranging from allowing no one access to anyone else's
information to full access to all data on the system. The
DoD security policy (table 2-B) consists of a lattice
relationship in which there are classification levels,
typically Unclassified through Top Secret, and compartments
(or categories) which are often mutually exclusive groupings
[BELL74]. With this policy a partial ordering relationship
is established in which users with higher personnel security
clearance levels can have access to information at lower
classification levels provided the users also have a "need
to know" the information. The vulnerability concern
associated with the security policy is assuring that the
policy properly meets the total system security
requirements.
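
A minimal sketch of this dominance rule, assuming invented
names and the four customary levels, follows; it illustrates
the partial ordering described above and in table 2-B, not
the access control mechanism of any particular system.

    LEVELS = {"Unclassified": 0, "Confidential": 1,
              "Secret": 2, "Top Secret": 3}

    def dominates(subj_level, subj_comps, obj_level, obj_comps):
        """True when the subject's (level, compartments) pair
        dominates the object's: a clearance at least as high AND
        every compartment of the object held by the subject."""
        return (LEVELS[subj_level] >= LEVELS[obj_level]
                and set(obj_comps) <= set(subj_comps))

    # The example of table 2-B: a Secret user in compartment B
    # may access Confidential information in B, but not
    # information in compartment A at any level.
    assert dominates("Secret", {"B"}, "Confidential", {"B"})
    assert not dominates("Secret", {"B"}, "Secret", {"A"})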
The second general concern is the System Specification
Level. Here the function of each module within the system
and its interface to other modules is described in detail.
Depending upon the exact approach employed, the system
specification level may involve multiple abstract
descriptions. The vulnerability here is assuring that
each level of the specification enforces the
policy previously established.
The next vulnerability concern is the high level language
implementation. This category constitutes the actual module
implementation represented in a high order language (HOL)
such as EUCLID or PASCAL. This vulnerability involves the
assurance that the code actually obeys the specifications.
The next concern on the vulnerability list is the machine
code implementation which includes the actual instructions
to be run on the hardware. The step from HOL implementation
to machine code is usually performed by a compiler and the
concern is to assure that the compiler accurately transforms
the HOL implementation into machine language.
The next level of concern is that the hardware modules
implementing the basic instructions on the machine perform
accurately the functions they represent. Does the ADD
instruction perform an ADD operation correctly and nothing
else? Finally, the last concerns include the circuit
electronics and more fundamental device physics itself. Do
these elements accurately perform in the expected manner?
As can be seen by analyzing this vulnerability spectrum,
some of the areas of concern are more serious than others.
In particular, relatively little concern is given to circuit
electronics and device physics since there is considerable
confidence that these elements will perform as expected.
There is a concern with hardware modules, though in general
TABLE 2-B -- DoD Security Policy
I. Non discretionary (i.e., levels established by national policy
must be enforced).
Partially Ordered Relationship:
Top Secret > Secret > Confidential > Unclassified
Compartments A, B, C are mutually exclusive
Example:
User in Compartment B, level Secret can have access to all
information at Secret and below (e.g., Confidential and
Unclassified) in that compartment, but no access to
information in Compartments A or C.
II. Discretionary, "Need to know" - (i.e., levels established
"informally").
most nonsecurity relevant hardware failures do not pose a
significant vulnerability to the security of the system and
will be detected during normal operations of the machine.
Those security relevant hardware functions can be subject to
frequent software testing to insure (to a high degree) that
they are functioning properly. The mapping between HOL and
machine code implementation is a serious concern. The
compiler could perform improper transformations which would
violate the integrity of the system. This mapping can be
checked in the future by verification of the compiler
(presently beyond the state-of-the-art). Today we must rely
on rigorous testing of the compiler.
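Such a check might be pictured as a small self-test routine
run periodically by the system. The sketch below is
illustrative only; the report prescribes no particular test
set, and the vectors here are invented.

    # Periodic self-test of a security-relevant operation.
    TEST_VECTORS = [(2, 3, 5), (0, 0, 0), (-1, 1, 0),
                    (2**30, 2**30, 2**31)]

    def add_instruction_ok():
        # Does ADD perform an ADD "and nothing else" on a
        # sample of operand pairs?
        return all(a + b == expected
                   for (a, b, expected) in TEST_VECTORS)

    def run_security_check():
        if not add_instruction_ok():
            raise SystemError("possible hardware fault: ADD "
                              "failed its self-test")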
The selection of the security policy which the system must
support requires detailed analysis of the application
requirements but is not a particularly complex process and
can be readily comprehended so the level of concern is not
too high for this category.
The system specification and HOL implementations are the two
areas which are of greatest concern both because of the
complex nature of these processes and the direct negative
impact that an error in either has on the integrity of the
system. Considerable research has been done to perfect both
the design specification process and methods for assuring
its correct HOL implementation [POPE78b,MILL76,FEIE77,
WALK79,MILL79]. Much of this research has involved the
development of languages and methodologies for achieving a
complete and correct implementation [ROUB77,AMBL76,HOLT78].
As stated earlier this vulnerability spectrum constitutes a
set of conditions in which the failure of any element may
compromise the integrity of the entire system. In the high
integrity systems being implemented today, the highest risk
vulnerability areas are receiving the most attention.
Consistent with the philosophy of having security measures
in depth, it will be necessary to maintain strict physical
and administrative security measures to protect against
those lower risk vulnerabilities that cannot or have not yet
been eliminated by trusted hardware/software measures. This
will result in the continued need to have cleared operation
and maintenance personnel and to periodically execute
security checking programs to detect hardware failures.
Over the next few years as we understand better how to
handle the high risk vulnerabilities we will be able to
concentrate more on the lower risk areas and consequently
broaden the classes of applications in which these systems
will be suitable.
Computer system security vulnerabilities constitute paths
for passing information to unauthorized users. These paths
can be divided into two classes: direct (or overt) and
indirect (or covert) channels [LAMP73,LIPN75]. Direct paths
grant access to information through the direct request of a
user. If an unauthorized user asks to read a file and is
granted access to it, he has made use of a direct path. The
folklore of computer security is filled with case histories
of commercial operating systems being "tricked" into giving
direct access to unauthorized data. Indirect or covert
channels are those paths used to pass information between
two user programs with different access rights by modulating
some system resource such as storage allocation. For
example, a user program at one access level can manipulate
his use of disk storage so that another user program at
another level can be passed information through the number
of unused disk pages.
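The disk-page example can be sketched as two cooperating
programs: a sender at the higher level encodes a value in the
number of pages it holds, and a receiver at the lower level
recovers the value from the system-wide free-page count. The
interfaces below are hypothetical.

    # Illustrative covert storage channel via disk allocation.
    class Disk:
        def __init__(self, pages):
            self.free = pages
        def allocate(self, n):
            self.free -= n
        def free_pages(self):
            return self.free

    def send(value, disk):
        # High-level sender: modulate its own disk usage.
        disk.allocate(value)

    def receive(disk, baseline):
        # Low-level receiver: read the value back out of the
        # shared free-page count.
        return baseline - disk.free_pages()

    disk = Disk(pages=100)
    send(11, disk)
    assert receive(disk, baseline=100) == 11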
Unauthorized direct access information paths can be
completely eliminated by the security kernel approach since
all objects are labeled with access information and the
kernel checks them against the subject's access rights
before each access is granted. The user who is interested
only in eliminating unauthorized direct data access can
achieve "complete" security using these techniques. Many
environments in which all users are cleared and only a
"need-to-know" requirement exists, can be satisfied by such
a system.
Indirect data paths are more difficult to control. Some
indirect channels can be easily eliminated, others can never
be prevented. (The act of turning off the power to a system
can always be used to pass information to users.) Some
indirect channels have very high bandwidth (memory to memory
speeds); many operate at relatively low bandwidth.
Depending upon the sensitivity of the application, certain
indirect channel bandwidths can be tolerated. In most cases
external measures can be taken to eliminate the utility of
an indirect channel to a potential penetrator.
The elimination of indirect data channels often affects the
performance of a system. This situation requires that the
customer carefully examine the nature of the threat he
expects and that he eliminate those indirect paths which
pose a real problem in his application. In a recent
analysis, one user determined that indirect path bandwidths
of approximately teletype speed are acceptable while paths
that operate at line printer speed are unacceptable. The
assumption was that the low speed paths could be controlled
by external physical measures. With these general
requirements to guide the system designer it is possible to
build a useful trusted system today.
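The difference between the two speeds is easy to appreciate
with a little arithmetic. The device rates below are typical
figures of the period, assumed for illustration; they are not
taken from the analysis mentioned above.

    # Rough bandwidth comparison of two indirect paths.
    TELETYPE_CPS = 10                  # about 10 chars/second
    PRINTER_CPS = 1200 * 132 / 60.0    # 1200 lines/min, 132 cols
    DOCUMENT_CHARS = 100_000           # a sizable document

    for name, cps in [("teletype", TELETYPE_CPS),
                      ("line printer", PRINTER_CPS)]:
        hours = DOCUMENT_CHARS / cps / 3600.0
        print(f"{name:12s}: {hours:6.2f} hours to pass it all")

At teletype speed the document above takes nearly three hours
to leak, long enough for external physical measures to matter;
at line printer speed it takes well under a minute.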
The applications for which trusted operating systems will be
used and the environments in which they will operate cover a
wide spectrum. The most sensitive practical environment
encompasses highly sensitive intelligence information on a
system with unclassified users. AUTODIN II is employing
security kernel technology to operate a packet switched
network in such an environment. A minimum sensitive
environment in which a trusted system might be placed
involves unclassified information where individual
need-to-know or privacy must be maintained. There are a
large number of environments between these two that have
differing degrees of sensitivity.
The type of application for which the trusted system will be
used influences the concern for the integrity of the system.
For example, while AUTODIN II does not employ full code
verification or fault resistant hardware, it is being used
for an application which offers the user few opportunities
to exploit weaknesses within the packet switch software.
Thus it can be used in a much higher-risk environment than
can a general-purpose computer system. A general-purpose
programming environment offers many more opportunities to
exploit system weaknesses. The combination of the
sensitivity of information being processed relative to the
clearances of the users and the degree of user capability
afforded by a particular application are the primary factors
in determining the level of concern required for a
particular system.
There are examples of multilevel systems that have been
approved which provide significant data points in the
environment/application spectrum. Honeywell Multics,
enhanced by an "access isolation mechanism", is installed as
a general-purpose timesharing system at the Air Force Data
Services Center in the Pentagon in a Top Secret environment
with some users cleared only to the Secret level. Multics
has the best system integrity of any commercial operating
system available today. While it does not have formal
design specifications as described in the previous section,
the system was designed and structured with protection as a
major goal. Formal development procedures were not used
because the system was developed before these techniques
were available. In spite of this, after a very thorough and
careful review, the Air Force determined that the benefit of
using this system exceeded the risk that a user might
attempt to exploit a system weakness, given that all users
have at least a Secret clearance.
There have been several other examples where current
technology, enhanced by audit procedures and subjected to
rigorous testing, has been approved for use in limited
sensitivity applications.
The degree to which one must rely on technical features of a
system for integrity depends significantly on the
environment that the system will operate in and the
capabilities that a user has to exploit system weaknesses.
There has been some study of the range of sensitivities for
different applications and environments [ADAM79]. Section
3.1 describes a way of combining these application and
environment concerns with the technical measures of system
integrity.
2.6 VERIFICATION TECHNOLOGY
The security kernel approach to designing trusted computing
systems collects the security relevant portions of the
operating system into a small primitive operating system.
In order to have confidence that the system can be trusted,
it is necessary to have confidence that the security kernel
operates correctly. That is, one must have confidence that
the security kernel enforces the security policy which the
system is supposed to obey.
Traditional means such as testing and penetration can and
should be used to uncover flaws in the security kernel
implementation. Unfortunately, it is not possible to test
all possible inputs to a security kernel. Thus, although
testing may uncover some flaws, no amount of testing will
guarantee the absence of flaws. For critical software, such
as a security kernel, additional techniques are needed to
gain the necessary assurance that the software meets its
requirements. Considerable research has been devoted to
techniques for formally proving that software operates as
intended. These techniques are referred to as software
verification technology or simply verification technology.
In the case of a security kernel, the critical aspect of its
operation is the enforcement of a security policy. The
ultimate goal of a verification is to prove that the
implemented security kernel enforces the desired security
policy. There are five main areas of concern in relating
the security policy to the implemented security kernel: the
security policy itself, system specification, high order
language implementation, compiler, and hardware. The
following paragraphs discuss the way in which verification
addresses each of these areas.
2.6.1 Security Policy
DoD has established regulations covering the handling of
classified information- (e.g. DOD Directive 5200.28).
However, in order to prove that a security kernel enforces a
security policy, it is necessary to have a formal
mathematical model of the security policy. It is not
possible to prove that the model is correct since the model
is a formal mathematical interpretation of a
non-mathematical policy. Fortunately, mathematical models
of security have existed since 1973 when Bell and LaPadula
formulated a model of multilevel security [BELL74].
Various models of multilevel security have been used since
1973, but they have all been derived from the original
Bell-LaPadula model. Since this model has been widely
disseminated and discussed, one can have confidence that the
model correctly reflects the non-mathematical DoD
regulations. In the case of software with security
requirements different from those of a security kernel, a
specialized model is needed, and thorough review is required
to determine that the model guarantees the informal
requirements.
2.6.2 System Specification
In practice the gap between the mathematical model of
security and the implemented security kernel is too great to
directly prove that the kernel enforces the model. A
specification of the system design can be used to break the
proof up into two parts:
a) Show the system specification obeys the model.
b) Show the kernel code correctly implements the
specification.
Step a) is called Design Verification. Step b) is called
Implementation or Code Verification.
To be useful for verification, the meaning of the system
specification must be precisely defined. This requires that
a formally defined specification language be used. Formal
specification languages, associated design and verification
methodologies, and software tools to help the system
designer and verifier have been developed by several
organizations. Since a specification typically hides much
of the detail which must be handled in an implementation,
design verification is significantly easier than code
verification. The design verification usually requires the
proof of a large number of theorems, but most of these
theorems can be handled by automatic theorem provers. There
are several methodologies available today that work with
existing automatic theorem provers. Verification that a
formal design specification obeys a security model has been
carried out as part of the AUTODIN II, SACDIN, KSOS, and
KVM/370 programs. Design verification can be useful even if
no code verification is done. Traditional techniques can
give some confidence that the code corresponds to the
implementation, and design verification will uncover design
flaws, which are the most difficult to correct.
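A toy version of step a) conveys the flavor of design
verification: every operation in the specification is shown to
preserve the invariant demanded by the model. In practice the
theorems are generated and discharged by the methodologies and
theorem provers mentioned above; the sketch below, with an
invented two-level system, simply checks the invariant by
exhaustion over a tiny state space.

    # Toy design-verification check for an invented two-level
    # system: each specified transition preserves the model's
    # invariant.
    from itertools import product

    LEVELS = ("low", "high")

    def invariant(reads):
        # Model property: no low subject holds read access to
        # a high object.
        return ("low", "high") not in reads

    def op_grant_read(reads, lvl_s, lvl_o):
        # Specified behavior: grant only if subject dominates.
        if LEVELS.index(lvl_s) >= LEVELS.index(lvl_o):
            reads = reads | {(lvl_s, lvl_o)}
        return reads

    for lvl_s, lvl_o in product(LEVELS, LEVELS):
        assert invariant(op_grant_read(frozenset(), lvl_s, lvl_o))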
2.6.3 HOL, Compiler, Hardware
After the system specification has been verified to obey the
security model, the remaining problem is to show that the
kernel implementation is consistent with its specification.
The gap from specification to object code is too great for
current verification methodologies to prove that object code
is consistent with a specification. However, work has been
devoted to developing techniques for proving that the HOL
implementation of a system is consistent with its
specification. The implementation for a system is much more
detailed than a specification, and more attributes must be
shown to be true to support the top-level design assertions.
Thus, verification that the code is consistent with its
specification is much more difficult than verification of
the design properties of the specification. Usually many
theorems must be proved for code verification. Even with
automatic theorem provers the verification requires
significant human and computer resources. Recent work in
verification technology has developed code verification to
the point that it is now feasible to attempt code
verification in some small systems. To date, code
verification has been done only for example systems.
To complete the verification one would have to consider the
compiler and hardware. At present, it is beyond the state
of the art to formally prove that production compilers or
hardware operate as specified. However, since the compiler
and hardware will probably be used on many systems, flaws in
their operation are more likely to be revealed than flaws in
the code for a new system. The software is the area where
there is the greatest need for quality assurance effort.
2.6.4 Summary
Verification is useful for increasing one's confidence that
critical software obeys its requirements. An example of
critical software where verification can be useful is a
security kernel. Verification does not show that a system
is correct in every respect. Rather verification involves
showing consistency between a mathematical model, a formal
specification, and an implementation. Verification that a
formal specification is consistent with a mathematical model
of security has been demonstrated on several recent systems.
Verification of consistency between a specification and a
HOL implementation is on the verge of becoming practical for
small systems, but has not yet been demonstrated except for
example systems. Verification of consistency between the
HOL and machine language is not practical in the near
future. (Verification is discussed in more detail in
section 4.3.)
COMPUTER SECURITY INITIATIVE STATUS
The goal of the Computer Security Initiative is to establish
widespread availability of trusted computer systems. There
are three major activities of the Initiative seeking to
advance this goal: (1) coordination of DoD R&D efforts in
the computer security field, (2) identification of
consistent and efficient evaluation procedures for
determining suitable environments for the use of trusted
computer systems, and (3) encouragement of the computer
industry to develop trusted systems as part of their
standard product lines. This section describes the
Initiative activities in support of 2 and 3 above. (Section
4 addresses item 1.)
3.1 THE EVALUATED PRODUCTS LIST
Section 1-1101 of the Defense Acquisition Regulations (DAR,
formerly called the Armed Services Procurement Regulations
or ASPR) defines a procedure for evaluating a product prior
to a procurement action. This procedure establishes a
Qualified Products List (QPL) of items which have met a
predefined government specification. This procedure can be
used when one or more of the following conditions exist:
"(i)
The time required to conduct one or more of the
examinations and tests to determine compliance
with all the technical requirements of the
specification will exceed 30 days (720 hours).
(Use of this justification should advance product
acceptance by at least 30 days (720 hours).)
(ii) Quality conformance inspection would require
special equipment not commonly available.
(iii) It covers life survival or emergency life saving
equipment. (See 1-1902 (b) (ii) .) "
Whenever any of these conditions exist, a Qualified Products
List process may be established. Under these regulations, a
specification of the requirements that a product must meet
is developed and widely distributed. Any manufacturer who
believes his product meets this specification may submit his
product for evaluation by the government. If the product is
determined to meet the specification, it is entered on a
Qualified Products List maintained by the government agency
performing the evaluation.
Any agency or component seeking to procure an item which
meets the QPL specification can utilize the QPL evaluation
in its procurement process in lieu of performing its own
separate evaluation. The QPL process allows the efficient
and consistent evaluation of complex products and the
general availability of the evaluation results to all DoD
procurement organizations.
There is a provision of the QPL process described in the DAR
that requires all products considered as part of a
particular government RFP to be already on the QPL prior to
issuance of the RFP. If a manufacturer believes that his
product meets the government specification but the
evaluation has not been completed at the time of issuance of
the RFP, that product will be disqualified from that
procurement action. This provision has been viewed by many
as anti-competitive and has been a deterrent to the wide use
of the QPL process.
The Special Committee on Compromising Emanations (SCOCE) of
the National Communications Security Board has established a
modified QPL process for the evaluation of industry devices
which meet government standards for compromising emanations
(NACSEM 5100). Under the provisions of their Preferred
Products List (PPL), a manufacturer supplies the government
with the results of tests performed either by himself or one
of a set of industry TEMPEST evaluation laboratories which
indicate compliance with the NACSEM 5100 specification.
Upon affirmative review of these test results, the product
will be entered on the TEMPEST Preferred Products List. Any
manufacturer may present the results of the testing of his
product to the government at any time including during the
response to a particular RFP.
The evaluation of the integrity of industry developed
computer systems is a complex process requiring considerable
time and resources that are in short supply. A QPL-like
process for disseminating the results of these evaluations
is essential. Under these circumstances, a small team of
highly competent government computer science and system
security experts will perform the evaluation of industry
submitted systems and the results of their evaluations will
be made available to any DoD organization for use in their
procurement process, eliminating the inefficiency and
inconsistency of duplicate evaluations.
As described in section 3.4.1, there are many technical
features which influence the overall integrity of a system.
Some of these features are essential for protecting
information within a system regardless of the type of
application or the environment. However, many of these
features may not be particularly relevant in particular
applications or environments and therefore it may be
reasonable to approve systems for use in some environments
even with known deficiencies in certain technical areas.
For example, in an environment where all users are cleared
to a high level and there is a need-to-know requirement, it
may be reasonable to employ a system which has not
completely eliminated all indirect data paths (see section
2.4.1) on the premise that a high degree of trust has
already been placed in the cleared users and they are not
likely to conspire with another user to attempt to exploit a
complex indirect channel to obtain information for which
they have no need-to-know. Similar arguments can be made
for systems processing information of a low level of
sensitivity. Since indirect paths require two conspiring
users they are difficult to use and in most cases are not
worth the risk of being detected.
Thus, systems with certain technical features should be
usable for applications of a particular type in environments
of a particular type. It is possible to describe classes of
those integrity features required for different application
and risk environments. If there is a process (as described
in section 3.4) for evaluating the integrity of various
trusted systems, then an "Evaluated Products List" (EPL) can
be constructed matching products to these protection classes
(and, thus, to certain application and risk environments).
It appears that the technical integrity measures can be
categorized into a small set of classes (six to nine) with
considerable consistency in determining into which class a
particular system will fit. Figure 3-1 is an example of an
Evaluated Products List, consisting of six classes ranging
from systems about which very little is known and which can
be used only in dedicated system-high environments (most of
the commercial systems today) to systems with technical
features in excess of the current state-of-the-art. The
environments are described in terms of the sensitivity of
the information and the degree of user capability.
The Evaluated Products List includes all computer systems
whose protection features have been evaluated. The first
class implies superficial protection mechanisms. A system
in this class is only suitable for a system-high
classification installation. Most modern commercial systems
satisfy at least the requirements of Class I. As one
progresses to higher classes, the technical and assurance
features with respect to system protection are significantly
strengthened and the application environment into which a
system may be placed can be of a higher sensitivity.
[Figure 3-1 -- Example Evaluated Products List: six protection
classes, with their technical and assurance features, matched
to application environments of increasing sensitivity. The
chart itself is not legible in this copy.]

In discussing the Evaluated Products List (EPL) concept with
various communities within the defense department and the
intelligence community, it has become clear that, while the
technical feature evaluation process is understood and
agreed upon, the identification of suitable application
environments will differ depending upon the community
involved. For example, the Genser community may decide that
the technical features of a Class IV system are suitable for
a particular application, whereas the same application in
the intelligence community may require a Class V system. As
a result, the EPL becomes a matrix of suitable application
environments (figure 3-2), depending upon the sensitivities
of the information being processed. In addition to the
intelligence community and the Genser community, there are
the interests of the privacy and the financial communities
and the non-national security communities whose
requirements, frequently, are less restrictive than those of
the national security communities.
The successful establishment of an Evaluated Products List
for trusted computing systems requires that the computer
industry become cognizant of the EPL concept and of computer
security technology, and that a procedure for evaluating
systems be formulated. Section 3.2 (below) discusses the
focus of operating system protection requirements, the
Trusted Computing Base. Section 3.3 describes the
Initiative's technology transfer activities. Section 3.4
presents a proposed process for trusted system evaluation
and section 3.5 summarizes current, informal system
evaluation activity.
[Figure 3-2 -- The EPL as a matrix: protection classes versus
suitable application environments, by community. The chart
itself is not legible in this copy.]

3.2 THE TRUSTED COMPUTING BASE
A significant prerequisite to achieving the widespread
availability of commercial trusted systems is the definition
of just what the requirements for a trusted system are.
Security kernel prototypes had been built over the years,
but they were specific to particular hardware bases or
operating systems. In order to present the basic concept of
a security kernel and trusted processes in a general manner
that would apply to a wide range of computer systems and
many applications, a proposed specification for a Trusted
Computing Base (a kernel and trusted processes) was prepared
by Grace Nibaldi of The MITRE Corporation [NIBA79a]. The
specification describes the concept of a Trusted Computing
Base (TCB) and discusses TCB requirements. The rest of this
section describes the Trusted Computing Base, and is
excerpted from [NIBA79a]. (We have preceded the section
numbering used in [NIBA79a] by TCB. Thus, Nibaldi's section
3.1 appears below as TCB.3.1.)
TCB.1 Scope
In any computer operating system that supports
multiprogramming and resource sharing, certain mechanisms
can usually be identified as attempting to provide
protection among users against unauthorized access to
computer data. However, experience has shown that no matter
how well-intentioned the developers, traditional methods of
software design and production have failed to provide
systems with adequate, verifiably correct protection
mechanisms. We define a trusted computing base (TCB) to be
the totality of access control mechanisms for an operating
system.
A TCB should provide both a basic protection environment and
the additional user services required for a trustworthy
turnkey system. The basic protection environment is
equivalent to that provided by a security kernel (a
verifiable hardware/software mechanism that mediates access
to information in a computer system); the user services are
analogous to the facilities provided by trusted processes in
kernel-based systems. Trusted processes are designed to
provide services that could be incorporated in the kernel
but are kept separate to simplify verification of both
kernel and trusted processes. Trusted processes also have
been referred to as "privileged," "responsible,"
"semi-trusted", and "non-kernel security-related (NKSR) " in
various implementations. This section documents the
performance, design, and development requirements for a TCB
for a general-purpose operating system.
In this section, there will be no attempt to specify how any
particular aspect of a TCB must be implemented. Studies of
present-day computer architectures [SMIT75,TANG78] indicate
that in the near term a significant amount of software will
be needed for protection regardless of any support provided
by the underlying hardware. In future computer
architectures, more of the TCB functions may be implemented
in hardware or firmware. Examples of specific hardware or
software implementations are given merely as illustrations,
and are not meant to be requirements.
This specification is limited to computer hardware and
software protection mechanisms; not covered are the
administrative, physical, personnel, communications, and
other security measures that complement the internal
computer security controls. For more information in those
areas, see DOD Directive 5200.28 that describes the
procedures for the Department of Defense.
(Section 2 of the TCB specification contains references.
They have been included in the references for this report
rather than being included here as TCB.2.)
TCB.3 General Requirements
TCB.3.1 System Definition
A TCB is a hardware and software access control mechanism
that establishes a protection environment to control the
sharing of information in computer systems. Under hardware
and software we include implementations of computer
architectures in firmware or microcode. A TCB is an
implementation of a reference monitor, as defined in
[ANDE72], that controls when and how data is accessed.
In general, a TCB must enforce a given protection policy
describing the conditions under which information and system
resources can be made available to the users of the system.
Protection policies address such problems as undesirable
disclosure and destructive modification of information in
the system, and harm to the functioning of the system
resulting in the denial of service to authorized users.
Proof that the TCB will indeed enforce the relevant
protection policy can only be provided through a formal,
methodological approach to TCB design and verification, an
example of which is discussed below. Because the TCB
consists of all the security-related mechanisms, proof of
its validity implies the remainder of the system will
perform correctly with respect to the policy.
Ideally, in an implementation, policy and mechanism can be
kept separate so as to make the protection mechanisms
flexible and amenable to different environments, e.g.,
military, banking, or medical applications. The advantage
here is that a change in or reinterpretation of the required
policy need not result in rewriting or reverifying the TCB.
In the following sections, general requirements for TCB
design and verification are discussed.
TCB.3.2 Protection Policy
The primary requirement on a TCB is that it support a
well-defined protection policy. The precise policy will be
largely application and organization dependent. Four
specific protection policies are listed below as examples
around which TCBs may be designed. All are fairly general
purpose, and when used in combination, would satisfy the
needs of most applications, although they do not
specifically address the denial of service threat. The
policies are ordered by their concern either with the
viewing of information--security policies--or with
information modification--integrity policies; and by whether
the ability to access information is externally
predetermined--mandatory policies--or controlled by the
processor of the information--discretionary policies:
1. mandatory security (used by the Department of
-Defense--see DoDD 5200.28), to address the
compromise of information involving national
security;
2. discretionary security (commonly found in general
purpose computer systems today);
3. mandatory integrity; and
4. discretionary integrity policy.
In each of these cases, "protection attributes" are
associated with the protectable entities, or "objects"
(computer resources such as files and peripheral devices
that contain the data of interest), and with the users of
these entities (e.g., users, processes), referred to as
subjects. In particular, for mandatory security policy, the
attributes of subjects and objects will be referred to as
"security levels." These attributes are used by the TCB to
determine what accesses are valid. The nature of these
attributes will depend on the applicable protection policy.
See Nibaldi [NIBA79b] for a general discussion on policy.
See Biba [BIBA75] for a discussion of integrity.
TCB.3.3 Reference Monitor Requirements
As stated above, a TCB is an implementation of a reference
monitor. The predominant criteria for a sound reference
monitor implementation are that it be
1. complete in its mediation of access to data and
other computer resources;
2. self-protecting, free from interference and
spurious modification; and
3. verifiable, constructed in a way that enables
convincing demonstration of its correctness and
infallibility.
TCB.3.3.1 Completeness
The requirement that a TCB mediate every access to data in
the computer system is crucial. In particular, a TCB should
mediate access to itself--its code and private data--thereby
supporting the second criterion for self-protection. The
implication is that on every action by subjects on objects,
the TCB is invoked, either explicitly or implicitly, to
determine the validity of the action with respect to the
protection policy. This includes:
1. unmistakably identifying the subjects and objects
and their protection attributes, and
2. making it impossible for the access checking to be
circumvented.
In essence, the TCB must establish an environment that will
simultaneously (a) partition the physical resources of the
system (e.g., cycles, memory, devices, files) into "virtual"
resources for each subject, and (b) cause certain activities
performed by the subjects, such as referencing objects
outside of their virtual space, to require TCB intervention.
TCB.3.3.1.1 Subject/Object Identification
What are the subjects and objects for a given system and how
are they brought into the system and assigned protection
attributes? In the people/paper world, people are clearly
the subjects. In a computer, the process has commonly been
taken as a subject in security kernel-based systems, and
storage entities (e.g., records, files, and I/O devices) are
usually considered the objects. Note that a process might
also behave as an object, for instance if another process
sends it mail (writes it). Likewise, an I/O device might be
considered to sometimes act as a subject, if it can access
any area of memory in performing an operation. In any case,
the policy rules governing subject/object interaction must
always be obeyed. The precise breakdown for a given system
will depend on the application. Complete identification of
subjects and objects within the computer system can only be
assured if their creation, name association, and protection
attribute assignment always take place under TCB control,
and no subsequent manipulations on subjects and objects are
allowed to change these attributes without TCB involvement.
Certain issues remain, such as (a) how to associate
individual users and the programs they run with subjects;
and (b) how to associate all the entities that must be
accessed on the system (i.e., the computer resources) with
objects. TCB functions for this purpose are described in
TCB.4, "Detailed Requirements."
TCB.3.3.1.2 Access Checking
How are the subjects constrained to invoke the TCB on every
access to objects? Just as the TCB should be responsible for
generating and unmistakably labelling every subject and
object in the system, the TCB must also be the facility for
enabling subjects to manipulate objects, for instance by
forcing every fetch, store, or I/O instruction executed by
non-TCB software to be "interpreted" by the TCB.
Hardware support for checking on memory accesses exists on
several machines, and has been found to be very efficient.
This support has taken the form of descriptor-based
addressing: each process has a virtual space consisting of
segments of physical memory that appear to the process to be
connected. In fact, the segments may be scattered all over
memory, and the virtual space may have holes in it where no
segments are assigned. Whenever the process references a
location, the hardware converts the "virtual address" into
the name of a base register (holding the physical address of
the start of the segment, the length of the segment, and
the modes of access allowed on the segment), and an offset.
The content of the base register is called a descriptor.
The hardware can then abort if the form of reference (e.g.,
read, write) does not correspond to the valid access modes,
if the offset exceeds the size of the segment, or if no
segment has been "mapped" to that address. The software
portion of the TCB need merely be responsible for setting up
the descriptor registers based on one-time checks as to the
legality of the mapping.
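A software rendering of that check makes the hardware's role
concrete. The field names and fault mechanism below are
assumptions for illustration.

    # Illustrative descriptor-based access check.
    from dataclasses import dataclass

    @dataclass
    class Descriptor:
        base: int        # physical address of segment start
        length: int      # segment size
        modes: set       # e.g., {"read"} or {"read", "write"}

    def translate(d, offset, mode):
        # Performed on every reference: abort on a bad access
        # mode or an out-of-bounds offset, else yield the
        # physical address.
        if mode not in d.modes:
            raise PermissionError("invalid access mode")
        if not 0 <= offset < d.length:
            raise IndexError("offset outside segment")
        return d.base + offset

    seg = Descriptor(base=0o40000, length=512, modes={"read"})
    seg_addr = translate(seg, 10, "read")   # permitted
    # translate(seg, 10, "write")           # would abort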
Access checking in I/O has been aided by hardware features
in a variety of ways. In one line of computers, devices are
manipulated through the virtual memory mechanism: a process
accesses a device by referencing a virtual address that is
subsequently changed by hardware into the physical address
of the device. This form of I/O is referred to as "mapped
I/O" [TANG78]. Other methods of checking I/O are discussed
in section TCB.4.1.2.
TCB.3.3.2 Self-Protection

Following the principle of economy of mechanism [SALT75],
the TCB ideally protects itself in the same way that it
protects other objects, so the discussion on the
completeness property applies here as well. In addition,
not uncommonly many computer architectures provide for
multiple protection "domains" of varying privilege (e.g.,
supervisor, user). Activities across domains are limited by
the hardware so that software in the more privileged
domains might affect the operations in less privileged
domains, but not necessarily vice versa. Also, software not
executing in a privileged domain is restricted, again by the
hardware, from using certain instructions, e.g.,
manipulate-descriptor-registers, set-privilege-bit, halt,
and start-I/O. Generally only TCB software would run in the
most privileged domain and rely on the hardware for its
protection. (Of course, part of the TCB might run outside
of that domain, e.g., as a trusted process.) Clearly, if in
addition to the TCB, non-TCB or untrusted software were
allowed to run in the privileged region, TCB controls could
be subverted and the domain mechanism would be useless.
TCB.3.3.3 Verifiability
The responsibility given to the TCB makes it imperative that
confidence in the controls it provides be established.
Naturally, this applies to TCB hardware, software, and
firmware. The following discussion considers only software
verification. Techniques for verifying hardware correctness
have tended to emphasize exhaustive testing, and will no
doubt continue to do so. Even here, however, the trend is
toward more formal techniques of verification, similar to
those being applied to software. One approach is given in
[FURT78]. IBM has done some work on microcode verification.
Minimizing the complexity of TCB software is a major factor
in raising the confidence level that can be assigned to the
protection mechanisms it provides. Consequently, two
general design goals to follow after identifying all
security relevant operations for inclusion in the TCB are
(a) to exclude from the TCB software any operations not
strictly security-related so that one can focus attention on
those that are, and (b) to make as full use as possible of
protection features available in the hardware. Formal
techniques of verification, such as those discussed in the
next section, are promoted in TCB design to provide an
acceptable methodology upon which to base a decision as to
the correctness of the design and of the implementation.
TCB.3.3.3.1 Security Model
Any formal methodology for verifying the correctness of a
TCB must start with the adoption of a mathematical model of
the desired protection policy. A model encompassing
mandatory security and to some extent the discretionary
security and integrity policies was developed by Bell and
LaPadula [BELL73]. Biba [BIBA75] has shown how mandatory
integrity is the dual of security and, consequently, may be
modeled similarly. There are five axioms of the model. The
primary two are the simple security condition and the
*-property (read star-property). The simple security
condition states that a subject cannot observe an object
unless the security level of the subject, that is, the
protection attributes, is greater than or equal to that of
the object. This axiom alone might be sufficient if not for
the threat of non-TCB software either accidentally or
intentionally copying information into objects at lower
security levels. For this reason, the *-property is
included. The *-property states a subject may only modify
an object if the security level of the subject is less than
or equal to the security level of the object.
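Stated as access-check predicates (a sketch, with bare integer
levels standing in for the full level-and-compartment
structure), the two axioms read as follows.

    # The two primary Bell-LaPadula axioms as predicates.
    def simple_security(subject_level, object_level):
        # "No read up": observe only at or below one's level.
        return subject_level >= object_level

    def star_property(subject_level, object_level):
        # "No write down": modify only at or above one's level.
        return subject_level <= object_level

    SECRET, CONFIDENTIAL = 2, 1
    assert simple_security(SECRET, CONFIDENTIAL)    # may read
    assert not star_property(SECRET, CONFIDENTIAL)  # may not write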
The simple security condition and the *-property can be
circumvented within a computer system by not properly
classifying the object initially or by reclassifying the
object arbitrarily. To prevent this, the model includes two
additional axioms: the activity axiom guarantees that all
objects have a well-defined security level known to the TCB;
the tranquility axiom requires the classifications of
objects are not changed.
The model also defines what is called a "trusted subject"
that may be privileged to violate the protection policy in
some ways where the policy is too restrictive. For
instance, part of the TCB might be a "trusted process" that
allows a user to change the security level of information
that should be declassified (e.g., has been extracted from a
classified document but is itself not classified). This
action would normally be considered a tranquility or
*-property violation, depending on whether the object
containing the information had its security level changed or
the information was copied into an object at a lower
security level.
TCB.3.3.3.2 Methodology
A verification methodology is depicted in figure 3-3. In
this technique, the correspondence between the
implementation (here shown as the machine code) and
protection policy is proven in three steps: (a) the
properties of a mathematical model of the protection policy
are proven to be upheld in a formal top level specification
of the behavior of a given TCB in terms of its input,
output, and side effects; (b) the implementation of the
specifications in a verifiable programming language
(languages such as Pascal, Gypsy, Modula, and Euclid for
which verification tools either exist or are currently being
planned [GOOD78b]) is shown to faithfully correspond to the
formal specifications; and finally (c) the generated machine
code is demonstrated to correctly implement the programs.
The model describes the conditions under which the subjects
in the system access the objects. With this approach, it
can be shown that the machine code realizes the goals of the
model, and as a result, that the specified protection is
provided.
Where trusted subjects are part of the system, a similar
correspondence proof starting with an additional model of
the way in which the trusted subject is allowed to violate
the general model becomes necessary. Clearly, the more
extensive the duties of the trusted subject, the more
complex the model and proof.
The TCB is designed to "confine" what a process can access
in a computer system. The discussion above centers around
direct access to information. Other methods exist to
compromise information that are not always as easily
detected or corrected. Known as "indirect channels", they
exist as a side-effect of resource-sharing. This manner of
passing information may be divided into "storage" channels
and "timing" channels. Storage channels involve shared
control variables that can be influenced by a sender and
read by a receiver, for instance when the fact that the
system disk is full is returned to a process trying to
create a file. Storage channels, however, can be detected
using verification techniques. Timing channels also involve
the use of resources, but here the exchange medium is time;
these channels are not easily detected through verification.
An example of a timing channel is where modulation of
scheduling time can be used to pass information.
In order to take advantage of indirect channels, at least
two "colluding" processes are needed, one with direct access
to the information desired, and a second one to detect the
modulations and translate them into information that can be
used by an unauthorized recipient. Such a channel might be
slowed by introducing noise, for instance by varying the
length of time certain operations take to complete, but
performance would be affected.
Storage channels are related to the visibility of control
information: data "about" information, for example, the
names of files not themselves directly accessible, the
length of an IPC message to another user, the time an object
was last modified, or the access control list of a file. It
is often the case that even the fact that an object with
certain protection attributes exists is information that
must be protected. Even the name of a newly created object
such as a file can be a channel if this name is dependent on
information about other files, e.g., if the name is derived
from an incremental counter, used only to generate new file
names. This type of channel can often be closed by making
the data about legitimate information as protected as the
information itself. However, this is not always desirable:
for instance, in computer networks, software concerned only
with the transmission of messages, not with their contents,
might need to view message headers containing message
length, destination, etc.
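The incremental-counter channel, and one way of closing it,
can be sketched directly. The naming schemes below are
invented for illustration.

    # A storage channel through file naming, and a per-level
    # counter that closes this particular path.
    import itertools

    global_counter = itertools.count()       # system-wide

    def leaky_name():
        # Leaks how many files anyone has created.
        return "FILE%06d" % next(global_counter)

    per_level_counters = {}

    def confined_name(level):
        # Reveals only same-level creation activity.
        c = per_level_counters.setdefault(level, itertools.count())
        return "FILE%06d" % next(c)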
Systems designers should be aware of confinement problems
and the threats they pose. Formal techniques to at least
identify and determine the bandwidth of the channels, if not
completely close them, are certainly of value here. Ad hoc
measures may be necessary in their absence.
TCB.3.4 Performance Requirements
Since the functions of the TCB are interpretive in nature,
they may be slow to execute unless adequate support is
provided in the hardware. For this reason, in the examples
of functions given below, hardware implementations
(including firmware/microcode), as opposed to software, are
stressed, with the idea that reasonable performance is only
accomplished when support for the protection mechanisms
exists in hardware. Certainly, software implementations are
not excluded, and due to the malleability of software, are
likely more susceptible to appreciable optimization.
TCB.4 Detailed Requirements
The kinds of functions that would be performed by a TCB are
outlined below. Those listed are general in nature: they
are intended to support both general-purpose operating
systems and a variety of dedicated applications that due to
potential size and complexity, could not easily be verified.
The functions can be divided into two general areas:
software interface functions, operations invoked by
programs, and user interface functions, operations invoked
directly by users. In terms of a security kernel
implementation, the software interface" functions would for
the most part be implemented by the kernel; the user
interface functions would likely be carried out in trusted
processes.
TCB.4.1 Software Interface Functions
The TCB acts very much like a primitive operating system.
The software interface functions are those system calls that
user and application programs running in processes on top of
the TCB may directly invoke. These functions fall into three
categories: processes, input/output, and storage.
In the descriptions that follow, general .input, output, and
processing requirements are stated. Output values to
processes in particular could cause confinement problems
(i.e., serve as indirect channels), by relating the status
of control variables that are affected by operations by
other processes. Likely instances of this are mentioned
wherever possible.
TCB.4.1.1 Processes

Processes are the primary active elements in the system,
embodying the notion of the subject in the mathematical
model. (Processes also behave as objects when communicating
with each other.) By definition, a process is "an address
space, a point of execution, and a unit of scheduling." More
precisely, a process consists of code and data accessible as
part of its address space; a program location at which at
any point during the life of the process the address of the
currently executing instruction can be found; and periodic
access to the processor in order to continue. The role of
the TCB is to manage the individual address spaces by
providing a unique environment for each process, often
called a "per-process virtual space", and to equitably
schedule the processor among the processes. Also, since
many applications require cooperating processes, an
inter-process communication (IPC) mechanism is required as
part of the TCB.
TCB.4.1.1.1 Create Process
A create process function causes a new per-process virtual
space to be established with specific program code and an
identified starting execution point. The identity of the
user causing the process to be created should be associated
with the process, and depending on the protection policy in
force, protection attributes should be assigned, such as a
security level at which the process should execute in the
case of mandatory security.
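As an interface sketch (the names and fields are hypothetical,
since the specification deliberately avoids prescribing an
implementation), a create process function might look like the
following.

    # Hypothetical create-process interface for a TCB.
    from dataclasses import dataclass, field

    @dataclass
    class Process:
        owner: str            # identity of the creating user
        level: str            # protection attribute, e.g., a
                              # security level under mandatory
                              # security
        entry_point: int      # starting execution address
        address_space: dict = field(default_factory=dict)

    def create_process(tcb_table, owner, level, code, entry):
        p = Process(owner=owner, level=level, entry_point=entry,
                    address_space={"code": code})
        tcb_table.append(p)   # the TCB records the new subject
        return p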
TCB.4.1.1.2 Delete Process
A delete process function causes a process to be purged from
the system, and its virtual space freed. The process is no
longer considered a valid subject or object. If one process
may delete another with different protection attributes, an
indirect channel may arise from returning the fact of the
success or failure of the operation to the requesting
process.
TCB.4.1.1.3 Swap Process
A swap process function allows a process to become blocked
and consequently enable others to run. A TCB implementation
may choose to regularly schedule other processes to execute
after some fixed "time-slice" has elapsed for the running
process. If a TCB supports time-slicing, a swap function
may not be necessary. In order to address a denial of
service threat, this will not be the only process blocking
operation: certain I/O operations should cause the process
initiating the operation to be suspended until the operation
completes.
For example, the hardware could support such an operation
through mechanisms that effect fast process swaps with the
corresponding change in address spaces. An example of such
support is a single "descriptor base" register that points
to descriptors for a process' address space, only modifiable
from the privileged domain. The swap would be executed in
little more than the time required for a single "move"
operation.
As was mentioned above, the "scheduling" operation in itself
may contribute to a timing channel that must be carefully
monitored.
TCB.4.1.1.4 IPC Send

A process may send a message to another process permitted to
receive messages from it through an IPC send mechanism. The
TCB should be guided by the applicable protection policy in
determining whether the message should be sent, based on the
protection attributes of the sending and receiving process.
The TCB should also insure that messages are sent to the
correct destination.
An indirect channel may result from returning the success or
failure of "queuing" the message to the sending process,
because the returned value may indicate the existence of
other messages for the destination process, as well as the
existence of the destination process. This may be a problem
particularly where processes with different protection
attributes are involved (even if the attributes are
sufficient for actually sending the message). If such a
channel is of concern, a better option might be to only
return errors involving the message itself (e.g., message
too long, bad message format). Clearly, there is a tradeoff
here between utility and security.
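The tradeoff can be seen in a sketch of the send path: the
policy is consulted first, and the value returned to the
sender concerns only the message itself, never the state of
the destination or its queue. All interfaces here are
hypothetical.

    # Illustrative IPC send that avoids the queue-status channel.
    MAX_MESSAGE = 4096

    def ipc_send(policy_permits, queues, sender, dest, message):
        if not policy_permits(sender, dest):
            return "denied"               # a policy decision
        if len(message) > MAX_MESSAGE:
            return "message too long"     # about the message only
        # Deliberately silent about queue depth and about
        # whether the destination process even exists; either
        # would be an indirect channel between levels.
        queues.setdefault(dest, []).append((sender, message))
        return "ok"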
TCB.4.1.1.5 IPC Receive
A process may receive a message previously sent to it
through an IPC receive function. The TCB must ensure that
in allowing a process to receive the message, the process
does not violate the applicable protection policy.
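The following sketch in C combines the two IPC functions under
one possible mandatory policy, in which a message may flow only
to a process at an equal or higher level; all names are
hypothetical. Note that send reports only errors involving the
message itself, as suggested above:

    #include <string.h>

    #define MAX_MSG     64
    #define TCB_OK       0
    #define TCB_BAD_MSG  1  /* errors about the message itself only */

    struct mailbox {
        int  owner_level;   /* level of the receiving process       */
        char data[MAX_MSG];
        int  length;        /* 0 means empty                        */
    };

    /* Sketch: send succeeds silently or fails silently; the only
     * reportable errors concern the message format, so the status
     * reveals nothing about the destination or its queue.
     */
    int tcb_ipc_send(int sender_level, struct mailbox *dest,
                     const char *msg, int len)
    {
        if (len <= 0 || len > MAX_MSG)
            return TCB_BAD_MSG;

        if (dest != NULL && sender_level <= dest->owner_level
                && dest->length == 0) {
            memcpy(dest->data, msg, (size_t)len);
            dest->length = len;
        }
        return TCB_OK;  /* uniform, whatever became of the message */
    }

    /* Sketch: receive drains the caller's own mailbox and returns
     * the number of bytes delivered, zero if none were pending.
     */
    int tcb_ipc_receive(struct mailbox *own, char *buf)
    {
        int n = own->length;

        if (n > 0) {
            memcpy(buf, own->data, (size_t)n);
            own->length = 0;
        }
        return n;
    }

In this sketch the policy check is made entirely at send time, so
receive need only drain the caller's own mailbox; a TCB could
instead re-check at receive time.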
TCB.4.1.2 Input/Output
Depending on the sophistication of the TCB, I/O operations
may range from requiring the user to handle low-level
device control to hiding all device dependencies from the
user, essentially by presenting I/O devices as simple
storage objects, as described below. Where I/O
details cannot be entirely hidden from the user, one could
classify I/O devices as devices that can only manipulate
data objects with a common protection attribute at one time
(such as a line printer), and those that can manage data
objects representing many different protection attributes
simultaneously (such as disk storage devices). These two
categories can be even further broken down into devices that
can read or write any location in memory and those that can
only access specific areas. Each of these categories
presents special threats, but in all cases the completeness
criterion must apply: the TCB must mediate the movement of
data from one place to another, that is, from one object to
another. Consequently, all I/O operations should be
mediated by the TCB.
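A TCB might record these distinctions in a per-device structure
along the following lines (a sketch in C; all fields are
hypothetical):

    struct device_class {
        int multi_attribute; /* 1: holds objects of many protection
                                attributes at once (e.g., disk);
                                0: one at a time (e.g., printer)   */
        int dma_anywhere;    /* 1: can read or write any memory
                                location; 0: confined to areas
                                assigned by the TCB                */
        int current_level;   /* attribute now assigned, when the
                                device is single-attribute         */
    };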
Some computer architectures only allow software running in
the most privileged mode to execute instructions directing
I/O. As a result, if only the TCB can assume privileged
mode, TCB mediation of I/O is more easily implemented.
In the first category, if access to the device can be
controlled merely by restricting access to the memory object
which the device uses, the problem becomes how to properly
assign the associated memory to a user's process, and no
special TCB I/O functions are necessary. However, if
special timing requirements must be met to adequately
complete an I/O operation, quick response times may only be
possible by having the TCB service the device, in which case
a special operation is still needed.
When the device can contain objects having different
protection attributes, the entire I/O operation will involve
not only a memory object, but also a particular object on
the device having the requisite protection attributes. TCB
mediation in such a case is discussed under "Storage
Objects."
TCB.4.1.2.1 Access Device
The access device function is a directive to the TCB to
perform an I/O operation on a given device with specified
data. The operations performed will depend on the device:
terminals will require read and write operations at a
minimum. The TCB would determine whether the protection
attributes of the requesting process allow it to reference
the device in the manner requested.
This kind of operation will only be necessary when mapped
I/O is not possible.
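A sketch of such an operation in C, assuming a simple mandatory
check (reads require the process to dominate the device level,
writes the reverse) and stubbed driver and scheduler primitives;
all names are hypothetical:

    #define DEV_READ   0
    #define DEV_WRITE  1
    #define TCB_OK     0
    #define TCB_DENIED 1

    struct device { int level; /* attribute of data on device */ };

    /* Driver and scheduler stubs standing in for real primitives. */
    static void start_io(struct device *d, int op, char *buf, int n)
    { (void)d; (void)op; (void)buf; (void)n; }
    static void block_until_complete(struct device *d) { (void)d; }

    /* Sketch: unmapped I/O mediated by the TCB.  The initiating
     * process is suspended until the transfer completes, which
     * also serves the denial-of-service concern noted earlier.
     */
    int tcb_access_device(int proc_level, struct device *dev,
                          int op, char *buf, int len)
    {
        if (op == DEV_READ  && proc_level < dev->level)
            return TCB_DENIED;
        if (op == DEV_WRITE && proc_level > dev->level)
            return TCB_DENIED;

        start_io(dev, op, buf, len);
        block_until_complete(dev);
        return TCB_OK;
    }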
TCB.4.1.2.2 Map Device
The map device operation makes the memory and control
associated with a device correspond to an area in the
process' address space. As in the case of the "access
device" function, a process must have protection attributes
commensurate to that of the information allowed on the
device to successfully execute this operation. This
operation may not be possible if mapped I/O is not available
in the hardware.
TCB.4.1.2.3 Unmap Device
The unmap device function frees a device mapped into the
address space of a process.
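A sketch of the pair in C (hypothetical names), in which mapping
installs a descriptor for the device in the process' virtual
space after an attribute check, and unmapping frees it:

    #define TCB_OK     0
    #define TCB_DENIED 1
    #define MAX_MAPS   8

    struct mapping { unsigned long dev_base; int in_use; };

    struct process {
        int level;                     /* protection attribute    */
        struct mapping maps[MAX_MAPS]; /* device windows in the
                                          per-process space       */
    };

    /* Sketch: map a device into the caller's address space.  Here
     * "commensurate" is taken as exact equality of attributes; a
     * policy might permit dominance instead.
     */
    int tcb_map_device(struct process *p, unsigned long dev_base,
                       int dev_level)
    {
        int i;

        if (p->level != dev_level)
            return TCB_DENIED;

        for (i = 0; i < MAX_MAPS; i++) {
            if (!p->maps[i].in_use) {
                p->maps[i].dev_base = dev_base;
                p->maps[i].in_use = 1;
                return TCB_OK;
            }
        }
        return TCB_DENIED;   /* no free descriptor slot */
    }

    /* Sketch: unmap frees the descriptor, removing the device
     * from the process' address space.
     */
    void tcb_unmap_device(struct process *p, int slot)
    {
        p->maps[slot].in_use = 0;
    }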
TCB.4.1.3 Storage Objects
The term "storage objects" refers to the various logical
storage areas into which data is read and written, that is,
areas that are recognized as objects by the TCB. Such
objects may take the form of logical files or merely
recognizable units of a file such as a fixed-length block.
These objects may ultimately reside on a long-term storage
device, or only exist during the lifetime of the process, as
required. Where long-term devices have information with
varied protection attributes, as discussed in the previous
section, TCB mediation results in virtualizing the device
into recognizable objects each of which may take on
different protection attributes. The operations on storage
objects include creation, deletion, and the direct access
involved in reading and writing.
TCB.4.1.3.1 Create Object
The create object function allocates a new storage object.
Physical space may or may not be allocated, but if so, the
amount of space actually allocated may be a system default
value or specified at the time of creation.
As mentioned above, naming conventions for storage objects
such as files may open an undesirable indirect channel. If
the names are (unambiguously) user-defined or randomly
generated by the TCB, the channel can be reduced.
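A sketch of object creation in which the TCB itself generates
the name, reducing that channel (names hypothetical; rand()
stands in for a stronger generator, and collision handling is
omitted):

    #include <stdlib.h>

    #define MAX_OBJS 128

    struct object {
        long name;     /* TCB-generated identifier */
        int  level;    /* protection attribute     */
        int  in_use;
    };

    static struct object obj_table[MAX_OBJS];

    /* Sketch: a sequential counter would tell a process how many
     * objects others have created; a random name does not.
     */
    long tcb_create_object(int creator_level)
    {
        int i;

        for (i = 0; i < MAX_OBJS; i++) {
            if (!obj_table[i].in_use) {
                obj_table[i].name = (long)rand();
                obj_table[i].level = creator_level;
                obj_table[i].in_use = 1;
                return obj_table[i].name;
            }
        }
        return -1;   /* no space for a new object */
    }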
TCB.4.1.3.2 Delete Object
The delete object function removes an object from the system
and expunges the information and any space associated with
it. The TCB first must verify that the protection
attributes of the process and object allow the object to be
deleted. Indirect channels in this case are similar to
those for "delete process." The fact of the success or
failure of the operation may cause undesirable information
leakage.
TCB.4.1.3.3 Fetch Object
The fetch object function makes any data written in the
object available to the calling process. The TCB must
determine first if the protection attributes of the object
allow it to be accessed by the process. This function may
be implemented primarily in hardware, by mapping the
physical address of the object into a virtual address of the
caller, or in software by copying the data in the object
into a region of the caller's address space.
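A sketch of the copying form in C (names hypothetical); the
mapped form would instead install the object's physical address
in the caller's descriptors:

    #include <string.h>

    #define OBJ_SIZE   512
    #define TCB_DENIED (-1)

    struct object { int level; char data[OBJ_SIZE]; };

    /* Sketch: the check shown requires the process to dominate
     * the object's level in order to read it.
     */
    int tcb_fetch_object(int proc_level, const struct object *obj,
                         char *region)
    {
        if (proc_level < obj->level)
            return TCB_DENIED;

        memcpy(region, obj->data, OBJ_SIZE);
        return OBJ_SIZE;   /* bytes made available to the caller */
    }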
TCB.4.1.3.4 Store Object
The store object function removes the object from the active
environment of the calling process. If the object is mapped
into the caller's virtual space, this function will include
an unmap.
TCB.4.1.3.5 Change Object Protection Attributes
A protection policy may dictate that subjects may change
some or all of the protection attributes of objects they can
access. Alternatively, only trusted subjects might be
allowed to change certain attributes. The TCB should
determine if such a change is permitted within the limits of
the protection policy.
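A sketch of one such check in C, under the assumption that only
the owning subject may change the attribute, and only upward so
that information is never retroactively downgraded (all names
and the policy itself are hypothetical):

    #define TCB_OK     0
    #define TCB_DENIED 1

    struct object { int level; int owner; };

    /* Sketch: a policy reserving such changes to trusted subjects
     * would test a per-process privilege flag instead of the
     * owner field.
     */
    int tcb_change_object_level(int user_id, struct object *obj,
                                int new_level)
    {
        if (obj->owner != user_id)
            return TCB_DENIED;
        if (new_level < obj->level)  /* no downgrading */
            return TCB_DENIED;

        obj->level = new_level;
        return TCB_OK;
    }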
The TCB software interface functions address the operations
executable by arbitrary user or applications software. The
user interface functions, on the other hand, include those
operations that should be directly invokable by users. By
localizing the security-critical functions in a TCB for
verification, it becomes unnecessary for the remaining
software running in the system to be verified before the
system can be trusted to enforce a protection policy. Most
applications software should be able to run securely merely
by taking advantage of TCB software interface facilities.
Applications may enforce their own protection requirements
in addition to those of the TCB; e.g., a data base
management system may require that very small files be
controlled, where the granularity of the files is too small
to be feasibly protected by the TCB. In such a case, the
application would still rely on the basic protection