COMPUTER SECURITY TECHNOLOGY - PREVENTING UNAUTHORIZED ACCESS
Document Type:
Collection:
Document Number (FOIA) /ESDN (CREST):
CIA-RDP95-00972R000100090017-4
Release Decision:
RIFPUB
Original Classification:
K
Document Page Count:
16
Document Creation Date:
December 27, 2016
Document Release Date:
December 14, 2012
Sequence Number:
17
Case Number:
Publication Date:
July 1, 1983
Content Type:
REPORT
File:
Attachment | Size |
---|---|
CIA-RDP95-00972R000100090017-4.pdf | 1.67 MB |
Body:
Declassified and Approved For Release 2012/12/14: CIA-RDP95-00972R000100090017-4
JULY 1983
©THE INSTITUTE OF ELECTRICAL AND
ELECTRONICS ENGINEERS, INC.
This concise overview of secure system developments summarizes
past and current projects, deciphers computer security lingo,
and offers advice to prospective designers.
The Best Available Technologies
for Computer Security
Carl E. Landwehr
Naval Research Laboratory
For more than a decade, government, industry, and
academic centers have investigated techniques for
developing computer systems that can be trusted to en-
force military security rules. A good deal has been
learned about the problem, and a number of approaches
have been explored. But what useful advice can be given
to those about to embark on a system development? If
the application requires a trusted system, how should the
developer proceed? The purpose of this article is to sum-
marize past experience and to guide developers.
A reader new to an area must often master its jargon
and history before he can be comfortable with the sub-
ject. Unfortunately, the history is often described using
the jargon and vice versa, making it difficult to get
started. This article describes both the terms used in the
development of trusted systems and the history of devel-
opment efforts. A glossary of some of the more signifi-
cant buzzwords in computer security begins on the next
page. The Department of Defense has developed terms to
describe modes of operating trusted systems and criteria
to be used in evaluating those systems; these are ex-
plained briefly in the following section. The history of
many projects is capsulized in two tables in the next sec-
tion with additional notes in Appendix A. The reader is
encouraged to skip back and forth among these parts ac-
cording to his needs. Readers desiring a general introduc-
tion to computer security problems are referred to Sec-
tions 1-4 of Landwehr.1
Although this article focuses on security in military
systems, most of the ideas presented also apply to com-
mercial systems that must protect data from disclosure or
modification. The key questions in building trusted
systems are "What rules must the system enforce?" and
"How can we assure that the system enforces them?"
Some of the rules to be enforced may differ among
military and commercial systems, but the approach to
developing a system that can enforce them will be
similar.
There is no simple recipe for creating a system that can
be trusted to enforce security rules. To be trusted, a com-
puter system must reliably enforce a specified policy for
accessing the data it processes while it accomplishes the
functions for which it was built. Since software engineer-
ing has as its goal the production of reliable, main-
tainable programs that perform according to their speci-
fications, the best techniques developed for software
engineering should be the cornerstone of efforts to
develop trusted computer systems.
The principal recommendations to developers are that
they (1) consider the security requirements of each
system as part of its user-visible behavior, rather than as
a separate set of requirements; (2) continue to think
about security throughout the design and implementa-
tion of the system; and (3) use the best available software
engineering technology.2
DoD modes of operation and
evaluation criteria for trusted systems
Within the Department of Defense, classified informa-
tion is labeled with a sensitivity level (Confidential, Se-
cret, or Top Secret) and with a (potentially null) set of
compartments (e.g., NATO). An individual is permitted
access to sensitive information only if he has a clearance
for its sensitivity level, authorization for all of its com-
partments, and a need-to-know or job-related require-
ment to use the information.
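The access rule above can be sketched as a small predicate. This is a minimal illustration, not an official implementation; the level ordering, compartment names, and the representation of need-to-know as a set of topics are all assumptions made for the example:

```python
# Hypothetical sketch of the DoD access rule: access is permitted only if
# the user's clearance dominates the item's sensitivity level, the user is
# authorized for every compartment on the item, and a need-to-know exists.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_access(clearance, authorizations, need_to_know,
               item_level, item_compartments, item_topic):
    return (LEVELS[clearance] >= LEVELS[item_level]          # cleared high enough
            and set(item_compartments) <= set(authorizations)  # all compartments held
            and item_topic in need_to_know)                  # job-related requirement

# A Secret-cleared user with NATO authorization reading Confidential NATO data:
print(may_access("Secret", {"NATO"}, {"logistics"},
                 "Confidential", {"NATO"}, "logistics"))  # True
```

Note that all three conditions must hold: a Top Secret clearance alone does not grant access to data in a compartment the user is not authorized for.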
At present, the DoD uses four modes of operation to
accredit systems processing classified information:*

• Dedicated. All system equipment is used exclusively
by that system, and all users are cleared for and have
a need-to-know for all information processed by the
system.

*Additional constraints can be placed on systems processing compart-
mented information.
COMPUTER
Computer security buzzwords
Terms often used in discussing trusted systems are listed below with defini-
tions and comments. For more complete information on the systems mentioned,
refer to the "Projects and Trends" section and to Appendix A.
Bell-LaPadula model. A security model based on the reference monitor con-
cept, the Bell-LaPadula model has been widely applied in prototype DoD
systems. Its two principal rules are the simple security condition, which
allows a subject at a given security level to have read access only to objects at
the same or lower security levels (no read up), and the *-property, which
prevents a subject from having write access to objects at lower security levels
(no write down). It also provides for trusted subjects that are not bound by the
*-property. When a trusted subject is implemented as a process in a system, it
is known as a trusted process. (For more complete descriptions see Land-
wehr1 or the article by S. Ames, M. Gasser, and R. Schell in this issue.)
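The two rules reduce to simple comparisons on security levels. The sketch below uses illustrative levels and omits compartments and trusted subjects (which the model exempts from the *-property):

```python
# Minimal sketch of the two Bell-LaPadula rules over a linear level order.
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    # Simple security condition: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("Secret", "Confidential"))   # True: reading down is allowed
print(can_write("Secret", "Confidential"))  # False: writing down is blocked
```

The asymmetry is the point: a Secret subject may read Confidential data but may not write into a Confidential object, since that could leak Secret information downward.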
Capabilities. A capability is usually defined as an unforgeable ticket that
grants its holder a specific kind of access to a particular object; capabilities
can be implemented as virtual addresses with additional bits that define allowed
access modes. Capabilities provide a mechanism for controlling access to
objects, not for implementing security policy. The provision of security
depends on the system design, which may use capabilities.
Enforcement of access controls is decentralized in most capability
systems; that is, once a capability is passed to a process, that process is free
to pass it on to other processes. Preventing these "tickets" from getting into
the wrong hands can be difficult, and the evidence suggests that, lacking ap-
propriate hardware, capability-based systems perform poorly.
Current hardware based on capabilities includes the Plessey S250, the IBM
System 38, and the Intel 432. System developments using capabilities include
the UCLA data secure Unix kernel (and hence the SDC communications
kernel), kernelized VM/370, and the PSOS, or provably secure operating
system.
A descriptor is like a capability except that it has meaning only within the
context of a single process; it is meaningless to pass descriptors between pro-
cesses.
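The ticket idea can be sketched as follows. The `Kernel` and `Capability` classes here are hypothetical, and "unforgeable" is only simulated; real systems depend on hardware tags or protected address translation to keep tickets out of reach of user code:

```python
# Toy sketch of capabilities: a ticket naming an object and the access
# modes it grants. Access decisions are based solely on the presented ticket.
class Capability:
    def __init__(self, obj_id, modes):
        self.obj_id = obj_id
        self.modes = frozenset(modes)

class Kernel:
    def __init__(self):
        self._objects = {}

    def create(self, obj_id, data):
        """Create an object and return a full-rights capability for it."""
        self._objects[obj_id] = data
        return Capability(obj_id, {"read", "write"})

    def read(self, cap):
        if "read" not in cap.modes:
            raise PermissionError("capability lacks read mode")
        return self._objects[cap.obj_id]

k = Kernel()
cap = k.create("file1", "hello")
print(k.read(cap))  # hello

# Delegation is decentralized: a holder may pass on a weakened ticket,
# and the kernel never consults a central access list.
weaker = Capability(cap.obj_id, cap.modes - {"read"})
try:
    k.read(weaker)
except PermissionError as e:
    print(e)  # capability lacks read mode
```

The last few lines illustrate the decentralization problem discussed above: once `cap` is handed to another process, nothing in the mechanism itself prevents further propagation.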
Confinement. The problem of preventing a program from leaking sensitive
data is called confinement. For example, a program granted access to a file of
Confidential data by one user might surreptitiously copy the data and transmit
it to another user. Channels used to signal data to other users can be quite
subtle. If information is transmitted through system storage by file names (a
storage channel) or by varying the system load over time (a timing channel),
the information could be obtained by a third, unauthorized party.
A program is said to be confined if it is impossible for it to leak data. In prac-
tice, storage channels can be detected, and thus eliminated, through analysis
of formal program specifications; timing channels can be restricted, but their
elimination usually implies severe performance penalties.
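The file-name storage channel mentioned above can be sketched in a few lines. This is a deliberately simplified illustration (the shared name space is just a Python set, not a real file system): the sender never writes data the receiver can read, yet information flows anyway.

```python
# Illustrative storage channel: a "confined" program cannot write low-level
# files directly, but if the *names* it creates are visible to a lower-level
# observer, it can still leak one bit per name.
shared_names = set()          # file names visible to both parties

def high_sender(secret_bits):
    for i, bit in enumerate(secret_bits):
        if bit:               # presence or absence of a name encodes the bit
            shared_names.add(f"tmp{i}")

def low_receiver(n):
    return [1 if f"tmp{i}" in shared_names else 0 for i in range(n)]

high_sender([1, 0, 1, 1])
print(low_receiver(4))  # [1, 0, 1, 1]
```

Closing this channel means denying the observer any view of the name space, which is exactly the kind of condition that analysis of formal specifications aims to detect.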
Databases. A database is a system of records composed of attribute-value
pairs. Although designs have been proposed for trusted database manage-
ment systems, the problem of providing both information sharing and protec-
tion against unauthorized disclosure has resisted an engineering solution.
None of the trusted-system development efforts described here deals suc-
cessfully with the complexities of securing a shared DBMS, although NRL,
Mitre, and others are researching aspects of this problem. A recent study for
the Air Force3 describes both problems and prospects in this area.
Encryption. Encryption is useful for protecting data that must be stored on or
transmitted over media that cannot be protected against unauthorized
monitoring. In some cases, it can be used to authenticate the information's
source or integrity. Since protection of the data depends on the secrecy of the
key, the protection and distribution of keys are critical factors.
Conventionally, link encryption, which implies encryption and decryption by
each network processor, is used for data flowing over a specific physical path
(link). In end-to-end encryption a message is enciphered once at its source in
a network of processors and not deciphered until it reaches its final destina-
tion. Public key cryptosystems use different keys to encrypt and decrypt a
message so that one of the keys can be publicly held. (See Denning4 for details
on these recent developments.)
The projects described in this article use a more general approach than en-
cryption to protect information; they attempt to secure the operating system
that would control the access to keys. One exception is Recon guard, which
uses encryption to assure the integrity of labels attached to data.
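The end-to-end idea can be illustrated with a throwaway XOR "cipher". This is emphatically not a secure algorithm; the key and message are invented, and the point is only where deciphering happens, not how:

```python
# Toy illustration of end-to-end encryption: the message is enciphered once
# at the source and forwarded as ciphertext; intermediate nodes never hold
# the key, so only the destination can decipher it.
def xor(data, key):
    return bytes(b ^ key for b in data)

src_dst_key = 0x5A            # shared only by source and destination

ciphertext = xor(b"attack at dawn", src_dst_key)
assert ciphertext != b"attack at dawn"   # what every relay node sees

plaintext = xor(ciphertext, src_dst_key).decode()
print(plaintext)  # attack at dawn
```

Under link encryption, by contrast, each node would decrypt with the inbound link key and re-encrypt with the outbound one, exposing plaintext inside every processor along the path; that is why key placement, not the cipher itself, distinguishes the two schemes.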
Guard. A processor that provides a filter between two systems operating at
different security levels or between a user terminal and a system is called a
guard. Typically, a system that contains sensitive data also includes less sen-
sitive data, which could be made available to users not authorized to access
the system as a whole. A guard processor can monitor user requests and
sanitize or eliminate unauthorized data from responses, but it may also require
a human to act as watch officer. Examples include ACCAT guard, LSI guard,
Forscom guard, RAP guard, Recon guard, and the message flow modulator.
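A guard's sanitizing filter can be sketched as follows. The record layout, field names, and levels are invented for illustration; a real guard would also log its decisions and might route borderline cases to a human watch officer:

```python
# Hypothetical guard filter: release only the fields of a response that the
# requesting user's level dominates, dropping the rest.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2}

def guard_filter(record, user_level):
    """record maps field name -> (value, sensitivity level)."""
    return {field: value
            for field, (value, level) in record.items()
            if LEVELS[level] <= LEVELS[user_level]}

record = {"ship": ("USS Example", "Unclassified"),
          "position": ("12N 45W", "Secret")}
print(guard_filter(record, "Confidential"))  # {'ship': 'USS Example'}
```

This is the sanitize-or-eliminate behavior described above: the Confidential-level requester receives the releasable field and never sees that a Secret field was withheld.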
Kernels. A security kernel is a hardware/software mechanism that im-
plements a reference monitor (defined below), but the term has also been used
to denote all security-relevant system software. Implementations usually in-
clude one component called the kernel, which enforces a specified set of
security rules, and other components called trusted processes. These pro-
cesses are trusted not to violate security, although they are not bound by all of
the security rules. A rigorous application of the definition would probably in-
clude trusted processes as part of the security kernel, but keeping the kernel
small enough to assure its correctness formally is difficult under this inter-
pretation.
Many projects have sought to demonstrate the practicality of the security
kernel approach; the results so far are mixed. Systems using this approach to
support general-purpose programming, particularly to emulate existing
operating system interfaces (see Mitre secure Unix, UCLA data secure Unix,
KSOS, and KVM/370), have been less successful, on the whole, than those
that have applied it to support small, special-purpose applications, such as
communications processors (see SDC communications kernel, PPSN,
Sacdin). Performance has been a serious problem for several of the former ef-
forts; some have provided only 10 to 25 percent of the performance of their non-
trusted counterparts. The latter efforts have been more successful at produc-
ing systems with adequate performance.
Some argue that these problems come primarily from choosing capabilities,
rather than descriptors, as the underlying abstraction for the implementation,
and from using hardware that does not support numerous segments with per-
process descriptor lists or have enough execution domains. In the KSOS ef-
fort, performance problems were also attributed to the fact that a single call to
the Unix emulator could trigger multiple calls to the security kernel, with each
kernel call incurring the overhead of a context switch. To improve perfor-
mance, revisions to the secure communications processor, called Scomp,
and to KSOS will provide simple operating-system interfaces to the kernel in
place of Unix emulators, but they will still define Unix-compatible system call
interfaces. (For further information on security kernel design and implementa-
tion, see the article by Ames, Gasser, and Schell in this issue.)
Measures. No useful quantitative measures exist at present for defining
relative system security, and none of the projects listed in the tables has ad-
dressed this problem seriously. The only useful measures seem to be the dif-
ficulty of penetration or the rate of unauthorized information flow out of the
system under specified conditions. The DoD Computer Security Evaluation
Center is establishing criteria for defining the level to which a computing
system can be trusted to enforce DoD security requirements.6
Network architectures. Network security is concerned with both links and
nodes. From the standpoint of military security, key questions include:
• Can the security labels on data arriving at a network port be relied on to
be correct?
• Can the network nodes preserve these labels and detect whether they
have been altered?
• Can third parties monitor traffic on the links?
Off-the-shelf technology in this area is primarily oriented toward link encryp-
tion, but efforts to build systems utilizing end-to-end encryption and nodes
that can enforce security rules have met with some success (see Sacdin, RSRE
PPSN, Autodin II, COS/NFE). The WWMCCS Information System project7 may
contribute some new solutions to these problems. (See also "Protocols.")
Penetration studies. A study to determine whether and how the security
controls of a computer system can be defeated is called a penetration study.
We know of no serious attempt to penetrate a particular system that has not
succeeded. Information on penetration studies applied to the systems listed is
sparse, but an SDC study of the VM/370 (CP/67) system, based on virtual
machines, did conclude that it was significantly more difficult to break into
than some other systems.
Protocols. Procedures for communicating between two or more nodes of a
network are called protocols. The major security problem is how to limit the
unauthorized flow of information through the control fields (e.g., for routing
and error control), which must often be transmitted unencrypted. (See also
"Network architectures.")
Reference monitor. A reference monitor is a computer system component
that checks each reference by a subject (user or program) to an object (file,
device, user, or program) and determines whether the access is valid under
the system's security policy. To be effective, such a mechanism must be in-
voked on every reference, must be small enough so that its correctness can be
assured, and must be tamperproof.5 (See also "Kernels.")
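The mediation idea can be sketched by funneling every access through one small checking object. The `ReferenceMonitor` class and its ACL policy are illustrative stand-ins; complete mediation and tamperproofing are properties a Python sketch can only gesture at, since real systems enforce them in hardware and kernel structure:

```python
# Toy reference monitor: a single chokepoint consults a pluggable policy
# on every access before touching the object store.
class ReferenceMonitor:
    def __init__(self, policy):
        self._policy = policy          # policy(subject, obj, mode) -> bool
        self._store = {}

    def access(self, subject, obj, mode, value=None):
        if not self._policy(subject, obj, mode):
            raise PermissionError(f"{subject} may not {mode} {obj}")
        if mode == "write":
            self._store[obj] = value
        return self._store.get(obj)

# Illustrative policy: a plain access-control list.
acl = {("alice", "log", "write"), ("bob", "log", "read")}
rm = ReferenceMonitor(lambda s, o, m: (s, o, m) in acl)

rm.access("alice", "log", "write", "entry 1")
print(rm.access("bob", "log", "read"))  # entry 1
```

Keeping the checking code this small is what makes the "small enough to assure its correctness" requirement plausible; the policy itself can vary without touching the mediation mechanism.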
Risk assessment. A risk assessment characterizes a specific installation's
assets, threats to them, and system vulnerability to those threats. The most
difficult problem in any risk assessment is determining what value to attach to
the data at risk. Since risk assessments consider conditions at a specific site,
the most general assessment the developer can make is an analysis of the
vulnerabilities of the system as a whole. Vulnerability analyses may be sen-
sitive, since they may reveal exploitable system flaws.
Security model. A system security model defines the security rules that
every implementation must enforce. It may reflect the demands of a general
security policy on a particular application environment. A security model can
act as a basis both for users to understand system operation and for system
design. If stated formally and used as a basis for formal specification proofs,
the security model rigorously defines system security.
Of the projects described in this article, the Mitre kernel, Multics security
enhancements and kernel, KSOS, Scomp, guard, Military Message Experi-
ment systems, Autodin II, KVM/370, and Sacdin are all based on the Bell-
LaPadula model (see above). The UCLA kernel implemented controls on
capabilities and allowed defining specific Bell-LaPadula rules in a policy-
manager module outside the kernel. Application systems, such as MME and
guard, that have been built on systems enforcing the Bell-LaPadula model
have included trusted processes that could downgrade information in controll-
ed ways. Recently, there has been interest in building systems based on
security models derived from particular applications (e.g., NRL SMMS,9
message flow modulator, COS/NFE).
Specification techniques. These techniques describe system behavior.
Specifications that are structured to allow mathematical analysis are called for-
mal specifications. Parts of several systems described in this article have been
specified formally. Typically, they use a layered approach that refines a
relatively abstract top-level specification to produce a second-level specifica-
tion (and sometimes a third as well). Code is based on the lowest level
specification.
Cheheyl et al.10 introduce some of the systems and tools that have been us-
ed to verify security properties, including the specification languages Special,
Ina Jo, Gypsy, and Affirm. The languages Euclid, Ada, and Pascal have also
been used to write specifications. There is evidence (e.g., KSOS and Scomp)
that trained programmers can write formal specifications and that automated
tools can expose some security flaws. However, the major benefit for security
seems to be that humans (system developers, reviewers, or the specifier
himself) can understand a formal specification, despite its often forbidding
appearance, and can find security flaws by reviewing it manually. This finding
is partly a comment on the state of formal specification and verification tools:
the ones currently available, though improving, are research vehicles, not
production-quality aids to software development.
Trusted. A component is said to be trusted if it can be relied on to enforce the
relevant security policy. Thus, a trusted process is one that will not violate
security policy, even though it may have privileges that could allow it to do
so (see "Kernels"). Presumably, there must be some way of estab-
lishing confidence that a program is in fact trustworthy. According to the DoD
CSEC (Reference 6, p. 105), a trusted computing base is ". . . the totality of
protection mechanisms within a computer system, including hardware, firm-
ware, and software, the combination of which are responsible for enforcing a
security policy."
Verification techniques. Verification techniques are methods for determin-
ing that two specifications are consistent. In the context of computer security,
one may wish to verify that a system specification is consistent with (i.e., en-
forces) a certain security model, or that a lower level specification corresponds
to (is a correct refinement of) a higher level specification. If both specifications
are formal, mathematical techniques, including automated theorem proving,
can be used to accomplish formal verification.
Most of the comments on specification techniques apply to verification
techniques as well. Verification of security properties of top-level formal
specifications seems to be within (but just barely) the current state of the art
(see Scomp, KSOS), but verification that program code (in some compilable
language) corresponds to some higher level specification is very difficult with
available tools and usually requires considerable human intervention. The
state of the art is reflected in the message flow modulator project.
Despite apparent progress in research, present verification tools are some
distance from practical use in system development. The benefits from attempt-
ing verification are software engineering ones: the verification process forces
the designer/implementor to think hard about what he is doing and leaves a
well-structured, precise documentation trail for others to review.
Cheheyl et al.10 describe four current systems for verifying (or helping the
user verify) properties of specifications: the Boyer-Moore theorem prover, the
Gypsy theorem prover, the interactive theorem prover, and the Affirm theorem
prover. Other systems now in use include the Shostak theorem prover, the
Stanford Pascal verifier, and Compion's Verus.
Virtual machines. This system structure provides each user with a
simulated version of a bare machine on which he can load and run his own
copy of an operating system. Each user can be isolated in his own machine,
with communication among machines provided by the virtual machine
monitor, which simulates the multiple virtual machines on the physical
machine. KVM/370 is the prime example of a trusted system based on this ap-
proach; Share-7 is another. Each virtual machine can be operated at a
separate security level, and part of the VMM can act as a reference monitor to
control communication among users on different virtual machines. Since this
organization helps isolate individual users at separate security levels, it is the
best available approach if isolation is a primary requirement. However, the
overhead required to pass information between different virtual machines
becomes an important consideration in systems with extensive communication
among users and across security levels.
Virtual memory. A virtual memory structure maps program-generated ad-
dresses into physical ones. The mapping mechanism permits several user
programs to reside in main memory simultaneously without interference.
Because the mapping device participates in every memory access, it can also
act as a part of a reference monitor if access to the implementing registers can
be controlled. This is one tool for organizing computer systems that a develop-
er of a new trusted system should not forego. However, the PDP-11's small
virtual address space and its approach toward device virtualization have been
persistent problems in many of the efforts listed.
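The mapping device's role as part of a reference monitor can be sketched as an address translation that also checks access bits. The page size, table layout, and access modes here are illustrative, not those of any particular machine:

```python
# Hypothetical sketch of virtual-memory translation with access checking:
# every memory reference passes through the mapper, so denying a mapping
# (or an access mode) denies the reference.
PAGE_SIZE = 4096

def translate(page_table, vaddr, mode):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame, modes = page_table[page]      # KeyError if the page is unmapped
    if mode not in modes:
        raise PermissionError(f"page {page} is not {mode}able")
    return frame * PAGE_SIZE + offset

# Page 0 is read-only; page 1 is read/write.
table = {0: (7, {"read"}), 1: (3, {"read", "write"})}
print(translate(table, 4100, "write"))  # 12292 (offset 4 within frame 3)
```

Because no reference can bypass `translate`, controlling who may change `table` (the "implementing registers" of the passage above) is what turns the mapper into part of a reference monitor.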
• System high. All equipment is protected in accor-
dance with requirements for the most classified in-
formation processed by the system. All users are
cleared to that level, but some users may not have a
need-to-know for some of the information.
• Controlled. Some users have neither a security clear-
ance nor a need-to-know for some information pro-
cessed by the system, but separation of users and
classified material is not essentially* under operat-
ing system control.
• Multilevel. Some users have neither a security clear-
ance nor need-to-know for some information pro-
cessed by the system, and separation of personnel
and material is accomplished by the operating sys-
tem and associated system software.
Definitions of these modes are provided in DoD Direc-
tive 5200.28.
The DoD Computer Security Evaluation Center is
developing criteria for evaluating the suitability of com-
puter hardware/software systems for processing classi-
fied information.6 Because of their likely influence on
future developments and because they provide a useful
framework for categorizing many of the systems dis-
cussed below, they are introduced briefly here. The crite-
ria frequently refer to a system's trusted computing base,
or TCB, which is defined as the protection mechanisms
of that system (hardware, firmware, and software) that
are responsible for enforcing a security policy. The
CSEC criteria will probably be revised as they are used,
but their general structure will persist.
The CSEC criteria define four hierarchical divisions,
designated D, C, B, and A. Within each division except
D, there are additional numbered classes; the higher the
class number, the greater the trust that can be placed in
the system. Thus, class C-2 systems provide more safe-
guards than C-1 systems. Within each class, require-
ments are stated for (1) the security policy enforced, (2)
the ability of the system to attribute security-related ac-
tions to responsible individuals (accountability), (3) the
assurance that the system operates in accordance with its
design, and (4) the documentation provided. The divi-
sions are intended to represent major differences in the
ability of a system to meet security requirements, while
the classes within a division represent incremental im-
provements. Thus, a system originally evaluated as, say,
C-1 might be enhanced to meet the requirements for C-2,
but would be unlikely to meet those for B-1 without ma-
jor changes.
Division D, minimal protection, is reserved for sys-
tems that fail to meet the criteria for any of the other divi-
sions; thus, it consists of systems that have been evalua-
ted and found least trustworthy.
Division C, discretionary protection, covers systems
that can be relied on to implement need-to-know con-
trols within a single security level or compartment but
? "Essentially" is not further defined in the official documents. In prac-
tice, controlled mode seems to be applied to systems that operate in a
multilevel mode but bar users with clearances below some specified level
It has also been applied to systems that operate over less than the corn
pletc range of sensitivity levels (e.g., systems that contain data only froni
Confidential through Secret).
cannot reliably separate, say, Secret information from
users with only Confidential clearances. They must pro-
vide audit mechanisms to account for user actions that
affect security. Commercial systems that do not allow
security levels to be specified for user files, but do have
well-designed mechanisms for users to control access to
files, would be likely to fit in this division. Class C-2 pro-
vides for access control and auditing mechanisms with a
finer resolution than those for C-1.
Division B, mandatory protection, includes systems
that incorporate the notion of security levels, can label
data on the basis of these levels, and can protect these
labels against unauthorized modification. Systems in this
category would be expected to segregate, say, Confiden-
tial users from Secret data. The developer is expected to
provide a security model on which enforcement is based
and to demonstrate that a reference monitor has been im-
plemented. There are three subclasses: B1, labeled secur-
ity protection, does not require that the security policy be
stated formally or that storage and timing channels be
addressed; B2, structured protection, adds these re-
quirements and others related to the system structure,
configuration control, and documentation of the system
design. In addition to other constraints on system struc-
ture, B3, security domains, requires that the TCB ex-
clude code not essential to security policy enforcement,
calls for a descriptive top-level specification of the TCB
and a demonstration that it is consistent with the security
model, and requires support for a security administrator.
Division A, verified protection, is for systems with
functions essentially like those required in class B-3, but
with additional assurance that the system design correct-
ly reflects the security model and that the implementa-
tion corresponds to the design. This assurance is to be
obtained in class A-I (verified design) through the use of
formal specification techniques and formal verification
that the specifications correspond to a formal model of
security policy, and enhanced in class A-2 (verified im-
plementation) through formal verification that the
source code correctly implements the specification.
How these criteria will be used to determine the suit-
ability of particular systems to operate in the modes
listed above remains to be seen. The evaluation of a
system and its certification to operate in a particular en-
vironment are conducted separately. It seems unlikely,
however, that a system with a relatively low rating, say,
C-1, would be certified to operate in multilevel mode.
Projects and trends
This section summarizes projects over the last 10 to 15
years, including those underway, that have built or are
building computer systems intended to enforce security
rules. Because of space limitations, not all efforts in this
area are enumerated, and some projects that could be de-
scribed separately have been combined. The focus is on
publicly documented systems that produced (or plan to
produce) at least a demonstration of operation, though a
few study efforts of particular significance are included.
Directions that systems now in the planning stages are
likely to take are discussed at the end of the section.
Structure of the tables. The following questions were
asked about each system:
• When did its development begin?
• Who sponsored (i.e., paid for) it?
• Who built it?
• What were its security goals?
• What approach was used to reach these goals?
• Were formal specifications used? If so, in what
language were they written? Who wrote them?
• Was any verification done? If so, what tools were
used?
• On what hardware was the system built?
• Which programming language or languages were
used?
• If built, did the system perform adequately for some
practical use?
• If installed, was it certified for operation with
classified data? If so, in what mode?
• What rating might this system receive under the
DoD CSEC evaluation criteria?
• Was/is it installed? Where?
• What lessons were learned?
The answers to these questions (except the last) are
given in Tables I and 2. Table 1 covers projects that are
now complete; Table 2 covers those that are still in pro-
gress. Within each table, projects are ordered roughly ac-
cording to when they were initiated. If a particular entry
is enclosed in brackets, the information represents a plan
or intention, rather than an accomplishment. A question
mark appended to an entry indicates the information is
uncertain or unknown.
Appendix A summarizes noteworthy aspects and les-
sons learned about the projects listed. The reader is cau-
tioned that some of this information is subject to change
(particularly for projects underway or planned) and parts
of it are the personal conclusions of the author. The
bibliography (Appendix B) provides additional sources
Table 1. Completed projects to develop trusted systems.

Adept-50. Initiated 1967; sponsor DARPA; builder SDC. Goals: general-purpose
time-sharing with security. Approach: high-water-mark model, labeled objects.
Formal spec: no. Verification: no.

Multics Security Enhancements. Initiated early 70's; sponsors AF, Honeywell;
builders Honeywell, MIT(?). Goals: general-purpose timesharing with security.
Approach: retrofit checks for Bell-LaPadula model. Formal spec: no.
Verification: no.

Mitre Brassboard Kernel. Initiated early 70's; sponsor AF; builder Mitre.
Goals: prototype security kernel. Approach: Bell-LaPadula model as basis.
Formal spec: yes. Verification: manual.

UCLA Data Secure Unix. Initiated early 70's; sponsor DARPA; builder UCLA.
Goals: Unix with security. Approach: security kernel plus Unix emulator.
Formal spec: yes. Verification: some.

Military Message Experiment. Initiated late 70's; sponsors DARPA, Navy;
builders ISI, BBN, MIT. Goals: multilevel secure message system experiment.
Approach: simulated security kernel interface built on pseudokernel. Formal
spec: no. Verification: no.

Share-7. Initiated mid 70's; sponsor Navy; builder FCDSSA. Goals:
general-purpose timesharing with security. Approach: based on virtual machine
monitor architecture. Formal spec: no. Verification: no.

Secure Archival Storage System. Initiated 1978; sponsor Navy; builder Naval
Postgraduate School. Goals: secure archival file system. Approach:
multi-microprocessor-based kernel. Formal spec: no. Verification: no.

Damos. Initiated 1979; sponsor Christian Rovsing; builder Christian Rovsing.
Goals: operating system for communications. Approach: security kernel with
trusted processes on capability architecture. Formal spec: Vienna Definition
Language. Verification: no.

Autodin II. Initiated late 70's; sponsor DCA; builders Western Union, CSC,
FACC. Goals: multilevel secure packet switch. Approach: security kernel
architecture. Formal spec: yes (Ina-Jo)?. Verification: yes (ITP)?.

SDC Communications Kernel. Initiated late 70's; sponsor DoD; builder SDC.
Goals: multilevel secure packet switch. Approach: kernel tailored from the
UCLA Unix security kernel. Formal spec: no?. Verification: no?.

Message Flow Modulator. Initiated 1981; sponsor Navy; builder ?. Goals:
filter message traffic. Approach: trusted processes directly on hardware,
code verification. Formal spec: Gypsy. Verification: Gypsy.
COMPUTER
Declassified and Approved For Release 2912/12/14: CIA-RDP95-00972R000100090017-4
Declassified and Approved For Release 2012/12/14: CIA-RDP95-00972R000100090017-4
of information on the projects listed, but since the
literature consists largely of technical reports, some may
be difficult to obtain.
Comments on the projected CSEC ratings. A com-
ment is in order concerning the estimated DoD CSEC
evaluation classes listed for these systems. The ratings
listed should be considered only as estimates reflecting
the author's knowledge of the systems and the evaluation
criteria; they are not based on the kind of exhaustive
review that an actual evaluation would entail. Never-
theless, they should provide some indication of how the
criteria might be applied.
One anomaly worth noting occurs in the estimated
evaluation of the message flow modulator, or MFM.
Generally, systems that have employed formal tech-
niques for design and implementation achieve higher rat-
ings than those that have not. Despite the fact that all of
the source code for the MFM has been mathematically
verified against a set of assertions, it has an estimated
Table 1 (continued)
(Project | Hardware | Prog lang | Perf | Cert | Est eval | Installations):

Adept-50 | IBM/360? | asm, MOL/360 | Yes | System high | B-1 | Pentagon, CIA, SDC
Multics Security Enhancements | Honeywell 6180, DPS8/70M | PL/I | Yes | Controlled | B-2 | Pentagon AFDSC (DoD CSEC)
Mitre Brassboard Kernel | PDP-11 | SUE-11 | Demo | No | A-1 | Mitre
UCLA Data Secure Unix | PDP-11 | UCLA Pascal | Demo | No | A-1 | UCLA
Military Message Experiment | PDP-10? | Bliss, BCPL, ? | Yes | System high | B-2 | Cincpac
Share-7 | AN/UYK-7 | CMS-2 | Yes | [Controlled] | B-1 | FCDSSA sites
Secure Archival Storage System | Zilog 8000 | asm | Demo | [Multilevel] | B-3 | Naval PG School
Damos | CR80 | ? | Yes | System high | B-2 | ?
Autodin II | ? | asm | ? | [Multilevel] | B-3 | 2 test sites
SDC Communications Kernel | PDP-11 | UCLA Pascal | Yes | [Multilevel] | B-2 | SDC, DoD
Message Flow Modulator | ? | Gypsy | Yes | [Multilevel?] | C-2 | [OSIS]
rating of C-2. The reason for this rating lies in the
criteria's bundling of specific security requirements and
requirements for assurance. A system cannot be rated
above division C unless it meets certain requirements
(e.g., labeling of objects with security levels), regardless
of the level of assurance that the system correctly im-
plements its specifications. The MFM, because of its in-
tended use, does not need to meet most of the criteria re-
quired of a class B system, yet it requires a level of
assurance that corresponds to the requirement imposed
on a class A system.
Recent developments. Systems now in the planning
stages include several efforts to build trusted network in-
terfaces. These interfaces would assure that security la-
bels transmitted to or from the network were not modi-
fied improperly and would prevent network devices from
receiving information marked at levels for which they
were not authorized. Trusted interface units could make
Abbreviations used in Tables 1-2
Notes:
? data unknown or uncertain
[] enclosed data indicates plans, not accomplishments
Abbreviations:
AF Air Force
AFDSC Air Force Data Services Center
asm Assembly language (for machine indicated)
BBN Bolt Beranek and Newman, Inc.
Boyer-Moore Boyer-Moore theorem prover (SRI)
CIA Central Intelligence Agency
Cincpac Commander-in-Chief, Pacific
CSC Computer Sciences Corp.
DARPA Defense Advanced Research Projects Agency
DEC Digital Equipment Corp.
Demo System built as prototype or demonstrator only
DCA Defense Communications Agency
FACC Ford Aerospace and Comm. Corp
FCDSSA Fleet Combat Direction Systems Support Activity
Forscom Forces Command (Army)
ISI Information Sciences Institute
ITP Interactive theorem prover (SDC)
MARI Microprocessor Applications Research Institute (England)
MOL/360 Machine Oriented Language for IBM/360
NASA National Aeronautics and Space Administration
NB System never built
NC System not yet complete enough for evaluation
NSA National Security Agency
RSRE Royal Signals and Radar Establishment (Malvern, England)
SDC System Development Corporation
SDL System Designers, Ltd. (England)
SLS Second-level specification
SRI SRI International
TLS Top-level specification
VMS Operating system for DEC VAX computer
WIS/JPM WWMCCS joint program manager
WSE WWMCCS system engineer
WWMCCS World-Wide Military Command and Control System
3LS Third-level specification
possible a multilevel secure network, to which host ma-
chines operating at different security levels could be con-
nected.7 The Army is currently procuring designs for its
Military Computer Family operating system that are in-
tended to allow multilevel secure operation. Finally,
there is increasing interest in developing security models
for particular applications. The MFM, as just noted, en-
forces a somewhat different notion of security than that
defined by the Bell-LaPadula model. Work at the Naval
Research Laboratory is investigating the use of applica-
tion-based security models in the development of
military message systems.9

Table 2. Projects underway to develop trusted systems
(Project | Initiated | Sponsors | Builders | Goals | Approach | Formal spec | Verification):

KVM/370 | 1976 | DARPA | SDC | General-purpose time-sharing with security | Retrofit security kernel to virtual machine simulator | Yes (Ina-Jo) | Yes (ITP)
PPSN (SUE) | 1977 | RSRE | RSRE | End-end-encryption packet switch | Kernel implementing virtual machines | Some | Some (manual)
KSOS | Late 70's | NSA, DARPA | FACC, Logicon | Production prototype, secure Unix | Security kernel with Unix emulator, trusted processes | Yes (Special) | Yes (Boyer-Moore)
Scomp | Late 70's | Navy? | Honeywell | Production prototype, secure Unix | Security kernel with Unix emulator, trusted processes, hardware assistance | Yes (Special) | Yes (Boyer-Moore)
Sacdin | Late 70's | AF | ITT?, IBM | Secure communications processor | Security-kernel-based architecture, trusted processes | Yes (TLS, Special) | Yes (TLS, Boyer-Moore)
Guard | Late 70's | Navy, DARPA? | Logicon | Sanitize, filter between DBMSs | Trusted processes on KSOS | Some (Gypsy) | Some (Gypsy)
COS/NFE | Late 70's | DCA | DTI | Multilevel secure network front end for WWMCCS | Security kernel (Hub), trusted modules | TLS, SLS (Ina-Jo) | ?
DEC OS Security Projects | 1979 | DEC | DEC | Add security to VMS, Tops-20 | Retrofit Bell-LaPadula security checks (Tops-20, VMS); build kernel (VMS only) | [Yes, kernel (VMS) only] | ?
Forscom Guard | 1980 | WIS/JPM | Honeywell | Filter traffic between host and terminals | Trusted processes on security kernel | Yes (Gypsy) | ?
LSI Guard | 1981? | Navy | ? | Guard system for single user | Trusted processes on bare hardware | Yes (Euclid) | ?
PSOS | 1980? | ? | Honeywell? | Secure capability-based operating system | Formally specify and verify entire OS | Yes (TLS, SLS) | ITP
RAP Guard | Early 80's | NASA | CSC, Sytek | Filter terminal-host communications, no operator | Trusted processes? | [Yes, kernel only?] | Manual?
SDC Secure Release Terminal | 1981 | SDC | SDC | Trusted release station (guard) | Trusted processes on trusted task monitor | TLS | Yes (manual)
Recon Guard | 1981 | ? | Sytek | Guard between network and database | Trusted processes on bare hardware | TLS, SLS, 3LS? (Ina-Jo) | Special?, manual
GSOS | 198? | Gemini Corp. | Gemini Corp. | Secure real-time operating system | Security kernel architecture | TLS, [SLS] | ITP?
Distributed Secure System | 198? | RSRE | SDL Ltd.?, MARI | General-purpose multilevel secure local net | Trusted processes, encryption-based authentication; trusted network interface | ? | [Yes (Euclid)?]
Table 2 (continued)
(Project | Hardware | Prog lang | Perf | Cert | Est eval | Installations):

KVM/370 | IBM 4331 | Jovial J3 | Demo | [Multilevel] | A-1 | Mitre, SDC
PPSN (SUE) | PDP-11/34 | asm, Coral 66 | Yes | ? | C-2 | RSRE
KSOS | PDP-11 | Modula | No | [Multilevel] | A-1 | Logicon, Mitre
Scomp | Honeywell Level 6 | UCLA Pascal | NC | [Multilevel] | A-1 | Mitre, DoD CSEC, Logicon
Sacdin | IBM Series 1 | asm | NC | [Multilevel] | A-1 | [SAC sites]
Guard | PDP-11 | C, Modula | NC | [Multilevel] | B-1 | ?
COS/NFE | PDP-11 | Pascal | NC | [Controlled] | A-1 | [Demo only]
DEC OS Security Projects | DEC-20, VAX-11 | ? | NC | [Multilevel] | B-1, A-1 | [DEC]
Forscom Guard | Scomp | C | NC | [Multilevel] | B-3 | Forscom
LSI Guard | DEC LSI-11 | Euclid | Yes | [Multilevel] | B-3 | [Navy]
PSOS | Honeywell (new) | Ada | NB | [Multilevel] | A-1 | ?
RAP Guard | Intel 286?, VAX-11/730 | ? | NC | [Controlled] | A-1 | [NASA]
SDC Secure Release Terminal | LSI-11 (Intel 8086) | Modula | Yes | [Multilevel] | A-1 | SDC, [DoD]
Recon Guard | Intel 8086 | Pascal | NC | [Multilevel] | B-3 | ?
GSOS | Intel 286 | PLM, PL/I | NC | [Multilevel?] | B-3 | ?
Distributed Secure System | PDP-11 | [Euclid] | NC | [Multilevel] | A-1 | [RSRE]
Advice for the developer
What are the lessons for the developer of a new
system? It is important to distinguish between technol-
ogies that are available and useful today and research ap-
proaches that appear promising but are unproven. At
present, no technology can assure both adequate and
trustworthy system performance in advance. Those
techniques that have been tried have met with varying
degrees of success, but it is difficult to measure their suc-
cess objectively, because no good measures exist for
ranking the security of various systems.
Listed below, under the appropriate phase of the sys-
tem development cycle, are the best available approaches
for incorporating system security. Many of them simply
represent good software engineering practice.
Requirements. Because security requirements affect
the entire structure of the system software, their advance
determination is crucial. Security is not an add-on fea-
ture. It must be incorporated throughout a system, and
the statement of system security requirements must re-
flect this fact.
A requirements document should state what modes of
operation are needed for the system initially and whether
future operation in other modes is planned. However, if
the requirement for security is isolated and stated with-
out setting the context appropriately ("The system shall
be secure"), bidders may view the security requirement
separately from other system requirements and propose
undesirable solutions (e.g., solutions based solely on
physical security).
The criteria for evaluating trusted systems6 being
developed by the DoD CSEC should also be helpful in
this phase. A project manager might use these criteria as
a way of specifying security requirements that corre-
spond to the needs of his system.
From the start, the system developer should consider
how the system's security requirements affect its user-
visible behavior. In this way trade-offs can be made
where required, and a coherent design, integrating the
needs for function and security, can be obtained. If this
procedure is not followed, bidders may claim that they
can build the system, and later, when they are well into
development, discover that requirements conflict.
An illuminating example is that of the military data-
base system that normally contains only unclassified
data. During crises, some of its contents may become
classified. If the requirement to handle classified data
was not implemented initially, its users must either build
a duplicate system, attempt to retrofit security to the ex-
isting system, or operate in a manual mode during crises.
One way of explicitly integrating security requirements
with functional requirements is to specify the system's
flows of information (especially the flows of sensitive in-
formation) and flows of authorization. Look especially
for problems in the user interface, in operations that
cause information to flow into or out of the system, and
in places where the classification of information could
be changed.
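One hedged way to make such a flow specification concrete is to enumerate the flows as data and scan them for exactly the trouble spots just named: flows that change classification and flows that cross the system boundary. This is a sketch only; the entity names, levels, and record format are invented for illustration.

```python
# Sketch: enumerate a system's information flows as data, then scan for
# the cases the text flags: flows that lower a classification and flows
# that cross the system boundary. All names here are illustrative.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2}

# (source, destination, level at source, level at destination)
flows = [
    ("terminal",  "msg-store", "Secret",       "Secret"),
    ("msg-store", "printer",   "Secret",       "Unclassified"),  # downgrade
    ("msg-store", "network",   "Confidential", "Confidential"),  # leaves system
]

SYSTEM = {"terminal", "msg-store", "printer"}

def review(flows):
    findings = []
    for src, dst, lsrc, ldst in flows:
        if LEVELS[ldst] < LEVELS[lsrc]:
            findings.append(f"{src}->{dst}: classification lowered ({lsrc} to {ldst})")
        if src not in SYSTEM or dst not in SYSTEM:
            findings.append(f"{src}->{dst}: crosses the system boundary")
    return findings

for finding in review(flows):
    print(finding)
```

Each finding is a place where the requirements document should say explicitly what authorization is needed.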
Trade-offs in providing security should be identified
and assessed as early as possible. For example, physical
controls, such as locks and guards, and computer hard-
ware and software controls are alternative techniques for
protecting information stored on computers. If they are
not assessed as alternatives early in the system develop-
ment, however, physical controls will be the de facto
choice, because they are much easier to provide after the
fact than hardware and software controls. Unfortunate-
ly, physical controls frequently restrict user functions
more than comparable software controls.
Trade-offs exist within the software and hardware
design as well. Many of these need to be addressed only
in the design, but the system requirements should pro-
vide guidelines on the granularity of protection needed,
critical information that requires special protection, ac-
ceptable bandwidths for leakage channels, and the like.
The designers must face these problems whether or not
guidelines are provided; it is in everyone's interest that
their decisions be informed, not ad hoc.
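A requirement on acceptable leakage-channel bandwidth can be checked with back-of-the-envelope arithmetic: the channel carries at most log2 of the distinguishable states per signaling opportunity. The numbers below are invented for illustration, not drawn from any system in the tables.

```python
# Sketch: estimating a leakage channel's bandwidth so that a requirement
# such as "no channel above 10 bits/s" can be checked. Illustrative
# numbers only.
import math

signals_per_second = 20.0   # how often the sender can modulate the shared resource
distinct_states = 4         # states the receiver can reliably distinguish

# each signal carries at most log2(states) bits
bandwidth = signals_per_second * math.log2(distinct_states)
print(f"estimated channel bandwidth: {bandwidth:.1f} bits/s")

limit = 10.0
print("within requirement" if bandwidth <= limit else "exceeds requirement")
```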
Design. To develop a trusted system, security must be
considered early and often during the design phase. As
with requirements, security cannot be added to an ex-
isting design. The most prominent design strategy at pre-
sent uses a security kernel, but a security kernel can im-
pose significant performance burdens, especially if ap-
plied to inappropriate hardware. The most successful
kernels have been tailored to support particular applica-
tions.
Whether a kernel-based structure is chosen or not, a
system designer would be well-advised to do the follow-
ing:
(1) Study system functions, focusing on requirements
for the flow of sensitive information and the interface
with the user. Specifically, note under what conditions
sensitive information is disclosed or modified, has its
classification changed, or enters or leaves the system. In-
clude mechanisms to audit functions that might leak in-
formation, change its classification, or change its set of
users.
(2) Construct a simple model of the flow of informa-
tion and authority within the system. The model need not
be formal, but it should be brief, precise, and simple
enough to be understood by both implementors and
users.
(3) Keep the model and the design consistent. If one
requires changes, make corresponding changes to the
other.
(4) Develop a hierarchical set of design specifications.
Include a top-level specification that reflects the basic
functions of the system and the information flow model,
and a program specification of sufficient detail so that
outside reviewers (and new personnel) can review the
code and can trace the information flows and
authorization mechanisms. As the design is created, have
it reviewed at regular intervals by both potential users
and individuals knowledgeable in computer security. The
kernelized secure operating system, Honeywell's secure
communications processor, SDC's communications ker-
nel, and other projects have used this approach with
good results. Formal specifications may be helpful, but
their use should be contingent on training the implemen-
tors to read and update them competently. The software
tools now available for formal specification and verifica-
tion are still primarily research vehicles, although this
situation may change within a few years. One of the ma-
jor efforts at the DoD CSEC is to collect these tools, im-
prove them, and make them available to the computer se-
curity community.
(5) Choose hardware that reduces security problems.
Generally, suitable hardware provides good mechanisms
for isolating different computations, simple and efficient
ways for controlling the flow of information between
isolated contexts, and a uniform way of treating different
kinds of objects. The following features will help in the
implementation of trusted systems: virtual memory with
controlled access to the mapping registers; a device inter-
face that offers the possibility of uniform treatment of
memory, files, and devices; and the ability to change ad-
dressing contexts rapidly. Several machine states that
restrict access to critical portions of the instruction set
will not be required if all data accesses are mediated by
the virtual memory mechanism, but they can be used as
another way of protecting critical operating system data
(e.g., the contents of the mapping registers).
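The brief, precise model called for in step (2) above can be only a few lines. As an illustration, this sketch encodes the two Bell-LaPadula rules discussed in this article, simple security ("no read up") and the *-property ("no write down"); the level names are invented, and a real model would also cover the trusted operations that bypass these rules.

```python
# Sketch of a brief information-flow model in the spirit of design step
# (2), using the Bell-LaPadula rules: a subject may not read objects
# above its level and may not write objects below its level.
LEVELS = {"Unclassified": 0, "Secret": 1, "TopSecret": 2}

def may_read(subject_level, object_level):
    # simple security property: read only at or below the clearance
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # *-property: write only at or above the subject's level
    return LEVELS[subject_level] <= LEVELS[object_level]

assert may_read("Secret", "Unclassified")
assert not may_read("Secret", "TopSecret")       # no read up
assert may_write("Secret", "TopSecret")
assert not may_write("Secret", "Unclassified")   # no write down
```

A model this small can be read and checked by implementors and users alike, which is the point of step (2).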
Implementation. The implementation language should
have a well-understood, well-supported compiler. Some
languages (e.g., Gypsy, Euclid) have been intentionally
designed so that programs can be verified, and verifiable
subsets have been proposed for others (e.g., Pascal,
DoD's Ada).
Experience indicates that disciplined use of a conven-
tional language with a compiler that has been exercised
over a broad range of programs is preferable to using a
relatively untried compiler and a "verifiable" language.
Progress in practical compilers for verifiable languages is
expected to continue, however, and developers should
keep up-to-date. Language structures that lead to am-
biguities (aliasing) in code (e.g., Fortran Equivalence)
should be avoided because they make it hard for both
human readers and automated tools to detect the true in-
formation flow in a program.
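A small illustration of why aliasing defeats flow analysis (Python is used here purely for illustration; the article's example is Fortran EQUIVALENCE, but any language in which two names can denote one storage location raises the same difficulty):

```python
# Sketch: two names bound to one object. A tool tracking flows by
# variable name would miss that a write through `log_buffer` also
# changes `secret_buffer` -- the difficulty EQUIVALENCE creates.
secret_buffer = ["SECRET data"]
log_buffer = secret_buffer          # alias: both names share one object

log_buffer[0] = "harmless entry"    # looks like a write to the log only
print(secret_buffer[0])             # but the "secret" storage changed too
```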
In practice, crucial facilities in a compiler include those
that allow the parts of large programs to be compiled
separately and those that enable generation of efficient
object code, so that security overhead is not compound-
ed. Assembly language should be assiduously avoided.
If it becomes necessary for the code to deviate from the
specifications, the specifications should be updated in
parallel with the code changes. Good coding practices
benefit the security of the system, as well as its main-
tainability, and should make errors easier to find. Care-
ful attention should be paid to configuration control.
Verification. Demonstrating by formal means that a
formal, top-level specification obeys the Bell-LaPadula
security model is within the state of the art but is far from
routine. The developer must understand just what is de-
monstrated in such a verification, namely, the correspon-
dence between two formal statements: one about the sys-
tem security model and one about the system design. Vir-
tually all applications based on the Bell-LaPadula model
have required trusted subjects of some kind, and it has
been difficult to formalize the properties these trusted
subjects enforce. Consequently, formal proofs that the
system enforces the axioms of the Bell-LaPadula model
typically do not apply to the trusted subjects. Assurance
that the trusted subjects preserve system security requires
either a separate model, specification, and proof or reli-
ance on manual methods.
Automated tools (e.g., Ina Jo, Gypsy) have been suc-
cessfully applied to the verification of trusted subjects
and to systems based on security models other than the
Bell-LaPadula model. However, these applications are
far from routine, and demonstrating that the chosen for-
mal security model corresponds to the user's intuitive no-
tion of security can be a difficult task.
Automated verification of the security properties of
substantial amounts of source code is beyond the state of
the art. One of the largest pieces of code verified to date
is in Gypsy, a Pascal derivative; it consists of well under
1000 lines (excluding those not producing executable
code). These techniques hold promise for the future, but
there is substantial risk in employing them in a system
development that is to start next week. The verification
of particular properties of a specification or small pieces
of code can be done by hand or, if written in suitable
languages, with machine assistance.
Despite this perhaps pessimistic assessment of the state
of the art, there are benefits in adopting a formal ap-
proach. Formal verification necessitates a formal design
specification, and constructing this specification requires
the designer to be precise and to think his problem
through very carefully. Thus, design flaws are more apt
to be discovered prior to implementation. Although a
few security flaws have been found during formal verifi-
cation, far more have probably been removed because
the designer had already noticed them in the formal spe-
cification. Finally, as noted in the comments on design,
the implementors must be capable of reading and updat-
ing the specification. An airtight specification that is ig-
nored by the implementors is worse than useless, since it
may lend unwarranted credibility to the system's securi-
ty.
Thorough testing continues to be a necessary partner
to the careful design and implementation of systems that
will be operated in a multilevel secure mode. When test
plans are constructed, specific attention must be given to
testing the system's security properties. If possible, the
developer should arrange for penetration tests and esti-
mate the bandwidths of leakage channels that cannot be
eliminated.
Operation. From the standpoint of security, the rele-
vant aspects of system operation are the controls over
changes to the system software and configuration and
the conscientious use of the system security mechanisms.
Configuration control is of central importance in the
operation of a trusted system, and the design of the
system itself should assist in this task. The maintenance
of various levels of specification, good coding practices,
and use of high-level languages all help.
One often neglected aspect of operations is the moni-
toring of the audit trails that are routinely collected.
Usually they are too voluminous (and boring) for ade-
quate manual checking, but automated tools for this pur-
pose need careful scrutiny. Defeating the tools becomes
equivalent to defeating the auditing controls.
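A minimal sketch of such an automated audit-reduction tool follows; the record format and selection rules are invented for illustration, and, as the text warns, the filter itself becomes part of the security perimeter and needs the same scrutiny.

```python
# Sketch: reduce a voluminous audit trail to the records worth a human's
# attention. The record format and rules are invented; note that
# defeating this filter would defeat the auditing control.
audit_trail = [
    {"user": "ops1", "event": "login",     "outcome": "ok"},
    {"user": "ops1", "event": "read",      "outcome": "ok"},
    {"user": "ops2", "event": "login",     "outcome": "fail"},
    {"user": "ops2", "event": "login",     "outcome": "fail"},
    {"user": "ops2", "event": "downgrade", "outcome": "ok"},
]

def interesting(rec):
    # flag failures and any operation that changes a classification
    return rec["outcome"] == "fail" or rec["event"] == "downgrade"

for rec in filter(interesting, audit_trail):
    print(rec["user"], rec["event"], rec["outcome"])
```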
No philosophers' stone can turn a given system into a
trusted version of the same system. The advice given
above boils down to four main points:
? Consider security requirements in conjunction with
the user-visible behavior of the system.
? Think about security throughout the design and im-
plementation of the system.
? Use the best available software engineering tech-
nology.
? Be skeptical; lots of "modern" ideas don't work
yet.
References

1. C. E. Landwehr, "Formal Models for Computer Security," ACM Computing Surveys, Vol. 13, No. 3, Sept. 1981, pp. 247-278.

2. Software Engineering Principles, notebook for NRL software engineering course, 1981. Available as AD-A113-415, National Technical Information Service, Springfield, Va.

3. "Report of the 1982 Air Force Summer Study on Multilevel Data Management Security," Air Force Studies Board, National Academy of Sciences, Washington, D.C.

4. D. E. Denning, Cryptography and Data Security, Addison-Wesley, Reading, Mass., 1982.

5. J. P. Anderson, "Computer Security Technology Planning Study," Vol. I, ESD-TR-73-51, Oct. 1972, p. 14. Available as AD-758 206, National Technical Information Service, Springfield, Va.

6. "Trusted Computer System Evaluation Criteria (Final Draft)," DoD Computer Security Evaluation Center, Ft. Meade, Md., Jan. 27, 1983.

7. D. P. Sidhu and M. Gasser, "A Multilevel Secure Local Area Network," Proc. 1982 Symp. on Security and Privacy, Order No. 410, IEEE-CS Press, Los Alamitos, Calif., Apr. 1982, pp. 137-143.

8. C. R. Attanasio, P. W. Markstein, and R. J. Phillips, "Penetrating an Operating System: A Study of VM/370 Integrity," IBM Systems Journal, Vol. 15, No. 1, Jan. 1976, pp. 102-116.

9. C. E. Landwehr and C. L. Heitmeyer, "Military Message Systems: Requirements and Security Model," Memorandum Report 4925, Naval Research Laboratory, Washington, D.C., Sept. 1982.

10. M. H. Cheheyl, M. Gasser, G. A. Huff, and J. K. Millen, "Verifying Security," ACM Computing Surveys, Vol. 13, No. 3, Sept. 1981, pp. 279-340.

11. DoD Directive 5200.28 of Dec. 18, 1972, first amendment, change 2, Apr. 29, 1978.
Appendix A?Project Summaries
Following are brief summaries of the projects listed in Tables
1 and 2, ordered as they appear in the tables.
1. Projects completed
Adept-50. Adept-50 is based on a formal security model,
often called the high-water-mark model. It included the "basic
security condition": no user can read information classified
higher than his clearance. With some authorization, users could
cause write downs, that is, move information classified at one
level to a lower level. The ease of defeating its authorization
controls is a matter of some debate. Installed in the Pentagon,
CIA, and SDC, the Adept-50 ran for several years and was
certified for system-high operation only.
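The high-water-mark rule itself is simple enough to sketch (a reconstruction for illustration, not Adept-50's actual code or level set): an object's label floats up to the highest level of the data that has flowed into it, and lowering a label requires explicit authorization.

```python
# Sketch of the high-water-mark rule described above (illustrative
# only): on an ordinary write, an object's classification rises to the
# maximum of its old level and the source's level; a write down
# requires explicit authorization.
LEVELS = ["Unclassified", "Confidential", "Secret", "TopSecret"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def flow_into(obj_level, src_level):
    # high-water mark: the label never decreases on an ordinary write
    return LEVELS[max(RANK[obj_level], RANK[src_level])]

def write_down(obj_level, new_level, authorized):
    if RANK[new_level] < RANK[obj_level] and not authorized:
        raise PermissionError("write down requires authorization")
    return new_level

level = flow_into("Confidential", "Secret")
print(level)  # the object's label floats up to Secret
```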
AFDSC Multics (Multics security enhancements). This Air
Force project applied the Bell-LaPadula model to Multics:
classifications were attached to Multics objects, and checks
were installed to enforce the model's rules. It yielded the first
production system that supported the Bell-LaPadula model. No
effort was made to isolate the security-relevant code or restruc-
ture Multics into a kernel.
The original Multics operating system (and the Multics hard-
ware) paid close attention to protection, though not to security.
The most significant innovations were rings of protection, in
which inner (lower numbered) rings are more privileged than
outer rings, and access control lists to control user access to
files (segments). The architecture generalized the concept of a
two-state (supervisor and user) machine to that of an n-state
machine, with one state for each ring. The current Honeywell
DPS 8/70M supports n = 8. In practice, nearly all of the original
Multics privileged software was in the innermost ring. Verifica-
tion and security models were not considerations in the original
Multics development, but its sophisticated protection architec-
ture and layered design made it a good candidate for adding
security controls.
The system was installed in 1974 at the Air Force Data Ser-
vices Center in the Pentagon and has been certified since De-
cember 1976 to operate in a two-level mode; the system stores
some information classified at Top Secret and supports some
users cleared only to Secret. These modifications, known collec-
tively as AIM, for access isolation mechanism, are included in
the standard release of Multics. They are enabled only at the
customer's request, however, and only military customers have
done so to date.
Following the AIM development, Honeywell and Mitre
studied whether the Multics operating system could be restruc-
tured as a security kernel. At the same time, an MIT study of the
Multics supervisor found that a considerable reduction in code
operating in Ring 0 was possible. Funding was cut, however,
and no Multics security kernel was built.
Mitre brassboard kernel. A prototype security kernel
developed for a PDP-11, the Mitre brassboard kernel was based
on the Bell-LaPadula model. Performance was poor, but good
performance was not a project objective. Both top-level and
low-level specifications were written, and extensive manual
verification of the kernel operations and the correspondence
between specification levels was performed. An attempt to build
a Unix emulator for this kernel led to construction of a second
kernel with functions more suited to Unix requirements.
UCLA data secure Unix. This prototype security kernel for a
PDP-11 paralleled Mitre secure Unix, but the Mitre effort was
not completed.
The UCLA architecture was based on an implementation of
capabilities for the PDP-11. A prototype was developed and
fitted with a Unix user interface, but performance was very
slow. A separate, data-security-based model was developed for
this prototype, and the Bell-LaPadula model could be enforced
by a policy-manager module running outside the kernel. Efforts
to keep the kernel small resulted in a fair amount of non-kernel
code with security-relevant functions (e.g., the policy manager).
Hence, comparing code sizes of the UCLA kernel and others
must be done carefully.
Much effort was expended on formal verification, in cooper-
ation with ISI and the Xivus theorem prover. The effort seems
to have been hampered initially by the lack of a high-level
system specification, which apparently stemmed from the phi-
losophy that only verification of the lowest level assembly code
was important. Nevertheless, higher level specifications, and
functions to map the code to them, were eventually constructed.
Ultimately, the verification of 35 to 40 percent of the kernel led
developers to claim that, given sufficient resources, verification
of the entire kernel was feasible.
Message systems for the Military Message Experiment. The
MME evaluated operational use of an electronic message system
by a military organization. Three message systems were devel-
oped in competition: Hermes by BBN, Sigma by ISI, and MIT-
DMS by MIT. The ISI system was the one used.
Sigma was designed as though it were running on a security
kernel, although the underlying system was not kernel-based. A
trusted job was introduced to allow operations that violated the
*-property of the Bell-LaPadula model. Sigma required users to
confirm activity that could cause insecure information flows,
but in practice, most users confirmed each operation requested
without understanding or thinking about its implications.
Share-7. This Navy system developed by Fleet Combat Di-
rection Systems Support Activity and implemented on the
AN/UYK-7 computer is in operational use at approximately 16
FCDSSA sites. The system, based on a virtual machine architec-
ture, provides a virtual AN/UYK-7 to each of its users. The
Share-7 monitor enforces the system's security properties; only
the monitor runs in the machine's privileged state. The ex-
ecutive is initiated by the monitor and communicates with users.
A Data General Nova provides a trusted path to terminals, and
the system uses trusted processes as well. Written in CMS-2, it is
now undergoing security certification procedures for operation
in the controlled mode (originally, certification for multilevel
mode had been sought). It is one of the first systems the Navy
has nominated for evaluation by the DoD Computer Security
Evaluation Center.
Secure archival storage system. The SASS project developed
a kernel for the Z8000 microprocessor. Its overall goal was to
apply security kernel technology to a network of microproces-
sors that store multilevel files. A core-resident kernel and il-
lustrative applications were developed; a full file server was not
completed. Detailed specifications were written in PLZ, and the
implementation used a structured assembly language. The de-
velopers found preliminary operations to be within their range
of expectations for a single-chip processor. No formal specifica-
tion or verification was performed, but close attention was paid
to information flows in the implementation.
Damos. An operating system developed by a Danish com-
pany, Damos runs on Christian Rovsing's own CR80 computer.
The computer and the operating system are described as cap-
ability-based, and the operating system includes implementa-
tions of a security kernel and trusted processes. These concepts
seem to have been developed by CR independent of US efforts
in security kernels. Damos is a successor to an earlier system,
Amos, and is intended to provide fault-tolerant operation in
communication system applications. Systems in which CR8Os
are or will be used include (1) NICS Tare, a NATO system for
automating paper tape relay operations, (2) FIKS, a Danish
DoD system for message, packet, and circuit switching, and (3)
Camps, a NATO-SHAPE message system for communication
centers.
Autodin II. This was to have been a packet-switched com-
munication network for military use, provided by the Defense
Communications Agency. Western Union was the prime con-
tractor, with Ford Aerospace and Computer Sciences Corpora-
tion as software subcontractors. The Autodin II RFP called for
a kernel-based system for packet switching without adequately
defining "kernel" or the requirements for formal specification
and verification. There were many problems in its development,
including a court fight over the definition of "formal specifica-
tion." Eventually, FACC wrote the code and then a specifica-
tion in Ina Jo. SDC participated in verifying this specification,
using ITP. The system passed its security and system tests, but it
was not installed due to factors unrelated to security.
SDC communications kernel. This project modified the
UCLA prototype PDP-11 kernel for use in communications ap-
plications (packet switching, etc.). The code is written in UCLA
Pascal and compiled using the UCLA Pascal-to-C translator.
The kernel was heavily modified and tailored to the application;
no operating system emulator was used. Although performance
of the installed system seems satisfactory, Pascal-to-C is not
recommended for further use; it takes about five minutes of
unloaded PDP-11/70 time to compile each of 60 submodules,
or five hours for the entire system. The requirement for specifi-
cation verification was dropped early in the project; a require-
ment for verifiable code motivated the choice of Pascal-to-C,
but the only actual verification has been manual examination of
the specification for information channels.
Message flow modulator. A flow modulator is a system that
receives messages, applies a filter to them, transforms messages
that pass the filter, and transmits the resulting messages to a
destination. A flow modulator with a null transform is like a
guard, and a flow modulator with a null filter and an encryption
algorithm for a transform is like a standard encryption device.
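The structure just described can be sketched in modern Python (the actual system was written in Gypsy; the function names here are illustrative assumptions, not its interface):

```python
# Illustrative sketch of a flow modulator: filter, then transform,
# then pass along whatever survives.

def flow_modulator(messages, passes_filter, transform):
    """Yield transform(m) for each incoming message m that passes the filter."""
    for m in messages:
        if passes_filter(m):
            yield transform(m)

# Null (identity) transform: the modulator behaves like a guard.
def guard(messages, passes_filter):
    return flow_modulator(messages, passes_filter, lambda m: m)

# Null filter plus an encrypting transform: behaves like an encryption device.
def encryptor(messages, encrypt):
    return flow_modulator(messages, lambda m: True, encrypt)
```

For example, guard(msgs, lambda m: m.startswith("UNCLAS")) passes only messages carrying an unclassified marking and suppresses all others.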
A group at the University of Texas has developed a simple
flow modulator, using Gypsy and the Gypsy tools, and has
proven properties related to the system's functions and its
security. The proofs are unusual in that they have been con-
ducted at the level of the actual code input to the Gypsy com-
piler; this work represents the state of the art in code verification
of operational military systems. The system has been delivered
to the Navy, which may deploy it as part of the Ocean Surveil-
lance Information System.
2. Projects underway
KVM/370. This is an SDC project to install a kernel under-
neath the IBM VM/370 operating system (a virtual-machine-
based system). It has been demonstrated on the IBM 4331, IBM
4341, and Amdahl V7/A. Current estimates are that it provides
approximately one-quarter the performance of standard VM, a
reduction from an earlier estimate of one-half.
KVM seems to be the only current project dealing with the
complexities of a large-scale timesharing system (though some
argue that this is really a small system running on large hard-
ware). The design provides multiple virtual machines, each at a
different security level, with restricted communication between
machines handled via shared minidisks. At present, KVM sup-
ports only a limited set of peripherals, and changing system
directories or the basic hardware configuration requires a sys-
tem generation. Support for auditing and for a system security
officer has not been implemented.
A version of the system is currently operated at Mitre, but
without additional development effort it appears that the
system will only be useful for demonstration.
PPSN SUE. This is a security-kernel-based packet switch for
supporting end-to-end encryption in the pilot packet-switched
network, or PPSN, at the Royal Signals and Radar Establish-
ment, Malvern, England.
The kernel, called SUE for secure user environment,
isolates virtual machine environments so that independent vir-
tual machines can process plaintext and encrypted data. The
kernel also provides a controlled path for passing routing and
connection management information between virtual machines.
The kernel is said to occupy from 2K to 7K words, depending on
the configuration, and was designed for this application. It
operates on PDP-11/34 hardware, with specially designed sup-
port for I/O devices. It was programmed in Coral 66, and some
verification was performed after kernel design changes.
SUE 2 is said to generalize the initial system to support a
range of network security applications requiring a policy of
isolation of low-level environments and controlled sharing on
top. SUE 2 is intended to be the first of a line of common
building blocks for network security applications; it is also
known as the SCP 1, or secure communications processor.
KSOS. Also known as KSOS-11, the kernelized secure operat-
ing system represents an attempt to build a commercial pro-
totype kernel-based system on a PDP-11/70. An emulator on
top of the kernel was to provide a Unix interface to users. Full
verification of the security properties of the kernel top-level
specification (written in Special) and demonstration proofs of
correspondence of code (written in Modula) to specifications
were planned. The contract with FACC was terminated in
December 1980.
Although performance with the Unix emulator and user soft-
ware layers was poor, verification of flow properties of the
kernel top-level specification (using the Boyer-Moore theorem
prover at SRI) was successful. The code proof demonstrations
were done by hand. The kernel by itself may prove to be a useful
base for applications, but the emulator duplicated some kernel
functions and performed poorly in combination with it.
The Navy is funding Logicon to enhance the FACC product,
making the kernel smaller and faster and building a kernel inter-
face package to replace the original emulator (see "Scomp").
The development of an interface to handle Unix system calls is
still a possibility. The formal specifications, as well as the stan-
dard system (B-5) and detailed (C-5) specifications, are to be
kept consistent with these modifications.
Scomp. The secure communications processor, also known as
KSOS-6, was intended to parallel KSOS-11, using Honeywell
Level 6 hardware. The government funded development of a
hardware box?called SPM, for security protection module?
and the kernel software specification and development. The
SPM monitors transfers on the bus without CPU interference,
providing faster mediation and enhanced virtual memory/capa-
bility structure over the standard Level 6. The kernel and hard-
ware are complete, and kernel performance on the hardware is
now being tested.
The original plans called for the development (funded by
Honeywell) of a Unix emulator as in KSOS-11, but the revised
plans call only for a minimal Scomp kernel interface package,
called SKIP. A package to map Unix system calls into SKIP
calls is now under construction. The kernel was specified in
Special, and verified using SRI tools. The verification of the
final top-level specification was substantially completed, but a
few modules were too large for the SRI tools. Renewed efforts
applied to this task as part of an evaluation of the system by the
DoD CSEC have apparently succeeded.. No code proofs are
planned. The kernel is coded in UCLA Pascal, compiled via
Pascal-to-C. A C compiler for the Level 6 was built by Compion.
Trusted processes are part of the design, but the current
contract does not cover them.
Scomp hardware and software have now been installed at
several sites, including Los Alamos and General Electric.
Providing an SPM for an extension of the Level 6 appears
feasible.
Sacdin. Originally named Satin IV, this was to be a packet-
switching net for SAC, but the Air Force was directed to use
Autodin II (now the Defense Data Network) for the network
backbone. The Satin IV RFP was thought more ambitious (with re-
spect to security) than Autodin II's; Sacdin is
somewhat less ambitious than the original RFP. IBM is acting
as a subcontractor to ITT, building a kernelized system. A top-
level specification has been written and verified, but the im-
plementation is in assembly language (for IBM Series/1 com-
puters) and no code proofs are planned. Mitre is attempting to
construct a mapping between the specification and the code.
Guard. Guard (originally ACCAT guard) provides a trusted
path between two database systems for networks that operate at
two different security levels. It supports operators who monitor
requests from the low database to the high database and sanitize
the responses. The trusted process that performs downgrading
was specified in Gypsy and verified by Mitre.
Guard was initially planned to run on top of the KSOS Unix
emulator, but current plans are to implement it directly on the
KSOS kernel as modified by Logicon. A prototype version was
developed to run on unmodified Unix. Future plans include
automating most of the operator functions in the system and
developing a Scomp-based version.
This was the first of the now numerous guard systems to be in-
itiated.
COS/NFE. Compion is building a trusted network front end
for a World-Wide Military Command and Control System
H6000 host and an Arpanet-like packet-switching network,
using their proprietary Hub executive. According to Com-
pion, the Hub executive is a security kernel. Both SDC, using
Ina Jo and ITP, and Compion, using its own Verus, have writ-
ten top- and second-level specifications for the Hub executive.
Both companies have verified security criteria based on a strict
separation of security-level sets. The Hub structure includes 12
trusted modules; each processes data at several security levels.
SDC has written and verified top- and second-level specifica-
tions for each module. Code is being written in Pascal for a
PDP-11/70, though the Hub also operates on VAX and Moto-
rola 68000 hardware. Compion has not substantiated earlier
claims concerning the performance of the system.
DEC operating system security projects. In 1979, DEC
developed prototype security enhancements for the VAX/VMS
operating system on the VAX-11/780. Discretionary and non-
discretionary controls enforcing the Bell-LaPadula model were
added, auditing was improved, and a number of security flaws
were corrected. The target was a system that might fit into DoD
CSEC evaluation class B-1 or B-2. These changes may be includ-
ed in future VMS releases.
A DEC customer, Sandia National Laboratories, has under-
taken security enhancements to VMS using its Patch utility. A
project to enhance security in Tops-20 is in the study phase at
DEC. DEC is also building a research prototype security kernel
for the VAX-11 architecture that is intended to be compatible
with VAX/VMS. It will not be a general-purpose kernel; in or-
der to keep it as small and simple as possible, the prototype will
be carefully tailored for its intended applications. Current plans
call for a formal specification and for verification that this
specification is consistent with the Bell-LaPadula model. The
goal is a system that will be in DoD CSEC evaluation class B-3
or A-1.
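The Bell-LaPadula controls these projects enforce reduce to two mandatory checks, sketched here in Python (the level names and strict total ordering are illustrative assumptions, not DEC's implementation):

```python
# Hypothetical sketch of the two mandatory Bell-LaPadula rules.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject, obj):
    # Simple security property ("no read up"): a subject may read
    # only objects at or below its own level.
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject, obj):
    # *-property ("no write down"): a subject may write only objects
    # at or above its own level, so information cannot flow downward.
    return LEVELS[subject] <= LEVELS[obj]
```

Together the two rules guarantee that information can flow only upward in level, which is the property the verification efforts cited here attempt to prove of a kernel's specification.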
Forscom guard. The Army's Forces Command is developing
a guard processor that will filter traffic between World-Wide
Military Command and Control System users and a WWMCCS
host. The motive is to ensure that users not cleared for all
WWMCCS data obtain only data for which they are author-
ized. The system requires a single human operator who screens
traffic for all of the terminals. Forscom guard programs, unlike
those of the initial guard system and LSI guard, include
substantial information about likely user activities to enable
more accurate filtering.
Logicon produced an experimental version that was tested
successfully in an Army exercise. Because KSOS was unavail-
able, this version employed trusted processes on top of standard
Unix. Logicon and Honeywell are now building a production
version utilizing the Scomp kernel and hardware; the goal is to
eliminate the need for a human operator.
LSI guard. LSI guard implements a subset of guard functions
directly on DEC LSI-11 hardware (without a kernel) in Euclid.
It allows only a single operator (other guard processors can have
multiple operators), and it is not cognizant of the databases.
The specification and implementation, both in Euclid, have
been completed by I.P. Sharp; verification is awaiting develop-
ment of new tools for this purpose.
PSOS. An initial specification for a provably secure opera-
ting system, or PSOS, was written by SRI in the mid-to-late
1970's. This document was used as part of a product description
for a two-phase contract awarded in early 1980 to FACC as
prime contractor, with Honeywell as a subcontractor.
The goal was a medium- to large-scale general-purpose com-
puter system that could be formally verified to meet its
specifications. The first phase called for the design of a secure
operating system using capability-based addressing and tagged
architecture; the second phase, if awarded, was to be for devel-
opment. It was unclear whether code proofs would be done.
Honeywell bid 32-bit minicomputer hardware that looks like a
next-generation Level 6 with SPM (see "Scomp"). The contract
was terminated in May 1981; Honeywell is conducting a related
hardware design effort under a new contract awarded in 1982.
RAP guard. The restricted-access processor is a guard system
for NASA that will control traffic between unclassified ter-
minals and a host that contains classified data. Like Forscom
guard, the RAP guard system will monitor queries and re-
sponses that are highly structured, so human operators will not
be required for sanitization.
Computer Sciences Corporation is the prime contractor, with
Sytek as the subcontractor for security. Sytek is to develop a
formal security model and a top-level formal system specifica-
tion using Special. This specification is to be verified to enforce
the security model. The verification will probably be manual,
since the automated tools available for verifying security pro-
perties of Special specifications apply only to the Bell-LaPadula
security model.
The implementation planned is trusted processes on a trusted
task monitor; both of these are new developments. Possible
hardware is a set of 10 single-board computers and a DEC
VAX-11/730 for auditing functions. Required operational date
is January 1985.
SDC secure release terminal. This system, developed in-house
by SDC, has functions and design similar to the LSI guard. A
terminal user sanitizes or reviews incoming data and then allows
the data to be transmitted. Trusted code runs directly on a bare
machine. Top- and second-level Ina Jo specifications have been
written and verified; code verification of the Modula implemen-
tation is planned. Currently, the system runs on a DEC
PDP-11/03 (a VT-100 with an LSI-11). Ultimately, it is intended for
the Intel 8086-based Burroughs B-20 workstation.
Recon guard. This project would allow users to have network
access to a database containing some information for which net-
work users are not authorized. The approach being taken is to
attach an authenticator (similar to a checksum) to each database
record, based on the contents of that record. If either the
authenticator or the record is modified, recomputing the au-
thenticator will reveal that a change has occurred. A modifica-
tion to both the record and authenticator that preserves their
correspondence is assumed infeasible, since the computation of
the authenticator is based on a secret key.
The guard processor allows queries to enter the database and
monitors responses as follows: The authenticator on the return-
ing record is recomputed and checked. If the new value fails to
match the stored value, the record has been altered and is not
returned. If it does match, then the information in the record
(including the security marking) is reliable. If the user making
the request is cleared for the level indicated by the marking, the
record is returned to him.
The secret key is shared by two systems: one system generates
the authenticators for records being added to or updated within
the database, and the other checks the authenticator-record cor-
respondence on records returned in response to queries. No
operator is required.
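The record-authenticator scheme just described is essentially what is now called a message authentication code. A minimal sketch in Python, with HMAC-SHA-256 standing in for the unspecified keyed computation (the key value and the "MARKING|body" record layout are assumptions for illustration):

```python
import hashlib
import hmac

SECRET_KEY = b"shared-between-generator-and-checker"  # illustrative key

def make_authenticator(record: bytes) -> bytes:
    # Keyed checksum computed over the whole record, including its
    # security marking; held by the generator and the checker only.
    return hmac.new(SECRET_KEY, record, hashlib.sha256).digest()

def release(record, tag, cleared_for):
    # Recompute and compare; a mismatch means the record or its
    # authenticator was altered, so the record is not returned.
    if not hmac.compare_digest(make_authenticator(record), tag):
        return None
    marking = record.split(b"|", 1)[0]   # assumed "MARKING|body" layout
    return record if cleared_for(marking) else None
```

Because the checksum is keyed, an adversary who can modify records cannot produce a matching authenticator, which is exactly the infeasibility assumption the Recon guard design rests on.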
The system is being developed by Sytek. Hardware includes
several Intel 8612 single-board computers. Although no formal
specification or verification has been done, an informal security
model was written in English, and constraints were imposed on
the use of Pascal so that the source code might eventually be
verified.
GSOS. The Gemini secure operating system uses the com-
pany's OC-16-4 computer, which employs the Intel 80286 pro-
cessor. This processor is similar to the 8086, but adds four hard-
ware protection rings and segmentation based on Multics.
The kernel, operating system, and applications will operate in
separate rings; the design is said to draw heavily on the Multics
security kernel design and on the design of SASS. Top-level and
second-level specifications have been written in a variant of
Gypsy. No formal verification is planned, but an informal flow
analysis has been completed. The kernel is being written in
PLM.
The initial offering is intended to support CP/M-86 on the
same hardware with tools for generating and loading applica-
tions in PL/I or other CP/M-supported languages. The initial
operating system supports dedicated or embedded applications
rather than general-purpose programming; designs have been
proposed for guard and real-time communications applica-
tions.
DSS. A distributed secure system is being developed for the
Royal Signals and Radar Establishment by System Designers,
Ltd., and the Microprocessor Applications Research Institute at
Newcastle Upon Tyne. It is to be a general-purpose multilevel
secure system based on physically separate processors con-
nected to a local area network. Trusted interface units to the
network will incorporate real-time security kernels and both
trusted and untrusted software. The system is to be Unix-based
with PDP-11s and DEC personal computers.
Appendix B?Bibliography
The following list will direct interested readers to further in-
formation about each system listed in Tables 1-2. It is not in-
tended to be a complete list of references on these systems.
Although we have tried to include references that are generally
obtainable, several of these projects are documented only in
technical reports whose availability cannot be guaranteed.
References are given in the order that the systems are listed in
Tables 1-2.
1. Projects completed
Adept-50
Weissman, C., "Security Controls in the ADEPT-50 Time
Sharing System," AFIPS Conf. Proc., Vol. 35, 1969 FJCC,
AFIPS Press, Arlington, Va., pp. 119-133.
Multics
Organick, E. I., The Multics System: An Examination of Its
Structure, MIT Press, Cambridge, Mass., 1972.
Whitmore, J., et al., Design for Multics Security Enhancements,
Air Force Electronic Systems Division ESD-TR-74-176, Dec.
1973. For information on security mechanisms available to
users of the current commercial version, see Multics Program-
mers' Manual Reference Guide, Honeywell AG91, Revision 2,
Honeywell Information Systems, Inc., Waltham, Mass., March
1979.
Schiller, W. L., Design and Abstract Specification of a
Multics Security Kernel, Mitre ESD-TR-77-259, Mitre Corp.,
Bedford, Mass., Nov. 1977 (NTIS AD A048576).
Mitre brassboard kernel
Schiller, W. L., Design of a Security Kernel for the PDP-11/45,
MTR-2709, Mitre Corp., Bedford, Mass., June 1973.
Mitre secure Unix
Woodward, J. P. L., and G. H. Nibaldi, A Kernel-Based Secure
UNIX Design, ESD-TR-79-134, Mitre Corp., Bedford, Mass.,
May 1979.
UCLA data secure Unix
Popek, G. J., et al., "UCLA Secure Unix," AFIPS Conf.
Proc., Vol. 48, 1979 NCC, AFIPS Press, Arlington, Va., pp.
355-364.
Military Message Experiment
Wilson, S. H., N. C. Goodwin, and E. H. Bersoff, Military
Message Experiment Final Report, NRL Report 4456, Naval
Research Laboratory, Washington, D.C.
Share-7
"Share 7/Security Design," Fleet Combat Direction Systems
Support Activity (FCDSSA), San Diego, Calif., Feb. 1980.
(This is an informal document, not an officially published
report.)
SASS
Cox, L. A., and R. R. Schell, "The Structure of a Security
Kernel for a Z8000 Multiprocessor," Proc. IEEE 1981 Symp.
on Security and Privacy, Order No. 345, IEEE-CS Press, Los
Alamitos, Calif., Apr. 1981, pp. 124-129.
Damos
Hvidtfeldt, A., and A. Smut, "Manufacturers' Efforts in Com-
puter Security: Christian Rovsing," Proc. Fourth Seminar on
the DoD Computer Security Initiative, NBS, Gaithersburg,
Md., Aug. 1981.
Autodin II
Bergman, S., A System Description of AUTODIN II,
MTR-5306, Mitre Corp., Bedford, Mass., May 1978. For a
short summary, see I. Lieberman, "AUTODIN II: An Advanc-
ed Telecommunications System," Telecommunications, May
1981, pp. 43-48
SDC communications kernel
Golber, T., "The SDC Communication Kernel," Proc. Fourth
Seminar on the DoD Computer Security Initiative Program,
NBS, Gaithersburg, Md., August 1981.
Message flow modulator
Good, D. I., A. E. Siebert, and L. M. Smith, Message Flow
Modulator Final Report, Institute for Computing Science
TR-34, Univ. of Texas, Austin, Tex., Dec. 1982.
2. Projects underway
KVM/370
Gold, B. D. et al., "A Security Retrofit of VM/370," AFIPS
Conf. Proc., Vol. 48, 1979 NCC, AFIPS Press, Arlington, Va.,
pp. 335-342.
PPSN (SUE)
Barnes, D. H., "Computer Security in the RSRE PPSN," Proc.
Networks 80, Online Conferences, June 1980.
KSOS
McCauley, E. J., and P. J. Drongowski, "KSOS: The Design of
a Secure Operating System," AFIPS Conf. Proc., Vol. 48, 1979
NCC, AFIPS Press, Arlington, Va., pp. 345-353.
Scomp
Fraim, L. J., "SCOMP: A Solution to the MLS Problem,"
Computer, Vol. 16, No. 7, July 1983, pp. 26-34.
Sacdin
System Specification for SAC Digital Network (SACDIN),
ESD-MCV-1A, ITT Defense Communications Division,
Nutley, N. J., 1978.
Guard
Baldauf, D., "ACCAT GUARD Overview," Mitre Corp., Bed-
ford, Mass.
COS/NFE
Sutton, S. A., and C. K. Willut, COS/NFE Functional Descrip-
tion, DTI Document 389, Compion Corp. (formerly Digital
Technology, Inc.), Champaign, Ill., Nov. 1982.
DEC secure OS projects
Karger, P. A., and S. B. Lipner, "Digital's Research Activities
in Computer Security," Proc. Eascon 82, IEEE, Washington,
D.C., Sept. 1982, pp. 29-32.
Forscom guard
"FORSCOM Security Monitor Computer Program Develop-
ment Specification Type B-5," Logicon, Inc., San Diego,
Calif., Feb. 1981.
LSI guard
Stahl, S., LSI GUARD System Specification (Type A),
MTR-8452, Mitre Corp., Bedford, Mass., Oct. 1981.
PSOS
Neumann, P. G., et al., A Provably Secure Operating System:
The System, Its Applications, and Proofs, second ed., CSL-116,
SRI International, Menlo Park, Calif., May 1980.
RAP guard
Contact RAP Project Manager, code 832, Goddard Space
Flight Center, Greenbelt, MD 20771.
SDC secure release terminal
Hinke, T., J. Althouse, and R. A. Kemmerer, "SDC Secure
Release Terminal Project," Proc. 1983 Symp. on Security and
Privacy, IEEE-CS Press, Los Alamitos, Calif., Apr. 1983.
Recon guard
Anderson, J. P., "On the Feasibility of Connecting RECON
to an External Network," James P. Anderson Co., Fort Wash-
ington, Pa., Mar. 1981.
GSOS
"GSOS Security Kernel Design," first edition, Gemini Com-
puters, Inc., Carmel, Calif., 1982.
DSS
Rushby, J. M., and B. Randell, "A Distributed Secure
System," Computer, Vol. 16, No. 7, July 1983, pp. 55-67.
Acknowledgments
For asking the question that inspired this paper, I thank D. L.
Parnas. The information on which this report is based came
from individuals too numerous to list here, but in whose debt I
remain. Parnas and C. Heitmeyer provided reviews of earlier
drafts that led to significant improvements. R. Schell suggested
including the tentative DoD CSEC evaluation classes, and the
referees and editors suggested numerous improvements in the
presentation. H. O. Lubbes of the Naval Electronics Systems
Command and S. Wilson of NRL provided encouragement and
support. The responsibility for all opinions (and any remaining
errors) is, of course, mine.
Carl E. Landwehr is head of the Software
Research Section in the Computer Science
and Systems Branch of the Information
Technology- Division at the Naval Re-
search Laboratory in Washington, D.C.
In addition to computer security, his re-
search interests include software engineer-
ing and system performance modeling,
measurement, and evaluation. He has been
with the Naval Research Laboratory since
1976. Previously, he served on the computer science faculty of
Purdue University, and he has also taught computer science at
Georgetown University. He is a member of ACM, IEEE, and
Sigma Xi. He holds a BS degree from Yale University in engi-
neering and applied science, and MS and PhD degrees in com-
puter and communication sciences from the University of
Michigan.