SURVEY OF COMPUTER NETWORKS
Document Type:
Collection:
Document Number (FOIA) /ESDN (CREST):
CIA-RDP79M00096A000500010017-0
Release Decision:
RIPPUB
Original Classification:
K
Document Page Count:
92
Document Creation Date:
December 16, 2016
Document Release Date:
September 1, 2004
Sequence Number:
17
Case Number:
Publication Date:
September 1, 1971
Content Type:
STUDY
File:
Attachment | Size |
---|---|
CIA-RDP79M00096A000500010017-0.pdf | 3.71 MB |
Body:
MTP-357
SURVEY
OF
COMPUTER NETWORKS
JACK I. PETERSON
SANDRA A. VEIT
The work reported here was sponsored by the
Defense Communications Agency under
contract F19628-71-C-0002
SEPTEMBER 1971
THE MITRE CORPORATION
WASHINGTON OPERATIONS
ABSTRACT
This paper presents the results of a survey of state-of-the-art computer networks.
It identifies ten major networks: ARPA, COINS, CYBERNET, the Distributed Computer
System, DLS, MERIT, Network/440, Octopus, TSS, and TUCC, and outlines their capa-
bilities and design. A tabular presentation of the most significant network features and
a brief discussion of networks that were examined but rejected for the survey are also included.
ACKNOWLEDGMENTS
The authors of this survey thank the organizations mentioned herein for their
assistance in providing much of the basic information from which this survey was com-
piled. We wish to extend special thanks to the individuals named below, who gave a good
deal of their time for site interviews, telephone conversations, and correspondence with
us:
Jack Byrd, Jim Caldwell, and Jim Chidester of Control Data Corporation; Doug
McKay and Al Weis of IBM; Don Braff, John Fletcher, Mel Harrison, and Sam Mendicino
of the Lawrence Radiation Laboratory; Eric Aupperle and Bertram Herzog of MERIT;
Peggy Karp and David Wood of MITRE; Dan Cica, Wayne Hathaway, Gene Itean, Marge
Jereb, and Roger Schulte of NASA; Doug Engelbart and Jim Norton of the Stanford
Research Institute; Leland Williams of TUCC; and David Farber of the University of
California at Irvine.
FOREWORD
Data for this survey was gathered primarily from interviewing individuals at the
various network sites. A questionnaire was used as a checklist during the interviews,
but not as a tool for comparative evaluation of the networks because of the wide
range of questions and because of the vast differences among the networks. In many
cases additional information was obtained from literature provided by the interviewees
or their installation.
Most of the information furnished by this survey was gathered between
January and April 1971; however, in this rapidly expanding area most networks are in
the process of changing. This document gives a picture of these networks as they were
at a given point in time: where possible, proposed or impending changes have been
indicated. Each section of the survey has been reviewed by the cognizant organization
to ensure greater accuracy, although errors are inevitable in an undertaking of this
magnitude.
TABLE OF CONTENTS
SECTION
PAGE
LIST OF FIGURES vii
I INTRODUCTION 1
II NETWORKS SURVEYED 3
The ARPA Computer Network 3
The COINS Network 11
The CYBERNET Network 14
The Distributed Computer System 18
Data Link Support (DLS) 23
The MERIT Computer Network 25
Network/440 32
The Octopus Network 35
The TSS Network 48
The TUCC Network 53
III MATRIX OF NETWORK FEATURES 59
Configuration 60
Communications 64
Network Usage 66
IV EXCLUDED NETWORKS 69
V SUMMARY 71
GLOSSARY 73
APPENDIX 83
BIBLIOGRAPHY 85
LIST OF FIGURES
FIGURE PAGE
1 ARPA Network Topology, February 1971 4
2 Inventory of Nodes and Host Hardware in the ARPA Network 5
3 The Interface Message Processor 6
4 COINS Configuration 12
5 The CYBERNET Network 15
6 Typical CYBERNET Configurations 16
7 The Distributed Computer System Topology 19
8 Inventory of Planned Hardware 20
9 Communications Interface 21
10 DLS Configuration 23
11 Overview of the MERIT Network 26
12 Inventory of MERIT Host Hardware 27
13 MERIT Communications Segment 28
14 Communication Computer System 29
15 Logical Structure of Network/440 33
16 Nodes in Network/440 33
17 The Octopus Network 37
18 Octopus Hardware 38
19 Television Monitor Display System (TMDS) 39
20 6600/PDP-10 File Transport Channel 41
21 7600/PDP-10 File Transport Channel 42
22 Octopus Teletype Subnet 43
23 Remote Job Entry Terminal (RJET) System and Network Connections 45
24 An Overview of the TSS Network 49
25 TSS Network Hardware 50
26 Usage of the TSS Network 52
27 An Overview of the TUCC Network 55
28 Configuration of the 360/75 at TUCC 56
SECTION I
INTRODUCTION
As defined in this paper, a computer network is an interconnected group of
independent computer systems which communicate with one another and share re-
sources such as programs, data, hardware, and software. This paper presents the results
of a survey of state-of-the-art computer networks by MITRE under the sponsorship of
the Defense Communications Agency. It identifies the major networks according to
the working definition given above and includes a discussion of their purpose, configura-
tion, usage, communications and management.
The bulk of the paper consists of a discussion of the selected networks and a
matrix presentation of some of the more predominant characteristics of each. Section II
presents much of the information gathered in the course of the study; it is divided into
ten subsections, one for each of the networks surveyed. Each of the subsections (net-
works) is further divided into five topic areas: Introduction, Configuration, Communica-
tions, Usage, and Management. A comparative matrix in Section III gives an overview
of the characteristics of the networks. Section IV briefly examines networks that were
not included in the survey. Section V presents a summary of the survey. The Glossary
provides definitions of terms and acronyms which may be unfamiliar to the reader.
SECTION II
NETWORKS SURVEYED
Each subsection in Section II presents the findings pertaining to one network.
All network discussions are organized in the same manner and deal with five basic topics:
Introduction gives background information such as the sponsor, purpose and present
status of the network. Configuration provides an inventory of network hardware, gen-
erally accompanied by a topological diagram of the network, and information on network
software. Communications relates the relevant factors in the communications subsystem
of the network. Usage discusses the present or intended use of the network. Management
presents a view of the network management structure.
THE ARPA COMPUTER NETWORK
Introduction
The Advanced Research Projects Agency (ARPA) network is a nationwide system
which interconnects many ARPA-supported research centers. The primary goal of this
project is to achieve an effective pooling of all of the network's computer resources, making
them available to the network community at large. In this way, programs and users at a
particular center will be allowed to access data and programs resident at a remote facility.
At the present time, network activity is concentrated in three major areas. The
first is the installation of the network interface hardware, and the development and testing
of its associated software modules. Secondly, network experimentation is being carried
out at several operational sites. These experiments are designed to develop techniques for
measuring system performance, for distributing data files and their directories, and for dis-
seminating network documentation. Finally, expansion and refinement of the original
system design are being investigated, with considerations being paid to both long-range
and immediate goals.
Configuration
The ARPA Network is a distributed network of heterogeneous host computers and
operating systems. ARPA's store-and-forward communication system consists of modified
Honeywell DDP-516 computers located close to the hosts and connected to each other by
50 kilobit-per-second leased telephone lines. The 516 is called an Interface Message Proces-
sor, or IMP.
The Network Control Program (NCP) is generally part of the host executive; it
enables processes within one host to communicate with processes on another or the same
host. The main functions of the NCP are to establish connections, terminate connections,
and control traffic flow.
Figure 1 is a topological diagram of the ARPA Network. Figure 2 lists the network
nodes along with a brief description of the hardware and software at each. Although this
compilation is approximate at the time of this writing, it provides a general idea of the
resources available at various nodes in the ARPA Network.
Figure 1 ARPA Network Topology, February 1971 (diagram not reproduced; source: Bolt Beranek and Newman. It shows the existing and proposed IMP interconnections among nodes including SRI, Utah, Illinois, MIT, Lincoln, Case, Carnegie, Harvard, BBN, RAND, SDC, UCLA, and Burroughs, with existing and proposed links distinguished.)

Communications
Communications in the ARPA network are achieved using a system of leased lines,
operated in a synchronous, full-duplex mode at 50,000 bps. The interconnection of the
host computers to the telephonic network is the primary function of a specially developed
communications computer system, the Interface Message Processor (IMP).1 Each IMP,
as shown in Figure 3, is an augmented, ruggedized version of the Honeywell DDP-516,
and includes 12K 16-bit words of core memory, 16 multiplexed channels, 16 levels of
priority interrupt, and logic supporting host computers and high-speed modems.
1 A second device, the Terminal Interface Processor (TIP), is under development for use on
the ARPA network. It not only performs the same function as an IMP, but can also
directly support user terminals, eliminating the need for a host. The first TIP is scheduled
to go into operation in August 1971 at NASA Ames.
Figure 2 Inventory of Nodes and Host Hardware in the ARPA Network (table not reproduced; for each node the original lists its host processors and special node functions or software. Nodes shown include Bolt Beranek and Newman, Burroughs, Carnegie-Mellon University, Case Western Reserve, Harvard University, Lincoln Laboratory, MIT, RAND, Stanford Research Institute, Stanford University, System Development Corporation, the University of California at Santa Barbara, UCLA, the University of Illinois, and the University of Utah, together with scheduled nodes such as the Air Weather Service, ETAC, MITRE/Washington, NASA Ames Research Center, the National Bureau of Standards, the National Center for Atmospheric Research, RADC, London University, and the University of Southern California, and proposed nodes OCAMA, SAAC, and SAAMA.)
Figure 3 The Interface Message Processor (diagram not reproduced; source: Heart, F. E., et al., "The Interface Message Processor for the ARPA Computer Network," Proceedings of the Spring Joint Computer Conference, May 1970, p. 558. It shows the CPU, 12K of memory in 16-bit words, 16 I/O channels, 16 priority interrupts, a clock, a watchdog timer, status indicators, power-fail/auto-restart logic, a paper-tape reader, and interfaces for up to four hosts and up to six modems, with the note that the number of hosts plus the number of modems may not exceed seven.)
Special hardware is provided to detect certain internal failures, and to either correct them or to
gracefully power down if correction is not possible. Each IMP is capable of supporting
up to four hosts, with the restriction that the number of hosts plus the number of trans-
mission lines may not exceed seven. Software support is derived from a specially developed
operating system which requires approximately 6K words of core memory; the remaining
6K words are used for message and queue storage. The operating system is identical for
all IMP's except for a protected 512-word block which contains programs and data unique
to each. This allows an IMP which has detected a software failure to request a reload of
the program from a neighboring IMP.
The IMP hardware is activated by a host computer whenever a message is ready
for transmission. Such messages are variable length blocks with a maximum size of 8095
bits. The host interface portion of the IMP, which is its only host-dependent component,
operates in a bit-serial, full-duplex fashion in transferring the message between the host
and IMP memories. A data-demand protocol is used in the interface to match the transfer
rates of the two processors.
Messages received by the IMP are segmented into variable length "packets," each
having a maximum size of approximately 1000 bits. Packets serve as the basic unit record
of information interchange between IMP's. Their smaller size places a reduced demand on
intermediate message-switch storage, and increases the likelihood of an error-free trans-
mission. Parity check digits, which provide an undetected error rate of about 10-12, are
appended to the packets. The packets are then queued for transmission on a first-in, first-
out basis.
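
The segmentation step described above can be shown in a short sketch. This is not the IMP's actual code (which ran on the Honeywell DDP-516); it is a minimal Python illustration in which the field names, the toy check value, and the exact packet layout are assumptions made for the example.

```python
# Illustrative sketch of message segmentation: a host message of at most
# 8095 bits is cut into packets of roughly 1000 bits, a check value is
# appended to each, and the packets are queued first-in, first-out.
# Field names and the checksum are assumptions, not the real IMP format.

MAX_MESSAGE_BITS = 8095   # largest host message accepted
MAX_PACKET_BITS = 1000    # approximate maximum packet size

def toy_check(payload):
    """Stand-in for the hardware-generated parity check digits."""
    return sum(map(int, payload)) & 0xFFFF

def segment_message(message_bits, destination):
    """Split a message (a string of '0'/'1' characters) into packet dicts."""
    assert len(message_bits) <= MAX_MESSAGE_BITS, "message too long"
    pieces = [message_bits[i:i + MAX_PACKET_BITS]
              for i in range(0, len(message_bits), MAX_PACKET_BITS)]
    return [{"destination": destination,
             "sequence": seq,                    # position within the message
             "last": seq == len(pieces) - 1,     # marks the final packet
             "payload": payload,
             "check": toy_check(payload)}
            for seq, payload in enumerate(pieces)]

if __name__ == "__main__":
    packets = segment_message("10110011" * 300, destination="UCLA")  # 2400 bits
    print(len(packets), "packets")                                   # -> 3
```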
The selection of the particular link over which a packet is to travel is determined
by the IMP's estimation of the delay in reaching its destination over each of its available
lines. These estimates, which are recomputed at approximately 500-millisecond intervals,
are based on the exchange of estimates and past performance records between neighbor-
ing IMP's. As a consequence of this estimation capability, transmission paths which
maximize effective throughput are selected. In addition, since these estimates are
dynamic, the several packets which comprise a message need not use the same physical
path through the network to their destination.
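
The adaptive routing decision can be sketched as follows. This is an illustrative Python fragment, not the IMP's algorithm as implemented; the table layout, the update rule, and all names are assumptions for the example.

```python
# Sketch of delay-estimate routing: each output line keeps an estimated
# delay to every destination, the estimates are refreshed from neighbors'
# estimates plus the delay of the hop itself (roughly every 500 ms), and
# a packet is sent over the line with the smallest current estimate.

class RoutingTable:
    def __init__(self, lines):
        # delay[line][destination] -> estimated delay in milliseconds
        self.delay = {line: {} for line in lines}

    def update_from_neighbor(self, line, neighbor_estimates, hop_delay_ms):
        """Fold in the estimates received from the neighbor on 'line'."""
        for dest, est in neighbor_estimates.items():
            self.delay[line][dest] = est + hop_delay_ms

    def best_line(self, dest):
        """Line with the lowest estimated delay to 'dest', or None."""
        candidates = [(est[dest], line)
                      for line, est in self.delay.items() if dest in est]
        return min(candidates)[1] if candidates else None

if __name__ == "__main__":
    table = RoutingTable(lines=["to_SRI", "to_UTAH"])
    table.update_from_neighbor("to_SRI",  {"UCLA": 40}, hop_delay_ms=12)  # 52 total
    table.update_from_neighbor("to_UTAH", {"UCLA": 25}, hop_delay_ms=15)  # 40 total
    print(table.best_line("UCLA"))   # -> to_UTAH
```

Because the estimates change over time, successive packets of one message may be sent over different lines, which is exactly the behavior noted above.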
IMP activity is also initiated upon receipt of a packet from another IMP. A packet
error check is performed first. If the packet is error-free, it is stored and a positive
acknowledgment is returned to the sending IMP, allowing it to release the packet from
its storage area. If the packet contains errors, or if the receiving IMP is too busy or has
insufficient storage to accept it, the packet is ignored. The transmitting IMP waits a pre-
determined amount of time for a positive acknowledgment; if none is detected, the packet
is assumed lost and retransmitted, perhaps along a different route.
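
A minimal sketch of this acknowledgment discipline follows; the timeout value, the callback structure, and all names are assumptions made for illustration.

```python
# Sketch of the per-packet acknowledgment rule: the receiver acknowledges
# only error-free packets it has room to store; anything else is silently
# ignored, and the sender retransmits after a timeout (possibly via a
# different route).

ACK_TIMEOUT = 0.125   # seconds; an assumed figure, not from the report

def handle_arrival(packet, check_ok, have_storage, store, send_ack):
    """Receiving IMP: store and acknowledge, or drop without comment."""
    if check_ok and have_storage:
        store(packet)
        send_ack(packet["sequence"])
    # errored packets, or packets arriving while busy or full, are ignored

def send_until_acked(packet, transmit, ack_arrived_within):
    """Sending IMP: keep the packet until a positive ack is observed."""
    attempts = 0
    while True:
        transmit(packet)            # later attempts may take another route
        attempts += 1
        if ack_arrived_within(ACK_TIMEOUT):
            return attempts         # ack seen; packet storage may be released

if __name__ == "__main__":
    tries = []
    n = send_until_acked({"sequence": 7},
                         transmit=lambda p: tries.append(p),
                         ack_arrived_within=lambda t: len(tries) >= 2)
    print("transmissions:", n)      # -> 2 (the first one was 'lost')
```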
Once a positive acknowledgment has been generated, the receiving IMP must
determine, by an examination of the destination field in the packet header, whether the
packet is to be delivered to a local host or forwarded. In the latter case, the packet is
queued for transmission in a fashion similar to that used for locally initiated messages.
Otherwise, the IMP must determine whether all the packets comprising the message have
arrived. If so, a reassembly task is invoked to arrange the packets in proper order and to
transfer the message to the host memory.
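
The reassembly step can be sketched briefly; the packet fields follow the earlier segmentation sketch and are assumptions, not the IMP's actual formats.

```python
# Sketch of reassembly: hold arriving packets until every piece of the
# message is present, then concatenate them in sequence order and pass the
# whole message to the host.

def try_reassemble(arrived):
    """Return the complete message, or None if packets are still missing."""
    final = [p for p in arrived if p["last"]]
    if not final:
        return None                               # final packet not yet seen
    expected = final[0]["sequence"] + 1
    by_seq = {p["sequence"]: p for p in arrived}
    if len(by_seq) != expected:
        return None                               # some packet still in transit
    return "".join(by_seq[i]["payload"] for i in range(expected))

if __name__ == "__main__":
    pkts = [{"sequence": 1, "last": True,  "payload": "WORLD"},
            {"sequence": 0, "last": False, "payload": "HELLO "}]
    print(try_reassemble(pkts))                   # -> HELLO WORLD
```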
In addition to its message handling functions, the IMP provides special capabilities
for the detection of communication failures and the gathering of performance statistics.
In the absence of normal message traffic, each IMP transmits idling packets over the
unused lines at half-second intervals. Since these packets must be acknowledged in the
usual manner, the lack of any packet or acknowledgment traffic over a particular line for
a sustained period (about 2.5 seconds) indicates a dead line. Local routing tables may be
up-dated to reflect the unavailability of such a line. The resumption of line operation is
indicated by the return of idling packet traffic.
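
The dead-line rule lends itself to a small sketch; the half-second idle interval and the 2.5-second threshold are taken from the text, while the data structure and names are assumptions.

```python
# Sketch of line-status monitoring: idling packets flow every half second
# on otherwise quiet lines, so a line that has produced neither packets nor
# acknowledgments for about 2.5 seconds is treated as dead, and the routing
# tables may be updated to avoid it until traffic resumes.

IDLE_INTERVAL = 0.5      # seconds between idling packets on a quiet line
DEAD_THRESHOLD = 2.5     # seconds of silence before a line is declared dead

class LineMonitor:
    def __init__(self):
        self.last_heard = {}            # line -> time of last traffic

    def note_traffic(self, line, now):
        """Any packet or acknowledgment counts as traffic on the line."""
        self.last_heard[line] = now

    def dead_lines(self, now):
        return [line for line, t in self.last_heard.items()
                if now - t > DEAD_THRESHOLD]

if __name__ == "__main__":
    m = LineMonitor()
    m.note_traffic("line_to_RAND", now=0.0)
    m.note_traffic("line_to_MIT",  now=2.0)
    print(m.dead_lines(now=3.0))        # -> ['line_to_RAND']
```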
The IMP is capable of gathering statistics on its own performance. These statistics,
which are automatically transmitted to a specified host for analysis, may include summaries,
tabulation of packet arrival times, and detailed information describing the current status
of the packet queues. All network IMP's can provide these statistics on a synchronized,
periodic basis, allowing the receiving host to formulate a dynamic picture of overall net-
work status.
An additional capability supporting performance evaluation is tracing. Any host-
generated message may have a trace bit set. Whenever a packet from such a message is
processed, each IMP records the packet arrival time, the queues on which the packet re-
sided, the duration of the queue waits, the packet departure time, etc. These statistical
records, which describe the message-switch operation at a detailed level, are automatically
transmitted to a specified host for assembly and analysis.
Usage
The use of the ARPA Network has been broken into two phases related to the
network implementation plans:
? initial research and experimental use; and
? external research community use.
The first phase involves the connection of approximately 14 sites engaged principally in
computer research into areas such as computer systems architecture, information system
design, information handling, computer augmented problem solving, intelligent systems
and computer networking. The second phase extends the number of sites to about 20.
During phase one, network usage consists primarily of sharing soft-
ware resources and gaining experience with the wide variety of systems. This enables the
user community to share software, data, and hardware, eliminating duplication of effort.
The second phase activities will consist of adding new nodes to take advantage of
other research in such areas as behavioral science, climate dynamics and seismology. Data
distribution, data sharing and the use of the ILLIAC IV in climate dynamics and
seismology modeling are areas of special interest. One of the uses of the network will be
to share data between data management systems or data retrieval systems; this is regarded
as an important phase because of its implications for many government applications.
A network node for data management is being designed by Computer Corporation
of America (CCA); it will consist of a PDP-10 with one trillion bits of on-line laser memory,
interfaced with the B6500/ILLIAC IV processing complex. CCA plans to implement a
special data language to talk to the "data machine," which will have disk storage and a slower,
trillion-bit direct-access store that will provide an alternative to storage at network sites.
The network is also used to access the Network Information Center (NIC) at SRI;
the NIC serves as a repository of information about all systems in the network that can be
dynamically updated and accessed by users.
Another use of the network is measurement and experimentation: because of
the nature of the network, much effort has been expended developing appropriate
tools for collecting usage statistics and evaluating network performance. Bolt, Beranek
and Newman (BBN), the Network Control Center, gathers information such as:
? the up/down status of the hosts and telephone lines;
? the number of messages failing to arrive over each telephone line;
? the number of packets successfully transmitted over each telephone line; and
? the number of messages transmitted by each host into its IMP.
Additional information is being gathered by UCLA, the Network Measurement Center.
Management
Although the several nodes of the ARPA network are at least partially sup-
ported by ARPA, each is an independent research facility engaged in many activities
beyond its participation in the network. One of the primary considerations of the
network design philosophy and of its management is the preservation of this autonomy.
As a consequence, administrative control of the computer systems has remained with
the individual facilities, while the responsibility for intercomputer communications
has been assumed by network management.
The management of the network is functionally distributed between two or-
ganizations. Fiscal policy, particularly the disbursement of funds, is determined by
the Information Processing Office of ARPA. The technical pursuit of the network
is the responsibility of the Chairman of the Network Working Group (NWG), who
is appointed by ARPA.
The NWG itself is composed of at least one member from each participating
site. It meets every three months and operates in a somewhat informal fashion. Its
main purpose is to propose and evaluate ideas for the enhancement of the network.
To this end, several subcommittees have been formed within the NWG, each involved
with a single major aspect of network operation. Their respective areas of inquiry
include the following:
? data transformation languages;
? graphics protocol;
? host-host protocol;
? special software protocol; and
? accounting.
The critical need for the timely dissemination of technical information through-
out the network community is satisfied by means of a three-level documentation
scheme. The most formal papers are called "Documents," and are issued by the
Chairman of the NWG as a statement of network technical policy. A "Request for
Comments" (RFC) is issued by any member of the NWG as a means of proposing
technical standards. RFC's are therefore technical opinions and serve to promote the
exchange of ideas among the NWG. An RFC Guide which indexes and defines the
status of all RFC's is published periodically by The MITRE Corporation. Finally,
RFC's, Documents, substantive memoranda, telephone conversations, site documents,
and other appropriate material are cataloged by the NIC at the Stanford Research
Institute (SRI), which periodically publishes a comprehensive index to these materials.
SRI has also developed two sophisticated software systems to enable a network user
to effectively utilize the information in the catalog files. The first of these is the
Typewriter Oriented Documentation Access System (TODAS). This system, as its
name implies, is intended to provide the teletype terminal user with appropriate capa-
bilities for manipulating the library catalogs. These facilities include text editing,
record management, keyword searching, and display of formatted results. The second
system, which is similar to TODAS but far more powerful, employs graphic display
devices with specially developed keyboards in place of the teletype.
THE COINS NETWORK
Introduction
The Community On-Line Intelligence System (COINS) was proposed in 1965 as
an experimental program. Its primary purpose is to assist in determining methods of im-
proving information handling among the major intelligence agencies.
The COINS network is currently operational as an experimental system. The
research that has been carried out to date has been concerned almost exclusively with the
means of sharing pertinent data among the network users. This is a particularly complex
problem in the intelligence community because of the variety of hardware, software, and
standards that are used. Studies are also underway to demonstrate the applicability of a
common network control language and a common data management system to be imple-
mented at all sites.
Configuration
COINS is a geographically distributed network of heterogeneous computers and
operating systems working through a central switch, an IBM 360/30. Linked to the
switching computer are a GE 635, and two Univac 494 installations (one of which is a
triple processor). The configuration is illustrated in Figure 4. Some agencies participate
in the network via terminal connection to one of the participating computer systems.
Communications
Communications are achieved in the COINS network by a centralized message
switch and conditioned, leased voice-grade lines. The lines, which connect each host com-
puter to the central switch, are operated in a full-duplex, synchronous mode at 2400 bps
using modems. The transmission system is completely secure, using cryptographic equip-
ment throughout the network.
A host computer may transmit a message of up to 15,000 characters to another
host; however, a message must be subdivided into segments of no more than 150 charac-
ters prior to transmission. All characters transmitted use the 7-bit ASCII code with an addi-
tional bit for parity. Each segment of a message must be sent and acknowledged.
1 COINS is no longer operational; it is included here as a matter of historical record.
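
The framing rules just described can be illustrated with a short sketch. It is not COINS software; the segment loop, the even-parity choice, and all names are assumptions made for the example.

```python
# Sketch of the COINS transmission rules: a message of up to 15,000
# characters is divided into segments of at most 150 characters, every
# character travels as 7-bit ASCII plus a parity bit, and each segment must
# be acknowledged before the transfer is considered complete.

MAX_MESSAGE_CHARS = 15000
MAX_SEGMENT_CHARS = 150

def segments(message):
    assert len(message) <= MAX_MESSAGE_CHARS, "message too long"
    for i in range(0, len(message), MAX_SEGMENT_CHARS):
        yield message[i:i + MAX_SEGMENT_CHARS]

def encode_char(ch):
    """7-bit ASCII code with one added parity bit (even parity assumed)."""
    code = ord(ch) & 0x7F
    parity = bin(code).count("1") & 1
    return (code << 1) | parity

def send_message(message, send_segment):
    """send_segment returns True once the segment has been acknowledged."""
    for seg in segments(message):
        encoded = [encode_char(c) for c in seg]
        while not send_segment(encoded):
            pass                        # re-send until acknowledged

if __name__ == "__main__":
    sent = []
    send_message("QUERY FILE 42 " * 20,                 # 280 characters
                 lambda seg: (sent.append(seg), True)[1])
    print(len(sent), "segments")                        # -> 2 segments
```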
Figure 4 COINS Configuration (diagram not reproduced; it shows the IBM 360/30 central switch linked to a GE 635, a Univac 494, and a triple-processor Univac 494, with external organizations connected through the participating systems.)
Usage
COINS is being used experimentally to enable various intelligence agencies to
share their data bases with each other. These data bases are constantly changing, and the
responsibility for building, maintaining and updating a data base rests solely with its
sponsor. Users at terminals cannot change the data bases: they can only query them. A
response time of less than 15 minutes is the goal, but in practice it ranges from five minutes
to two hours. The response time achieved is dictated to a great extent by the workload of
the file processors responding to interrogations.
Management
The management of the COINS network is vested in the Project Manager who is
responsible for the design and operation of the network. He is assisted by a Subsystem
Manager from each of the participating agencies who represents the interests of his agency.
One of the more critical problems faced by the Project Manager is the establish-
ment of acceptable procedures governing the inclusion of files. Currently, through a
formalized nomination procedure, a network user may request that a file maintained by
one of the participating agencies be made available for network access. The Project
Manager coordinates such requests by determining whether other users also require the
files or by establishing the necessary justifications. Subsequently, the request is forwarded
to the particular agency, which maintains the exclusive right to accept or deny the request.
A forum for the presentation and discussion of interagency problems is provided
by four panels, each consisting of one or more individuals from each agency. Although
the panels can make specific recommendations, final decisions rest exclusively with the
Project Manager and Subsystem Managers. The User Support Panel is responsible for con-
ducting training seminars in network usage and for distributing network documentation
among the users. The Security Panel is tasked with investigating procedures for ensuring
adequate security on the network computers. The gathering and evaluation of network
performance statistics is the responsibility of the Test and Analysis Panel. Finally, the
Computer and Communications Interface Panel is concerned with the system software and
network communications, and the protocol used in network operation.
THE CYBERNET1 NETWORK
Introduction
The CYBERNET network is a nationwide commercial network offering computing
services to the general public. CYBERNET is operated as a division of the Control Data
Corporation, and represents a consolidation of their former Data Center operation. By
interconnecting the individual service centers, CDC feels that the user is offered several
unique advantages which include the following:
? better reliability, by offering local users a means for accessing a remote com-
puter in the event of local system failure;
? greater throughput, by allowing local machine operators to transfer parts of an
extra heavy workload to a less busy remote facility;
? improved personnel utilization, by allowing the dispersed elements of a corpora-
tion to more readily access one another's programs and data bases; and
? enhanced computer utilization, by allowing the user to select a configuration
which provides the proper resources required for the task.
Configuration
CYBERNET is a distributed network composed of heterogeneous computers,
mainly CDC 6600's and CDC 3300's linked by wideband lines. Figure 5 gives a geographic
picture of CYBERNET and a partial inventory of its hardware.
The 6600's are considered the primary computing element of the network and are
referred to as "centroids" where many jobs are received and processed. Future centroids
will include a 7600 and other CDC machines to be announced. The 3300's serve as "front
ends" and concentrators for the 6600; they are referred to as "nodes." In addition, small
satellite computers can be used as terminals to the CYBERNET network; they are dis-
tinguished by the fact that they have remote off-line processing capabilities and are able to
do non-terminal work while acting as a terminal. These satellites include CDC 3150's,
CDC 1700's, and lower scale IBM 360's. Figure 6 gives some typical system configurations.
CYBERNET supports essentially four types of terminals:
? Interactive/conversational (MARC2 I);
? Low-, medium-, and high-speed peripheral processors (MARC II, III, IV);
? Small- to medium-scale satellite computers (MARC V); and
? Large- to super-scale computers with terminal facilities (MARC VI).
1 CYBERNET is a registered trademark of the Control Data Corporation.
2 Multiple Access Remote Computer.
Figure 5 The CYBERNET Network (map not reproduced; source: the Control Data Corporation. It shows CDC 6600 and CDC 3300 data centers in cities such as Atlanta, Dallas, and Houston, connected by CDC wideband lines, wideband lines on order, and voice-grade lines.)
TYPICAL 6600 CONFIGURATION
CORE AND SECONDARY MEMORY: 131K MAIN CORE; 2 167-MILLION-WORD DISKS
OTHER COMPONENTS: SCOPE OPERATING SYSTEM; DEPENDENT SUBSYSTEMS - FORTRAN, COBOL, ETC.; EXPORT 8231 WIDEBAND REMOTE TERMINAL SUBSYSTEMS; SYSTEM 2000 - A BATCH OR INTERACTIVELY ACCESSIBLE LIST-STRUCTURED DATA BASE MANAGEMENT SYSTEM

CDC 6400
CORE AND SECONDARY MEMORY: 131K MAIN CORE; 6638 DISK FILE; 848.4 MULTIPLE SPINDLE DISK DRIVE
OTHER COMPONENTS: KRONOS TIME SHARING OPERATING SYSTEM; DEPENDENT SUBSYSTEMS - FORTRAN, COBOL, ETC.; TELEX - TELETYPEWRITER COMMUNICATIONS MODULE AND SWAPPING EXECUTIVE; EXPORT 200 - CDC 200 BATCH COMMUNICATIONS MODULE; EXPORT 8231 - WIDEBAND TERMINAL COMMUNICATIONS MODULE; IMPORT 6600 - WIDEBAND LINK TO 6600 MODULE WITH INPUT CONCENTRATION AND OUTPUT DISPERSION FACILITY

TYPICAL 3300 CONFIGURATION
CORE AND SECONDARY MEMORY: 131K MAIN CORE; 854 DISK DRIVES
OTHER COMPONENTS: MASTER MULTIPROGRAMMING OPERATING SYSTEM; DEPENDENT SUBSYSTEMS - FORTRAN, COBOL, ETC.; SHADOW - COMMUNICATIONS AND MESSAGE SWITCHING SUBSYSTEM; SHADE - RECORD MANAGING SUBSYSTEM

Figure 6 Typical CYBERNET Configurations
Terminals are categorized in the above manner to indicate their hardware and/or software
characteristics. For example, the CDC 200 User Terminal is the CYBERNET standard
for low- and medium-speed devices; other devices which have been programmed to
resemble this terminal include the CDC 8090, the CDC 160A, the IBM 1130, the Univac
9200, UCC's COPE series, the IBM 360/30 and higher, and the Honeywell 200.
Software available through CYBERNET includes FORTRAN, COBOL, COMPASS
assembly language, ALGOL, SIMSCRIPT, GPSS, SIMULA, JOVIAL, BASIC, the SYS-
TEM 2000 Data Management System, the EASE structural analysis package, the
STARDYNE dynamic structural analysis package, and a large statistical library. Linear
programming systems include OPHELIE II, PDQ/LP, OPTIMA and NETFLOW (trans-
portation).
Communications
The communications facilities of the CYBERNET network consist of two pri-
mary elements: the transmission system and the nodes. The transmission system itself
includes four major components: lines, modems, multiplexers, and switches.
CYBERNET employs a variety of lines connecting terminals with computers and
computers with one another. For the most part the lines are either switched or leased
lines, but private lines are occasionally used, and at least one satellite communications
link is in use. Switched lines are operated at low speeds, and include both local and
Direct Distance Dial (DDD) facilities. Leased lines include Foreign Exchange (FX)
facilities and point-to-point connections. Measured and full period inward WATS lines
are also provided for operation at moderate speeds. Finally, wide-band full period lines
are used between computer complexes.
A corresponding complement of modems is used throughout CYBERNET.
Typewriter-like terminals are supported by Western Electric 103A modems, operating
at rates of up to 300 bps. Medium speed terminals use Western Electric 201A and 201B
modems, operating at 2000 bps and 2400 bps respectively on switched and leased lines.
High speed terminals use Milgo and ADS modems operating at up to 4800 bps on leased
lines. Western Electric 303 modems are used on the wideband lines, operating at
40,800 bps.
Multiplexers are used to increase the transmission efficiency of voice-grade
lines supporting low-speed terminals. The principal multiplexing configurations are
designed to drive the leased-lines at their full capacity of 2400 or 4800 bps by operating
as many as 52 low-speed devices simultaneously on the same line. Cost savings are
realized by having low-speed terminal users dial in to the local multiplexers rather than
directly to a remote computer.
Western Electric line switches have been used throughout CYBERNET to
establish terminal-to-computer and computer-to-computer connections. The switches
are operated similarly to a telephone exchange system. The switches are not dependent
on any of the computer systems, providing a highly reliable mode of operation.
CYBERNET supports two types of nodes: remote job entry and conversational.
Each type of node can concentrate message traffic, perform message switching, and pro-
vide a user processing capability. The remote job entry node is a CDC 3300 operating
with the MASTER multiprogramming operating system. A special subsystem called
SHADOW has been developed for this configuration to provide the necessary support for
the communications and message-switching functions of the nodes. The SHADOW soft-
ware is capable of supporting remote-job entry from typewriter-like and CDC 200-Series
terminals. Communication from the 3300 to either another 3300 or to a 6600 is also
supported by SHADOW.
The conversational nodes of CYBERNET are CDC 6400's operating under an ex-
tended version of the KRONOS time-sharing operating system. At the present time, the
6400 is capable of supporting teletypes in a conversational mode, and remote batch 200-
Series terminals. Planned additions to the system include communications capabilities
for 3300 and 6600 support and complete message-switching facilities.
Usage
The CYBERNET Network is intended to make the computer utility concept
available to all of its commercial users by offering the following services:
? "super-computer" processing;
? remote access;
? a multi-center network;
? file management; and
? an applications library and support.
Load sharing, data sharing, program sharing and remote service are possible over
the network. The CDC 3300 nodes are used for remote job entry, and the CDC 6400 is
used for time sharing. Both nodes can also serve as front ends or concentrators, can
relay messages, and can process jobs. CYBERNET's nodes are intended to provide the
following facilities:
? generalized store-and-forward message switching;
? the ability to send work to a system that is not loaded;
? the ability to send work to another system when one is inoperative;
? the ability to utilize a unique application at a particular location;
? the ability to access a data base at another location; and
? the ability to utilize a specific computer system.
Management
The management of the CYBERNET network is centralized, vested in the Data
Services Division of CDC. All activities including hardware/software development,
resource accountability, and documentation development and dissemination are con-
trolled through this central office.
THE DISTRIBUTED COMPUTER SYSTEM
Introduction
The Distributed Computer System is an experimental computer network being
developed by the Information and Computer Sciences Department at the University of
California at Irvine. The immediate goals of the project are to design, construct, and
evaluate the intercomputer communications network.
The Distributed Computer System is currently in the planning stage. Upon com-
pletion of the overall design, the communications interfaces are to be constructed,
followed by an experimentation program using small computer systems.
Configuration
When the Distributed Computer System at Irvine becomes operational, it will
consist primarily of a store-and-forward communications system with a uni-directional,
ring-structure topology. Messages will be forwarded around the ring (which is to be
composed of two-megabit coaxial cables) until the appropriate destination is reached.
Figure 7 illustrates this topology.
Figure 7 The Distributed Computer System Topology
The initial Irvine network will consist of heterogeneous mini-computers located
at several nodes on the Irvine Campus. A simple device such as a teletype can be con-
sidered a host computer on this network. Figure 8 gives the planned inventory of
hardware.
FORTRAN and BASIC will be provided through the network; plans call for other
capabilities to be added later.
NODES: VARIAN 620/i (8K 16-BIT CORE); VARIAN 620/a (8K 16-BIT CORE); MICRO 800 (8K 16-BIT CORE); 3 TELETYPES
SECONDARY MEMORY: IBM 2314 (ONE SPINDLE)

Figure 8 Inventory of Planned Hardware
Communications
Two principal elements comprise the communication subsystem: the transmis-
sion lines and the communications interface. The transmission lines actually form
three distinct subnets, as Figure 7 shows. The primary subnet forms a closed ring
connecting all of the nodes. This is the path which is normally used for all message
traffic. The other two subnets, one connecting the even nodes, the other the odd, are
included solely for reliability. In the event a particular node should fail, the two
adjoining nodes could communicate directly over the backup link.
All of the transmission paths will be coaxial cable carrying digital transmissions
using pulse-code modulation (PCM). The links are operated using a simplex protocol
with all message traffic traveling in one direction around the ring. Data rates of two
million bps are expected to be used in the initial configuration; this rate may be in-
creased to as high as six million bps if conditions warrant.
The communications interface is functionally illustrated in Figure 9. Its primary
components and their functions are as follows:
? an input line selector switch which automatically switches to the backup
input line whenever the primary line drops out for a predetermined period
of time;
? a pair of passive PCM repeaters which autonomously forward messages
through the interface;
? a repeater selector which activates the backup PCM repeater in the event that
the primary unit fails;
? a shift register which provides intermediate storage for messages originating
from and delivered to the host computer; and
? logic modules which operate the previously mentioned components, and de-
termine whether a message is to be delivered or forwarded.
Figure 9 Communications Interface (diagram not reproduced; it shows the primary and backup input and output lines, the input line selector switch, the primary and backup PCM repeaters with their repeater selector switch, the 240-bit shift register, the interface logic, and the connection to the host computer.)
The communications interface can operate in one of two modes: idle or busy.
In the idle mode, the interface can accept messages from either the transmission line or
its host. In the former case, the message header is examined to determine whether the
destination is the local host; if not, the message is ignored and the PCM repeaters forward
it to the next node. If the message is to be delivered, it is removed from the line, placed
in the shift register, and checked for errors. If none are detected, a positive acknowledg-
ment is generated and sent to the originating host, and the message is passed to the
destination host. If errors are detected, a retransmission request is sent to the originating
host.
Upon receipt of a message from its host, the communications interface places
the message in the shift register and on the output lines and goes into the busy mode.
In this condition, the interface routinely forwards all messages received over the lines,
checking only for acknowledgments or retransmission requests. The receipt of a re-
transmission request indicates that the previously transmitted message was received in-
correctly by the destination node. The interface subsequently places the message on the
output lines again. A positive acknowledgment, indicating receipt of an error-free
message, is passed to the host, and the interface returns to the idle state.
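
The idle/busy behavior just described is summarized in the following sketch. The real interface is hardware built around PCM repeaters and a 240-bit shift register; this Python rendering, including all field names and the error-check flag, is an assumption for illustration only.

```python
# Sketch of the ring interface: in the idle state it forwards messages not
# addressed to its host, delivers and acknowledges those that are, and asks
# for retransmission on errors; after putting its own host's message on the
# ring it stays busy, forwarding everything, until it sees the matching
# acknowledgment or retransmission request.

class RingInterface:
    def __init__(self, node_id, deliver_to_host):
        self.node_id = node_id
        self.deliver_to_host = deliver_to_host
        self.busy = False          # True while our own message circulates
        self.pending = None        # copy kept for possible retransmission

    def from_host(self, message):
        """Host hands over a message: place it on the ring, enter busy mode."""
        self.pending, self.busy = message, True
        return message             # goes onto the output line

    def from_ring(self, msg):
        """Handle a message from the ring; return whatever should go out."""
        if self.busy:
            if msg.get("ack_for") == self.node_id:        # delivery confirmed
                self.busy, self.pending = False, None
                return None
            if msg.get("retransmit_for") == self.node_id: # send it again
                return self.pending
            return msg                                    # just forward it
        if msg.get("dest") != self.node_id:
            return msg                                    # repeaters forward it
        if msg.get("check_ok"):
            self.deliver_to_host(msg)
            return {"ack_for": msg["source"]}             # positive acknowledgment
        return {"retransmit_for": msg["source"]}          # request a re-send

if __name__ == "__main__":
    iface = RingInterface("node3", deliver_to_host=print)
    out = iface.from_ring({"dest": "node3", "source": "node1",
                           "check_ok": True, "text": "hello"})
    print(out)    # -> {'ack_for': 'node1'}
```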
There are two conditions in which a message may circulate in the ring for a
protracted period, one of which is the non-existence of the destination node. The other
occurs if a message arrives at the destination node when the node interface is in the busy
state, unable to accept any messages. In most cases, if the message is allowed to circulate,
it will eventually arrive at the destination node while the interface is idle. An interesting
exception, however, is the case where two nodes independently and simultaneously send
messages to each other. The two messages would circulate forever, since each destination
is awaiting an acknowledgment which the other cannot generate. At the present time
there is no facility for preventing infinite message loops, although such a capability will
probably be added later.
Usage
Because this network is primarily an experimental communications system, very
little has been done to provide software to assist users of the network. Load sharing,
program sharing, data sharing and remote service are not anticipated in the near future.
User software to provide these features and host/host protocol will be developed by the
university's computer center once network viability has been demonstrated.
Management
At the present time, the Distributed Computer System is highly localized, in-
volving mainly the resources of the Information and Computer Sciences Department.
Consequently, there has been no need for a formalized management structure.
DATA LINK SUPPORT (DLS)
Introduction
The DLS system is a communication facility which connects the National
Military Command System Support Center (NMCSSC) with the Alternate National
Military Command Center (ANMCC). Its primary purpose is to provide an automated,
high-speed capability allowing data bases to be exchanged between the sites and to
facilitate computer load leveling by allowing remote program execution.
DLS was developed by the IBM Corporation for the Defense Communications
Agency (DCA) during the period June 1969 to June 1970. Final testing was completed
in September 1970. The system is currently undergoing further tests and evaluations.
Configuration
Data Link Support (DLS) transmits jobs and data over a 40,800 bps leased line
between IBM 2701 Data Adapter Units connected to two IBM 360 computers (Model
50's or larger) operating in a point-to-point mode. Data is hardware encrypted prior to
being transmitted and is decoded when received. DLS is currently being operationally
tested using a 360/65 at the NMCSSC and a 360/50 at the ANMCC.
DLS is a software package that runs as a problem program in a single region
under OS/MVT. Standard OS software is available when using DLS. The DLS configura-
tion as currently used by the NMCSSC and the ANMCC is shown in Figure 10.
Figure 10 DLS Configuration (diagram not reproduced; it shows DLS running under MVT on the 360/65 at one end and on the 360/50 at the other, connected through cryptographic devices over a 40.8 kbps line.)

Communications
DLS operates between the Support Center and the Alternate Center using a
secure, leased wide-band line operated at 40,800 bps. The transmission line is terminated by an IBM 2701 Data Adapter at each end, connected to IBM 2870 Multiplexer
Channels. The link uses standard binary-synchronous communication protocol.
Software support for the operation of the link is derived from the DLS pro-
gram, a copy of which must be operating at each of the link end points. The portion of
the DLS program which drives the communications link is called the Binary-synchronous
Communication Controller (BCC).
The BCC consists of two primary modules, the Start/Restart Synchronizer
(SRS) and the Continuous Communication Access Method (CCAM). SRS is responsible
for recognizing the need to start DLS at the remote site, reading and spooling job decks,
despooling job decks, and disposing of data sets. SRS decodes operator requests and invokes
the necessary support routines to perform the desired function.
CCAM is responsible for maintaining an active channel program for the communi-
cation line. CCAM permits multitask usage of the communication link by supporting
software multiplexing and demultiplexing functions. The module is also responsible for
generating positive acknowledgments upon proper receipt of a message and for requesting
retransmission for lost or garbled messages. Message compaction and decompression
are also supported by CCAM, as is the gathering of statistics reflecting the performance
of the communication subsystem.
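
The software multiplexing idea can be sketched compactly. This is not the CCAM record format; the stream tags, the round-robin order, and all names are assumptions used for the example.

```python
# Sketch of multiplexing several tasks over one link: each task's traffic is
# tagged with a stream identifier and interleaved onto the single line, and
# the receiving end demultiplexes on that tag.

def multiplex(streams):
    """Interleave (stream_id, chunk) frames, one chunk per task per pass."""
    frames = []
    while any(streams.values()):
        for stream_id, queue in streams.items():
            if queue:
                frames.append((stream_id, queue.pop(0)))
    return frames

def demultiplex(frames):
    """Rebuild each task's data from the interleaved frames."""
    out = {}
    for stream_id, chunk in frames:
        out.setdefault(stream_id, []).append(chunk)
    return {sid: "".join(chunks) for sid, chunks in out.items()}

if __name__ == "__main__":
    tasks = {"job_decks": ["JOB1", "JOB2"],
             "data_sets": ["DS-A", "DS-B", "DS-C"]}
    print(demultiplex(multiplex(tasks)))
    # -> {'job_decks': 'JOB1JOB2', 'data_sets': 'DS-ADS-BDS-C'}
```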
Usage
The primary capabilities offered by DLS are data base transmission between
remote locations and remote job processing. Thus far DLS has been used primarily to
transmit data bases, rather than to achieve load leveling. DLS is used for program
sharing, but not extensively because of the large data bases in the operational environ-
ment of the NMCS. DLS is designed for batch processing and has no on-line capability.
A job requiring no data base transmission can be transmitted under operator con-
trol to the remote site, executed, and the output returned without modification to the
deck used when running locally. A job for remote execution which requires data sets
located at the local site must include DLS control cards to transmit those data sets.
The job is then placed in the reader destined for the appropriate site, or for whichever
site is more desirable if the job can run at either location.
Management
The DLS system is controlled by the National Military Command System
Technical Support (NMCSTS) Directorate of the Defense Communications Agency
(DCA) and is being implemented by the NMCSSC.
THE MERIT COMPUTER NETWORK
Introduction
The Michigan Educational Research Information Triad, Inc. (MERIT) network
is a cooperative venture among the three largest universities in Michigan: Michigan
State University, Wayne State University, and the University of Michigan. The central
purpose of this undertaking is the "development and implementation of a prototype
educational computing network, whereby the educational and computing resources of
each university may be shared as well as enhanced by each other."1
The development of the MERIT network is proceeding in two stages. The first
of these, funded by grants from the Michigan State Legislature and the National Science
Foundation, calls for the development and installation of all network hardware and
software modules by June 1971. Subsequently, network experimentation projects will
begin, advancing research in information retrieval and computer-aided instruction systems.
Configuration
MERIT is a distributed network of heterogeneous computers with nodes at
Michigan State University (MSU) in East Lansing, Wayne State University (WSU) in
Detroit, and the University of Michigan (UM) in Ann Arbor. Each host is connected to
the network through a communications computer (CC), a modified PDP-11/20 computer
with a special purpose operating system for communications. Data is transmitted over
2000 bps, voice-grade lines with eight lines connected to each CC. Figure 11 presents
an overview of the MERIT Computer Network.
UM runs Michigan Terminal System (MTS) on a duplex IBM 360/67; MTS can
service over fifty time-sharing terminals and several batch jobs. MSU uses a CDC 6500
with the SCOPE operating system. WSU has an IBM 360/67 and runs the MTS operating
system. Figure 12 presents an inventory of the host hardware at each of the three nodes.
Communications
The communications subsystem of the MERIT network consists of three
functional units: the host interface, the communications computer, and the telephonic
network. The interconnection of these modules along a typical communications segment
is illustrated in Figure 13.
1 Bertram Herzog, "MERIT Proposal Summary," 2nd Revision, 28 February 1970.
Figure 11 Overview of the MERIT Network (diagram not reproduced; source: the MERIT Computer Network. It shows the CDC 6500 running SCOPE at MSU, the IBM 360/67 running MTS at UM, and the IBM 360/67 running MTS at WSU, each attached to a communications computer (CC), a DEC PDP-11, with the CCs interconnected.)
The host interface is a specially designed hardware module which interconnects
the host computer with the communications computer (CC). This interface provides two
primary capabilities. First, it is capable of independently transmitting a variable-length
data record1 to (from) the memory of the CC from (to) the host computer, performing
whatever memory alignment operations are required by the different word configurations
of the two processors. Secondly, it provides a multiple-address facility which permits the
host to treat the CC as several peripheral devices. This greatly simplifies the host soft-
ware, since a dedicated pseudo-device can be allocated to each user or task requesting use
of the communications resources.
1 Record length is determined by a software parameter.
NODE: MSU
PROCESSOR: CDC 6500
MAIN CORE: 64K (60-BIT WORDS); 4K (12-BIT WORDS) FOR EACH OF THE TEN PERIPHERAL PROCESSORS
OTHER HARDWARE: 1 CDC DISK SYSTEM (167 MILLION 6-BIT CHARACTERS); 3 854 DISK STORAGE DRIVES (8.2 MILLION 6-BIT CHARACTERS PER DISK PACK); MODEL 33 TELETYPES; 2 CDC 200 REMOTE BATCH STATIONS; 1 217-2 REMOTE SINGLE STATION ENTRY/DISPLAY

NODE: UM
PROCESSOR: 360/67 (DUPLEX)
MAIN CORE: 6 UNITS (1.5 MILLION 8-BIT BYTES TOTAL); VIRTUAL MEMORY MACHINE
OTHER HARDWARE: 3 2314s (8 DRIVES EACH); 2 DATA CELLS (800 MILLION BYTES TOTAL); 2 HIGH-SPEED DRUMS (APPROXIMATELY 7.4 MILLION BYTES TOTAL); IBM 360/20 COMPUTER; IBM 1130 COMPUTER; IBM 2780 REMOTE JOB ENTRY TERMINAL; TERMINALS (IBM 2741, DATEL 30, 33/35 TELETYPES)

NODE: WSU
PROCESSOR: 360/67 (HALF DUPLEX)1
MAIN CORE: ONE MILLION BYTES2; VIRTUAL MEMORY MACHINE
OTHER HARDWARE: 2 2314s3 (8 DRIVES EACH); 2 DRUMS; TERMINALS (IBM 2741, TELETYPES, DATA 100)

1 A DUPLEX SYSTEM BECOMES OPERATIONAL IN JUNE.
2 AN ADDITIONAL 125K WILL BE ADDED WHEN THE DUPLEX SYSTEM BECOMES OPERATIONAL.
3 TWO MORE 2314s WILL BE ADDED IN JUNE.

Figure 12 Inventory of MERIT Host Hardware
Figure 13 MERIT Communications Segment (diagram not reproduced; it shows a host computer attached through a host interface to a PDP-11, which connects through a telephonic interface to the Michigan Bell system and, symmetrically, through a telephonic interface, PDP-11, and host interface to the remote host computer.)
The heterogeneous composition of the network has required the development of
two philosophically similar but functionally different host interfaces, one for the IBM
equipment, the other for the CDC system. The IBM interface attaches to a 2870 Multi-
plexer Channel and transmits data on eight parallel lines at rates of up to 480,000 bps.
The CDC interface, on the other hand, couples the CC with the CDC 6500 Data Channel
and its associated Peripheral Processor. Transmission is achieved over twelve parallel
lines at an expected rate in excess of 3,000,000 bps.
The central element of the communications subsystem is the CC. As Figure 14
shows, the CC is a PDP-11/20 computer with 16K 16-bit words of core memory,
augmented with interfaces that allow interconnection to the host computer and the
telephonic network. The primary responsibility of the CC is to allocate its resources
among messages to be transmitted, delivered, and forwarded. Software support is
derived from the Communications Computer Operating System (CCOS), a specially
developed multitasking monitor operating on the PDP-11. The present configuration of
CCOS requires approximately 8K words of core memory; the remaining 8K words are
used for message and message queue storage.
Figure 14 Communication Computer System (SOURCE: The MERIT Computer Network). The
figure shows the PDP-11/20 with 16K of core memory on its Unibus, together with a control
panel, a programmable interval timer, a KSR-35 Teletype, a paper tape reader/punch, the
host interface, and an 801C multiplexer interface serving four 201A interfaces and 201A
modems over eight direct-dial telephone lines.

Upon receipt of a message from the host interface, the CC translates the local
host character string into a standard ASCII code (unless the original message was in
binary, eliminating the need for this operation). A message header is generated by
CCOS and a 16-bit checksum is computed and checked by the CC hardware. The
message is stored, and a transmission queue entry is generated. The order in which the
queue is emptied and the physical link over which transmission takes place are subse-
quently determined by a CCOS task.
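The transmit path just described might be outlined roughly as follows. This Python sketch is only an approximation for the reader's benefit: the header fields and the simple additive checksum are invented stand-ins (the real CC computes its 16-bit checksum in hardware, and CCOS chooses the queue order and link).

    # Illustrative outline of the CC transmit path (not CCOS code).
    from collections import deque

    def checksum16(data: bytes) -> int:
        # A simple additive checksum stands in for the 16-bit hardware check.
        return sum(data) & 0xFFFF

    def to_ascii(local_text: str) -> bytes:
        # Local host character string translated to standard ASCII.
        return local_text.encode("ascii", errors="replace")

    def accept_from_host(body, destination, is_binary, queue: deque):
        payload = body if is_binary else to_ascii(body)
        message = {
            "header": {"dest": destination, "length": len(payload)},  # invented fields
            "checksum": checksum16(payload),
            "payload": payload,
        }
        queue.append(message)      # a CCOS-like task later picks the order and link
        return message

    outbound = deque()
    accept_from_host("RUN JOB 12", destination="UM", is_binary=False, queue=outbound)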
Each CC is capable of receiving messages from the others. In this event, a de-
termination of whether the message was received free of errors is made, using the mes-
sage checksum. If the message was error-free, an acknowledgment is returned to the
transmitting CC, allowing it to release its record of the message. If errors were detected,
a request for retransmission is returned in lieu of the acknowledgment. Upon receipt of
an error-free message, the receiving CC determines whether the message is for its host or
is to be forwarded. If the former, the message is queued for host interface activation
and subsequent transfer to the host memory; otherwise, the message is queued for
transmission toward its destination.
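The receiving side of the exchange might be sketched in the same illustrative spirit; the message layout and control signals below are invented, not the CCOS formats.

    # Illustrative receive path for a CC (message layout matches the sketch above).
    def on_message_received(message, my_host, host_queue, forward_queue, send_control):
        if (sum(message["payload"]) & 0xFFFF) != message["checksum"]:
            send_control("RETRANSMIT", message["header"])    # errors were detected
            return
        send_control("ACK", message["header"])               # sender may release its copy
        if message["header"]["dest"] == my_host:
            host_queue.append(message)       # queue for host interface activation
        else:
            forward_queue.append(message)    # queue for transmission toward destination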
The telephonic network comprises the physical transmission medium and its
termination equipment. The MERIT network employs voice-grade, dial-up lines ex-
clusively. Some economy in line costs is achieved by sharing the existing tri-university
Telpak lines on a competitive basis with normal voice traffic. Each site supports four
Western Electric 201A modems, operating asynchronously in four-wire, full-duplex
mode at 2000 bits per second. Dial-up connections are made by a Western Electric 801C
Data Auxiliary Set, which is multiplexed among the four modems. Because the modems
operate in a four-wire configuration, the 801C is designed to allocate lines in pairs for
each modem. Moreover, since the 801C is completely controlled by the PDP-11 soft-
ware, it is possible to change the number of lines between two sites in accordance with
the current traffic volume, achieving an optimum cost/performance tradeoff within the
constraints of available bandwidth.
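Because the 801C is under program control, the number of dialed connections between two sites can follow the traffic. The fragment below is a speculative sketch of such a policy; the thresholds and names are assumptions, not the MERIT implementation.

    # Speculative sketch: adjust dialed line pairs between two sites to the traffic.
    def adjust_lines(queued_bits, active_pairs, max_pairs=4,
                     line_bps=2000, high_water=30.0, low_water=5.0):
        # Estimated seconds of backlog on the current set of line pairs.
        backlog = queued_bits / (line_bps * max(active_pairs, 1))
        if backlog > high_water and active_pairs < max_pairs:
            return active_pairs + 1    # dial another pair of lines via the 801C
        if backlog < low_water and active_pairs > 1:
            return active_pairs - 1    # hang up a pair to save line charges
        return active_pairs

    print(adjust_lines(queued_bits=500_000, active_pairs=1))   # -> 2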
Usage
MERIT is seeking knowledge of the problems and solutions of operating a net-
work in an established educational computing environment; through the development
and implementation of a network, they expect to make contributions to computer and
educational technology. MERIT management feel that a network linking the computers
of the participating universities will have a synergistic effect on the combined resources
of the separate computing facilities. Connecting machines with significantly different
system characteristics enables the user to take advantage of the computer best suited for
his work. For example, the University of Michigan's system was especially designed for
time sharing; it will be available to those at other nodes needing a time-sharing capability.
Because of its speed, the CDC 6500 at MSU is well suited for compute-bound jobs;
once it is connected to the network, personnel at other universities will be able to take
advantage of its facilities. Interconnecting computer systems can make possible a
cooperative policy for acquiring some of the more unusual peripheral devices.
The MERIT Network is designed to provide a vehicle for a rapid exchange of
information that would not be possible otherwise and to bring researchers in closer
contact with those doing similar work at different locations, thereby eliminating much
duplication of effort.
MERIT will provide remote service that will be transparent to the user; his job
will look like a standard batch job except for the addition of a few network routing
commands. MERIT feels that load sharing is infeasible on a per program basis.
Ultimately MERIT hopes to provide a service whereby real-time terminal users
will be able to concurrently control programs on two or more host systems. This
"dynamic communication" would enable the user to control this process, operating
the programs simultaneously or sequentially and transferring data between them.
Dynamic communication will facilitate "dynamic file access," the ability of a user at
one node to access a file at another node without transferring the file to the user's node.
MERIT feels that implementation of this capability will be difficult.
A standard data description language has been proposed by the Michigan
Interuniversity Committee on Information Systems (MICIS) to facilitate transmission
of data between computers, systems, and programs and to provide a convenient and
complete format for the storage of data and its associated descriptor information.
MICIS proposes a standard data set composed of two parts:
• a directory describing the logical and physical storage of the data and the
properties of its variables; and
• the data matrix.
The directory is to be written in a standard character set, facilitating maximum trans-
ferability between various character codes. This restriction does not apply to the actual
data described by the directory, however; data can be highly machine dependent,
although its description is written in a standard character set.
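A concrete, though entirely hypothetical, rendering of such a standard data set might look like the following sketch, in which the directory is expressed in plain standard characters while the data matrix remains in a machine-dependent form.

    # Hypothetical rendering of the MICIS two-part standard data set.
    directory = {
        # Written in a standard character set so any node can interpret it.
        "storage": {"records": 3, "record_format": "fixed", "record_length": 12},
        "variables": [
            {"name": "STATION", "type": "character", "width": 4},
            {"name": "TEMP",    "type": "integer",   "width": 4},
            {"name": "FLOW",    "type": "integer",   "width": 4},
        ],
    }
    # The data matrix itself may be machine dependent (e.g., raw host bytes).
    data_matrix = b"ANNA\x00\x00\x00\x48\x00\x00\x01\x10" * 3

    standard_data_set = {"directory": directory, "data": data_matrix}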
The current plan is that data will be converted to ASCII prior to being trans-
mitted over the network; upon receipt by the object node the data will be converted to
a compatible form for processing on the target hosts. Programs and data must be trans-
mitted in a form acceptable to the target host. MERIT feels that the network will
eliminate the need for physical program transferability and that all users can share pro-
grams that exercise special features offered by a node but have not been written in a
computer independent manner.
Management
The participants in the MERIT network are independent universities, each vying
with the others for students, faculty, and research grants. One of the primary goals of
the network is to maintain this autonomy at the maximum level consistent with effective
network operation. Consequently, each of the universities is responsible for access
authorization, resource accounting, systems development, and local hardware expansion.
Communications facilities, intercomputer protocol, and similar aspects of the network are
the proper concerns of the network management.
Network management is vested in the MERIT Central Staff, comprising a
director, an associate director from each university, and a small technical staff. The
Director is appointed by the Michigan Interuniversity Committee on Information Sys-
tems (MICIS), the predecessor of MERIT, which is composed of representatives from
each of the three participating institutions. Each associate director's position is filled
by nominations from the university, selection by the Director, and approval by MICIS.
The Director is responsible for the technical development of the network and
for the administration of its fiscal resources. He relies on his associate directors to insure
that the implementation at each university is proceeding on schedule. The associate
directors are also responsible for promoting and encouraging network activities at their
respective institutions. Moreover, each associate director acts as a liaison between
MERIT and his university, to ensure that the university's interests are equitably served
with respect to the demands placed upon its resources.
The distribution of system documentation throughout the user community is
the joint responsibility of MERIT and the individual universities. At the present time,
MERIT disseminates information relevant to the design and operation of the communica-
tions subsystem and its interfaces. Each university is required to maintain and distribute
its local facilities documentation and is responsible for issuing notices reflecting any
significant changes.
The MERIT staff is developing procedures to closely monitor the performance
of the network. Statistics gathered on message errors, traffic distribution, and overall
throughput will significantly help in adapting the original network design to actual
usage patterns. Moreover, a study of machine utilization should facilitate the develop-
ment of an equitable interuniversity rate structure.
NETWORK/440
Introduction
Network/440 is an experimental network sponsored by the Watson Research
Center of the IBM Corporation located at Yorktown Heights, New York. Its primary
purpose is to facilitate the study of computer networks and to provide a vehicle for an
experimentation program in network applications.
The network is currently operational using a 360/91 MVT region as a central
switch. This architecture was chosen because of the ease with which performance
statistics could be gathered. However, because of the inherent disadvantages of the
centralized topology, the network is to become distributed.
Configuration
At the present time Network/440 is a centralized network of homogeneous
computers as shown in Figure 15. The grid node of this network is a region of an
IBM 360/911 running under MVT. This node acts as a central switch for the store-
and-forward communications, presently being carried out over 40,800 bps leased lines.

1 This network is no longer operational; it is included for historical purposes.

Figure 15 Logical Structure of Network/440. The figure shows several user nodes, each
with a user node interface, connected to the grid node (a 360/91 partition) containing
the network controller and the communications subsystem.
The present and expected nodes in the network are listed in Figure 16.
LOCATION                          MACHINES
IBM WATSON RESEARCH CENTER        360/91/MVT (CURRENTLY IN THE NETWORK)
IBM WATSON RESEARCH CENTER        360/67/CP (CURRENTLY IN THE NETWORK)
IBM (BOULDER, COLORADO)           360/65/MVT
OTHER IBM INSTALLATIONS           360/MVT
NYU                               CDC 6600
YALE                              360/44
IBM (SAN JOSE, CALIFORNIA)        360/91/65/65

Figure 16 Nodes in Network/440
Standard OS/360 software is available to the user over the network.
1This is the same 360/91 that is linked to the TSS Network.
Communications
Network/440 is a centralized network, comprising a grid node which performs
all of the communications support functions, and a set of transmission links which
extend radially outward from the grid node to the host computers. The transmission
links are leased wide-band half-duplex lines operated at 40,800 bps using Western
Electric 300-Series modems. Computer terminations are provided by IBM 2701 Data
Adapters connected to 2870 Multiplexer Channels. The links are operated using the
standard IBM Basic Telecommunications Access Method (BTAM).
Special communications capabilities are provided by a problem program
operating in a single region of a 360/91 MVT system. The program comprises six
primary segments, performing network control, operator interface, error recovery, line
handling, message queue management, and transaction recording functions. The net-
work control segment is responsible for handling user jobs and decoding appropriate
network control messages. The operator interface handles messages going to and from
the central machine operator. The error recovery segment is responsible for retrans-
mitting messages which were lost or garbled, and for attempting to re-synchronize the
lines after a line loss. The line handler provides the interface with the BTAM software
for forwarding messages to the host computers. The message queue manager is respon-
sible for queuing messages in core for forwarding if the target host is available, or on a
disk if not. In this way, a host will always receive its messages whether or not it is
operational at the time the message is sent. Finally, the transaction recording segment
maintains an audit tape of all message traffic passing through the central switch.
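The queue-management idea, holding a message in core when its target host is reachable and spooling it to disk when it is not so that no message is lost, can be sketched roughly as shown below. The names and structures are assumptions for illustration and are not the Network/440 program itself.

    # Rough sketch of a store-and-forward queue manager with a disk fallback.
    import json, pathlib

    class QueueManager:
        def __init__(self, spool_dir="spool"):
            self.core_queues = {}                     # host -> list of messages
            self.spool = pathlib.Path(spool_dir)
            self.spool.mkdir(exist_ok=True)

        def enqueue(self, host, message, host_available):
            if host_available:
                self.core_queues.setdefault(host, []).append(message)
            else:
                # Host is down or disconnected: keep the message on disk so it
                # is delivered whenever the host becomes operational again.
                path = self.spool / f"{host}.queue"
                with path.open("a") as f:
                    f.write(json.dumps(message) + "\n")

        def drain_spool(self, host):
            path = self.spool / f"{host}.queue"
            if not path.exists():
                return []
            messages = [json.loads(line) for line in path.read_text().splitlines()]
            path.unlink()
            return messages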
Usage
Network/440 is a research project being used to gain a better understanding of
computer networking; for this reason the centralized approach was taken in its design.
The grid node monitors all messages passing through the network and makes network
measurements more easily than would be the case on a distributed system.
Load sharing, data sharing and program sharing are possible over the network.
The grid node provides a centralized catalog of all data sets available for network usage,
but each node maintains control over its own data sets. One of the more important
functions of the network is transferring data sets. This currently requires the user to
spell out exactly what he is referring to when manipulating files. Current plans call for
making these operations more user transparent. Network/440 is currently a batch-
oriented network with plans to offer interactive facilities in the future.
Network/440 has developed several control languages, each providing the user
with more capability and flexibility in a less machine-oriented form. Planned expansions
of this control language include the following:
• grid node conversion of local job control language into the language required
by the target computer or grid node mapping of one job control language into
the target machine's job control language;
• grid node conversion of floating point numbers, integers, and character strings
from one machine structure to any other; and
• automatic job scheduling to achieve load leveling among like machines or, by
job control language conversions, among unlike machines.
IBM's concern about network usage of proprietary data has prompted the de-
velopment of a grid node usage matrix that maintains a list of resources available to a
specific user. Additionally, a node may disconnect itself from the network to process
proprietary data; if this occurs, incoming messages are stored until the node is reconnected.
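The usage matrix amounts to a per-user list of permitted resources consulted at the grid node; a minimal sketch, with invented user and resource names, follows.

    # Minimal sketch of a grid-node usage matrix check (names are invented).
    usage_matrix = {
        "USER_A": {"YKT.DATASET.PAYROLL", "BOULDER.FORTRAN.LIB"},
        "USER_B": {"BOULDER.FORTRAN.LIB"},
    }

    def authorized(user, resource):
        return resource in usage_matrix.get(user, set())

    assert authorized("USER_A", "YKT.DATASET.PAYROLL")
    assert not authorized("USER_B", "YKT.DATASET.PAYROLL")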
Management
Because of the nature of Network/440, no formal management structure exists.
The network is administered as a research project of the IBM Corporation.
THE OCTOPUS NETWORK
Introduction
The Octopus network is a highly integrated system providing operational support
for the research activities of the Lawrence Radiation Laboratory (LRL). The network
was developed and is operated by LRL under the auspices of the United States Atomic
Energy Commission. The computation requirements of LRL have necessitated the use
of several large computer systems; the purpose of the Octopus network is to integrate
these systems into a unified computer complex. In satisfying this responsibility, Octopus
performs two primary functions:
• it provides the user with a single point of access to the several computers; and
• it allows each of the computers to access a large centralized data base.
Octopus was first conceived in the early 1960's and became operational in 1964.
As the computation center has grown in size and complexity, Octopus has been expanded
to meet changing needs. At the present time, the system services about 300 remote
terminals, four main computers, and a trillion-bit data base.
Configuration
Octopus is a heterogeneous network of computers including two CDC 6600's,
two CDC 7600's, and, in the future, a CDC STAR. Each of these worker (or host)
computers is operated in a time-sharing mode and is linked to a central system providing
two features:
• a centralized hierarchy of storage devices (a centralized data base) shared by
the worker computers; and
• a provision for various forms of remote and local input and output, permitting
the network to be viewed as a single computing resource.
Octopus uses a store-and-forward communications protocol. Communications lines
between workers are 12 megabit, hardwire cables. Figure 17 gives a graphic description
of the Octopus system.
Octopus can be more easily visualized as two independent, superimposed
networks:
• the File Transport Subnet, which is a centralized network consisting of
worker computers, the Transport Control Computer (a duplex PDP-10
which serves as the grid node), and the central memory system (disk,
data cell, Photostore); and
• the Teletype Subnet (Figure 22), which is a distributed network consisting of
worker computers, three PDP-8's (each servicing up to 128 teletypes) and
the Transport Control Computer (the duplex PDP-10).
A third network, not yet installed, will comprise remote I/O terminals supported by
duplex PDP-11's. While the networks can be considered logically independent, they
are interconnected to provide alternate routes for data; for example, the PDP-10 pro-
vides an alternate path between the interactive terminals and the worker computers.
Figure 18 shows some of the major hardware components in the Octopus system.
The Octopus network also supports a Television Monitor Display System (TMDS),
shown in Figure 19. The purpose of TMDS is to provide a graphic display capability,
with monitors distributed throughout the facility. Bit patterns representing characters
and/or vectors are recorded on the fixed-head disk, which operates at a synchronous
speed compatible with the standard television scan rate. Sufficient storage is available
on the disk to store 16 rasters of 512 x 512 black or white picture points. The addition
of a crossbar switching system will allow a particular raster to be displayed on several
monitors simultaneously.
Figure 17 The Octopus Network. The figure shows the worker computers (two 6600's, two
7600's, and a STAR to be introduced into the network in the future), each with its own
peripheral storage, connected to the dual-processor PDP-10 Transport Control Computer and
its centralized data storage (GPL disc, .8 x 10^9 bits; data cell, 3.2 x 10^9 bits; Photostore,
10^12 bits) with a TMDS display. Three PDP-8's, each servicing 128 teletypes, a dual-
processor PDP-11 for remote I/O (printers and card reader, to be delivered), and an Evans
and Sutherland line drawing system complete the network. The PDP-10 is a host when a
user wishes to manipulate files.
PROCESSOR: CORE SIZE; OTHER HARDWARE

WORKER COMPUTERS1
  6600 L: 128K; 225 MILLION 60-BIT WORDS (DISK)
  6600 M: 128K; 225 MILLION 60-BIT WORDS (DISK)
  7600 R: 65K SMALL CORE, 500K LARGE CORE; 167 MILLION 60-BIT WORDS (DISK), .8 MILLION 60-BIT WORDS (DRUM)
  7600 S: 65K SMALL CORE, 500K LARGE CORE; 167 MILLION 60-BIT WORDS (DISK), .8 MILLION 60-BIT WORDS (DRUM)
  STAR2: 500K (64-BIT) WORDS; 167 MILLION 64-BIT WORDS (DISK)

TRANSPORT CONTROL COMPUTER
  PDP-10 (DUAL PROCESSOR): IBM 2321 DATA CELL (3.2 x 10^9-BIT CAPACITY); IBM 1360 PHOTOSTORE (10^12-BIT CAPACITY); GENERAL PRECISION LIBRASCOPE DISK (.8 x 10^9-BIT CAPACITY); TMDS (TELEVISION MONITOR DISPLAY SYSTEM); EVANS & SUTHERLAND LINE DRAWING SYSTEM

PDP-8 CONCENTRATORS
  3 PDP-8s: 8K; UP TO 128 TELETYPES ON EACH PDP-8

REMOTE I/O2
  PDP-11 (DUAL PROCESSOR): TERMINALS TYPICALLY CONSISTING OF A PDP-8 CONNECTED TO I/O DEVICES (READERS AND PRINTERS)

1 THE LETTER TO THE RIGHT OF THE WORKER COMPUTER IS AN INTERNAL LRL DESIGNATION.
  THESE LETTERS ALSO APPEAR IN FIGURE 17.
2 NOT YET OPERATIONAL.

Figure 18 Octopus Hardware
Figure 19 Television Monitor Display System (TMDS). The figure shows the file transport
system (file transport channels to the CDC worker machines, the PDP-10 core memory buffer,
the secondary storage hierarchy, and the PDP-10 CPU) feeding the TMDS controller by direct
memory access; the controller drives a fixed-head disk (32 tracks, 3600 rpm, 16 channels,
2 tracks per channel) whose read/write and track-switching electronics feed a 16 x 64
electronic crossbar switch serving up to 64 TV monitors over coax. An I/O control element
is to be installed. SOURCE: Pehrson, D. L., "An Engineering View of the LRL Octopus
Computer Network," November 17, 1970, p. 24.
LRL has designed and built much of their hardware and almost all of their
software (including the operating systems for their computers); for example, they have
a special multiplexer enabling the PDP-8's to handle 128 teletypes each, whereas DEC
permits a maximum of 32. LRL has implemented their own versions of COBOL,
FORTRAN (called LRLTRAN), APL and PL/I. They are currently developing an APL
compiler (their current version of APL is interpretive). In addition, they provide CDC
FORTRAN, SNOBOL, debugging routines, a text editor, LISP and linear programs.
Communications
The File Transport Subnet connects the PDP-10 system and its central data
store to the worker computers. Because of the inherent differences between the
CDC 6600 and the CDC 7600, two distinct file transport channels have been developed,
one for each machine type. However, there are two channel characteristics which are
identical in both cases. The first of these is the PDP-10 interface, which uses a data
demand protocol to synchronize the transmission rate between the computers. The
second is the maximum channel transfer rate, which is about 10 million bits per second.
The 6600/PDP-10 file transport channel is shown in Figure 20. The principal
components involved in the transmission process are as follows:
• the 6600 Peripheral Processor Unit (PPU), a 12-bit, 4K-word programmed
I/O processor;
• a Channel Switch, which connects one of the ten available PPU's to one of
the twelve available data channels;
• the 6000-Series Data Channels, which transfer data on 12-bit parallel lines;
• the Adapter unit which interfaces the 12-bit CDC channel to the standard LRL
Octopus Data Channel, a 12- or 36-bit wide transmission system;
• the LRL Data Channel, which performs synchronized, half-duplex digital
transmission; and
• the PDP-10 and its channel interfaces.
The operation of the file transport channel is initiated by a request from the
6600, requesting either a read from or a write to the central data store. These requests
normally involve the transfer of a complete file, with the average transmission com-
prising more than 500,000 bits of data. A processor dialog is subsequently established
to transfer data between the two computers. At the PDP-10, data is transferred between
the GPL disk and core, and then between core and the transmission line. The 6600
uses two PPU's in a buffer-switching scheme, alternating between the transmission line
and the local disk, bypassing the main core of the 6600.
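The two-PPU arrangement is essentially a double-buffering scheme: while one buffer is emptied to the transmission line, the other is refilled from the local disk, so the 6600 main core is never involved. The following is a loose, generic illustration of that pattern rather than LRL code.

    # Generic double-buffering sketch of the 6600 file transport (illustrative only).
    def transfer_file(read_block_from_disk, write_block_to_line, block_count):
        if block_count == 0:
            return
        buffers = [None, None]
        buffers[0] = read_block_from_disk(0)          # prime the first buffer
        for i in range(block_count):
            filling = (i + 1) % 2
            draining = i % 2
            # In the real channel these two steps overlap; here they alternate.
            if i + 1 < block_count:
                buffers[filling] = read_block_from_disk(i + 1)
            write_block_to_line(buffers[draining])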
As Figure 21 illustrates, the 7600 configuration is somewhat different. Each of
the fifteen PPU's on the 7600 has eight dedicated channels available, eliminating the
need for the channel switch. Moreover, because the 7600 CPU and PPU's are much
faster than those of the 6600, a more classical transport protocol is used, with the PPU
acting as a programmable interface between the PDP-10 and the 7600 CPU, which con-
trols the data transfer.
Figure 20 6600/PDP-10 File Transport Channel. The figure shows the CDC 6600 system (main
frame, a total of ten PPU's, the channel switch, and the CDC 6000-Series data channels: 12
channels with a 12-bit data path) connected through the adapter and the file transport channel
(12-bit or 36-bit data path) to the PDP-10 line unit and PDP-10 core memory (36-bit data
path). SOURCE: Pehrson, D. L., op. cit., p. 14.
Figure 21 7600/PDP-10 File Transport Channel. The figure shows the CDC 7600 system (main
frame and up to 15 PPU's, each with CDC 7000-Series data channels: 8 channels per PPU,
12-bit data path) connected through the adapter and the file transport channel (12-bit data
path) to the PDP-10 line unit and PDP-10 core memory (36-bit data path). SOURCE:
Pehrson, D. L., op. cit., p. 15.
The Teletype Subnet, shown in Figure 22, is designed to efficiently route short
messages of approximately 80 characters between user terminals and the worker com-
puters. Although this subnet functions independently of the File Transport Subnet,
the two are interconnected, primarily for enhanced reliability.
Figure 22 Octopus Teletype Subnet. The figure shows the worker machines (6600's and
7600's) attached through adapters and line units to PDP-8 and PDP-8/L concentrators (8K
words each, up to 128 full-duplex teletypes apiece) and, via line multiplexers and file
channels, to the PDP-10 system (core memory buffer and dual CPU's), which provides message
shunting between the subnets. SOURCE: Pehrson, D. L., op. cit., p. 20.
The central element of the Teletype Subnet is the PDP-8 computer. Each
PDP-8 has 8K 12-bit words of core memory and is capable of supporting up to 128
full-duplex Teletype terminals, operating at 110 bps. A special operating system has
been developed for the PDP-8 to support the Teletype Multiplexer, to manage core
buffers, and to forward messages in the subnet. This system requires 4K words of core
memory, leaving 4K words for line message buffers.
Each PDP-8 accepts characters from its terminal until a complete message has
been formed. If the message destination is a worker which is directly connected to the
PDP-8, the message is transmitted using links similar to those described in the File
Transport Subnet discussion, but operating at about one-tenth the speed. If the worker
is not directly connected, the message is forwarded to a neighboring PDP-8, where a
similar process is repeated. An analogous protocol is followed for output messages
traveling from a worker computer to the user terminal.
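Message routing in the Teletype Subnet can be pictured as a simple next-hop decision in each PDP-8. The sketch below is a generic illustration with an invented link table; the actual concentrator software is, of course, PDP-8 code resident in 4K words.

    # Illustrative next-hop routing for a Teletype Subnet node (not LRL code).
    def route_message(message, my_links, neighbor):
        """my_links: workers directly connected to this PDP-8;
           neighbor: function that hands a message to the adjacent PDP-8."""
        worker = message["destination_worker"]
        if worker in my_links:
            my_links[worker].transmit(message)    # direct link to the worker
        else:
            neighbor(message)                     # forward; the next PDP-8 repeats the test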
In the event that a PDP-8/worker link becomes inoperative, messages can be
forwarded to the affected worker computer via the File Transport Subnet. Although
the intermixing of short Teletype messages with long file transfers does downgrade
system performance, the enhanced reliability that is achieved is adequate compensation.
A Remote Job Entry Terminal (RJET) Subnet is currently under development
for inclusion in the Octopus Network. Its purpose is to provide a capability for card
reader input and line printer output at remote sites throughout the facility. The pro-
posed RJET configuration is shown in Figure 23. The controlling element of RJET
is a pair of PDP-11 computer systems. One PDP-11 performs a role similar to that of
the Teletype Subnet Computer, routing messages between the workers and local buffer
areas. The other PDP-11 acts as a line handler, providing interface capabilities for
eighteen 4800 bps, half-duplex terminal lines.
Usage
The Octopus Network has increased the overall effectiveness and efficiency of
the computing facilities provided by LRL's large computers. The multicomputer com-
plex is treated as a single resource, enabling all terminals to access all worker computers
and providing the Octopus user with several advantages:
• easy accessibility to any worker computer from any teletype terminal;
• man-machine interaction with a high-speed computer while executing a pro-
gram; and
• rapid turnaround time during debugging.
Figure 23 Remote Job Entry Terminal (RJET) System and Network Connections. The figure
shows the message handler computer (a PDP-11 with a 32K-word core buffer initially,
connected to the 6600's, the 7600's, and the PDP-10 and storage system) linked over a
PDP-11/PDP-11 channel to the line handler computer (a PDP-11 with 4K words of core memory
and a serial line interface whose line unit subports serve up to 18 serial lines at 4800 bps).
A typical terminal consists of a PDP-8/L with a card reader and a line printer. SOURCE:
Pehrson, D. L., op. cit., p. 28.
Most of LRL's computer work consists of long-running, compute-bound problems re-
quiring many hours of computer time. Of their 1000 users, 20 to 40 are generally on line
at one time, running concurrently with batch background jobs. User-controlled data
sharing, program sharing and remote service are possible on the network; load sharing,
however, is hampered by the fact that the worker machines are different.
Interactive multi-programming on LRL's giant computers generates a require-
ment for massive on-line storage. Tapes are inefficient in this type of environment, and
for this reason the concept of the shared data base has been employed. A hierarchy of
storage is composed of a Librascope fixed-head disk (807 million bits, rapid access, high
transfer rate) and an IBM Data Cell (3.24 billion bits), both supporting the IBM Photo-
store (over one trillion bits), the major media for mass storage. Economics and flexi-
bility make the sharing of these storage devices advantageous.
The large-capacity Photostore provides an economical means of storing long-life
files; such a large storage device is reasonable only if it is shared by several large systems.
Writing the Photostore is a time-consuming activity and it is therefore not amenable to
files that change frequently. The storage hierarchy balances and smooths I/O loads in
supporting the Photostore and also provides an indexing mechanism for this device. The
shared data base concept instills flexibility and operational advantages into the system
since files transported to the Photostore by one worker system can be subsequently
accessed from another worker system, eliminating the need for unique copies of public
files on each worker system.
The PDP-10 Transport Control Computer and the appropriate worker computer
handle file transport. Data is copied from a file controlled by the PDP-10 and written
into a file controlled by the worker computer; the source file is not altered or destroyed,
although it can be rewritten while in the worker computer.
Maintaining a centralized data base has some disadvantages: since all worker
computers depend on the shared storage hierarchy, reliability requirements are greatly
increased; and a major effort is required to implement the centralized file storage subnet.
File access codes enable a file to be read by others, but written only by those
with the correct access code. Worker computers have their own access codes which may
inhibit file transport in some cases. Various types of files include the following:
• private files accessible to one user,
• shared files accessible to a group of users, and
• public files accessible to all users.
Each user is identified with a maximum security level at which he is permitted to operate;
these include: unclassified, protected (data cannot be carried off the site), administrative,
and secret. Each file and I/O device is identified with an operating level, and a user's
access to them must not exceed the maximum operating level allowed him. No Top
Secret work is done on the Octopus network.
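The level scheme can be summarized as a simple dominance check: a user may access a file or I/O device only if his maximum permitted level is at least the object's operating level. The numeric ordering in this sketch is an assumption for illustration, not a statement of LRL's actual ordering.

    # Schematic sketch of the Octopus operating-level check (ordering assumed).
    LEVELS = {"unclassified": 0, "protected": 1, "administrative": 2, "secret": 3}

    def may_access(user_max_level, object_level):
        return LEVELS[user_max_level] >= LEVELS[object_level]

    assert may_access("secret", "protected")
    assert not may_access("unclassified", "administrative")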
Management
The Octopus network was developed to provide computer support exclusively
for LRL. Consequently, its management is centralized, vested in the Computation De-
partment of LRL. The Computation Department, which comprises over 300 staff
members, is managed by a Department Head and a Deputy Department Head, supported
by three Assistant Department Heads for Administration, Research, and Planning.
Octopus is managed in a fashion similar to that of any research computational
center. Management is responsible for acquiring, developing and maintaining hardware
and software; authorizing system access; allocating computer resources; and assisting the
user community in achieving effective computer utilization.
The applications programming support functions of the Computation Depart-
ment are necessarily extensive and varied. Applications programmers, working in one
of six main groups, provide programming support throughout the Laboratory in areas
including administrative data processing, engineering, physics, medicine, and nuclear
weapons research.
In addition to the applications programming staff, the Computation Department
maintains six project groups, each tasked with a specific support role. These groups,
and their respective functions are as follows:
• the Systems Development Section, which designs and develops all of the sys-
tems software for the network;
• the Systems Operation Section, which performs software maintenance and
consultation services;
• the Numerical Analysis Group, which designs, develops, and evaluates
mathematically-oriented computer algorithms;
• the Computation Project Group, which engineers additions and modifications
to the network hardware;
• the Computer Information Center, which obtains, edits, writes, publishes, and
distributes all system documentation; and
• the Computer Operations Section, which is responsible for the operation of the
computer systems.
THE TSS NETWORK1
Introduction
The TSS network was first conceived in 1967 when Princeton, IBM Watson, and
Carnegie-Mellon University decided to interconnect their computer facilities. The purpose
of the network is to advance research in the applications of computer networks, particu-
larly in the areas of cooperative system development.
The network is currently operational at all of the nodes. Moreover, experi-
mentation programs are well underway at several sites, particularly those on the east
coast.
Configuration
The TSS network is a distributed network of homogeneous (IBM 360/67) com-
puters using the TSS/360 operating system. Each node manages a local network of
heterogeneous computers including some large 360's running under OS; these proces-
sors appear as devices to the network. Nodes are located at IBM Watson Research
Center, Carnegie-Mellon University (CMU), NASA Lewis Research Center, NASA
Ames Research Center, Bell Telephone Laboratories (Naperville), and Princeton Uni-
versity. The nodes are interconnected by 2000 bps voice-grade auto-dial lines and
40,800 bps leased lines. Figure 24 presents an overview of the nodes participating
in the TSS Network, and Figure 25 presents configuration information for each node.
Additional facilities that may become TSS nodes are Chevron Oil Corporation and
Northern Illinois University.
Modifications to the TSS/360 operating system were necessary to enable
processors to initiate tasks on and communicate with other processors; one processor
appears as a terminal to another. A major consideration was given to minimizing these
modifications. By using like processors and software many of the usual obstacles of
network design were avoided. TSS/360 provides enough flexibility for expansion, and
the modular network design allows for the inclusion of other operating systems in the
future.
Languages available over the network include PL/I, FORTRAN H, ASSEMBLER
H, ALGOL, SNOBOL 3, SNOBOL 4, APL, BASIC, WATFOR, LISP, CSMP, GPSS, JOSS,
LC2, and LEAD. Other software includes NASTRAN, a structural analysis program, and
a text editor. In addition, software is available to convert FORTRAN source code from
the TSS format to the OS format automatically, for example between the 360/67 with
the TSS operating system and IBM's 360/91 with OS/MVT.

1 Time Sharing System: a time-sharing operating system for the IBM 360/67 computer.

Figure 24 An Overview of the TSS Network (nodes at IBM Research, NASA Lewis, Rutgers,
Bell Telephone Labs, Princeton, Carnegie-Mellon, and NASA Ames; SOURCE: IBM Watson
Research Center).
NASA Lewis will be the Network Information Center for the TSS Network.
They will keep records on machine configurations and available programs, will maintain
up-to-date source code for the network software, and will keep a history of usage re-
quests, identifying the user and the reason for the request. All changes to programs
available over the network will be recorded for other users.
Communications
Communications among the 360/67's of the TSS network are carried out using
voice-grade switched lines operating at 2000 bps. The lines are driven by Western
Electric 201A modems in a half-duplex configuration. The 360/67 interface is provided
by an IBM 2701 or 2703 connected to the 2870 Multiplexer Channel.
Because of the lack of programmable interface hardware, all communications
software support is resident on the host computer. In order to avoid extensive changes
NODE: IBM (WATSON RESEARCH CENTER)
NODE CONFIGURATION: 360/67/TSS
LOCAL NETWORK CONFIGURATION: IBM 360/91/MVT, 1800, 1130, SYSTEM 7

NODE: CMU
NODE CONFIGURATION: 360/67/TSS
LOCAL NETWORK CONFIGURATION: EAI/680-PDP-9, UNIVAC 1108, PDP-8, PDP-11, DDP-116, PDP-10 (PROPOSED)

NODE: NASA LEWIS
NODE CONFIGURATION: DUPLEX 360/67/TSS
LOCAL NETWORK CONFIGURATION: SMALL COMPUTERS (XDS, DEC), ON-LINE CDC MICROFILM UNIT, 2321 DATA CELL, 2 2301 DRUMS, 2 2314 DISK UNITS, 3 MAIN CORE UNITS (256K EACH), CALCOMP PLOTTER, SENSOR EQUIPMENT

NODE: NASA AMES
NODE CONFIGURATION: DUPLEX 360/67/TSS
LOCAL NETWORK CONFIGURATION: 3 2314s, 2 2301 DRUMS, 6 2780s, 1800 WITH A 2250, IMLAC PDS 1, SC 4020, 30 TERMINALS (2741s, TTYs)

NODE: BTL/NAPERVILLE
NODE CONFIGURATION: DUPLEX 360/67/TSS
LOCAL NETWORK CONFIGURATION: IBM 360/65/65/50 ASP

NODE: PRINCETON
NODE CONFIGURATION: 360/67/TSS
LOCAL NETWORK CONFIGURATION: IBM 360/91/MVT

Figure 25 TSS Network Hardware
to the standard TSS operating system, the communications software was designed to
operate as a user program, contending for resources on the same basis as other user
programs. As a consequence, the highest message throughput capacity which has been
realized is less than the maximum possible with the present communications hardware.1
1In recognition of this problem, IBM is developing a communications computer concept
which would use a 370/145 as a combination communications computer and data base
manager.
In order to provide the user with access to the communications subsystem, the
TSS network employs the Computer Access Method (CAM), a specially developed set of
procedure calls and software which effect the intercomputer dialog. CAM is capable of
supporting voice-grade lines operating at 2000 bps, and wide-band leased facilities oper-
ating at either 40,800 or 50,000 bps. A more generalized version of CAM, called Table
Driven CAM, has also been developed. In Table Driven CAM, the characteristics of the
communications subsystem and the receiver are defined by means of table entries, per-
mitting a wider range of computers and communications equipment to be used on the
network.
Upon receipt of a CAM request, the communications software must first de-
termine whether a connection exists to the destination computer; if not, one is estab-
lished. The message, which may be up to 1024 bytes long, is compacted prior to
transmission. The message is subsequently transmitted to its destination by a special
software task, which time-multiplexes all messages destined for any particular site.
The receiving system software effectively performs the same process in the reverse
order. Error checks are performed; retransmissions are requested if errors occurred,
while acknowledgments are returned otherwise.
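Viewed abstractly, a CAM transmission involves checking for (or establishing) a connection, compacting the message, and handing it to the line task, with the receiver checking for errors and acknowledging. The fragment below is a hedged abstraction of that sequence; CAM itself is a set of TSS/360 procedure calls, and the compaction and error-check details shown here are invented.

    # Hedged abstraction of a CAM-style send (not the actual TSS/360 CAM).
    import zlib

    MAX_MESSAGE = 1024        # bytes, per the network's message limit

    connections = {}          # destination -> established line (illustrative)

    def cam_send(destination, message: bytes, open_line, line_write):
        assert len(message) <= MAX_MESSAGE
        if destination not in connections:
            connections[destination] = open_line(destination)   # dial or attach the line
        compacted = zlib.compress(message)            # stands in for CAM compaction
        frame = {"dest": destination, "crc": zlib.crc32(compacted), "body": compacted}
        line_write(connections[destination], frame)   # a task multiplexes all traffic per site
        return frame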
Usage
The first goal of the TSS network was to investigate the uses and advantages of
a general purpose network of computers. The experience gained is to be used in de-
termining future avenues of expansion in designing and implementing other networks.
The nodes will use the network for experimentation and research rather than for pro-
duction work.
The TSS Network provides a convenient means of exchanging programs and sys-
tem modifications since like computers are used in the network. Use of the network for
program sharing and data sharing saves duplication of programs and data at foreign sites.
Load sharing, remote service and dynamic file access are among the features provided by
the TSS Network. Figure 26 gives an example of how the network is used.
Since one processor appears as a terminal to another and since all devices in the
network appear to be on each processor, the terminal user can command the full re-
sources of the network as though he were dealing with a single system. After the user
has gained access to the target processor, he may initiate processing activity, disconnect,
or connect to another node; he may even have many jobs executing at various nodes
simultaneously. Specialized facilities, such as graphic subsystems and large core
memories, are available over the network.
Figure 26 Usage of the TSS Network. The figure traces a job through the network. At Node 1,
an initiating task (A) logs on, initiates communications with Node 2, sends the job, and logs
off; the job is transmitted to Node 2, where a receiving task (B) logs on and receives it, and
a work task (C) executes it (commands for FORTRAN compilation and execution), initiates
communications with Node 1, sends the job results, and logs off. Back at Node 1, a receiving
task (D) is notified that the job is finished at Node 2, logs on, reads and prints the job
results, and logs off.
Passwords and keys serve to maintain the privacy and integrity of the user files.
As an added precaution, however, IBM does not allow outsiders to connect to their
360/91 when proprietary information is being processed. The network has a "copy
protect" feature that enables a person to use a file or a program without allowing him
to copy it.
A Network Control Language enables the user to perform the following
functions: connect to a specified node, initiate a computational process, disconnect
from a specified node, test for any outstanding responses, send and receive data
sets, and display process responses. The language is simple since the designers con-
centrated on making the system easy to use.
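A session using such a control language might proceed along the lines sketched below. The command names and calling sequence are invented to illustrate the listed functions and are not the actual Network Control Language syntax.

    # Invented illustration of the kinds of operations the Network Control
    # Language provides (not the real TSS syntax).
    def sample_session(net):
        net.connect("NASALEWIS")               # connect to a specified node
        proc = net.initiate("FORTRAN_JOB")     # initiate a computational process
        net.send_dataset(proc, "SOURCE.F")     # send a data set
        while not net.test_response(proc):     # test for outstanding responses
            pass
        results = net.receive_dataset(proc, "RESULTS")   # receive a data set
        net.display(results)                   # display process responses
        net.disconnect("NASALEWIS")            # disconnect from the node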
Management
The TSS network is an interconnection of several independent research facilities.
A consequent goal of the network is the establishment of an experimental environment
which interferes with the other activities of the nodes as little as possible. The homo-
geneity of the network has been instrumental in establishing this environment by
minimizing the amount of effort required to develop all of the network software.
The technical development of the network has been carried out informally.
Representatives from each of the sites meet periodically to discuss technical proposals
and ideas. Planned experiment activities are also discussed and coordinated at these
meetings.
The collection and dissemination of network documentation is the responsibility
of the Network Information Center (NIC), located at the NASA Lewis Research Center.
Systems which will permit retrieval of appropriate documents by a network user are
currently under development at the Lewis Center.
No formalized procedure has been developed for intersite billing. At the present
time, the informality of the project and the nearly equal intersite utilization of resources
have obviated the need for such procedures. However, usage statistics are gathered to
monitor this situation and to uncover any heavy one-sided usage patterns.
THE TUCC NETWORK
Introduction
The Triangle Universities Computation Center (TUCC) was established in 1965
as a cooperative venture among three major North Carolina universities: Duke University,
North Carolina State University (NCSU), and the University of North Carolina (UNC). Its
incorporation was a response to the saturation of existing local computer facilities and
the unavailability of funds to permit the expansion necessary at each of the universities.
The network was developed to satisfy three primary goals:
• to provide each of the institutions with adequate computational facilities as
economically as possible;
• to minimize the number of systems programming personnel needed; and
• to foster greater cooperation in the exchange of systems, programs, and ideas
among the three universities.
Network operation was begun in 1966. Since that time, a continual growth in
both the central computing capability and that at each of the universities has been
necessitated. Throughput on the central computer has grown from 600 jobs per day in
1967 to a present peak volume of about 4200 jobs per day. Present plans call for a
100% increase in the central computer capacity by September 1971.
Configuration
The TUCC Network is centralized with homogeneous computers at its three
nodes: UNC, Duke and NCSU. Through the North Carolina Educational Computer
Services, TUCC also serves some fifty smaller schools within the State and provides
general computing services to a small number of research-oriented organizations. Figure
27 gives an overview of the TUCC network.
The center of TUCC is a well-equipped IBM 360/75 with one million bytes of
high-speed core and two million bytes of Large Capacity Storage, operating under
OS/MVT (see Figure 28). There are approximately 100 terminals (high, medium, and
low speed) in the network. The high-speed terminals are a 360/50 and an 1130 at
UNC, a 360/40 at NCSU and a 360/40 at Duke. The 360 systems are multiprogrammed
with a partition for local batch work and a telecommunications partition for TUCC
remote I/O services. The medium-speed terminals are IBM 2780's (or equivalents) and
1130's, and the low-speed terminals are teletypes, IBM 274I's (or equivalents) and
IBM 1050's. Less than 10% of TUCC's work is submitted at the card reader at the
central computer.
Software facilities provided by TUCC include FORTRAN (E, G, H and
WATFIV), ALGOL, APL, COBOL, CPS, BASIC, PL/I, SNOBOL, CSMP, ECAP, GPSS,
MPS, FORMAT, TEXT/360, and Assembler G.
DUKE/DURHAM: 360/40 PRIMARY TERMINAL
COMMUNITY COLLEGES, TECHNICAL INSTITUTES, AND SECONDARY SCHOOLS (2770, 2780,
1050, AND TELETYPES)