A PROTOTYPE ANALYSIS SYSTEM FOR SPECIAL REMOTE VIEWING TASKS
Document Type:
Collection:
Document Number (FOIA) /ESDN (CREST):
CIA-RDP96-00789R002200540001-8
Release Decision:
RIPPUB
Original Classification:
K
Document Page Count:
27
Document Creation Date:
November 4, 2016
Document Release Date:
October 14, 1998
Sequence Number:
1
Case Number:
Content Type:
REPORT
File:
Attachment | Size |
---|---|
CIA-RDP96-00789R002200540001-8.pdf | 859.09 KB |
Body:
Final Report-Task 6.0.3 October 1989
Covering the Period 1 October 1988 to 30 September 1989
A PROTOTYPE ANALYSIS SYSTEM FOR SPECIAL
REMOTE VIEWING TASKS
By: Wanda L. W. Luke
Thane J. Frivold
Edwin C. May
Virginia V. Trask
Prepared for:
SG1J
Contracting Officer's Technical Representative
SRI Project 1291
MURRAY J. BARON, Director
Geoscience and Engineering Center
333 Ravenswood Ave. • Menlo Park, CA 94025
ABSTRACT
We have developed a prototype analysis system for remote viewings conducted
against targets of interest. The system uses individual viewers' performance histories
in conjunction with current data to prioritize a set of possible interpretations of the
site.
TABLE OF CONTENTS
ABSTRACT .................................................................ii
LIST OF TABLES ........................................................... iv
LIST OF FIGURES .......................................................... iv
I INTRODUCTION ............................................... 1
II METHOD OF APPROACH ....................................... 2
A. Fuzzy Set Formalism ....................................... 2
B. Prototype Analysis System ................................... 5
C. Partial Application of Analysis System to Existing Target Pool ..... 7
D. General Conclusions ...................................... 12
REFERENCES .............................................................. 13
APPENDIX A ............................................................... 14
APPENDIX B ............................................................... 15
LIST OF TABLES
1. Numerical Listing of Targets ........................................... 8
2. Technology Cluster .................................................. 11
3. Principal Elements Contained in the Technology Template .................. 11
LIST OF FIGURES
1. Cluster Diagram for Simulated Operational Targets ..................... 10
I INTRODUCTION
Since 1973, when the investigations of the human information-accessing capability
called remote viewing (RV) first began at SRI International, evaluating the quality of the
information obtained has been a continuing challenge. In order to develop valid evaluation
procedures, two basic questions must be addressed:
(1) What constitutes the target?
(2) What constitutes the response?
If the RV task is research-oriented, the targets are known, and therefore can be
precisely defined. In -oriented tasks, however, the targets are generally unknown
and their descriptions are problematical. In both task domains, RV responses tend to consist of
sketches and written phrases. A method to encode unambiguously this type of "natural
language" is one of the unsolved problems in computer science, and there has been little progress
to date. Thus, a complete definition of an RV response is also problematical.
An -oriented RV task poses further problems. High-quality RV does
not always provide useful . For example, the RV may provide additional support for
information that has been verified from other sources, but provide no new information. In some
cases, however, an overall low-quality RV may provide key elements that positively influence an
analyst's interpretation.
Another characteristic of current laboratory analysis techniques is that they do
not provide an a priori assessment of the RV quality. While this is not a problem in the
laboratory, applications require such evaluation. An RV analyst cannot provide
usefulness ratings from the RV alone; rather, the analyst must provide a priori
probabilities that individual RV-response elements (or concepts) are present at the target site. It
remains the responsibility of the analyst to determine whether such data are ultimately
useful.
Analysis of laboratory RV has been a major part of the ongoing Cognitive Sciences Program.* For FY 1989, we focused on the development of a prototype analysis system that would provide the needed a priori assessments for tasking.†

* References are at the end of this report.
† This report constitutes the deliverable for Statement of Work item 6.0.3.
The intelligence analyst, as opposed to an RV analyst, should construct such
a list for each mission. While there may be considerable similarities between element lists for
different missions, undoubtedly the lists will require specialization. In Section II-C below, we
show the construction of one element list and how it can be applied to a set of 65 simulated
operational targets.
2. Analysis of Complete Responses
Once an appropriate universal set of elements has been created, and fuzzy
sets that define the target and the response have been specified, the comparison between them is
straightforward. We have defined accuracy as the percent of the target material that is described correctly by a response. Likewise, we have defined reliability (of the viewer) as the percent of the response that is correct. Although in the laboratory it is required to provide a posteriori probability estimates of the target-response match, in an operational setting this may be less important. All that is usually necessary is to describe the accuracy and reliability for complete responses, and for individual target elements of interest. These quantities for the jth session are
r_j = \frac{\sum_{k=1}^{n} w_k \left(R_j \cap T_j\right)_k}{\sum_{k=1}^{n} w_k R_{j,k}}     (1)

a_j = \frac{\sum_{k=1}^{n} w_k \left(R_j \cap T_j\right)_k}{\sum_{k=1}^{n} w_k T_{j,k}}     (2)
where the sum over k is called the sigma count in fuzzy set terminology, and is defined as the sum of the membership values (μ) for the elements of the response, the target, or their intersection, and n is the number of possible elements as defined by the element list. A fuzzy intersection is defined as the minimum of the intersecting fuzzy set membership values. In this version of the definitions, we have allowed for the possibility of weighting the membership values, w_k, to provide mission-defined relevances.
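The sigma-count arithmetic of Equations 1 and 2 is straightforward to mechanize. The short Python sketch below is illustrative only (it is not the report's analysis software); the element names, weights, and membership values are hypothetical.

```python
# Sketch of Equations 1 and 2. The fuzzy intersection takes the element-wise
# minimum of the two membership values; the sigma count is the weighted sum
# of membership values over the universal set of elements.

def sigma_count(fuzzy_set, weights):
    """Weighted sum of membership values over all elements in the universal set."""
    return sum(w * fuzzy_set.get(element, 0.0) for element, w in weights.items())

def fuzzy_intersection(a, b):
    """Element-wise minimum of two fuzzy sets."""
    return {e: min(a.get(e, 0.0), b.get(e, 0.0)) for e in set(a) | set(b)}

def reliability_accuracy(response, target, weights):
    common = sigma_count(fuzzy_intersection(response, target), weights)
    r = common / sigma_count(response, weights)   # Equation 1: fraction of the response that is correct
    a = common / sigma_count(target, weights)     # Equation 2: fraction of the target that is described
    return r, a

# Hypothetical element weights and membership assignments.
weights = {"water": 1.0, "building": 1.0, "antenna": 1.0}
response = {"water": 0.8, "building": 0.5}
target = {"building": 1.0, "antenna": 1.0}
print(reliability_accuracy(response, target, weights))
```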
For the above calculation to be meaningful, the membership values for the targets
must be similar in kind to those for the responses. For most mission-dependent specifications,
this is generally not the case. The target membership values represent the degree to which a
particular element is characteristic of the target, and the response membership values represent
the degree to which the analyst is convinced that the given element is represented in the
response.
Until RV abilities can encompass the recognition of elements as well as their degree of target characterization, we are required to modify the target fuzzy set. An analyst must decide upon a threshold above which an element is considered to be completely characteristic of the target site. In fuzzy set theory, this is called an α-cut: a technique to apply a threshold to the μ values such that if the original value exceeds it, the value is reassigned to 1; otherwise it is set to 0. In this way, the analyst's subjectivity can be encoded in the response fuzzy set, and Equations 1 and 2 remain valid.
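A minimal sketch of the α-cut in Python; the threshold value and element names are hypothetical.

```python
# Alpha-cut of a target fuzzy set: membership values that exceed the threshold
# become 1, all others become 0. Threshold and element names are hypothetical.
def alpha_cut(fuzzy_set, threshold=0.5):
    return {e: (1 if mu > threshold else 0) for e, mu in fuzzy_set.items()}

target = {"biological": 0.9, "weapon": 0.6, "recreation": 0.1}
print(alpha_cut(target))  # {'biological': 1, 'weapon': 1, 'recreation': 0}
```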
3. Analysis of an Individual Element
Equations 1 and 2 can be simplified to provide an accuracy and reliability on an individual-element basis instead of for a complete response. For example, let N be the number of sessions against different targets that exist in a current archive for a specified viewer. Let ε be an element in question (e.g., airport). Then the empirical probability that element ε is in the target, given that the viewer said it was, is given by

R(ε) = N_c / N_r ,     (3)

where N_c is the number of times that the individual was correct, and N_r is the number of times that element ε was mentioned in the response. R(ε) is also the reliability of the viewer for that specified element.
To compute what chance guessing would be, we must know the occurrence rate of element ε in the N sessions. Let N_o be the actual number of times element ε was contained in the N targets. Then the chance-guessing empirical probability is given by

R_0(ε) = N_o / N .

R_0(ε) can also be considered as the guessing reliability (i.e., the reliability that would be observed if the viewer guessed ε during every session). The more R(ε) exceeds R_0(ε), the more reliable the individual is for the specified element.
The empirical probability that the viewer said element ε, given that it was in the target, is given by

A(ε) = N_c / N_o .

A(ε) is also the accuracy of the viewer for that specified element.

As a numerical example, suppose a single viewer participated in N = 25 sessions. Let ε = "airport." Further suppose that N_o = 5 of the targets actually contained an airport.
Then, R_0(airport) = 0.20 is the chance probability (i.e., guessing airport during every session would be only 20 percent reliable). Assume that the viewer mentioned airport N_r = 6 times and was correct N_c = 4 times. Then this viewer's reliability for airports is computed as R(airport) = 0.67 > R_0(airport) = 0.20. The viewer's accuracy for airports is computed as A(airport) = N_c/N_o = 0.80. Thus, in this example, we can conclude that this viewer is reasonably accomplished at remote viewing an airport.
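The same bookkeeping, written out as a short Python check of the airport example above:

```python
# Worked version of the airport example (Equation 3 plus the chance-guessing
# and accuracy estimates).
N = 25    # sessions in the archive
N_o = 5   # targets that actually contained an airport
N_r = 6   # times the viewer mentioned an airport
N_c = 4   # times that mention was correct

R_0 = N_o / N   # chance-guessing reliability = 0.20
R = N_c / N_r   # viewer reliability for "airport", about 0.67 (Equation 3)
A = N_c / N_o   # viewer accuracy for "airport" = 0.80
print(R_0, round(R, 2), A)
```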
B. Prototype Analysis System
We assume that an analyst has constructed a mission-dependent
universal set of elements. We further assume that there are a number of competing
interpretations of the target site in question.
1. Target Templates
The first step in our prototype analysis system is to define templates (i.e.,
general descriptions of classes of target types) of all competing target interpretations from the
universal set of elements. For example, a class of target types could be a generic biological
warfare (BW) facility. Exactly what the templates should represent is entirely dependent upon
what kind of information is sought. Both the underlying universal set of elements and the
templates must be constructed to be rich enough to allow for the encoding of all the information
of intelligence interest. That is, if neither the set of elements nor the templates can meaningfully
represent information about, say BW development sites, then it will be unreasonable to consider
asking, "Does development of BW agents take place at the site?" Furthermore, a certain
amount of atomization is necessary because such division into small units provides the potential
for interactions within the universal set of elements. If the profile of a BW facility consists of a
single element, the template would be useless unless the response directly stated that particular
element; rather, the profile should be constructed from groups of elemental features (e.g.,
biological, offensive, weapon, decontamination).
There are two different ways to generate target templates. The most
straightforward technique is also likely to be the most unreliable, because it relies on the analyst's
judgment of a single target type. With this method, the analyst, who is familiar with the
problem at hand, simply generates membership values for elements from the
universal set of elements based upon his or her general knowledge. Given the time and
resources, the best way to generate template membership values is to encode known targets that
are closely related (e.g., a number of known BW sites). Each template μ is the average value
across targets, and thus is more reliable. If it is known that some targets are more
"characteristic" of the target type than others, then a weighted average should be computed. In symbols,

\mu_i = \frac{\sum_k w_k \, \mu_{i,k}}{\sum_k w_k} ,     (4)

where the sums are over the available targets that constitute the template, the w_k are the target weights, and the μ_{i,k} are the assigned membership values for target k.
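A small Python sketch of Equation 4 follows; the site names, target weights, and membership values are hypothetical.

```python
# Sketch of Equation 4: each template membership value is a weighted average of
# the membership values assigned to known targets of the type in question.
def template_membership(known_targets, target_weights):
    """known_targets: {site: {element: mu}}; target_weights: {site: weight}."""
    elements = {e for encoding in known_targets.values() for e in encoding}
    total_weight = sum(target_weights.values())
    return {e: sum(target_weights[site] * known_targets[site].get(e, 0.0)
                   for site in known_targets) / total_weight
            for e in elements}

# Hypothetical encodings of two known sites of the same type.
known_sites = {"site_A": {"energy": 1.0, "wires": 0.8},
               "site_B": {"energy": 0.6, "restricted": 1.0}}
site_weights = {"site_A": 1.0, "site_B": 0.5}   # site_A judged more characteristic
print(template_membership(known_sites, site_weights))
```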
A critical feature of an analysis system for RV data is that along
with the current RV data to be evaluated, the individual viewer's past performance on an
element-by-element basis must also be included. For example, if a viewer has been relatively
unsuccessful at recognizing BW facilities, then a BW reference in the current data should not
contribute much in the overall analysis.
As ground truth becomes available for each session, a performance database should be updated for each viewer to reflect the new information. This database should be a fuzzy set whose membership values for each element are the reliabilities computed from Equation 3.
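One plausible shape for such a performance database is sketched below in Python; the class and method names are hypothetical, and the reliability reported per element is simply the N_c/N_r ratio of Equation 3.

```python
# Sketch of a per-viewer performance database. For each element we keep the
# counts behind Equation 3 (times mentioned, times correct) and report the
# current reliability N_c / N_r. Names are hypothetical.
from collections import defaultdict

class ReliabilityArchive:
    def __init__(self):
        self.mentioned = defaultdict(int)   # N_r per element
        self.correct = defaultdict(int)     # N_c per element

    def update(self, response_elements, target_elements):
        """Call once per session, after ground truth becomes available."""
        for e in response_elements:
            self.mentioned[e] += 1
            if e in target_elements:
                self.correct[e] += 1

    def reliability(self, element):
        n_r = self.mentioned[element]
        return self.correct[element] / n_r if n_r else 0.0

archive = ReliabilityArchive()
archive.update({"airport", "water"}, {"airport"})   # session 1
archive.update({"airport"}, {"harbor"})             # session 2
print(archive.reliability("airport"))               # 0.5
```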
3. Optimized Probability List
The goal of any RV analysis system is to provide an a priori
prioritized and weighted list of target possibilities that results from a single remote viewing that is
sensitive to the performance history of the viewer. Assuming that a template exists for each of
the possible interpretations, an analyst should adhere to the following protocol:
(1) Analyze the RV data by assigning a membership value (μ) for each element in the universal set of elements. Each μ represents the degree to which the analyst is convinced that the particular element is included in the response. For example, suppose that the viewer said, "I perceive a BW facility." Then μ(BW facility) = 1. Alternatively, suppose the viewer said, "I perceive glassware and smell organic chemicals." In this case, μ(BW facility) might be assigned 0.6.
(2) Construct a crisp set, R_c, as an α-cut of the original response set. By adopting a threshold of 0.5, for example, the resulting crisp set contains only those elements that the analyst deems most likely to be present in the response.
(3) Construct an effective response set, R_e, as R_e = R_c ∩ R_a, where R_a is the reliability set drawn from the archival database. For example, suppose the original
assignment from the raw RV data was μ(BW facility) = 0.6. Then after the α-cut with a threshold set at 0.5, μ(BW facility) = 1.0. Suppose, however, that the viewer has been performing well on BW facilities and the archival database shows that R_a(BW facility) = 0.8. Thus, R_e(BW facility) = 0.8.
(4) Using this effective response set, compute an accuracy and reliability in accordance with Equations 1 and 2. Then compute a figure-of-merit, M_j, for the jth competing interpretation as

M_j = a_j × r_j .

Of course, the accuracy and reliability use the effective response set from step 3 above.
(5) Order the M_j from largest to smallest value. Since the figures-of-merit range in value
from 0 to 1, they can be interpreted as relative probability values for each of the
alternative target possibilities.
By following such a protocol, an analyst can produce a list of target alternatives that is sensitive to the current remote viewing yet takes into consideration the individual viewer's archival record.
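Strung together, the five steps might look like the following Python sketch. The helper functions mirror Equations 1 and 2, and all element names, membership values, templates, archive reliabilities, and weights are hypothetical.

```python
# Sketch of the five-step protocol: alpha-cut the analyst's response set,
# attenuate it with the viewer's archived reliabilities, then rank the
# competing target templates by figure-of-merit M_j = a_j * r_j.

def sigma_count(fuzzy_set, weights):
    return sum(w * fuzzy_set.get(e, 0.0) for e, w in weights.items())

def fuzzy_intersection(a, b):
    return {e: min(a.get(e, 0.0), b.get(e, 0.0)) for e in set(a) | set(b)}

def rank_interpretations(response, archive, templates, weights, threshold=0.5):
    # Step 2: crisp set from an alpha-cut of the analyst's raw assignments.
    crisp = {e: 1.0 for e, mu in response.items() if mu > threshold}
    # Step 3: effective response set R_e = R_c intersected with R_a.
    effective = fuzzy_intersection(crisp, archive)
    effective_count = sigma_count(effective, weights)
    # Steps 4 and 5: figure-of-merit against each template, sorted descending.
    merits = {}
    for name, template in templates.items():
        common = sigma_count(fuzzy_intersection(effective, template), weights)
        r = common / effective_count if effective_count else 0.0
        a = common / sigma_count(template, weights)
        merits[name] = a * r
    return sorted(merits.items(), key=lambda item: item[1], reverse=True)

weights = {"bw_facility": 1.0, "energy": 1.0, "water": 1.0}
response = {"bw_facility": 0.6, "water": 0.3}                 # analyst's raw mu values
archive = {"bw_facility": 0.8, "energy": 0.5, "water": 0.4}   # viewer reliabilities
templates = {"BW facility": {"bw_facility": 1.0, "water": 0.2},
             "power plant": {"energy": 1.0}}
print(rank_interpretations(response, archive, templates, weights))
```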
C. Partial Application of Analysis System to Existing Target Pool
We have used an existing target pool (developed under a separate program) as a test
bed for the analysis system described above.
1. Criteria for Inclusion in the Target Pool
Targets in this pool have the following characteristics:
• Each target is within an hour-and-a-half automobile drive of SRI International.
• Each target simulates an operational site of interest.
• Each target fits generally within one of five functional categories: Production, Recreation, Scientific, Storage, and Transportation.
• Each target meets a consensus agreement of experienced RV monitors and analysts about inclusion in the pool.
The pool consists of 65 targets. Initially, they were divided into 13 groups of five
targets each, where each group contained one target from each of five functional categories. By
carefully organizing the targets in this way, the maximum possible functional difference of the
targets within each group was ensured. Table 1 shows a numerical listing of these targets.
Table 1
Numerical Listing of Targets

1. Transformer Station
2. Ballpark
3. Satellite Dish
4. Weapons Storage
5. Naval Fleet
6. Gravel Quarry
7. Swimming Pool
8. Observatory
9. Prison
10. Shipping and Receiving
11. Greenhouse
12. Picnic Area
13. Satellite Dishes
14. Paint Warehouse
15. Naval Air Station
16. Sugar Refinery
17. Playground
18. Aquarium
19. Drum Yard
20. Aircraft
21. Sewage Treatment Plant
22. Hoover Tower
23. Space Capsule
24. Coastal Battery
25. Bay Area Rapid Transit
26. Salt Refinery
27. Candlestick Park
28. Solar Observatory
29. Food Terminal
30. Pedestrian Overpass
31. Electrical Plant
32. White Plaza
33. Space Shuttle
34. Coastal Battery
35. Train Terminal
36. Sawmill
37. Pond
38. Wind Tunnel
39. Grain Terminal
40. Submarine
41. Cogeneration Plant
42. Park
43. Linear Accelerator
44. Dump
45. Pump Station
46. Ice Plant
47. Caves/Cliffs
48. Bevatron
49. Barn
50. Golden Gate Bridge
51. Modern Windmills
52. Baylands Nature Preserve
53. Gas Plant
54. Auto Wreckers
55. Fishing Fleet
56. Radio Towers
57. Vineyard
58. Pharmaceutical Laboratory
59. Toxic Waste Storage
60. Airport
61. Car Wash
62. Old Windmill
63. Nuclear Accelerator
64. Reservoir
65. Train Station
In FY 1989, we developed a prototype analysis system for analyzing targets and responses in operational remote viewings. A list of elements, based on target function (i.e., the mission specification), is arranged in levels ranging from the relatively abstract (information poor) to the relatively complex (information rich). Having levels of elements is advantageous in that each can be weighted separately in the analysis.
This universal set of elements (included as Appendix A) represents primary
elements in the existing target pool of 65 targets. The set was derived exclusively from this
known target pool. In an actual RV session, however, a viewer does not have access to the
element list, and thus is not constrained to respond within its confines. An accurate RV analysis
must include any additional data that may be provided in the response; therefore, additional
space has been provided on the analysis sheets (see Appendix A) to include elements that are
part of the response but not initially included as part of the universal set.
The target-dependent elements emphasize the site's function, and use terms
that are potentially universal across targets. We identified six element levels ranging from
relatively information rich to relatively information poor: affiliation, function, attributes,
modifiers, objects, and general/abstract. Because operational RV presupposes a certain level of
ability on the part of the viewer, there are relatively few general/abstract elements included in our
prototype analysis system. A description of some of the elements shown in Appendix A and a
guide to their use are presented in Appendix B.
3. Target Similarities
In order to generate a demonstration target-type template using Equation 4, we
first organized the 65 targets into clusters of similar types.
We begin by defining the similarity between target j and target k (S_{j,k}) to be a normalized fuzzy set intersection between the two target sets:

S_{j,k} = \frac{\left[\sum_{i=1}^{n} w_i \left(T_j \cap T_k\right)_i\right]^2}{\left[\sum_{i=1}^{n} w_i T_{j,i}\right]\left[\sum_{i=1}^{n} w_i T_{k,i}\right]} .     (5)
By inspection, we see that S_{j,k} is also the figure-of-merit between target j and target k.

For N targets there are N(N−1)/2 unique values (2,080 for N = 65) of S_{j,k}. The values of j and k that correspond to the largest value of S_{j,k} represent the two targets that are most functionally similar. Suppose another target m is chosen and S_{m,j} and S_{m,k} are computed. If both of these values are larger than S_{m,p} (for all p not equal to j or k), then target m is assessed to be most similar to the pair j, k. The process of grouping targets based on these similarities is called cluster analysis.
Figure 1 shows the six clusters found from the cluster analysis of the 65 targets.* The numbers shown refer to the targets listed in Table 1, and the clusters are in close agreement with the original five categories used to select the targets. The point, however, is that a numerical algorithm is capable of dividing a set of targets into functional categories.

* In order to make the graphic output more meaningful, we used 1 − S_{j,k} in the analysis.
Figure 1. Cluster Diagram for Simulated Operational Targets
(Six clusters: 1 Recreation, 2 Transportation, 3 Weapons, 4 Technology, 5 Storage, 6 Production/Distribution.)
We used the technology cluster (i.e., number 4 in Figure 1) to apply Equation 4
to construct a technology target template. Table 2 shows the targets in this cluster, where the
horizontal lines indicate the subclustering within the technology group shown in Figure 1.
Table 2
Technology Cluster

56. Radio Towers
1. Transformer Station
51. Modern Windmills
31. Electrical Plant
41. Cogeneration Plant
3. Satellite Dish
13. Satellite Dishes
8. Observatory
28. Solar Observatory
58. Pharmaceutical Laboratory
63. Nuclear Accelerator
43. Linear Accelerator
48. Bevatron
Table 3 shows those elements that met or exceeded average membership values
of 0.4 using Equation 4.
Table 3
Principal Elements Contained in the Technology Template

Levels        Number   Name
Affiliation     1      Commercial/Private
Function       14      Research/Experimentation
Attribute      24      Energy
Modifier       47      Electricity/Radio
Objects        88      High Technology Electronics
               99      Restricted Access
              120      Wires/Cables
Abstract      122      Activity-Passive
              130      Ambiance-Indoor
              131      Ambiance-Manmade
              137      Ambiance-Outdoor
              149      Size-Medium
As a self-consistency check, we included the technology template in the total target pool and recalculated the clusters. As expected, the technology template was included within the subgroup of targets 3 and 13, and well within the technology cluster as a whole.
D. General Conclusions

The goal of this effort was to develop an analysis system that would prove effective in providing a priori assessments of remote viewing tasks. If the proper mission-dependent universal set of elements can be identified, then, using a viewer-dependent reliability archive, data from a single remote viewing can be used to prioritize a set of alternative target templates so as to choose the most likely one for the mission.
REFERENCES

1. Puthoff, H.E., and Targ, R., "A Perceptual Channel for Information Transfer Over Kilometer Distances: Historical Perspective and Recent Research," Proceedings of the IEEE, Vol. 64, No. 3, March 1976.

2. Targ, R., Puthoff, H.E., and May, E.C., 1977 Proceedings of the International Conference of Cybernetics and Society, pp. 519-529, 1977.

3. May, E.C., "A Remote Viewing Evaluation Protocol," Final Report (revised), SRI Project 4028, SRI International, Menlo Park, California, July 1983.

4. May, E.C., Humphrey, B.S., and Mathews, C., "A Figure of Merit Analysis for Free-Response Material," Proceedings of the 28th Annual Convention of the Parapsychological Association, pp. 343-354, Tufts University, Medford, Massachusetts, August 1985.

5. Humphrey, B.S., May, E.C., Trask, V.V., and Thomson, M.J., "Remote Viewing Evaluation Techniques," Final Report, SRI Project 1291, SRI International, Menlo Park, California, December 1986.

6. Humphrey, B.S., May, E.C., Utts, J.M., Frivold, T.J., Luke, W.L., and Trask, V.V., "Fuzzy Set Applications in Remote Viewing Analysis," Final Report-Objective A, Task 3, SRI Project 1291, SRI International, Menlo Park, California, December 1987.

7. May, E.C., Humphrey, B.S., Frivold, T.J., and Utts, J.M., "Applications of Fuzzy Sets to Remote Viewing Analysis," Final Report-Objective F, Task 1, SRI Project 1291, SRI International, Menlo Park, California, December 1988.
Appendix A
UNIVERSAL SET OF ELEMENTS FOR ANALYSIS OF FUNCTION
[Analysis worksheets listing the universal set of elements; the scanned checklist pages are not legible in this copy.]
Appendix B
ANALYSTS' GUIDE TO THE UNIVERSAL SET OF ELEMENTS FOR FUNCTION
AN ANALYST'S GUIDE TO THE UNIVERSAL SET OF ELEMENTS (U)
This appendix is intended to assist an analyst in using the universal set of elements
shown in Appendix A. We developed six levels of elements ranging from the relatively abstract (information poor) to the relatively complex (information rich).
The task of the analyst is to assign a membership value between 0 and 1 to each
individual element. For targets, a numerical value will be assigned on the basis of the presence
or absence of each element in terms of functional importance. For responses, the numerical
value will be assigned on the basis of the degree to which the analyst is convinced that the
element is contained in the response.
All subsequent commentary is referenced by the element numbers in Appendix A.
Although each level may contain a number of elements, only those individual elements that may
need explanation are listed below.
"Affiliation" represents an advanced level of remote viewing functioning.
Although we infrequently observe this advanced functioning, the data are valuable, and,
therefore, are included. Elements in this level can be assigned membership values by asking the
question, "Who owns the target?" There are only three "affiliation" elements:
(1) Commercial/Private.
(2) Government: Federal, state, or local governmental ownership (e.g., municipal utilities), but excluding military.
(3) Military: military ownership as separate from the above governmental ownership (e.g., a Navy submarine).
"Function" also represents an advanced level of remote viewing functioning, and
it may represent the most important information with regard to overall function. Elements are
assigned membership values by asking the question, "What is(are) the primary function(s) of the
target?" There are 14 "function" elements, and a few require further explanation:
(6) Distribution: the primary function is to receive and to transmit something (e.g., an
electrical transformer station).
(8) Extraction: as in the extraction of minerals from the ground.
(11) Reception: the primary function is only to receive (e.g., a satellite tracking station).
(13) Refining: the primary function is to refine a raw material into an intermediate or
finished product (e.g., a saw mill).
(16) Transmission: the primary function is only to transmit (e.g., a radio tower).
"Attributes" can be thought of as clarification for the "function" level.
Elements are assigned membership values by asking a question similar to, "If the function of the
target is production, then what is being produced?" There are 20 "attribute" elements, and the
following require further explanation:
(18) Animals: animals only.
(20) Biology: the study of living things in general.
(21) Chemistry: also includes chemicals.
(23) Ecology: symbiotic systems in nature, as in ecological zones (e.g., the Bay Lands
Nature Preserve).
(24) Energy: energy in a broad sense that also includes radio waves.
(29) Nature/Natural: general natural objects (e.g., plants and animals).
(32) Plants: plants only.
(33) Space exploration: general, includes all experimentation done in space.
Elements 18 and 32 are given a membership value if the target/response is specifically oriented to
one item. Otherwise element 29 should be assigned a value.
"Modifiers" can be thought of as a clarification of the "attributes" level.
Elements are assigned membership values by asking a question similar to, "If the function of the
target is production, and vehicles are being produced, then what kind of vehicles are they?"
There are 36 "modifiers" elements, and only element 66 requires further explanation:
(66) Symbiotic: symbiotic relationships not subsumed under natural or ecology (e.g., a
cogeneration plant).
5. Element Level-Objects
"Objects" contains specific elements not necessarily related to function.
Elements are assigned membership values on the basis of the presence or absence of each object
in terms of functional importance. There are 47 "objects" elements, and the following require
further explanation:
(77) Catwalk: elevated walkway.
(79) Coastline: used only as coastline of an ocean.
(88) High-Technology Electronics: silicon-based technology.
(95) Port/Harbor: port should be marked as in port of departure (e.g., airport, train
station, seaport).
(116) Water-Bounded: only completely bounded bodies of water (e.g., pool or pond).
(117) Water-Canal: manmade.
(118) Water-Large Expanse: the San Francisco Bay should be marked as a large
expanse.
(119) Water-River: also includes stream.
6. Element Level-General/Abstract Items
This level contains the most abstract elements. There are 31 elements, and the
following require further explanation:
(121) Activity-Active: predominantly visually active (e.g., an accelerator is very active electromagnetically, but would be considered passive, because there is little visual activity); potential activity is considered as passive.
(122) Activity-Passive: predominantly visually passive (e.g., a ballpark is passive most of the time).
(123) Activity-Flowing (Water, Air, etc.): can be natural (e.g., a creek) or manmade.
(128) Ambience-Dangerous: perceived and/or physically dangerous.
(140) Colorful: to be used only if especially characteristic.
(141) Modern: to be used only if especially characteristic.
(142) Odd/Surprising: to be used only if especially characteristic.
(143) Old: to be used only if especially characteristic.
(144) Personnel-Few: 1 to 10 employees mostly full-time.
(145) Personnel-Many: 10 to 1000 employees mostly full-time.
(146) Personnel-None: no full-time employees, but occasional human attention is
allowed.
(148) Size-Large (University Campus): represents a "campus" size area.
(149) Size-Medium (Building): size of typical single buildings.
(150) Size-Small (Human): typically, the size of a human (i.e., 6 feet).
(151) Dull: to be used only if especially characteristic of the color.