SUMMARY REPORT. STAR GATE OPERATIONAL TASKING AND EVALUATION (AN EXTRACT)
Document Type:
Collection:
Document Number (FOIA) /ESDN (CREST): CIA-RDP96-00791R000200300002-2
Release Decision: RIPPUB
Original Classification: K
Document Page Count: 22
Document Creation Date: November 4, 2016
Document Release Date: April 13, 2000
Sequence Number: 2
Case Number:
Content Type: SUMMARY
File: CIA-RDP96-00791R000200300002-2.pdf (1.58 MB)
Body:
SUMMARY REPORT
STAR GATE OPERATIONAL TASKING AND EVALUATION
1.0 EXECUTIVE SUMMARY
From 1986 to the first quarter of FY 1995, the DoD paranormal psychology program
received more than 200 tasks from operational military organizations requesting that
the program staff apply a paranormal psychological technique known as "remote
viewing" (RV) to attain information unavailable from other sources. The operational
tasking comprised "targets" identified with as little specificity as possible to avoid
"telegraphing" the desired response.
In 1994, the DIA Star Gate program office created a methodology for obtaining
numerical evaluations from the operational tasking organizations of the accuracy and
value of the products provided by the Star Gate program. By May 1, 1995, the three
remote viewers assigned to the program office had responded, i.e., provided RV
product, to 40 tasks from five operational organizations. Normally, RV product was
provided by at least two viewers for each task.
Ninety-nine accuracy scores and 100 value scores resulted from these product
evaluations by the operational users. On a 6-point basis where "1" is the most
accurate, accuracy scores cluster around "2's" and "3's" (55 of the entries) with 13
scores of "1". Value scores, on a 5-point basis with "1" the highest, cluster around "3's"
and "4's" (80 of the entries); there are no "1's" and 11 scores of "2".
After careful study of the RV products and detailed analysis of the resulting product
evaluations for the 40 operational tasks, we conclude that the utility of RV for
operational intelligence collection cannot be substantiated. The conclusion results
from the fact that the operational utility to the Intelligence Community of the information
provided by this paranormal RV process simply cannot be discerned. Furthermore,
this conclusion is supported by the results of interviews conducted with
representatives of the operational organizations that provided tasking to the program.
The ambiguous and subjective nature of the process actually creates a need for
additional efforts of questionable operational return on the part of the intelligence
analyst. Assuming that the subjective nature of the psychic process cannot be
eliminated, one must determine whether the information provided justifies the required
resource investment.
2.0 GENERIC DESCRIPTION OF OPERATIONAL TASKING
Over the period from 1986 to the first quarter of FY 1995, the Star Gate program received
more than 200 tasks from operational military organizations. These tasks requested
that the program staff apply their paranormal psychological technique known as "remote
viewing" (RV) in the hope of attaining information unavailable from other sources. The
operational tasking comprised "targets" which were "identified" in some manner,
normally with as little specificity as possible (see discussion below) to avoid
excessively "telegraphing" the desired response. However, until 1994, the results
from this tasking were not evaluated by the tasking organizations by any numerical
method that would identify the accuracy and value of the provided information (for a
few cases in prior years narrative comments were provided by some organizations).
In 1994, this situation changed when the Program Office developed a methodology for
obtaining numerical evaluations from the tasking organizations of the Star Gate inputs;
this methodology is described briefly in Section 3.0. By May 1, 1995, 40 tasks
assigned by five operational organizations had been evaluated under this process.1
Section 4.0 describes the numerical evaluations performed by evaluators from the
tasking organizations. The descriptions presented below regarding the tasking and
the related targets refer principally to the operational tasks that were numerically
evaluated.
The process for a typical tasking, RV response and subsequent evaluation is as
follows:
- The tasking organization provides information to the Star Gate Program
Manager (PM) describing the problem to be addressed.
- The PM provides a Tasking Form delineating only the most rudimentary
information to one or more of the three Star Gate RV's2 for their use during the
RV session (a typical Tasking Form is presented in Figure 2-1). In addition, the
RV's are apprised of the identity of the tasking organization.
- Subsequently, the RV's hold individual "viewing" sessions, recording their
comments, observations, feelings, etc. and including line drawings or sketches
of things, places, or other items "observed" during the session.
- The individual RV inputs are collected and provided to the tasking
organization for their review with a request for completing a numerical
evaluation of the individual RV inputs for accuracy and for value.
- Finally, for those organizations who comply with the request, the evaluation
scores are returned to the Star Gate Program Office.
1 Evaluation of additional 1994-95 tasks continued after 5/1/95; three tasks since evaluated were
reviewed. They caused only insignificant changes to the statistical information provided in Table 4-1 and
did not alter any of the Conclusions and Recommendations in Section 7.0.
2 (U) All three RV's were full-time government employees.
FIGURE 2-1
TASKING SHEET
DATE: 18 Jul 94
SUSPENSE: 18 Jul 94, 1600 Hrs
2. METHOD/TECHNIQUE: Method of Choice
4. ESSENTIAL ELEMENTS OF INFORMATION: Access and describe target.
Twenty-six (26) of the 40 operational tasks originated from DIA in support of two joint
Task Forces, Org. B and Org. C, (see Section 4.0). Typical tasking targets for these
organizations comprised the name of a person or thing (e.g., vessel) with a generic
request to describe the target, his/her/its activities, location, associations, etc. as
appropriate. No specific information (e.g., what is the height/weight/age of the target?)
was requested in the tasking. As noted above, the identity of the supported
organizations also was provided. For these tasks that identification provides the RV's
with knowledge regarding the specific operational interests of these organizations.
Thus, any information provided by the RV's which describes or relates to those
interests "could be" relevant; and, therefore, could be interpreted by the evaluators as
having some level of "accuracy" and "value" depending upon the information
described and the evaluator's interests and beliefs.
The tasking provided by the organization denoted as Org. A comprised targets that
were "places" visited by "beacons", i.e., an individual from Org. A who visited and
"viewed" the site of interest to assist the RV in "visualizing" and describing the site.
Targets could be a general vista in or around a particular location, a particular facility
at a selected location or, perhaps, a particular item at a location (in the one case
where this type of target was used, the item was a particular kind of boat). Usually, no
specifics regarding the type of target or its location were provided.
Tasking by Org. D comprised two generic types of targets that related to military
interests/concerns current at the time of the tasking, e.g., North Korean (NK)
capabilities and leadership. The first type of target focused upon then-current military
concerns while the second type required "precognitive" (predictive) capabilities since it
required a prognosis of future intentions and actions.3
The tasking from Org. E was similar in scope, albeit quite different in context, to the
tasks noted earlier for Org. B and Org. C, i.e., describe a person, his activities,
location, etc.
SG1 B
3 Some operational tasks from the period Oct. 1990 to Jan. 1991 regarding Middle East issues were of a
similar type, albeit these were not numerically evaluated. They would provide some data for an after-the-
fact check of the accuracy of the RV predictions - see Section 7.0 for a discussion of this possibility.
3.0 EVALUATION MEASURES
The numerical evaluation measures that were given to the evaluators of the tasking
organizations to score the accuracy and value of the Star Gate inputs were extracted
from the Defense Intelligence Agency Manual (DIAM) 58-13. These measures are
shown in Table 3-1. Most of the stipulated measures include modifiers such as "may",
"possibly", "high", "low", etc. which are subjective and open to individual interpretation
by each evaluator. The DIAM 58-13 definitions for the ratings under "Value" are
presented in Table 3-2; whether the individual evaluators reviewed these definitions
prior to their scoring is unknown. There was no clarification of what was intended by
the generic headings of "Accuracy" and "Value", e.g., in the evaluator's estimation how
much of the RV's response to the tasking had to qualify for a particular measure (1%,
10%, 90%) to be granted the related score?
Table 3-1  Numerical Evaluation Measures

Category                                                      Score
Accuracy - Is the information accurate?
  Yes (true)                                                    1
  May be true                                                   2
  Possibly true                                                 3
  No                                                            4
  Possibly not true4                                            5
  Unsure                                                        6
Value - What is the value of the source's information?
  Major significance                                            1
  High value                                                    2
  Of value                                                      3
  Low value                                                     4
  No value                                                      5
As noted in Section 2.0, one series of tasks was evaluated by a narrative discussion
only. While much of the final narrative evaluation for this series was complimentary, it
lacked any real specifics regarding the usefulness or relevance of the Star Gate inputs,
and much of the narrative was replete with modifiers and other hedges. A sanitized
extract from the final evaluation report for these tasks is presented in Appendix A,
illustrating the subjective, "uncertain" nature of the comments.
4 Note that Accuracy scores 5 and 6 actually rank "higher" than 4 since both imply that there may be
something accurate in the information. Changing the scoring order to accommodate this observation
causes insignificant changes to both the averages and the standard deviations shown on Table 4-1.
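The arithmetic behind footnote 4 can be illustrated with a short sketch. The scores used below are hypothetical (the per-task entries of Table 4-1 are not reproduced here); the sketch only shows how the re-ordering described in the footnote would be applied and why it shifts the averages and standard deviations only modestly.

```python
# Hypothetical accuracy scores on the DIAM 58-13 scale (1 = "Yes (true)", 6 = "Unsure").
from statistics import mean, pstdev

scores = [1, 2, 2, 3, 3, 4, 4, 5, 6, 3]

# Re-rank so that "No" (4) becomes the worst score and "Possibly not true" (5) and
# "Unsure" (6) each move up one step, as footnote 4 suggests.
reorder = {1: 1, 2: 2, 3: 3, 5: 4, 6: 5, 4: 6}
reordered = [reorder[s] for s in scores]

print(round(mean(scores), 1), round(pstdev(scores), 1))        # original ordering
print(round(mean(reordered), 1), round(pstdev(reordered), 1))  # re-ordered scale
```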
Table 3-2 - Value Rating Definitions from DIAM 58-13
MAJOR SIGNIFICANCE - Intelligence Information Report (IIR) provided information which
will alter or significantly influence national policy, perceptions, or analysis; or provided
unique or timely indications and warning of impending significant foreign military or
political actions having a national impact.
HIGH VALUE - IIR(s) was best report to date or first report on this important topic, but
did not significantly influence policy or change analyses.
OF VALUE - IIR(s) provided information which supplements, updates, confirms, or aids
in the interpretation of information in data bases, intelligence production, policy research
and analysis, or military operations and plans; most DoD HUMINT System reporting falls
into this category.
LOW VALUE - IIR was not a good report because the information was not reported in a
timely manner, or was of poor quality/of little substance. Nevertheless, it satisfied some
of the consumer's informational needs.
NO VALUE - IIR provided no worthwhile information to support data base maintenance,
intelligence production, policy research and analysis, or military operations and planning;
or its information had no utility, was erroneous, or misleading.
4.0 EVALUATION SUMMARY AND COMMENTS
Thirty-nine (39) of the 40 numerically evaluated, operational tasks were performed in
1994 and one in 1995. The information provided by the Star Gate RV's for each task
was evaluated by staff of the tasking organization. The complete compilation of
evaluated scores is presented in Table 4-1 which includes a designation of the tasking
organization and, where known, a numerical designator for the individual from that
organization who signed the response to the evaluation request (in some instances,
this was also an evaluator}. Also presented are the individual and collective scores for
Accuracy (A) and Vaiue {V) for each of the three RV's and the related average and
standard deviations for the compiled scores. (Note that the total number of scaring
entries for either Accuracy or Value is not equal to the maximum of 120, i.e., 3x40,
since all three RV's did not participated in all tasks). Table 4-2 presents the same
scoring data by tasking organization.
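For reference, the summary rows of Table 4-1 are straightforward arithmetic on the compiled scores. A minimal sketch follows; only the reported column totals (sums and entry counts) are taken from the table, the per-column score list is a hypothetical placeholder, and whether the report used the sample or population form of the standard deviation is not stated.

```python
from statistics import mean, stdev

# Reported overall totals from Table 4-1.
sum_A, n_A = 296.0, 99     # all Accuracy entries
sum_V, n_V = 348.0, 100    # all Value entries
print(round(sum_A / n_A, 1), round(sum_V / n_V, 1))   # 3.0 3.5, matching the table

# Hypothetical score list for one RV column, showing how the per-column
# average and standard deviation entries would be computed.
column_scores = [3.0, 2.0, 5.0, 3.0, 4.0, 1.0, 3.0]
print(round(mean(column_scores), 1), round(stdev(column_scores), 1))
```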
Histograms of the scores from Table 4-1 are shown below. Note that "Accuracy"
scores tend to cluster around 2's and 3's (55 of the 99 entries) while "Value" scores
cluster around 3's and 4's (80 of the 100 entries). This is not too surprising as the
nonspecific, nebulous nature of the individual task/target requests permits the RV to
"free associate" and permits the evaluator to pick and choose from the RV commentary
TABLE 4-1 - NUMERICAL EVALUATIONS
[Columns: Doc. No.; Date; Tasking Org.; Evaluator; Accuracy (A) and Value (V) scores for Remote Viewers 1, 2 and 3 (1A, 1V, 2A, 2V, 3A, 3V); Totals. Per-task score rows not reproduced; column totals below.]

                          1A      1V      2A      2V      3A      3V     All A   All V
Score sums =            106.5   130.0    76.0    83.0   113.5   135.0     296     348
Number of entries =       37      37      25      26      37      37       99     100
Avg. score =              2.9     3.5     2.9     3.2     3.1     3.6      3.0     3.5
Std. deviation =          1.4     0.8     1.3     0.7     1.6     0.9      1.4     0.8

TABLE 4-2 - NUMERICAL EVALUATIONS BY TASKING ORGANIZATION
[Same scores as Table 4-1, grouped by tasking organization; per-organization totals and average scores below.]

Org.      Entries (A/V)   Score sums (A/V)    1A    1V    2A    2V    3A    3V    All A   All V
Org. A       14 / 14          33 / 40         2.8   3.0   1.8   2.6   2.6   3.0    2.4     2.9
Org. B       14 / 15          57 / 60         3.6   4.2   4.0   3.2   4.5   4.3    4.1     4.0
Org. C       48 / 48         140 / 169        2.8   3.4   3.3   3.5   2.9   3.6    2.9     3.5
Org. D       14 / 14          27 / 45         1.8   3.2   2.2   2.5   1.8   3.8    1.9     3.2
Org. E        9 / 9           39 / 34         4.0   4.0   5.0   4.0   4.3   3.3    4.3     3.8

[Histograms of Evaluator Scoring: figure omitted.]
anything that he thinks "may" or "possibly" is related to his problem (and score
accordingly) regardless of how much of the RV commentary may satisfy the particular
measure. If the Accuracy of the information is somewhat uncertain, its Value must be
vaguer still, i.e., scored lower. This presumption is supported by review of the scored
"pairs" for all cases, e.g., 1A and 1V; only rarely does the "V" score equal or exceed the
"A" score for a specific RV and target. Note further that of the 100 "V" scores shown on
Table 4-1, there are no "1" scores5, while the 99 "A" scores include 13 "1's".
Regarding the latter, a detailed review of the evaluator comments and/or the tasking
suggests that the importance of these 1's is less than the score would imply in all but
four cases since:
- the evaluator of Document 243 stated that the RV 3A score "...though vague, is
probably correct."
- the tasking and targets for Documents 245, 247, 248, 249 and 265 concern
topics widely publicized in the open media during the same period, hence the
"source" of the RV 1A and 3A comments, intended or not, is suspect,
and - for Documents 230, 239 and 244, the evaluator's supporting narrative is
5 The significance of this omission is further enhanced if one assumes that the evaluators were familiar
with the definitions in Table 3-2 since even those 11 instances scored as #2 ("High value") merely require
that the input be the "best report to date or first report on this important topic, but [it] did not significantly
influence policy or change analyses."
6 (U) The evaluation of Document 265 is actually a second evaluation of the same RV inputs provided
many months after the first evaluation for Document 248 and probably done by a different evaluator.
between the RV(s) and the evaluator:
- has a very narrow information bandwidth, i.e., the RV-derived information
cannot be embellished by a dialogue with the evaluator without substantially
telegraphing the evaluator's needs and interests, thereby biasing any
RV information subsequently derived,
and - is extremely "noisy" as a result of the unidentifiable beliefs, intentions,
knowledge, biases, etc. that reside in the subconsciousness of the RV(s)
and/or the evaluator.
As a result, the potential for self-deception on the part of the evaluator exists, i.e.,
he/she "reads" into the RV information a degree of validity that in truth is based upon
fragmentary, generalized information and which may have little real applicability to
his/her problem. The relevant question in the overall evaluation process is who and
what is being evaluated, i.e., is the score a measure of the RV's paranormal
capabilities or of the evaluator's views, beliefs and concepts?
One of the RV's expressed a concern to the author that the protocols that were
followed in conducting the RV process in response to the operational tasking were not
consistent with those that are generally specified for the study of paranormal
phenomena. Whether the claimed discrepancy was detrimental to the information
derived by the RV's, or to its subsequent evaluation or use cannot be determined from
the available data.
The operational tasking noted earlier concerning activities in North Korea which
required precognitive abilities on the part of the RV's provides an opportunity for a
post-analysis by comparing the RV predictions against subsequent realities.
Additional comparative data of this type is available from operational tasking during
the period 11/90 through 1/91 regarding the Middle East situation (this tasking was not
numerically evaluated).
6.0 SUMMARY FROM USER INTERVIEWS (U)
Subsequent to the review and analysis of the numerically scored tasking described in
the previous sections of this report, the author participated in interviews with
representatives of all of the tasking organizations presented in Table 4-1 except Org.
D. Only a brief summary of the results from those interviews is presented here; more
detailed synopses are presented in Appendix B. In all cases except for Org. C, the
interviewees were the actual personnel who had participated directly in the tasking
and evaluation of the Star Gate program. For Org. C, the sole interviewee was the
Chief of the Analysis Branch; the staff who defined the tasking and performed the
evaluations comprised his lead analysts.
A brief summary of the salient points which appeared consistently throughout these
interviews follows:
- the principal motivation for using Star Gate services was the hope that
something useful might result; the problems being addressed were very
difficult and the users were justifiably (and admittedly) "grasping at straws" for
anything that might be beneficial.
- the information provided by the Star Gate program was never specific enough
to cause any operational user to task other intelligence assets to specifically
corroborate the Star Gate information.
- while information that was provided did occasionally contain portions that
were accurate, albeit general, it was - without exception - never specific
enough to offer substantial intelligence value for the problem at hand.
- two of the operational user organizations would be willing to pay for this
service if that was required and if it was not too expensive (although one user
noted that his organization head would not agree). However, the fact that Star
Gate service was free acted as an incentive to obtain "it might be useful - who
knows" support for the program from the user organizations.
The reader is referred to Appendix B for additional information resulting from these
interviews. However, two inconsistencies noted during the discussion of the numerical
evaluations in Section 4.0 were supported by information obtained from the interviews.
On the average, the Org. C evaluators scored higher than those of Org. B. One cause
for this discrepancy may be the fact that the Org. B evaluators were, in general,
skeptical of the process while the lead person at Org. C claimed to be a believer in
parapsychology and, in addition, had the last say in any evaluations that were
promulgated back to the Star Gate PM. This comment is in no way intended to impugn
the honesty or motivation of any of these personnel, merely to point out that this
difference in the belief structure of the staff at these two organizations may have
resulted in the perceived scoring bias. As noted above, the subjectivity inherent in the
entire process is impossible to eliminate or to account for in the results.
The higher average scoring, especially Accuracy scores, from the Org. A evaluators
appears to be explained by the procedure they used to task and evaluate the
experiments they were performing with the Star Gate program. Namely, they used a
staff member as a "beacon" to "assist" the RV's in "viewing" the beacon's location.
Subsequently, the same Org. A staff member evaluated the RV inputs. However, since
he/she had been at the site, he/she could interpret anything that appeared to be related
to the actual site as accurate. When asked if the information from the multiple RV's
was sufficiently accurate and consistent such that a "blind" evaluator, i.e., one who did
not know the characteristics of the site, would have been able to identify information
from the RV inputs that they could interpret to be accurate, they all answered in the
negative and agreed that the score would have been lower. Again the subjectivity of
the process appears - the evaluator could interpret the admittedly general comments
from any RV that seemed to relate to the actual site as "accurate"; e.g., consider an RV
input "there is water nearby": the evaluator knows this is true of almost anyplace,
especially if one does not or cannot define what kind of water, i.e., is it a lake, a water
line, a commode, a puddle?
7.0 CONCLUSIONS AND RECOMMENDATIONS
7.1 Conclusions
The single conclusion that can be drawn from an evaluation of the 40 operational
tasks is that the value and utility to the Intelligence Community of the information
provided by the process cannot be readily discerned. This conclusion was initially
based solely upon the analysis of the numerical evaluations presented in Section 4.0,
but strong confirmation was provided by the results of the subsequent interviews with
the tasking organizations (Ref. Section 6.0 and Appendix B). While, if one believes the
validity of parapsychological phenomena, the potential for value exists in principal,
there is, Wane-the-less, an alternative view of the phenomenology that would disavow
any such value and, in fact, could claim that the ambiguous and subjective nature of
the process actually creates a need for additional efforts with questionable operational
return on the part of the intelligence analyst.
Normally, much of the data provided by the RV(s) is either wrong or irrelevant, although
one cannot always tell which is which without further investigation. Whether this reality
reduces or eliminates the overall value of the totality of the information can only be
assessed by the intelligence analyst. It clearly complicates his/her problem in two
ways: 1) it adds to the overburden of unrelated data which every analyst already
receives on a daily basis, i.e., the receipt of information of dubious authenticity and
accuracy is not an uncommon occurrence for intelligence analysts, and 2) since the
analyst does not normally know which information is wrong or irrelevant, some of it is
actually "disinformation" and can result in wasted effort as the analyst attempts to verify
or discount these data from other sources.
The review of the operational tasking and its subsequent evaluation does not provide
any succinct conclusions regarding the validity of the process (or the information
provided by it). First and foremost, as discussed in Section 5.0, the entire process,
from beginning to end, is highly subjective. Further, as noted in Section 3.0, the
degree of consistency in applying the scoring measures, any guidance or training
provided to the evaluators by any of the tasking organizations and/or the motivation of
the evaluators are either unknown or, in the case of the latter, may be highly polarized
(see Appendix B). The lack of information regarding these items could account for
some of the variability in the scores across organizations noted in Table 4-2, but this
cannot be certified and is, at most, a suspicion.
Whether the information provided by the Star Gate source is of sufficient value to
overcome the obvious detriment of accommodating the irrelevant information included
therein is an open question. More precisely, whether the Star Gate information is of
sufficient value to continue this program - vis-a-vis other sources of information and
other uses of resources - is an important question for the Intelligence Community to
address, irrespective of one's personal views and/or beliefs regarding this field of
endeavor, i.e., does the information provided justify the required resource investment?
One method that might assist this evaluation is to develop a means for scoring the
complete input from the RV process, i.e., evaluate all information and determine how
much is truly relevant, how much is of undeterminable value and how much is
completely irrelevant. One could then analyze how much information is being handled
to achieve the relevant information (along with some measure of the relevancy) and
make judgments on its value vis-a-vis the investment in time and money. Other, less
technical, methods for adjudicating this issue also exist.
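A sketch of how such a "complete input" scoring might be structured is shown below. This is not a method from the report; the three categories simply follow the sentence above, and the names and statement counts are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RVProductTally:
    relevant: int       # statements judged truly relevant to the task
    undetermined: int   # statements whose value cannot be determined
    irrelevant: int     # statements judged completely irrelevant

    def fractions(self):
        """Return the share of each category in the total RV product."""
        total = self.relevant + self.undetermined + self.irrelevant
        if total == 0:
            return (0.0, 0.0, 0.0)
        return tuple(round(n / total, 2)
                     for n in (self.relevant, self.undetermined, self.irrelevant))

# Hypothetical tally for one RV product; the relevant fraction versus the volume of
# material the analyst must handle is the trade-off described in the text.
tally = RVProductTally(relevant=3, undetermined=9, irrelevant=28)
print(tally.fractions())
```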
7.2 Recommendations
Considering the statements above, the only sensible recommendation in this author's
mind is to bring some "scientific method" into this process (if it is continued). As
evidenced by more than 20 years of research into paranormal psychology, much of it
done by institutions of higher education or others with excellent credentials in related
fields, validation of parapsychological phenomena may never be accredited in the
sense that is understood in other scientific and technical fields of endeavor. Control in
any rigorous scientific sense of the multitude of human and physical variables which
could, and probably do, influence this process is difficult - perhaps impossible - for any
except the most mundane types of experiments, e.g., blind "reading" of playing cards.
Even these restricted experiments have led to controversy among those schooled in
the related arts.
One of the foundation precepts of scientific endeavor is the ability to obtain repeatable
data from independent researchers. Given the subjective nature of RV activities, it is
difficult to believe that this aspect of parapsychology will ever be achieved. As an
admitted neophyte in this area of endeavor, I categorize the field as a kind of religion,
i.e., you either have "faith" that it indeed is something real, albeit fleeting and unique,
or you "disbelieve" and attribute all positive results to either chicanery or pure
chance.10
Thus, one must recognize at the start that any attempt to bring scientific method into
the operational tasking aspects of this project may not succeed. Others with serious
10 Practitioners in the field, including those funded under government contracts, would argue with these
observations, perhaps vehemently; some would argue further that the phenomenology has been verified
beyond question already. This reviewer disagrees; albeit, these observations are not intended to discard
the possibility of such phenomena.
motives and intentions have attempted to do this with the results noted above.
However, as a minimum, one could try to assure that the scoring measures are
succinctly defined and promulgated such that different organizations and evaluators
would have a better understanding of what is intended and, perhaps, could be more
consistent in their scoring. The use of independent, multiple evaluators on each task
could aid in reducing some of the effects of the subjective nature of the evaluation
process and the possible personal biases (intentional or otherwise) of the evaluators.
Since, according to some parapsychologists, the time of the remote viewing is not
relevant to the attainment of the desired information, controlled "blind tests" could be
run by requesting tasking for which the accurate and valuable information is already
known to determine statistics on RV performance (clearly one key issue in such tests is
what information is given to the RV in the task description to avoid any semblance of
compromise, not a casual problem). Controlled laboratory experiments of
parapsychology have done this type of testing and the results, usually expressed in
terms of probability numbers that claim to validate the parapsychological results, have
done little to quell the controversy that surrounds this field. Thus it may be naive and
optimistic to believe that such additional testing would help resolve the question of
the "value of the process" (or its utility for operational intelligence applications), but it
might assist in either developing "faith" in those who use it, or conversely "disbelief".
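The "probability numbers" such blind tests produce are typically tail probabilities of the observed hit count under pure chance. The sketch below is illustrative only; the trial count, chance hit rate and observed hit count are hypothetical, not results from the program.

```python
from math import comb

def p_at_least(hits: int, trials: int, p_chance: float) -> float:
    """Probability of observing at least `hits` successes in `trials` independent
    trials when each trial succeeds with probability `p_chance` by chance alone."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical example: 40 blind taskings, a 25% chance hit rate, 15 observed hits.
print(round(p_at_least(15, 40, 0.25), 4))
```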
Before additional operational tasks are conceived, some thought could be given to
how and what one defines as a "target". Broad generic target descriptions permit
unstructured discourse by the RV which - especially if there is a knowledge (or even a
hint) of the general area of interest - leads to data open to very subjective, perhaps
illusionary, interpretation regarding both accuracy and value. If some specificity
regarding the target could be defined such that the relevance and accuracy of the RV-
derived data could be evaluated more readily, some of the uncertainties might be
eliminated. In this context, note that in the cases where targets were more specific, e.g.,
the North Korean targets, the resulting scores were generally higher.
Finally, it was noted in Section 5.0 that some of the RV information obtained from
operational tasks regarding North Korea (and others concerning the Middle East)
depended upon the precognitive ability of the RV's in predicting events yet to occur.
These data provide an opportunity for a post-analysis of the accuracy of these
predictions by making a comparison with subsequent information regarding actual
events (some data for this comparison might require access to classified information
from other sources). Such a post-analysis would provide data for evaluating the ability
of the RV's to perform precognitive tasks and of the related operational value of the
predictions. Performance of this post-analysis lies beyond the scope of this paper, but
is a topic for a subsequent study if any sponsor is interested.
SG1 B
Next 1 Page(s) In Document Exempt
APPENDIX B
STAR GATE OPERATIONAL USER INTERVIEWS
STAR GATE OPERATIONAL USER INTERVIEW
ORGANIZATION: A
USER POC: #7
DATE: 3 August 1995
Operational Task: SG was asked to participate in a series of experiments to determine
if their paranormal service could assist in locating someone who was at an unknown
location and had no radio or other conventional method for communicating. Members
of the user organization acted as "beacons" for the RV's by visiting sites unknown to the
RV's at specified times. The RV's were requested to identify any information that would
assist in determining the site location by "envisioning" what the beacons were seeing.
Motivation for Employing Star Gate: The previous head of the user's group was aware
of the program from other sources and requested that SG participate in these
experiments in the hopes that some information might be obtained to assist in locating
the sites and/or people given the scenario above. This situation is similar to that noted
from other user interviews, namely, the difficulty of obtaining relevant information from
any other source renders the use of the paranormal approach as a worthwhile endeavor
from the user's perspective "just in case" it provides something of value.
User Attitude: All of the interviewees were positive regarding the application of this
phenomenology to their problem, albeit they all agreed that the RV information provided
from the experiments performed to date was inadequate to define the utility of the
phenomena and that additional experiments were needed.
Results - Value/Utility: For each user task, the evaluator was the same individual who
had acted as the beacon, i.e., the person who had actually been at the candidate
location. Each evaluator noted that some of the information provided by the RV's could
be considered to be accurate. When asked if the accuracy of the information would be
ranked as high if the evaluator did not know the specifics of the site, i.e., had not been the
"beacon", which is the real "operational situation", all answered in the negative. Several
interviewees indicated that their interpretation of the RV data led them to believe that
the RV's had witnessed other items or actions the beacon was engaged in but not
related to the site of interest. As a result of the experiments done to date, the user
decided that the approach being pursued was not providing information of operational
utility since it was too general. However, the user was convinced of the possible value
of the paranormal phenomena and was planning a new set of experiments using a
substantially modified approach in the hope of obtaining useful results.
Future Use of SG Services: As inferred above, the user would continue to use SG-type
services, albeit in a new set of experiments. The user would be willing to pay for this
service if it was not too expensive and requested that they be contacted if the program
was reinitiated. When advised that they could obtain services of this type from
commercial sources, they noted that this would be difficult due to the highly classified
nature of some of their activities.
STAR GATE OPERATIONAL USER INTERVIEW
ORGANIZATION: B
USER POC: #3, et al
DATE: 14 July 1995
Operational Task: Most tasking requested information about future events, usually the
time and/or place (or location) of a meeting. Some tasking requested additional
information describing a person or a thing, e.g., a vessel. In one instance, after
previous "blind" requests had yielded no useful information, the user met with the RV's
and provided a picture and other relevant information about an individual in hope of
obtaining useful information about his activities.
Motivation for Employing Star Gate: SG PM briefed RV activities and his desire to
expand customer base. User was willing to "try" using SG capabilities since there was
no cost to the user and, given the very difficult nature of user business, "grasping at
straws" in the hope of receiving some help is not unreasonable. Note that this
organization had tasked the program in the '91 time frame but had not continued tasking
in '92-'93 until briefed by the new Star Gate PM.
User Attitude: DIA POC was openly skeptical, but was willing to try objectively.
Members of the organization he supports (Org. B) had varied levels of belief; one
individual appeared very supportive, noting the successful use of psychics by law
enforcement groups (based upon media reporting). Evaluation of the tasking was
accomplished collectively by the DIA POC and three other Org. B members.
Results - Value/Utility: None of the information provided in response to any of the
tasks was specific enough to be of value or to warrant tasking other assets. SG data
was too vague and generic; information from individual RV's regarding the same task
was conflicting, contained many known inaccuracies, and required too much personal
interpretation to warrant subsequent action. User would be more supportive of process
if data provided was more specific and/or closely identified with known information. In
one instance, a drawing was provided which appeared to have similarity with a known
vessel, but information was not adequate to act on.
Future Use of SG Services: User would be willing to use SG-type services in future.
However, in current budget environment, demonstrated value and utility are not
adequate to justify funding from user resources. Would not fund in any case unless
program could demonstrate a history of successful and useful product. User believes
that RV's working directly with his analysts on specific problems would be beneficial in
spite of the obvious drawbacks. Individual quoted above suggested recruiting RV's
from other sources, noting his belief that the government RV's may not be best
qualified, i.e., have best psychic capabilities.
STAR GATE OPERATIONAL USER INTERVIEW
ORGANIZATION: C
USER POC: #4
DATE: 26 July 1995
Operational Task: Most tasking requested information describing a person, a location or
a thing, e.g., a vessel. Occasionally, the tasking would provide some relevant
information about the target or "his/her/its" associates in hope of obtaining useful
information about its activities.
Motivation for Employing Star Gate: In circa 1993, the SG PM briefed RV activities and
his desire to expand the customer base. This desire, conjoined with the user's belief
that it provided an alternate source of information, led to the subsequent tasking. User
was willing to "try" using SG capabilities since there was no cost to the user and, as
noted in other interviews, given the very difficult nature of the user's business, "grasping
at straws" in the hope of receiving some help is not unreasonable. This organization
had tasked the program in the (circa) '86-'90 time frame but had terminated tasking
since there was no feedback mechanism.
User Attitude: User was a believer in the phenomena based upon his "knowledge of
what the Soviets were doing" and his perceptions from the media regarding its use by
law enforcement agencies. He noted that his lead analysts, who generated the tasking,
were very skeptical, as was his management. User insisted that analysts be objective
in spite of their skepticism. In general, numerical evaluation of the task was performed
by the individual who had defined it.
Results - Value/Utility: This interviewee claimed value and utility for the information
provided by the RV's, noting that information regarding historical events was always
more accurate than information requiring predictions. RV's were "fairly consistent" in
identifying the "nature" of the target, e.g., is it a person or a thing, but not always. On
occasions where RV inputs were corroborated, additional data were requested, but
these data usually could not be corroborated. User commented that all reports had
some accurate information;2 however, the SG data provided was either not specific
enough and/or not timely enough to task other assets for additional information. Some
SG data was included in "target packages" given to field operatives; however, there was
no audit trail so there is no evidence regarding the accuracy or use of these data. User
also noted that classification prohibited data dissemination, as did concerns about
skepticism of others regarding the source and the potential for a subsequent negative
impact on his organization.
Future Use of SG Services: User desires to continue using SG-type service if the
1 Only one person provided all of the information at this review. Where the "user" or "interviewee" is cited, it reflects
the remarks of that single individual.
2 User was unaware that the tasking organization and its primary mission were known to the RV's. Portions of the
data provided by the RV's could have been predicted from this knowledge.
program continues. In addition, the user stated that he would be willing to pay for the
service if necessary. However, subsequent discussion indicated that his management
would not fund the activity unless the credibility could be demonstrated better and the
phenomenology legitimized. User went on to claim that only the sponsorship of a
government agency could "legitimize" this activity and its application to operational
problems. User believes that RV's working directly with his analysts on specific
problems would not be beneficial due to the skepticism of his analysts and the
deleterious impact that would have on the RV's. The views provided by the user - note
none of the actual evaluators were present - appeared to be unique to him and his belief
in the phenomenology, i.e., his remarks indicated that the use of this process was not
actively supported by anyone else in his organization. The numerical evaluations of the
19 tasks performed in 1994/95 certainly do not indicate, on the average, either a high
degree of accuracy or value of the data provided.
STAR GATE OPERATIONAL USER INTERVIEW
ORGANIZATION: E
USER POC: #9
DATE: 7 July 1995
Operational Task: Request to assist in determining if a suspect was engaged in
espionage activities, e.g., who is he meeting? where? about what? are these activities
related to espionage or criminal actions? Tasking comprised a series of four sequential
tasks, each time a bit more information was provided to the RV's, including at one point
the name of the suspect. (Note: this "sequential tasking" is unique. Each of the tasks
assigned from other operational organizations was a "singular" or "stand alone" event.)
Motivation for Employing Star Gate: SG PMO briefed RV activities and his desire to
expand customer base. User was willing to "try" using SG capabilities since there was
no cost to the user and, given the very difficult nature of user business, "grasping at
straws" in the hope of receiving some help is not unreasonable.
User Attitude:
Pre-SG experience -User (#9) had a perception of beneficial assistance
allegedly provided to domestic police by parapsychologists; thereby he was encouraged
to try using the SG capabilities and hopeful of success.
Post-SG experience -Still very positive in spite of the lack of value or utility from
SG efforts (see below). User is "willing to try anything" to obtain assistance in working
his very difficult problems.
Results - Value/Utility: None of the information provided in any of the four sequential
tasks was specific enough to be of value or to warrant tasking his surveillance assets to
collect on-site information as a result of SG information. SG data was too generic and
while it may have contained accurate information, it required too much personal
interpretation to warrant subsequent actions by his assets. Much of the SG information
was clearly wrong so there was no way to ascertain the validity of the rest. One major
deficiency noted in the SG responses was the lack of any RV data regarding large fund
transfers that the suspect was known to be engaged in and which the user believes
would have been uppermost in the suspect's mind. User would be more supportive of
process if data provided was more specific and/or closely identified with known
information.
Future Use of SG Services: User would be willing to use SG-type services in future.
However, in current budget environment, demonstrated value and utility are not
adequate to justify funding from user resources. User would be willing to have a joint
activity whereby RV's work directly with his analysts on specific problems if: a) user did
not pay for RV services and b) commitment for joint RV's services was long term, i.e.,
several years.