INTELLIGENCE USER SURVEY
Document Number (FOIA)/ESDN (CREST): CIA-RDP83M00171R001500050002-5
Release Decision: RIPPUB
Original Classification: K
Document Page Count: 17
Document Creation Date: December 16, 2016
Document Release Date: January 4, 2005
Sequence Number: 2
Publication Date: December 11, 1981
Content Type: MF
File Attachment Size: 612.23 KB
Body:
Approved For Release 2005/03/24: CIA-RDP83M00171R001500050002-5
DCI/ICS 81-2234
11 December 1981
MEMORANDUM FOR: Director, Intelligence Community Staff
VIA: [illegible]
FROM: Director, Office of Assessment and Evaluation
SUBJECT: Intelligence User Survey
1. Attached is a paper prepared by [name deleted] in close coordination
with [name deleted]. As you are aware, it was not possible for us on this
staff to impose a rigorous methodology for analyzing the data on the User
Survey prior to the submission of the questionnaires to respondents, for
the following reasons:
a. The questionnaire itself represented an amalgam of disparate
interests of the IC Staff (PAO and PGS), DIA, NSA, and NFAC.
Although the Office of Medical Services "experts" on such
questionnaires attempted to ensure some form of rigor which would
enable analysts to evaluate the meaning of the data acquired, their
efforts were only partially successful.
b. Although the Survey was sponsored by you as D/ICS and by
John McMahon as D/NFAC (wearing his Community hat), the form of the
questionnaire and the manner in which the project was implemented were
skewed by the virtual veto power of [name deleted] in his role as
Chairman of the Interagency Working Group on Intelligence Production.
Understandably, [he wanted his] agency to do nothing to antagonize members
of his Community Working Group, which represents one of the few attempts
of NFAC to exercise a Community role.
2. It is now time for us to prepare for you a report summarizing our
initial findings based on a preliminary analysis of the data obtained both
from the questionnaires and the interviews. Once this has been done, we
would hope to subject the data to far more rigorous analysis along the
lines indicated in the attachment. If you approve of this method and
approach, we would hope to have the initial report for you by the end of
January. We would also welcome your thoughts on the model we have
constructed as a guide to further analysis. This is provided in the
attachment.
Distribution: DCI/ICS 81-2234
Orig-D/ICS
1-DD/ICS
1-D/OA&E
1-A-D/OP
1-NFAC
1-OP
1-OA&E
1-OA&E
1-OA&E
1-OA&E Subject
1-OA&E Chrono
1-ICS Registry
DCI/ICS/OA&E (12/11/81)
14 December 1981
Proposed Approach to Evaluating Consumer
Survey Questionnaire Data
Background
A survey addressing the interests and preferences, intelligence usage,
and derived satisfactions of Washington-level policy consumers has been
underway for several months under the sponsorship of the Interagency Working
Group on Intelligence Production. Collection of data in questionnaire form
is complete. Follow-on interviews of select respondents are now being
conducted. Computerized questionnaire data, in their entirety, have been
provided to the IC Staff to support whatever evaluation purposes may be
deemed worthwhile. Simply put, the questionnaire asked Carter Administration
incumbents about:
- What they wanted.
- What they got, and how they got it.
- How much of it they used.
- How they liked it.
Objectives of the Analysis
To structure, assess, and understand the consumer survey so that we can:
- Estimate the value of intelligence information to the consumer.
- Find ways to make intelligence more useful.
Analytical Concept
Establishment of a comprehensive, sustained product evaluation function,
possibly within the re-organized Intelligence Community Staff, has been under
consideration by Community management for some time. Since any such activity
might benefit from the existence of a logical analytical infrastructure upon
which to proceed, this paper proposes a systematic, in-depth approach to
interpreting the Survey data. Our approach is aimed at defining and
describing the filtering process through which intelligence outputs progress
toward the ultimate payoff: application by a consumer against an issue he
faces, with results beneficial to U.S. national interests (See Attachment
1).
Data extracted from the survey questionnaires will be used to estimate values
and frequencies at different nodes in the intelligence utilization process.
Fuller appreciation of this process, and how intelligence is flowing through
it, ought to suggest promising areas for modifying procedures or resource
allocations so as to increase the proportion of useful outputs.
Since the survey is essentially a market poll, the proposed analysis
will seek out and present statistical highlights in a "polling report" vein.
Our initial efforts will be of this sort, and should find a ready audience
among the functional managers directly concerned with intelligence production
activities.
Analytic Sequence
It is proposed that IC Staff analysis proceed along the following lines:
Phase I: Concept Development and Exploratory Data Analysis
- Examine data for general patterns, realizing that more sophisticated
analysis will follow. Simplify descriptions without concern for
uncertainty or variability.
- Summarize responses to questions deemed of particular interest.
- Develop the appropriate statistical methods and data extraction tools
to circumvent methodological difficulties (See Attachment 2).
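A minimal sketch of the Phase I summarization step, in Python, assuming nothing about the real file formats: it tallies answer shares for one question and sets missing responses aside. The question and responses shown are invented for illustration, not drawn from the actual questionnaire.

```python
from collections import Counter

def summarize(responses):
    """Tally one question's answer shares, setting missing responses aside."""
    answered = [r for r in responses if r is not None]
    counts = Counter(answered)
    n = len(answered)
    # Each option's share is computed over answered responses only,
    # so missing data do not masquerade as an opinion.
    return {option: count / n for option, count in counts.items()}

# Invented responses to one frequency-of-use question (None = no answer).
q_frequency = ["Daily", "Weekly", None, "Daily", "Monthly", "Daily", None, "Weekly"]
shares = summarize(q_frequency)
```

Phase I would apply such simple tabulations question by question before any modeling; uncertainty and variability are deliberately ignored at this stage, as prescribed above.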
Phase II: Refinement of the Model (an iterative process)
- Determine which aspects of the model are addressed by data from the
questionnaire, and what data are missing.
- Examine, one by one, the hypotheses implicit in the model. Prune
back the model if data do not support hypotheses (See Attachment 3).
Phase III: Quantification of the Model
- Associate specific questions in the questionnaire with nodes in the
model.
- Verify the "closeness" of questions with appropriate statistical
tools, making adjustments as necessary.
- For each point in the model, estimate frequencies, values,
proportions, and levels of significance as required.
- Describe activity levels through the utilization process;
explain routing, and conduct sensitivity analysis to identify
chokepoints and opportunities for increasing the utility of
intelligence.
Phase IV: Prepare Final Report
- Dovetail statistical results with textual and verbal responses.
- Present inferences drawn from Survey data.
- Recommend resource reallocations and procedural changes that
offer the best opportunity to maximize the utility of the
intelligence product to consumers.
- Recommend directions for further study (those points in the
model for which data are inconclusive or non-existent).
Attachment 1. Prototype Model of the Utilization Process
This attachment offers for critical review initial returns from an
OA&E effort to develop a logical model of the process by which consumers
utilize the outputs of the Intelligence Community. Confronted with the
problem of interpreting data collected via the Consumer Survey
Questionnaire, one cannot avoid the impression that here is a wealth of
fact and opinion pertinent to the "bottom line" of the intelligence
business, which ought to be of utmost interest to all levels of Community
management if only the bits and pieces can be put together in some
coherent fashion. What we have is a bunch of empirical observations in
search of theoretical structure. The analysis which follows endeavors to
prototype such a structure, without which the Survey "results" can
constitute little more than titillating tidbits of statistical fact, left
to drift aimlessly in a contextual void (poetic, huh!).
Figure I depicts a detailed model of the intelligence utilization
process. RESOURCES are transformed by intelligence SUPPLIERS into OUTPUT,
which can be characterized according to FOCUS (i.e., geographical and
topical concern), KIND (i.e., raw intelligence, in-depth analysis, etc.),
and FORM (i.e., formal publication, especially prepared briefing, etc.).
These outputs are in demand by certain CONSUMERS who have individual
INTERESTS AND PREFERENCES which can also be characterized by focus, kind
and form. A consumer demands intelligence outputs, and is willing to
expend time and effort in their consumption, in the expectation that they
will promote beneficial OUTCOMES to issues of importance he faces. For
this to happen, suppliers must make the right output AVAILABLE to the
consumer, who must then proceed to ACQUIRE, ASSIMILATE and APPLY that
output. An output's journey from the dock of an intelligence supplier to
beneficial application by a consumer against an important issue is clearly
long and tortuous. Numerous opportunities exist for each output to go
astray, or "leak-out" of the system, short of its final destination.
Within the logic of the model, the efficiency with which the
Community performs is represented in the proportion of output which
completes the passage to beneficial application. Only these survivors
contribute directly to the credit side of the Community's ledger. All
other output represents only costs, which take the form of consumers' time
and effort as well as resources put into the intelligence production
process. Any action which increases the probability that a given output
will move in the right direction across any one of the many decision
"switches" depicted in the model will, if nothing else changes, serve to
increase performance efficiency. Since most of these decisions are made
in the consumption sector, either by consuming principals themselves, or
their staff, it is necessary for suppliers to discover the criteria upon
which these decisions are based.
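One way to read the model quantitatively, sketched below under the assumption (not made in the text) that the stages act independently: end-to-end efficiency is the product of per-stage pass probabilities, and a sensitivity check shows how improving any one decision "switch" propagates. The stage names and probabilities are illustrative, not survey results.

```python
# Stages an output must survive, with assumed (illustrative) pass probabilities.
STAGES = {"available": 0.9, "acquired": 0.7, "assimilated": 0.6, "applied": 0.4}

def survival_rate(stages):
    """Proportion of output completing the whole passage, assuming independence."""
    rate = 1.0
    for p in stages.values():
        rate *= p
    return rate

def sensitivity(stages, name, delta=0.05):
    """Gain in end-to-end survival from improving one stage's probability by delta."""
    bumped = dict(stages)
    bumped[name] = min(1.0, bumped[name] + delta)
    return survival_rate(bumped) - survival_rate(stages)

print(round(survival_rate(STAGES), 4))           # 0.1512
print(round(sensitivity(STAGES, "applied"), 4))  # 0.0189
```

Under these invented numbers only about 15 percent of output "completes the passage"; ranking the stages by sensitivity would point to the chokepoints the text describes.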
Intuition suggests a consumer would consider at least three
sequential aspects of an intelligence output in deciding whether or not to
proceed to the next step in the utilization process: RELEVANCE (i.e., how
closely its focus, kind and form match his interests and preferences);
INTRINSIC MERIT (i.e., his perception of the competence, or credibility,
of its content); and COMPLETENESS (i.e., how thoroughly it covers his
interest in the matter at hand). All of this, of course, presumes that
the consumer perceives a NEED for information or analysis about something,
and harbors some EXPECTATIONS that intelligence will contribute toward
that need's gratification.
An abridged version of the utilization process is shown in Figure II,
with the twenty-six questions asked in the Survey Questionnaire
superimposed to indicate that phase of the process to which each question
applies. Two essentially different kinds of questions are identified:
those which call upon the consumer for purely descriptive estimates (e.g.,
How often did...), and those which ask for an evaluation (e.g., How
satisfied were you with...). The plan is to check these "common sense"
results against question associations derived independently via various
statistical techniques, making whatever refinements to the model appear to
be warranted. Figure II depicts the abridged model in a simple yes-no
probability (or decision) tree format. Each decision point (denoted by
a dot) is further modeled as a four-part sequence, as indicated in Figure
IV. Data collected via the Consumer Survey will be assessed in the
context of the mathematical expressions associated with these diagrams
(still to be worked out).
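Question associations "derived independently via various statistical techniques" could begin with a rank correlation between paired responses to two questions. Below is a from-scratch sketch of Spearman's coefficient with tie handling; the coded responses are invented for illustration.

```python
def ranks(xs):
    """Assign average ranks (1-based), giving tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j across the run of values tied with xs[order[i]].
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation between paired responses to two questions."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented paired responses to two questions, coded on ordinal scales.
print(round(spearman([1, 2, 2, 3, 4], [2, 3, 3, 4, 5]), 6))  # 1.0
```

Pairs of questions whose responses correlate strongly would be candidates for attachment to the same node of the model, to be checked against the "common sense" assignments in Figure II.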
[Figures: the correlation of Survey questions with the model, the yes-no
decision tree, and the four-part decision sequence referenced in the text
above; the graphics are illegible in the scanned copy.]
Attachment 2. Methodological Difficulties
In the questionnaire phase of the Consumer Survey, over 130
individuals representing 13 organizations and 4 levels of responsibility
have responded to over 150 queries concerning the frequency, quantity,
quality and usefulness of intelligence output. The raw responses exceed
20,000. Since the volume of data is so large, we must devote a great deal
of effort to selecting statistically sound methods that will allow us to
reduce and summarize the data while permitting us to draw valid
inferences. This task is made more difficult because the survey was not
designed with a clear line of analysis in mind.
Three areas particularly hamper present efforts at interpretation:
- Lack of a clear experimental design: A sound experimental
design, even with masses of data this large, goes a long way
toward making the data simple to analyze. Any biases or
interactions can be identified, accounted for, summarized, and
tested for significance. An experimental design tied to a
specific analytical approach is essential to reduce error and to
negate the effect of extraneous variables. The Consumer Survey
appears to have given only cursory consideration to such
fundamentals as the choice and range of the response variables,
minimizing distortions from extraneous variables, and the
make-up of the consumer population. Nonparametric* cluster
analysis and resistant** classification analysis offer some hope
for making reasonable assumptions and associations, but drawing
inferences will require the utmost caution. We are no more in
control of the data than cattle drivers are in control
of a stampede. It is difficult to tell what direction the
cattle are heading, and estimates of the time of delivery and of
the profits upon shipment are specious.
- Missing data: Most consumers did not answer every question. In
fact, there are more than 4,500 missing data points, about 20% of
the data. A comparison between two questions can be considered
fair only if the groups of consumers responding to each question
are similar to each other and representative of
*Nonparametric (or robust) techniques are relatively insensitive to
data distribution.
**Resistant measures are those which are relatively insensitive to
large variations in small portions of the data.
the larger population. Similarly, the comparison of two
consumers' views must be adjusted to account for
unrepresentativeness in the sample of questions. Missing data
greatly decreases the likelihood that fair comparisons can be
made without some form of adjustment. Tukey's algorithms for
direct and indirect standardization, developed for use in poorly
controlled studies, are good methods for comparing responses in
two or more disproportionate categories.
- Inequitable scales: Once data has been standardized and
legitimately extracted from the questionnaire, it may still be
difficult to compare or summarize. Rating scales for
many of the questions are inherently incompatible. One question
partitions the frequency domain into "Daily", "Weekly", and
"Monthly"; another into "Frequently", "Occasionally", "Rarely";
a third into "Excessive", "Sufficient", "Insufficient". Robust
and resistant measures that adjust for the effect of broad
categories may allow for order-of-magnitude comparisons.
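Tukey's direct standardization, cited above for the missing-data problem, amounts to re-weighting category-specific rates by a common reference mix, so that two groups with different respondent compositions can be compared fairly. A minimal sketch; the agencies, responsibility levels, and rates below are hypothetical.

```python
def direct_standardize(rates, standard_weights):
    """Weighted mean of category rates, using a common reference population mix."""
    total = sum(standard_weights.values())
    return sum(rates[c] * standard_weights[c] / total for c in rates)

# Hypothetical satisfaction rates by responsibility level, for two agencies
# whose respondents are mixed differently across levels.
agency_a = {"policy": 0.80, "staff": 0.50}
agency_b = {"policy": 0.70, "staff": 0.60}
# Reference mix: the composition of the whole survey population.
standard = {"policy": 40, "staff": 60}

print(round(direct_standardize(agency_a, standard), 4))  # 0.62
print(round(direct_standardize(agency_b, standard), 4))  # 0.64
```

Because both standardized figures use the same reference mix, the comparison no longer rewards an agency merely for drawing more of its respondents from an easily satisfied level.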
Each of these problems will complicate our analysis, and the
synergistic effect of all three could invalidate classical correction
techniques. The degree to which these complications will affect our
understanding of the survey and the subsequent evaluation of the
intelligence product is not yet clear, though there is a reasonable
prospect for extracting useful data. Furthermore, the nonparametric and
resistant methods outlined here ensure that, if we can move ahead, it
will be in a generally correct direction, if at the expense of some
precision and sensitivity.
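The resistant and nonparametric measures invoked in this attachment can be illustrated with the Python standard library alone; the ratings below are invented to show a handful of wild responses dragging the mean while the median and a trimmed mean barely move.

```python
import statistics

def trimmed_mean(values, trim=0.2):
    """Resistant location estimate: drop the top and bottom `trim` fraction."""
    vals = sorted(values)
    k = int(len(vals) * trim)
    core = vals[k:len(vals) - k] if k else vals
    return statistics.mean(core)

# Invented ratings with one wild response (40) among ordinary ones.
ratings = [2, 3, 3, 3, 4, 4, 3, 2, 3, 40]

print(statistics.mean(ratings))         # 6.7  (dragged upward by the outlier)
print(statistics.median(ratings))       # 3.0  (resistant)
print(round(trimmed_mean(ratings), 2))  # 3.17 (resistant)
```

With roughly 20% of responses missing and scales this coarse, estimates that a small portion of the data cannot dominate are exactly what the closing paragraph above calls for.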
Attachment 3: Refinement of the Model
The ICS approach to analyzing the Consumer Survey data will combine a
clear theoretical framework with cautious, step-by-step refinement of the
framework.
Ideally, before an experiment or survey is conducted, two issues
should be dealt with rigorously:
- the model: the series of interrelated hypotheses and assumptions
which will be proved or disproved based upon the experimental data;
and
- the experimental design: the structure that allows us to tie the
data collection to the hypotheses in our model.
Since these issues were not integrated into the consumer survey a priori,
we must impose them externally and a posteriori. While a posteriori
designs do not invalidate analyses, they can make them more controversial.
Our analytical approach must therefore use the model and exploratory
statistics to support each other. The attached flow diagram outlines an
iterative procedure we might follow. The model will set forth a
hypothesis. Data exploration will yield indications both of the truth of
our hypothesis and of the direction confirmatory analysis ought to take.
If confirmatory analysis supports our hypothesis, we proceed to the next
hypothesis. If, however, our hypothesis should prove false, we make
the appropriate adjustments to our model, then retest all hypotheses.
Only when we have run through all hypotheses without rejection can we
expect to fully understand the survey data and feel confident about the
inferences we draw from it.
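The procedure described above can be sketched as a loop in which any rejected hypothesis triggers a model adjustment and a full retest. In this minimal sketch, pruning stands in for the more general "adjust the model" step, and a toy support table stands in for real confirmatory statistics.

```python
def refine_model(hypotheses, supported):
    """Test hypotheses in order; prune any the data reject, then retest all."""
    hyps = list(hypotheses)
    restart = True
    while restart:
        restart = False
        for h in hyps:
            if not supported(h):
                hyps.remove(h)   # adjust the model: drop the unsupported hypothesis
                restart = True   # and retest everything against the adjusted model
                break
    return hyps

# Toy support judgments standing in for confirmatory analysis.
data_support = {"H1": True, "H2": False, "H3": True}
print(refine_model(["H1", "H2", "H3"], lambda h: data_support[h]))  # ['H1', 'H3']
```

The loop terminates only when a complete pass rejects nothing, matching the condition above for trusting inferences drawn from the adjusted model.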
Flow Diagram. Suggested Analytical Procedure to Evaluate Consumer
Survey Data (Model = a series of interrelated assumptions and hypotheses):

Data Collection → Build Model (a posteriori) → Examine Hypothesis #1 →
Do data support it? If no, Adjust Model and retest all hypotheses;
if yes, proceed until all hypotheses are tested → ADJUSTED MODEL