(SANITIZED) SOVIET PAPERS ON AUTOMATIC CONTROL (SANITIZED)

Document Number (FOIA)/ESDN (CREST): CIA-RDP80T00246A023500250001-3
Release Decision: RIFPUB
Original Classification: K
Document Page Count: 157
Document Creation Date: January 4, 2017
Document Release Date: December 14, 2012
Sequence Number: 1
Content Type: MISC
Declassified and Approved For Release 2012/12/14: CIA-RDP80T00246A023500250001-3
Signalling and Prediction of Failures in Discrete
Control Devices with Structural Redundancy
M. A. GAVRILOV
In solving problems of providing reliable operation of automatic
control devices, a great deal of attention is devoted to the use of
methods involving the application of structural redundancy.
These include all possible methods of duplicating individual
elements within units, as well as the more common methods of
providing redundancy of all the necessary elements and units on
the whole with the least possible number of additional elements.
The ever-increasing practical use of methods of structural
redundancy is a result of the fact that, in present complex
automatic systems, the control devices require such a large
number of individual elements to perform their functions that
even though the elements may have a very high reliability, the
necessary reliability demanded of the entire device cannot
be achieved.
A number of works is devoted to the question of the
introduction of structural redundancy and the determination
of the minimum number of additional elements necessary to
achieve the prescribed reliability of the device on the whole.
For discrete control devices it is most natural and suitable to
examine the required value of operating reliability of the device
as being prescribed by a certain number of elements which
simultaneously fail during operation while nevertheless per-
mitting the device to perform accurately the control algorithm
assigned to it7.
The author of the present report showed3 that when the
problem is treated in this manner, the determination of the
minimum number of additional internal elements necessary to
achieve a given reliability completely coincides with the task
of determining the minimum number of additional symbols in
the construction of correcting codes with correction of the
corresponding number of errors. In the same article a method
was given for constructing tables of states which provide for
a realization of the structure of a discrete control device having
the required reliability.
The proposed method links the problem of constructing
such a device to the distribution of the states of its internal
elements along the vertices of a many-dimensional cube of
single transitions in such a manner that the number of transitions
(distance) between the vertices, selected for the corresponding
stable states of the device, would be no less than:
D = 2d + 1    (1)
where d is the number of simultaneously failing elements with
which the devices must still exactly perform their control
algorithm.
In differentiating the demands on reliability (namely, separa-
ting them from the viewpoint of the number of simultaneously
failing elements), first, into that for which the device must
accurately perform a given control algorithm and, second, into
that for which it must not provide any actions at its outputs, the
value of the distance between vertices selected for the stable
states must be no less than:
D = 2d + Δ + 1    (2)

where Δ is the number of simultaneously failing elements in
addition to d for which the indicated second condition of
reliable operation of the device must be fulfilled.
In discrete types of devices which have reliability as a result
of structural redundancy, the required reliability is retained
only until the moment of onset of permanent failure of even one
of the elements.
In fact, let the prescribed probability of failure of the entire
device on the whole require that the given control algorithm be
exactly performed with the simultaneous failure of d elements.
Then, with a permanent failure of any one of the elements, the
device will be capable of performing the control algorithm only upon
the simultaneous failure of d − 1 further elements; that is, it will have
a probability of failure which is greater than that prescribed.
Particular importance is therefore devoted to rapid signalling
of failure of individual elements or their prediction, which
permits one to take timely measures to replace the faulty
elements or other measures which will return the probability
of failure of the entire device to its prescribed value. The present
report is devoted to an examination of the fundamental possibil-
ities of providing such signalling or prediction for automatic
control devices designed on the basis of the principles described
by the author3.
First it is shown that the table of states constructed according
to the principles contains all the necessary* information on
failure, both generally for all elements as well as for each of
them individually, and, even more, on the nature of the failures.
Those states of internal elements which correspond to the
stable states of the corresponding table of transitions and which
are distributed, as was pointed out above, in the vertices of a
many-dimensional cube of single transitions with a distance one
from the other of not less than D, are called basic. To each of
these states there must correspond a particular state of outputs
which provides for the performance of the prescribed control
algorithm.
Let the number of inputs of the discrete device be equal to a
and let it be given that, to perform the control algorithm with a
prescribed degree of reliability, that is, in the presence of
simultaneous failure of d internal elements, it is necessary
to have K internal elements. Then each of the basic states may
be characterized by a certain conjunctive member of a Boolean
function of length a + K. In accordance with this the table
of states contains, on the left-hand side, a + K columns of
which a characterizes the states of the inputs and K charac-
terizes the states of the internal elements. The binary number
characterizing the state of the internal elements corresponds
to a particular vertex of the many-dimensional cube, selected
in distributing the given basic state.*
The failure of any element is characterized by a change in
the binary number, corresponding to a given basic state, from
zero to one or from one to zero. The first is called a failure of the
type 0 → 1 and the second a failure of the type 1 → 0. Each such failure
transfers the basic state to an adjacent vertex of the many-
dimensional cube. The simultaneous failure of any two internal
elements transfers the basic state to a vertex two units removed
from the vertex selected for the given basic state; it is adjacent
to any vertex to which the basic state was transferred by the
failure of any one of these two elements.
In order to provide exact performance of the control
algorithm upon the failure of internal elements, each of the
states to which the basic state is transferred upon the failure of
any number of elements within the prescribed limits (that is,
inclusive to d) must correspond, in the right-hand side of the table
of states, to the same state of outputs as the basic state. Therefore,
for each stable state of the table of transitions, for the case of
structural redundancy, there must correspond a particular
combination of states consisting of the basic state and all the
states to which it transfers upon failure of the internal elements.
All of these states are adjacent to one another, forming a certain
set of mutually adjacent states. This set is called the set of the
basic state.
First it is shown that the set of adjacent states, together with
the basic state, may be described by a symmetrical Boolean
function whose active numbers form the natural series of
numbers from K − d to K.
Let there be any state fi0 corresponding to one of the basic
states and let this state be characterized by a row in the table
of states containing K1 zeros and K2 ones, where K1 + K2 = K.
Then, with d = 1, the collection of adjacent states Σfi1 contains
all the states differing from the basic by the replacement of one
variable by its reciprocal. More precisely, there are K of them, of
which K1 correspond to a failure of the type 0 → 1 and K2 to
a failure of the type 1 → 0. It is easy to see that the sum of these
states may be characterized by the symmetrical function:

Σfi1 = S_{K−1}(x̄1, x̄2, ..., x̄K1, xK1+1, xK1+2, ..., xK1+K2)

if the basic state is considered a symmetrical function of those
variables with an active number equal to K, namely:

fi0 = S_K(x̄1, x̄2, ..., x̄K1, xK1+1, xK1+2, ..., xK1+K2)

The sum of the basic and the set of adjacent states is thus
characterized by the symmetrical Boolean function:

fi0 + Σfi1 = S_{K−1, K}(x̄1, x̄2, ..., x̄K1, xK1+1, xK1+2, ..., xK1+K2)
If d = 2, the set of adjacent states consists of all the states
differing from the basic by the replacement of one variable by
its reciprocal, the number of which, as was pointed out, is equal
to C_K^1 = K, and of the states differing from it in two variables.
The number of the latter is obviously equal to C_K^2, and since
each of them differs from the basic by a change in the values of
two variables, their sum Σfi2 corresponds to the symmetrical function:

Σfi2 = S_{K−2}(x̄1, x̄2, ..., x̄K1, xK1+1, xK1+2, ..., xK1+K2)

* All references made below to internal elements with an identical
base pertain to inputs and sensing elements.
The Boolean function characterizing the basic state and the
entire set of adjacent states is thus a symmetrical function of the
type:
fi0 + Σfi1 + Σfi2 = S_{K−2, K−1, K}(x̄1, x̄2, ..., x̄K1, xK1+1, xK1+2, ..., xK1+K2)
It may be proved in an analogous manner that in the general
case, with the simultaneous failure of d internal elements, the
basic state and the set of adjacent states may be characterized
by a symmetrical Boolean function of the type:
S_{K−d, K−d+1, ..., K}(x̄1, x̄2, ..., x̄K1, xK1+1, xK1+2, ..., xK1+K2)
Thus, the class of reliable structures of discrete devices is,
with respect to internal elements, a class described by symmetrical
Boolean functions of a special type, which facilitates their
realization since these functions have been most widely studied
and may be economically realized with the aid of different types
of threshold relay elements, including electromagnetic relay
elements with several windings.
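As an illustration of this characterization, the following sketch (Python; the state encoding and the function name are only illustrative) tests whether an observed state of the K internal elements belongs to the set described by the symmetrical function with active numbers K − d, ..., K: it counts the positions that agree with the basic state and compares the count with the threshold K − d, which is exactly the operation a threshold element performs.

def in_basic_set(state, basic, d):
    """True if `state` differs from `basic` in at most d positions, i.e. the
    number of agreeing positions is at least K - d; this is the membership
    test a threshold element realizes for the symmetrical function with
    active numbers K - d, ..., K of the suitably complemented variables."""
    assert len(state) == len(basic)
    K = len(basic)
    agreements = sum(1 for s, b in zip(state, basic) if s == b)
    return agreements >= K - d

# Basic state with K1 = 2 zeros and K2 = 3 ones (K = 5), tolerating d = 1 failure.
basic = (0, 0, 1, 1, 1)
print(in_basic_set((0, 0, 1, 1, 1), basic, d=1))  # True: the basic state itself
print(in_basic_set((1, 0, 1, 1, 1), basic, d=1))  # True: a single 0 -> 1 failure
print(in_basic_set((1, 0, 1, 0, 1), basic, d=1))  # False: two simultaneous failures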
The basic state is designated as fi and the set of adjacent
states corresponding to it as Ni, assuming that fi + Ni = Fi.
The table of states of a discrete control device consists on
the left-hand side of all sets Fi combined with the corresponding
values of inputs. For each of these sets there corresponds on the
right-hand side of the table, as was pointed out above, a state
of outputs which provides for the performance of the control
algorithm. One more output is added; in its column of the table of
states a zero is entered for each of the basic states and a one
for each of the states which are included in the sets of adjacent
states.
Since the latter corresponds to the failure of any one or to
the simultaneous failure of several internal elements, the appear-
ance of a one at this output occurs only when there is a decrease
in the reliability of operation of the discrete device, and may be
used to signal the presence of a failure.
For example, let there be a discrete device with three inputs
and one output (Figure 1), and let an action equal to one
appear at the latter upon the following sequence of changes of the
states of the inputs:
0 0 0
1 0 0
1 1 0
1 1 1
0 1 1
Any subsequent change of the inputs must return the action
at the output to zero, while the action at the output becomes
equal to one again only upon repetition of the indicated
sequence of changes of the states of the inputs. With any other
sequence of changes of the states of the inputs, the action at the
output must remain equal to zero.
The corresponding table of transitions is given in Table 1.
From it, it may be seen that it is necessary to provide for four stable
states, which is possible with the aid of two internal elements.

Table 1 [table of transitions over the input states 000, 100, 110, 010, 011, 111, 101, 001 and four stable states; the individual entries are garbled in the source]

When it is necessary that the aforementioned discrete device
perform exactly the preassigned control algorithm in the event
of the failure of any one of the internal elements, five
internal elements are required, as seen in Table 5 of reference 3.
The following distribution for the basic states is chosen:
0 0 0 0 0
1 0 1 1 0
0 1 0 1 1
1 1 1 0 1
Then the table of states will have the form shown in Table 2.
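Before turning to Table 2, the chosen distribution can be verified directly. The following sketch (Python, illustrative only) computes the pairwise Hamming distances of the four basic states and confirms that every pair is separated by at least D = 2d + 1 = 3 for d = 1.

from itertools import combinations

basic_states = ["00000", "10110", "01011", "11101"]
d = 1
required = 2 * d + 1  # minimum distance demanded by formula (1)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

for a, b in combinations(basic_states, 2):
    dist = hamming(a, b)
    print(a, b, dist, dist >= required)
# Every pair lies at distance 3 or 4, so the control algorithm is still
# performed exactly under the failure of any single internal element.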
In agreement with what was mentioned above, let us add the
output C0, in the column of which are written zeros in
all the rows of the table of states corresponding to fi and ones
in all the rows corresponding to Ni (Table 3). Then this output
will signal the presence of a failure of any one or several of the
internal elements.
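A sketch of how the column for this output could be tabulated (Python; the helper names are hypothetical): C0 is 0 for a basic state, 1 for any state within distance d of a basic state but distinct from it, and undefined for the unused states.

basic_states = {"00000", "10110", "01011", "11101"}
d = 1

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def c0(state):
    """Signal output C0: 0 for a basic state, 1 for a state of an adjacent
    set Ni, None for an unused state (more than d failed elements)."""
    if state in basic_states:
        return 0
    if any(hamming(state, b) <= d for b in basic_states):
        return 1
    return None

for state in ("00000", "10000", "10110", "11110", "11000"):
    print(state, c0(state))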
Table 2 [table of states for the example: inputs A, B, C on the left, the states of the internal elements X1–X5 grouped into the sets Fi, and the output Z on the right, followed by a listing of the states making up each set Fi; the entries are garbled in the source]
Table 3 [table of states extended with the general signal output C0 and the individual signal outputs C1–C5 for the internal elements X1–X5; the entries are garbled in the source]
If one feeds the action from this output to a counting device
and determines the number of times that actions equal to one
appear at this output during a certain time interval, the readings
may be used for an approximate prediction of the reliability of
operation of the device.
The described principle of signalling and prediction has
significant advantages in the sense that neither the signalling nor
prediction requires the introduction of any additional internal
elements. Usually the performance of these functions relies upon
special units of the discrete device which require elements having,
in principle, a reliability as much as one order of magnitude
greater than the elements which make up the discrete device itself.
In the design examined above, in which the signal outputs
are formed from actuating devices already present in the internal
elements, and assuming that the connections between these
devices and the signal-sensing and predicting devices have
100 per cent reliability, one would expect the signalling of
failure to have, in principle, absolute reliability.
In fact, only two mutually exclusive events may occur:
(a) not one of the internal elements is faulty. Then the actions
equal to one appear at the corresponding operating outputs
and at the signal output the action is equal to zero; (b) failure
of one or several internal elements occurs within the limits of d.
Then an action equal to one appears both at the signal and
operating outputs.
It is noted that the achievement of reliable operation by means of the
introduction of structural redundancy, according to the principles
previously presented by the author3, pertains to the internal
elements of the device as a whole, that is, both to the actuating
and the reacting devices. Therefore, with respect to failures of the
actuating organs, the device retains its ability to perform
exactly the control algorithm upon the failure of either one or,
simultaneously, all of the actuating devices of a given internal
element for the conditions when these failures are all of a
single type.
The described principle of designing signal circuits makes it
possible to provide separately for signalling a number of fail-
ures greater than d, including those lying between the limits of
d + 1 and d + Δ. Additional outputs must be added for this
purpose. This requires that ones be written in the specific rows
in the appropriate columns of the table of states; namely, for
signalling failures of elements within the limits from d + 1 to d + Δ,
in the rows corresponding to failures within these limits, and for
signalling a larger number of failures, in the rows corresponding
to unused states.
It is obvious that the signalling of failures may be not only
general but also specific, that is, provided for each of the internal
elements of the device separately. For this purpose one must have for
each of them an individual output, for which there must be
written in the columns of the table of states ones for all states
differing from the basic by the change in value of the correspond-
ing variable. For example, to signal the failure of element X1 in
the above case, ones must be written in the first row of each of the
sets Ni for the corresponding output.
Table 3 gives the corresponding values of outputs for each
of the internal elements. The realization of such outputs pro-
vides, in the event of faulty elements in the device, for advance
notification as to which of the internal elements is malfunc-
tioning or, with prediction, an approximate indication, per-
mitting timely replacement or adjustment of the element for
proper action.
Obviously it is possible to provide not only for signalling of
failures of individual internal elements but for the separate
signalling of the nature of these failures as well. For example,
in Table 4, for the internal element X1 examined above, are
shown the operating states corresponding to failures of the
type 0 → 1 [Table 4(a)] and failures of the type 1 → 0
[Table 4(b)].
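The decoding implied by Tables 3 and 4 can be sketched as follows (Python, illustrative): a state that differs from a basic state in exactly one position identifies the failed element, and the value that position holds in the basic state gives the type of failure.

basic_states = ["00000", "10110", "01011", "11101"]

def diagnose(state):
    """If `state` differs from some basic state in exactly one position,
    report which element failed and whether the failure is 0 -> 1 or 1 -> 0."""
    for b in basic_states:
        diff = [i for i, (s, x) in enumerate(zip(state, b)) if s != x]
        if len(diff) == 1:
            i = diff[0]
            kind = "0 -> 1" if b[i] == "0" else "1 -> 0"
            return "element X%d failed, type %s" % (i + 1, kind)
    return "no single-element failure relative to a basic state"

print(diagnose("10000"))  # element X1 failed, type 0 -> 1 (relative to 00000)
print(diagnose("00110"))  # element X1 failed, type 1 -> 0 (relative to 10110)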
Table 4 [states of the internal elements X1–X5 for which the output signalling a failure of element X1 takes the value one: (a) failures of the type 0 → 1, (b) failures of the type 1 → 0; the entries are garbled in the source]
In conclusion some of the problems of realizing signalling
and prediction networks are considered. The circuit of each
output in the structure of a multi-cycle discrete device must
contain actuating devices of both internal and sensing relay
elements. The signal circuits must contain actuating devices of
only internal elements. Therefore the rational design of the
structure of a discrete device would be that shown in Figure 2,
namely, a structure in the form of a certain [1,K] terminal net-
work having at its outputs all the functions fi and Ni and
containing the actuating devices of only the internal elements,
and an [M, N] terminal network containing the actuating devices
of only the sensing elements.
As was pointed out above, the functions which realize the
basic states together with the sets of adjacent states are sym-
metrical with the operating numbers from K − d to K, and for
their realization it is suitable to use so-called 'threshold' elements.
When such elements are used it is advantageous to use the
structure of the discrete device having a form shown in Fig-
ure 2(b), where the [1,K] terminal network is based on thresh-
old elements according to the number of basic states. The [M, N]
terminal network has the same make-up as that shown in
Figure 2(a), while the output circuits for signalling and predic-
tion of failures are derived from the outputs of the threshold
elements by means of their series connection (providing an 'and'
operation) and from circuits corresponding to the functions fi.
The latter may also be designed with the aid of threshold elements
having symmetrical functions with the operating number K.
In addition it is noted that, in the case examined above, it is
most rational from the viewpoint of the simplest physical
realization of the structure of a discrete device to choose the
operating levels of the symmetrical functions not from K − d
to K but from 0 to d, while simultaneously taking not the
variables but their inversions.
In conclusion one should note that the method considered
previously by the author3, as well as everything discussed in
this report, refer to the case in which the probability of failure
for all internal elements has a single value, the failures
are symmetrical (that is, the probability of failures of the
type 0 → 1 is identical to that of the type 1 → 0), and, in addition,
failures of individual elements are mutually independent. Con-
ditions differing from these necessitate a somewhat different
approach to determining the minimum number of elements and
the distribution of the states. However, the principles of de-
signing signal circuits and of prediction remain the same, with
the exception that the functions characterizing the basic sets
and the sets of adjacent states may not prove symmetrical.
References
1 VON NEUMANN, J. Probabilistic logics and the synthesis of reliable
organisms from unreliable components. Automata Studies. 1956.
Princeton; Princeton University Press
2 MOORE, E. F. and SHANNON, C. E. Reliable circuits using less
reliable relays. J. Franklin Inst. Vol. 262, No. 3 (1956) 191, 281
3 GAVRILOV, M. A. Structural redundancy and reliability of relay
circuits. Automatic and Remote Control. Vol. 2, p. 838. 1961.
London; Butterworths
4 ZAKROVSKIY, A. D. A method of synthesis of functionally stable
automata. Dok. AN SSSR Vol. 129, No. 4 (1959) 729
5 RAY-CHAUDHURI, D. K. On the construction of minimally re-
dundant reliable system designs. B.S.T.J. Vol. 40, No. 2 (1961) 595
6 ARMSTRONG, D. B. A general method of applying error correction
to synchronous digital systems. B.S.T.J. Vol. 40, No. 2 (1961) 577
7 GAVRILOV, M. A. Basic terminology of automatic control. Auto-
matic and Remote Control. Vol. 2, p. 1052. 1961. London; Butter-
worths
8 GAVRILOV, M. A. The Structural Theory of Relay Devices, Part 3.
Contactless Relay Devices. 1961. Moscow; Publishing House of the
All Union Correspondence Power Engineering Institute
Figure 1

Figure 2 (a), (b)
A Digital Optimal System of Programmed Control and
its Application to the Screw-down Mechanism of a Blooming Mill
S. M. DOMANITSKY, V.V. IMEDADZE and Sh. A. TSINTSADZE
Introduction
Digital servo programmed-control systems are finding con-
tinually wider applications in various branches of industry:
in particular, they are used for the automatic control of screw-
down and other mechanisms of rolling mills, for the control of
various moving parts in control systems for metal-cutting
machine tools, and in a number of other instances. The operation
of such mechanisms normally falls into two stages. In the first
stage the device must choose or compute an optimal programme,
working on the basis of information about the requirements for
the technological process, about the condition of the plant,
about external perturbations, etc. In the second stage the given
programme must be carried out according to an optimal law.
The term 'optimal law' is normally taken to mean the carrying
out of the given displacements with the maximum possible
response speed and with the required accuracy; in addition a
condition is often included covering requirements on control
response quality.
While the function of choosing an optimal programme is
not necessarily inherent in the digital control system itself,
particularly when it operates in a complex installation with a
controlling computer, the function of carrying out the given
displacements according to an optimal law must still be organic-
ally inherent in the digital servo system. If this requirement is
not satisfied, such systems cannot be considered fully efficient,
since for many mechanisms, e. g. manipulator jaws, shears and
rolling-mill pressure screw-down, the response speed and
accuracy determine the productivity and output quality of the
whole line.
A system of programmed control has been developed by the
Institute of Electronics, Automatic and Remote Control of the
Academy of Sciences of the Georgian S. S. R. in cooperation
with the Institute of Automatic and Remote Control of the
U. S. S. R. Academy of Sciences. The basic unit of this system
is a digital optimal servo system which has a number of character-
istic properties. The electric motor drive of the optimal system
works at accelerations that are maximal and constant in magni-
tude. This ensures the greatest response speed and simplifies the
design of the computing part of the programmed-control
system. The required system accuracy is ensured by the digital
form in which the programme is given and executed. The small
quantity of information processed in unit time has made it
possible to use a pulse-counting code rather than a binary one,
which improves the reliability and interference-rejection pro-
perties of the system. The system is entirely built out of ferrite
and transistor elements.
This report gives a general description of the digital optimal
programmed-control system, and also a practical example of its
application to the automatic control of the screw-down mecha-
nism of a blooming mill; this device has passed through
laboratory and factory testing, and by the end of 1962 it was
introduced into experimental service at the Rustavi steelworks.
Design Principles of the Programmed-control System
The basis of the system developed for programmed control
is the optimum principle; the execution of the required displace-
ment takes place at limiting values of the restricted coordinates,
especially of the torque and rotation speed of the motor.
For the case where the drive control system has negligible
inertia, Figure 1 will clarify the above; it shows the law taken
for the variation of the control action F, and the curves of
motor torque Mm and speed n. The figure shows that during
run-up and braking the drive maintains the constant maximum
permissible value of torque developed by the motor. When
executing large displacements, after the motor has reached its
maximum speed nmax it is automatically switched over by the
drive circuit to operate at that constant speed (point MS on
Figure 1). The instant of braking (point T2) is chosen by the
control system such that only a relatively short path remains
to be traversed up to the instant when the speed is reduced to
10-12 per cent of the maximum (point CS). The execution of the
rest of the path to the required low speed is automatically
performed by the drive circuit, and ensures maximum accuracy
in carrying out the programme. Figure 1 shows that the variation
of drive speed with time follows a trapezoidal law. For small
required displacements the motor does not have time to run up
to its maximum speed, and the speed variation follows a
triangular law.
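A rough kinematic sketch of this distinction (Python; the acceleration and maximum-speed figures are invented, and run-up and braking are taken as symmetric for simplicity): the profile is trapezoidal only when the required displacement exceeds the distance needed to run up to the maximum speed and brake again.

def speed_profile(displacement_mm, accel=100.0, v_max=50.0):
    """Classify the optimal speed curve for a required displacement.
    accel -- assumed constant run-up/braking acceleration, mm/s^2 (illustrative)
    v_max -- assumed maximum drive speed, mm/s (illustrative)
    Returns the profile type and the peak speed actually reached."""
    s_runup = v_max ** 2 / (2 * accel)      # distance needed to reach v_max
    if displacement_mm >= 2 * s_runup:      # room to run up, cruise and brake
        return "trapezoidal", v_max
    return "triangular", (displacement_mm * accel) ** 0.5

print(speed_profile(200.0))  # ('trapezoidal', 50.0)
print(speed_profile(10.0))   # ('triangular', ~31.6)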
The above-mentioned properties of the drive allow the
controlling part of the programmed-control system to be
considerably simplified, since in this event it only has to generate
and execute commands for starting the drive in the required
sense, for braking and for stopping the drive.
The design logic is very simple for that part of the control
system whose purpose is to start the drive in the required sense
and to determine the instant for generating the command to
stop the drive; it is suitable both for control of low-power
drives that have no links with appreciable inertia, and also for
control of high-power drives with large inertia. The required
displacement path and sense of rotation of the motor are deter-
mined by comparing the given programme with the actual
position of the controlled mechanism (to give an error signal).
During the execution process the path traversed is continuously
compared with the initial error; the command to stop the drive
is generated at the instant when these two quantities become
equal.
The programme is given in terms not of previously defined
initial errors, but of absolute values of position-coordinates for
the controlled mechanism. This avoids the possibility of errors
accumulating from execution to execution, and also the need for
the controlled mechanism to be resting initially in a closely
defined position.
The part of the system that determines the instant for the
command to start braking has a relatively more complex
design logic, and also takes different forms in systems for con-
trolling the two different types of drive mentioned above.
For systems controlling inertia-free drives the ratio between
the paths traversed on braking Sb and on run-up Sr is a
constant, equal to the ratio between the absolute values of the
accelerations on run-up ar and on braking ab:

Sb / Sr = ar / ab = k    (1)
Taking into account the condition that should be satisfied:

Sr + Sb = A

where A is the required execution path (i.e. the initial error),
one gets

A = Sr (1 + k)    (2)
This expression defines the design logic for the part of the
system determining the instant for the command to start
braking: the path traversed by the drive during the run-up is
continuously multiplied by the fixed quantity 1 + k, and when
the resultant quantity becomes equal to the initial error A, then
the command is generated to start braking.
For large displacement, when the drive has time to run up
to its fixed maximum speed, the full displacement path must
consist of three terms:
A = Sr + Sb + Sms

where Sms is the path traversed at the constant maximum speed.
By using eqn (1) it is found that

A = Sr (1 + k) + Sms    (3)
This expression shows that the device for determining the
instant to start braking should be designed on the following
principle: the path traversed during the run-up is continuously
multiplied by 1 + k; to the value obtained at the instant of
reaching the maximum speed the path traversed at that speed
should continue to be added; and when the resultant quantity
becomes equal to the initial error, then the command should be
generated to start braking. It can readily be seen that expres-
sion (2) is a particular case of expression (3).
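A small simulation of this braking-instant logic (Python; the numerical values are illustrative): run-up pulses are accumulated with the weight 1 + k, constant-speed pulses with the weight 1, and braking is commanded when the accumulated quantity reaches the initial error A, as in expressions (2) and (3).

def braking_instant(A, k, s_runup_limit):
    """Return the path traversed (in millimetre pulses) at which the
    'start braking' command is generated, per A = Sr (1 + k) + Sms.
    A             -- initial error, mm
    k             -- ratio of the braking path to the run-up path
    s_runup_limit -- run-up path at which maximum speed is reached, mm"""
    accumulated = 0.0
    for path in range(1, A + 1):            # one pulse per millimetre traversed
        if path <= s_runup_limit:
            accumulated += 1 + k            # run-up pulses are multiplied by 1 + k
        else:
            accumulated += 1                # constant-speed pulses are added as is
        if accumulated >= A:
            return path
    return A

# Small displacement, expression (2): braking starts after A / (1 + k) mm.
print(braking_instant(A=30, k=0.5, s_runup_limit=100))   # 20
# Large displacement, expression (3): run-up is cut off at s_runup_limit.
print(braking_instant(A=300, k=0.5, s_runup_limit=100))  # 250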
It has been assumed in the above discussion that the drive
accelerations on run-up and braking are constant, and therefore their
ratio k is constant also. But in practice k may vary between
certain limits, which are not actually very wide; hence its maxi-
mum possible value is set into the computing device in question.
With k smaller than the maximum, the last few millimetres of
the path will be executed at a low speed, as has already been
pointed out.
But in those cases where it is particularly vital to minimize
the time of execution, self-adjustment may be introduced into
the control system for the quantity k set into it. It is simplest to
operate the self-adjustment according to the results of the
completed execution, and for the self-adjustment criterion one
should take the minimum both of the path length executed at
low (creep) speed Scs and also of the overrun path Sro beyond
the required point.
The ratio of the path A − Sms to the run-up path is denoted
by y, and suffixes are given to all symbols as follows: 1 to
indicate the previous action and 2 to indicate the next action.
Then one can write

A1 − Sms1 = y1 Sr1

Since the creep speed is small enough one has:

A1 = Sr1 (1 + k) + Scs1 + Sms1

Since the aim of the self-adjustment is to establish the equation

y2 = 1 + k

one gets

Sr1 y2 = A1 − Sms1 − Scs1

whence

y2 = (A1 − Sms1 − Scs1) / Sr1 = y1 (A1 − Sms1 − Scs1) / (A1 − Sms1)

Finally one has:

y2 = y1 (1 − Scs1 / (A1 − Sms1))    (4)
Employing an analogous argument for the case of overrun
beyond the required point, one can write to a sufficient accuracy

y2 = y1 (1 + Sro1 / (A1 − Sms1))    (5)

where Sro1 is the overrun on the previous action.
Expressions (4) and (5) indicate the design logic for devices
to give self-adjustment of the quantity k set into the system.
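A sketch of the corresponding update rule (Python; the numerical values are illustrative): after each action the coefficient y set for the group is corrected from the creep path or the overrun observed on that action, following expressions (4) and (5).

def adjust_gamma(gamma_prev, A_prev, s_ms_prev, s_creep_prev=0.0, s_overrun_prev=0.0):
    """Update the coefficient y after one execution.
    A_prev         -- initial error of the previous action, mm
    s_ms_prev      -- path traversed at maximum speed on that action, mm
    s_creep_prev   -- path executed at creep speed (braking started too early)
    s_overrun_prev -- overrun beyond the required point (braking started too late)"""
    base = A_prev - s_ms_prev
    if s_creep_prev > 0:                    # expression (4): reduce y
        return gamma_prev * (1 - s_creep_prev / base)
    if s_overrun_prev > 0:                  # expression (5): increase y
        return gamma_prev * (1 + s_overrun_prev / base)
    return gamma_prev

print(adjust_gamma(2.5, A_prev=120, s_ms_prev=40, s_creep_prev=6))    # ~2.31
print(adjust_gamma(2.5, A_prev=120, s_ms_prev=40, s_overrun_prev=3))  # ~2.59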
In control systems for high-power drives the presence of
large inertia means that the current, and hence the motor
torque, does not vary in a stepwise manner as shown in Figure 1,
but much more slowly. This is evident from the oscillogram
given in Figure 2, recorded for the motor of a blooming-mill
screw-down mechanism.
For this reason, and also because considerable static loading
is present, the motor speed, while varying with time according to a roughly
triangular law, lags behind the voltage during the run-up, and
after the start of braking there is no instantaneous reduction
in speed; in fact it even goes on increasing for a certain
time. Hence the ratio of the complete path to the run-up path
required by the condition for optimal operation, which is a
constant in the case of relatively low-powered drives with fixed
characteristics, proves here to depend on the magnitude of
the full action path itself, this dependence being of a complex
non-linear nature. The relation y = f (A) has been derived
analytically for the screw-down mechanism of a particular bloom-
ing mill and has then been checked on the mill itself, as shown
in Figure 3. It should be noted that the curves for upwards and
downwards motion are somewhat different, and the graph in
Figure 3 has been drawn from certain averaged-out values.
In the programmed-control system developed for the
blooming-mill screw-down mechanism, the complete range of
possible displacement values has been split into eight groups:
(1) less than 16 mm
(2) 16-32 mm
(3) 32-48 mm
(4) 48-64 mm
(5) 64-96 mm
(6) 96-128 mm
(7) 128-192 mm
(8) greater than 192 mm.
The use of narrower intervals for small A is explained by the
nature of the curve y = f (A), whose slope gradually diminishes.
The choice of the limits for the ranges was determined by the
ease with which the given division could be engineered.
A special device forming part of the controlling part of the
system automatically estimates the value of the initial error
before each action, determines the group into which it falls, and
sets up the mean value of y corresponding to that group. The
execution process itself proceeds similarly to that for the control
of relatively low-powered motors, the nature of it being optimal
in this case also by virtue of the fact that the run-up and braking
accelerations are still constant and correspond to the maximum
permissible torque value. It is only in the first two groups, for
rarely met small displacements, that the excessively wide limits
of variation of y make it practically impossible to combine the
optimum principle with accuracy requirements. Hence for the
first group an action is used that is from start to finish at a lower
speed equal to 10-12 per cent of maximum, while a limited
speed is used for the second group.
If it is necessary to introduce self-adjustment of the quantity y
set into the control system, in this case it is evidently most
desirable to apply the principle of altering the y for a given
group by the same increment at each repetition of a A corre-
sponding to that group. A very complex installation would have
to be designed in order to be able to apply the principle of self-
adjustment of y after the very first action.
Operation Algorithm of the Programmed-control System for the
Screw-down Mechanism of a Blooming Mill
A system designed according to the above principle for con-
trolling high-powered drives has two memory devices for rolling
programmes:
(1) A static programme store (SPS) for long-term storage
of fixed programmes specified according to the technological
set-up for rolling at the works (40 programmes in all, with a
maximum number of passes up to 23).
(2) A variable programme unit (VPU) for programmes that
change often and are not stored in the SPS. There are two
means for recording programmes on the VPU: (a) Manual
recording using a telephone dial, and (b) Automatic recording
of a rolling programme carried out under manual control by
an operator. This allows one to use the system for automatically
rolling a series of roughly identical unconditioned ingots for
which no fixed programme is yet in existence. The operator uses
his experience to roll the first of this series of ingots, the gap
sizes set on the rolls being automatically recorded on the VPU
during the rolling; the remaining ingots of the series are then
rolled according to this recording.
As well as these methods of use, the VPU can also be con-
nected to a computer calculating optimum rolling program-
mes. A single programme containing up to 35 passes may be
recorded on the VPU.
In the developed system the size of the required gap between
the rolls is given in the form of a ten-digit binary number, ex-
pressed in millimetres and equal to the distance of the given
position of the upper roll from the initial point.
The operational algorithms for the systems of control from
the SPS and from the VPU are basically identical; they contain
the following operations or elements:
(1) Choice of operating regime (automatic operation from
SPS or from VPU).
(2) Choice of the necessary programme (when working
from SPS).
(3) Setting up the computing equipment to the initial position.
(4) Feeding in, from the programme store, of information
on the given position for the upper roll.
(5) Determination of the actual position of the upper roll
(interrogative operation) and computation of the initial error
signal.
(6) Determination of the direction of rotation of the
motor.
(7) Setting up the value of the coefficient y.
(8) Start of operation.
(9) Attainment of maximum speed by the drive.
(10) Determination of the instant for braking to start,
generation and execution of the relevant command.
(11) Transition of the drive to creep speed.
(12) Determination of the instant for stopping the drive,
generation and execution of the relevant command.
(13) Transition from the given pass to the next one, all the
operations from (3) to (13) then being repeated.
All the operations are carried out automatically except for
(1) and (2) where the operator has to press the relevant push-
buttons.
The automatic recording of a programme on the VPU with
manual control follows this algorithm:
(1) Choice by the operator of the relevant regime.
(2) Setting of the upper roll to the required position.
(3) Setting up the computing equipment to the initial position.
(4) Interrogation of the measuring equipment to give the
position of the upper roll, and translation of the resulting in-
formation into binary code.
(5) Transmission of the information to the VPU.
(6) On proceeding to the next pass, all the listed operations
from (2) to (5) are repeated.
Operations (3), (4) and (5) are carried out automatically
one after the other.
A programme can be set manually into the VPU using the
telephone dial while the system is in operation from the SPS.
Block Diagram of Programmed-control System
The block diagram of the control system is shown in Figure 4.
One of the fundamental elements of the system is a measuring
unit MU of original design. It fulfils two functions: (1) on
receiving an interrogation command it makes a single deter-
mination of the actual position of the upper roll, and gives out
a number of pulses equal to the gap between the rolls in milli-
metres, and (2) it signals the path traversed, giving out during
the execution process a pulse for every millimetre traversed. In
order to carry out these tasks the MU has two independent
channels, one each for interrogation and for execution. It is
linked to the screw-down mechanism by a synchro transmission.
The interrogation operation takes place when the rolls are
stationary and during the rolling of the metal.
A reversible binary counter RC is used to determine the
magnitude of the initial error signal A, to derive the stop com-
mand and to record the rolling programme on the VPU. For
convenience in the design of the computer section, the counter
determines not A but its complement Ā = C − A. Here C = 1024
is the counter capacity. The demand (D) in the form of a ten-
digit binary number in direct code is introduced into the RC
by a parallel means. Then the interrogate command is sent out,
and the RC receives from the MU a number of pulses corre-
sponding to the actual gap Φ between the rolls expressed in the
complementary code C − Φ. Hence the resultant number
in the counter is D + (C − Φ). Two cases arise:
(1) D > Φ. In this case the upper roll must be displaced
upwards by an amount A = D − Φ. So that the quantity Ā
should be derived in the counter also in this event, the interrogation
pulses are added to D only till the counter is full; from
that instant the switch SW puts the counter into the subtraction
mode, and the arrival after this of the number

(C − Φ) − (C − D) = D − Φ = A

of pulses from the MU gives in the counter the quantity

C − A = Ā

During the execution the counter always operates in the
addition mode. When it receives from the MU a number of
pulses equal to A, it overflows

Ā + A = (C − A) + A = C

and gives a pulse from its last digit that is used in the command
unit CU1 to generate the 'stop' command.
A straightforward logic designed into the command unit
CU1 generates the command 'up' or 'down' according to
whether the binary counter has overflowed or not during the
interrogation process. These commands are passed to the logic
unit for the drive control.
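The complement arithmetic of the reversible counter can be sketched as follows (Python; the function names are hypothetical and the drive interface is omitted). The sketch also covers the case D < Φ, which the description above leaves implicit: the counter then never overflows and already holds C − A at the end of interrogation.

C = 1024  # capacity of the ten-digit reversible counter

def interrogate(D, phi):
    """Model the interrogation phase.
    D   -- demanded roll position, mm (direct code)
    phi -- actual gap between the rolls, mm; the MU sends C - phi pulses.
    Returns (direction, counter contents), the contents being C - A."""
    counter = D
    overflow = False
    for _ in range(C - phi):                # pulses from the measuring unit
        if not overflow:
            counter += 1
            if counter == C:                # counter full: switch SW -> subtraction
                overflow = True
        else:
            counter -= 1
    return ("up" if overflow else "down"), counter

def stop_reached(counter, pulses_travelled):
    """During execution the counter adds one pulse per millimetre and
    overflows ((C - A) + A = C) exactly when the displacement A is done."""
    return counter + pulses_travelled >= C

direction, abar = interrogate(D=600, phi=520)   # roll must go up by A = 80 mm
print(direction, C - abar)                      # 'up' 80
print(stop_reached(abar, 80))                   # True: the stop command is formed
print(interrogate(D=400, phi=520))              # ('down', 904); C - 904 = A = 120 mm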
A transfer register connected to the reversible counter and
repeating all its actions serves for the transfer of the quantity Ā
derived in the counter to the device for determining the coeffi-
cient y and to the non-reversible binary counter BC that serves
to determine the instant for giving the command to start brak-
ing. It is also used when a programme carried out by a rolling
operator is being recorded on the VPU. In this event the re-
versible counter is put into the read-out mode, and then inter-
rogation of the MU is carried out. As a result one obtains in the
counter and the transfer register the magnitude of the gap
between the rolls in direct code:

C − Ā = C − (C − Φ) = Φ

This information is read out of the transfer register and trans-
ferred to the VPU by a parallel means.
The frequency divider FD serves to generate the various
values of the coefficient y. It consists of a normal binary counter
to the cells of various digits of which are connected the inputs
of switches K4-K10. Thus, for example, if the outputs of the
first, third and seventh digits are connected to any switch, then
when 128 pulses arrive at the input of the frequency divider
from the MU, 64 + 16 + 1 = 81 pulses will reach the switch.
If the output of this switch is connected to the input of the
third digit of the braking binary counter, then evidently the
coefficient y = (81/128) × 4 ≈ 2.53.
The role of the device described later for determining the
quantity y consists in opening whichever of the switches K4-K10
will set up the required value of y for a given A.
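The worked example above can be reproduced with a short sketch (Python, illustrative): the divider taps determine how many pulses per 128 mm reach the switch, the braking-counter digit they feed fixes the weight of each pulse, and y is their product divided by 128.

def effective_gamma(divider_taps, braking_digit, period=128):
    """Coefficient y set up by the frequency divider.
    divider_taps  -- digits of the divider whose outputs feed the switch
                     (digit i emits one pulse per 2**i input pulses)
    braking_digit -- digit of the braking counter fed by the switch
                     (a pulse into digit j advances the count by 2**(j - 1))"""
    pulses = sum(period // (2 ** i) for i in divider_taps)
    weight = 2 ** (braking_digit - 1)
    return pulses * weight / period

# Taps on the 1st, 3rd and 7th digits give 64 + 16 + 1 = 81 pulses per 128 mm;
# feeding them into the 3rd digit of the braking counter gives y ~= 2.53.
print(effective_gamma([1, 3, 7], braking_digit=3))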
Since Ā has already been recorded in the braking counter,
when A/y pulses have been received from the MU the
counter becomes full and its last digit gives out a pulse that is
then used in the command unit CU1 for forming the braking
command, realized by the drive control logic unit.
If, before the braking counter becomes full, the drive has
time to run up to its fixed maximum speed, then from that in-
stant all the switches K4-K10 are closed, and by opening
switch K3 the number of pulses originated by the measuring
unit is passed to the first digit of the counter. This carries out
the logic for determining the instant to start braking, as already
described.
The next section describes the devices for automatically
limiting the maximum drive speed and for transition to creep
speed during the braking process. The path length traversed at
creep speed is 3-5 mm.
The system is started up automatically by a photoelectric
relay system at the instant when the metal leaves the rolls. But
because the motor has a delay in starting of 0.6 sec, a correspond-
ing advance must be introduced. This is achieved by a special
assembly that indirectly measures the speed of the metal and
generates a pulse to start the system calculated so that the drive
starts at the instant when the metal leaves the rolls. This as-
sembly is not shown in Figure 4.
The system also contains a number of elements that carry
out various logical functions required for the sequencing of the
operations, for their automation, etc. In particular, a photo-
electric relay unit is mounted on the mill for automatic drive
starting.
The system provides for checking of the most critical
operation, the stopping of the drive at the correct time. For
this purpose the reversible counter is duplicated. The outputs
of both counters are fed to a special control logic unit. If over-
flow pulses are not generated simultaneously by both counters,
this unit gives out both a stop pulse and a fault signalling pulse;
if both overflow pulses arrive at once, it generates only a stop
pulse.
Certain Basic Elements of the Control System
(1) Electric Motor Drive and its Control
The electric motor drive for the blooming-mill screws
is designed as a generator-motor system. A 375 kW d.c. generator
powers the two 180 kW screw-down motors connected in
series, and is controlled by a 4.5 kW amplidyne.
The drive must provide for the execution of a prescribed
path according to the optimal speed curves given in Figure 1.
In this connection the following requirements are placed on
the drive:
(1) In order to obtain the maximum response speed, the
motor current must be held equal to the maximum permissible
during run-up and braking.
(2) Limitation of the maximum rotation speed of the motor
is necessary.
(3) During the braking process an automatic transition must
be ensured to the creep speed n = nmin.
(4) Heavy braking is necessary when the drive is finally
stopped from creep speed.
The layout of a drive satisfying these requirements is shown
in Figure 5. The control winding W1 of the amplidyne is con-
nected to the output of a three-state semiconductor trigger circuit
which receives control pulses from the drive control logic unit.
The run-up and braking of the drive take place at an invariable
value of motor current Im = (Im)max, which is achieved by the
use of strong negative current feedback in the armature circuit
(feedback winding W4), with a feedback gain of 8-10. For
large error signals, when the voltage at the generator terminals
reaches its maximum value, depending on its polarity one of the
stabilovolts ST strikes. This causes the maximum-speed relay
RMS to operate and apply the generator voltage to winding W3.
The current flowing in this winding sets up a negative feedback
that limits the generator voltage and consequently the motor
rotation speed.
The creep speed is obtained by means of the twin-winding
relay RCS. This relay is operated at the start of the execution
by one of the windings being energized. At the start of braking
this winding is de-energized, and the relay is held on only by the
action of the second winding, which is energized from the
generator output voltage; as this voltage falls in consequence
of the braking process, the relay drops out and causes a strong
negative feedback to be applied, which together with the change
in the polarity of the current in the amplidyne control winding
sets up a speed that is about 10 per cent of the maximum.
Efficient braking from this speed on stopping is achieved by the
self-damping of the generator on the removal of the control
action from the control winding W1.
As stated above, the drive control equipment consists of a
three-state power trigger circuit whose output is connected
through a balanced semiconductor amplifier to the amplidyne
control winding Wl.
In order to obtain the required variation in the control
action, pulses must be supplied to the appropriate inputs of the
trigger circuit. The order of application of the pulses depends on
the direction in which the upper roll has to be displaced; it is
developed by the drive control logic unit.
Signals are fed by six channels to the input of this unit from
the digital control system and the drive system. These commands
are as follows: selected direction of motion (up or down),
clearance to start, braking, transition to creep speed, and stop.
From these commands the logic circuit derives the signals
that go to the appropriate trigger circuit inputs.
(2) Static Programme Store
The static programme store SPS (Figure 6) is a matrix
memory device in which binary numbers forming a programme
are recorded by means of networks of semiconductor diodes.
It consists of a distributor, a programme unit and a numerical
unit.
The distributor (see the bottom line of Figure 6) sequentially
sends out a read pulse to the programme unit (second line of
Figure 6) in accordance with the sequence of passes making up
each programme; it is a device without moving parts that
switches from pass to pass. The maximum number of passes in
the programmes is 23, and so the distributor has 23 digits
(23 ferrite-transistor cells).
The programme unit consists of 23 ferrocart programme
cores, each of which has one primary winding connected to the
distributor and 40 secondary windings (one for each fixed pro-
gramme). The secondary windings of all the cores for a given
programme are all connected at one end to a common bus,
while the other ends go to the diode numerical matrices. Selec-
tion of the required programme is made by connecting one or
other of these secondary-winding bus-bars to the output bus
(+ on the diagram). Thus the operator needs only to press a
button on the control desk to select the required programme.
(3) Variable Programme Unit
A fundamental element of the VPU is its store ST, con-
sisting of a ferrite matrix on which 35 ten-digit numbers can be
recorded. Each core of the matrix has four windings: erase
(reset), carry-in of numbers, write (also serving as read-out
winding), and output (Figure 7). The carry-in and output
windings of the ferrites for the same digit are connected in
series (35 ferrites each); the write and read-out windings of all
the ferrites for a given number (pass) are also connected in
series (10 ferrites each).
The operation of the store is based on the well-known
Cambridge principle. But the design logic and circuit are
original and very simple.
For the recording of a number in the store, pulses are
applied to the input shaping circuits for the appropriate digits.
At the same time an activation pulse is applied to the distribu-
tor, 35 of whose cells have their outputs connected to the corre-
sponding write and read-out windings. Figure 8 shows the form
of the pulses generated by the distributor and shaping circuits,
and also their relative timing. The sense of the current corre-
sponding to the top part of the pulses is for read-out. Hence,
as is clearly seen from Figure 8, the superposition of the two
magnetizations on the ferrite at the start only confirms the
absence of recording, while later on (when the bottom parts of
the pulses in Figure 8 coincide) a 1 is written.
When reading out numbers, an activation pulse is supplied
each time to the distributor, and the pulse coming from it
performs the read-out. So as to regenerate the read-out number,
feedback is taken from the output shaping circuit of each digit
to the input shaping circuit for the same digit, resulting in the
appearance of a pulse from the input shaper almost at the same
instant as a register pulse appears; but the relation of the
initial parts of these pulses is such that this attenuates the read-
out pulse only negligibly. A coincidence of the magnetizations
(the lower halves in Figure 8) brings about regeneration of the
number, i.e. its re-recording. By this means the recorded pro-
gramme may be reproduced a practically unlimited number
of times.
As already pointed out, the recording of a rolling programme
carried out by an operator under manual control is achieved by
means of the reversible counter included in the system. There is
a special original device for the manual recording of programmes.
A number is dialled on a somewhat modified telephone dial,
taking its digits in sequence one after the other. To record the
number 253, for example, 2, 5 and 3 are dialled in sequence,
while to record 72 one dials 0, 7 and 2 in sequence, etc. The
dial has two contact systems: one for numerical pulses and one
for control pulses, which are fed out on separate channels. The
dial is designed so that when one dials zero only two control
pulses are generated (one each for clockwise and anticlockwise
rotation of the dial); when one dials 1 there is one control pulse,
one number pulse and then another control pulse, etc. The
control pulses thus generated serve to activate the six-digit dis-
tributor controlling the recording system. The outputs of its
cells control switches in such a way that the first switch (hun-
dreds) is open at the instant when the number pulses come
through for the first digit of the number to be recorded, the
second switch (tens) for the second-digit pulses, etc. These
pulses are passed from the switches to a binary counter that
serves to form the binary code for the number (Figure 9).
This device works on the principle of introducing pulses
into the digits of the binary counter in such a way that the
sum of their values equals the number of pulses received.
For example, since the number 100 has the form 1100100 in
binary code, for every pulse arriving from the first switch
(hundreds) one pulse is put into the third, sixth and seventh
digits of the binary counter; so as to avoid disruption of the
computation in the event of digits being carried from lower to
higher columns, these pulses are supplied not simultaneously to
all three of the digits mentioned, but spaced by a time delay
which is enough to allow the carry to take place.
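A sketch of this conversion (Python; the digit assignments for the tens weight are chosen the same way as in the example for 100, and the carry delays of the hardware are not modelled): each dial pulse adds its decimal weight to the binary counter by pulsing the digits whose place values make up that weight.

WEIGHT_DIGITS = {
    100: [3, 6, 7],   # 100 = 4 + 32 + 64 -> pulses into the 3rd, 6th and 7th digits
    10:  [2, 4],      # 10  = 2 + 8       -> pulses into the 2nd and 4th digits
    1:   [1],         # 1   = 1           -> pulse into the 1st digit
}

def dial_to_binary(hundreds, tens, units, n_digits=10):
    """Accumulate dial pulses into a binary counter, one weighted group per pulse."""
    counter = 0
    for weight, pulses in ((100, hundreds), (10, tens), (1, units)):
        for _ in range(pulses):
            for digit in WEIGHT_DIGITS[weight]:
                counter += 1 << (digit - 1)   # carries propagate inside the counter
    return format(counter % (1 << n_digits), "0%db" % n_digits)

print(dial_to_binary(2, 5, 3))  # 253 -> '0011111101'
print(dial_to_binary(0, 7, 2))  # 72  -> '0001001000'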
After the dialling of the third figure is complete, the final
control pulse causes a pulse to be sent out from the output of
the sixth cell of the distributor, which in its turn brings about
the transfer into the store of the number formed in the counter,
followed by the preparation once more of the first cell of the
distributor. This makes it possible to dial numbers continuously
one after the other. The correctness of the dialling may be
checked on a visual indicator of dialled numbers, which uses
three dekatrons. The operator has the facility of erasing a
number when necessary by pressing a button (shifting the dis-
tributor backwards by one cell), and of then recording it again.
(4) Coefficient Selection Unit
Figure 10 shows the block diagram of this unit. As has
already been stated, the quantity y is chosen in accordance with
the value of A, while the whole range of variation of A is divided
into eight groups.
The unit contains three basic elements:
(1) Ferrite assembly (top line in Figure 10). These ferrite
cores serve for the estimation of the value of A, and are con-
nected into the lines for transferring Ā from the reversible
counter to the braking counter. There are eight of them alto-
gether, and on them are written the eight highest digits of Ā.
(2) Switch assembly (middle line in Figure 10). The switches
serve to control the lines for various values of y. Since, as was
pointed out, the execution for one group of A (from 0 to 16mm)
is carried out from start to finish at creep speed, the number of
switches is one less than the number of groups, i.e., seven.
(3) Transformer assembly (bottom line in Figure 10). The
transformers have ferrocart cores, and each serves for the
setting up of a certain value of y. For this purpose each core has
several primary windings, to which are connected the outputs
of those digits of the frequency-divider that are required to
give the necessary value of y. The secondary (output) winding
of the core is connected to the input of the corresponding switch.
The estimation of the value of A is based on the following
principle:
Since what is written on the ferrite cores is not the value of
A itself but its complement Ā w.r.t. 1024, the following picture
is obtained for various groups of values of A:
(1) A < 16: 1's are written in all the digits from the fifth
upwards; one or more of the cores for the first four digits
contains a 0.
(2) 16 < A < 32: 1's are written in all the digits from the
sixth upwards; the core for the fifth digit contains a 0.
(3) 32 < A < 64: 1's are written in all the digits from the
seventh upwards; the core for the sixth digit contains a 0.
(4) 64 < A < 128: 1's are written in all the digits from the
eighth upwards; the core for the seventh digit contains a 0.
(5) 128 < A: one of the digits from the eighth upwards
contains a 0.
Making use of the above, the device is designed in the
following manner.
Immediately after A has been recorded on the ferrite cores,
it is read out with polarity such that those cores containing 0's
give pulses in their output windings. After amplification by
triodes, these pulses are passed to windings for opening switches
corresponding to these cores. So that several switches should
not open all at once, the opening winding for each is connected
in series with the shut-off windings for all the switches corre-
sponding to cores of lower digits. Thus each time only one
switch opens, corresponding to the core of the highest digit in
which no 1 is written.
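As a sketch of this selection logic, the following Python fragment (an assumption-laden illustration, not the actual winding circuitry) reproduces the effect of the series-connected opening and shut-off windings: only the switch belonging to the highest digit of the complement that holds a 0 is allowed to open.

# Selecting the speed group from the complement of A written on the cores.
def select_group(A, word_length=10):
    complement = (1 << word_length) - A          # complement of A w.r.t. 1024
    zero_digits = [d for d in range(1, word_length + 1)
                   if not complement >> (d - 1) & 1]
    return max(zero_digits) if zero_digits else None   # highest digit holding a 0

# A < 16 leaves a 0 only among the first four digits; larger A moves the
# highest 0 upwards, one group at a time.
print(select_group(10))    # a digit from 1..4 -> creep-speed group (A < 16)
print(select_group(20))    # 5                 -> group 16 <= A < 32
print(select_group(40))    # 6                 -> group 32 <= A < 64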
To consider the means by which certain of the above inter-
vals are split into two, the interval 32 < A < 64 is taken as an
example. This is split into the two parts (1) 32 < A < 48
and (2) 48 < A < 64.
In addition to the conditions for this interval, an extra one
will exist for the first half, namely the presence of a 1 in the fifth digit;
while for the second half it will be the absence of a 1 in the
fifth digit. In order to control the satisfaction of these con-
ditions, an extra ferrite core is connected in the transfer line
for the fifth digit, and on read-out it gives a pulse in its output
winding when a 1 is present on it. This pulse is amplified by a
triode and closes a switch corresponding to the band 48 < A < 64.
M⁺_ik cos φ⁺_ik + M⁻_ik cos φ⁻_ik > 0,   k = 1, ..., M   (58)
then the minimization of the quantity (57) has roughly the same
physical significance (see § 9) as the minimization of quantity (51)
in quasi-stationary modes, and thus the result of normal opera-
tion of the self-adjustment network should be taken as acceptable.
But the more strongly the self-adjustment mode differs from the
quasi-stationary one, the larger are the angles |φ⁺_ik| and |φ⁻_ik| and
the smaller are the coefficients in (58) (in particular, the quantity
M⁺_ik cos φ⁺_ik + M⁻_ik cos φ⁻_ik), since over a substantial range of
frequencies the quantities M are hardly much different from unity.
As a result the quality factor for the X_i tracking system falls, while
for |φ±_ik| > π/2 the coefficient (58) becomes negative and minimization
of the weighted sum (57) of squares |W₂(jω_k) − W₁(jω_k)|² loses its
evident sense, or inversion of the self-adjusting servo-system even
occurs (particularly if all the coefficients (58) become negative, which
may happen if a strong signal θ(t) is applied at a frequency
close to a search frequency Ω_i; see Example 3).
Example 3. Let the adjustable filter in System A be a link
with transfer function W₂(p) = k Q₂⁻¹(p) = k(p² + 2ap + ω₀²)⁻¹,
the adjustment parameter being the gain k, modulated by a
signal Δk·cos Ωt, while to the input of the system is applied
the test action θ(t) = θ₀ cos(ωt + α).
Writing out the general expressions for the quantities
M± e^{jφ±} = Q₂(jω) Q₂⁻¹[j(ω ± Ω)]   (59)
it can readily be established that condition (58) is clearly violated
for the system considered if ω₀ is small (ω₀ → 0), while the
frequencies Ω and ω of the search and test signals coincide
and exceed ω₀, since then
M⁺ cos φ⁺ → (Ω² + 2a²)[4(Ω² + a²)]⁻¹
M⁻ cos φ⁻ → −Ω² ω₀⁻²
M⁺ sin φ⁺ → −aΩ[4(Ω² + a²)]⁻¹
M⁻ sin φ⁻ → 2aΩ ω₀⁻²
and the coefficient M⁻ cos φ⁻ becomes greater in modulus in (58)
than the coefficient M⁺ cos φ⁺.
It is observed that since in this case the quantities (59) are
independent of k, the error associated with the term E11 in
eqns (45)-(49) proves equal to zero [this situation will occur
every time that the adjustable parameters of filter Φ₂ appear
only in the numerator of the transfer function W₂(p)].
Considering eqns (40), (41) and (28), it is noted that to
increase the capability of the self-adjusting circuit for operating
in non-quasi-stationary conditions one may use in the phase
discriminators the reference voltages
m_ic cos Ω_i t + m_is sin Ω_i t   (60)
which are phase-shifted with respect to the search modulation signal
μA_i cos Ω_i t   (61)
In this case the processes of self-adjustment will proceed in
accordance with the equations
Ẋ_i = k_i W₀_i(D)[E_t(ε²(t) m_ic cos Ω_i t) + E_t(ε²(t) m_is sin Ω_i t)]   (62)
where E_t(ε²(t) m_ic cos Ω_i t) is determined by formulae (46)-(49),
while
1
Et (e2 (t) m5 sin Qt) = ?4 ?Aim (E(j) + ET + ET) (63)
M
E6 W(Jw2 (Mik sin 9,1,, -M i+k sin 9,1)
(64)
ET = - EO w 00.012 ?a (Wilk sin (PiT, -Mil sin (Pi1) (65)
k= 0 aXi
= ? Eiw OW012 2(p (Wk) (Mi+k COS - Mt; cos 9)
k= ax,
(66)
Here the necessary condition (58) for normal operation of
the self-adjusting circuits is replaced by the condition
M⁺_ik(m_ic cos φ⁺_ik − m_is sin φ⁺_ik) + M⁻_ik(m_ic cos φ⁻_ik + m_is sin φ⁻_ik) > 0   (67)
which may prove much more favourable given a suitable choice
of the phase of the voltage (60), i.e. of the quantities m_ic and
m_is; the actual result of the undistorted forced process of self-
adjustment comes out in this case to be the minimization of the
quantity
Σ_k θ_k² |W(jω_k)|² [M⁺_ik(m_ic cos φ⁺_ik − m_is sin φ⁺_ik) + M⁻_ik(m_ic cos φ⁻_ik + m_is sin φ⁻_ik)]   (68)
In choosing the phase of the reference voltage (60) one can
aim not only at increasing the coefficient (67) but also at the
same time decreasing the quantity
|M⁺_ik(m_ic sin φ⁺_ik + m_is cos φ⁺_ik) + M⁻_ik(m_ic sin φ⁻_ik + m_is cos φ⁻_ik)|   (69)
i.e. (see (62), (66) and (49)) the error associated with the term
ET. In practice, as a rule, it proves difficult to achieve an
accurately optimum phase-shift (e.g. in the sense of a minimum
ratio between the quantities (69) and (67)) between the signals (60)
and (61), since by virtue of (29) this shift depends not only
on the drifting parameters of filter Φ₂ (a similar situation
arises¹¹ also in extremal control systems), but also on the form
of the test signal θ(t). Nevertheless, by using a priori information
on the operating conditions of the system, or by carrying out
a running analysis of the signal 0 (t) and the results of system
operation, in a number of cases one can evidently achieve an
improvement in the dynamic properties of the given self-adjusting
system relatively simply by using reference voltages of the form
in (60) that only approximate to the optimum. In order to
increase the stability of automatic phase-shift optimization
between voltages (60) and (61) one can correlate the search and
test signals in frequency [phase relations between the signals
θ(t) and μA_i cos Ωt have no effect on the quantities (47)-(49),
(64)-(66)].
The self-adjusting system whose phase discriminators use
reference voltages of the general type given in (60) will
be denoted by System B.
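The phase-discriminator operation underlying (60)-(62) can be sketched numerically as follows (a minimal Python illustration with a synthetic squared-error response; the gain g and lag phi are assumptions, not values from the paper). The control signal of a channel is the time average of the squared error multiplied by a reference that may be phase-shifted relative to the search modulation, and a suitably shifted reference gives a larger useful output.

# Sketch of synchronous detection of the search-frequency component of eps^2.
import numpy as np

Omega = 5.0                                   # search frequency of the channel
t = np.linspace(0.0, 200.0, 200001)

# Assume the squared error responds to the search modulation with gain g, lag phi.
g, phi = 0.3, 1.1
eps2 = 1.0 + g * np.cos(Omega * t - phi) \
       + 0.05 * np.random.default_rng(0).standard_normal(t.size)

def control_signal(m_c, m_s):
    reference = m_c * np.cos(Omega * t) + m_s * np.sin(Omega * t)
    return np.mean(eps2 * reference)          # ideal time averaging, cf. eqn (71)

print(control_signal(1.0, 0.0))                       # in-phase reference: ~ (g/2) cos(phi)
print(control_signal(np.cos(phi), np.sin(phi)))       # shifted reference:  ~ g/2 (larger)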
§ 11. The equations of motion (45)-(49) and (62)-(66) were
derived under the assumption that the frequencies of the search
modulation and the harmonic components of the test signal all
satisfy the conditions (44). If these conditions do not hold, then
the voltages E_t(ε²(t) m_ic cos Ω_i t) and E_t(ε²(t) m_is sin Ω_i t),
together with the signals (47)-(49) and (64)-(66), will also
contain other components, which generally speaking will
introduce certain additional distortions into the self-adjustment
process. Equations (40) and (41) enable one effectively to
calculate all these parasitic components of the control signal in
the self-adjusting network.
For example, let only one of the conditions (44) be disturbed:
let the frequency of the pth harmonic of the test signal coincide
with the search frequency in the ith self-adjusting channel,
i.e. ω_p = Ω_i. According to (41), in this case the signal
E_t(ε²(t) m_ic cos Ω_i t) will contain an additional term ET,
generated by the presence in ekt of harmonics with frequencies
col, -I- co/ - S20 (for k = 0, = p, q = i and k = p, = 0, q = i)
and w, col - (Q, S20) (for k = = p, s =q = i):
ecf)=. --2-1 mic Et [OO 1W(0) W (ftop)1(ep.+ ecp)
1
02 I W ( jcopl 2e] cos Qt
2 P
=n14,00 p 1W (0)W Ow p)I cos p
1
+.,71 Mie0 p2 IW 1)1 x [a(w)cos 2 ,9p
- b i (con) sin 2 p] (70)
where ϑ_p = ϑ(ω_p), a_i(ω_p) and b_i(ω_p) are defined respectively
by eqns (36), (28) and (29) with ω = ω_p.
For the system considered in Example 3, the first term in
expression (70) is zero (since θ₀ = 0), while the second may be
calculated given the frequency characteristic of W₁(jω). Even
in this actual example it is, on the whole, difficult to judge what
effect the use of a reference voltage of (60) type will have on the
additional error in question. One can evidently achieve a stable
reduction in this error, or even its conversion into a useful
signal, provided one correlates the search and test signals not
only in frequency but also in phase, so as to limit unforeseen
variations in the angle ϑ_p.
IV. Calculation of Self-adjustment Operating Modes where the
Test Action is a Stationary Random Process
§ 12. It is assumed for simplicity that the filters W₀_i(D) in System A
consist of elements which carry out the ideal averaging of
the quantity m_ic ε²(t) cos Ω_i t in time over the interval (t − T₀, t):
W₀_i(D){ε²(t) m_ic cos Ω_i t} = T₀⁻¹ ∫ from t−T₀ to t of ε²(τ) m_ic cos Ω_i τ dτ   (71)
and that the test signal θ(t) is a time-function whose 'shortened'
spectrum θ_T(jω, t) depends only slightly on the instant of ob-
servation t and is located in the region of quite high frequencies:
θ_T(jω, t) ≈ θ_T(jω),  θ_T(jω) = 0 for ω < ω*   (72)
ω* − 2Ω_N > T₀⁻¹,  Ω_i − Ω_{i−1} > T₀⁻¹   (Ω_i > Ω_{i−1}, i = 1, ..., N)   (73)
Every actual filter W(jω) = W₁(jω) − W₂(jω) has a finite
cut-off frequency ω_cp (it is further considered that ω* < ω_cp),
so that in accordance with (19), (32), (34) and (71)-(73) the
equations for the process of self-adjustment of the qth para-
meter may be put into the form
Ẋ_q = k_q π⁻² ∫∫ dω dν · T₀⁻¹ ∫ from t−T₀ to t of G_q(ω, ν, τ) dτ   (74)
G_q(ω, ν, τ) = D(ω, ν)[cos{(ω − ν)τ + ϑ_ω − ϑ_ν} + cos{(ω + ν)τ + ϑ_ω + ϑ_ν}] cos Ω_q τ
where D(ω, ν), ϑ_ω and ϑ_ν are defined by eqns (35), (36), (28)
and (29). The quantity G_q(ω, ν, τ) is a sum of harmonic com-
ponents with frequencies Q equal to
ω + ν, ω − ν  and  ω ± ν ± (Ω_s + Ω_q),  s = 1, ..., N   (75)
while the integral T₀⁻¹ ∫ from t−T₀ to t of G_q(ω, ν, τ) dτ is a weighted sum of integrals of the type
T₀⁻¹ ∫ from t−T₀ to t of cos(Qτ + ϑ) dτ   (76)
where Q are the frequencies in (75) and ϑ are angles of the form ϑ_ω ± ϑ_ν ± π/2.
On rewriting the integral (76),
it is observed that in accordance with a known integral representation
of the c5 function and for large enough averaging time intervals To
of the filter W,0 (D), the approximate equation
St
COS (at +19) di COS [gi
t - To
1
?2 To) + 91(5(Q)
= cos ? 6 (52)
(78)
is true; using this, eqn (74) can be readily got into the form
k q."-. T ? To-1 ? k,? It - 1 G,(co)
to.
ii
it
+? E Ai [g, (w, - )+ gi(co,v7)+ gq (co, v-)] dco (79)
2 i=i q `q qq
where
G,(w)= T (i(o) OT{J(a) + fi,)} W (jw) W ti (co + Qq)}1
? T-1 cos (S?, - 9.?)
(80)
gi(co,v)=-1 T _ 1 IGT (j(0) OT (iv) W Ow) W (iv)i
2
? [Vic (co, v) cos (9.- 90- Vis (co, v) sin (9,0- 9,)] (81)
(82)
vi, = co +52i+ fig , vig =co+ Pi fig! (83)
(73) Vie (co, v)= ai(w) I w(i(0)1-1-i-ai(v) I w(iv)I
vis (co, b4(co) I F..17(./(0)1- 1 + b1(v) I w(iv)1
1
[the quantities a_i(ω), b_i(ω) and ϑ_ω being defined by eqns (28),
(29) and (36), and l being the memory of the filter W(jω); see § 3].
Considering the function (5) as a typical realization of a station-
ary random process {θ(t)} and performing averaging over
realizations, one can go from eqns (79)-(83) to equations in
the mean (ensemble-averaged) values X̄_i of the adjustable para-
meters. If here the interval T is taken large enough, then in the
right-hand sides of these equations one may replace the quantities
T⁻¹|θ_T(jω) θ_T(jν)| by characteristics like the mutual spectral
power densities¹² of the process {θ(t)} and certain random
processes obtained from {θ(t)} by simple transformations that
do not infringe the stationarity condition.
This paper does not deal with the more detailed analysis of
the general case, but gives the results of the calculation for the
quasi-stationary mode of self-adjustment, i.e. the mode in which
ω* > 2Ω_N   (Ω_i > Ω_{i−1}, i = 1, ..., N)   (84)
with a test signal of white-noise type:
T⁻¹|θ_T(jω)|² = 0 for ω < ω*;  T⁻¹|θ_T(jω)|² = G₀ for ω > ω*   (85)
Since eqns (30) and (31) are satisfied in quasi-stationary
modes, and furthermore ω > ω* ≫ 2Ω_N, one may
neglect the terms V_is(ω, ν) sin(ϑ_ω − ϑ_ν) in (81), and so, putting
θ_T(jω) ≈ θ_T{j(ω ± 2Ω_N)} and W(jω) ≈ W{j(ω ± 2Ω_N)}, the
following equations for the self-adjustment process are arrived at:
a IC?9IW
Xe kff 1W( jco)I2 do) + itAg axq wa 0(42 dW
co.
EIW ( jco)12 dcol
z _1
1 N a /.9
i#q
kg? =T ? TO-1 ? kr ? ?1 ? mge? Go
The following conclusions are evident from (86):
(1) In the mode of operation (84), (85) studied, minimization
of the quantity
∫ |W(jω)|² dω   (87)
may be naturally considered the ideal result of the self-adjust-
ment process.
(2) The control signal for the qth self-adjusting network
contains derivatives of the quantity (87) being minimized, not
only w.r.t. X_q, but also w.r.t. all the other adjustable parameters
X_i, so that one has not got a pure gradient system of extremal
control.
(3) The equilibrium condition Ẋ_q = 0 (q = 1, ..., N) for the
system (86) is characterized, for A_i = A (i = 1, ..., N), by the
relations
fCO,
1W(j co)I2 d ? IW(j co)I2 do) (88)
(i=1, N)
from which it can be seen that the more pronounced the extremal
nature of the dependence of quantity (87) on the parameters Xi,
and the less essentially attainable the minimum of this quantity,
the closer will this condition be to the ideal result of self-adjust-
ment.
(4) If quite large differences arise rapidly between the
frequency characteristics W₁(jω) and W₂(jω), the non-negative
term (87) on the right-hand side of eqn (86) will increase so
much that the operation of the self-adjusting network will be
reduced merely to increasing the parameter X_q (Ẋ_q > 0), and
condition.
Finally it is noted that the equations given by Krasovskiy2
for quasi-stationary self-adjustment with a white-noise test
signal contain only terms analogous to the second term in the
right-hand side of equation (86).
The author expresses his gratitude to Ye. A. Barbashin and
I. N. Pechorina for their discussion of this paper.
References
1 KRASOVSKIY, A. A. Self-adjusting automatic control systems.
Automatic Control and Computer Engineering. 1961. No. 4.
Mashgiz
2 KRASOVSKIY, A. A. The dynamics of continuous automatic control
systems with extremal self-adjustment of the correcting devices.
Automatic and Remote Control. 1960. London; Butterworths
3 KAZAKOV, I. YE. The dynamics of self-adjusting systems with
extremal continuous adjustment of the correcting networks in the
presence of random perturbations. Automat. Telemech. 21,
No. 11(1960)
4 YARYGIN, V. N. Some problems in the design of systems with
extremally self-adjusting correcting devices. Automat. Telemech.
22, No. 1 (1961)
5 TAYLOR, W. K. An experimental control system with continuous
automatic optimization. Automatic and Remote Control. 1960.
London; Butterworths
6 MARGOLIS, M., and LEONDES, C. T. On the theory of self-adjusting
control systems, the learning model method. Automatic and Remote
Control. 1960. London; Butterworths
7 ITSKHOKI, YA. S. Non-Linear Radio Engineering. 1955. Sovetskoye
Radio
8 KHARKEVICH, A. A. Spectra and Analysis. 1953. Gostekhizdat
9 MALKIN, I. G. Some Problems in the Theory of Non-Linear
Oscillation. 1956. Gostekhizdat
10 POPOV, YE. P. The Dynamics of Automatic Control Systems. 1954.
Gostekhizdat
11 CH'IEN HSUEH-SEN. Technical Cybernetics. 1956. Izd. Inostr. Lit.
12 LANING, J. H., and BATTIN, R. H. Random Processes in Automatic
Control Problems (Russian transl.). 1958. Izd. Inostr. Lit.
Optimal Control of Systems with Distributed Parameters
A. G. BUTKOVSKIY
In many engineering applications the need arises for control of
systems with parameters that are distributed in space. A wide
class of industrial and non-industrial processes falls within this
category: production flow processes, heating of metal in metho-
dical or straight-through furnaces before rolling or during heat-
treatment, establishment of given temperature distributions in
'thick' ingots, growing of monocrystals, drying and calcining of
powdered materials, sintering, distillation, etc., right through to
the control of the weather.
The processes in such systems are normally described by
partial differential equations, integral equations, integro-
differential equations, etc.
The problem of obtaining the best operating conditions for
the installation (the highest productivity, minimum expenditure
of raw material and energy, etc.) under given additional con-
straints has required the development of an appropriate mathe-
matical apparatus capable of determining the optimal control
actions for the plant.
Pontryagin's maximum principle and Bellman's dynamic
programming method have been the most interesting results in
this direction for systems with lumped parameters.
A wide class of systems with distributed parameters is
described by a non-linear integral equation of the following
form:
Q(P) = ∫_D K[P, S, Q(S), U(S)] dS   (1)
Here the matrix
Q(P) = ||Q¹(P), Q²(P), ..., Qⁿ(P)||   (2)
describes the condition of the controlled system with distributed
parameters, while the matrix
U(P) = ||U¹(P), ..., Uʳ(P)||   (3)
describes the control actions on the system. Here and in the
following, the index i will refer to a row number and j to a
column number in a matrix. The point P belongs to a certain
fixed m-dimensional region D in Euclidean space.
The components of the single-column matrix
K(P, S, Q, U) = ||K¹(P, S, Q, U), ..., Kⁿ(P, S, Q, U)||   (4)
belong to class L₂ and have continuous partial derivatives
w.r.t. the components of the matrix Q.
It will be assumed that the function U(P) is piecewise continuous,
its values being chosen from a certain fixed permissible
set Ω. Controls U(P) having this property will be called
permissible.
Further, from the set of conditions Q(P) and controls U(P),
related by integral eqn (1), let q functionals be determined,
having a continuous gradient (weak Gateaux differential):
I_i = I_i[Q(P)],  i = 0, 1, ..., l   (5)
I_i = I_i[Q(P), U(P)] = Φ_i(z),  i = l + 1, ..., q   (6)
where
z = ∫_D F[S, Q(S), U(S)] dS = ||∫_D F⁰[S, Q(S), U(S)] dS, ..., ∫_D F^k[S, Q(S), U(S)] dS||   (7)
The functions Φ_i(z), i = l + 1, ..., q, and F_i(S, Q, U), i = 0, 1, ..., k,
are continuous and have continuous partial derivatives w.r.t.
the components of the matrices z and Q respectively.
The optimal control problem is formulated in the following
manner.
It is required to find a permissible control U (P) such that
by virtue of equation (1)
I_i = 0,  i = 0, 1, ..., p − 1, p + 1, ..., q   (8)
while the functional I_p assumes its smallest value. Here p is
a fixed index, 0 ≤ p ≤ q.
The following rectangular matrices are introduced:
∂Φ/∂z = ||∂Φ_i/∂z_j||,  i = l + 1, ..., q;  j = 0, 1, ..., k   (9)
∂F/∂Q = ||∂F_i/∂Q^j||,  i = 0, 1, ..., k;  j = 1, 2, ..., n   (10)
grad I = ||grad_j I_i||,  i = l + 1, ..., q;  j = 1, 2, ..., n   (11)
where grad_j I_i denotes the jth component of the vector grad I_i
w.r.t. the coordinate Q^j.
The following theorem5 can be used as the basis of a solution
of the problem formulated above on the optimum control of a
plant with distributed parameters.
Theorem. Let U = U (S) be a permissible control such that
by virtue of eqn (1) the conditions (8) are satisfied and the
matrix function M(P, R) = ||M_ij(P, R)||, i, j = 1, 2, ..., n,
satisfies the integral equation [linear in M(P, R)]
M(P, R) + (∂K/∂Q)[P, R, Q(R), U(R)] + ∫_D (∂K/∂Q)[P, S, Q(S), U(S)] M(S, R) dS = 0   (12)
Then for this control U(S) to be optimal there must exist
one-row numerical matrices
a = ||c₀, c₁, ..., c_l||  and  b = ||c_{l+1}, ..., c_q||   (13)
of which at least one is not null, and also c_p ≤ 0, such that for
almost all fixed values of the argument S ∈ D the function
(S, U)= a [grad/ {Q (P)}, K {P, S, Q(S), U}
?f
+ b_o[fD F {P, Q (P), U (P)} dP1
aZ
a
? [?aQF {P, Q(P), U (P)} , K {P, S, Q (S), U}
? M (P, R) K {R, S, Q(S), U} did
+ b a. [f F 113, *Q (P), U (P)} dld? F {S, Q (S), v}
a Z D
(14)
of the variable U ∈ Ω attains a maximum, i.e. for almost all
S ∈ D the following relation holds:
π(S, U) = H(S)   (15)
where
H(S) = sup over U ∈ Ω of π(S, U)   (16)
In the heating of a body there is usually given a temperature
distribution Q* = Q*(x) which is required to be attained in the
minimum time. However, if the equation
Q(x, t) = Q*(x)   (19)
for any permissible control is not satisfied for any fixed t,
0 < t < T, then the problem becomes that of determining a
permissible control u (t), 0 < t < T, such that the functional
I₀ = ∫₀^L [Q*(x) − Q(x, T)]^γ dx   (20)
attains its minimum. Here γ is a positive even integer.
Since the integrand in eqn (18) is independent of the con-
trolled function Q(x, t), then according to eqn (12) the
function M(t, τ) ≡ 0 for all t and τ in the interval [0, T].
It follows that the function π(τ, U) takes the form
π(τ, U) = −γ c₀ U ∫₀^L [Q*(x) − Q(x, T)]^{γ−1} K(x, T, τ) dx   (21)
Since in this case, by the conditions of the theorem, c₀ ≤ 0,
we have −γc₀ ≥ 0, and hence the maximum of π(τ, U) w.r.t. U,
with A₁ ≤ U ≤ A₂, is reached when
U(τ) = (A₁ + A₂)/2 + [(A₂ − A₁)/2] · sgn ∫₀^L [Q*(x) − Q(x, T)]^{γ−1} K(x, T, τ) dx   (22)
If we substitute expression (18) for Q(x, t) in eqn (22), then
we obtain an integral equation for determining the optimum
control action U(τ).
For example, if γ = 2, A₁ = −1, A₂ = 1, then the optimum
control action satisfies the following integral equation:
U(t) = sgn ∫₀^L {Q*(x) − ∫₀^T K(x, T, τ) U(τ) dτ} K(x, T, t) dx   (23)
As an example of the application of this theorem, consider
the important practical problem of the heating of a massive
body in a furnace. Let the temperature distribution along the
x axis, 0 < x < L, at any instant t, 0 < t < T, be described by
the function Q = Q (x, t). Here the temperature U (t) of the
heating medium, which in this case is the controlling agent, is
a function constrained by the conditions
A₁ ≤ U(t) ≤ A₂   (17)
i.e. in this case the set Ω is the interval [A₁, A₂].
It is known that the distribution function Q (x, t), if initially
zero, is related to the control U (t) by the following integral
equation
Q(x, t) = ∫₀^t K(x, t, τ) U(τ) dτ   (18)
where K (x, t, r) is a known weighting function.
Opening the brackets and altering the order of integration,
one finally gets
U(τ) = sgn [B(τ) − ∫₀^T N(τ, θ) U(θ) dθ]   (24)
where N(τ, θ) is the symmetrical kernel
N(τ, θ) = ∫₀^L K(x, T, τ) K(x, T, θ) dx   (25)
B(τ) = ∫₀^L Q*(x) K(x, T, τ) dx   (26)
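Equation (24) can be attacked numerically by successive approximation. The Python sketch below is an illustration only: the kernel K and the target distribution Q* are hypothetical placeholders chosen for the demonstration, not quantities from the paper.

# Successive approximation for the bang-bang control satisfying eqn (24).
import numpy as np

L, T, n = 1.0, 1.0, 60
x = np.linspace(0, L, n)
tau = np.linspace(0, T, n)
dx, dtau = x[1] - x[0], tau[1] - tau[0]

K = np.exp(-(T - tau)[None, :] * (1 + x[:, None]))    # hypothetical weighting function K(x, T, tau)
Q_star = 0.5 * np.ones(n)                              # hypothetical desired distribution Q*(x)

N = (K[:, :, None] * K[:, None, :]).sum(axis=0) * dx   # N(tau, theta), eqn (25)
B = (Q_star[:, None] * K).sum(axis=0) * dx             # B(tau), eqn (26)

U = np.ones(n)                                         # initial guess, |U| <= 1
for _ in range(100):                                   # U <- sgn[B - ∫ N(tau, theta) U(theta) dtheta]
    U_new = np.sign(B - N @ U * dtau)
    if np.array_equal(U_new, U):
        break
    U = U_new
print(U)                                               # switching (bang-bang) control profile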
Methods of approximating partial differential equations by
finite difference equations can be applied successfully to the
approximate solution of problems of the optimal control of
systems with distributed parameters. This has the advantage
that results obtained for lumped-parameter optimal systems
can be used.
As an example, consider the optimal control of a system
described by the following equation
∂Q/∂t = ∂²Q/∂x²,  Q = Q(x, t),  0 ≤ x ≤ S,  0 ≤ t
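As a brief illustration of the finite-difference route mentioned above, the following Python sketch (grid sizes, boundary conditions and the trial control are assumptions for the example, not taken from the paper) replaces the distributed plant by a lumped system of ordinary difference equations, to which lumped-parameter optimal-control results can then be applied.

# Explicit finite-difference lumping of dQ/dt = d2Q/dx2 on a spatial grid.
import numpy as np

n, S, dt = 20, 1.0, 1e-4
dx = S / (n - 1)
Q = np.zeros(n)                      # Q_i(t) ~ Q(x_i, t), initially zero

def step(Q, U):
    """One explicit Euler step; U is the boundary (heating-medium) temperature."""
    Qn = Q.copy()
    Qn[1:-1] = Q[1:-1] + dt * (Q[2:] - 2 * Q[1:-1] + Q[:-2]) / dx**2
    Qn[0], Qn[-1] = U, Q[-2]         # assumed boundary conditions: heated face, insulated face
    return Qn

for _ in range(5000):                # constant control U = 1 as a trial input
    Q = step(Q, 1.0)
print(Q.round(3))                    # approximate temperature profile Q(x, t)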
From these one obtains the following equation for the deter-
mination of the mathematical expectation m_u1:
(TD2+D---)ai)m?1 = ? Nio (44)
For A < 0 the stable process of tuning is assured. When one
determines the centred random component u10, one obtains the
equations:
[T D2 + D ? Aa ju?= ? 2 Ain co[C jo (0) + C?(D)] e(?)
+ 2 AvmziZ? , (45)
where
The magnitude mz, may be set equal to zero by proper selection
of the corresponding filter. Taking this into account and also
utilizing expressions for se in terms of X? ? Z?, one obtains
from eqn (45)
where
(46)
? 2 Amco [Cio (C)+ Cio (D)]
(D)= (T. D2 ? D ? 2ai)[1+ A (D) Bo (D)]
(47)
In this case, for computing the dispersion of parameter u₁ in a
stabilized regime, one obtains:
D_u1 = ∫₀^∞ |Φ₁₀(jω)|² [S_X(ω) + S_Z(ω)] dω   (48)
where S_X and S_Z are the spectral densities of the random functions
X and Z. For ξ₀ = const. the magnitude m_u1 = 0 in the sta-
bilized regime. In this case the systematic error of a following
system with self-tuning in a stabilized regime of operation is
equal to m_ε = m_ε0, that is, equal to the systematic error for an
optimal value of parameter ξ₀. The random component of the
error of following is equal to:
80 =
[1+ biA(D) 1
1+ A(D) Bo (MO1 (D)] + A (D)Bo(D)(X? +Z?) (49)
where the magnitude b1 according to formula (36) is given by
b _aBo(G)
1? aic, me?
(50)
In computing the dispersion of error 8 one obtains the formula:
2[biAc oOw) (Do
+A()B()i(hod?A )B(la))
[Sx (W) + Sz (W)] dco (51)
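Dispersion formulas of the type of (48) and (51) reduce to a frequency-domain quadrature. The short Python sketch below illustrates this; the transfer function and spectral densities are hypothetical placeholders, not the system of the example.

# Error variance as the integral of |Phi(jw)|^2 [Sx(w) + Sz(w)] over frequency.
import numpy as np

w = np.linspace(1e-3, 200.0, 20000)                  # frequency grid, rad/s
Phi = 1.0 / (1.0 + 1j * w * 0.05)                    # assumed closed-loop transfer function Phi(jw)
Sx = 2.0 / (1.0 + (w / 5.0) ** 2)                    # assumed spectral density of the useful signal X
Sz = 0.01 * np.ones_like(w)                          # assumed (white) spectral density of the noise Z

D = np.trapz(np.abs(Phi) ** 2 * (Sx + Sz), w)        # D = ∫ |Phi(jw)|^2 [Sx(w) + Sz(w)] dw
print(f"error dispersion D ~ {D:.4f}")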
The calculations carried out for a tracking system (Figure 4),
having the values of the preceding example for A = 105, T = 1.0,
and the optimal value of parameter ξ₀ = 370, show a sufficiently
good effectiveness of tuning. Thus, the mathematical expectation
of the tuned parameter ξ is equal to m_ξ ≈ ξ₀, and the dispersion
of the error of tuning computed by formula (48) is given by
D_ξ = D_u1 = 4 × 10⁻⁷. From these calculations it follows that
the maximum relative error of tuning the parameter ξ is equal
to 6.3 × 10⁻² per cent. As regards the error of tracking by the
following system, the mathematical expectation of this error in
tuning coincides with the value of this magnitude in an optimal
system: m_ε = m_ε0 = 0.33 × 10⁻².
The dispersion of the error of tracking in a self-tuning system
computed by formula (51) coincides, to three significant figures,
with the value of the dispersion of the error of tracking in the
optimal system, D_ε = D_ε0 = 2.31 × 10⁻⁵. Thus, in the considered
example the self-tuning system with the utilization of the method
of auxiliary operator assures an effective tuning for the minimum
of the second initial moment of error in the presence of random
disturbances.
Conclusion
The considered scheme of a self-tuning system may be
effectively utilized both for the direct control of objects and the
synthesis of automatic control systems during their design. The
advantages of the system of self-tuning utilizing the method of
auxiliary operator are: relative simplicity of achieving tuning
circuits, effectiveness of operation in the presence of disturb-
ances, and the possibility of obtaining high speed of
response.
References
1 FELDBAUM, A. A. Computers in Automatic Control Systems. 1959.
Moscow; GIFML
2 MARGOLIS, M. and LEONDES, C. T. A parameter tracking servo
for control systems. Trans. Inst. Radio Engrs, N. Y. AC-4, N 2
(1959)
3 MARGOLIS, M. and LEONDES, C. T. On the theory of adaptive
control systems; the learning model approach. Automatic and
Remote Control. 1961. London; Butterworths
4 PUGACHYOV, V. S. Theory of random functions and its application
to problems of automatic control. 1960. Moscow; GIFML
Figure 1. Figure 2. Figure 3. Figure 4.
On Self-adjusting Control Systems Without Test
Disturbance Signals
E.P. POPOV, G.M. LOSKUTOV and R.M. YUSUPOV
Statement of the Problem
In this paper, the term 'self-adjusting control system' means a
system which performs the following three operations:
(1) Measures by means of automatic search or computes
from the results of measurements the dynamic characteristics
of the system, and possibly the characteristics of the disturbances
as well.
(2) On the basis of this or that criterion defines the controller
setting, parameters or structure needed for calibration (or
optimization).
(3) Realizes the resultant controller structure, parameter or
setting values.
Many studies of the theory and practice of self-adjusting
control systems for stationary controlled plants have so far
appeared in the world literature. There have also been con-
tributions on self-adjusting of quasi-stationary systems. But there
is almost a complete lack of contributions dealing more or less
specifically with problems of synthesis and analysis of self-adjust-
ing control systems for essentially non-stationary controlled
plants. Moreover, as far as the authors are aware, even in the
case of stationary and quasi-stationary systems, the process of
self-adjustment is frequently effected solely on the basis of an
analysis of the dynamic characteristics of the system, without
taking into account the unmeasured external disturbances acting
upon the controlled plant. At the same time it is obvious that
external disturbance, besides the dynamic characteristics of the
system, determines the quality of the process of control.
Another drawback of many of the self-adjusting systems in
existence and proposed in the literature is the need to use
special test signals to check the dynamic characteristics of the
system.
This paper proposes, and attempts to validate, one of the
possible principles for the creation of a self-adjusting control
system for a particular class of non-stationary controlled plants.
The main advantage of the principle in question is the
opportunity it provides to take account of both internal
(system parameters) and external (harmful and controlling
disturbances) conditions of operation of the system. In contrast
to the self-adjusting systems known, a system created in accord-
ance with the principle proposed will make it possible to obtain
automatically the fullest possible information about the process
under control without the use of test signals.
For the operation of a self-adjusting control system created
on the basis of the principle proposed, a mathematical model
of a reference (calculated) control system must be constructed.
A 'reference system' is understood to be a system the controller
of which is designed in accordance with the requirements on
the quality of the control process, with the assumption that the
mode of variation in time of the system's parameters as well as
the disturbance effects is known.
The structure of the mathematical approximation of the real
process is selected to match that of the mathematical model of
the reference process. The self-adjusting system operates in such
a way as to ensure continuous identity between the mathematical
approximation of the real process and the model of the reference
system. In this connection, the problem is posed of making the
mathematical approximation of the real process as close as
possible to the model of the reference process.
Without loss of generality, the case of control of only one
variable is considered, which is denoted by x, and the correspond-
ing reference differential equation is written in the form
x^(n) + Σ (i=0 to n−1) a_iE(t) x^(i) = Σ (i=0 to m) b_iE(t) f_E^(i)   (1)
The real process is approximated by a linear differential equation
of the same structure:
x^(n) + Σ (i=0 to n−1) a_i(t) x^(i) = Σ (i=0 to m) b_i(t) f^(i)   (2)
t = t₀:  x^(i)(t₀) = x₀^(i)   (i = 0, 1, ..., n − 1)
The operation of the proposed self-adjusting control system
will be examined in accordance with the sequence of the process
of self-adjustment, indicated at the beginning of the definition.
General Case of Determination of the Dynamic Characteristics
of a System
In order to create an engineering method of determining the
dynamic characteristics of non-stationary systems in the construc-
tion of a self-adjusting control system, this paper proposes the
use of the methods of stationary systems. For this purpose, the
non-stationary system (1) is replaced by an equivalent system
with piecewise-constant coefficients. (The methods of stationary
systems are used on the intervals of constancy of the coefficients.)
The transfer from a system with variable coefficients to one with
piecewise-constant coefficients is effected on the basis of a
theorem which can be formulated with the assistance of a
number of the propositions of the theory of ordinary differential
equations. In accordance with this theorem, the solution of a
differential equation of form (1) with piecewise-continuous
coefficients (a finite number 'of discontinuities of the first kind
is assumed) can be obtained with any degree of accuracy in a
preset finite interval (t₀, T₀) by breaking down the latter into
a finite number of sub-intervals (t_K, t_{K+1}) and replacement of
the variable coefficients within each sub-interval by constants,
equal to any values of the corresponding coefficients inside or
on the boundaries of the sub-intervals under consideration.
In the general case, it is expedient to effect the breakdown
process by the method of multiple iteration of solutions on a
high-speed computer.'
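The idea of the theorem can be checked numerically. The Python sketch below (the variable coefficient, disturbance and sub-interval grid are assumptions chosen for illustration) solves a second-order equation with a slowly varying coefficient and compares it with the solution obtained after freezing the coefficient on each sub-interval.

# Piecewise-constant approximation of a variable-coefficient equation.
import numpy as np
from scipy.integrate import solve_ivp

a = lambda t: 2.0 + np.sin(0.5 * t)          # slowly varying coefficient (assumed)
f = lambda t: 1.0                            # constant disturbance (assumed)

def rhs(t, y, a_fun):
    return [y[1], f(t) - a_fun(t) * y[0]]    # x'' + a(t) x = f

t_grid = np.linspace(0.0, 10.0, 11)          # sub-interval boundaries t_K
y = [0.0, 0.0]
sol_pw = []
for tk, tk1 in zip(t_grid[:-1], t_grid[1:]):
    a_frozen = a(tk)                         # constant value of a(t) inside (t_K, t_{K+1})
    seg = solve_ivp(rhs, (tk, tk1), y, args=(lambda t, a0=a_frozen: a0,))
    y = seg.y[:, -1]
    sol_pw.append(y[0])

exact = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=t_grid[1:], args=(a,))
print(np.max(np.abs(np.array(sol_pw) - exact.y[0])))   # error at the nodes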
Let the differential equation with variable coefficients (1)
be approximated by an equation with piecewise-constant
coefficients.
Then, for t ∈ (t_K, t_{K+1}), one may write
x^(n) + Σ (i=0 to n−1) a_iEK x^(i) = Σ (i=0 to m) b_iEK f_E^(i)   (3)
In accordance with differential equation (3), the real process
is approximated by the equation
x^(n) + Σ (i=0 to n−1) a_iK x^(i) = Σ (i=0 to m) b_iK f^(i)   (4)
As the dynamic characteristics of the system at the first
stage of operation of the self-adjusting system on each interval
(t_K, t_{K+1}), the coefficients a_iK (i = 0, 1, ..., n − 1), b_iK (i = 0,
1, ..., m) are defined.
The simplest way to define these coefficients lies in measuring
the values of x and f and their corresponding derivatives at the
points t_K = τ₁, τ₂, ..., τ_S = t_{K+1} − Δt.
By substituting these values into eqn (4), one obtains for
each interval (t_K, t_{K+1}) a system of S algebraic
equations for defining the desired coefficients.
In practice it is not always possible to measure the disturbing
effect f and its derivatives. Therefore, in the general case, the
above-mentioned method of defining the coefficients a_iK and b_iK
cannot be directly employed.
This difficulty may be avoided in the following way. The real
process is approximated, not by differential eqn (4), but by a
differential equation of the form
x̄^(n) + Σ (i=0 to n−1) ā_iK x̄^(i) = Σ (i=0 to m) b̄_iK f_E^(i)   (5)
In eqn (5) the disturbing effect and its corresponding derivat-
ives are taken to equal the reference values. This avoids
the need to measure the real disturbance f, and makes it
possible to use the above-mentioned means of defining the
coefficients of the differential equation approximating the real
control process. The non-agreement of the real disturbances
with the reference ones are taken into account through the
coefficients aiK and biK. Therefore dashes are placed over them.
In the general case 5i(i) xe0- (i = 0, 1.....n) i.e., there
is an approximation error. In view of this, in the transfer from
eqn (4) to eqn (5), it is necessary to evaluate the maximum,
possible value of this approximation error, using for this
purpose the assumed values of the limits of variation of
disturbance f
If for some class of controlled plants it can be assumed that
in the process of operation only the scale of the disturbance changes,
i.e. the equality
f(t) = C_K f_E(t),  t ∈ (t_K, t_{K+1})   (6)
where C_K is the random scale of the disturbance, is satisfied, then
the approximation error is absent, and the connection between the
coefficients of eqns (4) and (5) is expressed by the equalities:
ā_iK = a_iK  (i = 0, 1, ..., n − 1)
b̄_iK = C_K b_iK  (i = 0, 1, ..., m)   (7)
Equation (5) is used (henceforward, to simplify the notation,
the dashes over the coefficients and the variable x are dropped)
for definition of the coefficients a_iK and b_iK. It is assumed that
measurements of x, x′, ..., x^(n) are performed at the points
τ₁, τ₂, ..., τ_S = t_{K+1} − Δt.
The values of f_E, f_E′, ..., f_E^(m) are known. Then, for the
definition of the (n + m + 1) desired coefficients in each interval
(t_K, t_{K+1}) one obtains the following system of S algebraic
equations, which will be written in abbreviated form thus:
Σ (i=0 to n−1) x^(i)(τ_j) a_iK − Σ (i=0 to m) f_E^(i)(τ_j) b_iK = −x^(n)(τ_j)   (j = 1, 2, ..., S)   (8)
It is not always expedient to solve system (8) directly for
S = m + n + 1, since, on account of the existence of measuring-
instrument errors and random high-frequency control-process
oscillations, the accuracy of definition of the coefficients will
be very low. Moreover, for the same reasons, system (8) may
be altogether incompatible.
To eliminate the case of incompatibility and to increase the
accuracy of definition of the desired coefficients, the method of
least squares is employed¹,². In so doing, the problem of
approximation is also solved. When utilizing this method,
it is expedient to take S > m + n + 1.
Using the method of least squares, the coefficients a_iK, b_iK
are defined by minimizing with respect to these coefficients the
function
L = Σ (j=1 to S) p(τ_j) L_j²
where
L_j = Σ (i=0 to n−1) x^(i)(τ_j) a_iK − Σ (i=0 to m) f_E^(i)(τ_j) b_iK + x^(n)(τ_j)
is the residual, and p(τ_j) are weight coefficients which
define the value of each measurement and, accordingly, of
each equation of system (8).
The necessary condition for the minimum of the function L is the
equality to zero of its first-order partial derivatives with respect to
a_iK and b_iK. Having computed the partial derivatives and
equated them to zero, one obtains an already compatible
system of m + n + 1 linear algebraic equations for the defini-
tion of the m + n + 1 coefficients:
∂L/∂a_iK = Σ (j=1 to S) p(τ_j) L_j ∂L_j/∂a_iK = 0   (i = 0, 1, ..., n − 1)
∂L/∂b_iK = Σ (j=1 to S) p(τ_j) L_j ∂L_j/∂b_iK = 0   (i = 0, 1, ..., m)   (9)
Solving system (9) by known methods, one obtains the
values of a_iK and b_iK.
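The identification step (8)-(9) is an ordinary weighted linear least-squares problem. The Python sketch below illustrates it for a hypothetical second-order plant with synthetic measurements (all numbers and the noise level are assumptions for the example).

# Weighted least-squares definition of a_iK, b_iK from measured x, x', x'' and f_E.
import numpy as np

rng = np.random.default_rng(0)
S = 40
tau = np.linspace(0.0, 1.0, S)

a1_true, a0_true, b0_true = 1.5, 4.0, 2.0       # assumed plant x'' + a1 x' + a0 x = b0 f_E
fE = np.ones(S)
x  = 0.5 * (1 - np.exp(-tau))                   # stand-in measured samples
dx = 0.5 * np.exp(-tau)
d2x = b0_true * fE - a1_true * dx - a0_true * x + 0.01 * rng.standard_normal(S)

# Each measurement gives one row of system (8):  a0*x + a1*x' - b0*fE = -x''.
A = np.column_stack([x, dx, -fE])
rhs = -d2x
p = np.ones(S)                                  # weight coefficients p(tau_j)
W = np.sqrt(p)[:, None]
coeffs, *_ = np.linalg.lstsq(W * A, np.sqrt(p) * rhs, rcond=None)
a0K, a1K, b0K = coeffs
print(a0K, a1K, b0K)                            # estimates of a_0K, a_1K, b_0K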
In certain cases the process of control at intervals may be
approximated by a differential equation of the form
x^(n) + Σ (i=0 to n−1) a_iK x^(i) = φ_EK(t)   (10)
where
φ_EK(t) = Σ (i=0 to m) b_iEK f_E^(i)(t)
This coarser approximation will make it possible to reduce
computing time considerably by reducing the number of
coefficients sought; in the given case only the coefficients a_iK
are required.
In the given approximation the deviations of the values of
the real coefficients b_iK and real disturbances f will be taken into
account in the system via the values of the coefficients a_iK.
System (11) will be the initial algebraic system for definition of
the coefficients:
Σ (i=0 to n−1) x^(i)(τ_j) a_iK = φ_EK(τ_j) − x^(n)(τ_j)   (j = 1, 2, ..., S)   (11)
For definition of the desired coefficients a_iK by the method
of least squares, one minimizes the function
L₁ = Σ (j=1 to S) p(τ_j) L_j²   (12)
where
L_j = Σ (i=0 to n−1) x^(i)(τ_j) a_iK + x^(n)(τ_j) − φ_EK(τ_j)
Using the necessary condition for the existence of a minimum
of function (12), for the definition of the n coefficients a_iK (i = 0,
1, ..., n − 1) one obtains a system of n algebraic equations:
∂L₁/∂a_iK = Σ (j=1 to S) p(τ_j) L_j ∂L_j/∂a_iK = 0   (i = 0, 1, ..., n − 1)   (13)
All the above discussion and the operations were performed
on the assumption that the values of the control variable
and the necessary quantity of derivatives at the moments
of time of interest are available. In practice, however, one is
usually limited to second-order derivatives.
In a number of cases real high-order systems may be
approximated by second-order differential equations, preserving
the description of their main dynamic properties. But even in
the case of more complex high-order systems it is possible to
suggest a number of algorithms for defining the searched
coefficients, given the existence of a limited quantity of derivat-
ives, some of which are as follows:
(a) Derivatives of higher orders of the control variable can
be calculated with the assistance of a digital computer on the
basis of the Lagrange and Newton interpolation formulae or
according to the formulae of quadratic interpolation (method
of least squares).
(b) If one integrates each term of eqns (5) and (10) n − q
times, where q is the order of the highest derivative of the control
variable which one can measure in the system with the requisite
accuracy, then, taking the limits of integration t_K, τ_j (j = 1,
2, ..., S), one obtains the integral forms of eqns (8) and (11)
respectively. If reference values are assigned to the magnitudes
x^(n−1)(t_K), x^(n−2)(t_K), ..., x^(n−q+1)(t_K) in these equations,
then for defining the coefficients a_iK (i = 0, 1, ..., n − 1) and
b_iK (i = 0, 1, ..., m) it is sufficient to measure the derivatives
up to the qth order.
(c) Practically all existing controlled plants and control
systems can be described by a set of differential equations, each
of which characterizes one degree of freedom of movement and
therefore has an order no higher than second. -
(d) Sometimes, to reduce the order of the derivatives required
for measurement, one may also take advantage of a number of
coarse assumptions in relation to the terms of eqns (5) and (10),
which contain derivatives of high orders.
For example, in these equations the values of the derivatives
x^(n), x^(n−1), ..., x^(n−q+1) can be assumed equal to the reference values.
(e) The coefficients of approximating eqns (5) and (10) can
be defined without any recourse to algebraic systems (8) and (11),
if one uses the following method5. -
Let the composition of the control system include an analogue
simulator, on which is set up a differential equation of form (5)
or (10). In this simulator there is a controlling device, which
provides an opportunity to effect variation of coefficients aiK
and biK in a certain way.
The control system memorizes the curve of the real process
in the interval (t_K, t_{K+1} − Δt), and selection of the coefficients
a_iK and b_iK is performed on the simulator in such a way as to
bring together, in a certain sense, the real process and the solution
of the equation set up on the simulator.
When the quantitative value of the proximity evaluation
reaches the predetermined value, the magnitudes of the coefficients
a_iK and b_iK are fixed and extracted for subsequent employment
in the self-adjusting control system. Obviously the simulator
operation time scale must be many times less than the real time
scale of the system. Only under this condition can the requisite
high speed of self-adjustment be achieved. Practically any time
scale may be realized with the assistance of analogue computing
techniques.
Automatic Synthesis of Controller Parameters
For the operation of the majority of self-adjusting systems,
the system operation quality criterion is set in advance. For
systems constructed on the basis of the proposed principle, it
is generally expedient to use as the criterion the expression
M = Σ (i=0 to n−1) (a_iK − a_iE)² + Σ (i=0 to m) (b_iK − b_iE)²   (14)
This criterion generalizes both of the methods of approximation
of the real control process expounded above.
To simplify subsequent operations, the following notations
are introduced:
b_0K = a_nK;  b_1K = a_{n+1,K};  ...;  b_mK = a_{n+m,K}
Expression (14) can then be rewritten in the form
M = Σ (i=0 to n₀) (a_iK − a_iE)²,   n₀ = n + m for (5),  n₀ = n − 1 for (10)   (15)
On each interval (tK, tK+1) the adjustable parameters are so
selected as to bring expression (15) to the minimum. The ideal,
i.e., most favourable, case would be one when M would reach
zero as the result of selection of the adjustable parameters. This
is not always possible, however. In the first place, not all the
coefficients a_iK (i = 0, 1, ..., n₀) are controllable. Second, in
multi-loop non-autonomous systems even the values of the
controllable coefficients cannot all be tuned up to the reference
values simultaneously, since the relationship of the coefficients
528/3
Declassified and Approved For Release 2012/12/14: CIA-RDP80T00246A023500250001-3
Declassified and Approved For Release 2012/12/14: CIA-RDP80T00246A023500250001-3
528/4
a_i to the adjustable parameters, although usually linear, is
nevertheless arbitrary in relation to the number of adjustable
parameters and to the signs and coefficients with which these
parameters enter into the expressions for a_i.
The second difficulty may be avoided by means of successful
selection of the reference system or by complete disconnection of
the loops (channels) of control of the main variables, i.e., by
satisfying the conditions of autonomy.
It is assumed that all the coefficients a_i (i = 0, 1, ..., n₀) are
controllable (in practice the values of uncontrollable coefficients
may be reckoned to be reference values). Then, for the coeffici-
ents a_i one may write
a_i = a_i(K₁, K₂, ..., K_p; T₁, T₂, ..., T_v; l₁, l₂, ..., l_r)   (i = 0, 1, ..., n₀)
where K₁, K₂, ..., K_p are the gains of the controlled plant;
T₁, T₂, ..., T_v are the time constants of the controlled plant and
the controller; and l₁, l₂, ..., l_r are the gains of the controller
(adjustable parameters).
Since the coefficients a_i usually depend on the adjustable
parameters linearly, one may write
a_i = Σ (j=1 to r) β_ij l_j + ν_i   (i = 0, 1, ..., n₀)   (16)
where
β_ij = β_ij(K₁, K₂, ..., K_p; T₁, T₂, ..., T_v);   ν_i = ν_i(K₁, ..., K_p; T₁, T₂, ..., T_v)
Using the necessary condition for the existence of a minimum
of the function M, one obtains the following algebraic system for
determination of the setting values l₁, l₂, ..., l_r:
Σ (i=0 to n₀) [a_iK(l₁, l₂, ..., l_r) − a_iE] ∂a_iK(l₁, l₂, ..., l_r)/∂l_j = 0   (j = 1, 2, ..., r)   (17)
It is assumed that when the system is in operation, the
adjustable parameter values only change in accordance with
their computed values, i.e. at any moment of time one knows
the magnitudes of l₁, l₂, ..., l_r. Then, for the interval (t_K, t_{K+1}),
until the moment of correction of the adjustable parameters, in
accordance with expression (16) one can write:
a_iK = Σ (j=1 to r) β_ijK l_{j,K−1} + ν_iK   (18)
From system (18) one may determine the magnitudes of
β_ijK and ν_iK (i = 0, 1, ..., n₀; j = 1, 2, ..., r), since the values
of a_iK (i = 0, 1, ..., n₀) and l_{j,K−1} (j = 1, 2, ..., r) are known.
Taking into account eqn (16), after substitution of the
values of β_ijK and ν_iK the algebraic system (17) for defining
l_{1K}, l_{2K}, ..., l_{rK} takes the form
Σ (i=0 to n₀) [Σ (s=1 to r) β_isK l_sK + ν_iK − a_iE] β_ijK = 0   (j = 1, 2, ..., r)   (19)
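Because the dependence (16) is linear, the synthesis step (17)/(19) is again a linear least-squares problem. The Python sketch below is an illustration with hypothetical numbers (the reference coefficients, the beta matrix and nu vector are assumptions, not values from the paper).

# Controller settings l_j minimizing the criterion M of eqn (15).
import numpy as np

a_E  = np.array([6.0, 11.0, 4.0])          # reference coefficients a_iE (assumed)
beta = np.array([[1.0, 0.5],               # beta_ij for the current interval (assumed)
                 [2.0, 1.0],
                 [0.0, 1.5]])
nu   = np.array([1.0, 3.0, 0.5])           # nu_i for the current interval (assumed)

# Minimize M = sum_i (beta @ l + nu - a_E)_i^2 over the settings l.
l, *_ = np.linalg.lstsq(beta, a_E - nu, rcond=None)
a_K = beta @ l + nu
print("settings l_j:", l)
print("residual criterion M:", np.sum((a_K - a_E) ** 2))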
Realization of Adjustable Parameters
Block-circuit with a Self-adjusting System using a Digital Computer
The duration of the intervals of constancy of the coefficients
of reference eqn (3), when a digital computer is used in the
control system, must satisfy the relation
t_{K+1} − t_K = T₁ + T₂ + T₃ + Δt   (20)
where T₁ = (S − 1)Δτ is the time required to carry out the measure-
ments; T₂ = N/n₀ is the time required for the computations;
T₃ is the actuator operating time; 0 < Δt < t_{K+1} − τ_S;
Δτ = τ_{j+1} − τ_j is the period of the measurements (j = 1, 2, ..., S);
n₀ is the computer speed of operation, and N is the number of
operations required to define the coefficients l_jK (j = 1, 2, ..., r).
It is obvious that to ensure better operation of the self-
adjusting system, it is necessary to reduce as much as possible
the magnitude T = T₂ + T₃.
Now the opportunities for reducing the time T3 are dealt
with. This question is directly linked with the choice of the
actuator. Electromechanical servosystems with a considerable
time constant are usually employed as actuators at the present
time. But it turns out that it is possible to suggest a number of
purely circuit variants of the change of the transfer functions or
of gains of the correcting devices (regulators) of the system.
These inertia-less actuators are termed 'static'. It is particularly
advantageous to produce static actuators with the aid of non-
linear resistors (varistors), valves with variable gain (vari-mu),
electronic multipliers, etc.
Consider, for example, one of the variants of a static
actuator based on an electronic multiplier. Let the law of
control have the form
y = Σ (j=1 to r) l_j x^(j)
and let the jth adjustable parameter have the value l_j0 at the moment
t₀ (start of operation of the system). While the system
operates in accordance with the signals of the computer, the
value l_j is constantly being corrected.
Thus at the end of the interval (t_K, t_{K+1}) one has l_jK = l_j0 + Δl_jK and
y = Σ (j=1 to r) l_j0 x^(j) + Σ (j=1 to r) Δl_jK x^(j)   (21)
Obviously each addend on the right-hand side of expression
(21) can be instrumented with the aid of the circuit in Figure 1,
where EM is the electronic multiplier and AD the adder.
The following are the self-adjusting system computer operating
algorithms: when the real process is approximated by
differential eqn (5), the algebraic systems (9), (18) and (19);
when the real process is approximated by differential eqn (10),
the algebraic systems (13), (18) and (19).
It is obvious that in the general, case it is more convenient
to solve the problem of self-adjustment according to the proposed
principle with the aid of a high-speed digital computer. It can
be specialized for solving systems of algebraic equations.
Figure 2 shows the block diagram of a self-adjusting system
with a digital computer.
Some Particular Cases
In the preceding sections the proposed principle for creating
a self-adjusting control system for non-stationary objects was
expounded in general form. In practice, one may naturally
encounter cases when the given principle can be used in more
simplified variants. Several such opportunities are considered.
(1) Obviously, the entire theory expounded above can be
applied fully to stationary and quasi-stationary systems, which
are particular instances of non-stationary systems. In this case
the durations of the intervals of constancy of the coefficients
(t_K, t_{K+1}) equal, for stationary systems,
K = 0,  t_{K+1} − t_K = T₀ − t₀   (22)
and for quasi-stationary systems
t_{K+1} − t_K ≫ Δt_p   (23)
where Atp is the control time (duration of the transient process).
As can be seen from relations (22) and (23), in stationary and
quasi-stationary systems one is less rigidly confined to the time
of analysis of the real process and synthesis of controller para-
meters. It is therefore possible to define coefficients aiK and biK
more accurately and to use criteria which reduce the self-
adjustment process speed, but make it possible to increase
the accuracy of operation of the system. Among such criteria
one may cite, in particular, the integral criteria for the evaluation
of the quality of a transient process3.
For stationary and quasi-stationary systems the problem of
self-adjustment in accordance with the principle proposed above
may be solved as a problem of the change in position of the
roots of the transfer function of a closed system, i.e., the self-
adjustment problem may be solved in accordance with the
requirements of the root-locus method, which is extensively
employed in automatic control theory. A feature of the use of the
propositions of the root-locus method in accordance with the
principle under consideration is that the zeros and poles defined
by the coefficients a_iK and b_iK are fictitious, since they depend
not only on the parameters of the controlled plant and controller,
but also on the real disturbances.
(2) In practice, one may encounter cases when a controller
is required to ensure only the stability of a system in the course
of operation. As is known, the stability of linear stationary
systems is determined by the coefficients of the characteristic
equation. This proposition is also valid for certain quasi-
stationary systems (method of frozen coefficients).
Therefore, to solve the problem posed (the provision of
stability), the control system must define the actual values of
the coefficients of the left-hand side of the differential equation
of the system and must set on the controller such gain
factors as will satisfy the conditions of stability, for example the
conditions of the Hurwitz algebraic criterion. On the assump-
tion that the disturbance f is constant in the interval (t_K, t_{K+1}), the
coefficients of the characteristic equation of the system on this
interval are determined in the following way.
The differential equation of the system for t ∈ (t_K, t_{K+1}) is
written in the form
x^(n) + Σ (i=0 to n−1) a_iK x^(i) = F_K
where F_K is in the general case the unknown right-hand side,
constant for t ∈ (t_K, t_{K+1}). The algebraic system for determining
the desired coefficients will then be written thus:
x^(n)(τ_j) + Σ (i=0 to n−1) x^(i)(τ_j) a_iK = F_K   (j = 1, 2, ..., S)   (24)
Since F_K is unknown, but is constant in the interval (t_K, t_{K+1}), it
is eliminated with the assistance of one of the equations of
system (24). For this purpose one uses the equation
x^(n)(τ₁) + Σ (i=0 to n−1) x^(i)(τ₁) a_iK = F_K
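The whole stability-oriented variant can be sketched numerically as follows (a Python illustration with synthetic measurements; the plant order, coefficients and noise-free data are assumptions): F_K is eliminated by subtracting the first equation of (24) from the others, the coefficients a_iK are found by least squares, and the Hurwitz conditions are then checked.

# Coefficient identification with elimination of F_K, followed by a Hurwitz test.
import numpy as np

def hurwitz_stable(coeffs):
    """coeffs = [1, a_{n-1}, ..., a_0] of s^n + a_{n-1} s^{n-1} + ... + a_0."""
    n = len(coeffs) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)          # coefficient index placed at H[i, j]
            if 0 <= k <= n:
                H[i, j] = coeffs[k]
    return all(np.linalg.det(H[:m, :m]) > 0 for m in range(1, n + 1))

rng = np.random.default_rng(1)
S, n = 12, 3
X = rng.standard_normal((S, n))                # "measured" x, x', x'' at tau_j (assumed)
a_true = np.array([6.0, 11.0, 6.0])            # a_0, a_1, a_2 of a stable plant (assumed)
F_K = 2.0
x_n = F_K - X @ a_true                         # x''' from the assumed plant equation

A = X[1:] - X[0]                               # subtract the first equation: F_K drops out
b = -(x_n[1:] - x_n[0])
a_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(a_est)                                   # recovered a_0, a_1, a_2
print(hurwitz_stable([1.0, a_est[2], a_est[1], a_est[0]]))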
the function η(τ) were determined and tallied with the prediction
of its mathematical expectation M{η(T)/η(t)} made according
to the actual value of η(t). The magnitude of the dispersion of
η(t) and the quantities σ_ij(t) do not affect ξ°, and manifest
themselves only in the quantity M{J_T[t₀, x₀, η₀, ξ°]}.
As has been shown above, it is very laborious, in the general
case, to construct the functional from Theorems 1 and 2. The
following methods may be indicated for its approximate deter-
mination (and consequently that of ξ°): the small-parameter
method; approximate solution of the functional equation (9);
approximating v in the mean; replacing the equations with
delays, or the functional equation (9), by finite-difference equa-
tions; replacing the equation with delays by a set of equations
for the Fourier coefficients of a section of the trajectory
x(t + θ), −h ≤ θ ≤ 0. These methods can be illustrated by
numerical examples.
Delay of Feedback Signals
Consider now the system of Figure 1 when there is no after-
action in the plant A, but signals in channels 1-3 can be
delayed.
Case II. Let the motion of the plant A be described by the
vector differential equation
dx/dt = f[t, x(t), η(t), ξ] + φ   (21)
where x, η, ξ, f have the same meaning as in the first part of the
paper, and φ is a disturbance of the white-noise type, giving
rise to diffusion spread of x(t) in the time dt with the matrix
M{dx_i dx_j} = ||σ_ij(t)|| dt   (22)
The problem is to minimize the quantities
J_T = M{∫ from t₀ to T of ω[t, x(t), ξ(t)] dt + ψ[x(T)]}   (23)
and
J_∞ = lim of J_T/(T − t₀) as T → ∞   (24)
Problem 4. It is required to find a control signal belonging
to S which minimizes (25) for all y? belonging to Yo, to > 0.
Problem 5. It is required to find a control signal $? belonging
to S minimizing (26) for all y? belonging to Yo, to > 0. Here Yo
, is some region of the components y given in advance.
Denote by x(t, y°(t_0), ξ) the random motion of the system generated by the initial conditions y°(t_0) with a certain choice of the control law; moreover, assume necessarily, for t_0 − h ≤ t < t_0, that the control signal ξ(t) tallies with the value ξ(t_0 + θ) (t_0 + θ = t) which is a component of y°(t_0).
Now formulate the criterion of optimality for Problem 4.
Theorem 3. It is assumed that for all y°(t_0) belonging to Y_0 and 0 ≤ t_0 ≤ T there exists an admissible control signal ξ(t) (or ξ = ξ[t, y(t)]) such that (25) has a meaning, is finite, and almost all the realizations {x(t, y°(t_0), ξ), η(t, y°(t_0)), ξ(t + θ) (−h ≤ θ ≤ 0)} h_1 > 0 and h_2 > 0 (or either h_1 > 0 or h_2 > 0) respectively (h_1 ≤ h, h_2 ≤ h).
In other words, assume that in the regulator B at the instant t in the closed interval [0, T] the values of the actual quantities x(t − h_1) and η(t − h_2), where η(t) is a random Markov function, are known. Also assume that the regulator B is capable of remembering up to the instant t the signal ξ(t + θ) worked out by it, with −h ≤ θ < 0. Denote the set of magnitudes x(t − h_1), η(t − h_2) and ξ(t + θ) (−h ≤ θ < 0) by y(t), and x(t_0 − h_1), η(t_0 − h_2), ξ(t_0 + θ) (−h ≤ θ < 0) by y°(t_0) respectively. The quantity y(t) makes it possible to compose a probability description of the plant A at the instant t. The quantities J_T (23) and J_∞ (24) with the chosen law of control may be regarded as functionals with respect to y°(t_0), that is,
M{ ∫_{t_0}^{T} ω[t, x(t), η(t), ξ] dt + ψ[x(T)] } = J_T[t_0, y°(t_0), ξ]   (25)

lim_{T→∞} J_T[t_0, y°(t_0), ξ] / (T − t_0) = J_∞[t_0, y°(t_0), ξ]   (26)
It is therefore reasonable in this case to seek the optimal control signal ξ° as a function of y(t), that is, in the form of a functional

ξ = ξ[t, y(t)]   (27)
Call admissible control signals the set of such functionals, sufficiently regular to give a meaning to the solution of (21) with ξ(t) of (27), and possibly constrained by supplementary restrictions arising from the statement of the problem (for instance, |ξ| ≤ 1). Designate the set of admissible control signals by the symbol S. Now the problem can be formulated.
dM{v[t, y(t)]}/dt + M{ω[t, x(t), η(t), ξ°]} = min_{ξ ∈ S} ( dM{v[t, y(t)]}/dt + M{ω[t, x(t), η(t), ξ]} ) = 0   (29)
for all y(t) belonging to Y and all t in the closed interval [0, T]. Then ξ°[t, y(t)] is the optimal control signal for Problem 4, and v[t_0, y°(t_0)] = min J_T[t_0, y°(t_0), ξ].
The solution of Problem 5 is obtained by passage to the
limit from the solution of Problem 4.
The results of applying the given criterion to a system
described by equations of an actual form are illustrated.
Consider Problem 5 for the system

dx_i/dt = Σ_{j=1}^{n} a_ij x_j + b_i ξ + φ_i   (30)

with the condition of minimum (26), where

J_T = M{ ∫_{t_0}^{T} [ Σ_{i,j=1}^{n} ω_ij(t) x_i(t) x_j(t) + ξ²(t) ] dt + Σ_{i,j=1}^{n} ν_ij x_i(T) x_j(T) }   (31)
The delays along both channels 1 and 2 are assumed to be equal to h > 0, and it is admitted that the initial deviations x°(t_0 − h) and η(t_0 − h) belong to the given region Y_0.
With sufficiently wide assumptions concerning the character of the Markov probability process η(t), and with the condition of full controllability of the system dx_i/dt = Σ a_ij x_j + b_i ξ, the functionals v[t, y] and ξ°[t, y] satisfying the criterion of Theorem 3 can be found, and the passage to the limit with T → ∞ can be carried out. Problems 4 and 5 can thus be solved. In addition the following result is valid.
Results
The optimal control signal for Problems 4 and 5 stated with conditions (30) and (31) has the form

ξ°[t, y(t)] = Σ_i c_i(t) x_i(t − h) + ν[t, η(t − h)] + ∫_{−h}^{0} g[t, θ] ξ[t + θ] dθ   (32)
The term ν is determined at every instant t with respect to the realized η(t − h), but to calculate it one must know the prediction M{η(τ)/η(t − h)} with τ > t − h. Here the functional v[t, y(t)] has the form of the sum of quadratic and linear functionals of x_i(t − h) and ξ(t + θ), with coefficients dependent on η(t − h).
Analysing the resulting solution, the following conclusion is arrived at: the optimal control signal ξ° chosen here at every instant t is the same as would be obtained in a deterministic system without delay of the feedback signals; however, here, instead of the known quantities x_i(t) of the deterministic system, their best mean-square predictions M{x_i(t)/x(t − h), η(t − h), ξ(t + θ) (−h ≤ θ < 0)} must enter into the control law, and the deterministic load η(τ) (τ > t − h) is likewise replaced by the mean prediction M{η(τ)/η(t − h)}.
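The conclusion above suggests a certainty-equivalence implementation: apply the deterministic feedback gains, but to the best mean-square prediction of the current state computed from the state measured a delay h earlier. The sketch below is a minimal illustration with a hypothetical scalar model and gain; it is not the paper's construction.

def predict_state(x_delayed, xi_history, a, b, dt):
    """Predict x(t) from x(t - h) by integrating the known model
    dx/dt = a*x + b*xi over the delay, using the stored controls xi(t + theta),
    -h <= theta < 0 (the white-noise disturbance has zero mean)."""
    x = x_delayed
    for xi in xi_history:                 # xi_history covers the last h seconds
        x = x + dt * (a * x + b * xi)
    return x

def control(x_delayed, xi_history, a=-1.0, b=1.0, k=2.0, dt=1e-2):
    """Deterministic gain k applied to the predicted (not the measured) state."""
    x_pred = predict_state(x_delayed, xi_history, a, b, dt)
    return -k * x_pred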
Case III. This case reduces naturally to the previous one,
and it is not considered individually.
When several of the cases analysed are combined in one
system, the statements of the problems, criteria of optimality
and results are combined correspondingly.
In conclusion it is observed that Case II can be included in the more general case when incomplete information is transmitted along the feedback channels 1 and 2. Indeed, it can be assumed that at the instant t there are applied to the regulator B signals y(t) and ζ(t), statistically connected with x(t) and η(t) (in Case II, {y(t), ζ(t)} = {x(t − h), η(t − h), ξ(t + θ)}), and an optimal control signal depending on these signals can be constructed. The foregoing reasoning and conclusions are generalized to this more general case. The quality of the process depends on how much the processes {y(t), ζ(t)} and {x(t), η(t)} are connected informationally, or, in other words, how far the processes {x(t), η(t)} are observable12 with respect to {y(t), ζ(t)}.
References
1 FELDBAUM, A. A. Optimal processes in automatic control
systems. Automat. Telemech. 14, No. 6 (1953)
2 PONTRYAGIN, L. S., BOLTYANSKII, V. G., GAMKRELIDZE, R. V., and
MISHCHENKO, E. F. A mathematical theory of optimal pro-
cesses. Fizmatgiz (1961)
3 ROZONOER, L. I. Pontryagin's Maximum Principle in the theory
of optimal systems. Automat. Telemech. 20, Nos. 10-12 (1959)
4 BELLMAN, R. Dynamic Programming. I.I.L. (1960)
5 LERNER, A. YA. Maximum high speed of automatic control
systems. Automat. Telemech. 15, No. 6 (1956)
6 LERNER, A. YA. Design Principles for High-speed Following
Systems and Regulators. 1961. Moscow; Gosenergoizdat
7 BELLMAN, R., GLICKSBERG, I., and GROSS, O. Some Aspects of the
Mathematical Theory of Control Processes. 1958. Project Rand
8 FELDBAUM, A. A. Calculating devices in automatic systems. Fizmatgiz (1959)
9 FILIPPOV, A. F. Some questions of the theory of optimal control.
Vestn. MGU No. 2 (1959)
10 KALMAN, R. E., and BERTRAM, J. E. Control systems analysis and
design via the 'Second Method' of Liapunov. Pap. Amer. Soc.
Mech. Eng., No. 2 (1959)
11 KALMAN, R. E. On the general theory of control systems. Auto-
matic and Remote Control. 1961, Vol. 2. London; Butterworths
12 KALMAN, R. E. New methods and results in linear prediction and
filtering theory. RJAS Tech. Rep. 61-1 (1961)
13 KULIKOWSKI, R. A. Bull. Acad. Polon. Sci., Série des sciences techniques, Vol. VII, Nos. 6, 11, 12 (1959); Vol. VIII, No. 4 (1960)
14 LA SALLE, J. Time optimal control systems. Proc. nat. Acad. Sci.,
Vol. 45, No. 4 (1959)
15 GIRSANOV, I. V. Minimax problems in the theory of diffusion
processes. Dokl. AN SSSR, Vol. 136, No. 4 (1960)
16 TSYPKIN, YA. Z. On optimal processes in pulsed automatic
systems. Dokl. AN SSSR, Vol. 136, No. 2 (1960)
17 BELLMAN, R., and KALABA, R. Theory of dynamic programming
and control systems with feedback. Automatic and Remote Control.
1961, Vol. 1. London; Butterworths
18 MERRIAM, K. U. Calculations connected with one class of optimal
control systems. Automatic and Remote Control. 1961, Vol. 3.
London; Butterworths
19 BUTKOVSKII, A. G., and LERNER, A. YA. Optimal control of systems with distributed parameters. Dokl. AN SSSR, Vol. 134, No. 4 (1960)
20 KRAMER, J. On control of linear systems with time lags. Inform.
Control, Vol. 3, No. 4 (1960)
21 KHARATISHVILI, G. L. The maximum principle in the theory of
optimal processes with time lags. Dokl. AN SSSR, Vol. 136,
No. 1(1961)
22 BELLMAN, R., and KALABA, R. Dynamic programming and
control processes. J. Bas. Engng. (March 1961)
23 LETOV, A. M. Analytical construction of regulators. Automat.
Telemech., Vol. 21, Nos. 4-6 (1960)
24 LETOV, A. M. Analytic construction of regulators; the dynamic
programming method. Automat. Telemech., Vol. 22, No. 4 (1961)
25 FELDBAUM, A. A. Information storage in closed systems of
automatic control. Izv. AN SSSR, Otdelenie tekhnicheskikh nauk.
Energet-automat., No. 4 (1961)
26 BELLMAN, R. Adaptive control processes. 1961. Project Rand
27 LIAPUNOV, A. M. The General Theory of Stability of Motion.
1950. Gostekhizdat
28 CHETAEV, N. G. Stability of Motion. 1956. Gostekhizdat
29 KRASOVSKII, N. N. Some Problems of the Theory of Stability of
Motion. 1960. Gostekhizdat
30 KRASOVSKII, N. N., and LIDSKII, E. A. Analytic construction of
regulators in systems with random parameters. Automat. Telemech.
Vol. 22, Nos. 9-11 (1961)
31 KRASOVSKII, N. N. A problem of tracking. Prikl. matemat. mekh.,
Vol. 26, No. 2 (1962)
32 KRASOVSKII, N. N. Analytic construction of an optimal regulator in a system with time lags. Prikl. matemat. mekh., Vol. 26, No. 1 (1962)
Figure 1. [Block diagram of the control system: plant A, signals z°, z(t), x − z, and channels (1), (2), (3)]
Problems of Continuous Systems Theory of
Extreme Control of Industrial Processes
A. A. KRASOVSKI
Many continuous industrial processes lend themselves to the
following plan. There is available some quantity n of adjustments
or controls of machines, apparatus, regulators securing an
industrial process. The flow of the industrial process and the
parameters depend on the coordinates of the adjusting or
control elements (adjustment parameters).
Together with the controlling adjusting element coordinates
the output parameters are affected by various disturbance factors
(change of material parameters, wear of machines and tools,
temperature and moisture variations and other factors).
The output parameters are measured continuously or discretely (but with sufficiently small sampling intervals) by special measuring devices, the output-parameter information transmitters (Figure 1). Influenced by disturbance factors and also by random variations of the adjustment parameters, the output parameters are subjected to continuous variations.
Even though a practically ideal adjustment of the machine
system, securing the industrial process, is initially attained, after
some time the disturbance factors will bring forth considerable
changes in the output parameters. In order to prevent the drop-
ping out of the output parameters from the established tolerances
(scrap output), adjustment and tuning of the machine system is
necessary. Various means of automation of these operations are
possible. If it is precisely known which parameter is affected, and to what extent, by one or another controlling adjusting element, the usual feedback principle may be used (regulation by deviation). For this it is necessary first to smooth the
results of measurements in order to eliminate overshoots
of the system in the presence of small, random deviations within
the tolerance limits. Methods of such automatic processing of
information may be set up, based on the widely utilized methods
of non-automated statistical control2. The measured and
smoothed signals of output parameter deviations are conveyed
to the performing arrangements and cause changes in the
controlling adjusting element coordinates. Such systems are sometimes called statistical automata3.
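A minimal sketch of such automatic processing of measurements follows; the smoothing constant, gain and tolerance are hypothetical, not prescribed by the paper. The measured output deviation is smoothed exponentially, and a correction is issued only when the smoothed value leaves the tolerance band.

def statistical_automaton(measurements, tolerance, gain=0.1, alpha=0.05):
    """Smooth the measured output-parameter deviations and return the adjustment
    commands sent to the controlling element (zero while within tolerance)."""
    smoothed = 0.0
    commands = []
    for y in measurements:
        smoothed += alpha * (y - smoothed)       # exponential smoothing
        if abs(smoothed) > tolerance:            # act only on significant drift
            commands.append(-gain * smoothed)
        else:
            commands.append(0.0)
    return commands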
Undoubtedly the introduction of statistical automata will
prove to be an important step in the automation of industry.
However, a necessary condition of their application is sufficiently complete a priori information about the characteristics of the industrial process. In many cases this information is absent, and even if it is available during the initial period of the system's adjustment, it loses validity in time owing to the change in the properties of the industrial process.
Under these conditions the application of usual, non-self-
adjusting control loops (statistical automata) becomes impos-
sible. In these cases it is expedient to utilize an extremal control.
The present work is devoted to the investigation of some
possible schemes of extremal control systems with continuous
industrial processes and some questions of the theory of these
systems. It is a development of earlier work by the author1.
For the realization of extremal control a quality
output (production) index Q is selected, having extrema at
wanted values of product parameters. Such an index may be,
for example, the sum of the squares of deviation of the output
parameters from the standard values. The quality index Q
is determined by a computer (the calculating machine in the diagram of Figure 1) based on information-transmitter data on current
values of output parameters. To secure the basic function of the
system, maintenance of the quality index at the extremum
level, search oscillations are necessary. Natural high frequency
random oscillations, as well as artificially produced oscillations
of controlling elements, may be employed as search oscillations.
Naturally the first method is preferable, since it is not linked
with any increase of high frequency fluctuations of the pro-
duction parameters.
In order to make use of natural oscillations as search oscilla-
tions, it is necessary to measure them. The measurement of
search oscillations is done by information transmitters for these
oscillations (Figure 1), which measure controlling element
oscillations and disturbance effects transmitted to them.
The measured search oscillations are transmitted to a
simulator or a dynamic model of the industrial process. The
purpose of the simulator is to transform the search oscillations
in the same manner as these oscillations are transformed in a
real process. For many industrial processes the simulator may
be carried out in the shape of a delay line.
The output signals of the simulator are transmitted to the
multiplying elements, to the other entrances of which is trans-
mitted the computer signal which is proportional to the current
value of the production quality index. The output values of the
multiplying element are smoothed by the low frequency filters
and are transmitted to the entrances of the control devices
which move the controlling element.
If the quality index deviates from the extremum value, a component correlated with the search fluctuations appears at the computer output. Non-zero mathematical expectations of the multiplying-element signals then appear. The slowly changing signals are separated out by the low-frequency filters and actuate the control devices. The controlling elements act on the production parameters in the direction approaching the extremum of the quality index.
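The loop just described can be sketched in discrete time as follows. This is a toy one-parameter example with an artificial sinusoidal search oscillation and a quadratic quality index; the gains, filter constant, delay and optimum value are illustrative only and are not taken from the paper.

import numpy as np

def extremum_seeking(steps=20000, dt=1e-3, x_opt=3.0, lag=50,
                     amp=0.05, omega=40.0, alpha=0.02, gain=200.0, seed=0):
    """One-channel correlator-type extremum controller.  The quality index
    Q = -(X - x_opt)^2 responds to the control parameter with a delay of
    `lag` steps; the 'simulator' applies the same pure delay to the search
    oscillation before the multiplying element."""
    rng = np.random.default_rng(seed)
    X_hist = np.zeros(steps)        # full control parameter sent to the plant
    dX_hist = np.zeros(steps)       # search component (for the delay simulator)
    x_work, v_filt = 0.0, 0.0
    for n in range(steps):
        dX = amp * np.sin(omega * n * dt)          # search oscillation
        X_hist[n] = x_work + dX
        dX_hist[n] = dX
        if n >= lag:
            Q = -(X_hist[n - lag] - x_opt) ** 2 + 0.01 * rng.normal()
            v = Q * dX_hist[n - lag]               # multiply by delayed search signal
            v_filt += alpha * (v - v_filt)         # low-frequency filter
            x_work += dt * gain * v_filt           # integrator drives the control
    return x_work

print(extremum_seeking())   # approaches x_opt = 3.0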
The values of the control parameters, together with the disturbance effects transmitted to them, are designated X_v (v = 1, 2, ..., n). Each resulting control parameter has three components:
X_v = X_v* + δX_v + δX_vf
Here the working components X_v* are the output values of the extremum-controlling portion of the system; δX_v are the search components, for which it is expedient to utilize high-frequency controlled effects transmitted to the control parameters; and δX_vf are the uncontrolled disturbance effects transmitted to the control parameters.
The current value of the production quality index is in general a function of the indicated control parameters and of the disturbance effects f_1, ..., f_m not transmitted through the control parameters.
When the transient process characteristics are described sufficiently accurately by time delays, the current value of the production quality index is expressed as a function of the preceding values of the indicated control parameters and disturbance effects:

Q = F[X_1(t − τ_1), ..., X_n(t − τ_n), f_1, ..., f_m]   (1)
The selection of the composition of the control parameters must conform to the following condition. To each set of constant control-parameter values must correspond a definite (to within the level of the noise) set of production-parameter values. In other words, in a static regime and in the absence of noise a single-valued conversion of control parameters into production parameters must be realized. It should be noted that no mutually single-valued (one-to-one) conversion is required, so that the number of control parameters may greatly exceed the number of controlled production parameters.
By virtue of the single-valued conversion, to each extremal function of the production parameters there corresponds an extremal function of the control parameters.
As agreed, the production quality index is an extremal function of the production parameters. Therefore function (1) is also extremal in relation to the control parameters X_1, X_2, ..., X_n.
Adjustment-loss Time
Assume that a process having unchanged, fixed working components of the control parameters X_v* is under investigation, and assume also that, by the initial adjustment, it was possible at some time t = t_0 to attain the extremum value of the production quality index. Then, under the influence of disturbance factors, the production quality index will in time deteriorate spontaneously, in spite of the constancy of the control coordinates (Figure 2). At the expiration of a time T_1 the quality index will go outside the permissible limits. The disturbance effects are random functions of time or random values, although in some individual applications their mathematical expectations may dominate the centred random components.
The change of the production quality index with time, Q(t), is also a random function of time, known to be non-stationary for a process with fixed adjustment. So, repeating the above test, one gets new realizations of Q(t) and new time values T_i (Figure 2).
The overall adjustment-loss time with respect to the quality index Q is defined as the mathematical expectation M(T_i) of the time intervals T_i. Thus the overall adjustment-loss time expresses the mean value of the time interval after which the production quality index of the industrial process with a fixed adjustment goes outside the permissible limits.
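As an illustration, the overall adjustment-loss time M(T_i) can be estimated by simulating many realizations of Q(t) under fixed adjustment and averaging the first times at which Q leaves the permissible limits. The random-walk drift model, the limits and the parameters below are hypothetical, not the paper's.

import numpy as np

def mean_adjustment_loss_time(q_limit=1.0, drift=0.02, noise=0.05,
                              dt=0.1, t_max=1000.0, n_runs=500, seed=0):
    """Estimate M(T_i): mean first time at which the quality-index deviation,
    modelled here as a random walk with drift, exceeds the permissible limit."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_runs):
        q, t = 0.0, 0.0
        while abs(q) < q_limit and t < t_max:
            q += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        times.append(t)
    return np.mean(times)

print(mean_adjustment_loss_time())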
The adjustment-loss time, understandably, depends on the nature of the industrial process and on its level of automation by automatic systems. If the overall adjustment-loss time is great, then non-automatic, manual control is not difficult and there is no need to use a complex self-adjusting system. If the overall adjustment-loss time is small, then a person is unable to secure the adjustment even with appropriate data transmitters, and self-adjustment becomes necessary.
It should be noted that the higher the speed of the industrial
process and the stricter the demands on the quality of production,
the smaller is the overall adjustment-loss time. Acceleration
of the industrial processes and stepping up of demands on the
quality of production are inherent characteristics of technical
progress. Therefore, the application of self-adjusting control
systems of industrial processes has a broad prospect.
Equations of Extreme Control Processes
It is assumed that, in the vicinity of the extremum point serving as the working region of the system under consideration, the quality index (1) is approximated with sufficient accuracy by a quadratic form of the preceding values of the control parameters and by an additional term δQ_f expressing the influence of the disturbing effects f_1, ..., f_m:
Q(t) = Q_I + (1/2) Σ_{i,j=1}^{n} a_ij ΔX_i(t − τ_i) ΔX_j(t − τ_j) + δQ_f,   a_ij = a_ji = ∂²Q/∂X_i ∂X_j   (2)

here
ΔX_v = X_v − X_vI = ΔX_v* + δX_v + δX_vf

are the complete deviations of the control coordinates (parameters), ΔX_v* = X_v* − X_vI are the working deviations of the control coordinates, and Q_I is the extremum value of the
quality index. In the case when the quality-index computer does not itself perform smoothing (smoothing is secured only by subsequent elements of the circuit) and the production-parameter measuring instruments are practically inertialess, or their inertia is accounted for in the values of the time delays τ_j, the output value of the computer equals:
U(t) = Q(t) + δQ_n(t)

Here δQ_n(t) is the component created by the errors of the production-parameter meters and the errors of the computer.
Thus

U(t) = Q_I + (1/2) Σ_{i,j=1}^{n} a_ij ΔX_i(t − τ_i) ΔX_j(t − τ_j) + δQ   (3)

where

δQ = δQ_f + δQ_n
The value U(t) in the multiplying elements of the synchronous detectors (correlators) is multiplied by the search signals δX_v displaced in time in the delay simulator. The errors in delay simulation are designated δτ_v.
To the second inputs of the multiplying elements are
transmitted the values δX_v(t − τ_v − δτ_v), and the output signals of these elements equal V_v = U(t)·δX_v(t − τ_v − δτ_v).
The linear portion of the controlling system, without any loss of generality, is divided into a set of filters and integrating elements (Figure 1). The output working coordinates equal

X_k* = Σ_{v=1}^{n} (1/D) W_kv(D) V_v
Here W_kv is the matrix of the transfer functions at low frequencies. Thus

D X_k* = D ΔX_k* + D X_kI = Σ_{v=1}^{n} W_kv(D) V_v,   D = d/dt

or

D ΔX_k* = Σ_{v=1}^{n} W_kv(D)[U(t) δX_v(t − τ_v − δτ_v)] − D X_kI
Utilizing expression (3) for U(t), one finds

D ΔX_k* = (1/2) Σ a_ij W_kv(D){[ΔX_i*(t − τ_i) + δX_i(t − τ_i) + δX_if(t − τ_i)] [ΔX_j*(t − τ_j) + δX_j(t − τ_j) + δX_jf(t − τ_j)] δX_v(t − τ_v − δτ_v)}
+ Σ_v W_kv(D)[(Q_I + δQ) δX_v(t − τ_v − δτ_v)] − D X_kI   (k = 1, 2, ..., n)   (4)
Summation by indices i, j, v is carried out within the limits
from 1 to n.
Qualitative Analysis of Extremum Control Processes
Quasi-stationary Regime
The quality demand of an extremum control process reduces
to the following. With considerable initial deviations from
extremum the state point must move to the extremum as
smoothly as possible (without much overshoot). In a steady
operation the state point must stay sufficiently close to the
extremum.
Let eqn (4) be converted into:

D ΔX_k* = Σ a_ij W_kv(D)[ΔX_i*(t − τ_i) δX_j(t − τ_j) δX_v(t − τ_v − δτ_v)]
+ (1/2) Σ_v W_kv(D) Σ_{i,j} a_ij ΔX_i*(t − τ_i) ΔX_j*(t − τ_j) δX_v(t − τ_v − δτ_v)
+ Σ a_ij W_kv(D)[ΔX_i*(t − τ_i) δX_jf(t − τ_j) δX_v(t − τ_v − δτ_v)]
+ δφ_k − D X_kI   (5)
here

δφ_k = (1/2) Σ a_ij W_kv(D){[δX_i(t − τ_i) + δX_if(t − τ_i)] [δX_j(t − τ_j) + δX_jf(t − τ_j)] δX_v(t − τ_v − δτ_v)}
+ Σ_v W_kv(D)[(Q_I + δQ) δX_v(t − τ_v − δτ_v)]   (6)
The values δφ_k may be treated as the effect of errors, noises and search components, brought to the outputs of the filters of the synchronous detectors, provided there are no working deviations (ΔX_i* = 0). These functions do not depend on the working deviations (it is assumed that δQ does not depend on the working deviations) and on the whole can only obstruct the movement of the state point to the extremum.
Thus δφ_k always plays the role of a disturbance effect and it is expedient to decrease it as much as possible. If the search components δX_v have constant constituents then, as seen from expression (6), it is impossible to decrease δφ_k indefinitely by any increase of the time constants of the filters of the synchronous detectors. Indeed, according to (6), the constant components δX_v0 will cause deviations at the outputs of the synchronous detectors
(1/2) Σ_{i,j,v} a_ij W_kv(0) δX_i0 δX_j0 δX_v0 + Σ_v W_kv(0)(Q_I + δQ) δX_v0
where at least part of the transfer coefficients W_kv(0) is known to differ from zero, since otherwise the extremum-control circuit is ineffective. Thus it is expedient to secure zero values of the constant components of the search constituents, i.e. the centring of the search oscillations. This is easily attained by installing high-frequency filters at the outputs of the search-oscillation pickups.
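A first-order discrete high-pass filter of this kind can be sketched as follows; the cut-off constant and test signal are illustrative only and are not specified in the paper. The filter removes the constant and slowly varying components, so that the search constituent transmitted onward is centred.

import numpy as np

def high_pass(signal, dt, tau=0.05):
    """First-order high-pass filter: subtract a slow low-pass estimate of the
    mean, removing the constant component of the measured search oscillation."""
    y_lp = signal[0]
    out = np.empty_like(signal, dtype=float)
    for n, x in enumerate(signal):
        y_lp += dt / tau * (x - y_lp)   # slow low-pass estimate of the mean
        out[n] = x - y_lp               # centred high-frequency constituent
    return out

t = np.arange(0, 2, 1e-3)
x = 1.5 + 0.1 * np.sin(200 * t)         # constant offset plus search oscillation
print(abs(high_pass(x, 1e-3).mean()) < 0.05)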
In particular, an ideal high-frequency filter separates, from the input value, the high-frequency constituent not correlated with the remaining part of the input value. This is illustrated by the graphs in Figure 3, showing the spectral density curve S(ω) of the input function, which is assumed to be stationary and ergodic, and the amplitude-frequency characteristic A(ω) of an ideal high-frequency filter.
An ideal filter separates the high-frequency constituent with a spectral density S_δ(ω), Figure 3(b), not correlated with the filtered-out component (spectral density S_w(ω)), since the mutual spectral density of these components equals zero.
If the data transmitter measures the full input coordinate of the system X_v = X_v* + δX_v + δX_vf, then the ideal high-frequency filter in stabilized operation separates the high-frequency constituent δX_v not correlated with the constituent X_v* + δX_vf. It should be noted that a stationary X_v* may be expected only in a stabilized regime of system operation. In transient regimes X_v* is a non-stationary random function, and even with the use of ideal filters the search components prove to be to some extent correlated with the working components X_v*.
However, as is seen from the following, in the present system (perhaps even more than in other continuous extremum systems) a quasi-stationary regime is advantageous. In a quasi-stationary regime the transient process times are great compared with the correlation times of the search components. When a quasi-stationary condition is secured and high-frequency filters close to ideal are applied, the search components may be considered, with a high degree of accuracy, not correlated with X_v*, both in the stabilized and in the transient condition of the system.
Based on the above, the search components δX_v are assumed to be centred random functions not correlated with ΔX_v*, δX_vf, δQ.
The other members of the right-hand sides of eqns (5) are now investigated. The second member of the right-hand side may be rewritten in the form

(1/2) Σ_v W_kv(D)[F* δX_v(t − τ_v − δτ_v)]   (7)

where

F* = Σ_{i,j} a_ij ΔX_i*(t − τ_i) ΔX_j*(t − τ_j)
In view of the definiteness of sign of this function of the working deviations, this member cannot facilitate the organization of movement to the extremum.
Thus members (7) play the part of impeding effects and it is expedient to reduce their influence to minimum values. The only acceptable means of reducing the effect of these members is the raising of the frequencies (decreasing the correlation times) of the search components at given transient-process times of the closed loop or, inversely, increasing the transient times at given correlation times of the search components. Either means leads to a quasi-stationary regime. In a quasi-stationary regime the effects of members (7) can be neglected. The following members of eqns (5)
Σ_{i,j,v} a_ij W_kv(D)[ΔX_i*(t − τ_i) δX_jf(t − τ_j) δX_v(t − τ_v − δτ_v)]   (8)
although depending linearly on the working deviations, also play the role of impeding effects.
In fact, as agreed, δX_jf and δX_v are not correlated and δX_v are centred. Therefore the mathematical expectations of the products δX_jf(t − τ_j) δX_v(t − τ_v − δτ_v) equal zero. Thus the expressions in the square brackets represent linear forms of the working deviations whose coefficients are centred 'high-frequency' random functions of time. These members can only increase the scattering of the trajectories of the state point during its movement to the extremum.
In the quasi-stationary regime, because of the intensive suppression of the high-frequency constituents, members (8) may be neglected.
Turning to the investigation of the first member of the right-hand sides of eqns (5), it is noticed that the product of the search constituents may be represented as the sum of a cross-correlation (for j ≠ v) or auto-correlation function and a centred random function. Moreover, if the search constituents are stationary and stationarily connected, the correlation functions depend only on the difference of the arguments:
δX_j(t − τ_j) δX_v(t − τ_v − δτ_v) = R_jv(τ_v − τ_j + δτ_v) + Δ_jv(t)

where Δ_jv(t) are centred random functions. The members

Σ_{i,j,v} a_ij W_kv(D)[ΔX_i*(t − τ_i) Δ_jv(t)]   (9)
play the same kind of negative role as members (8). In a quasi-stationary regime the influence of these members may be decreased to the same extent as the influence of members (8), since the correlation times of the functions Δ_jv(t) are comparable with the correlation times of the functions δX_j, δX_v. In a quasi-stationary regime the influence of members (9) is therefore neglected.
And so the general equations (5) give place to the following equations of the quasi-stationary regime of the system under consideration:

D ΔX_k* = Σ_{i,j,v} a_ij R_jv(τ_v − τ_j + δτ_v) W_kv(D) ΔX_i*(t − τ_i) + δφ_k(t) − D X_kI   (10)
(k = 1, 2, ..., n)
These general equations of a quasi-stationary regime are
simplified in concrete, particular cases.
First of all it is noted that the correlation times of the search signals are small, owing to the presence of the high-frequency filters. Therefore, for the typical case when the delay times τ_i are not identical, it may be assumed that

R_jv(τ_v − τ_j + δτ_v) = 0   for j ≠ v

for any g(t). In other words, the
controlled coordinate x(t) will reproduce any continuous g(t)
without static error, and the requirements for the operator N(p),
determined by condition (12), will be absent. If the function g(t)
has a discontinuity at some moments, then slight dynamic
errors will appear at these moments. An attempt will be made
to solve this problem, using the principles of construction of
variable-structure automatic control systems12.
Conditions of Invariance in Combined Tracking Systems with
Variable Structure
In the domain G of an n-dimensional space ε_1, ..., ε_n let the motion of a dynamic system be described by a system of non-homogeneous differential equations with a discontinuous right-hand side

dε_i/dt = f_i(ε, g(t))   (13)

Here

f_i = ε_{i+1}   (i = 1, 2, ..., n − 1),
f_n = −Σ_{i=1}^{n} a_i ε_i + Σ_{i=1}^{n} Ψ_i g_i(t)

where

Ψ_i = b_i for (Σ_{j=1}^{n} c_j ε_j) g_i(t) > 0,
Ψ_i = b_i* for (Σ_{j=1}^{n} c_j ε_j) g_i(t) < 0

Let the hyperplane S (Σ_{j=1}^{n} c_j ε_j = 0) divide the space into the regions G+ (Σ c_j ε_j > 0) and G− (Σ c_j ε_j < 0), in which the vector
j=1
function f (E, R(t)) of system (13) is constrained and for any
constant value of time t on the approach to S from G+ and
G- there exist its limit values f+ (E, g(t)) and!- (, g(t)). On
the approach of the solution Z(t) to some domain U S let
the vector functions I+ and f- be directed towards the hyper-
plane S (fj > 0, friv' < 0, where f and fy- are the projections
of the vectors f+ and!- on to the normal to the hyperplane S,
directed from G- to G+). Then, when (t) hits U there arises
the so-called sliding mode and the solution of system (13) does?
not depend on, (11, bi, bi*, g1 (t). In fact in this case, as shown by
Filippov13, in the domain U- there exists a solution E(t) of
system (13), and the vector d E / dt = f ? (E, g(t)), where f? =
(f?, f), lies in the hyperplane S and is determined by the
values of the vector functions f+ and f-.
From the condition that f°(ε, g(t)) ∈ S there follows a linear relationship between the components of the vector f°:

Σ_{j=1}^{n} c_j f_j° = 0   (14)

where f_j° is the jth component of the vector f°, whence

f_n° = −(1/c_n) Σ_{j=1}^{n−1} c_j f_j°   (15)
(15)
Hence the solution of system (13) for E(t) a U coincides with
the solution of the system of similar homogeneous differential
equations
Here
E=
, ? ? ? , en)
n-1
=Ej+i(i= 1, 2, ? ? ? , n-1), f? E Ci8J+i
j=1
C5 are constants.
Obviously the solution of system (16) does not depend on
b1, b1*, gi(t). Use will be made of this property of the solution
of the system of non-homogeneous differential equations with
a discontinuous right-hand side for the construction of a com-
bined tracking system with variable structure.
(16)
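A minimal numerical sketch of this sliding-mode behaviour for a second-order error equation follows; the coefficients, gains and disturbance g(t) are chosen arbitrarily for illustration. The feedback gain switches on the sign of s·g(t), with s = c1·e1 + e2, and once s reaches zero the motion is governed by the homogeneous relation de1/dt = −c1·e1, independently of the disturbance.

import numpy as np

def variable_structure_sim(c1=2.0, b_plus=8.0, b_minus=-8.0,
                           g=lambda t: np.sin(t), T=10.0, dt=1e-4):
    """Second-order system de1/dt = e2, de2/dt = -a1*e1 - a2*e2 + psi*g(t),
    with psi switched between b_plus and b_minus on the sign of s*g(t),
    s = c1*e1 + e2.  In the sliding mode s = 0 the error decays as exp(-c1*t)."""
    a1, a2 = 1.0, 1.0
    e1, e2 = 1.0, 0.0
    for n in range(int(T / dt)):
        t = n * dt
        s = c1 * e1 + e2
        psi = b_minus if s * g(t) > 0 else b_plus   # discontinuous structure change
        de1 = e2
        de2 = -a1 * e1 - a2 * e2 + psi * g(t)
        e1 += dt * de1
        e2 += dt * de2
    return e1, e2

print(variable_structure_sim())   # both components decay towards zero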
† In the case (Σ_{j=1}^{n} c_j ε_j) g_i(t) = 0, Ψ_i = b_i for (Σ_{j} c_j ε_j) g_i(t) → +0 and Ψ_i = b_i* for (Σ_{j} c_j ε_j) g_i(t) → −0.
Let the structure, selected in a definite way, of the open-loop cycle of a combined tracking system [Figure 3(b)] change stepwise on some hyperplane S = Σ_{i=1}^{n} c_i ε_i = 0 in such a way that the movement of this servosystem is described by a system of non-homogeneous differential equations with a discontinuous right-hand side (13), where

Ψ_i = K_i for (Σ_{j} c_j ε_j) g_i(t) > 0,
Ψ_i = K_i* for (Σ_{j} c_j ε_j) g_i(t) < 0   (i = 1, 2, ..., n)

→ ∞, therefore converges. Using
the known theorem of the convergence of series with positive
terms, one concludes that
lim_{n→∞} Φ(x[n]) ( x[n] − (1/k) Φ(x[n]) ) = 0

Hence, by virtue of the conditions (1), it follows that

lim_{n→∞} x[n] = 0   (22)

Thus a pulse control system which has a stable pulse linear part and a non-linear characteristic Φ(x), and which satisfies the conditions (1), will be absolutely stable if the real part of the analogue of Popov's function is positive, i.e. if

Re Π*(jω̄) = Re W*(jω̄) + 1/k > 0   (23)
ψ_N[n] = x_N[n] − k⁻¹ φ_N[n]   (11)

x_N[n] = f[n] − Σ_{m=0}^{n} w[n − m] φ_N[m]   (12)

It is obvious that for 0 ≤ n ≤ N, x_N[n] = x[n], where x[n] is the solution of eqn (3).
Now the following expression is formed

ρ_N = Σ_{n=0}^{∞} φ_N[n] ψ_N[n]   (13)

which, having regard to (10) and (11), is equal to

ρ_N = Σ_{n=0}^{N} Φ(x[n]) ( x[n] − k⁻¹ Φ(x[n]) )   (14)

According to the Liapunov–Parseval equality9, eqn (13) can also be represented as

ρ_N = (1/2π) ∫_{−π}^{π} φ*_N(jω̄) ψ*_N(−jω̄) dω̄   (15)

where

φ*_N(jω̄) = D{φ_N[n]}   (16)

and, by virtue of (11) and (12),

ψ*_N(jω̄) = D{ψ_N[n]} = f*(jω̄) − [W*(jω̄) + 1/k] φ*_N(jω̄)   (17)

These spectral functions exist if conditions (10) and (8) are fulfilled.
The condition of stability (23) determines the magnitude of the interval (0, k) which includes the non-linear characteristic Φ(x) for which the pulse system is absolutely stable. This condition is sufficient.
Frequency Criteria of Absolute Stability
To formulate the criteria of stability of a pulse control
system one introduces the concept of a static gain of the non-
linear element
S(x) = Φ(x)/x   (24)
which is the slope of a straight line passing through the origin and the point of the non-linear characteristic for a specified value of x. The maximum S_max and the minimum S_min static gains are determined by the rays of a sector which is tangential to the characteristic (Figure 3). A non-linear pulse control system in which the non-linear element is replaced by a linear element with some fixed gain k is said to be a linearized pulse control system. For a linearized pulse system to be stable, by analogy with the Nyquist criterion7, it is necessary and sufficient that the frequency characteristic of the linear pulse part should not embrace the point (−1/k, j0). It will be said that a linearized system is obviously stable if the frequency characteristic of the linear pulse part does not intersect the straight line −1/k.
Then, according to the condition of stability (23), the frequency criterion of absolute stability of a non-linear pulse control system can be formulated in the following way. A non-linear pulse control system with its characteristic belonging to the interval (0, k) will be absolutely stable if the linearized pulse system corresponding to it is obviously stable, or if the frequency characteristic W*(jω̄) of the linear pulse part does not intersect the straight line −1/k (Figure 4).
The greatest value k = k_0, which determines the span of the interval (sector) in which the non-linear characteristic is located, is determined by drawing the vertical tangent to W*(jω̄).
The difference k_0 − S_max characterizes the margin of stability.
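A numerical sketch of this construction follows; the first-order linear pulse part used in the example is purely illustrative. The frequency characteristic W*(jω̄) is evaluated on a grid, its most negative real part found, and k_0 taken as the reciprocal of that magnitude, so that Re W*(jω̄) + 1/k > 0 holds for all k < k_0.

import numpy as np

def k0_from_frequency_characteristic(num, den, n_points=2000):
    """Largest sector gain k0 such that Re W*(exp(j*wbar)) + 1/k0 >= 0,
    where W*(z) = num(z)/den(z) is the pulse transfer function of the
    linear pulse part (coefficients in descending powers of z)."""
    wbar = np.linspace(0.0, np.pi, n_points)
    z = np.exp(1j * wbar)
    W = np.polyval(num, z) / np.polyval(den, z)
    worst = W.real.min()
    return np.inf if worst >= 0 else -1.0 / worst

# Example: W*(z) = 0.5 / (z - 0.5), a stable first-order pulse part; k0 = 3
print(k0_from_frequency_characteristic([0.5], [1.0, -0.5]))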
The stability criterion of a pulse control system can also be formulated with reference to the frequency characteristic K*(jω̄) of the closed linearized pulse control system. Selecting k = k_0/2, then

K*(jω̄) = (k_0/2) W*(jω̄) / [1 + (k_0/2) W*(jω̄)]   (25)
According to the usual constructions of the frequency characteristic of a closed loop from the frequency characteristic of an open loop, for an obviously stable linearized pulse control system with k = k_0/2 one has

|K*(jω̄)| ≤ 1   (26)
Thus a non-linear pulse control system with its characteristic belonging to the interval (0, k_0) will be absolutely stable if the frequency characteristic of the closed linearized pulse control system K*(jω̄) with gain k_0/2 does not exceed unity in absolute value.
One notes that the frequency criteria are also applicable in those cases when the continuous part contains delay elements or elements with distributed constants.
The frequency criteria of absolute stability can also be expressed in analytic form. The first criterion is closely related to the problem of Caratheodory, whilst the second criterion is closely associated with Schur's problem in the theory of analytic functions.
The analytic form of the criteria is considered in a special paper. It will not be considered here since, moreover, the use of frequency criteria is the simplest way of elucidating various general properties of non-linear pulse control systems.
Generalization of the Stability Criteria
Non-linear pulse control systems which contain a stable linear pulse part have been considered above. Now suppose that the linear pulse part is neutral or unstable. This implies that its transfer function W*(q) has poles on the imaginary axis, in particular at the origin, or in the right-hand half-band Re q > 0, −π < Im q ≤ π. Since the determined sufficient conditions must hold for any non-linear characteristic which belongs to the interval (0, k_0), they must also hold for a linear characteristic which belongs to this interval. But for sufficiently small gains z of this linear characteristic, a closed pulse control system will behave like the open pulse control system corresponding to the linear pulse part, i.e. it will be neutral or unstable. Therefore, for instances of a neutral or unstable linear pulse part it is necessary to impose additional limitations on the minimum static gain S_min. Let us elucidate these limitations.
Given a proportional feedback with the coefficient z across the linear pulse part (Figure 6), one supposes that the structure of the linear pulse part is such that for a finite z < S_min the closed linear pulse part is stable. The frequency criteria of stability are then applicable to this non-linear pulse control system, but the role of the frequency characteristic of the linear pulse part W*(jω̄) will now be played by the frequency characteristic of the closed pulse control system, which is a new linear pulse part equal to

W_z*(jω̄) = W*(jω̄) / [1 + z W*(jω̄)]   (27)
But the block diagram of a non-linear pulse control system [Figure 6(a)] can easily be converted to the form of Figure 6(b), where f[n] is now the response of the closed pulse control system, and the non-linear characteristic is equal to

Φ(x) − zx   (28)
However, since this characteristic must satisfy the conditions (1),

z < Φ(x)/x < k

i.e.

S_min > z   (29)
Thus the formulation of the frequency criterion remains unchanged. Only the characteristic of the non-linear element must now belong to the sector (z, k), and the frequency characteristic of the linear pulse part W_z*(jω̄) is determined by the expression (27).
One notes that if the linear pulse part is neutral and its transfer function W*(q) has only one zero pole, whilst the rest of the poles have negative real parts, then z in eqn (27) can be arbitrarily small, and for this case one has

W_z*(jω̄) ≈ W*(jω̄)   (30)

i.e. in this case there is no need to construct W_z*(jω̄) from W*(jω̄) on the basis of relation (27).
If the non-linear characteristic Φ(x) goes outside the limits of the sector (z, k) for x > x_0, which is usually the case for non-linear characteristics of the saturation type, the frequency criterion of stability guarantees stability for deviations of the error not exceeding x_0.
The frequency criteria of stability also hold for those cases when the non-linear characteristic (or the gain of the linear pulse part) is a function of the time n, if Φ(x, n) for any n ≥ n_0 satisfies the conditions (1), i.e. if it belongs to the sector (0, k_0) or, in the case of a neutral or unstable linear pulse part, to the interval (z, k_0).
The Necessary and Sufficient Conditions of Absolute Stability for
Some Non-linear Control Systems
Frequency criteria of absolute stability determine the
sufficient conditions of absolute stability. It is obvious that in
those cases when these sufficient conditions of absolute stability
coincide with the necessary and sufficient condition of stability
of linearized pulse control systems, they also become necessary
conditions of absolute stability. Let us define the class of non-
linear pulse control systems for which the conditions of absolute
stability are necessary and sufficient. This problem was first posed
by Aizerman11 for continuous control systems, and slightly later by Letov. The solution of this problem is of importance since
it permits reduction of the investigation of the absolute stability
of non-linear pulse control systems to the well-known investiga-
tion of the stability of linear pulse control systems.
It follows directly from the formulation of the frequency
criterion that this class of non-linear pulse control systems
includes those for which the obvious stability of the linearized pulse control systems coincides with their stability. The frequency characteristics of these latter pulse control systems W*(jω̄) [or W_z*(jω̄)] must have the form shown in Figure 7(a) and (b).
The frequency criterion of absolute stability determines the
necessary and sufficient conditions for all non-linear pulse con-
trol systems of the first order (with amplitude- or pulse width-
or time-modulation), and also for non-linear pulse control systems
of any order whose frequency characteristic W*(jω̄) has the
largest real part in absolute value at the boundary frequency.
It is worthwhile pointing out that for this class of system the absence of periodic conditions according to the improved method of harmonic balance12 testifies to their stability. For digital automatic systems, as shown elsewhere13, the determination of periodic conditions with a relative frequency ω̄ = π entails drawing a straight line with a slope −1/W*(jπ) in the plane of the non-linear characteristic (Figure 7). If the maximum real part of W*(jω̄) in absolute magnitude is attained at ω̄ = π (which always occurs for first-order pulse control systems), the condition requiring the absence of periodic conditions with a relative frequency ω̄ = π coincides with the condition of absolute stability.
Estimation of the Degree of Stability
For the simplest estimate of the quality of the behaviour of a non-linear pulse control system, one will use the concept of the degree of stability, which characterizes the damping speed of the process.
For this purpose, instead of the auxiliary functions (10) and (11), the following functions are introduced:

φ_N[n] = Φ(x[n]) e^{γn} for 0 ≤ n ≤ N, and 0 for n > N
4. The separation of the function F with respect to the contour Γ gives the functions F = F_+ + F_−, where the sign F_+ denotes the absence of poles of the function in the region D_−, and the sign F_− denotes their absence in the region D_+.
5. The representation (the transfer function) of the controlled plant will be denoted by G = P/Q, where P and Q are polynomials in z; the representation (the programme) of the pulse unit by W = C/D, where C and D are polynomials in z; the representation of the pulse system as a whole by H; and the representations of the input and the output are denoted X and Y respectively.
Analytical Conditions for the Efficiency of Pulse Systems
By considering the mathematical model of an actual physical
system, one is deliberately making a differentiation between the
'coordinates' of the system, the changes in which are reflected
by the given model, and its 'parameters' which are determined
as fixed numbers which, in the given model, form the basis for
calculations. However, the practice of construction of auto-
matic systems shows that the uncontrollable discrepancies
between the calculated and the actual parameters may be the
cause of profound disparity in the calculated and the actual
behaviour of the system. The failure to take this fact into
account will sometimes lead to the construction of inefficient
systems. The majority of automatic systems (the systems of stabilization and programme control, the computing and reproducing systems, the systems for the transmission and processing of data) require a continuous relationship between their behaviour and small changes in external conditions, which are expressed in changes of the input coordinates and the parameters of the system.
The conditions under which a continuous relationship between the coordinates of the system is observed are the conditions of stability. The conditions under which a continuous relationship is observed between the behaviour of the system and the deviations of its parameters from the calculated values, which are assumed constant in the given model, are the conditions of approximation of simulation. The general condition of efficiency for an automatic system constructed on the basis of a definite calculated model, which unites the conditions mentioned, may be formulated as follows: with small variations in the input coordinates and parameters of the system, the variations in the output coordinates should be small.
Let us find the analytical conditions for the efficiency of an automatic pulse system, with a single input and a single output coordinate, described by the following difference equation:

F(x_i, x_{i−1}, ..., x_{i−n}, y_i, y_{i−1}, ..., y_{i−m}) = 0   (1)

where F is a continuous function differentiable with respect to all its arguments, i is the discrete time, and n and m are the corresponding numbers of stored values of x at the input and of y at the output. At the foundation of the calculation of the system lies the linear model, obtainable by linearization of the equation of the system in the vicinity of the current 'operating point':
Σ_{k=0}^{n} (∂F/∂x_{i−k})_0 x_{i−k} + Σ_{k=0}^{m} (∂F/∂y_{i−k})_0 y_{i−k} = 0   (2)
The numbers

a_k = (∂F/∂x_{i−k})_0,   b_k = (∂F/∂y_{i−k})_0   (3)

which do not depend on the index i over the interval of time under consideration, represent the equivalent parameters of the linear model.
Using the z-transformation of number sequences, the equation of the linear model (2) may be written in the form

Y = HX   (4)

where H is the representation of the model, which is the rational function

H = A/B,   A = Σ_{k=0}^{n} a_k z^k,   B = Σ_{k=0}^{m} b_k z^k   (5)
The representation of a real system, the parameters of which change in relation to time and coordinates, and sometimes also in an unexpected manner, differs from the representation of its model by the variations δH, δ²H, δ³H, ..., which must satisfy the general condition for the efficiency of the system.
By varying the relation (4), the corresponding variations of the output of the system are obtained:

δY = H·δX + δH·X
δ²Y = H·δ²X + 2δH·δX + δ²H·X   (6)

The conditions under which the variations in the output coordinate remain small have the form:

(δY)_− = 0;   (δ²Y)_− = 0; ...   (7)
By separating the right-hand sides of expressions (6), the analytical conditions for the efficiency of the pulse system are obtained:

H_− = 0;   (δH)_− = 0;   (δ²H)_− = 0, ...   (8)
in which case the first of these conditions is the usual condition of stability, whereas the remainder are the conditions of 'approximation' of simulation. The necessity of taking the higher variations into account is caused by the fact that, as regards the parameters of the system, its representation is a non-linear function. It is possible to construct an example where the violation of efficiency is caused by a variation of arbitrarily high order20. However, in practice, mostly violations of the first two conditions of efficiency are encountered.
Criteria for the Efficiency of the Basic Structures of the Automatic
Pulse Systems
The method of combining the controlled plants and the
computing units is called the structural system of control. The
simplest pulse systems of automatic control contain a single
computing unit with representation (programme) W and a
single controlled plant with representation G. To each
structure of the system of control corresponds a definite func-
tion H, which depends rationally on W and G:

H = H(W, G)   (9)
which is called the representation of the system. For each
structure of control there is a definite class of permissible
functions H, which may be realized in the system by the choice
of different control programmes W, remaining at the same
time within the limits of conditions of efficiency. The structures,
which permit the realization of arbitrary functions H are called
the ideal structures. The structures which do not have even a
single permissible function are called the inefficient structures.
From the point of view of the condition of stability only the
stable functions of type H+ are the permissible functions.
However, if it is necessary to realize an unstable function, then
the condition of stability may be discarded by limiting oneself
to the fulfilment of the conditions of approximation.
By taking into account the variations in the representation of the controlled plant, simulated by the function G, the conditions of efficiency (8) applied to system (9) may be written in the form:

H_− = 0;   (∂H/∂G · δG)_− = 0;   (∂H/∂G · δ²G + ∂²H/∂G² · δG²)_− = 0, ...   (10)
The functions ∂H/∂G, ∂²H/∂G², ..., derived by differentiation of (9), depend on W and G. In the synthesis of systems of automatic control, the programme of the computing unit W is chosen in relation to the representation of the plant G:

W = W(G)   (11)
The verification of the synthesized systems for efficiency is made
by the substitution of this relationship in the expression (10)
after carrying out the operations of differentiation in them.
In the general case the pulse systems of automatic control contain several controlled plants and computing units, which are connected into a single structure. These systems may have several inputs and outputs. The verification of the conditions of efficiency should be carried out in this case by the variation of all the output coordinates for variations in the representations of all the controlled plants.
The compensation for the negative dynamic properties of
the controlled plant, by means of the computing unit having
the same negative dynamic properties, is the cause of violation of
the conditions of efficiency of pulse systems of automatic
control. Namely, such a compensation takes place, for example,
during the trivial recalculation of the programme for the com-
puting unit W for a simple closed system, the representation of
which is:
by the formula:
H? WG
1+ WG
1 H
W = G ? 1 ? H
(12)
(13)
by proceeding from the initial function H, which is chosen
without taking into account the conditions of efficiency.
This assertion will now be proved. By carrying out the factorization of the representation of the plant it is obtained that:

G = G_+ G_−   (14)

The functions G_+ and G_−, equal to

G_+ = P_+/Q_+;   G_− = P_−/Q_−   (15)

are the positive and the negative portions of the representation of the plant.
The positive plant, which has the representation G_+, is characterized by the following dynamic properties: stability, promptness of reaction, and smoothness of the transition process. The negative plant, which has the representation G_−, displays negative dynamic properties: instability, retardation of reaction, and sudden overshoots in the transition process.
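A sketch of this factorization follows, assuming (for illustration) that the boundary contour Γ is the unit circle, so that the 'negative' factors collect the roots on or outside it; the helper names are hypothetical.

import numpy as np

def factor_polynomial(coeffs, tol=1e-9):
    """Split a polynomial in z (descending coefficients) into a 'positive'
    factor with all roots strictly inside the unit circle and a 'negative'
    factor containing the remaining roots, P = P_plus * P_minus."""
    roots = np.roots(coeffs)
    inside = roots[np.abs(roots) < 1.0 - tol]
    outside = roots[np.abs(roots) >= 1.0 - tol]
    p_plus = coeffs[0] * np.poly(inside)     # carries the leading coefficient
    p_minus = np.poly(outside)
    return p_plus, p_minus

def factor_plant(P, Q):
    """G = P/Q -> G_plus = P_plus/Q_plus, G_minus = P_minus/Q_minus."""
    P_plus, P_minus = factor_polynomial(P)
    Q_plus, Q_minus = factor_polynomial(Q)
    return (P_plus, Q_plus), (P_minus, Q_minus)

# Example: plant with one stable and one unstable pole
(Pp, Qp), (Pm, Qm) = factor_plant([1.0, 0.2], [1.0, -1.7, 0.6])
print(Pm, Qm)   # the 'negative' parts the controller must not cancel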
By varying formula (12) one obtains:

δH = W/(1 + WG)² · δG;   δ²H = W/(1 + WG)² · δ²G − 2W²/(1 + WG)³ · δG²; ...   (16)
First of all, conditions will be found under which the closed
system is ideal, i.e. capable of reproducing the arbitrary func-
tion H. The corresponding programme for the calculating unit
is chosen in accordance with formula (13). By substituting this
formula in (16) one obtains:
δH = H(1 − H) δG/G;   δ²H = H(1 − H) δ²G/G − 2H²(1 − H)(δG/G)²

or, since δG/G = δP/P − δQ/Q,

δ²H = −2H²(1 − H)(δP/P − δQ/Q)² + H(1 − H)(δ²P/P − δ²Q/Q − 2δPδQ/(PQ) + 2δQ²/Q²)   (17)
The conditions of efficiency (8) require that P_− = Q_− = 1. Thus the closed system is ideal only in the case when the plant is positive. In the case of a plant with negative dynamic properties the function H is not realizable, because of the violation of the conditions of approximation.
It will be shown that the closed automatic system is efficient for any controlled plant, and that under these conditions the class of permissible functions of this system is equal to

H = P_− Θ F_+   (18)

where F_+ is an arbitrary stable rational function of the form

F_+ = A/B_+   (19)

and Θ is the polynomial which satisfies the polynomial equation in respect of the unknown polynomials Θ and Π:

A P_− Θ + Q_− Π = B_+   (20)

The corresponding programme of control has the form:

W = A Q_+ Θ / (P_+ Π)   (21)
It will now be verified whether the conditions of efficiency are fulfilled. By substituting (21) in (16) and by taking into account (20), one obtains:

δH = AΘΠ (Q δP − P δQ) / (B_+² P_+ Q_+)

δ²H = A²Θ²Π (Q² δ²P − 2Q δP δQ + 2P δQ² − P Q δ²Q) / (B_+³ (P_+)² Q_+)
The conditions of efficiency are fulfilled for any values of G.
In the case of a stable controlled plant the polynomial Θ, as follows from the polynomial eqn (20), becomes arbitrary, and the class of permissible functions is extended to

H = P_− F_+   (22)

Thus one proves the criterion for the efficiency of a closed system, which requires, in addition to the fulfilment of the usual criterion of stability, that the programme of the computing unit does not cancel (contract) the polynomials P_− and Q_−*.
Using the analytical conditions of efficiency, it is possible
to derive the criterion of efficiency for any structures of auto-
matic systems. By means of these conditions it is easy to prove,
for example, the following well-known propositions:
(1) The systems on the limit of stability are inefficient.
(2) The open systems of control are efficient only for the
stable plants.
* As applied to stable plants, the criterion of non-contraction of P_− was first introduced in the work of Bergen and Ragazzini7.
(3) Ideal structures of control do not exist for plants having negative dynamic properties.
(4) The parallel system of control is ideal for stable plants; the sequential (cascade) system of control is ideal only for positive plants.
In view of the non-existence of ideal structures of control for arbitrary plants, the criterion of efficiency of an automatic system is more rigid than the criterion of stability. Only for positive plants are the general criteria of stability of linear systems adequate.
In order not to violate the conditions of efficiency, the optimum function H of the system should be sought in the class of permissible functions. The wider the class of permissible functions for a given structure, the higher the quality of the optimum system, other conditions being equal. Therefore, in the synthesis of a system of control for a given plant, a structure of control having as wide a class of permissible functions as possible, i.e. a structure close to an ideal one, should be chosen.
The Use of Polynomial Equations in the Synthesis of Optimum
Pulse Systems
It has been established that the classes of permissible func-
tions for the pulse systems are expressed in terms of polynomial
equations. In the author's work12-20, it was shown that the
synthesis of optimum pulse systems of control for the linear
plants based on a number of basic criteria may be made entirely
by means of polynomial equations. The finding of the optimum
programme of control is, as a rule, reduced to the solution of
a system of polynomial equations. The computation methods for
the solution of a system of polynomial equations applicable to
the use of digital computers have also been developed and their
advantage over the ordinary methods in the synthesis of con-
trolled programmes for the plants of a high order with complex
correlational relationships was proved. By means of the poly-
nomial equations, a number of new problems of automatic
control, in particular for the unstable controlled plants, was
solved. The basic problems for the synthesis of pulse systems
and their solutions, obtained by the method of polynomial
equations, omitting the proofs because of the lack of space,
are now enumerated.
The problem of synthesis of the pulse system with the mini-
mum transient period for a given input action:
X = RIS
(23)
where R and S are the polynomials of z, is reduced to the solution
of the following polynomial equation:
13-0+SQ-11=R (24)
in respect of unknown polynomials 0 and /7. The corresponding
controlling programme is equal to:
W = Q⁺Θ / (P⁺SΠ) (25)
The representation of the transient process has the form of eqn (26). The minimum duration of the transient process which ensures the fulfilment of the conditions of efficiency is, in number of cycles, equal to the sum of the degrees of the polynomials P⁻ and Q⁻.
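Since the synthesis reduces to linear relations among polynomial coefficients, such equations are convenient to solve on a digital computer. The sketch below (not the author's published programme) solves an equation of the reconstructed form P⁻Θ + SQ⁻Π = R by coefficient matching, i.e. as a linear system over the unknown coefficients of Θ and Π; the numerical data at the end are purely illustrative.

```python
import numpy as np

def conv_mat(a, ncols, nrows):
    """Convolution matrix: conv_mat(a, n, m) @ x gives the first m coefficients
    of the product a(z)*x(z), x having n coefficients (ascending powers)."""
    C = np.zeros((nrows, ncols))
    a = np.asarray(a, dtype=float)
    for j in range(ncols):
        C[j:j + len(a), j] = a
    return C

def solve_poly_eq(A, B, R, deg_theta, deg_pi):
    """Coefficient-matching solution of A*Theta + B*Pi = R."""
    nT, nP = deg_theta + 1, deg_pi + 1
    nrows = max(len(A) + nT - 1, len(B) + nP - 1, len(R))
    M = np.hstack([conv_mat(A, nT, nrows), conv_mat(B, nP, nrows)])
    rhs = np.zeros(nrows)
    rhs[:len(R)] = R
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    assert np.allclose(M @ sol, rhs), "no exact solution for the chosen degrees"
    return sol[:nT], sol[nT:]

# illustrative data (not from the paper): P_minus = 1 - 2z, S*Q_minus = 1 + z, R = 1
theta, pi_ = solve_poly_eq([1.0, -2.0], [1.0, 1.0], [1.0], deg_theta=0, deg_pi=0)
```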
With a limitation on the modulus of the controlling action uᵢ (i = 0, 1, 2, …) (27), the corresponding problem is reduced to the finding of a non-
minimum solution of the polynomial equation, which is found
by special computing methods. The modification of the poly-
nomial equation (24) leads to the derivation of a system which
has no pulses.
The problem of synthesis based on the criterion of the
minimum of the total quadratic error:
(1/2πj) ∮ E(z)E(z⁻¹) dz/z (28)
is reduced to the solution of a system consisting of two polynomial equations (29), in respect of the unknown polynomials Θ, Π and Φ. The polynomials I and U entering this system are the numerator and denominator of the function X(z)X(z⁻¹). The corresponding controlling programme is equal to

W = Q⁺Θ / (P⁺Π) (30)
The calculation of the quadratic error may also be made by
means of the polynomial equation20.
The problems of synthesis of the optimum pulse systems of automatic control and of data processing for random input signals, in view of the universal nature and prevalence of quadratic dispersion criteria, represent the most favourable field for the application of polynomial equations.
The general problem of synthesis of a pulse system, optimum
according to the criterion of dispersion of the error for finite
time of transition into the unshifted state is reduced to the solu-
tion of a system consisting of three polynomial equations, one
of which secures the efficiency of the synthesized system, the
second, the finiteness of the settling time, and the third, the minimization of the dispersion of the error. The solution of this
general problem determines the solutions of the numerous
particular problems of extrapolation, filtration, differentiation
and integration of random processes by means of pulse
computing units. The optimization of the pulse systems, by
arbitrary criteria of quality, is reduced to the combination of
the method of polynomial equations and the general methods of
mathematical programming. By means of the theory of polynomial equations it is possible to synthesize the most economical programmes for the processing of data by the method of least squares. The results obtained show that polynomial equations represent a suitable mathematical tool for the programming of many procedures of computational mathematics and of mathematical statistics, which are widely used in the self-optimizing systems of automatic control.
* A polynomial with the reversed order of the sequence of coefficients is denoted by the symbol Ā.
Conclusions
The conditions of efficiency, formulated in this paper, limit
the possibility of change in the dynamic properties of controlled
media by means of pulse computing units. Under these con-
ditions the worst properties of the plant?instability, retarda-
tion, fluctuation?are shown to be the most difficult to overcome.
The limits of the accuracy of control for the dynamic. plants by
means of the pulse computing units whilst being wider than for
the units of the continuous type, are, however, not limitless.
Physically, this means that the inertia of the plants cannot be
completely overcome. The problem of the theory of automatic
control lies in the further clarification of the limits of possible
accuracy of control, and the realization of these possibilities
through the design of the most perfect controlling machines. It is
hoped that the future development of polynomial equations
will prove to be one of the important aids in the solution of
this problem.
References
1 TSYPKIN, Ya. Z. Theory of Pulse Systems. (Monogr.) 1958. Moscow; Fizmatgiz
2 JAMES, H., NICHOLS, N. and PHILLIPS, R. (Eds). Theory of Cascade (Sequential) Systems. (Monogr.) IL, 1951, Chapter II
3 ZADEH, L. A. and RAGAZZINI, J. R. The analysis of pulse systems. Trans. Amer. Inst. elect. Engrs 71 Pt II (1952)
4 TSYPKIN, Ya. Z. Design of a system for automatic control under stationary incidental actions. Automat. Telemech., Moscow 4 (1953)
5 TSYPKIN, Ya. Z. Some questions relating to the synthesis of automatic pulse systems. Automatika 1 (1958)
6 TSYPKIN, Ya. Z. Optimum processes in automatic pulse systems. Izv. AN SSSR, OTN, energet. avtomat. 4 (1960)
7 BERGEN, A. R. and RAGAZZINI, J. R. Sample-data processing techniques for feedback control systems. Trans. Amer. Inst. elect. Engrs 73 Pt II (1954)
8 CHANG, S. S. L. Statistical design theory for strictly digital sampled-data systems. Trans. Amer. Inst. elect. Engrs 76 Pt I (1957)
9 CHANG, S. S. L. Statistical design theory for digital-controlled continuous systems. Trans. Amer. Inst. elect. Engrs 77 Pt II (1958)
10 CHANG, S. S. L. Synthesis of Optimum Control Systems. 1961. New York; McGraw-Hill
11 JURY, E. I. Sampled-data Control Systems. 1958. New York; Wiley
12 BERTRAM, J. E. Factors in the design of digital controllers for sampled-data feedback systems. Trans. Amer. Inst. elect. Engrs 75 Pt II (1956)
13 POTAPOV, M. D. The problem of terminal time of control and peculiarities of synthesis of some systems of automatic control. Trudy VVIA im. N. E. Zhukovskogo, 1959
14 KRASOVSKII, A. A. Synthesis of the correcting pulse units of the cascade systems. Automat. Telemech., Moscow 6 (1959)
15 PEROV, V. P. Statistical Synthesis of Pulse Systems. (Monogr.) Sovetskoe Radio, 1959
16 KALMAN, R. E. Design of self-optimizing control systems. Trans. Amer. Soc. mech. Engrs 80, 2 (1958)
17 BIGELOW, S. C. and RUGE, H. An adaptive system using periodic estimation of the pulse transfer function. IRE Conv. Rec. IV (1961)
18 VOLGIN, L. N. Method of synthesis of linear pulse systems for automatic control based on dynamic criteria. Automat. Telemech., Moscow 20 No. 10 (1959)
19 VOLGIN, L. N. and SMOLYAR, L. I. The correction of cascade systems by means of certain calculating units. Automat. Telemech., Moscow 21 No. 8 (1960)
20 VOLGIN, L. N. The Fundamentals of the Theory of Controlling Machines. (Monogr.) Sovetskoe Radio, 1962
Most Recent Development of Dynamic Programming Techniques
and Their Application to Optimal Systems Design
R.L. STRATONOVICH
Introduction. Block-diagram of an Optimal Controller
As is known1-4, dynamic programming theory solves, in prin-
ciple, a large number of the problems connected with optimal
systems synthesis. The applicability of dynamic programming
methods is not impaired by taking into account white gaussian
noise and other random factors in various components?the
statistical nature of the signal to be reproduced, imprecise
knowledge of it, random influences on the controlled plant, or
interference in the feedback circuit (Figure 1). Of course, as the
problems grow more complicated, the actual performance of
the calculations becomes more and more difficult.
Although the basic principles of dynamic programming
were expounded long ago, the number of non-trivial problems of
optimal control theory actually solved by this method is not
large. This is explained by purely computational difficulties
which have to be overcome before a solution is found.
What has been said confirms the importance of the develop-
ment of new methods and techniques to increase the effective-
ness of the theory and make it easier for concrete results to be
obtained.
In complex statistical problems the effective use of the
theory becomes possible as a result of the introduction of
'sufficient coordinates' on which the risk function depends.
The importance of this concept was noted by Bellman and Kalaba2, and the author has clarified and developed it further5, 6.
The sufficient coordinates form the space in which the Bellman
equation is considered. A non-trivial statistical example is used
in this paper to illustrate the effectiveness of the introduction
of sufficient coordinates. In the example, the sufficient coordin-
ates are a combination of a posteriori probabilities and the
dynamic variables of the controlled plant.
In complex statistical problems the introduction of sufficient
coordinates has the result that the optimal controller breaks
down into at least two consecutive units, each of which is
constructed according to its own principles. The first unit SC
(Figure 1) produces the sufficient coordinates X. In some
dynamic programming problems it is trivial, but in complex
statistical problems it may perhaps prove most important. In
the latter, it is synthesized with the aid of methods similar to
those of non-linear optimal filtration7. In the example considered
below, it simply coincides with a unit effecting optimal non-
linear filtration.
The signals from the SC unit output are sent to a further
unit OC, which produces the optimal control action. The form
of this unit, which converts the sufficient coordinates into a
control signal, is found by consideration of the Bellman
equation. This unit can be synthesized without great difficulty if the risk function is first found as a solution of the Bellman equation. The most difficult problem is obtaining this solution. Therefore, techniques and methods which make it easier to obtain the solution of this equation are of interest.
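As an illustration of what such a solution involves, the following schematic sketch (not taken from the paper) performs the backward recursion to which a discretized Bellman equation leads; the transition probabilities and penalties are placeholders standing in for a concrete model.

```python
import numpy as np

n_states, n_controls, n_steps = 50, 2, 200
rng = np.random.default_rng(0)

# hypothetical model: p[u] is a transition matrix under control u,
# c[u] the penalty per step in each state under control u
p = [np.full((n_states, n_states), 1.0 / n_states) for _ in range(n_controls)]
c = [rng.random(n_states) for _ in range(n_controls)]

S = np.zeros(n_states)               # risk at the termination time (taken as zero)
policy = np.zeros((n_steps, n_states), dtype=int)

for k in reversed(range(n_steps)):   # evolution of the risk in inverse time
    q = np.stack([c[u] + p[u] @ S for u in range(n_controls)])
    policy[k] = np.argmin(q, axis=0)            # the OC unit: best control per state
    S = q[policy[k], np.arange(n_states)]       # updated risk function
```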
The equation is made far simpler by considering the station-
ary mode of operation, when the time-dependence and time-
derivative are eliminated from the Bellman equation. The corre-
sponding stationary equation was considered by Stratonovich
and Shmalgauzen8, and the method quoted is also described
in this paper. Furthermore, to solve the resultant equation, use
is made of the asymptotic step-by-step approximation method, first expounded by the author9. This method is convenient for
the case of small diffusion terms, and makes it possible to
obtain consecutive approximations whose accuracy is deter-
mined by the magnitude of the coefficients for the second
derivatives in the Bellman equation.
It must be noted that the number of methods for approximate
solution of the Bellman equation, which can be thought up for
the solution of concrete problems, is practically unlimited; each
method is best suited for the solution of problems of a particular
type. To them must be added the obtaining of a solution on
analogue or digital computers. Out of the whole range of
methods, a special approximate method will be described and
applied to the example under consideration, in the concluding
part of the paper. The essence of this method is that the risk
function is represented as a function whose appearance is fully
determined by a finite number of parameters a. The Bellman
equation for the risk function is replaced by a system of equations
which specify the evolution of these parameters in inverse time.
This system is roughly equivalent to the original Bellman
equation.
The unit OP (Figure 1) simulates this system of equations
and determines the parameters a as a function of time. It
operates as a self-contained unit, if measurement of the statistics
of the processes and other variables is not carried out in the
course of operation, and must finish its work before the start
of operation of the main system. If the operating conditions
change, then there may be a need for periodic plotting of the
process of determination of the parameters by the OP unit in
application to the new operating conditions. Such a system will
belong to the class of adaptive systems. The OC unit produces
the optimal control action in response to the values of the
sufficient coqrdinates and the risk function parameters corres-
ponding to a given moment of time. The corresponding algorithm
is derived from the form of the Bellman equation and the
adopted approximation of the risk function.
Usually the transition to a finite number of parameters entails
some deterioration of the quality of operation of the system. The
greater the number of parameters taken, the higher the accuracy
of approximation and the closer the system to optimal, but, on
the other hand, the more complicated the OP unit. For a
specified number of parameters it is important to make a successful choice of the means of approximation. Here a great
deal depends on the ingenuity and inventiveness of the designer.
In this paper, one natural means of selecting the parameters is suggested: the parameters are taken to be the lowest-order coefficients of the expansion of the risk function in a suitable complete set of functions.
The block diagram of an optimal controller given in the paper
is of a basic nature, and in fact not all the units need be there.
In some problems the SC unit can be left out because of triviality.
The OP unit can be separated from the system. It can be
replaced by a preliminary calculation, and the parameter
values can be taken into account once and for all in the syn-
thesis of the OC unit. The situation is different if the system
itself investigates varying conditions of operation. In that case the signals from the appropriate metering devices must be sent to the units OP, SC and OC (or, if there is no OP unit, to SC and OC).
Example. Sufficient Coordinates. Stationary Fluctuation Regime
Let the variable part of the system, the controlled plant CP (Figure 1), have a transfer function K(p). Let the control action u be limited to the values −1 ≤ u ≤ 1. The input signal x_t, like the output signal y_t, is assumed to be known accurately. Let the signal at the input, x_t = s_t + ξ_t, be the sum of the pulse signal s_t = ±1 and the interference ξ_t, which is normal white noise (Mξ_t = 0; Mξ_t ξ_{t+τ} = κδ(τ)).
The task of the system is to ensure that the coordinate of the plant y_t reproduces as accurately as possible the pulse signal s_t. If s_t = 1 but y_t ≠ 1, the penalty c(1, y_t) per unit of time is incurred. The functions c(±1, y_t) can differ. For the step-
by-step method, which is used to obtain formula (22), the
condition that these functions be differentiable is essential.
Henceforward, to make things specific, use will be made of the
criterion of the minimum mean square error, which corresponds
to the functions
c(s, y) = (s − y)² (1)
It will be assumed that the signal st is a priori a symmetrical
two-position Markovian process, moreover the a priori pro-
babilities p_t(±1) = P[s_t = ±1] satisfy the equations

dp(1)/dt = −μp(1) + μp(−1),   dp(−1)/dt = μp(1) − μp(−1) (2)
This means that the pulses and intervals are independent
and distributed according to the exponential law P[τ > c] = e^(−μc).
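For orientation, the following minimal sketch (not code from the paper) simulates this signal model: a symmetric two-state Markov ('telegraph') signal with switching rate μ, observed through additive white noise whose intensity is denoted here by kappa.

```python
import numpy as np

def simulate(mu=1.0, kappa=0.1, dt=1e-3, T=10.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    s, x = np.empty(n), np.empty(n)
    s_cur = 1.0
    for i in range(n):
        if rng.random() < mu * dt:        # switch with probability mu*dt per step
            s_cur = -s_cur
        s[i] = s_cur
        # discrete-time white noise: variance kappa/dt approximates kappa*delta(tau)
        x[i] = s_cur + rng.normal(0.0, np.sqrt(kappa / dt))
    return s, x

s, x = simulate()
```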
It is required to design an optimal controller which produces
a control signal ut so that the mean penalties are reduced to a
minimum. The latter is a function of the sufficient coordinates.
The sufficient coordinates of the given problem will be
considered. Their definition, which is given by the author5, 6
reduces to the requirement of the sufficiency of the selected
coordinates in three respects:
(a) sufficiency for the determination of the conditional mean penalties r_t = M[c_t | x₀ᵗ, u₀ᵗ] […] R₊ where ∂S/∂y > 0, and correspondingly R₋ where ∂S/∂y < 0. The boundary Γ between R₊ and R₋ will be termed the switching line or separatrix; it is to the finding of this line that the calculation of the OC unit (Figure 1) reduces. On it are satisfied the conditions of continuity of the
risk function and its first derivatives ∂S/∂y, ∂S/∂z. These conditions are a consequence of the diffusion nature of eqn (16). From the continuity of the derivative ∂S/∂y there follows the condition

∂S/∂y = 0 on Γ (19)
Eqn (16) describes the evolution of the risk function with the
inverse passage of time. The role of the initial condition for it
is played by the fixation of the risks at the moment of termina-
tion of the operation S(y,z,T). If there are no special additional
considerations, then S (y, z, T) can be made equal to zero.
The Bellman equation is also derived in a similar way for
more complex functions K(p). As in case (5), the velocity
v = ∂y/∂t must be included among the sufficient coordinates.
Then the function S (y, v, z, t) will satisfy the equation
∂S/∂t + v ∂S/∂y + (…) ∂S/∂v − 2μz ∂S/∂z + (β²N/2) ∂²S/∂v² + (…) ∂²S/∂z² + ½c(1, y)(1 + z) + ½c(−1, y)(1 − z) = 0 (20)
An important particular problem among the group of prob-
lems connected with optimal systems synthesis is the problem of
calculating the optimal stationary mode of operation. In this
case the operation-termination time T tends to infinity. Then, irrespective of the values of the coordinates at the moment t, a stationary fluctuation mode is established in the system, characterized by some mean penalty γ per unit of time. This means that when T increases, e.g. by Δt, the risk function increases by γΔt.
If the difference S(t) − γ(T − t) is formed and the limit transition T → ∞ performed, the resultant function will not depend on time. In case (4) this function

f(y, z) = lim_{T→∞} [S(y, z, t) − γ(T − t)]

as can easily be seen, in accordance with (16) satisfies the equation

ku ∂f/∂y − 2μz ∂f/∂z + (N/2) ∂²f/∂y² + ((1 − z²)²/2κ) ∂²f/∂z² + y² − 2yz + 1 = γ (21)
[here (1) is used]. Moreover the same conditions (17)-(19) are
satisfied on the boundaries as before. The solution of eqn (21)
makes it possible to find simultaneously the function f(y, z), the switching line Γ and the stationary mean penalty γ. The same holds for eqn (20).
Solving the Bellman Equation
In view of the difficulty of obtaining a precise solution of the
alternative equation, various approximate methods can be
developed. Some of them will be illustrated, taking eqns (16) and (21) as an example. Of course the methods (for example, the method of parameters) permit generalization to other, more complex cases, say case (20), but then the laboriousness of the calculations increases markedly. The results obtained with the aid of (16) are also approximately valid for case (20) when the inertia of the controlled plant plays a small part and can be disregarded.
In this case the optimal control action depends on the variables y, z, and equals u = 1 in the domain R₊ (correspondingly, u = −1 in R₋). Figure 2 shows the approximate location of these domains and of the switching line; the mean transfer velocities M dy/dt, M dz/dt are also given. An approximate calculation was performed of the switching line in the stationary case (eqn 21), by the asymptotic step-by-step method developed
by the author9. For the case N = 0, μ/k ≪ 1, the switching line of the first approximation was found to be

z_Γ = (2μ/k) y (1 − y²)² (22)

The higher approximations have an order of (μ/k)² and higher.
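In realization, the OC unit then amounts to a relay element with respect to the switching line. A schematic sketch, using the first-approximation line in the reconstructed form above (both the formula and the sign convention are assumptions), is:

```python
def control(y, z, mu=1.0, k=10.0):
    # switching line (22), as reconstructed above; parameter values illustrative
    z_gamma = (2.0 * mu / k) * y * (1.0 - y * y) ** 2
    return 1.0 if z > z_gamma else -1.0       # relay with respect to the separatrix
```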
The second approximate method of solution, which has a
wider sphere of application, will be dealt with in greater detail.
This method is linked with the determination of the parameters
of the risk function, to which corresponds the unit OP in
Figure 1, as was stated in the introduction.
One of the ways of introducing the parameters is the ex-
pansion of the risk function according to some preselected
suitable system of functions. For the given example these are
the functions of the variables y and z. Let φ₀(y), …, φ_{r−1}(y) and ψ₀(z), …, ψ_{s−1}(z) be the selected functions. Then the parameters of the risk function will be the coefficients a_ij(t) of the expansion

S(y, z, t) ≈ Σ_{i=0}^{r−1} Σ_{j=0}^{s−1} a_ij(t) φ_i(y) ψ_j(z) (23)
Since the above systems of functions are not complete, replacement of the risk function by the expression given usually entails some errors. To make the coefficients a_ij more exact, some criterion is set; e.g., the minimum of the integral of the square of the difference

∫∫ [S − Σ a_ij φ_i ψ_j]² dy dz = min (24)

will be required.
The variation of this expression leads to a system of linear equations

Σ_{i,j} a_ij (φ_i, φ_e)(ψ_j, ψ_m) = (S, φ_e ψ_m),   e = 0, …, r − 1;  m = 0, …, s − 1 (25)

which permits the a_ij to be calculated if S(y, z, t) is known. Here is written

(S, φ_e ψ_m) = (1/4) ∫_{−1}^{1} ∫_{−1}^{1} S φ_e ψ_m dy dz

With the aid of the inverse matrices

‖c_ie‖ = ‖(φ_i, φ_e)‖⁻¹;   ‖c̃_jm‖ = ‖(ψ_j, ψ_m)‖⁻¹

the solution of system (25) can be written as

a_ij = Σ_{e,m} c_ie c̃_jm (S, φ_e ψ_m) (26)
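The projection (23)-(26) is straightforward to carry out numerically. The sketch below (not from the paper) computes the coefficients a_ij of a function of (y, z) over the tensor-product basis used later in the example, via the Gram matrices of the two systems of functions; the function projected at the end, y² − 2yz + 1, is the penalty that enters the Bellman equation.

```python
import numpy as np

phis = [lambda y: np.ones_like(y),
        lambda y: np.sqrt(2) * np.sin(np.pi * y / 2),
        lambda y: np.sqrt(2) * np.cos(np.pi * y)]
psis = [lambda z: np.ones_like(z),
        lambda z: z,
        lambda z: z ** 2]

yg = np.linspace(-1, 1, 401)            # quadrature grids on [-1, 1]
zg = np.linspace(-1, 1, 401)

def inner_y(f, g):                      # (f, g) = (1/2) * integral over [-1, 1]
    return 0.5 * np.trapz(f(yg) * g(yg), yg)

def inner_z(f, g):
    return 0.5 * np.trapz(f(zg) * g(zg), zg)

C  = np.linalg.inv([[inner_y(p, q) for q in phis] for p in phis])   # ~ identity
Ct = np.linalg.inv([[inner_z(p, q) for q in psis] for p in psis])

def project(S):
    """Coefficients a_ij of S(y, z) on the tensor-product basis, as in eqn (26)."""
    Y, Z = np.meshgrid(yg, zg, indexing="ij")
    b = np.array([[0.25 * np.trapz(np.trapz(S(Y, Z) * p(Y) * q(Z), zg, axis=1), yg)
                   for q in psis] for p in phis])
    return C @ b @ Ct.T

a = project(lambda y, z: y ** 2 - 2 * y * z + 1)
```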
How the equation for the parameters is obtained from the alternative equation will now be shown. Let the latter have the form

∂S/∂t = L[S] (27)

Differentiating (26) according to time and substituting (27) into the right-hand side gives

da_ij/dt = Σ_{e,m} c_ie c̃_jm (L[S], φ_e ψ_m)

If the replacement (23) is performed here, this will give a closed system of equations for the parameters

da_ij/dt = Σ_{e,m,p,q} c_ie c̃_jm (L[a_pq φ_p ψ_q], φ_e ψ_m) (28)

The example being considered will be utilized to illustrate the application of this method. Because of the boundary condition (17) it is convenient to select the functions φ_i(y) so that each possesses the property dφ_i/dy(±1) = 0. For the second coordinate z there is no such condition, so

r = s = 3;  φ₀(y) = ψ₀(z) = 1;  φ₁(y) = √2 sin(πy/2);  φ₂(y) = √2 cos πy;  ψ₁(z) = z;  ψ₂(z) = z²

can be written. In the given case (φ_i, φ_j) = δ_ij and c_ie = δ_ie, while

‖(ψ_j, ψ_m)‖ =
| 1     0    1/3 |
| 0    1/3    0  |
| 1/3   0    1/5 |

‖c̃_jm‖ =
|  9/4    0   −15/4 |
|   0     3     0   |
| −15/4   0    45/4 |

In addition, within the framework of the selected approximation,

(1 − z²)² ≈ 32/35 − (8/7)z²,   y² ≈ 1/3 − (4/π²) cos πy,   y ≈ (8/π²) sin(πy/2)

After these substitutions eqn (16), in which ½c(1, y)(1 + z) + ½c(−1, y)(1 − z) = y² − 2yz + 1, takes the form of an expansion over the functions φ_i(y)zʲ (29).

Since the risk function is symmetrical, S(y, z, t) = S(−y, −z, t) (with symmetrical terminal penalties S(y, z, T)), in the expansion (23) only the symmetrical terms should be present:

S(y, z, t) = a₀₀ + a₀₂z² + a₁₁ z √2 sin(πy/2) + (a₂₀ + a₂₂z²) √2 cos πy (30)

Moreover, putting a₂₀ = αa₁₁ and a₂₂ = βa₁₁, it is expedient to make the substitution

∂S/∂y = (πa₁₁/√2) [z cos(πy/2) − 2(α + βz²) sin πy] (31)

Separately equating the coefficients of the functions φ_i zʲ in (29) gives five equations for the derivatives da_ij/dt. The most important of these are the three equations for da₁₁/dt, da₂₀/dt and da₂₂/dt, which express these derivatives in terms of a₁₁, a₂₀, a₂₂, the functions p_ij(α, β) and the constants μ, k, N and κ (33). The functions p_ij(α, β) = Σ_{e,m} c_ie c̃_jm γ_em are defined by formulae (32), in which the γ_em are integrals of the form (1/4)∫∫(…)φ_e(y)zᵐ dy dz.

The switching line is found by equating to zero the derivative (31). The equation of this line has the form

4(α + βz_Γ²) sin(πy/2) = z_Γ;   z_Γ = z_Γ(y) (34)

The course of the switching line is determined only by the ratios α = a₂₀/a₁₁ and β = a₂₂/a₁₁ of the parameters entering into (33).

As is usual in dynamic programming, eqns (33) must be solved for the inverse passage of time. If the inverse time t₁ = T − t is introduced, the conditions corresponding to the end of operation will look like 'initial' conditions. In the absence of concluding penalties at the moment T the corresponding conditions will be null:

a₁₁ = a₂₀ = a₂₂ = 0 when t₁ = 0

When a sufficiently long time t₁ passes, the mode of operation of the system approaches the stationary one. This corresponds to the approach of the parameters a₁₁, a₂₀, a₂₂ to the stationary values a₁₁⁰, a₂₀⁰, a₂₂⁰. The latter are the solution of the system of three equations obtained by equating expressions (33) to zero.
Using (29) and (34), formulae (32) can be brought to the form (35), in which the p_ij(α, β) are expressed through one-dimensional integrals over y whose integrands contain powers of the switching line, of the type (1 − z_Γ^{j+1})/(j + 1), (1 − z_Γ^{j+3})/(j + 3). For the further calculation of the functions p_ij(α, β) numerical methods can be employed, or use can be made of one or another approximation of the function z_Γ(y).

The solution of the given problem consists in the fact that the unit OP (Figure 1) realizes eqns (33) in inverse time, and the unit OC realizes the switching line (34).
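Schematically (this is not the paper's code), the OP and OC units could be realized as follows; the right-hand sides of eqns (33) are not reproduced in this extract, so they enter only as a user-supplied function, and the sign convention of the relay is an assumption.

```python
import math

def op_unit(rhs33, T, dt, *params):
    """Integrate the three parameter equations (33) in inverse time t1 = T - t,
    starting from null conditions.  rhs33(a11, a20, a22, *params) must return the
    right-hand sides of (33), which are not reproduced in this extract."""
    a11 = a20 = a22 = 0.0
    for _ in range(int(T / dt)):
        d11, d20, d22 = rhs33(a11, a20, a22, *params)
        a11 += dt * d11
        a20 += dt * d20
        a22 += dt * d22
    return a11, a20, a22

def oc_unit(y, z, a11, a20, a22):
    """Relay control with respect to the switching line (34)."""
    alpha, beta = a20 / a11, a22 / a11
    s = z - 4.0 * (alpha + beta * z * z) * math.sin(math.pi * y / 2.0)
    return 1.0 if s > 0 else -1.0            # sign convention assumed
```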
References
1 BELLMAN, R. Dynamic Programming. 1960. Moscow; Foreign Languages Publishing House
2 BELLMAN, R. and KALABA, R. Dynamic programming and feedback control. Automatic and Remote Control. Vol. 1, p. 460. 1961. London; Butterworths
3 FELDBAUM, A. A. The theory of dual control, I-IV. Automat. Telemech. 21 (1960), 9, 11; 22 (1961), 12
4 STRATONOVICH, R. L. Conditional Markovian processes in problems of mathematical statistics and dynamic programming. Dokl. Akad. Nauk SSSR 140 (1961)
5 STRATONOVICH, R. L. On optimal control theory. Sufficient coordinates. Automat. Telemech. 23 (1962), 7
6 STRATONOVICH, R. L. Conditional Markovian processes in problems of mathematical statistics, dynamic programming and games theory. 4th All-Union Math. Congr., Leningrad (1961)
7 STRATONOVICH, R. L. Conditional Markovian processes. Probability Theory and Its Applications 5 (1960)
8 STRATONOVICH, R. L. and SHMALGAUZEN, V. I. Some stationary problems of dynamic programming. Izv. Akad. Nauk SSSR, Otdel. Tekh. Nauk, Energ. Avtomat. 5 (1962)
9 STRATONOVICH, R. L. On the optimal control theory. An asymptotic method of solving the diffusion alternative equation. Automat. Telemech. 23 (1962), 2
10 STRATONOVICH, R. L. Selected problems of the theory of fluctuations in radio engineering. Sov. Radio (1961)
Figure 1. Optimal servosystem. SC: sufficient-coordinates unit; OP: parameter-determination unit; OC: optimal control unit; CP: controlled plant (labelled DK, OP, OY and PO respectively in the drawing)
Figure 2. Space of sufficient coordinates. POQ: separatrix Γ; POQBC and POQAD: the domains R₊ and R₋
The Realization of Optimal Programmes
in Control Systems
G. S. POSPELOV
Methods of mathematical programming [the term is used to
mean the application of mathematics to the practical activity
of planning, development, decision-making etc., and is a natural
generalization of such concepts as linear (or non-linear) programming and dynamic programming] are spreading to all branches of the national
economy, economics, engineering, industry, agriculture and
so on. This presupposes the development of mathematical
models of the events or sets of controlled plants which require
to be controlled. Once the aim of control has been formulated,
the task is to determine the optimum strategy of control whereby
a programme of effects upon the controlled plants produces in
some sense the optimal result.
It must be emphasized that the programming methods
determine the strategy of control or a priori programme. The
degree of coincidence between the actual result or process pro-
duced by control and the result or process anticipated from the
a priori programme, is indicative, in particular, of the perfection
of the mathematical model or of our knowledge about the
controlled plant.
However, a mathematical model is a model and not the
phenomenon itself, and, apart from this, during the process of
realizing the a priori programme, the controlled plant can be
affected by a variety of factors and perturbations which are not
taken into account in the model. This can lead to deviations,
and sometimes to substantial deviations, from the programme
results, which by definition are optimal.
If the programme is time scheduled, use can be made of
feedback to correct the effect of perturbations and inaccuracies
in the mathematical description so as to ensure an actual
programme closer to the optimal one.
Control systems are the most completely represented by mathematical models. Taking their case as an example, we will consider the possible ways of realizing optimal programmes; in this instance, control programmes.
A mathematical model of a control system is usually formed
by means of ordinary differential equations. The control pro-
gramme is broadly defined to cover the planning of the dynamic
characteristics of the control system, its programme of operation,
and the variation of the relationship. In all cases it is assumed
that the system is provided with complementary feedbacks which
improve the realization of the predetermined programme or a
priori programme.
(1) The desired dynamic characteristics of a system are
realized by complementary self-adjusting circuits, which in this
case are complementary feedbacks which improve the realization of the predetermined programme of operation of the control system. Figure 1 shows the well-known self-adjusting system
of an automatic pilot which controls the angle of pitch of an
aircraft1. The self-adjustment circuit changes the gain of the angular velocity circuit such that the margin of stability of this
circuit is maintained constant. The correcting circuit 2 is selected
to obtain a sufficiently high gain K. Under these conditions the
transfer function of the closed angular velocity circuit is close
to unity. Therefore, despite the variation of the properties of
the controlled plant (owing to changes in flying conditions), the
dynamic properties of the angle of pitch circuit will be determined by the transfer function of the model, i.e. in all cases
they will be quite close to the predetermined or planned prop-
erties. Another example is the self-adjusting control system with
extremal tuning of the correcting circuits2. Both examples refer
to continuously operating control systems.
A somewhat special problem arises in the preservation of planned dynamic properties for 'single-action' systems3, for which the behaviour is significant on a finite interval t (0 < t ≤ T) and for which the operating process is, as a rule, a transient
process. Here one meets with the problem of maintaining a
desired nature of transient behaviour, or a programme of
motion of the representative point in phase space, on condition
that the mathematical model does not exactly describe the
dynamic properties of the controlled plant, nor the perturbations
acting on the latter during the motion. Several possible ways
of solving this problem are now indicated with simple examples.
Let the mathematical model of the controlled plant be
represented in the form

ẋ = u (1)
where x is the output coordinate and u is the controlling action.
Given the equation of the controller in the form

u = −a₀x (2)

the equation of the mathematical model of the system as a whole is

ẋ + a₀x = 0 (3)
Accordingly, for any initial condition x₀, the process of motion is characterized by an exponential with the exponent −a₀. Now
suppose that there is a suspicion that, in fact, the control object
is described by the equation
ẋ = f(x, u, t) (4)

where

f(x, u, t) = −a(t) φ(x) + F(t) + u (5)

a(t) and F(t) are random functions, and φ(x) is also random. Here it is known beforehand that |a(t) φ(x)| and |F(t)| are bounded and small compared with the available |u|.
In this situation one can make a decision concerning the discrete
control of the plant, such that at each step it is possible to con-
trol the fulfilment of the a priori programme, which is expressed
as a function of time in the following manner:
x = x₀ e^(−a₀t) (6)
With discrete control we require a relationship between the value of x(t) and the value of this coordinate at the instant of time t + Δt, i.e. the quantity x(t + Δt). According to (6) this programme relationship is given by the relation

x(t + Δt) = x(t)·e^(−a₀Δt) (7)
where Δt is the interval of discreteness, or the step of control. Using an analogy between the numerical solution of differential equations by difference methods and the discrete control of controlled plants, one writes eqn (5) in the discrete form

x(t + Δt) = x(t) + f(t + Δt/2)·Δt (8)

where

f(t + Δt/2) = f[x(t + Δt/2), u(t + Δt/2), t + Δt/2]
The discrete form (8) of the solution of eqn (5) is used in the method proposed by Bashkirov. (The method of Bashkirov is described in the monograph by Popov4.) According to eqn (8), by measuring the value of x(t) at each step one can select the increment Δu at the instant t + Δt/2 such that x(t + Δt) is governed by condition (7). The discrete form (8) is convenient in that the interval Δt/2 is available in the procedure for calculating Δu(t + Δt/2). The information for calculating Δu(t + Δt/2), apart from the known value of the desired x(t + Δt), is obtained from the preceding values of Δu and x. In the general case Δu(t + Δt/2) is calculated by the formula:

Δu(t + Δt/2) = Δu(t − Δt/2) + ν[x(t + Δt), x(t − Δt), x(t − 2Δt)] (9)
The form of the function v depends on the particular theory of
extrapolation which is adopted.
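A minimal sketch of this idea (not Bashkirov's published algorithm) is given below: at each step the control is chosen so that the one-step prediction lands on the programme value (7), with the unknown part of the plant drift estimated from the preceding measured increment; the 'true' plant right-hand side is a hypothetical stand-in for eqn (5).

```python
import numpy as np

a0, dt, T, x0 = 0.4, 0.05, 10.0, 2.0

def f_true(x, u, t):
    # hypothetical plant of type (4)-(5): inexactly known drift, disturbance, plus u
    return -(0.04 + 0.02 * np.sin(0.3 * t)) * x + 0.1 * np.cos(t) + u

x, x_prev, u_prev, t = x0, x0, 0.0, 0.0
for k in range(int(T / dt)):
    x_target = x * np.exp(-a0 * dt)                       # programme relationship (7)
    drift_est = (x - x_prev) / dt - u_prev if k else 0.0  # estimate of (f - u) from the last step
    u = (x_target - x) / dt - drift_est                   # control that makes the prediction hit the target
    x_prev, u_prev = x, u
    x_mid = x + 0.5 * dt * f_true(x, u, t)                # midpoint step, in the spirit of (8)
    x = x + dt * f_true(x_mid, u, t + dt / 2)
    t += dt
```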
The information about the preceding values of x and u also includes information about changes in the properties of the object and of the perturbation F(t). The use of this information for calculating Δu(t + Δt/2) represents additional feedback, or self-adjusting, signals, and makes it possible to realize the desired programme of motion more exactly6.

Equation (5) and its results can be generalized without difficulty to multi-dimensional systems of any order. In this case the equation of the controlled plant in vector form is

dX/dt = f(X, U, t) (10)

where X is the vector with the components xᵢ (i = 1, 2, …, n), f is the vector with the components fᵢ (i = 1, 2, …, n), and U is the control vector with the components uᵢ (i = 1, 2, …, ν); ν ≤ n.
The maintenance of planned dynamic properties of single-action systems can also be realized by continuous control. Suppose, for example, that the mathematical model of the controlled plant is written in the form

ẍ + a₁ẋ = u (11)

and |u| ≤ u₀. Suppose also that it is required to realize the system with maximum operating speed. According to Pontryagin's principle of the maximum5, the equation of the controller is of the form

u = −u₀ sign[x + f(ẋ)] (12)

However, there is a suspicion that in fact the controlled plant can be described by the equation

ẍ + a₁*(t)ẋ + a₀*(t)x = u + F(t) (13)
In view of this one proceeds as follows, forming the accelera-
tion control circuit ?k = n by means of the controlling action u
(Figure 2). If the pass band of this circuit is sufficiently high the
error en = n? ? n will be close to zero and the programme
acceleration will be equal to the actual acceleration. In more
complex cases the acceleration control circuit, like the pitch
angle control circuit (Figure I), can be a self-adjusting circuit.
If now the programme acceleration is close to the actual accelera-
tion, any desired variation of the coordinate x and its derivative
may be required. Thus, to form the system of maximum operat-
ing speed in accordance with the mathematical model (11), it is
sufficient to put
= = ?a1 ?u1 sign [x f (a 1,)] (14)
The block diagram which realizes (14) is shown in Figure 3.
In expression (14) u? is always less than et, since some part of the
control resource u0 = u? goes to compensate the perturbation
F (t) and to compensate the difference between the coefficients
a,* (t) and c/o* (t) on the one hand and the coefficients of the
mathematical model a, and (20 = 0 on the other. Thus, at the
expense of some reduction of operating speed (since u, < uo)
a definite realization of the programme for the optimum
transient process is obtained.
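The structure of Figures 2 and 3 can be sketched as follows (parameter values and the 'true' plant are hypothetical; the paper's switching function f(ẋ, a₁, u₁) is not reproduced, so the classical double-integrator switching curve is used purely as an illustration, with a₁ taken as zero in the outer law):

```python
import numpy as np

u1, k_servo, dt = 1.0, 10.0, 1e-3      # relay level, inner-loop gain (1/sec), step

def n_programme(x, x_dot):
    # outer law in the spirit of (14): illustrative switching function only
    return -u1 * np.sign(x + x_dot * abs(x_dot) / (2.0 * u1))

def plant_acc(x, x_dot, u, t):
    # hypothetical "true" plant of type (13): uncertain coefficients and a disturbance
    return u + 0.05 * np.sin(2.0 * t) - 0.3 * x_dot - 0.02 * x

x, x_dot, u, t = 2.0, 0.0, 0.0, 0.0
for _ in range(int(20.0 / dt)):
    n_pr = n_programme(x, x_dot)
    n_actual = plant_acc(x, x_dot, u, t)
    u += k_servo * (n_pr - n_actual) * dt   # inner acceleration-control servo
    x_dot += n_actual * dt
    x += x_dot * dt
    t += dt
```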
Any other law of variation of the coordinate x can be required in this example. It may, for example, be required that the transient process should take place in accordance with the solution of a linear equation with constant coefficients

ẍ + a₁ẋ + a₀x = 0 (15)

For this it is obviously necessary to put ẍ_pr = −a₁ẋ − a₀x.

Figure 4 shows oscillograms which have been obtained on an electronic simulator for the case when |a₀*(t)| < 0.05; |a₁*(t)| < 1.0; a₁ = 0.4; a₀ = 0.04. The gain of the servo motor of the acceleration control circuit was taken as 10 1/sec. It will be seen from the oscillograms that the perturbation F(t) and the fluctuations of the coefficients a₁*(t) and a₀*(t) have no effect on the course of the coordinate x, which is governed by the solution of eqn (15).
The results explained by this example are also capable of
very wide generalization. The generalization consists in that
for a known indeterminacy of the properties of the controlled
plant and of the acting perturbations it is advisable to organise
a self-adjusting subsystem of rapidly varying coordinates of the
controlled plant or of its higher-order derivatives. After the
programme variation of the rapidly varying coordinates or of
their higher-order derivatives has been largely determined by
this subsystem, the law governing the variation of the slowly
varying coordinates or lower-order derivatives of the output
quantity of the controlled plant can be built as desired. The
additional feedbacks which make it possible to realize the
required programme of dynamic properties of the system in the
example under consideration are the feedbacks amongst which
are the self-adjusting circuits for acceleration control.
Very often the realization of desired dynamic properties for
single-acting systems is handicapped by unfavourable combina-
tions of initial conditions. In non-linear systems these unfavour-
able combinations of initial conditions can lead to instability
of the process for a given realization. The effect of unfavourable
combinations of initial conditions can be eliminated by changing
the initial values of the coordinates and by the formation of special
signals which act on the system and which are functions of the
initial conditions. Briefly, this means creating special feedbacks
with respect to the initial conditions. The idea of using feedback
with respect to the initial conditions has already been published
in a paper by the authors.
(2) In developing systems with programme control of the
output coordinates of the controlled plant use may, to a large
extent, be made of the foregoing ideas and methods which relate
to the realization of programmed dynamic properties of control
systems.
Suppose, for example, that it is required to vary according to the programme g_pr(t) the output coordinate x(t) of the controlled plant (Figure 5). For this, the input of a closed system consisting of the controlled plant and the controller receives the programme signal g_pr(t). For a system with a high pass band, if no perturbations are present, it is well known that x ≈ g_pr(t).
However, a random perturbation which is not taken into account can considerably distort the desired programmed variation g_pr(t). In order to fulfil the programme more accurately, an additional feedback is formed (shown by the dotted line in Figure 5) and the programme correction circuit abcdega is thereby formed. The programme signal g_pr(t) is compared with the actual signal, and the difference signal acts at the input to the fundamental system via a self-adjusting correction circuit with a high gain W_k. The correction circuit may consist of the elements 2, K, 6, 7, 8, 9, 10 and 11 which are shown in Figure 1. Assuming, for the sake of simplicity, W_k = K, the following operator relationship is obtained between the input and output for the circuit of Figure 5:
x = [Φ(p)(1 + K)/(1 + KΦ(p))] g_pr(t) + [Φ_f(p)/(1 + KΦ(p))] F(t) (16)

where Φ(p) = W_p W_o/(1 + W_p W_o) and Φ_f(p) = W_o/(1 + W_p W_o)
and expression (16) can be written as

x = g_pr(t)/(1 + …) + … (17)

It will be seen from (17) that if K → ∞

x = g_pr(t) (18)
independently of the action of the perturbation F(t) and the fluctuations of the parameters of the controlled plant. It is understood that in this case condition (18) is fulfilled approximately, since K = ∞ is not realizable in actual conditions.
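A rough simulation of the correction principle (plant, regulator and disturbance are hypothetical first-order stand-ins, not taken from the paper) shows how the high-gain path K suppresses the effect of an unmeasured perturbation:

```python
import numpy as np

dt, T, K = 1e-3, 10.0, 50.0
g_pr = lambda t: np.sin(0.5 * t)                 # programme signal
F = lambda t: 0.5 * np.sign(np.sin(0.2 * t))     # unmeasured disturbance

x = 0.0
for t in np.arange(0.0, T, dt):
    correction = K * (g_pr(t) - x)               # high-gain programme-correction path
    u = 5.0 * (g_pr(t) + correction - x)         # basic regulator (proportional, illustrative)
    x += dt * (-x + u + F(t))                    # hypothetical plant: dx/dt = -x + u + F
```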
Another example of programme control is the method of stabilizing acceleration (Figures 2 and 3) with subsequent construction of the desired programme acceleration ẍ_pr by means of a computer. Using this method the 'logarithmic navigation'7 can be realized, when the programme acceleration is ẍ_pr = kx and, consequently, the coordinate x is the solution of the differential equation

ẍ − kx = 0
A very important case of programme control is that when it is important to maintain a functional relationship between one coordinate and another. For example, the optimum programme, as regards operating speed, for the altitude and speed of an aircraft, as calculated, for instance, by the method of dynamic programming, is a programme in the coordinates H and V, i.e. it is given as a functional relationship H_pr = H_pr(V_pr) (Figure 6), both the quantities H and V here being output coordinates of an aircraft controlled by the altitude rudder (the thrust of the engine is usually maximum in this case). The relationship H_pr = H_pr(V_pr) can always be represented parametrically:

H_pr = H_pr(t)
V_pr = V_pr(t)

The altitude control circuit H can now be formed by the usual method (Figure 7). If the system is unaffected by perturbations, the calculated characteristics of the aircraft coincide with the actual characteristics, and the atmosphere through which the aircraft is flying remains standard, the completion of the programme H_pr(t) will at the same time imply the completion of the programme V_pr(t), and consequently of the programme relationship H_pr = H_pr(V_pr). However, if all the stated conditions are not fulfilled, the completion of H_pr(t) will not generally imply the fulfilment of V_pr(t), and consequently the completion of H_pr = H_pr(V_pr). For the planned programme H_pr = H_pr(V_pr) to be fulfilled with acceptable accuracy, it is necessary to introduce a programme correction circuit8. For this purpose the programme value of speed is compared with the actual speed, and the difference, in terms of the transfer function W_k, changes the rate at which the programme is delivered, i.e. the speed of the clocks of the programme mechanisms H_pr and V_pr (Figure 8). As a result the speed of the clock mechanism of the programme is not uniform, and the programmes H_pr and V_pr become functions of some irregularly varying argument τ, i.e. H_pr(τ) and V_pr(τ). Elimination of the argument τ again brings us back to the original relationship H_pr(V_pr). However, insofar as the rate of delivery of the programme signal H_pr at the input of the system conforms to the fulfilment of the speed programme, the accuracy of the realization of H_pr = H_pr(V_pr) is substantially increased.
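One possible realization of such a programme-clock correction (the paper does not give the concrete law; the proportional rate adjustment below is an assumption) is sketched here:

```python
import numpy as np

H_pr = lambda tau: 1000.0 + 50.0 * tau       # hypothetical altitude programme
V_pr = lambda tau: 200.0 + 5.0 * tau         # hypothetical speed programme

def run(V_actual_of_t, k_tau=0.05, dt=0.1, T=100.0):
    tau, out = 0.0, []
    for t in np.arange(0.0, T, dt):
        V = V_actual_of_t(t)
        # the programme clock runs faster or slower according to the speed error
        dtau_dt = max(0.0, 1.0 + k_tau * (V - V_pr(tau)))
        tau += dtau_dt * dt
        out.append((t, H_pr(tau), V_pr(tau)))
    return out

# example: the aircraft accelerates more slowly than planned
trace = run(lambda t: 200.0 + 4.0 * t)
```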
A similar circuit can be constructed for the motion of some
controlled plant along a prescribed unperturbed trajectory
y_e = y_e(x_e) in the coordinates x, y (Figure 9). However, this report is confined to the plane problem. Suppose that the speed
of the object is V and that the orientation of the speed vector
is characterized by the angle ψ. The obvious relationship between the coordinates x, y and the speed is expressed as follows:

ẏ = V sin ψ + W_y (19)

ẋ = V cos ψ + W_x (20)
(20)
where W? and We,, are, perturbations in the form of speeds of
displ_cment of the environment relative to the system of co-
ordinates x, y. [In the formulae (19) and (20) the actual values
of the coordinates of the controlled plant are used. The values
of the desired unperturbed trajectory are denoted as x, and ye.]
Consider the kinematic problem, i.e. suppose that the angle ψ of the speed vector can be arranged arbitrarily. On this assumption the control circuit for the coordinate y is formed. Here it is required that

sin ψ = k_c ε (21)

where

ε = y_pr − y (22)

the term y_pr = y_pr(t) here being the programme value of the coordinate y, which does not coincide, as will be seen below, with the unperturbed value y_e = y_e(t).
The equation for the coordinate y is found from eqns (19), (21) and (22):

ẏ + Vk_c y = Vk_c y_pr(t) + W_y (23)

Assuming y = y_e + Δy, we now obtain the equation for the deviation Δy from the unperturbed motion:

Δẏ + Vk_c Δy = Vk_c(y_pr − y_e) − ẏ_e + W_y (24)
It is worthwhile selecting the programme signal y_pr in accordance with the formula

y_pr = y_e + ẏ_e/(Vk_c) (24a)

For this value of the programme signal, eqn (24) becomes

Δẏ + Vk_c Δy = W_y (25)
This implies that in the absence of the action W_y the deviation from the unperturbed trajectory will tend to zero. A constant action will cause a constant error.
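The guidance law (19)-(25) is easy to check in simulation; in the sketch below (parameter values are illustrative only) a constant side perturbation produces exactly the constant deviation W_y/(Vk_c) that eqn (25) implies:

```python
import numpy as np

V, k_c, dt, T = 50.0, 0.02, 0.05, 60.0
y_e = lambda t: 100.0 * np.sin(0.05 * t)          # desired unperturbed trajectory
y_e_dot = lambda t: 5.0 * np.cos(0.05 * t)
W_y = lambda t: 2.0                               # constant side perturbation

y, t = 0.0, 0.0
for _ in range(int(T / dt)):
    y_pr = y_e(t) + y_e_dot(t) / (V * k_c)        # programme signal, eqn (24a)
    s = np.clip(k_c * (y_pr - y), -1.0, 1.0)      # sin(psi) = k_c*eps, bounded
    y += dt * (V * s + W_y(t))                    # kinematics, eqn (19)
    t += dt
# with W_y = const the deviation y - y_e settles to W_y/(V*k_c), as eqn (25) implies
```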
The control block diagram for the coordinate y is shown in Figure 10. It is obvious that a single control circuit according to the coordinate y cannot ensure the necessary control of the coordinate x or the fulfilment of the required programme y_e = y_e(x_e). According to the circuit shown in Figure 10, the coordinate x varies according to the expression

x = V ∫₀ᵗ cos ψ_e dt − V ∫₀ᵗ tan ψ_e (…) dt + ∫₀ᵗ W_x dt (26)

The first term in eqn (26) is the desired unperturbed value x = x_e; the second term can be limited, since it is determined by the error in the circuit for the stabilization of y; and the third term, for W_x = const, will continuously increase. In order to realize the programme of motion along the unperturbed trajectory it is necessary to proceed in the same way as in the previous case (see Figure 8), i.e. it is necessary to form, by measuring the error x_pr − x, a signal which acts on the speed of the programme mechanisms y_pr(τ) and x_pr(τ).
It should be noted that it is much simpler to correct the programme by varying the speed of the programme clocks if in the first example dV/dt > 0, and in the second example if dx/dt > 0. Generalizing this method of correction to the programme of a system with n coordinates and ν controlling devices, we note that in this case the argument of control (the non-decreasing coordinate V in the first example, and the non-decreasing coordinate x in the second) should be any coordinate of the system whose derivative is of constant sign9.

Frequently such a coordinate originates naturally from the statement of the problem. For example, this is the case if it is required to control the ingredients of a mixture as a function of the volume of this mixture when this volume is varying in a monotonic way.
References
1 MELLEN, D. Application of adaptive flight control. Symposium
on Self-adjusting Systems. Rome, April 1962
2 KRASOVSKII, A. A. The dynamics of continuous control systems
with extremal self-adjustment of correcting devices. Automatic and
Remote Control. 1961. London; Butterworths
3 POSPELOV, G. S. Concerning the principles of construction of
various types of self-adjusting control system. Symposium on Self-
adjusting Systems, Rome. April 1962
4 POPOV, YE. P. Dynamics of Control Systems. 1954. Moscow; GITTL
5 PONTRYAGIN, L. S., BOLTYANSKII, V. G. and GAMKRELIDZE, R. V. Mathematical Theory of Optimal Processes. 1961. Moscow; Fizmatgiz
6 POSPELOV, G. S. Various methods of improving the quality of processes of regulation and control. Symposium on Use of Computer Engineering in Automation of Production (in Russian), 1961. Moscow; Mashgiz
7 GREEN. Logarithmic navigation for precise guidance of space vehicles. IRE Trans. ANE-8, No. 2 (1961)
8 KOZIOROV, L. M. and KOROBKOV, M. N. A method of stabilizing the functional relationship between two interrelated variables by means of one control device. Izv. Akad. Nauk SSSR, OTN, Energetika i Avtomatika, No. 4 (1961)
9 LETOV, A. M. The Stability of Non-linear Control Systems. 1955. Moscow; GITTL
Figure 1. ϑ: angle of pitch; ϑ_pr: programme value of pitch angle; 1: object; 2: correcting circuit; 3: model; 4, 5: measuring devices for the angular velocity and the angle of pitch; 6, 7: detectors; 8, 9: high- and low-pass filters; 10: servo motor; 11: limiter
Figure 2

Figure 3

Figure 4
Figure 5. W_p: regulator; W_o: object; W_k: self-adjusting correction circuit with a high amplification factor
Figure 6
Figure 7

Figure 8

Figure 9