UDC 519.711.3

Siberian Journal of Science and Technology. 2018, Vol. 19, No. 1, P. 37-43

ABOUT NON-PARAMETRIC IDENTIFICATION OF T-PROCESSES

A. V. Medvedev1, D. I. Yareshchenko2*

1Reshetnev Siberian State University of Science and Technology 31, Krasnoyarsky Rabochy Av., Krasnoyarsk, 660037, Russian Federation

2Siberian Federal University 26/1, Academician Kirensky St., Krasnoyarsk, 660074, Russian Federation E-mail: [email protected]

This paper is devoted to the construction of a new class of models under incomplete information. We consider multidimensional inertia-free objects for the case when the components of the output vector are stochastically dependent and the character of this dependence is a priori unknown. The study of a multidimensional object inevitably leads to a system of implicit dependences of the output variables of the object on the input variables; in this case the dependence also extends to some components of the output vector. The key issue in this situation is determining the nature of this dependence, for which a certain amount of a priori information is necessary. Taking into account that the main purpose of a model of such objects is the prediction of the output variables for a known input, it is necessary to solve a system of nonlinear implicit equations whose form is unknown at the initial stage of the identification problem; it is only known that one or another output component depends on other variables that determine the state of the object.

Thus, a rather nontrivial situation arises: a system of implicit nonlinear equations has to be solved under conditions where the equations themselves are not available in the usual sense. Consequently, the model of the object (and this is the main identification task) cannot be constructed in the way accepted in the existing theory of identification, owing to the lack of a priori information. If it were possible to parametrize the system of nonlinear equations, then for a known input it would suffice to solve this system, since it would be known once the parametrization step is overcome. The main content of this article is the solution of the identification problem for T-processes when the parametrization stage cannot be overcome without additional a priori information about the process under investigation.

In this connection, the scheme for solving the system of nonlinear equations (which are unknown) can be represented as a sequential algorithmic chain. First, a vector of discrepancies is formed on the basis of the available training sample, which includes observations of all components of the input and output variables. After that, the estimate of the object output for known values of the input variables is constructed on the basis of Nadaraya-Watson estimates. Thus, for given values of the input variables of a T-process, we can carry out the procedure of estimating (forecasting) the output variables.

Numerous computational experiments on the proposed T-models have shown their rather high efficiency. The article presents the results of computational experiments illustrating the effectiveness of the proposed technology of forecasting the values of the output variables from the known input.

Keywords: discrete-continuous process, identification, T-models, T-processes


Introduction. In numerous multidimensional real processes the output variables are measured not only at different time intervals but also after considerable delays. As a result, dynamic processes have to be considered as inertia-free processes with delay. For example, in grinding of products the time constant of the process is 5-10 minutes, while the controlled output variable, for example the fineness of grinding, is measured once every two hours. In this case the investigated process can be represented as an inertia-free process with delay. If the output variables of the object are in some way stochastically dependent, we call such processes T-processes. Similar processes require a view of the identification problem different from the existing ones; the main point is that identification of such processes must be carried out differently from what the existing theory of identification prescribes. We should pay special attention to the fact that the term "process" is used below not in the sense of processes of probabilistic nature, such as stationary, Gaussian, Markov processes, martingales, etc. [1]. Below we focus on T-processes actually occurring or developing over time: in particular, technological, industrial and economic processes, the process of a person's recovery (or disease) and many others.

Identification of multidimensional stochastic processes is a topical issue for many technological and industrial processes of discrete-continuous nature [2]. The main feature of these processes is that the vector of output variables x = (x_1, x_2, ..., x_n), consisting of n components, is such that the components of this vector are stochastically dependent in a way that is unknown in advance. We denote the vector of input variables by u = (u_1, u_2, ..., u_m). This formulation leads to a mathematical description of the object in the form of some analogue of implicit functions F_j(u, x) = 0, j = 1, ..., n. The main feature of this modeling task is that the class of dependences F_j(·) is unknown. A parametric class of vector functions F_j(u, x, α), j = 1, ..., n, where α is a vector of parameters, cannot be specified either, so methods of parametric identification [3; 4] are not applicable: the class of functions, accurate to within parameters, cannot be defined in advance, and the well-known identification methods are not suitable in this case [3; 4]. In this way the identification task can be seen as the solution of the system of nonlinear equations

$$F_j(u, x) = 0, \quad j = 1, \dots, n, \qquad (1)$$

with respect to the components of the vector x = (x_1, x_2, ..., x_n) at known values of u. Here it is expedient to use methods of nonparametric statistics [5; 6].

T-processes. Nowadays the role of identification of inertia-free systems with delay is increasing [7; 8]. This is explained by the fact that some of the most important output variables of dynamic objects are measured at long time intervals which exceed the time constant of the object [9; 10].

The main feature of identification of a multidimensional object is that the investigated process is described by a system of implicit stochastic equations:

$$F_j\bigl(u(t - \tau), x(t), \xi(t)\bigr) = 0, \quad j = 1, \dots, n, \qquad (2)$$

where F_j(·) is unknown and τ is the delay in the various channels of the multidimensional system. Further, τ is omitted for simplicity.

In general, the investigated multidimensional system implementing T-processes can be presented as in fig. 1.

Fig. 1. Multidimensional object

In fig. 1 the following designations are accepted: u = (u_1, ..., u_m) is the m-dimensional vector of input variables; x = (x_1, ..., x_n) is the n-dimensional vector of output variables.

Through the various channels of the investigated process, the dependence of the j-th component of the vector x on components of the vector u can be presented as x^<j> = f_j(u^<j>), j = 1, ..., n.

Every j-th channel depends on several components of the vector u, for example u^<5> = (u_1, u_3, u_6), where u^<5> is a compound vector. When building models of real technological and industrial processes (complexes), the vectors x and u are often used as compound vectors. A compound vector is a vector composed of several components of the original vector, for example x^<j> = (x_2, x_5, x_7, x_8) or another set of components. In this case the system of equations takes the form F_j(u^<j>, x^<j>) = 0, j = 1, ..., n.
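
To make the compound-vector notation concrete, here is a small sketch in which compound vectors are represented simply as index lists over the full vectors u and x; the index sets and names are illustrative, not taken from the article.

```python
import numpy as np

# Full input and output vectors of the object (illustrative values).
u = np.array([0.7, 1.2, 2.9, 0.4, 1.8])   # u = (u1, ..., u5)
x = np.array([3.1, 0.8, 5.4])             # x = (x1, x2, x3)

# Hypothetical compound-vector layout: the j-th channel lists which components
# of u and x enter the relation F_j(u^<j>, x^<j>) = 0 (0-based indices).
u_idx = {1: [0, 1, 4], 2: [3, 4], 3: [1, 2, 4]}
x_idx = {1: [0, 2],    2: [0, 1], 3: [0, 1, 2]}

# The compound vectors u^<j>, x^<j> are then just slices of the full vectors.
j = 1
u_compound, x_compound = u[u_idx[j]], x[x_idx[j]]
print(u_compound, x_compound)              # -> [0.7 1.2 1.8] [3.1 5.4]
```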

T-models. Processes whose output variables are connected by unknown stochastic dependences are called T-processes, and their models are called T-models. From the above it is easy to see that the process in fig. 1 can be described by a system of implicit functions:

$$F_j\bigl(u^{\langle j \rangle}, x^{\langle j \rangle}\bigr) = 0, \quad j = 1, \dots, n, \qquad (3)$$

where u^<j>, x^<j> are compound vectors. The main feature of modeling such a process under nonparametric uncertainty is that the functions F_j(u^<j>, x^<j>) in (3) are unknown. Obviously, the system of models can be presented as follows:

$$F_j\bigl(u^{\langle j \rangle}, x^{\langle j \rangle}, x_s, u_s\bigr) = 0, \quad j = 1, \dots, n, \qquad (4)$$

where x_s, u_s are temporary vectors (the data accumulated by the time moment s), in particular x_s = (x_1, ..., x_s) = (x_{11}, x_{12}, ..., x_{1s}, ..., x_{21}, x_{22}, ..., x_{2s}, ..., x_{n1}, x_{n2}, ..., x_{ns}), but even in this case the functions F_j(·), j = 1, ..., n, are unknown. In the theory of identification such problems are neither solved nor posed. Usually a parametric structure of (3) is chosen; unfortunately, this is difficult to do because of the lack of a priori information, and determining a parametric structure takes a long time. In that case the model is represented as

$$F_j\bigl(u^{\langle j \rangle}, x^{\langle j \rangle}, \alpha\bigr) = 0, \quad j = 1, \dots, n, \qquad (5)$$

where α is a vector of parameters. Then follow the estimation of the parameters from the elements of the training sample u_i, x_i, i = 1, ..., s, and the solution of the system of nonlinear interrelated relations (5). Success in building such a model depends on a qualitative parametrization of the system (5).
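
For contrast, here is a purely illustrative example (not taken from the article) of what an overcome parametrization step could look like. If the j-th relation in (5) were known, say, to be linear in its arguments,

$$F_j\bigl(u^{\langle j\rangle}, x^{\langle j\rangle}, \alpha\bigr) = x_j - \alpha_{0} - \sum_{k:\; u_k \in u^{\langle j\rangle}} \alpha_{k} u_k - \sum_{l \ne j:\; x_l \in x^{\langle j\rangle}} \beta_{l} x_l = 0,$$

then the parameters α, β could be estimated from the training sample, for example by least squares, and the forecast would reduce to solving the resulting system of equations. It is precisely this kind of structural knowledge that is absent for the processes considered here.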

Further we consider the problem of building T-models under nonparametric uncertainty, when the system (5) is not known even to within a set of parameters.

Let values of the input variables arrive at the input of the object; these values, of course, are measured. A training sample x_i, u_i, i = 1, ..., s, must be available. In this case the estimation of the components of the output vector x at known values of u, as noted above, leads to the need to solve the system of equations (4). Since the dependence of the output components on the components of the input vector is unknown, it is natural to use methods of nonparametric estimation [5; 11].

At a given value of the vector of input variables u = u', it is necessary to solve the system (4) with respect to the vector of output variables x. The general scheme of the solution of such a system is as follows (a compact code sketch of both steps is given after the kernel definition below).

1. First, the discrepancies are calculated by the formula

$$\varepsilon_j(i) = F_j\bigl(u^{\langle j \rangle}, x^{\langle j \rangle}(i), x_s, u_s\bigr), \quad j = 1, \dots, n, \qquad (6)$$

where F_j(·) is taken as the nonparametric Nadaraya-Watson regression estimate [10]:

$$\varepsilon_j(i) = F_j\bigl(u^{\langle j \rangle}, x_j(i)\bigr) = x_j(i) - \frac{\sum\limits_{l=1}^{s} x_j[l] \prod\limits_{k=1}^{\langle m \rangle} \Phi\!\left(\dfrac{u'_k - u_k[l]}{c_{s u_k}}\right)}{\sum\limits_{l=1}^{s} \prod\limits_{k=1}^{\langle m \rangle} \Phi\!\left(\dfrac{u'_k - u_k[l]}{c_{s u_k}}\right)}, \qquad (7)$$

where j = 1, ..., n, and ⟨m⟩ is the dimension of the corresponding compound input vector, ⟨m⟩ ≤ m; further this designation is also used for the other variables. The bell-shaped functions Φ((u'_k − u_k[l]) / c_{s u_k}) and the fuzziness parameter c_{s u_k} satisfy the usual convergence conditions:

$$\Phi(\cdot) < \infty; \qquad c_s^{-1} \int_{\Omega(u)} \Phi\bigl(c_s^{-1}(u - u_i)\bigr)\, du = 1;$$

$$\lim_{s \to \infty} c_s^{-1} \Phi\bigl(c_s^{-1}(u - u_i)\bigr) = \delta(u - u_i); \qquad \lim_{s \to \infty} c_s = 0; \qquad \lim_{s \to \infty} s c_s = \infty.$$

2. The next step is the conditional expectation

$$x_j = M\bigl\{x_j \mid u^{\langle j \rangle}, \varepsilon = 0\bigr\}, \quad j = 1, \dots, n. \qquad (8)$$

As an estimate of (8) we take the nonparametric Nadaraya-Watson regression estimate [10]:

$$\hat{x}_j = \frac{\sum\limits_{i=1}^{s} x_j[i] \prod\limits_{k_1=1}^{\langle m \rangle} \Phi\!\left(\dfrac{u'_{k_1} - u_{k_1}[i]}{c_{s u_{k_1}}}\right) \prod\limits_{k_2=1}^{\langle n \rangle} \Phi\!\left(\dfrac{\varepsilon_{k_2}[i]}{c_{s \varepsilon}}\right)}{\sum\limits_{i=1}^{s} \prod\limits_{k_1=1}^{\langle m \rangle} \Phi\!\left(\dfrac{u'_{k_1} - u_{k_1}[i]}{c_{s u_{k_1}}}\right) \prod\limits_{k_2=1}^{\langle n \rangle} \Phi\!\left(\dfrac{\varepsilon_{k_2}[i]}{c_{s \varepsilon}}\right)}, \quad j = 1, \dots, n, \qquad (9)$$

where the bell-shaped functions Φ(·) are taken as the triangular kernel:

$$\Phi\!\left(\frac{u_{k_1} - u_{k_1}[i]}{c_{s u_{k_1}}}\right) = \begin{cases} 1 - \dfrac{\bigl|u_{k_1} - u_{k_1}[i]\bigr|}{c_{s u_{k_1}}}, & \dfrac{\bigl|u_{k_1} - u_{k_1}[i]\bigr|}{c_{s u_{k_1}}} \le 1, \\[2mm] 0, & \dfrac{\bigl|u_{k_1} - u_{k_1}[i]\bigr|}{c_{s u_{k_1}}} > 1. \end{cases}$$
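
To make the two-step scheme (6)-(9) concrete, below is a minimal Python sketch of both steps. It assumes a triangular kernel, a single common fuzziness parameter for all inputs and another for all discrepancies, and illustrative array and function names; none of this code is taken from the article.

```python
import numpy as np

def kernel(z):
    """Triangular bell-shaped kernel: 1 - |z| for |z| <= 1, otherwise 0."""
    z = np.abs(z)
    return np.where(z <= 1.0, 1.0 - z, 0.0)

def discrepancies(u_query, U, X, c_su):
    """Step 1, eq. (7): eps[i, j] = x_j(i) minus the Nadaraya-Watson estimate of x_j at u_query.

    U : (s, m) sampled inputs, X : (s, n) sampled outputs,
    u_query : (m,) given input u', c_su : fuzziness parameter of the input kernels.
    """
    w = np.prod(kernel((u_query - U) / c_su), axis=1)     # product of input kernels, shape (s,)
    if w.sum() == 0.0:
        raise ValueError("no sample points inside the kernel support; increase c_su")
    x_hat = (w[:, None] * X).sum(axis=0) / w.sum()        # NW estimate of every output at u'
    return X - x_hat                                      # discrepancy vectors eps(i)

def t_forecast(u_query, U, X, c_su, c_se):
    """Step 2, eq. (9): weight the sampled outputs by the input kernels and by kernels
    over the discrepancies, which concentrates the estimate near eps = 0."""
    eps = discrepancies(u_query, U, X, c_su)
    w = (np.prod(kernel((u_query - U) / c_su), axis=1)
         * np.prod(kernel(eps / c_se), axis=1))
    if w.sum() == 0.0:                                    # discrepancy kernels too narrow
        raise ValueError("empty support; increase c_se")
    return (w[:, None] * X).sum(axis=0) / w.sum()         # forecast of (x_1, ..., x_n)
```

With per-variable fuzziness parameters, as written in (7) and (9), the scalars c_su and c_se would simply be replaced by vectors of bandwidths; the single-scalar version is kept here for brevity.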

Computational experiment. For the computational experiment a simple object with five input variables u = (u_1, u_2, u_3, u_4, u_5), taking random values in the interval u ∈ [0; 3], and three output variables x = (x_1, x_2, x_3), where x_1 ∈ [−2; 11], x_2 ∈ [−1; 8], x_3 ∈ [−1; 8], was chosen. The sample of input and output variables is generated on the basis of the system of equations

$$\begin{cases} x_1 - 2u_1 + 1.5u_2 - u_5 - 0.3x_3 = 0, \\ x_2 - 1.5u_4 - 0.3\sqrt{u_3} - 0.6 - 0.3x_1 = 0, \\ x_3 - 2u_2 + 0.9\sqrt{u_3} - 4u_5 - 6.6 + 0.5x_1 - 0.6x_2 = 0. \end{cases} \qquad (10)$$

As a result we get a sample of measurements u_s, x_s, where u_s, x_s are temporary vectors. It should be noted that the process description (10) is needed only to obtain the training sample; there is no other information about the process under investigation. When dealing with a real object, the training sample is formed as a result of measurements carried out with the available means of control. In the case of stochastic dependence between the output variables, the process is naturally described, for example, by the following system of equations:

$$\begin{cases} F_{x_1}(x_1, x_3, u_1, u_2, u_5) = 0, \\ F_{x_2}(x_1, x_2, u_4, u_5) = 0, \\ F_{x_3}(x_1, x_2, x_3, u_2, u_3, u_5) = 0. \end{cases} \qquad (11)$$

The system of equations (11) is a dependence which, unlike the system (10), is known from the available a priori information.

Having obtained the sample of observations, we can proceed to the studied problem, which is finding the forecast values of the output variables x for a known input u. First, the discrepancies are calculated by (7), using the technique described earlier. We write the discrepancies as a system:

$$\begin{cases} \varepsilon_1(i) = F_1(x_1, x_3, u_1, u_2, u_5), \\ \varepsilon_2(i) = F_2(x_1, x_2, u_4, u_5), \\ \varepsilon_3(i) = F_3(x_1, x_2, x_3, u_2, u_3, u_5), \end{cases} \qquad (12)$$

where ε_j, j = 1, 2, 3, are the discrepancies, whose corresponding components of the output vector cannot be derived from parametric equations.

The forecast for the system (11) is carried out according to formula (9) for each output component of the object. Carrying out this procedure, we obtain the values of the output variables x under the input actions on the object u = u'; this is the main purpose of the required model, which can further be used in control systems of various kinds [8], including organizational ones [12].
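
The following sketch shows one possible way to reproduce the data-generation step of this experiment. Since system (10) is linear in x for a fixed input u, every training point can be obtained by a small linear solve, and interference of a given level can be added to the outputs; the function names and the noise model are assumptions of this sketch, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def object_output(u):
    """Solve system (10) for x = (x1, x2, x3) at a fixed input u (the system is linear in x)."""
    A = np.array([[ 1.0,  0.0, -0.3],
                  [-0.3,  1.0,  0.0],
                  [ 0.5, -0.6,  1.0]])
    b = np.array([2.0 * u[0] - 1.5 * u[1] + u[4],
                  1.5 * u[3] + 0.3 * np.sqrt(u[2]) + 0.6,
                  2.0 * u[1] - 0.9 * np.sqrt(u[2]) + 4.0 * u[4] + 6.6])
    return np.linalg.solve(A, b)

def make_sample(s=500, noise_level=0.0):
    """Training sample of size s; noise_level = 0.05 imitates the 5 % interference case
    (proportional uniform noise is one common choice; the article does not state its noise model)."""
    U = rng.uniform(0.0, 3.0, size=(s, 5))                # inputs drawn uniformly from [0, 3]
    X = np.array([object_output(u) for u in U])
    X += noise_level * np.abs(X) * rng.uniform(-1.0, 1.0, size=X.shape)
    return U, X

U, X = make_sample(s=500, noise_level=0.05)
# A forecast at a new input u' is then obtained with the two-step procedure (7), (9),
# e.g. with the t_forecast sketch given earlier in the text.
```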

First, we present the results of a computational experiment without interference. In this case, values of newly generated input variables (not included in the training sample) arrive at the input of the object. The adjustable parameter is the fuzziness parameter c_s, which here is taken equal to 0.4 (the value was determined as a result of numerous experiments aimed at reducing the quadratic error between the model and the object output [13; 14]); the fuzziness parameter is taken the same in formulas (7) and (9), and the sample size is s = 500. Below we give graphs of the object outputs for the components x_1, x_2 and x_3.

In fig. 2, 3 and 4 the output values of the object are marked with dots, and the output values of the model with crosses. The figures show a comparison of the true values of the output vector components from the test sample with their forecast values obtained using the algorithm (6)-(9).

We now present the results of another computational experiment, in which interference ξ is imposed on the values of the components of the object output vector x. The conditions of the experiment: sample size s = 500, interference acting on the output vector components of the object ξ = 5 %, fuzziness parameter c_s = 0.4 (fig. 5-7).

The conducted computational experiments confirmed the effectiveness of the proposed T-models. It should be emphasized that in this case we do not have a model in the sense generally accepted in the theory of identification [15]; rather, the T-model acts as a method of forecasting the output variables of the object at the known input u = u'.

Fig. 2. Forecast of the output variable x1 with no interference. Error δ = 0.71

Fig. 3. Forecast of the output variable x2 with no interference. Error δ = 0.71

Fig. 4. Forecast of the output variable x3 with no interference. Error δ = 0.71

Fig. 5. Forecast of the output variable x1 with interference 5 %. Error δ = 0.77

Fig. 6. Forecast of the output variable x2 with interference 5 %. Error δ = 0.77

Fig. 7. Forecast of the output variable x3 with interference 5 %. Error δ = 0.77


Conclusion. The problem of identification of inertia-free multidimensional objects with delay and with unknown stochastic relations between the components of the output vector is considered. A number of features arise here: the identification problem is posed under conditions of nonparametric uncertainty and, as a consequence, the process description cannot be represented to within a set of parameters. On the basis of the available a priori hypotheses, a system of equations describing the process with the help of the compound vectors x and u is formulated; nevertheless, the functions F_j(·) remain unknown. The article describes a method of calculating the output variables of the object at the known input, which allows them to be used in computer systems for various purposes. Some particular results of the computational studies are given above.

The conducted computational experiments showed a sufficiently high efficiency of T-modeling. In addition to the experiments reported here, the effects of interference of different levels and of different training sample sizes were studied, as well as objects of different dimensions.

References

1. Dub Dzh. L. Veroyatnostnye processy [Probabilistic processes]. Moscow, Izd-vo inostrannoy literatury Publ., 1956, 605 p.

2. Medvedev A. V. Osnovy teorii adaptivnyh sistem: monografiya [Fundamentals of adaptive systems theory]. Krasnoyarsk, SibGAU Publ., 2015, 526 p.

3. Ehjkhoff P. Osnovy identifikacii sistem upravleniya [Basics of identification of control systems]. Moscow, Mir Publ., 1975, 7 p.

4. Cypkin Ya. Z. Osnovy informacionnoj teorii identifikacii [Fundamentals of information theory of identification]. Moscow, Nauka Publ., 1984, 320 p.

5. Nadaraya E. A. Neparametricheskoe ocenivanie plotnosti veroyatnostey i krivoy regressii [Nonparametric estimation of probability density and regression curve]. Tbilisi, Tbilisskiy universitet Publ., 1983, 194 p.

6. Vasil'ev V. A., Dobrovidov A. V., Koshkin G. M. Neparametricheskoe ocenivanie funkcionalov ot raspredeleniy stacionarnyh posledovatel'nostey [Nonparametric estimation of functionals of stationary sequences distributions]. Moscow, Nauka Publ., 2004, 508 p.

7. Sovetov B. Ya., Yakovlev S. A. Modelirovanie sistem [Modeling of systems]. Moscow, Vysshaya shkola Publ., 2001, 343 p.

8. Cypkin Ya. Z. Adaptaciya i obuchenie v avtomaticheskih sistemah [Adaptation and training in automatic systems]. Moscow, Nauka Publ., 1968, 400 p.

9. Medvedev A. V. [The theory of non-parametric systems]. Vestnik SibGAU. 2010, No. 4 (30), P. 4-9 (In Russ.).

10. Fel'dbaum A. A. Osnovy teorii optimal'nyh avtomaticheskih sistem [Fundamentals of the theory of optimal automatic systems]. Moscow, Fizmatgiz Publ., 1963.

11. Medvedev A. V. Neparametricheskie sistemy adaptacii [Nonparametric adaptation systems]. Novosibirsk, Nauka Publ., 1983.

12. Medvedev A. V., Yareshchenko D. I. [About modeling of process of acquisition of knowledge by students at University]. Vysshee obrazovanie segodnya. 2017, No. 1, P. 7-10 (In Russ.).

13. Linnik Yu. V. Metod naimen'shih kvadratov i osnovy teorii obrabotki nablyudeniy [The method of least squares and the foundations of the theory of processing observations]. Moscow, Fizmatlit Publ., 1958, 336 p.

14. Amosov N. M. Modelirovanie slozhnyh sistem [Modeling of complex systems]. Kiev, Naukova dumka Publ., 1968, 81 p.

15. Antomonov Y. G., Harlamov V. I. Kibernetika i zhizn' [Cybernetics and life]. Moscow, Sov. Rossiya Publ., 1968, 327 p.


© Medvedev A. V., Yareshchenko D. I., 2018
