
AUTOMATIC TARGET RECOGNITION ALGORITHM FOR LOW-COUNT TERAHERTZ IMAGES

V.E. Antsiperov

Kotelnikov Institute of Radioengineering and Electronics of the Russian Academy of Sciences, Moscow, Russia

Abstract

The paper presents the results of developing an algorithm for automatic target recognition in broadband (0.1-10 THz) terahertz images. Due to the physical properties of terahertz radiation and the associated hardware, such images have low contrast, low signal-to-noise ratio and low resolution - i.e. all the characteristics of low-count images. Therefore, standard recognition algorithms designed for conventional images work poorly or are not suitable at all for the problem considered. We have developed a fundamentally different approach based on clustering 2D point clouds in accordance with a set of predefined patterns. As a result, we reduce the problem of target recognition to the problem of maximizing the image data likelihood with respect to the classes of model objects up to size and position. The resulting recognition algorithm has a structure close to that of the well-known EM algorithm; its formal scheme is given at the end of the paper.

Keywords: automatic target recognition, concealed objects detection, low-count images, THz imaging, EM-algorithm, classification, image recognition.

Citation: Antsiperov VE. Automatic target recognition algorithm for low-count terahertz images. Computer Optics 2016; 40(5): 746-751. DOI: 10.18287/2412-6179-2016-40-5-746-751.

Acknowledgments: The author expresses his gratitude to the RFBR for financial support, grant 16-29-09626 ofi_m.

Introduction

Terahertz (THz) radiation consists of electromagnetic waves with frequencies in the range between infrared and microwave radiation - from 0.1 to 10 THz [1]. Since this frequency range corresponds to wavelengths from 3 mm to 30 μm (see Fig. 1), the terahertz range is often referred to as the submillimeter range.

Fig. 1. The terahertz (THz) spectral range (wavelengths from 3 cm to 3 μm, frequencies from 10 GHz to 100 THz; the THz band lies between the microwave and infrared bands, with radio and visible light at the extremes)

A number of useful properties of THz radiation coincide with those of the neighboring spectral ranges. Like infrared and microwave radiation, THz radiation propagates along the line of sight (LOS) and is nonionizing (as opposed to X-rays). Like microwave radiation, THz radiation can penetrate through nonconductive materials: clothes, paper, wood, plastic, etc. (it should be borne in mind, however, that the THz penetration depth is, as a rule, slightly smaller and that THz radiation cannot penetrate through liquids [2]). Since THz radiation exhibits good penetrating power, it can be used to obtain images of hidden objects. For this reason it is a good basis for automatic target recognition (ATR) systems [3] developed for early detection of and warning about threats. Because the detection of such threats is one of the key issues of public-places security, the great interest in such systems is obvious. Many of them have either already been developed or are at the evaluation stage. Accordingly, in recent years there has been an extraordinary growth of publications on this subject, both in foreign editions [1, 4 - 7] and in Russian ones [8 - 10].

However, a number of existing problems partially diminish the initial optimism about THz-based ATR. First of all, THz radiation is usually weak (including the case of active illumination). Therefore, THz images are characterized by a low signal-to-noise ratio, which leads to low contrast and fuzzy shapes of the objects against the background scene. Secondly, because of the limited sensitivity of THz detectors, a THz image has to be formed by time-consuming scan procedures (sometimes forming individual pixels). As a result, the obtained images contain a small number of resolution elements - they belong to the class of low-count images. Therefore, in contrast to images in the visible range, which can be approximated with good accuracy by a continuous distribution of intensity over the image plane, THz images are rather 2D clouds of discrete points with a small number of intensity gradations, often binary (0/1) images, see Fig. 2 (A - visible image, B - THz image, C - infrared image).

Fig. 2. Image quality for different spectral ranges: visible, THz and infrared images [3]

From the above arguments it follows that standard recognition algorithms designed for conventional images, though suitable for ATR, will work poorly in the THz range. Therefore, the task of developing new THz image processing algorithms and techniques in the context of ATR is crucial today.

1. Automatic target recognition for low-count images

Automatic target recognition (ATR) technologies [11], as a rule, comprise the use of computer hardware in systems for the detection and recognition of hidden objects of interest by processing image data from cameras, antennas, radars and other sensors, for example THz sensors.

The fundamental problem of ATR is the detection and identification of objects (targets) of interest in the context of complex scenes with other registered objects, often in a very noisy environment. The precise definition of the concepts of target, scene and noise depends on the particular application. In ATR, the term classification is often used instead of the term identification, and although there are subtle differences (identification is the more precise category), in practical problems these differences are usually ignored and the term recognition is used instead of both.

Practical ATR systems, as a rule, include a pipeline of operations, as shown in Fig. 3 [11]. Ideally, the targets in the original image are successively detected and recognized by the pipeline operations and included in the output list of targets. As the original data move through the pipeline, the processing procedures become more specific and focused on certain target attributes. As a result, the amount of data associated with non-target objects should gradually decrease. Since there are usually many irrelevant objects in the scene and very few target objects (sometimes none), very sophisticated, non-trivial algorithms for original data processing, image segmentation and object recognition are required.

Fig. 3. Conceptual diagram of the data processing pipeline in ATR systems (Sensor Data → Detection → Segmentation → Recognition → Target List; background clutter, selected non-target objects and non-target clutter are discarded at the successive stages)

This paper presents the results of developing an algorithm for one of the procedures executed in the pipeline - the algorithm for automatic recognition of target objects (briefly, targets). A special feature of the algorithm is that it is initially focused on the specifics of terahertz images containing low-contrast, low-count objects with a low signal-to-noise ratio [12], such as those presented in Fig. 4 (A - a ceramic knife and a handgun hidden under the clothes; B - a handgun and a rectangular piece of radio-anechoic material under the clothes; C - a ceramic knife, also hidden under the clothes). It is assumed that the algorithm receives at its input a fragment of the scene already containing an object detected at the preceding steps of the processing pipeline. The purpose of the algorithm is to recognize the object according to the specified classification database (DB), which, as a rule, includes the classes of target objects.

Fig. 4. THz images; the radiometric temperature (grayscale, 290-320) is shown on the right [12]

2. Classification DB

Classes of objects in the classification DB are families of low-contrast, low-count images of similar subjects, which can be arbitrarily positioned or scaled.

Representatives of such classes (a, b, c), for example, for the ATR system that recognizes the images in Fig. 4, are shown in Fig. 5 (a - 'knife', b - 'rectangular', c - 'handgun', respectively). Let us first specify what is meant by similarity of subjects and leave the questions of their permissible locations and sizes for the next section.

Fig. 5. Formalized description of three classes of objects in the form of Gaussian mixture parameters (the centers and elliptical contours of four Gaussian components are shown over the images)

The main recognition problem is that the original data are usually presented in a form that is hardly suitable for their immediate recognition (in our case, in the form of binary low-count images). Recognition algorithms usually require a high-level representation of objects. Such a representation is achieved by some formalized description. Note that, depending on the degree of generality of the descriptions, one (formalized) description can correspond to a whole family of (similar) subjects.

In our previous works [13], related to the processing of discrete, spatially distributed data that can be represented as a point cloud, we developed a new method for their economical description, in a form convenient for computations, using Gaussian mixtures [14]. The basic idea of the method is that the set of cloud points is considered as a set of independently sampled random coordinate vectors {X_i}. These vectors are considered identically distributed according to the weighted sum (mixture) of Gaussian densities:

$$p(x \mid \Theta) = \sum_{j=1}^{N} \pi_j \frac{\sqrt{\det B_j}}{(2\pi)^{P/2}} \exp\!\left(-\frac{1}{2} Q_j(x)\right); \qquad \sum_{j=1}^{N} \pi_j = 1; \qquad Q_j(x) = (x - \mu_j)^T B_j (x - \mu_j), \tag{1}$$

where N is the number of mixture components, Θ is the set of parameters {π_j, μ_j, B_j} of all N components (the B_j being the precision, i.e. inverse covariance, matrices), and P is the dimension of the coordinate space (in [13] P was 3). As a result, it is possible to use ~NP² parameters for a relatively rough description of the cloud instead of a list of the coordinates {X_i} of all its points; for small N this is much more practical. If the same idea is applied to binary low-count images (P = 2), then each object will be associated with some Gaussian mixture whose parameters can be considered as a formalized object description (see Fig. 5).

Besides the fact that this description is much more practical and convenient in terms of computation, it provides an opportunity to introduce a quantitative criterion of similarity. Namely, two objects are considered similar if the Gaussian mixture (1) corresponding to the image of one of them describes well the image of the other, and vice versa. As a numerical measure of the description quality one can take, for example, the value of the likelihood ratio (better, its logarithm) of both images given the Gaussian mixture of one of them. This value can be compared with the likelihood ratio of that object image and, for example, an empty image of the same size - an image with a uniform probability distribution - considered as a natural reference point.

However, the most important argument in favor of the Gaussian mixture description is the remarkable fact that there are effective algorithms for determining the Gaussian mixture that best describes given randomly sampled data. These algorithms are known as the family of EM (Expectation-Maximization) algorithms [15]. A common feature of this family is the iterative (recurrent) nature of the computations. At successive iterations, the EM algorithm improves the estimate of the parameters Θ of the distribution that best describes the data; the value of the likelihood function is chosen as the quality criterion (EM implements the maximum likelihood method). The mixtures schematically shown in Fig. 5 and describing the corresponding fragments of Fig. 4 were obtained using exactly the classical EM algorithm with N = 4.
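For concreteness, the following minimal sketch (an illustration added in this text, not the author's original software; the helper name describe_class and the use of NumPy and scikit-learn's GaussianMixture are assumptions) shows how such a formalized class description {π_j, μ_j, B_j} could be obtained from a 2D point cloud by the classical EM algorithm with N = 4 components:

```python
# Illustrative sketch (not from the paper): a formalized class description
# {pi_j, mu_j, B_j} fitted to a 2D point cloud by the classical EM algorithm.
import numpy as np
from sklearn.mixture import GaussianMixture

def describe_class(points, n_components=4, seed=0):
    """points: (M, 2) array of count coordinates of a template image."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          max_iter=100, random_state=seed).fit(points)
    pi = gmm.weights_                     # mixture weights pi_j
    mu = gmm.means_                       # component centers mu_j
    B = np.linalg.inv(gmm.covariances_)   # precision matrices B_j entering (1)
    return pi, mu, B

# Synthetic 2D cloud standing in for an ~1000-count template image
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(c, 3.0, size=(250, 2))
                   for c in [(10, 10), (25, 12), (40, 15), (30, 30)]])
pi, mu, B = describe_class(cloud)
print(pi.round(3), mu.round(1), sep="\n")
```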

3. Recognition algorithm

Although the proposed method successfully solves the problem of describing the classes of a certain domain DB (the problem of system learning), in the foregoing form it is, unfortunately, not very suitable for recognition. It would seem that if we find an appropriate mixture (1) for the subject of recognition and test it by the above method for similarity with the classes available in the database generated in the course of learning, the most similar class would solve the problem. However, such "rough" recognition fails even for small displacements of the object in the image plane or changes in its size. It turns out that similarity in size and location (in the image) is a much more important factor than similarity of shape and other, more subtle, details.

For this reason, we expanded the concept of object classes by allowing each object to be scaled in size and displaced arbitrarily in the plane. Conventionally speaking, if an object of a certain class is represented by a cloud of points with coordinates {X_i}, then the object with coordinates {Y_i}, Y_i = kX_i + m, obtained from the initial object by stretching the image plane with coefficient k and subsequently shifting by vector m, is also considered to belong to this class.

It is easy to deduce that if a mixture with parameters {π_j, μ_j, B_j} corresponds to the initial object, then after the described k-m conversion the mixture will have parameters {π_j, kμ_j + m, B_j/k²}. Accordingly, the test for the class formally described by the set {π_j, μ_j, B_j} will include the determination of the k and m maximizing the likelihood function (better, its logarithm) of the mixture:

$$p(x \mid \Theta, k, m) = \sum_{j=1}^{N} \pi_j \frac{\sqrt{\det B_j}}{2\pi k^2} \exp\!\left(-\frac{1}{2k^2} Q_j(x)\right); \qquad \sum_{j=1}^{N} \pi_j = 1, \qquad Q_j(x) = (x - M_j)^T B_j (x - M_j), \qquad M_j = k\mu_j + m, \; k > 0, \tag{2}$$

and the subsequent analysis of the obtained values of the likelihood function in order to determine the class of the tested object (the maximum likelihood class). The computational scheme of this recognition algorithm is very similar to the structure of the EM algorithm and is briefly described in the next section. The results of applying the algorithm to the recognition of a new 'handgun' object in a THz image, using the database shown in Fig. 5, are presented in Fig. 6.
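A minimal sketch of how the likelihood (2) might be evaluated for a stored class description under trial values of k and m is given below, assuming NumPy and the parameter transformation described above; the function name log_likelihood is hypothetical:

```python
# Illustrative sketch (hypothetical helper): log-likelihood (2) of counts X under a
# class {pi_j, mu_j, B_j} scaled by k and shifted by m, i.e. with centers
# M_j = k*mu_j + m and precision matrices B_j / k**2.
import numpy as np

def log_likelihood(X, pi, mu, B, k, m):
    """X: (M, 2) counts; pi: (N,); mu: (N, 2); B: (N, 2, 2); k > 0; m: (2,)."""
    Mj = k * mu + m                                   # transformed centers
    d = X[:, None, :] - Mj[None, :, :]                # (M, N, 2) residuals
    Q = np.einsum("inj,njk,ink->in", d, B, d)         # quadratic forms Q_j(X_i)
    comp = (pi * np.sqrt(np.linalg.det(B)) / (2.0 * np.pi * k**2)
            * np.exp(-0.5 * Q / k**2))                # (M, N) weighted densities
    return np.sum(np.log(comp.sum(axis=1)))           # sum_i log p(X_i | Theta, k, m)
```

Testing an object against a class then reduces to maximizing this value over k and m, which is exactly what the iterative scheme of the next subsection does.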

Fig. 6. The results of the recognition of a new object of a 'handgun' type using the database shown in Fig. 5

3.1. Algorithm scheme

As mentioned above, our algorithm extends the EM algorithm [15], so it has an analogous scheme. It is also an iterative method for finding maximum likelihood (ML) estimates of parameters in a set of statistical models (from the classification DB) of the form (2). For each model, an iteration of our algorithm alternates between an expectation (E) step, which (implicitly) creates the function for the expectation of the log-likelihood evaluated at the current estimates of the weights {π_j} and the k, m parameters, and a maximization (M) step, which computes the parameters maximizing the expected log-likelihood found at the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.

More specifically, if the object given by a set of coordinate vectors {X_i} is tested with respect to a definite class having the formalized (model) description {π_j, μ_j, B_j}, then the successive steps of the algorithm are as follows:

Initialization: set {π_j}, k, m to some initial estimates, for example exactly equal to the class parameters:

$$\pi_j^{(0)} = \pi_j, \qquad m^{(0)} = 0, \qquad k^{(0)} = 1, \qquad j = 1, \ldots, N. \tag{3}$$

Further calculations proceed iteratively with iteration counter n.

Step E: with the parameter values found at the previous iteration n, calculate the discrete conditional distribution of the latent variables (component indicators) for each X_i:

$$M_j^{(n)} = k^{(n)}\mu_j + m^{(n)}, \qquad A_{ij} = \exp\!\left(-\frac{(X_i - M_j^{(n)})^T B_j (X_i - M_j^{(n)})}{2\,(k^{(n)})^2}\right), \qquad \chi_{ij} = \frac{\pi_j^{(n)} \sqrt{\det B_j}\; A_{ij}}{\sum_{l=1}^{N} \pi_l^{(n)} \sqrt{\det B_l}\; A_{il}}. \tag{4}$$

Step M: based on the found distribution {χ_ij} (4), recalculate the weights {π_j} and find {m̃_j, Ã_j}, the analogues of the EM estimates of {μ_j, B_j}:

$$\pi_j^{(n+1)} = \frac{1}{M}\sum_{i=1}^{M} \chi_{ij}, \qquad \tilde{m}_j = \frac{1}{M\,\pi_j^{(n+1)}}\sum_{i=1}^{M} \chi_{ij} X_i, \qquad \tilde{A}_j = \frac{1}{M\,\pi_j^{(n+1)}}\sum_{i=1}^{M} \chi_{ij}\,(X_i - \tilde{m}_j)(X_i - \tilde{m}_j)^T, \tag{5}$$

where M denotes the number of coordinate vectors {X_i}. Using (5), calculate the auxiliary vectors v and w:

$$v = \left(\sum_{j=1}^{N} \pi_j^{(n+1)} B_j\right)^{\!-1} \sum_{j=1}^{N} \pi_j^{(n+1)} B_j \tilde{m}_j, \qquad w = \left(\sum_{j=1}^{N} \pi_j^{(n+1)} B_j\right)^{\!-1} \sum_{j=1}^{N} \pi_j^{(n+1)} B_j \mu_j, \tag{6}$$

Using these vectors v and w and the estimates (5), find the coefficients α and β of the quadratic equation for k:

$$\gamma = \frac{1}{2}\sum_{j=1}^{N} \pi_j^{(n+1)} \operatorname{tr}\{B_j \tilde{A}_j\}, \qquad \alpha = \frac{1}{2}\sum_{j=1}^{N} \pi_j^{(n+1)} (\tilde{m}_j - v)^T B_j (\tilde{m}_j - v) + \gamma,$$
$$\beta = \frac{1}{2}\sum_{j=1}^{N} \pi_j^{(n+1)} (\tilde{m}_j - v)^T B_j (\mu_j - w), \qquad \left(k^{(n+1)}\right)^2 + \beta\, k^{(n+1)} - \alpha = 0, \tag{7}$$

which finally allows us to find the values of the key parameters k and m:

$$k^{(n+1)} = \sqrt{\alpha + \beta^2/4} - \beta/2, \qquad m^{(n+1)} = v - k^{(n+1)} w. \tag{8}$$

After (8), n is incremented and the next iteration begins.
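The scheme (3)-(8) can be transcribed into code roughly as follows. This is a sketch under the present reconstruction of the formulas, with NumPy and the function name recognize_against_class assumed; it is not the author's original implementation:

```python
# Hypothetical transcription of the iteration (3)-(8): soft assignments (E step),
# then weights, centers, scatters, auxiliary vectors v, w and the k, m update (M step).
import numpy as np

def recognize_against_class(X, pi, mu, B, n_iter=500):
    """X: (M, 2) counts of the tested object; (pi, mu, B): class description."""
    M_cnt = X.shape[0]
    pi_n = pi.copy()
    k, m = 1.0, np.zeros(2)                               # initialization (3)
    sqrt_det = np.sqrt(np.linalg.det(B))
    for _ in range(n_iter):
        # E step (4): responsibilities chi_ij of components for each count
        Mj = k * mu + m
        d = X[:, None, :] - Mj[None, :, :]
        Q = np.einsum("inj,njk,ink->in", d, B, d)
        A = np.exp(-0.5 * Q / k**2)
        chi = pi_n * sqrt_det * A
        chi /= chi.sum(axis=1, keepdims=True)
        # M step (5): weights, weighted centers and scatter matrices
        Nj = chi.sum(axis=0)                              # M * pi_j^(n+1)
        pi_n = Nj / M_cnt
        m_t = (chi.T @ X) / Nj[:, None]                   # tilde m_j
        dc = X[:, None, :] - m_t[None, :, :]
        A_t = np.einsum("in,inj,ink->njk", chi, dc, dc) / Nj[:, None, None]
        # Auxiliary vectors (6)
        W = np.einsum("n,njk->jk", pi_n, B)               # sum_j pi_j B_j
        v = np.linalg.solve(W, np.einsum("n,njk,nk->j", pi_n, B, m_t))
        w = np.linalg.solve(W, np.einsum("n,njk,nk->j", pi_n, B, mu))
        # Quadratic equation (7) and update (8)
        gamma = 0.5 * np.sum(pi_n * np.trace(B @ A_t, axis1=1, axis2=2))
        alpha = 0.5 * np.einsum("n,nj,njk,nk->", pi_n, m_t - v, B, m_t - v) + gamma
        beta = 0.5 * np.einsum("n,nj,njk,nk->", pi_n, m_t - v, B, mu - w)
        k = np.sqrt(alpha + 0.25 * beta**2) - 0.5 * beta
        m = v - k * w
    return pi_n, k, m
```

In practice the likelihood (2) is then evaluated with the returned (π, k, m) for every class in the DB, and the class with the maximal value is reported.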

3.2. Computational experiments

To estimate the characteristics of the proposed algorithm, a number of computational experiments were performed. Simulation of THz low-count images, in accordance with the properties described in the Introduction, was carried out in two stages (see Fig. 7). First, to reduce the contrast of the image and to blur (fuzzify) it, Gaussian smoothing was performed. After that, a random sample of counts of a given size M was generated according to the intensity of the smoothed image. The two stages a - b and b - c of this simulation procedure are shown in Fig. 7 (a - a visible source image of a 'handgun'; b - a low-contrast, fuzzy-shaped image, the Gaussian smoothing of the source image a; c - a 300-count Poisson sample of the smoothed image b).

Fig. 7. Modeling low-count images such as THz scans

To generate a random sample of counts in stage b - c, a Poisson sampler was used. The main reason for this is the simplicity of the Poisson sampler implementation: all counts are generated independently, and each count is formed in accordance with the probability distribution equal to the normalized intensity over the image. Three typical results generated for different values of the sample size M are shown in Fig. 8 (a - 300 counts; b - 1000 counts; c - 3000 counts).

Fig. 8. Low-count images of the 'handgun' from Fig. 7 for different sample sizes
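A minimal sketch of this two-stage simulation is given below, assuming SciPy's gaussian_filter for the smoothing stage; drawing each of the M counts independently with probability equal to the normalized smoothed intensity reproduces, for a fixed sample size, the behavior of the simple sampler just described:

```python
# Illustrative sketch: two-stage simulation of a low-count THz-like image -
# Gaussian smoothing of a source image, then M counts drawn independently with
# probability equal to the normalized smoothed intensity.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_low_count_image(source, M=300, sigma=5.0, seed=0):
    """source: 2D intensity array; returns an (M, 2) array of count coordinates."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(source.astype(float), sigma=sigma)  # stage a -> b
    p = blurred.ravel() / blurred.sum()                           # normalized intensity
    idx = rng.choice(p.size, size=M, p=p)                         # stage b -> c
    rows, cols = np.unravel_index(idx, blurred.shape)
    return np.column_stack([cols, rows]).astype(float)            # (x, y) counts
```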

The algorithm under discussion was tested on image models like those presented in Fig. 8. The preformed classification DB contained four-component Gaussian mixture parameters {π_j, μ_j, B_j} (j = 1, ..., 4) for the three classes of objects (images of ~1000 counts) represented in Fig. 5: a - 'knife', b - 'rectangular', c - 'handgun'. These parameter sets were obtained as the output of the classical EM algorithm (100 iterations).

In these computational experiments, for each of the three low-count images of the 'handgun' with different sample sizes (300, 1000 and 3000 counts), besides the resulting graphical representations like the one in Fig. 6, the numerical characteristics named 'similarity' and 'likelihood' were calculated. The results of these computational experiments (500 iterations) are shown in Table 1:

Table 1. Recognition characteristics of the proposed algorithm

Image sample size M = 300

DB class similarity likelihood

a ('knife') 10.88 28

b ('rect') 6.18 141

c ('gun') 4.26 324

Image sample size M = 1000

DB class similarity likelihood

a ('knife') 10.70 340

b ('rect') 5.62 575

c ('gun') 3.82 1239

Image sample size M = 3000

DB class similarity likelihood

a ('knife') 10.44 901

b ('rect') 5.74 1357

c ('gun') 3.94 3265

In Table 1 the characteristic named 'similarity' is the value of the coefficient k^(500) (8). The characteristic 'likelihood' is the logarithm of the ratio of likelihood functions, where the numerator is the probability of the recognized object given the Gaussian mixture density p(X | Θ, k, m) (2) and the denominator is the uniform probability distribution p_0(X_i) = const (the empty-image class).
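Under these definitions, the 'likelihood' characteristic can be computed as sketched below (an illustration of this text; likelihood_ratio is hypothetical and log_likelihood is the helper sketched in Section 3, with the uniform density taken as 1/area over the image support):

```python
# Illustrative sketch: the 'likelihood' characteristic as the log of the ratio of the
# mixture likelihood (2) at the fitted (pi, k, m) to the uniform "empty image"
# likelihood p0(x) = 1/area for every count.
import numpy as np

def likelihood_ratio(X, pi, mu, B, k, m, image_area):
    """X: (M, 2) counts; image_area: number of pixels in the image support."""
    ll_mixture = log_likelihood(X, pi, mu, B, k, m)   # helper from the Section 3 sketch
    ll_uniform = -X.shape[0] * np.log(image_area)     # sum_i log(1/area)
    return ll_mixture - ll_uniform
```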

Table 1 shows that for each of the three sample sizes the likelihood of class c ('gun') is maximal. It is more than twice the likelihood of class b ('rect') and about four times (or more) larger than the likelihood of class a ('knife'). This means that the algorithm quite reliably recognizes the presented object as a gun. Note that with increasing sample size the likelihood grows almost proportionally. This allows us to formulate an almost obvious conclusion: to improve the quality of the algorithm in a noisy environment, it is necessary to increase the number of image counts. In this regard, it is worth noting that the similarity parameter - the estimate of the coefficient k - varies relatively little with the number of counts.

For a deeper understanding of the proposed algorithm, it was compared with an alternative algorithm that clusters the image counts on the basis of the Mahalanobis metric, similarly to the k-means method. As is well known, and as is evident from the algorithm scheme above, the conditional distribution χ_ij calculated at step E implies a randomized ('soft') procedure for distributing the image counts across the Gaussian components, on the basis of which the desired parameters are then evaluated at step M. If, instead of this randomized procedure, a deterministic one were used - for example, based on the criterion of the minimum Mahalanobis distance of a count from the component center - the alternative algorithm would be obtained. Because this idea is so evident, such an algorithm, except perhaps for some non-principal details, has certainly been designed and investigated somewhere. Nevertheless, we implemented this algorithm in software and carried out for it a series of numerical experiments similar to those described above. The results of these computational experiments (500 iterations) are shown in Table 2:

Table 2. Recognition characteristics of the alternative algorithm

Image sample size M = 300

DB class similarity likelihood

a ('knife') 10.88 28

b ('rect') 6.20 142

c ('gun') 6.82 268

Image sample size M = 1000

DB class similarity likelihood

a ('knife') 10.70 340

b ('rect') 5.61 574

c ('gun') 6.29 962

Image sample size M = 3000

DB class similarity likelihood

a ('knife') 10.44 901

b ('rect') 5.74 1357

c ('gun') 6.37 2504

Comparing Tables 1 and 2, it is interesting to note that in both series of numerical experiments the results of testing the 'handgun' object against the wrong classes 'knife' and 'rectangle' are almost the same. On the contrary, for the correct class 'gun' the proposed algorithm gives much larger likelihood values, and its similarity coefficients are closer to reality than those of the alternative algorithm.

A careful analysis of these facts shows that in the case of the wrong classes both algorithms behave equally badly - trying to put all the image counts into a single component (cluster) and assigning zero weights to the other components. Therefore, in both wrong cases both algorithms give the same results. In the case of the class corresponding to the tested object, the algorithm based on Mahalanobis-metric clustering still tries to assign all the counts to one cluster, while the proposed algorithm intelligently distributes them over suitable components, as shown in Fig. 9 (a - the algorithm based on Mahalanobis-metric clustering ('hard' clustering); b - the proposed algorithm ('soft' clustering)). So our algorithm gives the better performance. The above discussion is a particular manifestation, for a particular problem, of deeper general ideas [16] concerning the advantages of 'soft' clustering methods over 'hard' ones (of randomized clustering procedures over deterministic ones).

Fig. 9. The difference in correct recognition of the object 'handgun'
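The contrast between the two assignment rules can be stated compactly; the sketch below (an illustration of this text, not the author's code) computes the 'soft' responsibilities of (4) and the 'hard' one-hot, minimum-Mahalanobis-distance labels used by the alternative algorithm:

```python
# Illustrative sketch: 'soft' responsibilities (4) versus the 'hard' one-hot assignment
# of each count to the component with the minimum Mahalanobis distance.
import numpy as np

def soft_assignment(X, pi, Mj, B, k):
    d = X[:, None, :] - Mj[None, :, :]
    Q = np.einsum("inj,njk,ink->in", d, B, d)          # squared Mahalanobis distances
    chi = pi * np.sqrt(np.linalg.det(B)) * np.exp(-0.5 * Q / k**2)
    return chi / chi.sum(axis=1, keepdims=True)        # chi_ij as in (4)

def hard_assignment(X, Mj, B):
    d = X[:, None, :] - Mj[None, :, :]
    Q = np.einsum("inj,njk,ink->in", d, B, d)
    labels = Q.argmin(axis=1)                          # nearest component per count
    chi = np.zeros_like(Q)
    chi[np.arange(X.shape[0]), labels] = 1.0           # one-hot 'clustering' weights
    return chi
```

With the hard rule, the recomputed weights tend to collapse onto a single component for the correct class, which is the behavior visible in Fig. 9a.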

Conclusions

The computational results obtained by the proposed algorithm demonstrate that our approach is a good basis for the development of ATR systems, including systems for security screening that detect the presence of a variety of threats, such as weapons or explosives, or illicit items ranging from drugs to illegal immigrants [17]. The proposed approach to algorithm synthesis is clear in its theoretical concepts and computationally efficient. For these reasons, we hope that the proposed methodology will be further developed and used in the relevant applications.

References

[1] Kowalski M, Kastek M, Walczakowski M, Palka N, Szustakowski M. Passive imaging of concealed objects in terahertz and long-wavelength infrared. Applied Optics 2015; 54(13): 3826-3833.

[2] Armstrong CM. The truth about terahertz. IEEE Spectrum 2012; 49(9): 36-41. DOI: 10.1109/mspec.2012.6281131.

[3] Luukanen A, Appleby R, Kemp M, Salmon N. Millimeter-Wave and Terahertz Imaging in Security Applications. Terahertz Spectroscopy and Imaging. Springer Series in Optical Sciences 2013; 171: 491-520. DOI: 10.1007/978-3-642-29564-5_19.

[4] Kowalski M, Kastek M. Comparative studies of passive imaging in terahertz and mid-wavelength infrared ranges for object detection. IEEE Transactions on Information Forensics and Security 2016; 11(9): 2028-2035. DOI: 10.1109/TIFS.2016.2571260.

[5] Corsi C, Sizov F, eds. THz and Security Applications: Detectors, Sources and Associated Electronics for THz Applications. Springer Science+Business Media Dordrecht; 2014. ISBN 978-94-017-8827-4. DOI: 10.1007/978-94-017-8828-1.

[6] Trontelj J, Sesek A. Electronic terahertz imaging for security applications. SPIE Newsroom 2016. DOI: 10.1117/2.1201009.001234.

[7] Garbacz P. Terahertz imaging - principles, techniques, benefits, and limitations. Problemy Eksploatacji - Maintenance Problems 2016; 1: 81-92.

[8] Trofimov VA, Trofimov VV. New way for both quality enhancement of THz images and detection of concealed objects. Proc SPIE 2015; 9585: 95850R. DOI: 10.1117/12.2189299.

[9] Trofimov VA, Trofimov VV, Shestakov I, Blednov R. Concealed object detection using the passive THz image without its viewing. Proc SPIE 2016; 9830: 98300E. DOI: 10.1117/12.2225170.

[10] Antsiperov V, Mansurova T. Low-contrast objects detection with low signal/noise ratio in the low-count terahertz images [In Russian]. IX All-Russian Scientific and Technical Conference Radar and radio, Moscow, 2015: 311-315.

[11] Dudgeon DE, Lacoss RT. An Overview of Automatic Target Recognition. Lincoln Laboratory Journal 1993; 6(1): 3-10.

[12] Shen X, Dietlein CR, Meyer FG. Detection and Segmentation of Concealed Objects in Terahertz Images. IEEE Trans Image Process 2008; 17(12): 2465-2475. DOI: 10.1109/TIP.2008.2006662.

[13] Evseev O, Nikitov S, Antsiperov V. Parametric 3D Reconstruction of the Distribution Density of Point Objects. Journal of Communications Technology and Electronics 2014; 59(3): 259-268. DOI: 10.1134/S1064226914030048.

[14] McLachlan GJ, Peel D. Finite Mixture Models. New York, Chichester: Wiley & Sons, Inc; 2000. ISBN 9780471006268. DOI: 10.1002/0471721182.

[15] Gupta MR, Chen Y. Theory and Use of the EM Algorithm. Foundations and Trends in Signal Processing 2011; 4(3): 223-296. DOI: 10.1561/2000000034.

[16] Nock R, Nielsen F. On Weighting Clustering. IEEE Trans on Pattern Analysis and Machine Intelligence 2006; 28(8): 1223-1235. DOI: 10.1109/TPAMI.2006.168.

[17] Kemp MC, Taday PF, Cole BE, Cluff JA, Fitzgerald AJ, Tribe WR. Security applications of terahertz technology. Proc SPIE 2003; 5070: 44-52.

Author's information


Viacheslav Evgenievich Antsiperov (b. 1959) graduated from Moscow Physical-Technical Institute in 1982, majoring in «Automation and Electronics». In 1987 he defended his PhD thesis in the specialty «Radio physics, including quantum radio physics». Currently he works as a leading researcher at the Kotelnikov Institute of Radioengineering and Electronics of the Russian Academy of Sciences. His research interests are digital signal and image processing, biomedical data processing, information systems design, computer-aided systems, mobile systems and sensor nets. E-mail: [email protected] .

Code of State Categories Scientific and Technical Information (in Russian - GRNTI): 28.23.15. Received May 14, 2016. The final version - October 16, 2016.
