
Recognition of Drusen Subtypes on OCT Data for the Diagnosis of Age-Related Macular Degeneration

Nataly Yu. Ilyasova1,2*, Nikita S. Demin1,2, and Nikita S. Kuritsyn2

1 IPSI, NRC "Kurchatov Institute", 151 Molodogvardeyskaya, Samara 443001, Russian Federation

2 Samara National Research University, 34 Moskovskoye Shosse, Samara 443086, Russian Federation *e-mail: [email protected]

Abstract. The aim of this work is to identify drusen subtypes on OCT images for the diagnosis of age-related macular degeneration. In this paper we propose a technology for drusen extraction on OCT images and their classification. The relevance of the problem is determined by the high prevalence of age-related macular degeneration, which can be diagnosed through timely detection of drusen, and by the possibility of reducing the time a specialist spends on analysis. The method is based on the segmentation of drusen on the original images and their classification based on reflectivity features. The conducted study achieved a classification accuracy of 98%. © 2024 Journal of Biomedical Photonics & Engineering.

Keywords: AMD; OCT; reflectivity; binary classifier.

Paper #9097 received 8 Apr 2024; revised manuscript received 9 Sep 2024; accepted for publication 10 Sep 2024; published online 30 Sep 2024. doi: 10.18287/JBPE24.10.030307.

1 Introduction

Age-related macular degeneration (AMD) is a progressive disease characterized by damage to the macular zone (the central zone of the retina) caused by pathological processes developing in the retinal pigment epithelium [1]. According to a meta-analysis of population studies (129,664 people), there are currently about 64 million patients with AMD worldwide; AMD is the cause of blindness in 2.1 million of the 32.4 million blind people worldwide; and the number of patients with AMD is projected to reach 288 million by 2040. The incidence of AMD in the Russian Federation is more than 150 cases per 10,000 population [2, 3]. As population aging becomes more pronounced every year, age-related diseases, including AMD, become more widespread. It is therefore clear that the diagnosis of AMD is of great importance, especially in the context of the growing number of elderly people and the projected increase in AMD cases.

It is very important to detect AMD in its early stages, when symptoms are not yet pronounced. The main diagnostic features of the early and intermediate stages of AMD according to the original AREDS classification include drusen [4]. A druse is an extracellular deposit whose main components are lipids, proteins, and minerals [5].


Fig. 1 Optical coherence tomography images: a) for a healthy patient, b) a patient with AMD (white arrows indicate drusen).

Spectral optical coherence tomography (OCT) allows visualization of drusen, estimation of their area, volume and substructure based on optical features. Fig. 1 shows optical coherence tomography images of the ocular fundus for a healthy patient and a patient with AMD.

In Ref. [6], the authors examined the structural and compositional heterogeneity of drusen in an attempt to predict the progression of AMD by identifying a number of patterns in their heterogeneity. Cross-sectional and longitudinal associations were established between compositional patterns and stages of AMD (drusen volume, geographic atrophy (a late stage of AMD), and pre-atrophic changes).

This study divided drusen into four substructure subtypes based on their reflectivity: H-subtype (highly reflective core), L-subtype (low-reflective core), C-subtype (conical debris), and S-subtype (with separation into hypo- and hyperreflective areas), plus a simple subtype (with homogeneous internal reflectivity and size less than 1000 μm). The study showed that drusen structure is an important biomarker of age-related macular degeneration progression. The L- and C-subtypes of drusen are high-risk biomarkers for the development of severe AMD with geographic atrophy. These drusen subtypes can change over time, transitioning into one another. Therefore, accurate identification of drusen subtypes is important for the diagnosis and prognosis of AMD, as well as for patient monitoring.

However, the proposed method for evaluating the optical features of drusen substructure on spectral domain OCT (SD-OCT) images is performed visually, requires the participation of several specialists, and may include an arbitration board to resolve disagreements. This method is labor-intensive, time-consuming, and does not always provide high objectivity, which limits its widespread use in clinical practice.

In Refs. [7, 8], the authors also investigated drusen heterogeneity, considering their shape (conical, semicircular, or sawtooth-shaped slight elevations of the pigment layer), their reflectivity (low-reflective, medium-reflective, and high-reflective objects), and their internal structure (homogeneous, nonhomogeneous with a highly reflective core, nonhomogeneous without a core), as well as the presence or absence of hyperreflectivity over the druse (see Fig. 2). The authors also concluded that such characteristics of druse heterogeneity may correlate with the risk of AMD progression.


Fig. 2 Druse patterns: (a) conical low-reflective nonhomogeneous with a nucleus; (b) semicircular medium-reflective homogeneous; (c) semicircular medium-reflective nonhomogeneous with a nucleus; (d) semicircular medium-reflective nonhomogeneous with a nucleus and overlying hyperreflectivity; (e) sawtooth-shaped elevations of the pigment layer [7].

In the framework of this work, we propose a technology aimed at creating an objective method for determining drusen subtypes on OCT images for the diagnosis and monitoring of AMD, based on retinal pigment layer extraction, recognition and localization of drusen, and assessment of their reflectivity. Further studies concern the evaluation of the morphological component of drusen (determination of various shape features) [9].

The aim of our research is to improve the accuracy and informativeness of diagnosis and monitoring of patients with early and intermediate stages of AMD, and to reduce the labor intensity of diagnosis and the duration of patient examination.

2 Materials and Methods

2.1 Technology of Retinal Pigment Layer Extraction and Druse Recognition

Before determining the heterogeneity of the patterns of druse structures, it is necessary to localize them, namely, to isolate the pigment layer on OCT images and then to segment the drusen. The methods applied to the problem of layer extraction on OCT images fall into three directions: a) thresholding and morphological operations [10]; b) graph theory; c) deep learning. Mixed algorithms are also possible, for example, incorporating both graph theory and neural networks. In Ref. [10], the authors applied additional noise filtering based on a bilateral filter, as well as additional steps such as false-positive druse removal and druse smoothing (a Gaussian filter for smoothing segmented drusen in the 3D image) to improve the accuracy of drusen segmentation. In Ref. [11], thresholding and morphological operations were also applied for druse extraction in OCT images: a low-pass filter was used to remove speckle noise, and the nerve fiber layer was then pre-selected using a high-pass filter to reduce anomalies in the segmented drusen, as in the previous work [9].

The second approach for segmentation of retinal layers is based on graph theory. A graph model is introduced, where each pixel plays the role of a vertex of this graph with edges connecting this vertex with eight neighboring ones. With this OCT representation, the routes crossing the entire image width, i.e., the potential retinal layers, can be viewed as sets of connected edges. After assigning weights to the edges, the Dijkstra algorithm is applied to determine the shortest path [12, 13].
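As a rough illustration of this idea, the sketch below builds such a graph with SciPy and traces one boundary as a shortest path. The dark-to-bright gradient weights follow the general scheme of Refs. [12, 13], while the simplified right-only connectivity, the virtual endpoint nodes, and all function names are our assumptions rather than the cited implementations.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def boundary_by_shortest_path(img):
    """Trace one retinal boundary as a left-to-right shortest path (illustrative sketch)."""
    h, w = img.shape
    f = img.astype(float)
    # Dark-to-bright vertical gradient, normalized to [0, 1]: large on layer boundaries.
    grad = np.clip(np.diff(f, axis=0, prepend=f[:1]), 0, None)
    grad = (grad - grad.min()) / (grad.max() - grad.min() + 1e-9)
    idx = lambda r, c: r * w + c                     # pixel -> graph vertex index
    src, dst, wt = [], [], []
    for c in range(w - 1):                           # connect each pixel to its
        for r in range(h):                           # three right-hand neighbours
            for dr in (-1, 0, 1):
                rr = r + dr
                if 0 <= rr < h:
                    # Low weight where both pixels lie on a strong boundary (cf. Ref. [13]).
                    src.append(idx(r, c))
                    dst.append(idx(rr, c + 1))
                    wt.append(2.0 - grad[r, c] - grad[rr, c + 1] + 1e-5)
    S, T = h * w, h * w + 1                          # virtual start/end nodes
    for r in range(h):                               # free entry/exit on first/last columns
        src += [S, idx(r, w - 1)]
        dst += [idx(r, 0), T]
        wt += [1e-5, 1e-5]
    graph = coo_matrix((wt, (src, dst)), shape=(h * w + 2, h * w + 2)).tocsr()
    _, pred = dijkstra(graph, directed=True, indices=S, return_predecessors=True)
    path, v = [], T
    while v != S and pred[v] >= 0:                   # walk predecessors back to the start
        path.append(v)
        v = pred[v]
    return [(p // w, p % w) for p in reversed(path) if p < h * w]  # (row, col) boundary pixels
```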

The third approach to retinal layer extraction is to use convolutional neural networks [14-19]. The U-Net convolutional neural network has achieved good results in biomedical image segmentation tasks [12, 20, 21]. The authors of that work defined the drusen extraction task as segmentation of four classes: drusen, the pigment layer region, the Bruch's membrane region, and the background. In Ref. [13], the authors also use neural networks to solve the problem of retinal layer extraction on OCT. Before layer segmentation, regions of interest are extracted in three steps: 1) segmentation using Otsu's method to find the initial region; 2) application of morphological opening and dilation operations; 3) exclusion of small objects to find the final region. In the next step, an initial segmentation is performed in which the search region is reduced and the regions of the inner limiting membrane are separated from the region containing the pigment layer and Bruch's membrane; a U-Net neural network is used for this purpose [18]. In Ref. [19], the authors addressed two problems: a) how to improve the network's ability to capture multiscale non-local features in order to cope with the complex pathological appearance of drusen in OCT images, especially with respect to size and shape; b) how to improve the network's ability to capture semantic global contextual features while suppressing noise, in order to cope with low-contrast, noisy OCT images. In Ref. [15], the authors combine graph theory and deep learning.

The technology of retinal pigment layer extraction and drusen recognition presented in this article is based on preprocessing and morphological analysis of the pigment layer, on the basis of which drusen are localized and then analyzed diagnostically by their reflectivity properties.

The segmentation of the pigment layer with drusen takes place in several steps. The first step is preprocessing with median filtering to remove impulse noise. The second step is the removal of the background from the filtered image for further segmentation of the retinal layers. The reflectivity of the background region (the vitreous) is essentially the same throughout the image, so binarization with a fixed threshold can be used to remove it: the threshold is determined experimentally by averaging over a sample of OCT images, and pixels below the selected threshold are zeroed, excluding the background. The threshold value was 70. The third step is to remove all retinal layers between the inner limiting membrane and the outer plexiform layer. After the background region has been removed from the OCT image, the lower boundary of this region serves as the upper contour of the nerve fiber layer, and bright pixels within 20 pixels of this contour are zeroed.
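For illustration, the first three steps could be sketched in Python with OpenCV and NumPy roughly as follows; the window size of 3, the threshold of 70, and the 20-pixel band come from the description above, while the function name and the per-column handling are our assumptions.

```python
import cv2
import numpy as np

def preprocess_oct(oct_image: np.ndarray) -> np.ndarray:
    """Steps 1-3: median filtering, background removal, nerve-fiber-layer band removal."""
    # Step 1: 3x3 median filter suppresses impulse noise.
    img = cv2.medianBlur(oct_image, 3)
    # Step 2: zero out the vitreous background with the fixed threshold of 70.
    img[img < 70] = 0
    # Step 3: zero a 20-pixel band below the background boundary, which serves as
    # the upper contour of the nerve fiber layer.
    for col in range(img.shape[1]):
        nonzero = np.flatnonzero(img[:, col])
        if nonzero.size:
            top = nonzero[0]
            img[top:top + 20, col] = 0
    return img
```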

The fourth step is the isolation of the retinal pigment epithelium. Binarization is applied with the threshold $T$ found by solving the inequalities $S(T) > c$, $S(T+1) < c$, where $H(i)$, $i = 0, 1, \dots, L$, is the histogram of the grayscale image, $S(t) = \sum_{i=t}^{L} H(i)$ is the cumulative histogram counted from the brightest level, and the constant $c = \mathrm{width} \times (tr/res + k)$, where $\mathrm{width}$ is the image width, $tr$ is the approximate thickness of the pigment layer (20 μm), $res = d/\mathrm{height}$ is the axial resolution, and $d$ and $\mathrm{height}$ are the depth of the spectral OCT cube and the height of the image, respectively. The non-negative constant $k$ is determined experimentally; the larger it is, the more pixels remain in the image after binarization [22].
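A direct NumPy sketch of this threshold selection is shown below; the cube depth $d$ and the constant $k$ are passed in as inputs, and the function name is ours.

```python
import numpy as np

def pigment_layer_threshold(img: np.ndarray, d_um: float, k: float, tr_um: float = 20.0) -> int:
    """Largest threshold T for which the count of pixels with brightness >= T exceeds c."""
    height, width = img.shape
    res = d_um / height                           # axial resolution, um per pixel
    c = width * (tr_um / res + k)                 # expected number of pigment-layer pixels
    hist = np.bincount(img.ravel(), minlength=256)
    s = np.cumsum(hist[::-1])[::-1]               # S(t) = sum of H(i) for i >= t
    candidates = np.flatnonzero(s > c)            # thresholds that still keep more than c pixels
    return int(candidates.max()) if candidates.size else 0
```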

The fifth step is to remove the photoreceptor layer, which in some OCT images also contains high-value pixels. Two-stage processing is performed. In the first stage, to preserve the contour of the pigment layer, its midline is calculated and the gaps in it are connected by interpolation. In the second stage, a morphological erosion operation is applied to eliminate the photoreceptor layer, and clusters of pixels smaller than a fixed size are removed. These two stages are then repeated in a second iteration, resulting in a final image of the midline of the pigment layer together with the drusen. To smooth out irregularities, the obtained line is approximated by a cubic spline (see Fig. 3).
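A compact sketch of the midline restoration and smoothing is given below (per-column midline, linear interpolation across gaps, cubic smoothing spline via SciPy); the smoothing factor is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_midline(layer_mask: np.ndarray) -> np.ndarray:
    """Midline of a binary pigment-layer mask with gaps bridged and cubic-spline smoothing."""
    h, w = layer_mask.shape
    cols = np.arange(w)
    midline = np.full(w, np.nan)
    for c in cols:
        rows = np.flatnonzero(layer_mask[:, c])
        if rows.size:
            midline[c] = rows.mean()              # per-column middle of the layer
    known = ~np.isnan(midline)
    midline = np.interp(cols, cols[known], midline[known])     # bridge the gaps
    spline = UnivariateSpline(cols, midline, k=3, s=float(w))  # cubic smoothing spline
    return spline(cols)
```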

For drusen segmentation on OCT images, we establish a baseline "normal" pigment layer by fitting a third-order polynomial to the retinal pigment epithelium line. The discrepancy between this "normal" line and the line affected by drusen indicates their presence. To refine the segmentation, clusters smaller than 70 pixels in area or less than 8 pixels in height are then filtered out. Fig. 3 illustrates the main stages of this process.
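A sketch of the baseline fitting and cluster filtering is shown below, using np.polyfit for the third-order polynomial and OpenCV connected components for the 70-pixel area / 8-pixel height filter; the downward shift of the baseline shown in Fig. 3e is omitted for brevity, and the function name is ours.

```python
import cv2
import numpy as np

def extract_drusen(midline: np.ndarray, layer_mask: np.ndarray,
                   min_area: int = 70, min_height: int = 8) -> np.ndarray:
    """Mark layer pixels rising above a fitted 'normal' RPE baseline as drusen (sketch)."""
    w = midline.size
    cols = np.arange(w)
    baseline = np.polyval(np.polyfit(cols, midline, deg=3), cols)   # "healthy" RPE model
    rows = np.arange(layer_mask.shape[0])[:, None]
    # Drusen push the RPE upwards, i.e. towards smaller row indices than the baseline.
    candidate = ((layer_mask > 0) & (rows < baseline[None, :])).astype(np.uint8)
    # Discard clusters smaller than 70 pixels in area or lower than 8 pixels in height.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidate, connectivity=8)
    keep = np.zeros_like(candidate, dtype=bool)
    for i in range(1, n):
        if (stats[i, cv2.CC_STAT_AREA] >= min_area
                and stats[i, cv2.CC_STAT_HEIGHT] >= min_height):
            keep[labels == i] = True
    return keep
```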


Fig. 3 Main stages of druse extraction: a) removal of background region from OCT, b) OCT binarization with preliminary removal of nerve fiber layer, c) approximated pigment layer with druse, d) result of 3rd order polynomial fitting to estimate "healthy" pigment layer, e) result of downward shift of "normal" pigment layer, f) result of druse extraction algorithm operation.

2.2 Methods for Assessing the Reflectivity Features of Drusen

To assess the reflectivity of drusen, the following features were used: the median of the brightness function, the mean brightness value, the brightness variance of the object, and the transparency coefficient. Two classes were distinguished: the first comprises low-reflective drusen (L-subtype and patternless C-subtype), the second comprises medium- and high-reflective drusen (H-subtype and patterned C-subtype).

1. Median of the brightness function:

$I_{\mathrm{median}} = I_{(N+1)/2}$, (1)

where $N$ is the number of object points and $I_{(N+1)/2}$ is the element with index $(N+1)/2$ in the sorted array of brightness values of all points of the object.

2. Mean brightness value:

$\bar{I} = \frac{1}{N} \sum_{j=1}^{N} I_j$, (2)

where $I_j$ is the value of the brightness function at point $j$.

3. Brightness variance of the object:

$D = \frac{1}{N} \sum_{j=1}^{N} (I_j - \bar{I})^2$. (3)

4. Transparency coefficient. The transparency of an object is determined using the probability distribution of the brightness function. A transparent object is characterized by a positive shift of the mean brightness value $\bar{I}$ relative to the value $I_c = (I_{\min} + I_{\max})/2$, i.e., $\bar{I} > I_c$; for an opaque object, $\bar{I} < I_c$.
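As an illustration, these features could be computed for a single segmented druse as follows; the brightness values are passed as a flat array, and the normalized form of the transparency coefficient is our assumption, since the text above specifies only the sign criterion relative to $I_c$.

```python
import numpy as np

def reflectivity_features(values: np.ndarray) -> dict:
    """Median, mean, variance and a transparency indicator for one druse (sketch)."""
    i_mean = float(values.mean())
    i_c = (float(values.min()) + float(values.max())) / 2.0   # mid-range brightness I_c
    return {
        "median": float(np.median(values)),
        "mean": i_mean,
        "variance": float(values.var()),
        # Assumed normalization: positive -> transparent (mean above I_c), negative -> opaque.
        "transparency": (i_mean - i_c) / (float(values.max()) - float(values.min()) + 1e-9),
    }
```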

3 Results and Discussion

3.1 Experimental Studies on the Accuracy of the RPE Extraction Algorithm

To assess accuracy, the Euclidean distance between the coordinates of the contour points determined by the algorithm and those of the layer contour manually selected and confirmed by an expert physician is used. The Euclidean distance between two points $A(x_1, y_1)$ and $B(x_2, y_2)$ is defined as

$\mathrm{dist}(A, B) = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$. (4)

Because the contour extends from left to right over the entire width of the OCT image, contour points with the same horizontal coordinate are compared, so the distance reduces to $\mathrm{dist}(A, B) = |y_1 - y_2|$.

Thus, to evaluate the accuracy of retinal pigment epithelium segmentation, the mean Euclidean distance over all contour points, averaged over all OCT images on which the algorithm is tested, is calculated:

$\mathrm{mean}(X, Y) = \frac{1}{T n} \sum_{t=1}^{T} \sum_{i=1}^{n} |X_i^t - Y_i^t|$, (5)

where $X$ is the pigment layer contour (a set of $x$ and $y$ coordinates) extracted by the algorithm, $Y$ is the layer contour manually selected and confirmed by a physician, $X_i^t$ and $Y_i^t$ are the vertical coordinates of the contour points with horizontal coordinate $i$ of image $t$ for the algorithmic and manual segmentation, respectively, $n$ is the image width, and $T$ is the number of OCT images. To estimate how much the contour deviation differs on average from $\mathrm{mean}(X, Y)$, the standard deviation is used:

$\mathrm{std}(X, Y) = \sqrt{\frac{1}{T n} \sum_{t=1}^{T} \sum_{i=1}^{n} \left( |X_i^t - Y_i^t| - \mathrm{mean}(X, Y) \right)^2}$. (6)
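Eqs. (5) and (6) translate directly into NumPy; here the algorithmic and manual contours are given as arrays of per-column vertical coordinates with one row per image (the names are ours).

```python
import numpy as np

def contour_accuracy(X: np.ndarray, Y: np.ndarray) -> tuple:
    """mean(X, Y) and std(X, Y) for algorithm (X) and manual (Y) contours of shape (T, n)."""
    dev = np.abs(X - Y)                                  # |X_i^t - Y_i^t| for every point
    mean_dev = dev.mean()                                # Eq. (5)
    std_dev = np.sqrt(((dev - mean_dev) ** 2).mean())    # Eq. (6)
    return float(mean_dev), float(std_dev)
```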

To create OCT images, the Zeiss CIRRUS HD-OCT 5000 device was used with the following main features:

- Maximum resolution: 5 μm.
- Transverse resolution: 15 μm.
- Scan rate: up to 27,000 A-scans per second.
- Scan size: 6 mm x 6 mm (macular and optic cube).
- Scanning depth: about 2 mm (in tissue).
- Image depth: 8 bits in grayscale.

The Python programming language was used for the software implementation. The developed software uses the following libraries: OpenCV, NumPy, SciPy, Matplotlib, and Scikit-Learn. The average time for segmentation is 0.218 s; obtaining the features and assigning a subclass takes 0.022 s.

For the developed algorithm, averaging was performed over 120 OCT images of size 1024x625 pixels. All images were filtered with a median filter with a window size of 3. The average segmentation error of the pigment layer contour in pixels and in μm, as well as the average error relative to the contour length, are presented in Table 1.

Table 1 Pigment layer segmentation accuracy error.

Mean deviation ± standard deviation, pixels: 2.18 ± 1.98
Mean deviation ± standard deviation, μm: 7.79 ± 7.07
Error in relation to the length of the pigment layer, %: 0.25 ± 0.23

3.2 Experimental Investigations of the Algorithm for Druse Extraction

The following metrics are used to evaluate the spatial coincidence of the sets: the Dice coefficient and Intersection over Union (IoU), defined in Eqs. (7) and (8) [23].

Table 2 Statistical characteristics of features.

Measure                                     Median    Average value    Dispersion    Transparency factor
Mean of the feature, class 1                107.44    99.41            2577.2        -0.10
Mean of the feature, class 2                135.45    145.02           2189.06        0.17
Root mean square of the feature, class 1    14.01     15.49            709.04         0.12
Root mean square of the feature, class 2    31.89     49.58            684.21         0.28


Fig. 4 Histograms of the feature distributions for two classes of drusen, where (1) the first class is low-reflective drusen and (2) the second class is medium- and high-reflective drusen: a) median value, b) average value, c) transparency coefficient, d) variance.

$\mathrm{Dice\ coefficient} = \frac{2|A \cap B|}{|A| + |B|}$, (7)

$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$, (8)

where A is the set of druse pixels selected by the algorithm and B is the set of druse pixels manually selected and confirmed by the physician. For the developed algorithm, averaging was performed over 60 OCT images of size 1024x625 pixels; the following results were obtained: a Dice coefficient of 0.79 and an Intersection over Union of 0.66.
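For two binary drusen masks, Eqs. (7) and (8) can be computed directly (a minimal sketch):

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray) -> tuple:
    """Dice coefficient and IoU between a predicted and a reference drusen mask."""
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-9)      # Eq. (7)
    iou = inter / (np.logical_or(pred, gt).sum() + 1e-9)     # Eq. (8)
    return float(dice), float(iou)
```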

3.3 Experimental Studies of the Separability of Features for Describing Patterns of Heterogeneity of Druse Structures

Fig. 4 shows the histograms of the feature distributions for the following classes of drusen: L-type and C-type (low-reflective), and patternless and C-type without a pattern (medium- and high-reflective drusen). The histogram data allow preliminary conclusions about the separability of these classes for each of the reflectivity features; in particular, dispersion shows weak separability in contrast to the other features.

Table 2 presents the statistical characteristics of the reflectivity features calculated for the two classes: L-type and C-type (low-reflective), and patternless and C-type without a pattern (medium- and high-reflective drusen). The locations of the class objects in three-dimensional feature spaces are shown in Fig. 5.

Studies have shown that classifiers based on the construction of a linear separating boundary are suitable for druse classification.

After segmentation of the drusen themselves, they should be classified into classes: low-reflective, medium-reflective, and high-reflective. Logistic regression was considered as a binary classifier. The training sample was created from OCT images and enlarged to 120 objects by an augmentation (transformation and rotation) procedure. Due to the limited amount of OCT data, classification accuracy was evaluated on two test samples. The first sample consisted of drusen extracted by the algorithm from the OCT images. The second sample consisted of drusen from the training sample, where 80% were used for training the classifier and the remaining 20% for validation.
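A minimal Scikit-Learn sketch of this classification step is given below, assuming a feature matrix whose columns are the four reflectivity features; the 80/20 split and the logistic regression model follow the text, while the stratification, random seed, and default hyperparameters are our choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def train_druse_classifier(features: np.ndarray, labels: np.ndarray):
    """Train and validate the binary reflectivity classifier on an 80/20 split."""
    X_train, X_val, y_train, y_val = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    y_pred = clf.predict(X_val)
    return clf, accuracy_score(y_val, y_pred), f1_score(y_val, y_pred)
```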

The accuracy of druse classification was determined by comparing the results of OCT image analysis obtained with the proposed algorithm against the assessments of independent experts. The following metrics were used to determine the accuracy of object classification by patterns:

$\mathrm{Accuracy} = \frac{TC_1 + TC_2}{TC_1 + TC_2 + FC_1 + FC_2}$, (9)

$\mathrm{Precision} = \frac{TC_1}{TC_1 + FC_1}$, (10)

$\mathrm{Recall} = \frac{TC_1}{TC_1 + FC_2}$, (11)

$\mathrm{F1\text{-}score} = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$, (12)

where $TC_1$, $TC_2$ are the numbers of objects correctly classified as the first and second class, respectively, and $FC_1$, $FC_2$ are the numbers of objects incorrectly classified as the first and second class, respectively [24]. Accuracy shows how many objects were classified correctly. Precision measures how many of the objects classified as, for example, the first class actually belong to the first class. Recall shows how many of the objects that should have been classified as, for example, the first class were actually classified that way. F1-score is the harmonic mean of Precision and Recall.

The classification accuracy for the two data sets based on the reflectivity features is presented in Table 3.

The classification accuracy metric (Accuracy) is 95% and 98% for the subsample of augmented data and for the data extracted by the developed algorithm, respectively. The F1-score metric reaches values of 90% and 97% on these samples.

From Table 3 we can see that logistic regression achieves quite good classification accuracy, so these algorithms can be used for the localization and recognition of druse subtypes.


Fig. 5 Location of objects of the two classes in the three-dimensional space of reflectivity features, where the first class is low-reflective drusen (blue) and the second class is medium- and high-reflective drusen (red). Comparison by a) dispersion, median, transparency coefficient; b) average brightness, median, transparency coefficient.

Table 3 Accuracy of classifying patterns of heterogeneity of drusen structures, where class 1 is low-reflective and class 2 is high-reflective, on two data samples.

Data                                         Accuracy    Precision           Recall              F1-score
                                                         Class 1   Class 2   Class 1   Class 2   Class 1   Class 2
OCT image data extracted by the algorithm    0.98        0.96      0.98      0.95      0.96      0.95      0.97
20% of augmented data                        0.95        0.91      0.90      0.90      0.91      0.90      0.90

4 Conclusion

Thus, the article presents a technology for the extraction and determination of drusen subtypes on OCT images for the detection of AMD. The developed technology includes several stages: segmentation of the pigment layer, isolation of drusen, and their classification by structural patterns, which brings it closer to a real diagnostic system. It is more than an automation of calculations; it is a tool that can help doctors make clinical decisions. The results of each stage were investigated experimentally. Segmentation of the pigment layer has an error of 7.79 ± 7.07 μm, while drusen segmentation achieves a Dice coefficient of 0.79 and an Intersection over Union of 0.66. Classification of drusen by reflectivity gives high results: 98% Accuracy and 97% F1-score, even with unbalanced classes. On the generated sample including all classes, an Accuracy of 95% and an F1-score of 90% are achieved.

The technology thus allows effective classification analysis of drusen on OCT images, reduces the labor intensity of examination, increases the informativeness of diagnosis in patients with early and intermediate stages of AMD, and supports monitoring during dynamic follow-up of patients.

Acknowledgment

The work was carried out within the state assignment of IPSI, NRC "Kurchatov Institute".

Disclosures

The authors declare that they have no conflict of interest.

References

1. N. Y. Ilyasova, N. S. Demin, A. S. Shirokanev, A. V. Kupriyanov, and E. A. Zamytsky, "Method for selection macular edema region using optical coherence tomography data," Computer Optics 44(2), 250-258 (2020). [in Russian]

2. A. Z. Fursova, O. G. Gusarevich, M. S. Tarasov, M. A. Vasilyeva, N. V. Chubar, and N. V. Litvinova, "Age-related macular degeneration and glaucoma. Epidemiological and clinic-pathogenetic aspects," Siberian Scientific Medical Journal 38(5), 83-91 (2018). [in Russian]

3. "World Population Prospects 2022: Summary of results," United Nations, New York (2022). ISBN: 978-92-1148373-4.

4. E. V. Boyko, A. V. Doga, and V. V. Egorov, "Druse structure as a biomarker of AMD progression," New in Ophthalmology 4, (2016). [in Russian]

5. L. de Sisternes, G. Jonna, M. A. Greven, T. Leng, and D. Rubin, "Individual drusen segmentation and repeatability and reproducibility of their automated quantification in optical coherence tomography images," Translational Vision Science & Technology 6(1), 12 (2017).

6. M. Veerappan, A.-K. M. El-Hage-Sleiman, V. Tai, S. J. Chiu, K. P. Winter, S. S. Stinnett, T. S. Hwang, G. B. Hubbard, M. Michelson, R. Gunther, W. T. Wong, E. Y. Chew, C. A. Toth, C. A. Toth, W. Wong, T. Hwang, G. B. Hubbard, S. Srivastava, M. McCall, K. Winter, N. Sarin, K. Hall, P. McCollum, L. Curtis, S. Schuman, S. J. Chiu, S. Farsiu, V. Tai, M. Sevilla, C. Harrington, R. Gunther, D. Tran-Viet, F. Folgar, E. Yuan, T. Clemons, M. Harrington, and E. Chew, "Optical Coherence Tomography Reflective Drusen Substructures Predict Progression to Geographic Atrophy in Age-related Macular Degeneration," Ophthalmology 123(12), 2554-2570 (2016).

7. A. A. Khanifar, A. F. Koreishi, J. A. Izatt, and C. A. Toth, "Drusen Ultrastructure Imaging with Spectral Domain Optical Coherence Tomography in Age-related Macular Degeneration," Ophthalmology 115(11), 1883-1890e1 (2008).

8. J. N. Leuschen, S. G. Schuman, K. P. Winter, M. McCall, W. Wong, E. Chew, T. Hwang, N. Sarin, T. Clemons, M. Harrington, and C. Toth, "Spectral-Domain Optical Coherence Tomography Characteristics of Intermediate Age-related Macular Degeneration," Ophthalmology 120(1), 140-150 (2013).

9. N. Y. Ilyasova, A. V. Kupriyanov, and A. G. Khramov, Information technologies of image analysis in the tasks of medical diagnostics, A. S. Bugaev (Ed.), Radio and Communication, Moscow (2012). ISBN: 5-89776-014-4. [in Russian]

10. Q. Chen, T. Leng, L. Zheng, L. Kutzcher, J. Ma, L. Sisternes, and D. Rubin, "Automated drusen segmentation and quantification in SD-OCT images," Medical Image Analysis 17(8), 1058-1072 (2013).

11. S. Farsiu, S. J. Chiu, J. A. Izatt, and C. A. Toth, "Fast detection and segmentation of drusen in retinal optical coherence tomography images," Proceedings of SPIE 6844, 68440 (2008).

12. S. J. Chiu, J. A. Izatt, R. V. O'Connell, K. P. Winter, C. A. Toth, and S. Farsiu, "Validated Automatic Segmentation of AMD Pathology Including Drusen and Geographic Atrophy in SD-OCT Images," Investigative Ophthalmology & Visual Science 53(1), 53-61 (2012).

13. S. J. Chiu, X. T. Li, P. Nicholas, C. Toth, J. Izatt, and S. Farsiu, "Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation," Optics Express 18(18), 19413-19428 (2010).

14. R. Asgari, S. Waldstein, F. Schlanitz, M. Baratsits, U. Schmidt-Erfurth, and H. Bogunovic, "U-Net with Spatial Pyramid Pooling for Drusen Segmentation in Optical Coherence Tomography," in Ophthalmic Medical Image Analysis, H. Fu, M. K. Garvin, T. MacGillivray, Y. Xu, and Y. Zheng (Eds.), Springer International Publishing, 11855, 77-85 (2019).

15. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi (Eds.), Springer International Publishing, 9351, 234-241 (2015).


16. Z. Mishra, A. Ganegoda, J. Selicha, Z. Wang, S. R. Sadda, and Z. Hu, "Automated Retinal Layer Segmentation Using Graph-based Algorithm Incorporating Deep-learning-derived Information," Scientific Reports 10, 9541 (2020).

17. J. A. Sousa, A. Paiva, A. Silva, J. D. Almeida, G. Braz Junior, J. O. Diniz, W. K. Figueredo, and M. Gattass, "Automatic segmentation of retinal layers in OCT images with intermediate age-related macular degeneration using U-Net and DexiNed," PLoS ONE 16(5), e0251591 (2021).

18. N. S. Demin, N. Y. Ilyasova, R. A. Paringer, and D. V. Kirsh, "Application of artificial intelligence in ophthalmology for solving the problem of semantic segmentation of fundus images," Computer Optics 47(5), 824-831 (2023).

19. M. Wang, W. Zhu, F. Shi, J. Su, H. Chen, K. Yu, Y. Zhou, Y. Peng, Z. Chen, and X. Chen, "MsTGANet: Automatic Drusen Segmentation From Retinal OCT Images," IEEE Transactions On Medical Imaging 41(2), 394-406 (2022).

20. A. Zagitov, E. Chebotareva, A. Toschev, and E. Magid, "Comparative analysis of neural network models performance on low-power devices for a real-time object detection task," Computer Optics 48(2), 242-252 (2024).

21. A. A. Mikhaylichenko, Y. M. Demyanenko, "Using squeeze-and-excitation blocks to improve an accuracy of automatically grading knee osteoarthritis severity using convolutional neural networks," Computer Optics 46(2), 317-325 (2022).

22. Q. Chen, T. Leng, L. Zheng, L. Kutzscher, J. Ma, L. De Sisternes, and D. L. Rubin, "Automated drusen segmentation and quantification in SD-OCT images," Medical Image Analysis 17(8), 1058-1072 (2013).

23. M. V. Sherer, D. Lin, S. Elguindi, S. Duke, L.-T. Tan, J. Cacicedo, M. Dahele, and E. F. Gillespie, "Metrics to evaluate the performance of auto-segmentation for radiation treatment planning: A critical review," Radiotherapy and Oncology 160, 185-191 (2021).

24. Q. Gu, L. Zhu, and Z. Cai, "Evaluation Measures of the Classification Performance of Imbalanced Data Sets," in Computational Intelligence and Intelligent Systems, Z. Cai, Z. Li, Z. Kang, and Y. Liu (Eds.), Springer, Berlin, Heidelberg, 51, 461-471 (2009).
