Brain tumor segmentation by deep learning transfer methods using MRI images
E.Y. Shchetinin1
1 Department of Mathematics, Financial University under the Government of the Russian Federation, 125993, Moscow, Russia, Leningradsky Prospekt 49
Abstract
Brain tumor segmentation is one of the most challenging tasks of medical image analysis. The diagnosis of patients with gliomas is based on the analysis of magnetic resonance images and manual segmentation of tumor boundaries. However, due to its time-consuming nature, there is a need for a fast and reliable automatic segmentation algorithm. In recent years, deep learning methods applied to brain tumor segmentation have shown promising results. In this paper, a deep neural network model based on U-Net neural network architecture is proposed for brain glioma segmentation. It is proposed to use deep convolutional neural network models pre-trained on the ImageNet dataset as U-Net encoders. Among such models, VGG16, VGG19, Mobilenetv2, Inception, Efficientnetb7, InceptionResnetV2, DenseNet201, DenseNet121 were used.
The computational experimental analysis performed in the paper on a set of MRI brain images showed that the best encoder among the above deep models was the DenseNet121 model, with the following segmentation metric values: Mean IoU of 91.14 %, Mean Dice of 94.26 %, and Accuracy of 94.22 %. The paper also comparatively analyses the results of the proposed segmentation method with several works of other authors. The comparative analysis of the segmentation results of the studied MRI images showed that the DenseNet121 model either surpassed or was comparable to the models proposed in the cited papers in terms of segmentation accuracy metrics.
Keywords: brain tumor, glioma, segmentation, U-Net model, encoder, pre-trained deep models.
Citation: Shchetinin EY. Brain tumor segmentation by deep learning transfer methods using MRI images. Computer Optics 2024; 48(3): 439-444. DOI: 10.18287/2412-6179-CO-1366.
Introduction
Gliomas are the most common type of brain tumor. They account for nearly eighty percent of all malignant brain tumors diagnosed worldwide [1]. According to the World Health Organization (WHO), gliomas can be classified into four different grades based on microscopic images and tumor behavior. Grades I and II are low-grade gliomas (LGG), which are almost benign and grow slowly. Grades III and IV are high-grade gliomas (HGG), which are malignant and aggressive [2]. There are several basic tools for analyzing and monitoring brain tumor images, of which magnetic resonance imaging (MRI) is the most common: it provides detailed images of the brain and is widely used to visualize the extent of tumor areas.
Gliomas can occur in any part of the brain and are heterogeneous in shape, size, and appearance, with blurred and irregular borders, making it extremely difficult to determine their exact boundaries on an image. Modern clinical imaging uses a variety of MRI sequences for better diagnosis and accurate tumor sizing. The four main MRI sequences, T1, T2, T1 with gadolinium contrast enhancement (T1-Gd), and FLAIR, can be used to identify glioma boundaries [3]. Fig. 1 shows images of the brain in different modalities. Although many brain MRI scans are performed around the world every day, the detection of gliomas and the determination of their grade depend mainly on visual examination by experts, which is time-consuming and error-prone.
Segmentation of MRI images is one of the leading data processing techniques used to better describe a brain tumor, separate the tumor area from the healthy brain, and draw a clear boundary between them. This allows oncologists to safely carry out different types of treatment, primarily surgery. Over time, several traditional methods of brain image segmentation have been developed, including manual segmentation. However, manual segmentation of MRI images is time-consuming and subject to inaccuracies and variability due to the highly complex nature of tumor appearance. Therefore, automated segmentation of brain tumor MRI images can significantly improve diagnosis, tumor growth rate prediction, and treatment planning, especially in cases where access to an experienced radiologist is limited.
Fig. 1. Examples of MRI images of the brain in various modalities (T1, T1c, T2, FLAIR)
With the development of artificial intelligence, automated image segmentation methods based on deep learning have also become very popular [4, 5, 6]. The use of deep learning methods in segmentation problems has been intensified by the development of new efficient neural network architectures. These include, in particular, Fully Convolutional Network (FCN) architectures [7, 8]. Based on the FCN model, Ronneberger et al. proposed a symmetric fully convolutional network called U-Net for medical image segmentation [9].
The task of tumor segmentation is the subject of ongoing research. Deep learning has recently proven to be effective in medical image segmentation and information extraction, and a significant number of papers have been published on brain tumor detection and segmentation using deep learning methods. Several improved U-Net modifications, such as ResU-Net [10] and U-Net+ [11], have also been proposed to achieve high performance in brain tumor segmentation. In paper [12], the authors proposed a multimodal approach to segmentation with a preliminary binary classification of brain MRI images. They combined meaningful statistical features with a CNN architecture to create a method for segmenting brain cancer cells, using two-dimensional wavelet decomposition and Gabor filters for image identification and feature extraction. In papers [13, 14], the authors investigated the possibility of improving the quality of tumor segmentation using attention mechanisms in the neural network architecture and achieved good results. The authors of study [15] proposed to use a focal loss function, an attention model, and residual blocks in the decoder part of the neural network. The papers [16, 17] used attention models for the segmentation problem that process three-dimensional images with high accuracy but require considerable computational resources. In addition, in paper [17] the authors applied transfer techniques to evaluate segmentation results on MRI images of the brain.
In study [18], a novel brain tumor segmentation method was developed by integrating fully convolutional neural networks (FCNN) and dense micro-block difference features (DMDF) into a unified framework to obtain segmentation results with spatial consistency. Compared to traditional MRI brain tumor segmentation methods, the experimental results show that segmentation accuracy and robustness have been greatly improved. An efficient tumor segmentation system based on denoising MRI brain images with a homomorphic wavelet filter is proposed in paper [19]. Features are then extracted with the deep Inceptionv3 model, and informative features are selected using a genetic algorithm. The optimized features are classified, and the tumor slices are passed to the YOLOv2 model for localizing the tumor region. The developed method achieved prediction scores above 90 % in the segmentation and classification of brain tumors.
It is also worth noting the excellent reviews of methods for detection, classification, and segmentation of brain tumors in papers [20, 21]. A comprehensive survey of deep learning-based brain tumor segmentation methods is presented in paper [22]. In summary, these papers show that segmentation of magnetic resonance images (MRI) of brain tumors is crucially important in medicine. The task of brain tumor segmentation is essential for diagnosing, predicting overall growth, determining tumor density, and developing treatment plans. Its complexity is primarily due to the wide range of structures, shapes, frequency, position, and visual variability of tumors. With recent advances in deep neural networks for image classification tasks, computational segmentation of medical images has become an important area of brain tumor research.
The main contributions of this paper to solving the problem of segmentation of MRI brain tumor images are the following:
• A deep transfer learning approach to solving the problem of segmentation of areas of MRI brain images based on the U-Net architecture is proposed, in which the encoder uses a deep convolutional neural network model previously trained on the ImageNet dataset. Among such models, VGG16, VGG19, Mobilenetv2, Inception, Efficientnetb7, InceptionResnetV2, DenseNet201, DenseNet121 were used. Thus, a TL-U-Net deep model is proposed in this paper.
• Computational experiments on the application of the constructed TL-U-Net model with different pre-trained encoder models for brain tumor segmentation using MRI images were performed, and segmentation accuracy metric estimates were obtained. The comparative analysis performed on a set of MRI brain images showed that the best encoder among the above deep models was the DenseNet121 model, with the following segmentation metric values: Mean IoU of 91.14 %, Mean Dice of 94.26 %, and Accuracy of 94.22 %.
• A comparative analysis of the segmentation results of the studied MRI images showed that the DenseNet121 model either surpasses or is comparable to the models considered in the cited papers in terms of segmentation accuracy metrics.
1. Materials and methods
1.1. U-Net based neural network architecture
This paper proposes a U-Net based neural network architecture for high-precision segmentation of gliomas using MRI images of the brain. The network has a U-shaped structure and consists of two branches. The contraction path (encoder) is a convolutional network consisting of a sequence of blocks containing two (3×3) convolutions, each followed by a ReLU activation function layer; at the end of each block a (2×2) MaxPooling downsampling operation is applied. The expansion path (decoder) includes upsampling with a (2×2) transposed convolution that halves the number of channels (up-conv), concatenation with the corresponding feature map from the contraction path, and two (3×3) convolutions, each followed by a ReLU activation function layer. Upscaling of the feature maps is done using transposed convolutions. To improve the quality of the resulting map, feature maps obtained on intermediate layers of the model (lower-level features) are usually used.
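The encoder and decoder blocks just described can be expressed compactly in code. The following is a minimal sketch in Keras (the library used later in this paper); the helper names are illustrative and not the authors' actual implementation.

```python
# Minimal sketch of the U-Net building blocks described above (Keras functional API).
from tensorflow.keras import layers

def encoder_block(x, filters):
    # Two (3x3) convolutions, each followed by a ReLU activation
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    # (2x2) MaxPooling downsamples the feature map by a factor of 2
    p = layers.MaxPooling2D(pool_size=2)(x)
    return x, p  # x is kept as the skip connection for the decoder

def decoder_block(x, skip, filters):
    # (2x2) transposed convolution doubles spatial size and halves the channels
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    # Concatenate with the corresponding encoder feature map
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x
```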
1.2. Transfer learning in an image segmentation problem
In most cases, image segmentation datasets consist of at most a few thousand or, in some cases, a few hundred images, since the manual preparation of masks is an expensive procedure. However, it is well known that training a network to high accuracy requires a dataset with a sufficiently large number of images. To overcome these problems, transfer learning is used as one of the possible solutions [23]. Transfer learning (TL) is a deep learning technique that uses pre-trained models as a starting point for solving a target problem, since developing neural network models from scratch for such tasks is computationally expensive and time-consuming.
Fig. 2. U-Net architecture (legend: conv 3×3 + ReLU; copy and crop; max pool 2×2; up-conv 2×2; conv 1×1)
The aim of the transfer learning method is to improve the performance of the model being developed while reducing its training cost, by reusing the pre-trained weights of a neural network that was trained on a different dataset. Large medical image datasets are very rare and often difficult to obtain, so transfer learning is an effective tool for training neural networks on small datasets. At the same time, it is generally accepted that medical image datasets are highly variable across patients and tasks, which makes the transfer learning process more difficult here than in other computer vision tasks. To overcome this problem, this paper combines the U-Net architecture and the transfer learning method to develop a highly efficient brain MRI segmentation model for gliomas.
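In Keras terms, reusing pre-trained weights amounts to loading a backbone without its classification head. Below is a minimal sketch, assuming the standard Keras applications API; DenseNet121 stands in for any of the encoders listed above.

```python
# Load an ImageNet-pretrained backbone as a feature extractor (sketch).
from tensorflow.keras.applications import DenseNet121

# include_top=False removes the ImageNet classification head, leaving
# convolutional features that a segmentation decoder can build on.
backbone = DenseNet121(weights="imagenet", include_top=False,
                       input_shape=(256, 256, 3))
backbone.trainable = False  # freeze pretrained weights for pure transfer learning
```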
1.3. Description of a set of brain MRI images and their preprocessing
The dataset contains 3929 MRI brain images along with manually acquired segmentation masks. They correspond to 110 patients included in the Cancer Genome Atlas (TCGA) collection of low-grade gliomas [24]. Among them, 2556 are images with a tumor and 1373 are images without a tumor. All images have a size of (256×256) pixels. The entire set of images and masks was divided into a training set of 3005 images, a validation set of 393 images, and a test set of 531 images. Fig. 3 and 4 show examples of brain MRI images.
Fig. 3. MRI image of the brain (left image) and a mask for a brain without a tumor (right image)
Fig. 4. MRI image of the brain (left image) and a mask for a brain with a tumor (right image)
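The split described above can be reproduced along the following lines; the loading helper and file path below are hypothetical, and only the split sizes (3005/393/531) come from the text.

```python
# Sketch of the train/validation/test split described above.
from sklearn.model_selection import train_test_split

# load_mri_dataset is a hypothetical helper returning image and mask arrays
images, masks = load_mri_dataset("data/lgg-mri-segmentation")

# 3929 images -> 531 held out for testing, then 393 for validation
x_trainval, x_test, y_trainval, y_test = train_test_split(
    images, masks, test_size=531, random_state=42)
x_train, x_val, y_train, y_val = train_test_split(
    x_trainval, y_trainval, test_size=393, random_state=42)
```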
1.5. Development of the TL-U-Net segmentation model
As the encoder of the U-Net neural network architecture, it is proposed to use various deep convolutional network models, such as VGG16, VGG19, Mobilenetv2, Efficientnetb7, InceptionResnetv2, Densenet121, etc., previously trained on the ImageNet dataset. Consider the structure of the TL-U-Net model using the VGG16 deep convolutional network as an example. VGG16 contains 13 convolutional layers, each followed by a ReLU activation function, and 5 MaxPooling layers, each reducing the feature map by a factor of 2. All convolutional layers have (3×3) filters. The first convolutional layer creates 64 channels, and as the network deepens, the number of channels doubles after each MaxPooling operation until it reaches 512. Fully connected layers are then added, the last of which has a Softmax activation function.
To create the encoder of our TL-U-Net segmentation model, all fully connected (FC) layers were removed from the VGG16 architecture and replaced with a block of three convolutional layers of 512 channels, which serves as a bottleneck between the encoder and the decoder. To build the decoder, transposed convolutions (TransposeConv2D) were used, which double the size of the feature map while halving the number of channels. The output of each transposed convolution is then concatenated with the output of the corresponding part of the encoder, and the resulting feature map is processed by a convolution operation to keep the number of channels the same as in the encoder. Since the network is fully convolutional, it can take inputs of any size; however, because there are 5 MaxPooling layers, each halving the spatial resolution, the input size must be divisible by 32. A conv(1×1) layer with a sigmoid activation function is used as the last layer of the model. Due to the binary nature of the object masks, a threshold of 0.5 was used: all pixels with values above 0.5 are converted to 1 and pixels with values below 0.5 to 0.
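Putting the pieces together, the following sketch assembles a TL-U-Net with a VGG16 encoder. It assumes the standard Keras VGG16 layer names ("block1_conv2", ..., "block5_conv3"); the block structure follows the description above, but the exact layer choices are illustrative.

```python
# Sketch of a TL-U-Net with a VGG16 encoder pre-trained on ImageNet.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG16

def up_block(x, skip, filters):
    # Transposed convolution doubles spatial size while halving channels
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])  # merge with encoder feature map
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

vgg = VGG16(weights="imagenet", include_top=False, input_shape=(256, 256, 3))
skips = [vgg.get_layer(name).output for name in
         ("block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3")]

# Bottleneck: three 512-channel convolutions in place of the removed FC layers
x = vgg.get_layer("block5_conv3").output
for _ in range(3):
    x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)

# Decoder: upsample and merge with the matching encoder feature maps
for skip, filters in zip(reversed(skips), (512, 256, 128, 64)):
    x = up_block(x, skip, filters)

# Final (1x1) convolution with a sigmoid; predicted masks are thresholded at 0.5
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = Model(vgg.input, outputs)
```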
1.5. Training process of the TL-U-Net model
The developed TL-U-Net model was further trained and tuned on the set of brain MRI images described above. All encoder networks were pretrained on the ImageNet dataset, and during retraining the contraction path was frozen to prevent its weights from changing and to reduce the computation time. Unlike pure transfer learning, in which the weights of the entire pretrained network are frozen by setting the trainable parameter of each layer to "False", the remaining layers of the model were kept trainable.
Then a GlobalAveragePooling layer was added to the model, followed by two fully connected Dense layers with "ReLU" and "Softmax" activation functions, separated by a Dropout regularization layer with rate 0.2. The models thus constructed were compiled and trained using the categorical cross-entropy loss function and the Adam optimizer with a learning rate of 1.0E-04. An adaptive learning rate schedule and callbacks, which automatically reduce the learning rate and prevent overfitting when the model accuracy stops improving, were used during training [25, 26].
In total, there are 3929 images, of which 3005 are allocated to the training set, 393 to the validation set, and 531 to the test set. All models were trained for 200 epochs with a batch size of 32 and implemented in Python v.3.7 using the TensorFlow, Keras, and NumPy libraries on a Core i5 CPU with 16 GB of main memory and a GeForce RTX 3080 GPU [27].
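The training setup corresponds roughly to the following sketch. The callbacks are standard Keras components, but the specific values (patience, factor) are illustrative assumptions; binary cross-entropy is used here as the two-class form of cross-entropy matching the single-channel sigmoid output.

```python
# Sketch of the training configuration: Adam (lr = 1e-4), 200 epochs,
# batch size 32, adaptive learning rate schedule, early stopping.
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

callbacks = [
    # Reduce the learning rate when validation loss stops improving
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5),
    # Stop training early to prevent overfitting
    EarlyStopping(monitor="val_loss", patience=15, restore_best_weights=True),
]

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=200, batch_size=32, callbacks=callbacks)
```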
1.6. Metrics for segmentation accuracy evaluation
To assess the accuracy of tumor segmentation, we used pixel accuracy, the Intersection over Union (IoU) metric (Jaccard index), and the Dice index (F1-score). Pixel accuracy is defined as follows:
Accuracy = (TP + TN) /(TP + TN + FP + FN), (1)
where TP is the number of correctly classified pixels of the class (true positives), TN is the number of correctly classified pixels outside the class (true negatives), FP is the number of pixels that the method incorrectly classified as belonging to the class (false positives), and FN is the number of pixels that belong to the class but were not correctly classified by the model (false negatives). TP + TN is the number of correctly classified pixels, and TP + TN + FP + FN is the total number of pixels. For pixel-level image segmentation tasks, given a classification label X: TP means the pixel's classification is correct and the label value is X; FP means the pixel's classification is incorrect and the label value is not X; TN means the pixel's classification is correct and the label value is not X; and FN means the pixel's classification is incorrect and the label value is X. Pixel accuracy reflects the number of correctly classified pixels, but it is not an indicative segmentation metric in the case of class imbalance.
IoU or Jaccard index is defined as follows:
IoU = TP /(TP + FP + FN). (2)
Typically, the IoU metric is averaged over all classes on the full dataset (Mean IoU). Mean IoU can also be computed as a weighted average of the per-class IoU values, with weights equal to the frequency of occurrence of pixels in each class.
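A minimal sketch of this weighted averaging is given below; the per-class IoU values and per-class pixel counts are assumed to be precomputed.

```python
# Weighted Mean IoU: per-class IoU values weighted by class pixel frequency.
import numpy as np

def mean_iou(per_class_iou, pixel_counts):
    weights = np.asarray(pixel_counts) / np.sum(pixel_counts)
    return np.sum(weights * np.asarray(per_class_iou))
```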
The Dice index or F1-score is defined as follows:
Dice = 2TP / (2TP + FP + FN). (3)
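For binary masks, all three metrics (1)-(3) follow directly from the pixel-level confusion counts, as the following NumPy sketch shows; inputs are {0, 1} arrays of the same shape.

```python
# Metrics (1)-(3) computed from binary ground-truth and predicted masks.
import numpy as np

def segmentation_metrics(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Eq. (1)
    iou = tp / (tp + fp + fn)                   # Eq. (2), Jaccard index
    dice = 2 * tp / (2 * tp + fp + fn)          # Eq. (3), F1-score
    return accuracy, iou, dice
```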
2. The results of computer experiments and their discussion
The training process of the proposed TL-U-Net model was carried out on the set of brain MRI images described above. The segmentation scores (1)-(3) obtained for various TL-U-Net encoder models on the test set are presented in Tab. 1. It also includes results for the U-Net and ResU-Net models trained from scratch on the MRI image set.
Tab. 1. Glioma segmentation accuracy scores for various TL-U-Net encoder models
TL-U-Net model Accuracy, % Mean IoU, % Mean Dice, % Loss
VGG16 88.23 61.60 74.50 0.142
VGG19 89.35 77.25 87.05 0.14
Densenet201 89.34 89.41 90.31 0.114
Densenet121 94.22 91.14 94.26 0.090
MobileNetv2 87.50 75.79 86.03 0.123
Inception 86.34 76.13 85.81 0.127
InceptionResNetv2 88.43 78.98 88.14 0.126
Efficientnetb7 78.97 78.96 88.07 0.131
U-Net 77.21 78.25 84.62 0.145
ResU-Net 78.36 78.27 81.33 0.183
A comparative analysis of the results presented in Tab. 1 shows that the Densenet121-U-Net model provides the best values of the metrics (1)-(3): Accuracy = 94.22 %, Mean IoU = 91.14 %, Mean Dice = 94.26 %. Its loss function value is also the lowest among all models, at 0.090. It should also be noted that the values of the metrics (1)-(3) obtained by the U-Net and ResU-Net models are lower than those of the transfer learning models, since they were trained from scratch on the training image set. Fig. 5 shows the predictions of the Densenet121-U-Net model on several MRI images from the test set.
o"
ISO-
0 50 loo 150 200 ISO o 50 100 150 200 250 0 50 100 150 2M 250
a) b) c)
0 50 100 150 200 250 0 30 100 150 200 250 0 50 100 150 :00 250
d) e) f)
Fig. 5. Results ofglioma segmentation prediction from MRI images
using the Densenet121-U-Net model: a) MRI image of the brain with a tumor; b) true mask showing the tumor location; c) predicted mask showing the tumor location; d) MRI image of the brain without a tumor; e) true mask showing the absence of a tumor; f) predicted mask showing the absence of a tumor
A comparative analysis of these results with those of other researchers has also been carried out. The improved U-Net modifications ResU-Net [10] and U-Net+ [11] have been proposed to achieve high performance in brain tumor segmentation; their segmentation accuracy metrics are Mean Dice = 90.4 % and Mean IoU = 82.5 %. In paper [12], the authors proposed a multimodal approach to segmentation with a preliminary binary classification of brain MRI images. In our opinion, this has some promise if the dataset has several different modalities and enough samples in each modality. Despite this limitation, the Mean Dice metric is quite high at 89 %.
In papers [13, 14], the authors investigated the possibility of using pre-trained deep models as encoders in the neural network architecture to improve the quality of tumor segmentation. For example, in paper [13] the DenseNet121 deep model was explored as an encoder of the U-Net architecture, resulting in Mean Dice = 90 %. In paper [14] the authors used the VGG16 deep model as the encoder of a U-Net architecture and claim to have achieved a rather high Accuracy = 96.69 %, but rather low values of Mean Dice = 78.8 % and Mean IoU = 80.4 %. This discrepancy could mean that the training was done only on the encoder network and not on the whole model. In addition, it is possible that the presented model is overfitted due to incorrectly chosen hyperparameters of the neural network.
The authors of study [15] proposed to use an expanded 2D U-Net architecture with ReLU as the activation function and obtained a sufficiently high Mean Dice value of 92 %. An interesting SCU-Net model was proposed in paper [16], where a serial coding/decoding network structure improves segmentation performance by adding Hybrid Dilated Convolution (HDC) modules and concatenation between each module of two serial networks. The resulting segmentation accuracy metrics are Mean IoU = 77 % and Mean Dice = 86.39 %. In paper [17], the authors applied transfer techniques to evaluate segmentation scores on MRI images of the brain; they proposed a Recurrent Residual U-Net model that demonstrated Mean Dice = 85 % and Mean IoU = 86.65 %. Compared to traditional brain tumor segmentation methods on MRI images, paper [18] shows greatly improved segmentation accuracy, with an average Dice index of 90.98 %. In paper [19], the proposed method's Mean Dice score reached 92.3 %.
The results of the comparative analysis are shown in Tab. 2. The segmentation accuracy scores obtained in this paper are comparable to those obtained in [14], [17], and [19], and surpass those from the other papers. It should be noted that the accuracy score is slightly lower than in some other publications. However, this is not crucial to the overall quality of tumor segmentation, because the studied image sets have unbalanced classes; in this case, as shown in several papers, e.g. [28], the use of the accuracy metric may not be appropriate.
Tab. 2. Comparative analysis between segmentation accuracy reported in this study and that in other studies
Paper Mean Dice, % Mean IoU, % Accuracy, % Model Data set
[11] 88.4 82.5 89.13 U-Net+ [24]
[12] 89 - - ILinear BraTS
[13] 90 - - U-Net BraTS
[14] 78.8 80.4 96.69 U-Net-VGG16 [20]
[15] 92 - - 2D U-Net BraTS
[16] 86.39 77 - SCU-Net BraTS
[17] 84.95 86.65 93.14 Recurrent Residual U-Net [24]
[18] 90.98 - - FCNN BraTS
[19] 92.3 - - YOLOv2-based BraTS
This paper 94.26 91.14 94.22 DenseNet121-U-Net [24]
'-' Metrics not provided.
Conclusions
Detecting brain tumors is currently difficult and costly, as it is mainly done with the help of specialists. This problem can be addressed by computer-aided detection. This paper proposes a model for semantic segmentation of brain tumors from MRI scans based on the U-Net deep neural network architecture, using different pre-trained deep convolutional neural network models as encoders.
The computational experiments on segmentation of brain tumors from MRI images showed that the modification of the proposed TL-U-Net neural network that uses the DenseNet121 deep convolutional network model as an encoder provided the highest segmentation accuracy, with the following values: Mean IoU of 91.14 %, Mean Dice of 94.26 %, and Accuracy of 94.22 %.
Thus, it can be assumed that the proposed approach could be used as an independent automated system for preliminary processing of brain MRI images and as a tool for oncologists in the diagnosis of low-grade gliomas. The advantages of the proposed approach include the high accuracy of the results obtained, the flexibility of the model, and the relatively low cost of computation and computer memory. Future research can build on these results by using deeper architectures to improve the overall effectiveness of the segmentation output. It is also planned to integrate different types of Vision Transformer attention mechanisms into the TL-U-Net architecture.
References
[1] Mellinghoff IK, Gilbertson RJ. Brain tumors: challenges and opportunities to cure. J Clin Oncol 2017; 35(21): 2343-2345.
[2] Despotovic I, Goossens B, Philips W. MRI Segmentation of the human Brain: Challenges, methods, and applications. Comput Math Methods Med 2015; 2015: 450341. DOI: 10.1155/2015/450341.
[3] Marusina MYa. Modern types of tomography [in Russian]. Saint-Petersburg: Saint-Petersburg State University ITMO Publisher; 2006.
[4] Iqbal S, et al. Computer-assisted brain tumor type discrimination using magnetic resonance imaging features. Bio-med Eng Lett 2018; 8(1): 5-28.
[5] Yamashita R, et al. Convolutional neural networks: an overview and application in radiology. Insights into Imaging 2018; 9(4): 611-629.
[6] Işın A, Direkoğlu C, Şah M. Review of MRI-based brain tumor image segmentation using deep learning methods. Procedia Comput Sci 2016; 102: 317-324. DOI: 10.1016/j.procs.2016.09.407.
[7] Hoseini F, Shahbahrami A, Bayat P. An efficient implementation of deep convolutional neural networks for MRI segmentation. J Digit Imaging 2018; 31(5): 738-747.
[8] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. arXiv Preprint. 2015. Source: <https://arxiv.org/pdf/1411.4038.pdf>.
[9] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. arXiv Preprint. 2015. Source: <https://arxiv.org/pdf/1505.04597.pdf>.
[10] Lather M, Singh P. Investigating brain tumor segmentation and detection techniques. Procedia Comput Sci 2020; 167: 121-130. DOI: 10.1016/j.procs.2020.03.189.
[11] Cai L, Gao J, Zhao D. A review of the application of deep learning in medical image classification and segmentation. Ann Transl Med 2020; 8(11): 713.
[12] Hussain S, Anwar SM, Majid M. Segmentation of glioma tumors in brain using deep convolutional neural network. Neurocomputing 2018; 282: 248. DOI: 10.1016/j.neucom.2017.12.032.
[13] Stawiaski J. A pretrained DenseNet encoder for brain tumor segmentation. arXiv Preprint. 2018. Source: <https://arxiv.org/pdf/1811.07542.pdf>.
[14] Pravitasari A, Iriawan N, Almuhayar M, Azmi T, Fithriasari K, Purnami S. UNet-VGG16 with transfer learning for MRI-based brain tumor segmentation. TELKOMNIKA Telecommunication, Computing, Electronics and Control 2020; 18(3): 1310-1318. DOI: 10.12928/TELKOMNIKA.v18i3.14753.
[15] Nasim A, Munem A, Islam, et al. Brain tumor segmentation using enhanced U-Net model with empirical analysis. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2210.13336>.
[16] Zheng P, Zhu X, Guo W. Brain tumour segmentation based on an improved U-Net. BMC Medical Imaging 2022; 22: 199. DOI: 10.1186/s12880-022-00931-1.
[17] Gupta A, Dixit M, Mishra VK, Singh A, Dayal A. Brain tumor segmentation from MRI images using deep learning techniques. arXiv Preprint. 2023. Source: <https://arxiv.org/abs/2305.00257>.
[18] Deng W, Shi Q, Luo K, Yang Y, Ning N. Brain Tumor segmentation based on improved convolutional neural network in combination with nonquantifiable local texture feature. J Med Syst 2019; 43: 152. DOI: 10.1007/s10916-019-1289-2.
[19] Sharif MI, Li JP, Amin J, Sharif A. An improved framework for brain tumor analysis using MRI based on YOLOv2 and convolutional neural network. Complex Intell Syst 2021; 7: 2023-2036. DOI: 10.1007/s40747-021-00310-3.
[20] Magadza T, Viriri S. Deep learning for brain tumor segmentation: A survey of state-of-the-art. J Imaging 2021; 7(2): 19. DOI: 10.3390/jimaging7020019.
[21] Biratu ES, Schwenker F, Ayano YM, Debelee TG. A survey of brain tumor segmentation and classification algorithms. J Imaging 2021; 7: 179. DOI: 10.3390/jimaging7090179.
[22] Liu Z, Tong L, Chen L, Jiang Z, Zhou F, et al. Deep learning based brain tumor segmentation: A survey. Complex Intell Syst 2023; 9: 1001-1026. DOI: 10.1007/s40747-022-00815-5.
[23] Shchetinin EY. Detection of COVID-19 coronavirus infection in chest X-ray images with deep learning methods. Computer Optics 2022; 46(6): 963-970. DOI: 10.18287/2412-6179-CO-1077.
[24] Buda M, Saha A, Mazurowski A. Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm. Comput Biol Med 2019; 109: 218-225. DOI: 10.1016/j.compbiomed.2019.05.002.
[25] Chollet F. Keras: Deep learning library for Theano and TensorFlow. 2023. Source: <https://www.datasciencecentral.com/keras-deep-learning-library-for-theano-and-tensorflow/>.
[26] Geron A. Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems. 2nd ed. O'Reilly Media; 2019. ISBN: 1492032646.
[27] Keras: semantic segmentation metrics. 2023. Source: <https://keras.io/api/metrics/segmentation_metrics/>.
[28] Shchetinin EY, Glushkova AG. Arrhythmia detection using resampling and deep learning methods on unbalanced data. Computer Optics 2022; 46(6): 980-987. DOI: 10.18287/2412-6179-CO-1112.
Author's information
Eugene Yurievich Shchetinin (b. 1962) graduated from Moscow State University in 1985, majoring in Applied Mathematics. He currently works as a professor at the Department of Mathematics of the Financial University under the Government of the Russian Federation. His research interests are data analysis, machine learning, deep learning, and computer vision. E-mail: riviera-molto@mail.ru
Received June 4, 2023. The final version - November 11, 2023.