Modern algorithmic methods for the analysis of speech
disorders after a stroke
Sh. Q. Nematov, Y. M. Kamolova (yulduzkhonk5155@gmail.com), I. N. Abdullayev (abdullayevibrohimjon106@gmail.com), Tashkent State Technical University
Abstract: Rehabilitation of stroke patients is a global problem. Speech is a very complex mental activity, divided into different types and forms. It is a specific mental function of a person, which can be defined as the process of communication through language. According to general psychological concepts, speech, like all human mental functions, is a product of long-term development. As a complex functional system, speech includes many afferent and efferent connections. All analyzers participate in the functional system of speech: auditory, visual, motor, and others; each contributes to the afferent and efferent bases of speech. The brain organization of speech is therefore very complex, and speech disorders are diverse, differing according to which parts of the speech system are damaged by the brain lesion.
Keywords: objective diagnosis, aphasia, hybrid aphasia diagnosis, acoustic frequency analysis, algorithm-based
Methods for ensuring the quality of selective speech synthesis
This section considers methods for ensuring the quality of selective speech synthesis, together with post-processing and correction methods for adapting the synthesizer to selective speech production. Intelligibility here is understood as the relative number of correctly recognized speech units (sounds, syllables, words, phrases).
Speech quality is a subjective assessment of the speech sound in the test channel: it is obtained by comparison with the sound in a control channel (taken as five points) or with the speech sound in another tract. Based on this, by the "quality" of synthesized speech we mean its naturalness, i.e., a value describing the subjective assessment of the synthesized speech sound compared to natural speech. Methods for assessing synthesis quality can be divided into two large groups: subjective and instrumental. A separate, intermediate group contains methods that evaluate speech on a particular parameter without subjective judgment yet still require human participation. Such methods test individual modules of the synthesizer: text normalization, decoding of abbreviations, reading of foreign-language insertions, stress assignment, phonetic transcription, and so on. There is also testing in which the tester knows how the system is built and constructs the tests on the basis of this knowledge (for example, exercising weak and robust synthesizer modules such as linguistic processing), that is, testing the synthesis of sounds from text as a specific sequence of operations [1-2].
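As a hedged illustration of the subjective branch of these methods, the short sketch below averages listener ratings given on the five-point scale described above into a single score for one synthesized utterance; the ratings and the function name are placeholders for illustration, not data or tooling from the described system.

from statistics import mean
from typing import List

def mean_opinion_score(ratings: List[int]) -> float:
    """Average listener ratings (1..5) for one synthesized utterance."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must lie on the five-point scale")
    return mean(ratings)

# Illustrative ratings from eight listeners for one test sentence.
print(mean_opinion_score([4, 5, 4, 3, 4, 5, 4, 4]))  # -> 4.125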
Ways to improve speech intelligibility
It can be said with a certain degree of confidence that the problem of speech intelligibility has been essentially solved for modern synthesizers. This means that, although individual words may be synthesized incorrectly and abbreviations or acronyms decoded incorrectly, the overall meaning of synthesized sentences and texts remains clear. A number of tests exist for evaluating the intelligibility of speech produced by synthesis systems. As mentioned above, intelligibility is measured as the relative number of correctly recognized speech units. The speech segments presented for the test and the criteria of correctness can be chosen depending on the tasks set for the subjects. Sound (phonemic), syllabic, word, and phrasal intelligibility are distinguished [1-3].
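As a hedged illustration of this measure, the sketch below computes word-level intelligibility as the fraction of presented units that the listener reproduced correctly; the word lists and the assumption that units are compared position by position are illustrative only.

from typing import List

def intelligibility(presented: List[str], recognized: List[str]) -> float:
    """Relative number of presented units reproduced correctly by the listener."""
    correct = sum(1 for p, r in zip(presented, recognized) if p == r)
    return correct / len(presented)

# Word-level example: four of five presented words were written down correctly.
presented_words = ["olma", "kitob", "maktab", "daraxt", "shahar"]
heard_words = ["olma", "kitob", "maktab", "daraxt", "shahar toj"]
print(intelligibility(presented_words, heard_words))  # -> 0.8

The same function applies to phonemic, syllabic, or phrasal intelligibility by changing the unit into which the presented and recognized speech is segmented.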
(Screenshot of the application's "Neyrodinamik mashqlar" / "Neurodynamic exercises" module.)
Figure 1. Speech analyzer program that recognizes and processes Uzbek words

Integral assessment of synthesized speech quality. In addition to tests of the individual components of the synthesizer, an integral assessment of the overall quality of the synthesized speech is needed, i.e., of how each component affects the overall quality of the synthesis as judged by listeners. Listeners can rate the naturalness and pleasantness of the voice, as well as the appropriateness of the content and the sound for a specific task (for example, in a railway ticketing system). In that case, the speech synthesis program must be given a clearly defined task. For example, correct reading of various abbreviations and proper names is essential when voicing scientific articles or news reports, whereas for voice self-service systems the texts to be spoken can be prepared in advance before being passed to the synthesizer. According to listeners, the timbre and pleasantness of the voice also matter: the tone for telephone conversations should be calm, pleasant, and friendly. For this reason, it is difficult to speak of the integral quality of speech synthesizers outside a specific task. When evaluating a particular system, it is also necessary to check its interactive capabilities, i.e., whether it allows the user to correct problems manually: for example, through a user dictionary in which stress marks or the decoding of certain abbreviations can be specified, or through control of the duration, energy, and frequency of the relevant parts of the speech signal (for example, by means of textual tags), phrasing, intonation, and the production of individual sounds [4].
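As a hedged illustration of the interactive correction facilities mentioned above, the sketch below applies a small user dictionary that expands an abbreviation and marks stress before the text reaches the synthesizer; the entries, the apostrophe used as a stress mark, and the function name are illustrative assumptions rather than the interface of any particular system.

from typing import Dict

# Illustrative user-dictionary entries (assumed format, not a real system's).
USER_DICT: Dict[str, str] = {
    "TDTU": "Toshkent davlat texnika universiteti",  # decode an abbreviation
    "atlas": "atla's",                                # place stress on the second syllable
}

def preprocess(text: str) -> str:
    """Replace dictionary entries so the synthesizer reads them as intended."""
    return " ".join(USER_DICT.get(word, word) for word in text.split())

print(preprocess("TDTU atlas loyihasi"))
# -> "Toshkent davlat texnika universiteti atla's loyihasi"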
The purpose of the experiments is to confirm the usefulness of the proposed methods for selective speech synthesis and to determine the main characteristics of a modern selective speech synthesizer.
Figure 2. Pictorial overview of the proposed semi-automated aphasia diagnosis system
Algorithm of the Developed Inspection Procedure
Confrontation naming is a key component of neurological assessments, since word-retrieval processes are commonly affected by neurological changes. All responses for the confrontation naming task were audio-recorded separately in .wav format with the subjects' consent [5]. The auditory model provided instructions and cues for the confrontation naming task [7].
Accordingly, the confrontation naming score was calculated from the time taken for word production, the formant frequencies, and pathological speech characteristics. Formants are the frequency peaks that carry a high degree of energy in the spectrum [6].
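As a hedged illustration of these two measures, the sketch below estimates the time to word production from a recorded .wav response with a simple amplitude threshold and extracts formant candidates (high-energy spectral peaks) from one voiced frame via autocorrelation LPC; the file name, threshold, frame position, and LPC order are illustrative assumptions, not the parameters of the developed procedure.

import numpy as np
from scipy.io import wavfile

def onset_latency(signal: np.ndarray, sr: int, threshold: float = 0.02) -> float:
    """Seconds from recording start until the normalized amplitude exceeds the threshold."""
    signal = signal / (np.max(np.abs(signal)) + 1e-9)
    voiced = np.flatnonzero(np.abs(signal) > threshold)
    return voiced[0] / sr if voiced.size else float("nan")

def formants(frame: np.ndarray, sr: int, order: int = 10) -> np.ndarray:
    """Estimate formant frequencies of one voiced frame from the roots of an LPC polynomial."""
    frame = frame * np.hamming(len(frame))
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])            # pre-emphasis
    r = np.correlate(frame, frame, "full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.concatenate(([1.0], -np.linalg.solve(R, r[1:])))               # LPC polynomial A(z)
    roots = [z for z in np.roots(a) if z.imag > 0]
    freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
    return freqs[freqs > 90]                                              # drop spurious low peaks

# Hypothetical mono recording of one naming response; the 0.5 s frame is assumed voiced.
sr, audio = wavfile.read("naming_response.wav")
audio = audio.astype(float)
print("latency (s):", onset_latency(audio, sr))
print("formants (Hz):", formants(audio[int(0.5 * sr):int(0.5 * sr) + 1024], sr))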
Conclusion
The number of people affected by acquired brain injury is large and growing: 69 million people worldwide suffer a traumatic brain injury each year, and the global number of first strokes is expected to rise from 16 million in 2005 to 23 million by 2030. More than 75% of people experience communication impairment after a brain injury.
Communication impairment after a brain injury can affect a person's social integration and participation in school, work, and society. The quality of life and mood of the affected person and of his or her family members may also decline. In addition, communication difficulties can be an important stress factor that adds to the burden on the family and caregivers. As a complex functional system, speech involves many afferent and efferent connections. In sensory aphasia, a complete loss of speech comprehension is observed in the early stages after a stroke or trauma.
Speech-language pathologists provide rehabilitation for the treatment of speech disorders in adults after acquired brain injury. As mobile technology becomes more widely available, apps designed for mobile phones and tablets are becoming increasingly popular for such therapy. These applications are designed to detect the presence of aphasia (a language disorder), improve language outcomes and facilitate home practice in adults after stroke, and improve cognitive skills in adults with acquired cognitive impairment.
References
1. Dronkers, N.; Baldo, J.V. Language: Aphasia. In Encyclopedia of Neuroscience; Elsevier Ltd.: Amsterdam, The Netherlands, 2010; pp. 343-348.
2. Kim, H.; Kintz, S.; Zelnosky, K.; Wright, H.H. Measuring word retrieval in narrative discourse: Core lexicon in aphasia. Int. J. Lang. Commun. Disord. 2019, 54, 62-78. [CrossRef] [PubMed]
3. Robson, H.; Zahn, R.; Keidel, J.L.; Binney, R.J.; Sage, K.; Lambon Ralph, M.A. The anterior temporal lobes support residual comprehension in Wernicke's aphasia. Brain 2014, 137, 931-943. [CrossRef] [PubMed]
4. Raymer, A.M.; Gonzalez-Rothi, L.J. The Oxford Handbook of Aphasia and Language Disorders; Oxford University Press: Oxford, UK, 2018.
5. Butterworth, B.; Howard, D.; McLoughlin, P. The semantic deficit in aphasia: The relationship between semantic errors in auditory comprehension and picture naming. Neuropsychologia 1984, 22, 409-426. [CrossRef]
6. Rauschecker, J.P.; Scott, S.K. Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nat. Neurosci. 2009, 12, 718. [CrossRef] [PubMed]
7. Ebbels, S.H.; McCartney, E.; Slonims, V.; Dockrell, J.E.; Norbury, C.F. Evidence-based pathways to intervention for children with language disorders. Int. J. Lang. Commun. Disord. 2019, 54, 3-19. [CrossRef] [PubMed]