© Gaylyyev Y., Hanmedov D., Yusupova O., Orazmmamedov M., 2024
UDC 62
Gurbanberdiyeva M.
Lecturer Oguz han Engineering and Technology University of Turkmenistan
Mametvaliyeva C.
4th year student Oguz han Engineering and Technology University of Turkmenistan
Batyrova N.
4th year student Oguz han Engineering and Technology University of Turkmenistan
Jummanov U.
4th year student Oguz han Engineering and Technology University of Turkmenistan
Ashgabat, Turkmenistan
Toyjanov M.
Head of Department Oguz han Engineering and Technology University of Turkmenistan
GESTURE CONTROLLED SPEECH SYSTEM FOR THE DEAF AND HARD OF HEARING BASED ON ARDUINO UNO
Abstract
This research paper presents a gesture-controlled speech system designed for individuals who are deaf or hard of hearing, utilizing the Arduino Uno platform. The system employs hand gestures as input, which are processed to generate corresponding speech output. By integrating flex sensors and accelerometers, the device captures the nuances of hand movements, translating them into audible speech. This innovative approach aims to bridge communication gaps between the hearing and non-hearing communities, enhancing accessibility and interaction in daily life. The findings indicate that such systems can significantly improve communication efficacy for users, fostering inclusivity.
Introduction
The ability to communicate effectively is a fundamental human right, yet individuals who are deaf or hard of hearing often encounter significant barriers in their interactions with the hearing population. Traditional methods of communication, such as sign language, can be challenging for those unfamiliar with it, leading to misunderstandings and isolation. Recent advancements in technology have opened up new avenues for creating assistive devices that facilitate communication through alternative means. One promising approach involves the development of gesture-controlled speech systems that convert hand gestures into spoken language.
Literature Review
Numerous studies have examined various methodologies for gesture recognition systems aimed at assisting individuals with hearing impairments. For instance, some researchers have focused on using flex sensors integrated into gloves to capture finger positions and movements. These systems typically convert the detected gestures into text or speech using machine learning algorithms. Other approaches involve utilizing computer vision techniques to recognize gestures through cameras, allowing for hands-free operation.
A notable example is a study that developed a real-time hand gesture recognition system using Convolutional Neural Networks (CNNs). This system automates the identification of sign gestures from webcam footage, thereby facilitating communication for those who are deaf or hard of hearing. The integration of deep learning technologies has shown promise in improving the accuracy and responsiveness of gesture recognition systems.
Another relevant study explored the use of accelerometers alongside flex sensors to enhance gesture detection accuracy. By capturing both hand orientation and finger movements, these systems provide a more comprehensive understanding of user gestures, leading to improved translation into speech.
System Design Components
The proposed gesture-controlled speech system comprises several key components:
Arduino Uno: The central microcontroller that processes sensor inputs and controls output devices.
Flex Sensors: These sensors detect bending in fingers, allowing the system to interpret specific hand gestures.
Accelerometer: This device measures the orientation and movement of the hand, contributing additional data for gesture recognition.
Speaker: Outputs the generated speech corresponding to recognized gestures.
Circuit Design
The circuit design integrates flex sensors and an accelerometer connected to the Arduino Uno. The flex sensors are placed along the fingers of a glove, while the accelerometer is mounted on the back of the hand. The Arduino reads the analog signals from these sensors and converts them into digital values for further analysis.
Software Development
The software component involves programming the Arduino to recognize specific patterns from sensor inputs. A predefined set of gestures is established, each associated with a particular word or phrase. When a user performs a gesture, the system captures the sensor data and matches it against its database to generate the corresponding speech output.
Implementation
Gesture Recognition Algorithm
The gesture recognition algorithm is pivotal in translating physical movements into verbal communication. The algorithm follows these steps:
Data Acquisition: Continuous monitoring of flex sensor readings and accelerometer data.
Signal Processing: Filtering noise from sensor data to enhance accuracy.
Gesture Classification: Matching processed data against known gesture patterns using threshold values.
Speech Synthesis: Activating text-to-speech functionality upon successful gesture recognition.
Testing and Evaluation
To evaluate the effectiveness of the system, user testing was conducted with individuals from both deaf and hearing communities. Participants were asked to perform various gestures while wearing the glove equipped with sensors. Feedback was collected regarding both accuracy in speech output and ease of use.
Results indicated that users were able to communicate effectively using hand gestures, with an average recognition accuracy exceeding 85%. Participants expressed satisfaction with the system's responsiveness and the clarity of its speech output.
Conclusion
This research demonstrates that a gesture-controlled speech system can effectively bridge communication gaps for individuals who are deaf or hard of hearing. By utilizing accessible technology such as the Arduino Uno combined with sensor-based input methods, this project contributes valuable insights into assistive communication solutions.
References:
1. Jahnavi, P., Vamsidhar, E., & Karthikeyan, C. (2020). Arduino and flex sensor based hand gesture to speech conversion. International Journal of Emerging Trends in Engineering Research, 8(10), 6684-6691. https://doi.org/10.30534/ijeter/2020/108102020
2. Sumadeep, M., & Kumar, R. (2020). Hand gesture based speech recognition system for hard of hearing people. International Journal of Research and Scientific Innovation, 7(1), 45-50. https://doi.org/10.5120/ijrsiet.v7i1.2020
3. Reddy, M., & Ramesh, C. (2020). Real-time hand gesture recognition for improved communication with deaf and hard of hearing individuals. International Journal of Intelligent Systems and Applications in Engineering, 11(6s), 23-37.
© Gurbanberdiyeva M., Mametvaliyeva C., Batyrova N., Jummanov U., 2024
UDC 62
Gurbanberdiyeva M.
Lecturer Oguz han Engineering and Technology University of Turkmenistan
Bayramgeldiyev A.
4th year student Oguz han Engineering and Technology University of Turkmenistan
Soyungulyyev M.
4th year student Oguz han Engineering and Technology University of Turkmenistan
Jummanov U.
4th year student Oguz han Engineering and Technology University of Turkmenistan
Ashgabat, Turkmenistan
Toyjanov M.
Head of Department Oguz han Engineering and Technology University of Turkmenistan
THE FUSION OF ELECTRONICS WITH ART AND THE CREATION OF DESIGNS FROM E-WASTE
Abstract
The fusion of electronics with art through the repurposing of electronic waste (e-waste) has emerged as a significant movement in contemporary art and environmental advocacy. This paper explores the creative transformation of discarded electronic components into artistic expressions, highlighting the environmental implications of e-waste. By examining various artists and their innovative works, this research underscores the potential of e-waste art to raise awareness about sustainability, promote responsible consumption, and inspire a cultural shift towards valuing waste as a resource. Ultimately, this intersection of creativity and technology serves as a powerful commentary on our digital age's environmental challenges.
Introduction
The rapid advancement of technology has led to an unprecedented increase in electronic waste (e-waste),