Wearable Sign Language Translation Gloves

Project Background and Objectives

Background: Deaf and hard-of-hearing people face communication barriers in social and professional settings. Existing sign language translation devices have notable shortcomings: they are often bulky, expensive, and inaccurate.

Objectives: To develop a high-precision, portable, and personalized sign language translation glove that converts sign language into speech and text, helping the deaf communicate better with others.

Design Concepts and Screening

Design Intention: Because it is difficult for the deaf to learn typing or writing, and the market lacks effective communication tools, the aim is to create a glove that converts sign language directly.

Design Comparison: Drawing on other projects, the plan is to use flex sensors on all five fingers to collect finger-bending data and a motion sensor to monitor palm movement. A Bluetooth module handles wireless transmission, and machine learning is adopted to improve recognition accuracy, addressing the limited sample data and frequent errors of existing products.

Design Prototype: On the hardware side, five flex sensors are used together with an MPU6050 module, which integrates an accelerometer, a gyroscope, and a temperature sensor, plus a Bluetooth module, an LCD screen, and a voice module. On the software side, machine learning is used for gesture recognition.

Final Design Selection: The product ultimately uses an Arduino Mega as the development board to connect the sensors, and a machine learning-based method in software to translate gestures. Because transmission between the Arduino Mega and the Bluetooth module proved unstable, Bluetooth was dropped in favor of a wired connection to the computer.

Prototyping and Testing

Hardware Prototyping Method

Flex Sensor: The core sensing component; the degree of finger bending is reflected in its change of resistance. Sensors of different specifications are used for different fingers, and attention must be paid to the port and resistance configuration when connecting them to the development board.
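As a rough illustration of how bending is inferred, the sketch below converts a raw ADC reading from a flex sensor voltage divider into a resistance value. The supply voltage, divider resistor, and divider orientation are assumptions for illustration, not values from the project.

    VCC = 5.0           # assumed supply voltage on the development board
    R_FIXED = 10_000.0  # assumed fixed divider resistor (10 kOhm)

    def flex_resistance(adc_count: int, adc_max: int = 1023) -> float:
        """Estimate the flex sensor's resistance from a 10-bit ADC reading,
        assuming the sensor forms the upper leg of a voltage divider:
        Vout = VCC * R_fixed / (R_flex + R_fixed)."""
        if adc_count == 0:
            return float("inf")
        v_out = VCC * adc_count / adc_max
        return R_FIXED * (VCC / v_out - 1.0)

For typical flex sensors, resistance grows with the bend angle, which is the quantity the recognition pipeline ultimately consumes.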

Arduino Mega 2560: The main development board, chosen because it offers more pins than the Arduino Uno. Pins are allocated for the sensor connections and data input, and the gesture data input interval is set to 3.5 seconds.

MPU-6050 Module: Integrates multiple sensor functions and connects to the development board through specific pins. Its data assists gesture recognition and improves accuracy.
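For reference, the MPU-6050 returns 16-bit raw counts that must be scaled by the configured full-scale range before use; the scale factors below are the datasheet defaults (±2 g, ±250 °/s), not values taken from the report.

    ACCEL_LSB_PER_G = 16384.0  # MPU-6050 datasheet, +/-2 g full-scale range
    GYRO_LSB_PER_DPS = 131.0   # MPU-6050 datasheet, +/-250 deg/s full-scale range

    def scale_mpu6050(raw_accel, raw_gyro):
        """Convert raw 16-bit accelerometer/gyroscope counts into g and deg/s."""
        accel_g = [a / ACCEL_LSB_PER_G for a in raw_accel]
        gyro_dps = [g / GYRO_LSB_PER_DPS for g in raw_gyro]
        return accel_g, gyro_dps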

LCD12864 Module: Displays the text translation corresponding to each gesture and connects to the corresponding port of the development board. A busy-serial-port problem arises during data transmission.

Bluetooth Module (HC-06): Although it can provide wireless communication, it cannot be used due to the memory limitation of the development board.

CN-TTS Voice Module: Intended for voice broadcast, with a connection method similar to that of the Bluetooth module. It failed during debugging, so the computer's built-in speaker is used instead to provide the voice function.
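Because speech now comes from the host computer, a small offline text-to-speech layer suffices. A minimal sketch using pyttsx3 (one possible library choice, not necessarily the project's):

    import pyttsx3

    engine = pyttsx3.init()          # binds to the OS's built-in speech engine
    engine.setProperty("rate", 150)  # speaking rate in words per minute

    def speak(text: str) -> None:
        """Play the translated gesture text through the computer's speaker."""
        engine.say(text)
        engine.runAndWait()

    speak("hello")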

Software Prototyping Method

Data Recording: Each gesture is captured over a 3.5-second window, generating 14 records that contain readings from the finger flex sensors and the MPU6050. The data is preprocessed and filtered, and the relevant compensation values are calculated. The database currently contains 13 gestures, with 450 records per gesture.
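A minimal host-side sketch of this recording loop, assuming the glove streams one comma-separated record per line over the wired serial link (the port name, baud rate, and moving-average filter are illustrative assumptions):

    import numpy as np
    import serial  # pyserial

    PORT = "COM3"              # assumed serial port of the development board
    BAUD = 9600                # assumed baud rate
    RECORDS_PER_GESTURE = 14   # records captured in each 3.5 s window

    def record_gesture(ser: serial.Serial) -> np.ndarray:
        """Read one gesture window: 14 lines of comma-separated
        flex-sensor and MPU6050 values."""
        rows = []
        while len(rows) < RECORDS_PER_GESTURE:
            line = ser.readline().decode(errors="ignore").strip()
            if line:
                rows.append([float(v) for v in line.split(",")])
        return np.array(rows)

    def smooth(window: np.ndarray, k: int = 3) -> np.ndarray:
        """Moving-average filter along the time axis, standing in for the
        preprocessing/filtering step described above."""
        kernel = np.ones(k) / k
        return np.apply_along_axis(
            lambda col: np.convolve(col, kernel, mode="same"), 0, window)

    with serial.Serial(PORT, BAUD, timeout=5) as ser:
        sample = smooth(record_gesture(ser))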

Deep Learning Method: A bidirectional long short-term memory network (BiLSTM) combined with an attention mechanism is used to analyze the time-series data. Multiple stacked layers and related mechanisms improve the model's performance and gesture recognition accuracy.
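A minimal Keras sketch of such a model, assuming 14 timesteps per gesture (from the recording step), 11 input channels (5 flex plus 3 accelerometer plus 3 gyroscope axes, an assumption), and 13 output classes; the layer sizes and the simple additive attention are illustrative, not the project's exact architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    TIMESTEPS, FEATURES, NUM_CLASSES = 14, 11, 13

    inputs = layers.Input(shape=(TIMESTEPS, FEATURES))
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
    x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)

    # Additive attention: score each timestep, normalize with softmax,
    # and collapse the sequence into a weighted sum.
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    x = layers.Multiply()([x, weights])
    x = layers.Lambda(lambda t: tf.reduce_sum(t, axis=1))(x)

    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])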

Static Gesture Recognition Algorithm: To compensate for the limitations of the sensors and the model, threshold ranges are set by calculating the mean and standard deviation of each gesture's sample features; this supplements the recognition of gestures the model handles poorly.
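A sketch of one plausible form of this rule, where the acceptance band for each feature is the sample mean plus or minus k standard deviations (k is an illustrative parameter):

    import numpy as np

    def fit_thresholds(samples: np.ndarray, k: float = 2.0):
        """samples: (n_samples, n_features) feature vectors of one gesture.
        Returns per-feature (low, high) bands of mean +/- k * std."""
        mu, sigma = samples.mean(axis=0), samples.std(axis=0)
        return mu - k * sigma, mu + k * sigma

    def match_static(sample: np.ndarray, bands: dict):
        """Return the first gesture whose band contains every feature of
        the sample, or None if no static gesture matches."""
        for name, (low, high) in bands.items():
            if np.all((sample >= low) & (sample <= high)):
                return name
        return None

When the deep model's prediction is weak, the static matcher can be consulted as a fallback.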

Testing and Results

Design Test: The hyperparameters of the deep learning model are tuned to find the best-performing configuration. The model is evaluated with multiple metrics, including loss and accuracy curves and confusion matrices. Some gestures are poorly recognized by the model alone; after combining it with the static gesture algorithm, the final accuracy reaches 84.3%.
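The evaluation step typically looks like the scikit-learn sketch below (assuming x_test and y_test form a held-out split with integer gesture labels, and model is the trained network from the sketch above):

    import numpy as np
    from sklearn.metrics import classification_report, confusion_matrix

    y_pred = np.argmax(model.predict(x_test), axis=1)
    print(classification_report(y_test, y_pred))  # per-gesture precision/recall
    print(confusion_matrix(y_test, y_pred))       # rows: true, columns: predicted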

User Experience and Feedback: Forty people were invited to try the product. Most users approved of its function and appearance, but they pointed out two shortcomings: with only one glove built, two-handed gestures cannot be recognized, and sign language also relies on facial expressions, which the glove cannot capture.

Reflection, Discussion and Improvement

Software Improvement: Strengthen the signal conditioning and filtering of sensor data; adopt a new attention mechanism to enhance the model's recognition ability; and explore new model architectures to support more functions, such as letting users customize gestures after training.

Hardware Improvement: Replace the motion sensor with the higher-performance MPU9250; for the busy LCD serial port, add a buffer time or switch to an LCD that supports the I2C protocol; select better-quality Dupont wires; consider replacing the development board with an STM32 to enable wireless communication and expand memory; and design a PCB to facilitate integrated design and large-scale production.

Project Summary

Project Overview: The glove uses sensors to collect gesture data and converts it into text and speech through dynamic and static algorithms, providing communication assistance for the deaf.

Promoting Equality, Diversity and Inclusion: The glove effectively reduces communication barriers for the deaf, helping them integrate into society and fostering a more inclusive and diverse society.

Environmental and Social Impact: The design uses environmentally friendly, durable materials and reduces hardware use and waste; it can also lower the cost of sign language education and improve the social status and employment opportunities of the deaf.

Design Process and Performance Reflection: After multiple iterations of design and testing, the hardware and software solutions were finalized, but there is still room for improvement in gesture recognition accuracy and latency.

Future Improvement: Plans include replacing the MCU, the six-axis motion sensor, and the display device, and adding more gestures to continuously optimize the product.