This repository contains deep learning models for translating sign language gestures into text, covering both individual letters and complete words. The project uses a CNN architecture for letter recognition and an LSTM architecture for word recognition.
The LSTM (Long Short-Term Memory) model is employed for recognizing and translating complete words from sign language gestures. LSTM networks are chosen for their ability to capture temporal dependencies in sequential data.
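To illustrate how an LSTM captures temporal dependencies across the frames of a gesture, here is a single LSTM time step written in plain NumPy. This is a minimal sketch of the underlying recurrence; the hidden size, feature size, and sequence length are illustrative, not the repository's actual model configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step over input x, given the previous hidden state h
    and cell state c. Gates decide how much past memory to keep and how
    much of the current frame to write into the cell."""
    H = h.shape[0]
    z = W @ x + U @ h + b          # stacked gate pre-activations, shape (4H,)
    i = sigmoid(z[0:H])            # input gate: how much new info to admit
    f = sigmoid(z[H:2*H])          # forget gate: how much old memory to keep
    o = sigmoid(z[2*H:3*H])        # output gate: how much cell state to expose
    g = np.tanh(z[3*H:4*H])        # candidate update from the current input
    c_new = f * c + i * g          # long-term memory carried across frames
    h_new = o * np.tanh(c_new)     # hidden state emitted for this frame
    return h_new, c_new

# Run a toy 30-frame gesture sequence of 8-dim features through the cell.
rng = np.random.default_rng(0)
H, D, T = 16, 8, 30
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # final hidden state summarizes the whole sequence: (16,)
```

Because the cell state `c` is carried from frame to frame, information from early in the gesture can influence the final prediction, which is exactly what word-level recognition needs.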
The CNN (Convolutional Neural Network) model is used for recognizing individual letters from sign language gestures. CNNs are effective at extracting spatial features from images, making them well suited to recognizing the hand shapes associated with different letters.
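To show what "extracting spatial features" means in practice, here is a naive 2D convolution over a toy grayscale image in NumPy. It sketches the operation a single CNN layer performs; the image size and the edge-detecting kernel are illustrative assumptions, not the repository's actual network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most
    deep learning frameworks): slide the kernel over the image and take a
    dot product at each position to produce a feature map."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where a silhouette edge is.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

image = np.zeros((8, 8))
image[:, 4:] = 1.0                 # toy image: dark left half, bright right half
feature_map = conv2d(image, sobel_x)
print(feature_map.shape)           # (6, 6)
print(feature_map.max())           # strong response at the vertical edge: 4.0
```

A trained CNN stacks many such learned kernels, so deeper layers respond to increasingly complex hand-shape patterns rather than simple edges.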
- Word Translation: Translate sequences of sign language gestures into corresponding words.
- Letter Recognition: Identify and translate individual sign language letters accurately.
- Deep Learning Architecture: Utilizes state-of-the-art deep learning techniques to achieve high accuracy in translation tasks.
- Training and Evaluation: Includes scripts and notebooks for training, evaluating model performance, and fine-tuning.
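The training and evaluation workflow listed above can be sketched as a minimal loop. The example below trains a toy softmax classifier on random data purely to show the pattern (split, train, evaluate on held-out data); the repository's actual scripts and notebooks define the real models and datasets.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: 200 feature vectors, 5 hypothetical gesture classes.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 5, size=200)
split = 160
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

W = np.zeros((20, 5))              # weights of a linear softmax classifier

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Training: plain gradient descent on the cross-entropy loss.
for epoch in range(100):
    probs = softmax(X_train @ W)
    probs[np.arange(split), y_train] -= 1.0   # gradient of loss w.r.t. logits
    W -= 0.1 * (X_train.T @ probs) / split

# Evaluation: accuracy on the held-out split.
preds = softmax(X_test @ W).argmax(axis=1)
accuracy = (preds == y_test).mean()
print(f"test accuracy: {accuracy:.2f}")
```

Fine-tuning then amounts to resuming this loop from saved weights, typically with a lower learning rate.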
Contributions to enhance the models, improve accuracy, or add new features are welcome. Please fork the repository, make your changes, and submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.