- Usman Jalil (Registration Number: 346172)
- Ali Subhan (Registration Number: 337505)
- Muhammad Abdullah (Registration Number: 334656)
- Talha Zahid Ch. (Registration Number: 346206)
This project implements Human Action Recognition in videos using Convolutional Long Short-Term Memory networks (ConvLSTMs). The goal is to build a robust system that accurately identifies and classifies human actions in video sequences. The significance of this work lies in its potential applications across several domains, including surveillance, human-computer interaction, and sports analytics.
Our approach uses Convolutional LSTMs, which are well suited to capturing both the spatial and the temporal structure of video data. The project comprises the following key steps:
- Data Collection: Acquiring a diverse dataset of human actions in video format.
- Data Preprocessing: Cleaning and formatting the data for training.
- Model Architecture: Designing and implementing a Convolutional LSTM architecture for action recognition.
- Training: Training the model on the prepared dataset.
- Evaluation: Assessing the model's performance using appropriate metrics.
- Usage: Providing a user-friendly interface for utilizing the trained model.
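To make the model-architecture step concrete, below is a minimal NumPy sketch of the core ConvLSTM cell update: each gate is computed by convolving the current frame and the previous hidden state, so the recurrence preserves spatial layout while accumulating temporal context. This is illustrative only; in practice the project would use a framework layer such as Keras's `ConvLSTM2D`, and all shapes, weights, and names here are assumed for the example.

```python
import numpy as np

def conv2d_same(x, w):
    """'Same'-padded 2D convolution (cross-correlation).
    x: (H, W, C_in), w: (k, k, C_in, C_out) with odd k."""
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    # Sliding windows over the spatial axes: (H, W, C_in, k, k)
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k), axis=(0, 1))
    return np.einsum("hwcij,ijco->hwo", win, w)

def convlstm_step(x, h, c, wx, wh, b):
    """One ConvLSTM cell update. x: (H, W, C_in); h, c: (H, W, C_h);
    wx: (k, k, C_in, 4*C_h), wh: (k, k, C_h, 4*C_h), b: (4*C_h,)."""
    z = conv2d_same(x, wx) + conv2d_same(h, wh) + b
    zi, zf, zg, zo = np.split(z, 4, axis=-1)  # input, forget, cell, output gates
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(zf) * c + sig(zi) * np.tanh(zg)  # forget old memory, write new
    h_new = sig(zo) * np.tanh(c_new)             # gated hidden state
    return h_new, c_new

# Run the cell over a short random "video" (5 frames of 8x8 3-channel input).
rng = np.random.default_rng(0)
T, H, W, C_IN, C_H, K = 5, 8, 8, 3, 4, 3  # illustrative sizes, not the project's
video = rng.standard_normal((T, H, W, C_IN))
wx = rng.standard_normal((K, K, C_IN, 4 * C_H)) * 0.1
wh = rng.standard_normal((K, K, C_H, 4 * C_H)) * 0.1
b = np.zeros(4 * C_H)
h = np.zeros((H, W, C_H))
c = np.zeros((H, W, C_H))
for frame in video:
    h, c = convlstm_step(frame, h, c, wx, wh, b)
print(h.shape)  # (8, 8, 4)
```

The final hidden state `h` keeps its 8x8 spatial grid, which is exactly why ConvLSTMs suit video: a classification head can pool this map after the last frame to predict the action class.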
- `/data`: Contains the dataset used for training and testing.
- `/src`: Source code for the Convolutional LSTM model.
- `/docs`: Documentation and other project-related files.
- `/results`: Output and results generated during the project.
To get started with the project, follow these steps:
- Clone the repository.
- Install the dependencies: `pip install streamlit watchdog pytube`
- Run the app: `streamlit run app.py`
- To run in detached mode: `nohup streamlit run app.py &`
This project is licensed under the MIT License - see the LICENSE file for details.