GLG project
We are Ying Hu, Cody McCormack, and Cris Fortes. This repo is part of our capstone project for FourthBrain's Machine Learning Engineer program (August to December 2022).
GLG's business largely revolves around matching clients who request insights on a specific topic with an expert on that topic from their large database, so that they can meet by phone, video, or in person. Since GLG receives hundreds of these requests per day, we wanted to use machine learning to help automate and scale the process.
We used NLP models to extract useful information from the requests. You can find a recording of our presentation here: Presentation and Slide Deck.
Our application takes a textual input and outputs its keywords, a list of possible related topics, and a list of similar sentences from our database. The application was deployed on AWS, but we took it down after the program ended. You can watch an HD product demo here:
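As a rough illustration of the kind of processing involved (not the actual models in this repo, which use pretrained NLP pipelines), keyword extraction and sentence similarity can be sketched with plain token counts and TF-IDF-style scoring. The function and variable names here are purely illustrative:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split on non-letters; a stand-in for a real NLP tokenizer."""
    return re.findall(r"[a-z]+", text.lower())

def keywords(query, corpus, top_n=3):
    """Score the query's terms by TF-IDF against a small corpus; return the top ones."""
    docs = [set(tokenize(d)) for d in corpus]
    tf = Counter(tokenize(query))
    def idf(word):
        df = sum(word in d for d in docs)  # document frequency
        return math.log((1 + len(docs)) / (1 + df)) + 1
    ranked = sorted(tf, key=lambda w: tf[w] * idf(w), reverse=True)
    return ranked[:top_n]

def most_similar(query, corpus):
    """Return the corpus sentence with the highest cosine similarity to the query."""
    def vec(text):
        return Counter(tokenize(text))
    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0
    q = vec(query)
    return max(corpus, key=lambda s: cosine(q, vec(s)))
```

In the real app, keyword extraction, topic assignment, and similarity search are each handled by dedicated models rather than raw token counts, but the input/output shape is the same: one text in, ranked lists out.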
demo.mp4
This app can be easily deployed using Docker. The instructions to deploy in the cloud or locally are the same.
1. Clone this repository, either on a local machine or in a cloud instance.
2. Navigate to the `flask_app` folder.
3. Build the Docker image:

   ```
   docker build -t <image_name> .
   ```

   If you don't have Docker installed on your machine or cloud instance, you will need to install it and start the Docker daemon before you can build an image.
4. Run the Docker image:

   ```
   docker run -d --rm --name <container_name> -p 8000:8000 <image_name>
   ```

5. Navigate to port 8000 on either localhost or the public IP of your cloud instance, e.g. `127.0.0.1:8000`.
NOTE: This application depends on prebuilt machine learning models saved as Pickle files. In principle, a Pickle file can be built once and loaded on any other machine; in practice, we found that the app may crash on startup, most likely because of incompatibilities with these `.pkl` files. To resolve the issue, follow the steps below to rebuild the models during the Docker build. This slows down the build considerably and may take up to 20 minutes, depending on your machine.
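The fragility described above is a known property of pickle: it serializes objects by reference to their class and module paths, so a file written under one library version may fail to load under another. One defensive pattern is to fall back to rebuilding the model when loading fails; in this sketch, `rebuild_model` is a hypothetical stand-in for what `model_maker.py` does:

```python
import pickle

def rebuild_model():
    """Hypothetical stand-in for running model_maker.py; returns a fresh model object."""
    return {"weights": [0.1, 0.2, 0.3]}

def load_or_rebuild(path):
    """Try to load a pickled model; if the file is missing or was written with an
    incompatible library version, rebuild it from scratch and re-save it."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (FileNotFoundError, pickle.UnpicklingError,
            AttributeError, ModuleNotFoundError):
        model = rebuild_model()
        with open(path, "wb") as f:
            pickle.dump(model, f)
        return model
```

Uncommenting `RUN python model_maker.py` in the Dockerfile achieves the same effect at build time: the pickles are regenerated inside the image, so they always match the library versions installed there.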
- Open `Dockerfile` and remove the `#` from the third line from the bottom, so that it reads `RUN python model_maker.py`.
- Then pick up again from step 3 above.
- Improve the topic modeling:
  - Training an LDA model on a more diverse dataset
  - Using a semi-supervised learning method (SentenceTransformers + label propagation)
- Expand the scope of the project:
- Building the expert(s) recommendation model
- Adapting our models to cover non-English languages (GLG also has offices in Europe, Asia, and the Middle East)
MIT License. Copyright (c) 2022 Cody McCormack, Cris Fortes and Ying Hu