HighTalk natural language understanding platform of LingLing Technology Co., Ltd.
1、Clone the HT_NLU project to your local machine
git clone https://github.com/LLGOVRDML/Rasa_NLU.git
2、Serve the BERT model with bert-as-service
Open a Windows command prompt (Windows+R, then run cmd)
pip install bert-serving-server -i https://mirrors.aliyun.com/pypi/simple
pip install bert-serving-client -i https://mirrors.aliyun.com/pypi/simple
cd ${yourpath}/HT_NLU/bert-as-service
bert-serving-start -model_dir D:\chinese_L-12_H-768_A-12 -tuned_model_dir C:\Users\weizhen\Desktop\NLU\rasa_model_output -ckpt_name=model.ckpt-1028
When the log prints "ready and listening", the BERT server is up and you can move on to the next step.
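Once the server reports ready, you can sanity-check it from Python with the bert-serving-client package installed above. The helper below is a self-contained sketch (the sample sentences are illustrative); the BertClient calls are shown in comments because they need a live server:

```python
# Sanity check for bert-as-service output: compare two sentence vectors
# by cosine similarity. Works on any equal-length vectors, e.g. the
# 768-dimensional vectors produced by the chinese_L-12_H-768_A-12 model.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two sentence vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Example usage against a live server (requires bert-serving-start
# from the previous step to be running on the default ports 5555/5556):
#   from bert_serving.client import BertClient
#   bc = BertClient()                 # connects to localhost by default
#   vecs = bc.encode(["你好", "您好"])  # returns a (2, 768) ndarray
#   print(cosine(vecs[0], vecs[1]))
```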
0、cd into the project root folder and double-click "visualcppbuildtools full.exe" to install the C++ build tools on Windows
1、Install the required Python packages
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple
2、Open the project in PyCharm and edit the run configuration
After the previous step, open the rasa_nlu_gq project in the PyCharm IDE and edit the run configuration (Run > Edit Configurations):
serving parameters:
-c sample_configs/config_embedding_bert_intent_estimator_classifier.yml --path projects/bert_gongan_v4
training parameters:
-c sample_configs/config_embedding_bert_intent_estimator_classifier.yml --data data/examples/luis/HighTalkSQSWLuisAppStaging-GA-20180824.json --path projects
3、Start the rasa_nlu_gq server
Press the Run button to start it.
Train the Rasa NLU model with the BERT word vectors:
python train.py -c sample_configs/config_embedding_bert_intent_classifier.yml --data data/examples/luis/HighTalkSQSWLuisAppStaging-GA-20180824.json --path projects/bert_gongan_v4
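After training, the model can be queried over the server's HTTP /parse endpoint (port 5000, as used in the Docker commands below). The sketch below assumes the Rasa NLU 0.x GET /parse API with `q` and `project` query parameters, and uses the project name from the training command above:

```python
# Minimal client sketch for querying a trained Rasa NLU project
# over HTTP. Host, port 5000, and project name bert_gongan_v4 are
# taken from the commands in this README.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def parse_url(query, project, host="http://localhost:5000"):
    """Build the GET /parse URL used by the Rasa NLU 0.x HTTP API."""
    return host + "/parse?" + urlencode({"q": query, "project": project})

def parse(query, project="bert_gongan_v4"):
    """Send the query to a running rasa_nlu_gq server (step 3 above)
    and return the decoded JSON result (intent, entities, ...)."""
    with urlopen(parse_url(query, project)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```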
cd rasa_nlu_gq
docker build -t rasa_nlu_gq:v1.0 .
cd bert-as-service
docker build -t bert-as-service:v1.0 .
docker run -it -p 5555:5555 -p 5556:5556 bert-as-service:v1.0
Change the rasa_nlu_gq model's BERT endpoint IP and the Rasa project's outbound call endpoint to match your environment; after that you can run the Rasa Docker image:
docker run -it -p 5000:5000 rasa_nlu_gq:v1.0
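The two containers above can also be started together with Docker Compose; a sketch, assuming the image tags built above (service names here are illustrative, and with Compose the Rasa container can reach the BERT server at the hostname of its service, which is what the endpoint change above would point to):

```yaml
# docker-compose.yml (sketch; ports match the docker run commands above)
version: "3"
services:
  bert:
    image: bert-as-service:v1.0
    ports:
      - "5555:5555"
      - "5556:5556"
  nlu:
    image: rasa_nlu_gq:v1.0
    ports:
      - "5000:5000"
    depends_on:
      - bert
```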