fudannlp16 / cws_dict
Source code for the paper "Neural Networks Incorporating Dictionaries for Chinese Word Segmentation", AAAI 2018
Train Epoch 0 loss 63.578703 855.35 (sec) <<
Valid Epoch 0 loss 24.911499
P:0.920358 R:0.946347 F:0.933171
Traceback (most recent call last):
File "train_baseline.py", line 176, in
train()
File "train_baseline.py", line 118, in train
predict= model.predict_step(sess, input_x)
File "/home/bigdata/CWS_Dict/same-domain/models/BaselineModel.py", line 126, in predict_step
viterbi_sequence, _=crf.viterbi_decode(unary_score[:length],transition_param)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/crf/python/ops/crf.py", line 299, in viterbi_decode
trellis[0] = score[0]
IndexError: index 0 is out of bounds for axis 0 with size 0
Thanks.
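Not the author, but the traceback suggests a zero-length sentence reached the decoder: `unary_score[:length]` with `length == 0` is an empty array, and the first step of Viterbi decoding indexes row 0. A minimal sketch of that failure mode in plain NumPy (the function name is hypothetical; it only mirrors the statement that raises):

```python
import numpy as np

def viterbi_first_step(score):
    # Mirrors the first statement inside tf.contrib.crf.viterbi_decode
    # shown in the traceback: trellis[0] = score[0].
    trellis = np.zeros_like(score)
    trellis[0] = score[0]  # IndexError when score has zero rows
    return trellis

num_tags = 4
empty_score = np.zeros((0, num_tags))  # what unary_score[:0] looks like

try:
    viterbi_first_step(empty_score)
except IndexError as e:
    print("reproduced:", e)
```

If that is the cause, filtering out empty lines from the input before calling predict_step should avoid the crash.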
As the title asks: did you use gensim word2vec to pretrain the vectors?
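Not the author, but for what it's worth: whichever tool produced them, pretrained vectors saved in the word2vec text format (gensim's `save_word2vec_format` emits it) can be read back with plain Python. A small sketch with made-up characters and dimensions:

```python
import io

# Hypothetical word2vec-format text: the first line is "<vocab> <dim>",
# then one token per line followed by its vector components.
sample = u"""3 4
人 0.1 0.2 0.3 0.4
民 0.5 0.6 0.7 0.8
日 0.9 1.0 1.1 1.2
"""

def load_w2v_text(fh):
    header = fh.readline().split()
    vocab_size, dim = int(header[0]), int(header[1])
    vectors = {}
    for line in fh:
        parts = line.rstrip().split()
        token, vec = parts[0], [float(x) for x in parts[1:]]
        assert len(vec) == dim
        vectors[token] = vec
    assert len(vectors) == vocab_size
    return vectors, dim

vectors, dim = load_w2v_text(io.StringIO(sample))
print(dim, len(vectors))
```

The resulting dict can then be used to initialize the embedding matrix for characters that appear in the training data.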
I used Python 2.7.12 and tensorflow-gpu 1.0.0 on Ubuntu 16.04 to try to reproduce the same-domain experiments, but so far I have only obtained different (lower) test scores on PKU and MSR for model2.
Please advise.
Some more info about my environment:
GeForce GTX 1080 Ti * 2
CUDA 8.0.61
CuDNN 5.1.10
How can I get the CTB dataset? Thank you!
I have run your code, and the results do not match. In your results, Model I gets 0.962 on PKU, 0.976 on MSR and 0.960 on CityU, but when I run it I get 0.9697 on PKU, 0.9734 on MSR and 0.9675 on CityU. Why do I get these better results?
Can the other training datasets be downloaded directly?
I use TensorFlow 1.3.1. When I run train_dict.py, a warning is shown:
/opt/anaconda2/envs/tf1p3py27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py:95: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Have you run into this situation?
As in the title, thanks.
Hi, I have built some models with Keras. I want to use your pipeline to evaluate them on the standard datasets; how can I do that?
Sorry, my mistake.
When I run:
python train_dict.py --dataset msr --model DictHyperModel --model_path mymsr
After a long epoch progress log, the following error is shown:
('epoch:0>>99.32%', 'completed in 959.34 (sec) <<\r')
('epoch:0>>99.48%', 'completed in 960.52 (sec) <<\r')
('epoch:0>>99.64%', 'completed in 961.96 (sec) <<\r')
('epoch:0>>99.81%', 'completed in 963.57 (sec) <<\r')
Train Epoch 0 loss 14.632959
Traceback (most recent call last):
File "train_dict.py", line 183, in
train()
File "train_dict.py", line 104, in train
loss, predict= model.dev_step(sess, input_x, y)
TypeError: dev_step() takes exactly 5 arguments (4 given)
Thanks.
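Not the author, but the message means the call site and the method signature disagree: in Python 2 the count includes `self`, so "takes exactly 5 arguments" means `dev_step` is defined with four parameters besides `self`, while `model.dev_step(sess, input_x, y)` passes only three. A toy sketch of the same mismatch (the extra parameter name `dicts` is purely illustrative):

```python
class Model(object):
    def dev_step(self, sess, input_x, y, dicts):
        # Hypothetical stand-in for the real method; it expects one more
        # positional argument than the training loop supplies.
        return 0.0, []

model = Model()

try:
    model.dev_step("sess", "x", "y")  # one argument short: TypeError
except TypeError as e:
    print(e)

# Supplying the missing argument matches the signature:
loss, predict = model.dev_step("sess", "x", "y", "dicts")
```

So the fix is either to pass the fourth argument at the call site in train_dict.py, or to drop the extra parameter from dev_step, depending on which side is out of date.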
There is no model at that link, I think.