segmenter's People

Contributors

jowagner, yanshao9798

segmenter's Issues

Please set a license

Could you please add a license to your code? Without a license, no one except you has the right to run, modify or use the code; even downloading it could be questionable.

AssertionError assert len(raw) == len(sents) with 2018 shared task raw text

I've run segmenter.py train successfully with just the .conllu files in the workspace, but when I include the raw text from the 2018 shared task as raw_train.txt and raw_dev.txt, I get

Traceback (most recent call last):
  File "segmenter.py", line 155, in <module>
    reset=args.reset, tag_scheme=args.tags, ignore_mwt=args.ignore_mwt)
  File "/.../ud-parsing-2018/uusegmenter/toolbox.py", line 905, in raw2tags
    assert len(raw) == len(sents)
AssertionError

(Line numbers may be slightly off as I added some comments here and there.)

It seems that you assume the raw text has one sentence per line, but the shared task raw text does not use line breaks in this way. Did you not use the raw text during training?
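
For what it's worth, here is a rough sketch of the workaround I have in mind (file names are placeholders, and whether this is the format raw2tags expects is only my assumption): regenerate the raw files with one sentence per line from the "# text = ..." comments of the .conllu files.

    # Sketch: write one sentence per line, taken from the "# text = ..."
    # comments of a CoNLL-U file. Both file names are placeholders.
    with open('treebank-ud-dev.conllu', encoding='utf-8') as conllu, \
         open('raw_dev.txt', 'w', encoding='utf-8') as raw:
        for line in conllu:
            if line.startswith('# text ='):
                raw.write(line.split('=', 1)[1].strip() + '\n')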

Do you use sentences as training instances? If so, wouldn't the CRF never see the context to the right of sentence boundaries (in English, for example, the capitalisation of the next letter is a strong cue), and wouldn't it, in the worst case, simply learn to check whether it is at the end of the sequence when assigning T or U?

Weird behavior for multiword tokens

I'm getting some weird output when segmenting multiword tokens in some languages.

For example, in the Arabic-PADT dev set, the first sentence is tokenized as #sent_tok: ميراث ب 300 الف دولار يقلب حياة متشرد اميركي لونغ بيتش ( الولايات المتحدة ) 15 - 7 ( اف ب ) - كل شيء تغير في حياة المتشرد ستيفن كنت عندما عثرت علي ه \\\كككككككك%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% بعد عناء طويل ل تبلغ ه ب أن ه ورث 300 الف دولار و ب أن ه بات قادرا على وضع حد ل عشرين سنة من حياة التشرد في شوارع مدينة لونغ بيتش في ولاية كاليفورنيا .

It includes a multiword token that spans only a single word (?):

36-36   شقيقته  _       _       _       _       _       _       _       _
36      \\\كككككككك%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%     _       _       _       _       _       _       _       _

Similarly, in the Hebrew dev set, I have #sent_tok: נמיר הודיעה כי תפנה ל שרי ה פנים ו ה עבודה ו ה רווחה ו ל מזכיר תנועת ה מושבים , ב תביעה לבטל את ))))))ווווווווווווווווווווווו של 500 עובדים זרים מתאילנד כ מתנדבים כ ביכול .

This also has a multiword token spanning only a single word:

26-26   הזמנתם  _       _       _       _       _       _       _       _
26      ))))))ווווווווווווווווווווווו   _       _       _       _       _       _       _       _

I trained the model with the default options as given in the README. I got a tokenization F1 similar to that reported in the shared task, so I suppose the system is mostly working correctly.
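
To make this easier to reproduce, here is a rough check (a sketch of my own, not part of the repository; the file name is a placeholder) that scans a predicted .conllu file for multiword-token ranges covering only a single word, such as the 36-36 and 26-26 lines above:

    # Count multiword-token lines whose ID range covers a single word,
    # e.g. "36-36". The input path is a placeholder.
    count = 0
    with open('predicted_dev.conllu', encoding='utf-8') as f:
        for line in f:
            if line.startswith('#'):
                continue
            cols = line.rstrip('\n').split('\t')
            if '-' in cols[0]:
                start, _, end = cols[0].partition('-')
                if start.isdigit() and end.isdigit() and start == end:
                    count += 1
                    print(line.rstrip())
    print('single-word multiword tokens:', count)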

What is the transducer-related part used for?

The data format I am using is (space-delimited):
迈向 充满 希望 的 新 世纪 —— 一九九八年 新年 讲话 ( 附 图片 1 张 )
The generated dict.txt is always empty (from reading the code, I guess this is related to the transducer). The transducer seems to have something to do with translation, but even when I add mixed Chinese-English sentences to the data, the transducer code path is never triggered. So under what circumstances are the transducer and dict.txt actually used? I am only looking into the Chinese-related parts. Thanks.

Option --rnn_layer_number 2 gives dimensions / input shapes error

Seeing --rnn_layer_number in the list of options, I gave it a go with 2 instead of the default 1 and got the following error (a large number of TensorFlow traceback lines removed):

Traceback (most recent call last):
  File "segmenter.py", line 261, in <module>
    rnn_num=args.rnn_layer_number, drop_out=args.dropout_rate, emb=emb)
  File "/[...]/model.py", line 130, in main_graph
    scope='BiRNN')(emb_out, input_v)
  File "/[...]/layers.py", line 236, in __call__
    scope=self.scope)
[...]
ValueError: Dimensions must be equal, but are 400 and 250 for 'tagger/BiRNN_1/fw/fw/while/fw/multi_rnn_cell/cell_0/gru_cell/MatMul_2' (op: 'MatMul') with input shapes: [?,400], [250,400].

If this option is currently unsupported, it might be better to comment out the respective argparse line.

Question about the gram_vec function in toolbox

        ngram = 0
        for k in dic.keys():
            if '<PAD>' not in k:
                ngram = len(k)
                break

In the code above, the first three keys of dic are <P>, <UNK> and <#>, so ngram always ends up as len('<P>') = 3. I guess the original intention was to determine n from the characters in the ngram file. Should the condition therefore be changed to this?

if '<PAD>' not in k and k not in ['<P>', '<UNK>', '<#>']:
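
A small sketch of the difference (the dictionary contents here are made up; I assume the special keys come first in iteration order and that the real entries are bigrams):

    # Toy dictionary: special keys first, then (here) bigram entries.
    dic = {'<P>': 0, '<UNK>': 1, '<#>': 2, '希望': 3, '新年': 4}

    # Original condition: stops at '<P>' straight away.
    ngram = 0
    for k in dic.keys():
        if '<PAD>' not in k:
            ngram = len(k)          # len('<P>') == 3
            break
    print(ngram)                    # -> 3, even though the entries are bigrams

    # Proposed condition: skips the special keys.
    ngram = 0
    for k in dic.keys():
        if '<PAD>' not in k and k not in ['<P>', '<UNK>', '<#>']:
            ngram = len(k)          # len('希望') == 2
            break
    print(ngram)                    # -> 2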
