Summaries of some NLP research papers
- These summaries were originally written for UCLA CS249 (Professor Cho, Spring 2019).
- Note: for reference only; plagiarism risks a penalty.
- Yoshua Bengio, et al.: A Neural Probabilistic Language Model, J. of Machine Learning Research, 2003.
- Tomas Mikolov, et al.: Distributed Representations of Words and Phrases and their Compositionality, NIPS 2013.
- Jeffrey Pennington, et al.: GloVe: Global Vectors for Word Representation, EMNLP 2014.
- Quoc V. Le and Tomas Mikolov: Distributed Representations of Sentences and Documents, ICML 2014.
- Joshua Goodman: A bit of progress in language modeling, MSR Technical Report, 2001.
- Yee Whye Teh: A Hierarchical Bayesian Language Model based on Pitman-Yor Processes, COLING/ACL 2006.
- Kamal Nigam, et al.: Text Classification from Labeled and Unlabeled Documents using EM, Machine Learning, 2000.
- Adam Berger, Stephen Della Pietra, Vincent Della Pietra: A Maximum Entropy Approach to Natural Language Processing, Computational Linguistics, 1996.
- Michael Collins: Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms, EMNLP 2002.
- John Lafferty, Andrew McCallum, Fernando C.N. Pereira: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, ICML 2001.
- Ryan McDonald, et al.: Non-Projective Dependency Parsing using Spanning Tree Algorithms, HLT/EMNLP 2005.
- Ronan Collobert et al.: Natural Language Processing (almost) from Scratch, J. of Machine Learning Research, 2011.
- Danqi Chen, Christopher D. Manning: A Fast and Accurate Dependency Parser using Neural Networks, EMNLP 2014.
- Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio: Neural Machine Translation by Jointly Learning to Align and Translate, ICLR 2015.