yizhu-millie / nlp-journey

This project is a fork of msgi/nlp-journey.


Documents, papers, and code related to NLP, covering topic models, word embeddings, named entity recognition (NER), text classification, text generation, text similarity, machine translation, and more. A range of NLP-related algorithms, implemented on TensorFlow 2.0.

Home Page: https://github.com/msgi/nlp-journey

License: Apache License 2.0

Python 100.00%

nlp-journey's Introduction

nlp journey


Fully embracing TensorFlow 2.0: all of the code has been migrated to TensorFlow 2.0. Implementation code

I. Fundamentals

II. Classic Books (Baidu Cloud, extraction code: txqx)

  1. Introduction to probabilistic graphical models. Original book link
  2. Deep Learning. Essential deep learning reading. Original book link
  3. Neural Networks and Deep Learning. Essential introductory reading. Original book link
  4. Speech and Language Processing (Stanford, 3rd edition): essential NLP reading. Original book link

III. Must-Read Papers

01) Must-read NLP papers

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Link
  2. GPT: Improving Language Understanding by Generative Pre-Training. Link
  3. GPT-2: Language Models are Unsupervised Multitask Learners. Link
  4. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. Link
  5. XLNet: Generalized Autoregressive Pretraining for Language Understanding. Link
  6. XLM: Cross-lingual Language Model Pretraining. Link
  7. RoBERTa: A Robustly Optimized BERT Pretraining Approach. Link
  8. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. Link
  9. CTRL: A Conditional Transformer Language Model for Controllable Generation. Link
  10. CamemBERT: a Tasty French Language Model. Link
  11. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. Link
  12. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Link
  13. XLM-RoBERTa: Unsupervised Cross-lingual Representation Learning at Scale. Link
  14. MMBT: Supervised Multimodal Bitransformers for Classifying Images and Text. Link
  15. FlauBERT: Unsupervised Language Model Pre-training for French. Link

02) Models and optimization

  1. LSTM (Long Short-Term Memory). Link
  2. Sequence to Sequence Learning with Neural Networks. Link
  3. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. Link
  4. Residual Network (Deep Residual Learning for Image Recognition). Link
  5. Dropout (Improving neural networks by preventing co-adaptation of feature detectors). Link
  6. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Link (a minimal sketch of papers 5 and 6 follows this list)
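
Papers 5 and 6 above are easiest to place in context with a concrete layer stack. Below is a minimal, purely illustrative tf.keras sketch (not code from this repository; the layer sizes are arbitrary placeholders) showing where dropout and batch normalization typically sit in a network.

```python
import tensorflow as tf

# Illustrative only: a small classifier that uses the two regularization
# techniques from papers 5 and 6 above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
    tf.keras.layers.BatchNormalization(),  # Batch Normalization (paper 6)
    tf.keras.layers.Dropout(0.5),          # Dropout (paper 5)
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```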

03) Survey papers

  1. An overview of gradient descent optimization algorithms. Link
  2. Analysis Methods in Neural Language Processing: A Survey. Link
  3. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Link
  4. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. Link
  5. A Gentle Introduction to Deep Learning for Graphs. Link
  6. A Survey on Deep Learning for Named Entity Recognition. Link
  7. More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction. Link
  8. Deep Learning Based Text Classification: A Comprehensive Review. Link
  9. Pre-trained Models for Natural Language Processing: A Survey. Link
  10. A Survey on Contextual Embeddings. Link
  11. A Survey on Knowledge Graphs: Representation, Acquisition and Applications. Link
  12. Knowledge Graphs. Link

04) Text pre-training

  1. A Neural Probabilistic Language Model. Link
  2. word2vec Parameter Learning Explained. Link
  3. Language Models are Unsupervised Multitask Learners. Link
  4. An Empirical Study of Smoothing Techniques for Language Modeling. Link
  5. Efficient Estimation of Word Representations in Vector Space. Link (a minimal skip-gram sketch follows this list)
  6. Distributed Representations of Sentences and Documents. Link
  7. Enriching Word Vectors with Subword Information (FastText). Link. Commentary
  8. GloVe: Global Vectors for Word Representation. Official site
  9. ELMo (Deep contextualized word representations). Link
  10. Pre-Training with Whole Word Masking for Chinese BERT. Link
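
Papers 2 and 5 above describe the skip-gram view of word2vec: word vectors are learned by scoring (target, context) pairs. The sketch below is a minimal, illustrative tf.keras version of that idea, not code from this repository; the vocabulary size and embedding dimension are placeholders.

```python
import tensorflow as tf

# Skip-gram-style sketch (illustrative only): score a (target, context) word
# pair by the dot product of their embeddings and train the model to separate
# observed pairs (label 1) from negatively sampled pairs (label 0).
vocab_size, embed_dim = 10_000, 128

target_in = tf.keras.Input(shape=(1,), dtype="int32", name="target")
context_in = tf.keras.Input(shape=(1,), dtype="int32", name="context")

embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
target_vec = tf.keras.layers.Flatten()(embedding(target_in))    # (batch, embed_dim)
context_vec = tf.keras.layers.Flatten()(embedding(context_in))  # (batch, embed_dim)

# Dot product as the pair score, sigmoid as the probability of co-occurrence.
score = tf.keras.layers.Dot(axes=1)([target_vec, context_vec])  # (batch, 1)
prob = tf.keras.layers.Activation("sigmoid")(score)

model = tf.keras.Model([target_in, context_in], prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
# Training pairs and 0/1 labels can be generated from integer-encoded text,
# e.g. with tf.keras.preprocessing.sequence.skipgrams.
```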

05) Text classification

  1. Bag of Tricks for Efficient Text Classification (FastText). Link
  2. Convolutional Neural Networks for Sentence Classification. Link (a minimal TextCNN sketch follows this list)
  3. Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. Link
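
Paper 2 above is a common baseline for this task: convolutions of several widths act as n-gram detectors over word embeddings. The sketch below is a minimal, illustrative tf.keras version of that idea, not code from this repository; the vocabulary size, sequence length, and class count are placeholders.

```python
import tensorflow as tf

# Minimal TextCNN sketch (illustrative only).
vocab_size, max_len, num_classes = 20_000, 200, 2

inputs = tf.keras.Input(shape=(max_len,), dtype="int32")
x = tf.keras.layers.Embedding(vocab_size, 128)(inputs)

# Convolutions with several kernel sizes act as n-gram detectors;
# global max pooling keeps the strongest response per filter.
pooled = []
for kernel_size in (3, 4, 5):
    c = tf.keras.layers.Conv1D(100, kernel_size, activation="relu")(x)
    pooled.append(tf.keras.layers.GlobalMaxPooling1D()(c))

x = tf.keras.layers.Concatenate()(pooled)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```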

06) Text generation

  1. A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation. Link
  2. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. Link

07) Text similarity

  1. Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. Link
  2. Learning Text Similarity with Siamese Recurrent Networks. Link (a minimal Siamese sketch follows this list)
  3. A Deep Architecture for Matching Short Texts. Link
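
Paper 2 above trains a Siamese recurrent network so that similar texts map to nearby vectors. The sketch below is a minimal, illustrative tf.keras variant of that setup (a shared BiLSTM encoder with a cosine-similarity head), not code from this repository; all sizes are placeholders.

```python
import tensorflow as tf

# Minimal Siamese BiLSTM sketch for text similarity (illustrative only).
vocab_size, max_len = 20_000, 50

# Shared encoder: both sentences pass through the same weights.
encoder = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
])

left = tf.keras.Input(shape=(max_len,), dtype="int32")
right = tf.keras.Input(shape=(max_len,), dtype="int32")
left_vec, right_vec = encoder(left), encoder(right)

# Cosine similarity between the two sentence vectors, rescaled to [0, 1]
# so it can be trained against binary similar/dissimilar labels.
cosine = tf.keras.layers.Dot(axes=1, normalize=True)([left_vec, right_vec])
score = tf.keras.layers.Lambda(lambda s: (s + 1.0) / 2.0)(cosine)

model = tf.keras.Model([left, right], score)
model.compile(optimizer="adam", loss="binary_crossentropy")
```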

08) Question answering

  1. A Question-Focused Multi-Factor Attention Network for Question Answering. Link
  2. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. Link
  3. A Knowledge-Grounded Neural Conversation Model. Link
  4. Neural Generative Question Answering. Link
  5. Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. Link
  6. Modeling Multi-turn Conversation with Deep Utterance Aggregation. Link
  7. Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. Link
  8. Deep Reinforcement Learning For Modeling Chit-Chat Dialog With Discrete Attributes. Link

09) Machine translation

  1. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. Link
  2. Neural Machine Translation by Jointly Learning to Align and Translate. Link
  3. Transformer (Attention Is All You Need). Link (a scaled dot-product attention sketch follows this list)
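
Paper 3 above builds the Transformer entirely from attention; its core operation is scaled dot-product attention. The function below is a minimal, illustrative TensorFlow implementation of that single equation, not the repository's Transformer code.

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v):
    """q, k, v: tensors of shape (batch, seq_len, depth)."""
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    # Similarity of every query with every key, scaled by sqrt(d_k).
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(d_k)  # (batch, len_q, len_k)
    weights = tf.nn.softmax(scores, axis=-1)                   # attention distribution
    # Weighted sum of the values.
    return tf.matmul(weights, v), weights

# Example: self-attention over a random batch of 8 sequences of length 10.
x = tf.random.normal((8, 10, 64))
context, attn = scaled_dot_product_attention(x, x, x)
```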

10) Text summarization

  1. Get To The Point: Summarization with Pointer-Generator Networks. Link
  2. Deep Recurrent Generative Decoder for Abstractive Text Summarization. Link

11) Relation extraction

  1. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. Link
  2. Neural Relation Extraction with Multi-lingual Attention. Link
  3. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. Link
  4. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. Link

IV. Must-Read Blog Posts

  1. Applying for a machine learning engineer position? Here are 12 basic interview questions you need to know. Link
  2. How to learn natural language processing (comprehensive edition). Link
  3. The Illustrated Transformer. Link
  4. Attention-based model. Link
  5. Modern Deep Learning Techniques Applied to Natural Language Processing. Link
  6. BERT explained. Link
  7. Unbelievable! LSTM and GRU explained more clearly than ever (animated diagrams + video). Link
  8. Optimization methods in deep learning. Link
  9. From language models to Seq2Seq: in the Transformer, everything hinges on the mask. Link
  10. Applying word2vec to Recommenders and Advertising. Link
  11. The complete 2019 NLP guide: a full roundup of papers, blogs, tutorials, and engineering progress. Link

V. Related Excellent GitHub Projects

A tutorial

VI. Related Excellent Blogs
