
simple_bert_for_tf2

A BERT layer for TF2.0 models. This project is mainly for understanding the structure of BERT and the Transformer layer, and it can handle some simple tasks. For serious use, it is recommended to add the Hugging Face BERT as a Keras layer to your own model, so that you can use the well-pretrained weights.

Introduction

BERT is built as a tf.keras Layer. I wanted to use BERT in practical tasks, and the BERT TensorFlow implementations I found were full of pitfalls, so I wrote a somewhat cleaner and simpler TF2.0 BERT. The BERT here subclasses keras.layers.Layer, so it can be conveniently added to a Keras model. If you just want to run a BERT model, it is better to use Hugging Face's Transformers directly; building it by hand is mostly about understanding BERT's internal structure (a minimal sketch of the Hugging Face route follows the Merits list).

Merits

1. BERT is written as a single layer, which is convenient whether you are pretraining or finetuning in a practical TF2.0 model.
2. Only TensorFlow 2.0 and NumPy are used as third-party packages, so you will definitely be able to run it.
3. The code is heavily commented (in Chinese), so every step can be followed.
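
For the Hugging Face route recommended above, a minimal sketch (the `bert-base-chinese` checkpoint, the sequence length of 128, and the two-class head are arbitrary choices for illustration):

```python
import tensorflow as tf
from transformers import TFBertModel

# Load pretrained weights; "bert-base-chinese" is just an example checkpoint.
bert = TFBertModel.from_pretrained("bert-base-chinese")

input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")

# The pretrained encoder is used as an ordinary Keras layer.
hidden = bert(input_ids, attention_mask=attention_mask).last_hidden_state
cls = hidden[:, 0]                        # the [CLS] representation
logits = tf.keras.layers.Dense(2)(cls)    # 2 classes, for illustration

model = tf.keras.Model([input_ids, attention_mask], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```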

Notes

1. Comments are written in Chinese.
2. Dropout is removed, following the ALBERT study.
3. Transformer block weights are shared across layers, following ALBERT. Not sharing performs a little better, but sharing makes the model much smaller, and inference time is the same either way (see the first sketch after this list).
4. The pretraining loss is made up of the MLM loss plus a loss on the remaining (unmasked) tokens, following the ELECTRA study, which found that also considering the unmasked tokens speeds up improvement (ELECTRA itself predicts whether each token was replaced). If this seems inappropriate, just delete the second term from the loss definition in pretrain.py (see the second sketch after this list).
5. No pretrained language model cooked on a large corpus with TPUs is provided here. Reading the parameters of a Google BERT pretrained model should also be possible, but in a real project you will most likely need to re-pretrain on the project's own corpus anyway.
6. vocab.txt has far fewer entries than the original one provided by Google. Most of them were never used, and deleting them greatly reduces the size of the token embedding weights. Replace the vocab file to suit your own project.
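
A minimal sketch of the ALBERT-style sharing in note 3, assuming a generic Keras layer stands in for the Transformer block defined in bert_parts/layers.py (the names here are illustrative, not the project's actual classes):

```python
import tensorflow as tf

class SharedEncoder(tf.keras.layers.Layer):
    """ALBERT-style encoder: one block's weights are reused for every layer."""

    def __init__(self, block, num_layers, **kwargs):
        super().__init__(**kwargs)
        self.block = block            # a single Transformer-block instance
        self.num_layers = num_layers

    def call(self, x):
        # Calling the same layer object N times runs N layers that all
        # share one set of weights: the compute (and inference time) is
        # the same as N distinct layers, but the stored model holds only
        # one block's parameters.
        for _ in range(self.num_layers):
            x = self.block(x)
        return x

# Usage with a stand-in block (a real Transformer block would go here):
encoder = SharedEncoder(tf.keras.layers.Dense(768, activation="relu"), num_layers=12)
```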
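
And a minimal sketch of the two-part pretraining loss in note 4. The tensor names (`token_ids`, `logits`, `masked`) are illustrative, not the actual variables in pretrain.py:

```python
import tensorflow as tf

def pretrain_loss(token_ids, logits, masked):
    """MLM loss on masked positions plus a loss on the unmasked positions.

    token_ids: (batch, seq_len) int32, the original tokens.
    logits:    (batch, seq_len, vocab_size) float32 predictions.
    masked:    (batch, seq_len) bool, True where a token was masked out.
    """
    per_token = tf.keras.losses.sparse_categorical_crossentropy(
        token_ids, logits, from_logits=True)          # (batch, seq_len)

    mask = tf.cast(masked, per_token.dtype)
    mlm_loss = tf.reduce_sum(per_token * mask) / (tf.reduce_sum(mask) + 1e-8)

    # The ELECTRA-inspired extra term: also penalize errors on the tokens
    # that were NOT masked. Delete this term for a plain MLM objective.
    unmask = 1.0 - mask
    unmasked_loss = tf.reduce_sum(per_token * unmask) / (tf.reduce_sum(unmask) + 1e-8)

    return mlm_loss + unmasked_loss
```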

Files

|--bert_parts
|    |--layers.py       BERT layer built on tf.keras; BERT's sub-components are also written as layers in this file
|    |--tokenizer.py    tokenizer for Chinese text
|    |--vocab.txt       vocab file, used to convert characters to token ids
|--datasource.py        generates the data
|--finetune.py          an example of finetuning
|--pretrain.py          an example of pretraining

Instructions

Pretrain:
python pretrain.py

Finetune:
python finetune.py

Test bert_parts:
python bert_parts/layers.py

Test the tokenizer (for Chinese text):
python bert_parts/tokenizer.py
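
For a rough idea of what the vocab-based tokenization step does, here is a hypothetical character-level tokenizer; it is not the actual API of bert_parts/tokenizer.py, only an illustration of the vocab.txt flow:

```python
# Hypothetical character-level tokenizer, NOT the project's real API.
def load_vocab(path="bert_parts/vocab.txt"):
    # Each line of vocab.txt is one entry; its row index is the token id.
    with open(path, encoding="utf-8") as f:
        return {line.rstrip("\n"): i for i, line in enumerate(f)}

def tokenize(text, vocab, unk="[UNK]"):
    # Chinese BERT tokenization is essentially character-level:
    # map each character to its row index, falling back to [UNK].
    return [vocab.get(ch, vocab.get(unk, 0)) for ch in text]

vocab = load_vocab()
print(tokenize("简单的例子", vocab))
```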
