Topic: pre-trained-language-models (Goto Github)
Something interesting about pre-trained-language-models
pre-trained-language-models,PLM-based Korean named entity recognition (NER)
Organization: ai2-ner-project
pre-trained-language-models,A PyTorch-based model pruning toolkit for pre-trained language models
User: airaria
Home Page: https://textpruner.readthedocs.io
pre-trained-language-models,The official GitHub page for the survey paper "A Survey on Large Language Models: Applications, Challenges, Limitations, and Practical Usage".
User: anas-zafar
Home Page: https://www.techrxiv.org/doi/full/10.36227/techrxiv.23589741.v1
pre-trained-language-models,Code for Findings of EMNLP 2022 short paper "CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model".
User: andychiangsh
Home Page: https://cdgp-demo.nlpnchu.org/
pre-trained-language-models,RoBERTa Chinese pre-trained models: RoBERTa for Chinese
User: brightmart
pre-trained-language-models,A curated list of NLP resources focused on Transformer networks, attention mechanism, GPT, BERT, ChatGPT, LLMs, and transfer learning.
User: cedrickchee
pre-trained-language-models,Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization (ACL 2021)
User: cliang1453
pre-trained-language-models,Top2Vec learns jointly embedded topic, document and word vectors.
User: ddangelov
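
The Top2Vec entry above learns a joint embedding space and then clusters documents into topics. A minimal usage sketch in Python, following the package's documented constructor (the toy corpus is only a placeholder; a real run needs hundreds of documents for the clustering step to succeed):

    # Minimal Top2Vec usage sketch; the corpus below is an illustrative placeholder.
    from top2vec import Top2Vec

    docs = [
        "Pre-trained language models improve named entity recognition.",
        "Knowledge distillation compresses large transformer models.",
        "Topic modeling discovers latent themes in document collections.",
        # ...hundreds more documents are needed for meaningful topics
    ]

    model = Top2Vec(documents=docs, speed="learn", workers=4)

    # Each topic is a dense cluster in the shared embedding space,
    # described by its nearest word vectors.
    print(model.get_num_topics())
    topic_words, word_scores, topic_nums = model.get_topics()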
pre-trained-language-models,GigaBERT (EMNLP 2020): Arabic relation extraction, named entity recognition, and information extraction (IE)
User: edchengg
pre-trained-language-models,We tackle a company-name recognition task starting from small-scale, low-quality training data, then apply several techniques to speed up model training and improve prediction performance with minimal manual effort. The methods include lightweight pre-trained models such as ALBERT-small and ELECTRA-small trained on a financial corpus, knowledge distillation, and multi-stage learning. As a result, recall on company-name recognition improves from 0.73 to 0.92, and the model runs about four times faster than a BERT-BiLSTM-CRF baseline (a distillation sketch follows this entry).
User: hanlard
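
The entry above leans on knowledge distillation to shrink and speed up the NER model. As a generic illustration of that technique (not this repository's code), a standard distillation loss in PyTorch blends a softened teacher-student KL term with the usual hard-label cross-entropy:

    # Generic knowledge-distillation loss; illustrative, not this repo's code.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soften both distributions with a temperature > 1.
        soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        # The T^2 factor keeps soft-loss gradients on the same scale
        # as the hard-label loss.
        soft_loss = F.kl_div(soft_student, soft_teacher,
                             reduction="batchmean") * temperature ** 2
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1 - alpha) * hard_loss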
pre-trained-language-models,The official repo for the EACL 2023 paper "Quantifying Context Mixing in Transformers"
User: hmohebbi
pre-trained-language-models,
User: kelleyyin
pre-trained-language-models,A Survey on Automatic Generation of Figurative Language: From Rule-based Systems to Large Language Models (ACM Computing Surveys)
User: laihuiyuan
pre-trained-language-models,Code for CascadeBERT, Findings of EMNLP 2021
Organization: lancopku
pre-trained-language-models,Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models"
Organization: lancopku
pre-trained-language-models,Zero-shot Transfer Learning from English to Arabic
User: lanwuwei
pre-trained-language-models,The Paper List on Data Contamination for Large Language Models Evaluation.
User: lyy1994
pre-trained-language-models,SLS: a neural information retrieval (IR)-based semantic search model
User: navy10021
pre-trained-language-models,LingLong (玲珑): a small-scale Chinese pretrained language model
Organization: nkcsiclab
pre-trained-language-models,[ACL'23] Open KG Completion with PLM (Bridging Text Mining and Prompt Engineering)
User: pat-jj
pre-trained-language-models,The official GitHub page for the survey paper "A Survey of Large Language Models".
Organization: rucaibox
Home Page: https://arxiv.org/abs/2303.18223
pre-trained-language-models,Code and materials used for evaluating PLMs on dialogue response dynamics
User: sangheek16
pre-trained-language-models,The code of our paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model"
User: sunyilgdx
pre-trained-language-models,Keyphrase or keyword extraction: a Chinese keyword-extraction method based on pre-trained models (Chinese-language code for the paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model")
User: sunyilgdx
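
SIFRank scores candidate phrases by how close their pre-trained-LM embeddings are to the document embedding. A simplified sketch of that core idea (not SIFRank's actual pipeline, which adds SIF weighting and other refinements), assuming embeddings have already been computed:

    # Simplified embedding-based keyphrase ranking; SIFRank's real pipeline
    # adds SIF weighting and further refinements.
    import numpy as np

    def rank_candidates(doc_embedding, candidate_embeddings, top_k=5):
        """Rank candidate phrases by cosine similarity to the document."""
        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        scored = [(phrase, cosine(doc_embedding, emb))
                  for phrase, emb in candidate_embeddings.items()]
        return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]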
pre-trained-language-models,A novel method to tune language models. Code and datasets for the paper "GPT understands, too".
Organization: thudm
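
The "GPT understands, too" paper (P-tuning) optimizes continuous prompt embeddings instead of discrete prompt words. A stripped-down sketch of the idea, assuming a HuggingFace-style backbone; the actual method additionally passes the prompt vectors through a small LSTM/MLP prompt encoder, omitted here:

    # Stripped-down continuous-prompt sketch (the paper also uses an
    # LSTM/MLP prompt encoder, omitted here for brevity).
    import torch
    import torch.nn as nn

    class SoftPrompt(nn.Module):
        def __init__(self, backbone, n_prompt_tokens=20):
            super().__init__()
            self.backbone = backbone              # HuggingFace-style model
            for p in self.backbone.parameters():
                p.requires_grad = False           # freeze pre-trained weights
            dim = backbone.config.hidden_size
            self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)

        def forward(self, input_ids, attention_mask):
            tok_emb = self.backbone.get_input_embeddings()(input_ids)
            batch = tok_emb.size(0)
            prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
            mask = torch.ones(batch, prompt.size(1),
                              dtype=attention_mask.dtype,
                              device=attention_mask.device)
            return self.backbone(
                inputs_embeds=torch.cat([prompt, tok_emb], dim=1),
                attention_mask=torch.cat([mask, attention_mask], dim=1))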
pre-trained-language-models,An Open-Source Framework for Prompt-Learning.
Organization: thunlp
Home Page: https://thunlp.github.io/OpenPrompt/
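
A short classification sketch following OpenPrompt's documented quickstart; the template text and label words below are illustrative choices, not fixed API values:

    # Sketch following OpenPrompt's documented quickstart.
    from openprompt.plms import load_plm
    from openprompt.prompts import ManualTemplate, ManualVerbalizer
    from openprompt import PromptForClassification

    plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

    # Template: where the input text and the predicted mask token go.
    template = ManualTemplate(
        text='{"placeholder":"text_a"} It was {"mask"}.',
        tokenizer=tokenizer)

    # Verbalizer: maps each class to words in the PLM's vocabulary.
    verbalizer = ManualVerbalizer(
        tokenizer=tokenizer, num_classes=2,
        label_words=[["terrible"], ["great"]])

    model = PromptForClassification(plm=plm, template=template,
                                    verbalizer=verbalizer)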
pre-trained-language-models,Must-read papers on prompt-based tuning for pre-trained language models.
Organization: thunlp
pre-trained-language-models,Butterfly: an open NLP research project for Chinese danmaku (bullet comments), building an NLP community around Bilibili danmaku
Organization: tinytalks
pre-trained-language-models,Must-read papers on improving efficiency for pre-trained language models.
User: tobiaslee
pre-trained-language-models,Awesome papers on Language-Model-as-a-Service (LMaaS)
User: txsun1997
pre-trained-language-models,VaLM: Visually-augmented Language Modeling. ICLR 2023.
User: victorwz
Home Page: https://openreview.net/forum?id=8IN-qLkl215
pre-trained-language-models,📔 Usage notes and core-code annotations for Chinese-LLaMA-Alpaca
User: wangrongsheng
pre-trained-language-models,Learning notes on prompting.
User: webgao
pre-trained-language-models,A repository listing important datasets for multimodal recommender systems
Organization: westlake-repl
pre-trained-language-models,HugNLP is a unified and comprehensive NLP library built on HuggingFace Transformers. Happy hugging for NLP! 😊 HugNLP will be released under @HugAILab
User: wjn1996
Home Page: https://wjn1996.github.io/blogs/HugNLP/
pre-trained-language-models,Calculating FLOPs of Pre-trained Models in NLP
User: xingluxi
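
For context on what such a tool computes, a back-of-the-envelope estimate (a standard approximation, not this repository's exact accounting) for one Transformer encoder layer with hidden size d and sequence length n is roughly 24nd^2 + 4n^2d FLOPs:

    # Back-of-the-envelope FLOPs for one Transformer encoder layer,
    # counting each multiply-accumulate as 2 FLOPs.
    def transformer_layer_flops(seq_len, hidden, ffn_mult=4):
        qkv_and_out = 8 * seq_len * hidden ** 2           # 4 d-by-d projections
        attention = 4 * seq_len ** 2 * hidden             # QK^T scores + weighted sum of V
        ffn = 4 * seq_len * hidden ** 2 * ffn_mult        # two d <-> ffn_mult*d matmuls
        return qkv_and_out + attention + ffn

    # Example: a BERT-base-like layer (d=768) on a 512-token sequence.
    print(f"{transformer_layer_flops(512, 768):,} FLOPs per layer")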
pre-trained-language-models,SIGIR'22 paper: Axiomatically Regularized Pre-training for Ad hoc Search
User: xuanyuan14
pre-trained-language-models,Source code for ACL 2023 Findings paper "Making Pre-trained Language Models both Task-solvers and Self-calibrators"
User: yangyi-chen
pre-trained-language-models,Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
User: ymcui
Home Page: https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
pre-trained-language-models,A Curated List of Language Models in Scientific Domains
User: yuzhimanhua
pre-trained-language-models,Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding (Findings of EMNLP'23)
User: yuzhimanhua
pre-trained-language-models,Seed-Guided Topic Discovery with Out-of-Vocabulary Seeds (NAACL'22)
User: yuzhimanhua
pre-trained-language-models,ChatCell: Facilitating Single-Cell Analysis with Natural Language
Organization: zjunlp
pre-trained-language-models,[ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
Organization: zjunlp
pre-trained-language-models,[EMNLP 2023] Knowledge Rumination for Pre-trained Language Models
Organization: zjunlp
pre-trained-language-models,Must-read Papers on Knowledge Editing for Large Language Models.
Organization: zjunlp
pre-trained-language-models,An Open-sourced Knowledgeable Large Language Model Framework.
Organization: zjunlp
Home Page: http://knowlm.zjukg.cn/
pre-trained-language-models,[ICLR 2023] Multimodal Analogical Reasoning over Knowledge Graphs
Organization: zjunlp
Home Page: https://zjunlp.github.io/project/MKG_Analogy/
pre-trained-language-models,[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback
Organization: zjunlp
Home Page: https://huggingface.co/spaces/zjunlp/MolGen
pre-trained-language-models,[CCL 2023] Revisiting k-NN for Fine-tuning Pre-trained Language Models
Organization: zjunlp
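
The CCL 2023 entry above classifies by retrieving neighbors in the PLM's representation space rather than relying only on a fine-tuned head. A generic sketch of that idea with scikit-learn over precomputed sentence embeddings (file names are hypothetical placeholders; the paper's exact method differs):

    # Generic k-NN over PLM features; file names are hypothetical placeholders.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Sentence embeddings from a PLM, e.g. mean-pooled last hidden states,
    # with shape (num_examples, hidden_dim).
    train_emb = np.load("train_embeddings.npy")
    train_labels = np.load("train_labels.npy")
    test_emb = np.load("test_embeddings.npy")

    knn = KNeighborsClassifier(n_neighbors=8, metric="cosine")
    knn.fit(train_emb, train_labels)
    predictions = knn.predict(test_emb)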