
THUDM / P-tuning


A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

License: MIT License

Python 98.25% Shell 1.75%
natural-language-processing pre-trained-language-models prompt-tuning p-tuning parameter-efficient-learning few-shot-learning

p-tuning's Introduction

P-tuning

❗ News

🌟 [2022-10-06] Thrilled to present GLM-130B: An Open Bilingual Pre-trained Model, an open-source LLM that outperforms GPT-3 175B on various benchmarks. Get the model weights and run inference and P-Tuning with only 4 × RTX 3090 or 8 × RTX 2080 Ti GPUs, for free!

🌟 [2022-07-14] Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers is out! Check our code.

🌟 [2021-10-15] P-tuning v2 is out! Check our Github repo.

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

Xiao Liu*, Yanan Zheng*, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang

You may also be interested in our other work GLM: All NLP Tasks Are Generation Tasks: A General Pretraining Framework.

How to use our code

We have released the code and datasets for the LAMA and few-shot SuperGLUE (32-dev) experiments. Please check README.md and requirements.txt in the corresponding subdirectories for details.

The LAMA and FewGLUE_32dev datasets are available. The LAMA dataset should be placed in the ./data directory, and the SuperGLUE dataset should be placed in the ./ (project root) directory, as sketched below.
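A minimal sanity check of this layout; the directory names are assumptions based on the dataset names above, not taken from the repository scripts, so adjust them to how you unpack the archives:

    # Hedged sanity check of the expected data layout; directory names are assumed.
    import os

    assert os.path.isdir("./data"), "LAMA data is expected in ./data"
    assert os.path.isdir("./FewGLUE_32dev"), "few-shot SuperGLUE data is expected in the project root"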

Citation

If you find our work useful, please cite the following paper:

    @article{liu2021gpt,
      title={GPT Understands, Too},
      author={Liu, Xiao and Zheng, Yanan and Du, Zhengxiao and Ding, Ming and Qian, Yujie and Yang, Zhilin and Tang, Jie},
      journal={arXiv:2103.10385},
      year={2021}
    }

p-tuning's People

Contributors

xiao9905, zheng-yanan


p-tuning's Issues

I have a question about the prompt encoder's inputs.

Hi! Thank you for the great project.
I read the paper "GPT Understands, Too". My understanding is that, given an input such as "The capital of Britain is [MASK]", only the context word "Britain" and the target [MASK] are embedded by the pre-trained embedding layer, while the rest of the tokens are embedded by the prompt encoder.
=> h(The), h(capital), h(of), e(Britain), h(is), e([MASK]) (h means prompt-encoder, e means pre-trained embedding layer).

But when I checked the code, the input of the prompt-encoder was always the same as [0, 1, 2, 3, 4, 5].
Can anyone explain what I misunderstood?

Thank you.
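For reference, a minimal sketch of a prompt encoder that explains the constant [0, 1, 2, ...] input; this is an illustrative reimplementation, not the repository's exact module:

    import torch
    import torch.nn as nn

    # Minimal sketch (not the repository's exact code): the integers [0, 1, 2, ...]
    # are indices of learnable pseudo tokens, not word ids. Real context words such
    # as "Britain" and [MASK] still go through the pre-trained embedding layer e(.);
    # h(.) below only produces the continuous prompt vectors spliced into those slots.
    class PromptEncoder(nn.Module):
        def __init__(self, prompt_len: int, hidden_size: int):
            super().__init__()
            self.seq_indices = torch.arange(prompt_len)              # always 0..prompt_len-1
            self.embedding = nn.Embedding(prompt_len, hidden_size)
            self.lstm = nn.LSTM(hidden_size, hidden_size // 2, num_layers=2,
                                bidirectional=True, batch_first=True)
            self.mlp = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU(),
                                     nn.Linear(hidden_size, hidden_size))

        def forward(self) -> torch.Tensor:
            x = self.embedding(self.seq_indices).unsqueeze(0)        # 1 x prompt_len x hidden
            h, _ = self.lstm(x)
            return self.mlp(h).squeeze(0)                            # prompt_len x hidden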

Bug in loss recording for LAMA

The loss variable recorded for the LAMA task isn't the average over the dataset.

Since loss is reassigned inside the if-else block on every iteration, the value recorded on this line ends up being double the last batch's loss rather than an average over the dataset.

Pasting the relevant block here for reference

for x_hs, x_ts in loader:
    if False and self.args.extend_data:
        loss, _hit1 = self.model.test_extend_data(x_hs, x_ts)
    elif evaluate_type == 'Test':
        loss, _hit1, top10 = self.model(x_hs, x_ts, return_candidates=True)
    else:
        loss, _hit1 = self.model(x_hs, x_ts)
    hit1 += _hit1
    loss += loss.item()
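A hedged sketch of a fix, written as a standalone helper where model, loader, and evaluate_type keep the meanings of the quoted snippet:

    # Hedged fix sketch: accumulate into a separate variable so the recorded value
    # is the mean over the dataset, not twice the last batch's loss.
    def evaluate(model, loader, evaluate_type):
        total_loss, hit1 = 0.0, 0
        for x_hs, x_ts in loader:
            if evaluate_type == 'Test':
                loss, _hit1, top10 = model(x_hs, x_ts, return_candidates=True)
            else:
                loss, _hit1 = model(x_hs, x_ts)
            hit1 += _hit1
            total_loss += loss.item()
        return total_loss / len(loader), hit1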

Some questions about P-tuning

1. Hi, I'd like to ask: in P-tuning, how is the position of [MASK] among the [unused] tokens determined? Is it chosen manually? If not, how is it decided?
2. The paper says that anchor words are used when the amount of data is small, e.g. when predicting "the capital of Britain", adding a [capital] token among the [unused] tokens works better. How is the position where this [capital] anchor is inserted determined?

Doesn't anyone see a problem with the prompt construction code for BERT-style transformers?

if 'gpt' not in self.args.model_name and 'megatron' not in self.args.model_name:
    # BERT-style model (sentence construction: [CLS] at the start, [SEP] at the end)
    return [[self.tokenizer.cls_token_id]  # [CLS]
            + prompt_tokens * self.template[0]
            + [self.tokenizer.mask_token_id]  # head entity
            + prompt_tokens * self.template[1]
            + self.tokenizer.convert_tokens_to_ids(self.tokenizer.tokenize(' ' + x_h))  # [MASK] (tail entity)
            + (prompt_tokens * self.template[2] if self.template[
                                                       2] > 0 else self.tokenizer.convert_tokens_to_ids(['.']))
            + [self.tokenizer.sep_token_id]
            ]

I'm not sure whether my understanding is correct. According to the paper, aren't the mask_token_id (commented as "head entity") and the encoded ids of x_h (commented as "[MASK] (tail entity)") swapped here? If so, doesn't fine-tuning a BERT-style model here actually use the head entity to predict the head entity, rather than using the head entity to predict the tail entity? @Life-0-1 @#12 @#15 @Xiao9905

Could this be the reason why attempts to reproduce the results with BERT do not work?

gpt2-medium LAMA

Hi, I just used the default params to p-tune gpt2-medium on the LAMA task, and the results are as follows.
best dev_hit@1: 51.8 best test_hit@1: 44.5
I have some confusion about these results:
(1) There seems to be a gap between the dev results and the test results. Are the dev set and the test set drawn from the same distribution? Would it be possible to provide the scripts for generating the train/dev/test splits, along with the original dataset?
(2) The result reported in the paper is 46.5, which is close to the best test_hit@1. Are the results in the paper based on the test set?
It would be very nice if shell scripts were provided to reproduce the results in the paper.

Vocabulary question

What should be done when a sample's gold label is not in the given vocabulary?
Looking at the source code: in the LAMA experiments such samples are filtered out via if token_wrapper(args, d['obj_label']) not in vocab: continue (approach 1); in the few-shot experiments, labels are mapped to verbalizers, e.g. "contradiction": ["No"], "entailment": ["Yes"], "neutral": ["Maybe"] (approach 2).
Is there a better way to handle this? For example, when the total number of label classes is very large, approach 2 is not usable, but I also don't want to discard those samples as in approach 1.
Thanks!

MASK and x_h reversed on bert

in the function get_query() in LAMA/p_tuning/modeling.py, line 87
    return [[self.tokenizer.cls_token_id]  # [CLS]
            + prompt_tokens * self.template[0]
            + [self.tokenizer.mask_token_id]  # head entity
            + prompt_tokens * self.template[1]
            + self.tokenizer.convert_tokens_to_ids(self.tokenizer.tokenize(' ' + x_h))  # [MASK] (tail entity)
            + (prompt_tokens * self.template[2] if self.template[2] > 0
               else self.tokenizer.convert_tokens_to_ids(['.']))
            + [self.tokenizer.sep_token_id]
            ]
so the query looks like [CLS h h h MASK h h h x_h h h h]? According to the paper, "MASK" and "x_h" should be swapped, giving
[CLS h h h x_h h h h MASK h h h]. Am I missing something? Thanks.
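For comparison, a hedged sketch of the ordering the paper describes, with the head entity first and the [MASK] slot for the tail entity; the helper name is illustrative and the body only rearranges calls from the quoted snippet:

    # Hedged sketch of the paper's ordering: head entity first, [MASK] for the tail.
    def build_bert_query(tokenizer, prompt_tokens, template, x_h):
        return ([tokenizer.cls_token_id]                                          # [CLS]
                + prompt_tokens * template[0]
                + tokenizer.convert_tokens_to_ids(tokenizer.tokenize(' ' + x_h))  # head entity
                + prompt_tokens * template[1]
                + [tokenizer.mask_token_id]                                       # [MASK] (tail entity)
                + (prompt_tokens * template[2] if template[2] > 0
                   else tokenizer.convert_tokens_to_ids(['.']))
                + [tokenizer.sep_token_id])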

Few-shot NLU: learning rate for model parameters vs. embedding parameters

Hi!

Thanks for the interesting paper and releasing this nice codebase! I had a quick question with respect to the learning rate used for the fewshot NLU experiments. The paper mentions (Section 4.2) that:

We perform grid search of hyper-parameters and take the best combination on Ddev or Ddev32. Specifically, we take learning rates from 1e-5, 2e-5, 3e-5 and batch sizes from 16, 32

However, it seems like the model is updated with a fixed learning rate of 1e-5 in the code ( https://github.com/THUDM/P-tuning/blob/main/PT-Fewshot/pet/wrapper.py#L312 ) , and the learning rate taken from the CLI is only used for the embedding parameters.

Given that the paper and code seem to differ in this regard, I'm not sure whether this is a bug in the code (i.e., the model and the embedding parameters should both use the LR taken from the CLI) or whether the paper omits this detail (i.e., in reality, the LR grid search is only done over the embedding parameters, and 1e-5 is always used for the model). Could you clarify which approach was taken in your experiments?

Thanks again!
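For concreteness, a hedged sketch (dummy modules and illustrative names, not the repo's actual classes) of the two-optimizer setup the question refers to, with the backbone on a fixed 1e-5 and only the prompt embeddings on the CLI learning rate:

    import torch
    from torch import nn
    from torch.optim import AdamW

    # Dummy stand-ins so the snippet is self-contained; illustrative only.
    backbone = nn.Linear(768, 768)             # stands in for the pre-trained model
    prompt_embeddings = nn.Embedding(3, 768)   # stands in for the continuous prompt
    cli_lr = 3e-5                              # value that would come from the CLI grid search

    model_optimizer = AdamW(backbone.parameters(), lr=1e-5)                 # fixed, as in wrapper.py
    embedding_optimizer = AdamW(prompt_embeddings.parameters(), lr=cli_lr)  # CLI-controlled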

loss, logits = output.loss, output.logits AttributeError: 'tuple' object has no attribute 'loss'

    def bert_out():
        label_mask = (queries == self.tokenizer.mask_token_id).nonzero().reshape(bz, -1)[:, 1].unsqueeze(
            1).to(self.device)  # bz * 1
        labels = torch.empty_like(queries).fill_(-100).long().to(self.device)  # bz * seq_len
        labels = labels.scatter_(1, label_mask, label_ids)
        output=self.model(inputs_embeds=inputs_embeds.to(self.device),
                            attention_mask=attention_mask.to(self.device).bool(),
                            labels=labels.to(self.device))
        loss, logits = output.loss, output.logits

Is LAMA p-tuning seeking a global prompt representation across relations?

Hi, I read the paper and found it very interesting! When you apply p-tuning to the LAMA dataset, is the encoder for the pseudo tokens shared across different relation types, or do you train a separate embedding for each relation type? After looking over the code, I believe the encoder trained via p-tuning is global and not conditioned on relation type, but I just want to double check.

Assuming the encoder is globally shared across relation types, does p-tuning really need a training dataset as large as AutoPrompt's? AutoPrompt optimizes the prompt per relation (which can be confirmed from their official release of generated prompts), and that's one of the reasons they need a large training set for each relation type (1000 data points per relation).

Meanwhile, p-tuning seems to be more efficient because of the continuous optimization and the shared encoder, so I feel it could establish a very strong baseline even in a few-shot setting. Have you done any ablation studies to see whether it works with limited data? I'm very curious about the result if you have it.

Thank you for sharing the code and your work!

AttributeError: 'PTuneForLAMA' object has no attribute 'prompt'

I called

python cli.py --model_name=gpt2

After several epochs, it printed the following error:

P1001 Dev Epoch 41 Loss: 0.007320404052734375 Hit@1: 0.6756756756756757
Traceback (most recent call last):
  File "cli.py", line 238, in <module>
    main()
  File "cli.py", line 234, in main
    trainer.train()
  File "cli.py", line 197, in train
    self.save(best_ckpt)
  File "cli.py", line 171, in save
    print("# Prompt:", self.model.prompt)
  File "/home/pouramini/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", 
line 948, in __getattr__
    type(self).__name__, name))
AttributeError: 'PTuneForLAMA' object has no attribute 'prompt'

How to apply P-tuning to question answering

Hi, I'm an NLP beginner and have a question for the authors. How can I implement P-tuning for question answering in code? Specifically, I want to replace the question in a QA task's input with a p-tuned prompt. Any pointers on the code side would be greatly appreciated.

Prompt format selection

Hi,

Kudos for the nice work!

I'm looking at the paper and code for details regarding the format of the prompt, i.e. the locations of the embeddings to be optimized. It isn't clear to me how this is chosen, and it seems block_flag is part of the input data. Looking at the RTE task, the block location appears to be example-dependent.

Could you clarify this point? Are the locations selected based on previous work?

Some questions about the paper

Hi, I had the pleasure of reading your paper "GPT Understands, Too"; it's really nice work. While reading it, I had two main questions:

1. How large is the performance gap between optimizing the prompt embeddings directly and generating them with the LSTM as in the paper? The paper doesn't seem to compare the two.

2. Could you briefly list the templates used for each SuperGLUE task? I only see the ones for LAMA, (3, sub, 3, obj, 3) and (3, sub, 3, obj); the templates for the other tasks are not given.

questions about discreteness in optimization

Hi! Thank you for the interesting paper.
While reading the paper, I came across something I don't understand and would like to ask about it.

In 3.2 Optimization part of the paper,

If h is initialized with random distribution and then optimized with stochastic gradient descent (SGD), 
which has been proved to only change the parameters in a small neighborhood (AllenZhu et al., 2019), 
the optimizer would easily fall into local minima.

Is the problem caused by the discreteness of the word embeddings $e$ during optimization? Could you explain this in more detail?

My second question is why the proposed prompt encoder helps with this discreteness; it may be connected to the first question.

Thank you.

Potential bug in learning rate schedule for few-shot

Hi,

I recently refactored some of the code for my own research, and I may have spotted a potential bug in the learning rate schedule.

To pinpoint, I'm referring to this line. It seems embedding_scheduler should be created with embedding_optimizer as input, as opposed to optimizer.
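A hedged sketch of the suspected fix; the modules, learning rates, and step counts are dummies so the snippet is self-contained, and the scheduler type is an assumption rather than a quote of wrapper.py:

    import torch
    from torch import nn
    from transformers import get_linear_schedule_with_warmup

    # Dummy stand-ins; illustrative only.
    backbone = nn.Linear(8, 8)
    prompt_embeddings = nn.Embedding(3, 8)
    optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-5)
    embedding_optimizer = torch.optim.AdamW(prompt_embeddings.parameters(), lr=1e-4)
    warmup_steps, total_steps = 100, 1000

    scheduler = get_linear_schedule_with_warmup(optimizer, warmup_steps, total_steps)
    embedding_scheduler = get_linear_schedule_with_warmup(
        embedding_optimizer,  # instead of `optimizer`, which is the suspected bug
        warmup_steps, total_steps)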

Prompt Length in SuperGLUE

Hello,

Thank you for your work and for releasing the code! I just have a few questions regarding the size of the prompt embeddings:

  • From the scripts you shared for the SuperGLUE tasks, the chosen pattern id is 1 for most tasks (except WSC, which uses 2). If I understood correctly, you discard the original notion of patterns and use the pattern id to denote the number of prompt embeddings to train. Does this mean you are using a single prompt embedding vector for most tasks?
  • If so, is there a specific reason why the LSTM performs better than an MLP in this case? If I understood correctly, one of the reasons the LSTM was used is to help with the association problem and make the different prompt embeddings dependent on each other. Would this problem exist with just one prompt embedding?

Thank you for your work and cooperation!

Where is the GPT-based implementation on SuperGLUE?

Hi, thanks for your great work!
I'm looking to train GPT-2 on SuperGLUE, but I could only find implementations based on BERT/RoBERTa/ALBERT.
Could you point me to the GPT implementation, or has it not been released yet?
Thanks!

Reproduction issues on several Few-shot SuperGLUE datasets

Hi, when reproducing the Few-shot SuperGLUE experiments (i.e. the FewGLUE_32dev data), my results on the CB, WSC, and COPA datasets differ noticeably from the paper (all reproduction runs use albert-xxlarge-v2 as the pre-trained model, consistent with the paper, with seed=42 unchanged):
(results screenshot)

Differences in experimental setup:

CB experiments

  • The original script uses 8 GPUs / pet_per_gpu_train_batch_size=2 / pet_gradient_accumulation_steps=1; my run uses 1 GPU / pet_per_gpu_train_batch_size=8 / pet_gradient_accumulation_steps=2, with all other parameters unchanged;
  • Best result: acc 85.71 with a corresponding f1-macro of 78.76; the paper reports 92.9/92.3;
  • In the project issues I found your explanation of CB underperforming the paper: https://github.com/THUDM/P-tuning/issues/12. If the gap is caused by incorrect script parameters, when will the training script be updated?

WSC experiments

  • Same parameters as the original script (1 GPU / pet_per_gpu_train_batch_size=16 / pet_gradient_accumulation_steps=1);
  • Best acc 81.73; the paper reports 84.6;

COPA experiments

  • Same parameters as the original script (1 GPU / pet_per_gpu_train_batch_size=16 / pet_gradient_accumulation_steps=1);
  • Best acc 79.00; the paper reports 87.0;

Python package version differences

In case version differences affect reproduction, here are my installed versions of the packages in requirements.txt (the repo's pinned version is in parentheses):

  • numpy 1.19.5(1.19)
  • jsonpickle 2.0.0(1.1)
  • scikit-learn 0.24.1(0.23.1)
  • torch 1.7.1+cu110(1.5.0)
  • torchvision 0.8.2+cu110(0.6.0)
  • transformers 4.5.1(3.0.2)
  • tqdm 4.49.0(4.48.1)
  • tensorboardX 2.2(2.1)
    Because of the CUDA version on my machine, the torch-related packages differ from the pinned versions; other packages such as tqdm and tensorboardX should not affect the results.
    Could these version differences explain the gap?

Hardware differences

All reproduction experiments were run on a single GeForce RTX 3090.

How should I interpret these differences in model performance?

Which cudatoolkit did you use to match requirements.txt?

Hi authors,

I wonder if you could tell me how you launch scripts/rte_pt_few_shot.sh? I ran into some difficulties running it. Here is how I installed the packages:

conda install pytorch==1.5.0 torchvision==0.6.0 cudatoolkit=10.0 -c pytorch

And it reports: AssertionError: Torch not compiled with CUDA enabled

Any help would be appreciated!

Details of the fully-supervised learning experiments

Take CB as an example: the P-tuning paper reports ACC 89.2 and F1 92.1 on bert-base-cased.
The paper also says:

MP zero-shot and MP fine-tuning report results of a single pattern, while anchors for P-tuning are selected from the same prompt.

Does this mean that MP zero-shot, MP fine-tuning, and P-tuning all report results using the same single pattern?

Also, running the code produces a mean ± standard deviation, so for the fully-supervised learning experiments, does the paper report only the mean?

Looking through the code, I found the following

searched patterns in fully-supervised learning

    # string_list_a = [text_a, ' question: ', text_b, ' true, false or neither? answer:', "the", self.mask]
    # string_list_a = [text_a,  "[SEP]", example.text_b, "?", 'the',  " answer: ", self.mask]
    # string_list_a = [text_a,  "the",  text_b, "?",  "Answer:", self.mask]
    # string_list_a = [text_a, 'the the', 'question:', text_b, '?', 'the the', 'answer:', self.mask]
    # string_list_a = [text_a, "[SEP]", text_b, "?", "the", self.mask]

For the fully-supervised P-tuning experiments, is the reported performance the average over these five patterns, or is each pattern run three times, with the paper reporting the mean of the best-performing pattern among the five?

original, lama, or shared?

Could you explain how to choose the vocabulary setting, given that it has three options (original, lama, shared)?
What exactly do they mean, and which one should be used to reproduce the LAMA results from the experiments?
Thanks a lot; looking forward to your answer.

Code for the GPT implementation

Hi, I read the paper. Figure 1 says that GPTs can be better than similar-sized BERTs on NLU with P-tuning, but there seems to be no code for the GPT implementation. Could you share that part? How can I reproduce the result?

Few-shot results drop a lot when switching the encoder to bert-base-cased

Hi, thank you very much for open-sourcing the code.
While reproducing the results, I ran into the following two questions:

  1. In the few-shot experiments, switching the encoder from albert-xxlarge-v2 to bert-base-cased with everything else unchanged causes a large drop in performance (accuracy around 50% on the WiC and RTE datasets). Is this purely due to encoder capacity, or are there important hyperparameters that need retuning?
  2. When reproducing the paper's results with the released code, the CB results differ a lot, as shown in the screenshot (mine on the left, the paper's on the right). What could cause this?
    (results screenshot)

multi-label classification

It seems that P-tuning is not suitable for multi-label classification. Is that right? If so, is it possible to extend P-tuning to such tasks?

A problem about the prompt

Is the input of the bidirectional model randomly initialized during p-tuning, or is it the embedding of the template? The pseudo prompts in Figure 2(b) seem to indicate that the model uses the template embedding as input. I'm a little confused by this description.

BERT output is a tuple (in LAMA)

Hi, thanks for the great codebase!

In the bert_out method of the PTuneForLAMA class,

# LAMA/p_tuning/modeling.py (124 line~)
def bert_out():
    label_mask = (queries == self.tokenizer.mask_token_id).nonzero().reshape(bz, -1)[:, 1].unsqueeze(
        1).to(self.device)  # bz * 1
    labels = torch.empty_like(queries).fill_(-100).long().to(self.device)  # bz * seq_len
    labels = labels.scatter_(1, label_mask, label_ids)
    output = self.model(inputs_embeds=inputs_embeds.to(self.device),
                        attention_mask=attention_mask.to(self.device).bool(),
                        labels=labels.to(self.device))
    loss, logits = output.loss, output.logits

the output object has no loss or logits attributes, since it is a tuple.

I think it should be changed as below:

def bert_out():
    label_mask = (queries == self.tokenizer.mask_token_id).nonzero().reshape(bz, -1)[:, 1].unsqueeze(
        1).to(self.device)  # bz * 1
    labels = torch.empty_like(queries).fill_(-100).long().to(self.device)  # bz * seq_len
    labels = labels.scatter_(1, label_mask, label_ids)
    loss, logits = self.model(inputs_embeds=inputs_embeds.to(self.device),
                        attention_mask=attention_mask.to(self.device).bool(),
                        labels=labels.to(self.device))

I checked this code works fine on my machine.
Thank you again.


07.08 addendum:
gpt_out() also has the same issue

loss, logits, _ = self.model(inputs_embeds=inputs_embeds.to(self.device).half(),
                    attention_mask=attention_mask.to(self.device).half(),
                    labels=labels.to(self.device))

With a newer Hugging Face Transformers version, this can also be solved by passing return_dict=True.

Training loss becomes NaN during p-tuning

Running p-tuning on my own dataset with the released code, the loss becomes NaN; on inspection, the logits output by the model also turn into NaN. What could be causing this? Any help would be appreciated.

Fully-supervised SuperGLUE

Nice work. May I ask whether you plan to release code to reproduce P-tuning on the fully-supervised SuperGLUE tasks? There seems to be no relevant code yet.

Why does the discreteness of word embeddings lead the optimizer to easily fall into local minima?

I recently read your paper "GPT Understands, Too" and don't quite understand the following passage; I'd appreciate your help in explaining it: "1) Discreteness: the original word embedding e of M has already become highly discrete after pre-training. If h is initialized with random distribution and then optimized with stochastic gradient descent (SGD), which has been proved to only change the parameters in a small neighborhood (AllenZhu et al., 2019), the optimizer would easily fall into local minima." As I understand it, this says that the pre-trained word embeddings are mutually discrete, but the trainable parameters h are randomly initialized anyway and do not come from the word embeddings. How does the discreteness of the word embeddings affect the optimization of h?

Why can the LSTM be discarded during inference?

I am confused by this sentence in your paper "GPT Understands, Too":

Moreover, in the inference, we only need the output embedding h and can discard the LSTM head.

If the LSTM encoder is used during training and the final embeddings are produced by combining the LSTM encoder's outputs with the original embeddings, but the LSTM is discarded during inference, then at inference the final embeddings would just be the outputs of the two embedding layers. Doesn't this change the performance?

So why can the LSTM be discarded at inference?

Thanks a lot.
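One way to read that sentence, as a hedged sketch with illustrative module names rather than the repository's code: the prompt encoder's input is a fixed index sequence, so its output can be computed once after training and cached, after which the LSTM/MLP head is no longer needed.

    import torch
    from torch import nn

    # Illustrative sketch: the prompt encoder's input never changes, so its output h
    # can be computed once after training and stored; the head that produced it is
    # not needed at inference time.
    prompt_len, hidden = 6, 768
    embedding = nn.Embedding(prompt_len, hidden)
    lstm = nn.LSTM(hidden, hidden // 2, num_layers=2, bidirectional=True, batch_first=True)
    mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    with torch.no_grad():
        x = embedding(torch.arange(prompt_len)).unsqueeze(0)
        cached_prompt = mlp(lstm(x)[0]).squeeze(0)   # prompt_len x hidden, saved with the checkpoint

    # At inference, cached_prompt is spliced into the input embeddings directly,
    # so the LSTM and MLP above can be discarded.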

Inconsistent SuperGLUE Results from P-Tuning and P-TuningV2 Paper

Hi, I find that most of the SuperGLUE metrics for PT reported in the P-Tuning paper are superior to the fine-tuning metrics, but the PT metrics reported in the P-Tuning v2 paper are much worse than fine-tuning. For example, on BoolQ the P-Tuning paper reports 72.9 acc for fine-tuning and 73.9 for PT, while the P-Tuning v2 paper reports 77.7 for fine-tuning and 67.2 for PT.

So according to the P-Tuning v2 paper, PT is much worse than fine-tuning, which is the opposite of the conclusion in the P-Tuning paper.

AttributeError: 'Namespace' object has no attribute '_get_node_flag'

Hello, which versions of "fairseq" and "omegaconf" did you use? I'm running into the following error.

    Traceback (most recent call last):
      File "cli.py", line 238, in <module>
        main()
      File "cli.py", line 233, in main
        trainer = Trainer(args)
      File "cli.py", line 109, in __init__
        self.model = PTuneForLAMA(args, self.device, self.args.template)
      File "/home/yuec/P-tuning/LAMA/p_tuning/modeling.py", line 33, in __init__
        self.model = create_model(self.args)
      File "/home/yuec/P-tuning/LAMA/p_tuning/models.py", line 8, in create_model
        return load_megatron_lm(args)
      File "/home/yuec/P-tuning/LAMA/megatron_11b/megatron_wrapper.py", line 31, in load_megatron_lm
        distributed_utils.infer_init_method(task_args)
      File "/home/yuec/miniconda3/envs/run_self_learning/lib/python3.7/site-packages/fairseq/distributed/utils.py", line 70, in infer_init_method
        with open_dict(cfg):
      File "/home/yuec/miniconda3/envs/run_self_learning/lib/python3.7/contextlib.py", line 112, in __enter__
        return next(self.gen)
      File "/home/yuec/miniconda3/envs/run_self_learning/lib/python3.7/site-packages/omegaconf/omegaconf.py", line 943, in open_dict
        prev_state = config._get_node_flag("struct")
    AttributeError: 'Namespace' object has no attribute '_get_node_flag'
