jind11 / textfooler

A Model for Natural Language Attack on Text Classification and Inference

License: MIT License

adversarial-attacks bert bert-model text-classification natural-language-inference natural-language-processing

textfooler's Introduction

TextFooler

A Model for Natural Language Attack on Text Classification and Inference

This is the source code for the paper: Jin, Di, et al. "Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment." arXiv preprint arXiv:1907.11932 (2019). If you use the code, please cite the paper:

@article{jin2019bert,
  title={Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment},
  author={Jin, Di and Jin, Zhijing and Zhou, Joey Tianyi and Szolovits, Peter},
  journal={arXiv preprint arXiv:1907.11932},
  year={2019}
}

Data

Our 7 datasets are here.

Prerequisites:

Required packages are listed in the requirements.txt file:

pip install -r requirements.txt

How to use

  • Run the following code to install the esim package:

    cd ESIM
    python setup.py install
    cd ..

  • Pre-compute the cosine similarity scores between word pairs based on the counter-fitting word embeddings (optional but recommended; see --counter_fitting_cos_sim_path below):

    python comp_cos_sim_mat.py [PATH_TO_COUNTER_FITTING_WORD_EMBEDDINGS]

  • Run the following code to generate the adversaries for text classification:

    python attack_classification.py

For natural language inference:

python attack_nli.py

Example commands for these two scripts are given in run_attack_classification.py and run_attack_nli.py (a sketch follows the argument list below). Here we explain each required argument in detail:

  • --dataset_path: The path to the dataset. We put the 1000 examples for each dataset we used in the paper in the folder data.
  • --target_model: Name of the target model, e.g. "bert".
  • --target_model_path: The path to the trained parameters of the target model. For ease of replication, we shared the trained BERT model parameters, the trained LSTM model parameters, and the trained CNN model parameters on each dataset we used.
  • --counter_fitting_embeddings_path: The path to the counter-fitting word embeddings.
  • --counter_fitting_cos_sim_path: This is optional. If given, then the pre-computed cosine similarity scores based on the counter-fitting word embeddings will be loaded to save time. If not, it will be calculated.
  • --USE_cache_path: The path where the USE model file is cached (the model is downloaded automatically if this path is empty).
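
A minimal driver in the style of run_attack_classification.py could look like the sketch below; all paths here are placeholders for your local files, not the repo's exact values:

import os

# Hypothetical invocation; swap in your own dataset and model paths.
command = ('python attack_classification.py '
           '--dataset_path data/yelp '
           '--target_model bert '
           '--target_model_path models/bert/yelp '
           '--counter_fitting_embeddings_path counter-fitted-vectors.txt '
           '--counter_fitting_cos_sim_path cos_sim_counter_fitting.npy '
           '--USE_cache_path cache/use')
os.system(command)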

Two more things to share with you:

  1. In case you want to replicate our experiments for training the target models, we have shared the seven processed datasets!

  2. In case you want to use our generated adversarial examples on the benchmark data directly, here they are.

textfooler's People

Contributors

jind11, leix28

textfooler's Issues

Number of queries in attack_classification.py etc.

Thank you for open-sourcing the code! I would like to double-check the following concerns:

  1. Does num_queries in attack_classification.py reflect how many queries were made to the classifier for each input text?

  2. Is the similarity threshold issue reported in the text-attack library already fixed in the current text-fooler implementation?

Another comment: in the Prerequisites section of the README, the -r flag is missing from pip install requirements.txt; it should be pip install -r requirements.txt. Hope it helps :-)

Question about pretrained models w.r.t InferSent and ESIM

Hi, thanks for your code. Since I haven't found pretrained models for InferSent and ESIM, I trained both models myself. However, I cannot reproduce the same accuracy as the data you provided. Could you provide details of training and validation, or share the pretrained models?

POS filter - why 'NOUN' and 'VERB' can be replaced by each other

I read the source code in criteria.py and found the pos_filter function. However, I don't understand why you set it up so that set([ori_pos, new_pos]) <= set(['NOUN', 'VERB']) counts as same = True, i.e., a NOUN may be replaced by a VERB and vice versa. Could anyone explain this? Thank you so much!
def pos_filter(ori_pos, new_pos_list):
    same = [True if ori_pos == new_pos or (set([ori_pos, new_pos]) <= set(['NOUN', 'VERB']))
            else False
            for new_pos in new_pos_list]
    return same

Quality of adversaries and authenticity of results

There seems to be an issue with a few of the adversaries.

For example: A claimed adversary from mr_bert.txt is:
orig sent (0): to portray modern women the way director davis has done is just unthinkable
adv sent (1): to portray modern women the way director davis has done is just imaginable

unthinkable and imaginable are antonyms that erroneously have high cosine similarity, suggesting they are synonyms. I suggest such examples not be counted when evaluating the attack success rate, as human evaluation would clearly label the adversarial sentence as positive (1) and not negative.

Question on Experiment Details

Hi,

I'm trying to use your method to attack another dataset where the average text length is about 400 words. I have some questions about some details:

  1. In your paper, does the query number refer to the number of forward passes that the target model needs to perform for EACH example? E.g., on the Fake News dataset, you have to run BERT to get predicted probabilities 4403 times on average for each test example (for importance ranking and choosing the best replacement candidate).

  2. In such case, if I have a relatively big test set, would the generation process take a very long time (I'm using BERT as the target model)? Could you provide some reference on how much time it took to generate the adversarial examples on the Fake News Dataset? It seems that if the generation process is slow, it will also make adversarial training harder where I have to generate adversarial examples on the training data.

  3. Do you have an ablation on randomly choosing a candidate from the final candidate pool? In your implementation, you also use the target model to find the candidate that yields the least confidence; I'm wondering what happens if you remove that and randomly pick one instead.

  4. Have you considered the transferability issue? I think in your implementation, when you attack a model, you will also use it as the target model for generating the attacks. I'm wondering what if you use one model as target model to generate the adversarial examples, and use another model to run on the adversarial example for testing? Would that work equally well?

Thanks!

question about sim_scores

Hi, I find that in your code the sim_score is defined as 1 − arccos(cos⟨x1, x2⟩) rather than the cosine similarity cos⟨x1, x2⟩ itself, which confuses me.
Why not use cosine similarity directly? Is it because the arccos transform makes the scores more distinguishable?
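
For reference, here is a minimal sketch (my own illustration, not the repo's code) of the two scores on plain NumPy vectors; the arccos form is the angular similarity used with USE embeddings:

import numpy as np

def cosine_sim(u, v):
    # Plain cosine similarity, in [-1, 1].
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def angular_sim(u, v):
    # Angular similarity: 1 - arccos(cos)/pi, mapped into [0, 1].
    cos = np.clip(cosine_sim(u, v), -1.0, 1.0)
    return 1.0 - np.arccos(cos) / np.pi

One motivation, from the USE paper, is that arccos is linear in the angle between the vectors, so it discriminates better among pairs that are all highly similar, where the raw cosine saturates near 1.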

Missing vocabularies for wordCNN and wordLSTM pretrained models

I tried to load the pretrained wordCNN/LSTM models, but found that the embedding layer uses a 400K vocab with 200-dimensional embeddings. It seems that you used the Wikipedia-pretrained embeddings from https://nlp.stanford.edu/projects/glove/.
However, you mentioned before in this reply that you used 10K and 20K vocabs for CNN and LSTM models.

Could you please explain which vocab sizes you used for the published results in the paper?
And if possible, could you provide the 10-20K word vocabularies used for these models? (as you did with BERT)

Thank you!
Cheers

MemoryError for calculating cosine similarity scores

Hi,

I tried to pre-calculate the cosine similarity scores based on the counter-fitting word vectors, but ran into a MemoryError. The word vectors are (65713, 300), and the final similarity matrix is (65713, 65713). There are some dot-product and element-wise division operations involved. I have 8 GB of RAM. Any suggestions?

Thanks a lot!
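
A workaround sketch (not part of the repo; the file names and block size below are assumptions): normalize the vectors once, then fill the matrix block by block into a disk-backed array. Even in float32, a 65713 × 65713 matrix takes about 17 GB, so it will not fit in 8 GB of RAM in any case:

import numpy as np

# Assumed input: a (65713, 300) float array of counter-fitted embeddings.
embeddings = np.load('counter_fitted_vectors.npy').astype(np.float32)
unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

n, block = unit.shape[0], 1024
# Disk-backed output so the full similarity matrix never sits in RAM.
sims = np.lib.format.open_memmap('cos_sim.npy', mode='w+',
                                 dtype=np.float32, shape=(n, n))
for i in range(0, n, block):
    sims[i:i + block] = unit[i:i + block] @ unit.T  # cosine sims for one block
sims.flush()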

Adversarial examples provided in google drive are not able to fool your trained target model (BERT) itself

Adversarial examples provided in Google Drive are not able to fool your trained target model (BERT) itself. So far I have checked 'mr_bert.txt', and most of the claimed adversarial texts do not receive the label claimed in the file. Please look into this.

A few examples I can quote:
orig sent (0): davis is so enamored of her own creation that she ca n't see how insufferable the character is
adv sent (1): davis is well enthralled of her own creation that she ca n't see how insufferable the character is

orig sent (0): it 's hard to imagine that even very small children will be impressed by this tired retread
adv sent (1): it 's intense to thinking that even immeasurably small children will is impressed by this tired retread

Problems in the process of reproducing the code

Thank you for open-sourcing the code!
In the process of reproducing the results, the following problem appeared:
Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([4, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([2]).

I haven't solved it, can you help me? Thanks!

Not able to run cos_sim.py

Hi.
I am trying to run comp_cos_sim_mat.py on the counter-fitted vectors, but I run into memory issues.
Is there a memory requirement for that step? I'm using Google Colab with 12 GB RAM.

How to install USE successfully?

When I run
module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
self.embed = hub.Module(module_url)
the following error is raised:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
      1 module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
----> 2 hub.Module(module_url)

/usr/local/lib/python3.6/dist-packages/tensorflow_hub/module.py in __init__(self, spec, trainable, name, tags)
    174       name=self._name,
    175       trainable=self._trainable,
--> 176       tags=self._tags)
    177     # pylint: enable=protected-access
    178

/usr/local/lib/python3.6/dist-packages/tensorflow_hub/native_module.py in _create_impl(self, name, trainable, tags)
    384         trainable=trainable,
    385         checkpoint_path=self._checkpoint_variables_path,
--> 386         name=name)
    387
    388   def _export(self, path, variables_saver):

/usr/local/lib/python3.6/dist-packages/tensorflow_hub/native_module.py in __init__(self, spec, meta_graph, trainable, checkpoint_path, name)
    443     # TPU training code.
    444     with scope_func():
--> 445       self._init_state(name)
    446
    447   def _init_state(self, name):

/usr/local/lib/python3.6/dist-packages/tensorflow_hub/native_module.py in _init_state(self, name)
    446
    447   def _init_state(self, name):
--> 448     variable_tensor_map, self._state_map = self._create_state_graph(name)
    449     self._variable_map = recover_partitioned_variable_map(
    450         get_node_map_from_tensor_map(variable_tensor_map))

/usr/local/lib/python3.6/dist-packages/tensorflow_hub/native_module.py in _create_state_graph(self, name)
    503         meta_graph,
    504         input_map={},
--> 505         import_scope=relative_scope_name)
    506
    507     # Build a list from the variable name in the module definition to the actual

/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs)
   1460   return _import_meta_graph_with_return_elements(meta_graph_or_file,
   1461                                                  clear_devices, import_scope,
-> 1462                                                  **kwargs)[0]
   1463
   1464

/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in _import_meta_graph_with_return_elements(meta_graph_or_file, clear_devices, import_scope, return_elements, **kwargs)
   1470   """Import MetaGraph, and return both a saver and returned elements."""
   1471   if context.executing_eagerly():
-> 1472     raise RuntimeError("Exporting/importing meta graphs is not supported when "
   1473                        "eager execution is enabled. No graph exists when eager "
   1474                        "execution is enabled.")

RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled.
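
A possible workaround (a sketch, assuming you are on TensorFlow 2.x with the v1 compatibility layer): hub.Module loads a TF1-format module and needs graph mode, so disable eager execution before constructing it.

import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

tf.disable_eager_execution()  # hub.Module (TF1 format) requires graph mode

module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
embed = hub.Module(module_url)

Alternatively, staying on a TensorFlow 1.x release (where graph mode is the default) avoids the error entirely.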

Which python version?

Hello,

I am currently trying to reproduce the results in your code, however I am running into the following error:

Collecting absl-py==0.9.0 (from -r requirements.txt (line 1))
Downloading absl-py-0.9.0.tar.gz (104 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 104.0/104.0 kB 3.1 MB/s eta 0:00:00
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (setup.py) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Which python version should I be using?

Doing a quick search online, I found that there may be no support for Python 3.10 and beyond:

RasaHQ/rasa#11101

Vocab.txt for running Imdb is not available

I am getting the following output:

Model name '/content/drive/My Drive/imdb' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc). We assumed '/content/drive/My Drive/imdb/vocab.txt' was a path or url but couldn't find any file associated to this path or url.

Then the code runs and stops with the following:

Traceback (most recent call last):
  File "attack_classification.py", line 589, in <module>
    main()
  File "attack_classification.py", line 557, in main
    batch_size=args.batch_size)
  File "attack_classification.py", line 203, in attack
    orig_probs = predictor([text_ls]).squeeze()
  File "attack_classification.py", line 86, in text_pred
    dataloader = self.dataset.transform_text(text_data, batch_size=batch_size)
  File "attack_classification.py", line 184, in transform_text
    self.max_seq_length, self.tokenizer)
  File "attack_classification.py", line 150, in convert_examples_to_features
    tokens_a = tokenizer.tokenize(' '.join(text_a))
AttributeError: 'NoneType' object has no attribute 'tokenize'

Maybe it is related to the cache directory path.

Can you help me resolve this?

Formula for calculating USE cosine similarities: dividing by π

Hi,

I see you are actually using the scaled angular distance between the two embeddings instead of the raw cosine similarity score.

https://github.com/jind11/TextFooler/blob/master/attack_classification.py#L32

After the call to tf.acos, do you not need to divide by π to scale the value between 0 and 1? That is the practice recommended in the Universal Sentence Encoder paper, section 5. Did you forget to divide by π, or am I missing something?
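
For reference, the scaled angular similarity recommended in section 5 of the USE paper is

\[ \mathrm{sim}(u, v) = 1 - \frac{1}{\pi}\arccos\!\left(\frac{u \cdot v}{\lVert u \rVert\, \lVert v \rVert}\right), \]

so without the division by π the value lies in [1 − π, 1] rather than [0, 1].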

Invalid argument when calculating sim_predictor.semantic_sim()

I'm able to load the USE model, but when running sim_predictor.semantic_sim(), I get the error below. How can I fix this?
2020-03-23 05:50:07.614773: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at sparse_to_dense_op.cc:128: Invalid argument: indices[16] = [0,16] is out of bounds: need 0 <= index < [49,16]
2020-03-23 05:50:07.614887: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: indices[16] = [0,16] is out of bounds: need 0 <= index < [49,16]
[[{{node text_preprocessor/SparseToDense}}]]
[[Encoder_en/Transformer/PrepareForTransformer/embedding_lookup/DynamicPartition/58]]
2020-03-23 05:50:07.615180: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: indices[16] = [0,16] is out of bounds: need 0 <= index < [49,16]
[[{{node text_preprocessor/SparseToDense}}]]
Traceback (most recent call last):
  File "attack_classification.py", line 622, in <module>
    main()
  File "attack_classification.py", line 590, in main
    batch_size=args.batch_size)
  File "attack_classification.py", line 281, in attack
    embeddings1 = sim_predictor(sent1)
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_hub/keras_layer.py", line 209, in call
    result = f()
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1081, in __call__
    return self._call_impl(args, kwargs)
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1121, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 511, in call
    ctx=ctx)
  File "/usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: indices[16] = [0,16] is out of bounds: need 0 <= index < [49,16]
     [[node text_preprocessor/SparseToDense (defined at /usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
     [[Encoder_en/Transformer/PrepareForTransformer/embedding_lookup/DynamicPartition/_58]]
  (1) Invalid argument: indices[16] = [0,16] is out of bounds: need 0 <= index < [49,16]
     [[node text_preprocessor/SparseToDense (defined at /usr/local/miniconda3/envs/dl/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_pruned_6865]

Function call stack:
pruned -> pruned

Questions about string cleaning

Thanks for this solid work.
In clean_str, it seems that every dataset is lowercased except for TREC, yet the example sentence in Table 6 is cased. This looks like a conflict to me.

def clean_str(string, TREC=False):

Also, clean_str says "Tokenization/string cleaning for all datasets except for SST."
Did you train the model on a cleaned, uncased dataset but test it on a cased raw dataset? But the 1000-example split in 'data' is uncased. I'm really confused; is there something I have missed?
I apologize for not going through your code fully before asking. Any clarification would be very helpful. Thanks in advance~

Results differ from paper on AG news dataset

I got the following result on AG news:

For target model bert: original accuracy: 94.200%, adv accuracy: 24.400%, avg changed rate: 24.965%, num of queries: 446.3

There are slight modifications in the attack_classification.py file compared to the original one, but I have not changed the logic at all. You can see the revisions in the gist: the first one is your code and the second revision shows my changes.

Is the accuracy drop because of the difference in how the cosine similarity matrix is computed between the two files?
I used the implementation from the comp_cos_sim_mat.py file inside the attack_classification file because it fits in memory.

Thanks!

The counter-fitting word embeddings

Hi,
This is more of a question than a real issue. I'm trying to attack a model with your code but I'm not sure what "the counter-fitting word embeddings" file should be like. Can you give an example?
Thanks

Question about the results of the attack

Hi,
when I experimented with the 1000 samples you provided from the AG dataset and the bert model, the results are:
For dataset data/ag: For target model bert: original accuracy: 94.200%, adv accuracy: 29.500%, avg changed rate: 25.076%, num of queries: 481.7
But the adv accuracy in your paper is 11.5%.
I'm not quite sure what led to this result. Did you use the same 1000 examples in your paper?

Thanks.

Questions about the dataset

Thank you for the open-source code. I am a graduate student just entering the field of textual attacks. I recently came across the paper and would like to reproduce it. There are a few simple questions I would like to ask:

  1. Should the model be trained in the order given in the readme.txt? I think some details are omitted.
  2. The format of the data demo is different from what the code requires; could you tell me how to process it?


Regarding Semantic Similarity

In the code, you compute the semantic similarity of the current perturbed input against the previous perturbed input rather than the original input.
How does this guarantee that the final adversarial text will be semantically similar to the original input?

Regarding No. of classes in AG News

On AG news I am getting the following:

Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([4, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([2]).

Is it because of 2 classes instead of the 4 in AG News?
If yes, then how did you map the 4 class labels to 2 class labels?

EDIT :
The nclasses argument was set to 2 by default in attack_classification.py.

How do you measure the after-attack accuracy ?

I am currently replicating the numbers from your paper, and I am not sure how you measure the after-attack accuracy.
When I looked into the code, you do this:

if true_label != new_label:
    adv_failures += 1  # counts every example whose final label differs from the true label
...

(1 - adv_failures / 1000) * 100.0

This suggests that in adv_failures you also count non-flipped labels (cases where true_label != original_label) as successful attacks, and I don't see why. Aren't successful attacks supposed to be only those that flip the original model label while the true and original labels are equal (and then divided by the number of originally correct predictions, not the whole dataset)?
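
To make the difference concrete (my own illustration, not from the repo): with N = 1000 test examples, suppose the model originally classifies 950 correctly and the attack flips 900 of those 950. The formula above counts adv_failures = 50 + 900 = 950 (originally wrong plus flipped), giving an after-attack accuracy of (1 − 950/1000) × 100 = 5%. The alternative described here would instead report an attack success rate of 900/950 ≈ 94.7%, computed only over the originally correct predictions.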

Results slightly different from paper

Hi, we're trying to reproduce the results from your paper. Running your code on the 1000 Yelp data samples in the repo gave results:
original accuracy: 97.000%, adv accuracy: 6.600%, avg changed rate: 13.879%, num of queries: 827.1
The results are slightly different from those reported in your paper. We thought the issue might be the counter-fitted embeddings, but we tried all 3 versions in https://github.com/nmrksic/counter-fitting/tree/master/word_vectors. Any idea what we are missing?

examples

Can you post your successful examples? It would be really helpful to be able to analyze them on the MR, Yelp, IMDB, SNLI, and MNLI datasets. Thanks!

Doubt regarding synonym replacement

I have a doubt about the word replacement section of the paper.

  1. Are you querying the target model for every synonym of a word (selected after the POS and semantic-similarity checks), then selecting the synonym with the lowest confidence score, and then repeating this process for each word in the text, ordered by their classification drop scores? (See the sketch after this list.)

  2. How are you calculating the number of queries to the model?
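
For what it's worth, the sketch below is my own schematic of how the greedy replacement and the query counting could fit together; it is not the repo's exact implementation, and target_model, get_synonyms, and importance are stand-ins:

# Schematic only: a paraphrase of the paper's procedure, not the repo's code.
def greedy_attack(words, true_label, target_model, get_synonyms, importance):
    """target_model(words) returns class probabilities; each call is one query."""
    num_queries = 0
    # Visit words in descending order of importance (classification drop score).
    for idx in sorted(range(len(words)), key=lambda i: -importance[i]):
        best_word, best_prob = words[idx], 1.0
        for cand in get_synonyms(words[idx]):      # POS/similarity-filtered
            trial = words[:idx] + [cand] + words[idx + 1:]
            probs = target_model(trial)            # one query per candidate
            num_queries += 1
            if probs[true_label] < best_prob:      # keep the most damaging one
                best_word, best_prob = cand, probs[true_label]
        words = words[:idx] + [best_word] + words[idx + 1:]
        probs = target_model(words)                # re-check current sentence
        num_queries += 1
        if max(range(len(probs)), key=probs.__getitem__) != true_label:
            break                                  # label flipped; attack done
    return words, num_queries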

Regarding --max_sequence_length

What was your max_sequence_length value in the original experiments whose results are reported in the paper? The default is set to 128.
The default is set to 128.

Segmentation fault (core dumped) when running USE

Thank you for your code.
I followed the instructions to try to reproduce the results. After "Cos sim import finished!" is printed, a segmentation fault (core dumped) occurs; the previous steps run without problems. I am using TensorFlow 1.14.0 and PyTorch 0.4.1, with 12 GB of GPU memory. Can you please analyze the possible cause of the error?

Implementation on other dataset

Hi! I want to implement TextFooler on another dataset, and I have a classifier pretrained with a BERT model (pytorch_model.bin). I created a new folder, used it as --target_model_path, and put bert_config.json in it, but I wonder whether vocab.txt is the same for all pretrained models. For example, can I use the bert-base-uncased vocab.txt directly?

version

Hi, could you please share your Python/PyTorch/CUDA versions? I always have problems installing the requirements. Thanks!

Evaluation of models on SNLI and MNLI

Hi,
Nice work analysing the model robustness against adversaries. I was wondering how you evaluated the models on the test sets of SNLI and MNLI. As far as I know, the test predictions are not publicly available. Can you clarify this?

Cannot reproduce results with FAKE

Hi Di Jin and collaborators,

First of all, thank you for this great repo, this is highly valuable to have reproducible code/data/models of this quality.

I am currently trying to reproduce the scores from your paper. Everything works well, and I observe the same accuracy as reported in your article for imdb and yelp.

Nevertheless, the FAKE BERT pretrained model that I took from your drive produces random results (I get 50% accuracy while it should be 96.7%).

Are you sure that you uploaded the right model to the drive?

Best regards,

Antoine

Evaluation on AGnews

Hi, thank you for your work. However, I have encountered some issues while reproducing the code: the accuracy of AG News on the BERT model is only 19%. I initially thought this was because the data you provided had been attacked, but I found that the accuracy on the original data is also only 19%. I made minimal changes to the code.

About Data Size

Hi, I have a question: if I have 10,000 samples, how can I attack all of them rather than only a part?

I used BERT to produce adversarial samples for 10,000 inputs, but found that only some of them were changed. It seems the program selects a subset of samples from the dataset rather than all of them.

How can I attack all 10,000 samples?

Thank you for your time!
