octoberchang / x-transformer

X-Transformer: Taming Pretrained Transformers for eXtreme Multi-label Text Classification

License: BSD 3-Clause "New" or "Revised" License

Makefile 0.77% Shell 4.45% Python 41.38% C++ 53.40%
extreme-multi-label-classification transformers text-classification pytorch

x-transformer's Introduction

Taming Pretrained Transformers for XMC problems

This is the README for the experimental code of the following paper:

Taming Pretrained Transformers for eXtreme Multi-label Text Classification

Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, Inderjit Dhillon

KDD 2020

Updates (2021-04-27)

The latest implementation of X-Transformer (faster training with stronger performance) is available at PECOS; feel free to try it out!

Installation

Dependencies via Conda Environment

> conda env create -f environment.yml
> source activate pt1.2_xmlc_transformer
> (pt1.2_xmlc_transformer) pip install -e .
> (pt1.2_xmlc_transformer) python setup.py install --force

Notice: the following examples are executed inside the (pt1.2_xmlc_transformer) conda virtual environment.

Reproduce Evaluation Results in the Paper

We demonstrate how to reproduce the evaluation results in our paper by downloading the raw dataset and pretrained models.

Download Dataset (Eurlex-4K, Wiki10-31K, AmazonCat-13K, Wiki-500K)

Change directory into the ./datasets folder, then download and unzip each dataset:

cd ./datasets
bash download-data.sh Eurlex-4K
bash download-data.sh Wiki10-31K
bash download-data.sh AmazonCat-13K
bash download-data.sh Wiki-500K
cd ../

Each dataset contains the following files

  • label_map.txt: each line is the raw text of the label
  • train_raw_text.txt, test_raw_text.txt: each line is the raw text of the instance
  • X.trn.npz, X.tst.npz: instance's embedding matrix (either sparse TF-IDF or fine-tuned dense embedding)
  • Y.trn.npz, Y.tst.npz: instance-to-label assignment matrix
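
For reference, the snippet below is a minimal sketch (not part of the repo) showing how to load and sanity-check these files with scipy, using the Eurlex-4K paths:

import scipy.sparse as smat

X_trn = smat.load_npz("./datasets/Eurlex-4K/X.trn.npz")    # (N_trn, D) instance feature matrix
Y_trn = smat.load_npz("./datasets/Eurlex-4K/Y.trn.npz")    # (N_trn, L) instance-to-label matrix
labels = open("./datasets/Eurlex-4K/label_map.txt", encoding="utf-8").read().splitlines()

print(X_trn.shape, Y_trn.shape, len(labels))               # Y_trn.shape[1] should equal len(labels)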

Download Pretrained Models (processed data, indexing codes, fine-tuned Transformer models)

Change directory into the ./pretrained_models folder, then download and unzip the models for each dataset:

cd ./pretrained_models
bash download-models.sh Eurlex-4K
bash download-models.sh Wiki10-31K
bash download-models.sh AmazonCat-13K
bash download-models.sh Wiki-500K
cd ../

Each folder has the following structure:

  • proc_data: a sub-folder containing: X.{trn|tst}.{model}.128.pkl, C.{label-emb}.npz, L.{label-emb}.npz
  • pifa-tfidf-s0: a sub-folder containing indexer and matcher
  • pifa-neural-s0: a sub-folder containing indexer and matcher
  • text-emb-s0: a sub-folder containing indexer and matcher

Evaluate Linear Models

Given the provided indexing codes (label-to-cluster assignments), train/predict linear models, and evaluate with Precision/Recall@k:

bash eval_linear.sh ${DATASET} ${VERSION}
  • DATASET: the dataset name such as Eurlex-4K, Wiki10-31K, AmazonCat-13K, or Wiki-500K.
  • VERSION: v0=sparse TF-IDF features. v1=sparse TF-IDF features concatenated with dense fine-tuned XLNet embeddings.

The evaluation results should be located at ./results_linear/${DATASET}.${VERSION}.txt
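
For reference, Precision@k can also be recomputed directly from a prediction matrix; the sketch below mirrors the metric definition rather than the repo's evaluation code, and the random scores are only a placeholder for real predictions:

import numpy as np
import scipy.sparse as smat

def precision_at_k(Y_true, scores, k=5):
    # Average fraction of the k highest-scored labels per instance that are true labels.
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = np.take_along_axis(Y_true, topk, axis=1)
    return hits.mean()

Y_tst = smat.load_npz("./datasets/Eurlex-4K/Y.tst.npz").toarray()   # binary ground-truth labels
scores = np.random.rand(*Y_tst.shape)                               # placeholder prediction scores
print(precision_at_k(Y_tst, scores, k=5))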

Evaluate Fine-tuned X-Transformer Models

Given the provided indexing codes (label-to-cluster assignments) and the fine-tuned Transformer models, train/predict the ranker of the X-Transformer framework, and evaluate with Precision/Recall@k:

bash eval_transformer.sh ${DATASET}
  • DATASET: the dataset name such as Eurlex-4K, Wiki10-31K, AmazonCat-13K, or Wiki-500K.

The evaluation results should be located at ./results_transformer/${DATASET}.final.txt

Running X-Transformer on customized datasets

The X-Transformer framework consists of 9 configurations (3 label embeddings × 3 model types). For simplicity, we show 1 of the 9 here, using LABEL_EMB=pifa-tfidf and MODEL_TYPE=bert.

We will use Eurlex-4K as an example. In the ./datasets/Eurlex-4K folder, we assume the following files are provided:

  • X.trn.npz: the instance TF-IDF feature matrix for the train set. The data type is scipy.sparse.csr_matrix of size (N_trn, D_tfidf), where N_trn is the number of train instances and D_tfidf is the number of features.
  • X.tst.npz: the instance TF-IDF feature matrix for the test set. The data type is scipy.sparse.csr_matrix of size (N_tst, D_tfidf), where N_tst is the number of test instances and D_tfidf is the number of features.
  • Y.trn.npz: the instance-to-label matrix for the train set. The data type is scipy.sparse.csr_matrix of size (N_trn, L), where N_trn is the number of train instances and L is the number of labels.
  • Y.tst.npz: the instance-to-label matrix for the test set. The data type is scipy.sparse.csr_matrix of size (N_tst, L), where N_tst is the number of test instances and L is the number of labels.
  • train_raw_texts.txt: The raw text of the train set.
  • test_raw_texts.txt: The raw text of the test set.
  • label_map.txt: the label's text description.
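
If you only have raw texts and per-instance label lists, the sketch below is one hedged way (not the repo's own preprocessing) to produce the expected .npz inputs with scikit-learn; train_labels.txt is a hypothetical file with space-separated integer label indices per line:

import scipy.sparse as smat
from sklearn.feature_extraction.text import TfidfVectorizer

trn_texts = open("train_raw_texts.txt", encoding="utf-8").read().splitlines()
tst_texts = open("test_raw_texts.txt", encoding="utf-8").read().splitlines()
trn_labels = [[int(l) for l in line.split()] for line in open("train_labels.txt", encoding="utf-8")]

vectorizer = TfidfVectorizer()
X_trn = vectorizer.fit_transform(trn_texts)            # (N_trn, D_tfidf), fit on train only
X_tst = vectorizer.transform(tst_texts)                # (N_tst, D_tfidf)

# Build the binary instance-to-label matrix Y.trn.npz from the label index lists.
num_labels = max(max(ls) for ls in trn_labels) + 1
rows = [i for i, ls in enumerate(trn_labels) for _ in ls]
cols = [l for ls in trn_labels for l in ls]
Y_trn = smat.csr_matrix(([1.0] * len(rows), (rows, cols)), shape=(len(trn_texts), num_labels))

smat.save_npz("X.trn.npz", X_trn)
smat.save_npz("X.tst.npz", X_tst)
smat.save_npz("Y.trn.npz", Y_trn)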

Given those input files, the pipeline can be divided into three stages: Indexer, Matcher, and Ranker.

Indexer

In stage 1, we will do the following

  • (1) construct label embedding
  • (2) perform hierarchical 2-means and output the instance-to-cluster assignment matrix
  • (3) preprocess the input and output for training Transformer models.

TL;DR: we combine and summarize (1), (2), and (3) into two scripts, run_preprocess_label.sh and run_preprocess_feat.sh. A more detailed explanation follows.

(1) To construct label embedding,

OUTPUT_DIR=save_models/${DATASET}
PROC_DATA_DIR=${OUTPUT_DIR}/proc_data
mkdir -p ${PROC_DATA_DIR}
python -m xbert.preprocess \
    --do_label_embedding \
    -i ${DATA_DIR} \
    -o ${PROC_DATA_DIR} \
    -l ${LABEL_EMB} \
    -x ${LABEL_EMB_INST_PATH}
  • DATA_DIR: ./datasets/Eurlex-4K
  • PROC_DATA_DIR: ./save_models/Eurlex-4K/proc_data
  • LABEL_EMB: pifa-tfidf (you can also try text-emb or pifa-neural if you have fine-tuned instance embeddings)
  • LABEL_EMB_INST_PATH: ./datasets/Eurlex-4K/X.trn.npz

This should yield L.${LABEL_EMB}.npz in the PROC_DATA_DIR.
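
For intuition, the pifa-tfidf label embedding aggregates the TF-IDF feature vectors of each label's positive training instances and L2-normalizes the result; the snippet below is a minimal sketch of that idea, not the exact implementation inside xbert.preprocess:

import scipy.sparse as smat
from sklearn.preprocessing import normalize

X_trn = smat.load_npz("./datasets/Eurlex-4K/X.trn.npz")   # (N_trn, D) instance features
Y_trn = smat.load_npz("./datasets/Eurlex-4K/Y.trn.npz")   # (N_trn, L) label assignments

# Sum the feature vectors of each label's positive instances, then L2-normalize per label.
L_emb = normalize(Y_trn.T.dot(X_trn), norm="l2", axis=1)  # (L, D) label embedding matrix
smat.save_npz("L.pifa-tfidf.npz", smat.csr_matrix(L_emb))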

(2) To perform hierarchical 2-means,

SEED_LIST=( 0 1 2 )
for SEED in "${SEED_LIST[@]}"; do
    LABEL_EMB_NAME=${LABEL_EMB}-s${SEED}
    INDEXER_DIR=${OUTPUT_DIR}/${LABEL_EMB_NAME}/indexer
    python -u -m xbert.indexer \
        -i ${PROC_DATA_DIR}/L.${LABEL_EMB}.npz \
        -o ${INDEXER_DIR} --seed ${SEED}
done

This should yield code.npz in the INDEXER_DIR.
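
For intuition, the indexer recursively splits the label embeddings in two until each cluster is small enough. The sketch below illustrates the idea with plain (unbalanced) KMeans from scikit-learn; the repo's indexer uses its own balanced implementation, so treat this only as a conceptual stand-in:

import numpy as np
import scipy.sparse as smat
from sklearn.cluster import KMeans

def hierarchical_2means(emb, idx, max_leaf_size=100, clusters=None):
    # Recursively split the label indices with 2-means until clusters are small enough.
    if clusters is None:
        clusters = []
    if len(idx) <= max_leaf_size:
        clusters.append(idx)
        return clusters
    part = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb[idx])
    if part.min() == part.max():                      # degenerate split: stop recursing
        clusters.append(idx)
        return clusters
    for side in (0, 1):
        hierarchical_2means(emb, idx[part == side], max_leaf_size, clusters)
    return clusters

L_emb = smat.load_npz("save_models/Eurlex-4K/proc_data/L.pifa-tfidf.npz")
clusters = hierarchical_2means(L_emb, np.arange(L_emb.shape[0]))

# Assemble a label-to-cluster assignment matrix of shape (L, K), analogous to code.npz.
rows = np.concatenate(clusters)
cols = np.concatenate([np.full(len(c), k) for k, c in enumerate(clusters)])
code = smat.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(L_emb.shape[0], len(clusters)))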

(3) To preprocess input and output for Transformer models,

SEED=0
LABEL_EMB_NAME=${LABEL_EMB}-s${SEED}
INDEXER_DIR=${OUTPUT_DIR}/${LABEL_EMB_NAME}/indexer
python -u -m xbert.preprocess \
    --do_proc_label \
    -i ${DATA_DIR} \
    -o ${PROC_DATA_DIR} \
    -l ${LABEL_EMB_NAME} \
    -c ${INDEXER_DIR}/code.npz

This should yield the instance-to-cluster matrices C.trn.npz and C.tst.npz in the PROC_DATA_DIR.
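
Conceptually, these instance-to-cluster matrices are the instance-to-label matrix projected through the label-to-cluster assignment; a hedged sketch of the idea, assuming code.npz holds the (L, K) label-to-cluster matrix:

import numpy as np
import scipy.sparse as smat

Y_trn = smat.load_npz("./datasets/Eurlex-4K/Y.trn.npz")                        # (N_trn, L)
code = smat.load_npz("save_models/Eurlex-4K/pifa-tfidf-s0/indexer/code.npz")   # (L, K)

# An instance is assigned to a cluster if at least one of its labels falls in that cluster.
C_trn = (Y_trn.dot(code) > 0).astype(np.float32)                               # (N_trn, K)

Next, preprocess the instance features for the Transformer models: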

OUTPUT_DIR=save_models/${DATASET}
PROC_DATA_DIR=${OUTPUT_DIR}/proc_data
python -u -m xbert.preprocess \
    --do_proc_feat \
    -i ${DATA_DIR} \
    -o ${PROC_DATA_DIR} \
    -m ${MODEL_TYPE} \
    -n ${MODEL_NAME} \
    --max_xseq_len ${MAX_XSEQ_LEN} \
    |& tee ${PROC_DATA_DIR}/log.${MODEL_TYPE}.${MAX_XSEQ_LEN}.txt
  • MODEL_TYPE: bert (or roberta, xlnet)
  • MODEL_NAME: bert-large-cased-whole-word-masking (or roberta-large, xlnet-large-cased)
  • MAX_XSEQ_LEN: maximum number of tokens; we set it to 128

This should yield X.trn.${MODEL_TYPE}.${MAX_XSEQ_LEN}.pkl and X.tst.${MODEL_TYPE}.${MAX_XSEQ_LEN}.pkl in the PROC_DATA_DIR.
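
Under the hood, this step tokenizes each raw text into fixed-length input IDs for the chosen Transformer. A minimal sketch with the Hugging Face tokenizer is shown below; the pinned transformers version in environment.yml may use slightly different tokenizer arguments and output format, so treat this only as an illustration:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased-whole-word-masking")
texts = open("./datasets/Eurlex-4K/train_raw_text.txt", encoding="utf-8").read().splitlines()

# Truncate/pad every instance to MAX_XSEQ_LEN=128 token IDs.
input_ids = [tokenizer.encode(t, max_length=128, truncation=True, padding="max_length")
             for t in texts[:5]]
print(len(input_ids[0]))   # 128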

Matcher

In stage 2, we will do the following

  • (1) train deep Transformer models to map instances to the induced clusters
  • (2) output the predicted cluster scores and fine-tuned instance embeddings

TL;DR: run_transformer_train.sh. A more detailed explanation follows.

(1) Assume we have 8 Nvidia V100 GPUs. To train the models,

MODEL_DIR=${OUTPUT_DIR}/${INDEXER_NAME}/matcher/${MODEL_NAME}
mkdir -p ${MODEL_DIR}
python -m torch.distributed.launch \
    --nproc_per_node 8 xbert/transformer.py \
    -m ${MODEL_TYPE} -n ${MODEL_NAME} --do_train \
    -x_trn ${PROC_DATA_DIR}/X.trn.${MODEL_TYPE}.${MAX_XSEQ_LEN}.pkl \
    -c_trn ${PROC_DATA_DIR}/C.trn.${INDEXER_NAME}.npz \
    -o ${MODEL_DIR} --overwrite_output_dir \
    --per_device_train_batch_size ${PER_DEVICE_TRN_BSZ} \
    --gradient_accumulation_steps ${GRAD_ACCU_STEPS} \
    --max_steps ${MAX_STEPS} \
    --warmup_steps ${WARMUP_STEPS} \
    --learning_rate ${LEARNING_RATE} \
    --logging_steps ${LOGGING_STEPS} \
    |& tee ${MODEL_DIR}/log.txt
  • MODEL_TYPE: bert (or roberta, xlnet)
  • MODEL_NAME: bert-large-cased-whole-word-masking (or roberta-large, xlnet-large-cased)
  • PER_DEVICE_TRN_BSZ: 16 if using Nvidia V100 (or set to 8 if using Nvidia 2080Ti)
  • GRAD_ACCU_STEPS: 2 if using Nvidia V100 (or set to 4 if using Nvidia 2080Ti)
  • MAX_STEPS: set to 1,000 for Eurlex-4K; adjust depending on your dataset
  • WARMUP_STEPS: set to 100 for Eurlex-4K; adjust depending on your dataset
  • LEARNING_RATE: set to 5e-5 for Eurlex-4K; adjust depending on your dataset
  • LOGGING_STEPS: set to 100

(2) To generate predictions and instance embeddings,

GPID=0,1,2,3,4,5,6,7
PER_DEVICE_VAL_BSZ=32
CUDA_VISIBLE_DEVICES=${GPID} python -u xbert/transformer.py \
    -m ${MODEL_TYPE} -n ${MODEL_NAME} \
    --do_eval -o ${MODEL_DIR} \
    -x_trn ${PROC_DATA_DIR}/X.trn.${MODEL_TYPE}.${MAX_XSEQ_LEN}.pkl \
    -c_trn ${PROC_DATA_DIR}/C.trn.${INDEXER_NAME}.npz \
    -x_tst ${PROC_DATA_DIR}/X.tst.${MODEL_TYPE}.${MAX_XSEQ_LEN}.pkl \
    -c_tst ${PROC_DATA_DIR}/C.tst.${INDEXER_NAME}.npz \
    --per_device_eval_batch_size ${PER_DEVICE_VAL_BSZ}

This should yield the following output in the MODEL_DIR

  • C_trn_pred.npz and C_tst_pred.npz: model-predicted cluster scores
  • trn_embeddings.npy and tst_embeddings.npy: fine-tuned instance embeddings
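
A quick sanity check of these outputs (a sketch; the path assumes the Eurlex-4K, pifa-tfidf-s0, bert-large setup used above):

import numpy as np
import scipy.sparse as smat

MODEL_DIR = "save_models/Eurlex-4K/pifa-tfidf-s0/matcher/bert-large-cased-whole-word-masking"
C_tst_pred = smat.load_npz(MODEL_DIR + "/C_tst_pred.npz")   # (N_tst, K) predicted cluster scores
tst_emb = np.load(MODEL_DIR + "/tst_embeddings.npy")        # (N_tst, H) fine-tuned instance embeddings

print(C_tst_pred.shape, tst_emb.shape)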

Ranker

In stage 3, we will do the following

  • (1) train linear rankers to map instances and predicted cluster scores to label scores
  • (2) output top-k predicted labels

TL;DR: run_transformer_predict.sh. A more detailed explanation follows.

(1) To train linear rankers,

LABEL_NAME=pifa-tfidf-s0
MODEL_NAME=bert-large-cased-whole-word-masking
OUTPUT_DIR=save_models/${DATASET}/${LABEL_NAME}
INDEXER_DIR=${OUTPUT_DIR}/indexer
MATCHER_DIR=${OUTPUT_DIR}/matcher/${MODEL_NAME}
RANKER_DIR=${OUTPUT_DIR}/ranker/${MODEL_NAME}
mkdir -p ${RANKER_DIR}
python -m xbert.ranker train \
    -x1 ${DATA_DIR}/X.trn.npz \
    -x2 ${MATCHER_DIR}/trn_embeddings.npy \
    -y ${DATA_DIR}/Y.trn.npz \
    -z ${MATCHER_DIR}/C_trn_pred.npz \
    -c ${INDEXER_DIR}/code.npz \
    -o ${RANKER_DIR} -t 0.01 \
    -f 0 --mode ranker
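
Conceptually, the -x1/-x2 inputs combine the sparse TF-IDF features with the dense fine-tuned embeddings into a single instance representation. Below is a hedged sketch of that feature concatenation; xbert.ranker handles its own normalization and training details, so this is only illustrative:

import numpy as np
import scipy.sparse as smat
from sklearn.preprocessing import normalize

MATCHER_DIR = "save_models/Eurlex-4K/pifa-tfidf-s0/matcher/bert-large-cased-whole-word-masking"
X1 = smat.load_npz("./datasets/Eurlex-4K/X.trn.npz")        # sparse TF-IDF features (-x1)
X2 = np.load(MATCHER_DIR + "/trn_embeddings.npy")           # dense fine-tuned embeddings (-x2)

# Row-normalize each block and concatenate horizontally into one feature matrix.
X_cat = smat.hstack([normalize(X1), normalize(smat.csr_matrix(X2))]).tocsr()
print(X_cat.shape)                                          # (N_trn, D_tfidf + H)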

(2) To predict the final top-k labels,

PRED_NPZ_PATH=${RANKER_DIR}/tst.pred.npz
python -m xbert.ranker predict \
    -m ${RANKER_DIR} -o ${PRED_NPZ_PATH} \
    -x1 ${DATA_DIR}/X.tst.npz \
    -x2 ${MATCHER_DIR}/tst_embeddings.npy \
    -y ${DATA_DIR}/Y.tst.npz \
    -z ${MATCHER_DIR}/C_tst_pred.npz \
    -f 0 -t noop

This should yield the predicted top-k labels in tst.pred.npz, at the path specified by PRED_NPZ_PATH.
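
To inspect the predictions, you can map the top-k label indices back to their text descriptions; a minimal sketch, assuming tst.pred.npz stores a sparse matrix whose nonzeros are the top-k label scores per test instance:

import numpy as np
import scipy.sparse as smat

PRED_NPZ_PATH = "save_models/Eurlex-4K/pifa-tfidf-s0/ranker/bert-large-cased-whole-word-masking/tst.pred.npz"
pred = smat.load_npz(PRED_NPZ_PATH)
labels = open("./datasets/Eurlex-4K/label_map.txt", encoding="utf-8").read().splitlines()

row = pred.getrow(0).tocoo()                      # nonzeros = top-k label scores for test instance 0
topk = row.col[np.argsort(-row.data)]             # label indices sorted by descending score
print([labels[j] for j in topk])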

Acknowledgements

Some portions of this repo are borrowed from the following repos:


x-transformer's Issues

Neural label embeddings

Hi,

I am a bit unsure about how you created the neural label embeddings using XLNet or RoBERTa, i.e., how are the files X.trn.finetune.xlnet.npy and Y.trn.finetune.xlnet.npy generated? I tried the pifa-neural option in run_preprocess_label.sh but I get the error: FileNotFoundError: [Errno 2] No such file or directory: 'X.trn.finetune.xlnet.npy'

Any idea what I am missing?

Thanks!

How to use it on a Chinese dataset

How can this be applied to a Chinese dataset?

about the dataset

Hi~ Something goes wrong when I run the code to get the datasets. Would you please give me a link to the datasets or send them to me as a file? Thank you.

multi-label classification / paperswithcode dataset

Hi guys,

Hope you are all well !

I was wondering if X-Transformer can handle multi-label classification with 1560 labels.

More precisely, I would like to apply it to the paperswithcode dataset, where labels are called tasks.

Refs:

Thanks for any insights or inputs on that.

Cheers,
X

How to generate the file X.trn.npz

Hi, I'm wondering how you generate the file X.trn.npz, i.e. the instance TF-IDF feature matrix for the train set? Could you share your code for this process? Or is there any public code which has the same function?

Issue with training stage

Hi, when trying to run the pipeline on new data, I keep getting this error:

File "xbert/transformer.py", line 561, in train
labels = np.array(C_trn[inst_idx].toarray())
...
IndexError: index (17407) out of range

In other words, the C_trn array is being sliced at an index that is too large. I'm wondering if there is a way to fix this? I've tried with Eurlex-4k and it runs fine. I did notice the following: Num examples is listed at a number not equal to the size of my C_trn array, whereas in running Eurlex-4k I found that this Num examples (printed during training run) is equal to the C_trn row dimension. Thanks!

Assertion error in evaluation assert tY.shape == pY.shape fails

Hi,

I have been trying to run the classifier on a couple of custom datasets I have, with 300 and 700 classes. But I keep running into this assertion error; any idea what goes wrong?

08/03/2020 01:39:52 - INFO - transformers.modeling_utils - loading weights file save_models/dbpedia_summaries/pifa-tfidf-s0/matcher/roberta-large/pytorch_model.bin
08/03/2020 01:40:23 - INFO - main - ***** Running evaluation *****
08/03/2020 01:40:23 - INFO - main - Num examples = 2379160
08/03/2020 01:40:23 - INFO - main - Batch size = 224
Traceback (most recent call last):
File "xbert/transformer.py", line 678, in
main()
File "xbert/transformer.py", line 653, in main
trn_loss, trn_metrics, C_trn_pred, trn_embeddings = matcher.predict(args, X_trn, C_trn, topk=args.only_topk, get_hidden=True)
File "xbert/transformer.py", line 450, in predict
eval_metrics = rf_linear.Metrics.generate(C_eval_true, C_eval_pred, topk=args.only_topk)
File "pt1.2_xmlc_transformer/lib/python3.7/site-packages/xbert-0.1-py3.7-linux-x86_64.egg/xbert/rf_linear.py", line 205, in generate
assert tY.shape == pY.shape, "tY.shape = {}, pY.shape = {}".format(tY.shape, pY.shape)
AssertionError: tY.shape = (2379312, 8), pY.shape = (2379160, 8)

Thanks.

Class Weight

Hi,

Is there any way to pass class weights to this model to get better results on an imbalanced dataset?

Thanks

Clarification on different configurations

In the paper (in Table 5), "all" is mentioned for configurations no. 7 and no. 9. I am not clear about the embeddings and model types used in these configurations.

about the pretrained models

Hi~I got some problems when trying to download the pretrained models. Could you give me a link of these pretrained models or send them to me as a file? Thank you vert much!

What is the non-linear scoring function sigma to obtain final ranking?

I realised you use a non-linear function called sigma to compute the final ranking from the matcher score and the ranker (OVA classifier) score. I didn't quite get what this sigma function is. I tried to search for it in the code but couldn't find it either. Can you please define what it is?

No space left on device

In the Matcher stage, I got the following error:

08/07/2021 02:16:27 - INFO - main - ***** Running training *****
08/07/2021 02:16:27 - INFO - main - Num examples = 15449
08/07/2021 02:16:27 - INFO - main - Num Epochs = 3
08/07/2021 02:16:27 - INFO - main - Instantaneous batch size per GPU = 8
08/07/2021 02:16:27 - INFO - main - Total train batch size (w. parallel, distributed & accumulation) = 32
08/07/2021 02:16:27 - INFO - main - Gradient Accumulation steps = 4
08/07/2021 02:16:27 - INFO - main - Total optimization steps = 1000
08/07/2021 02:18:56 - INFO - main - | [ 1/ 3][ 100/ 1000] | 399/1932 batches | ms/batch 4.5376 | train_loss 5.790506e-01 | lr 5.000000e-05
08/07/2021 02:21:27 - INFO - main - | [ 1/ 3][ 200/ 1000] | 799/1932 batches | ms/batch 4.6409 | train_loss 2.755803e-01 | lr 4.444444e-05
08/07/2021 02:23:57 - INFO - main - | [ 1/ 3][ 300/ 1000] | 1199/1932 batches | ms/batch 4.5217 | train_loss 2.105729e-01 | lr 3.888889e-05
08/07/2021 02:26:27 - INFO - main - | [ 1/ 3][ 400/ 1000] | 1599/1932 batches | ms/batch 4.5165 | train_loss 1.729266e-01 | lr 3.333333e-05
08/07/2021 02:28:58 - INFO - main - | [ 2/ 3][ 500/ 1000] | 67/1932 batches | ms/batch 4.5452 | train_loss 1.577764e-01 | lr 2.777778e-05
08/07/2021 02:31:29 - INFO - main - | [ 2/ 3][ 600/ 1000] | 467/1932 batches | ms/batch 4.5317 | train_loss 1.491586e-01 | lr 2.222222e-05
08/07/2021 02:33:59 - INFO - main - | [ 2/ 3][ 700/ 1000] | 867/1932 batches | ms/batch 4.4462 | train_loss 1.405306e-01 | lr 1.666667e-05
08/07/2021 02:36:30 - INFO - main - | [ 2/ 3][ 800/ 1000] | 1267/1932 batches | ms/batch 4.5738 | train_loss 1.316148e-01 | lr 1.111111e-05
08/07/2021 02:39:00 - INFO - main - | [ 2/ 3][ 900/ 1000] | 1667/1932 batches | ms/batch 4.5304 | train_loss 1.185597e-01 | lr 5.555556e-06
08/07/2021 02:41:31 - INFO - main - | [ 3/ 3][ 1000/ 1000] | 135/1932 batches | ms/batch 4.4576 | train_loss 1.158848e-01 | lr 0.000000e+00
08/07/2021 02:41:33 - INFO - transformers.configuration_utils - Configuration saved in ./save_models/Eurlex-4K/pifa-tfidf-s0/matcher/bert-large-cased-whole-word-masking/config.json
Traceback (most recent call last):
File "xbert/transformer.py", line 678, in
main()
File "xbert/transformer.py", line 626, in main
matcher.save_model(args)
File "xbert/transformer.py", line 335, in save_model
model_to_save.save_pretrained(args.output_dir)
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/site-packages/transformers/modeling_utils.py", line 249, in save_pretrained
torch.save(model_to_save.state_dict(), output_model_file)
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/site-packages/torch/serialization.py", line 224, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/site-packages/torch/serialization.py", line 149, in _with_file_like
return body(f)
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/site-packages/torch/serialization.py", line 224, in
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/site-packages/torch/serialization.py", line 302, in _save
serialized_storages[key]._write_file(f, _should_read_directly(f))
RuntimeError: write(): fd 39 failed with No space left on device
Traceback (most recent call last):
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/site-packages/torch/distributed/launch.py", line 246, in
main()
File "/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/lib/python3.7/site-packages/torch/distributed/launch.py", line 242, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/khalid/anaconda3/envs/pt1.2_xmlc_transformer/bin/python', '-u', 'xbert/transformer.py', '--local_rank=0', '-m', 'bert', '-n', 'bert-large-cased-whole-word-masking', '--do_train', '-x_trn', './save_models/Eurlex-4K/proc_data/X.trn.bert.128.pkl', '-c_trn', './save_models/Eurlex-4K/proc_data/C.trn.pifa-tfidf-s0.npz', '-o', './save_models/Eurlex-4K/pifa-tfidf-s0/matcher/bert-large-cased-whole-word-masking', '--overwrite_output_dir', '--per_device_train_batch_size', '8', '--gradient_accumulation_steps', '4', '--max_steps', '1000', '--warmup_steps', '100', '--learning_rate', '5e-5', '--logging_steps', '100']' returned non-zero exit status 1.
