randolphvi / hierarchical-multi-label-text-classification

The code of CIKM'19 paper《Hierarchical Multi-label Text Classification: An Attention-based Recurrent Network Approach》

License: Apache License 2.0

hierarchical-multilabel text-classification tensorflow python3 hierarchy-structure attention-mechanism

hierarchical-multi-label-text-classification's Introduction

Hierarchical Multi-Label Text Classification


This repository contains my research project, which was accepted by CIKM'19. The paper has been published.

The main objective of the project is to solve the hierarchical multi-label text classification (HMTC) problem. Unlike flat multi-label text classification, HMTC assigns each instance (object) to multiple categories that are stored in a hierarchical structure; it is a fundamental but challenging task in numerous applications.

Requirements

  • Python 3.6
  • Tensorflow 1.15.0
  • Tensorboard 1.15.0
  • Sklearn 0.19.1
  • Numpy 1.16.2
  • Gensim 3.8.3
  • Tqdm 4.49.0

Introduction

Many real-world applications organize data in a hierarchical structure, where classes are specialized into subclasses or grouped into superclasses. For example, an electronic document (e.g., a web page, digital library entry, patent, or e-mail) is associated with multiple categories, and all these categories are stored hierarchically in a tree or Directed Acyclic Graph (DAG).

The hierarchy provides an elegant way to show the characteristics of the data and a multi-dimensional perspective for tackling the classification problem.

The figure shows an example of predefined labels in hierarchical multi-label classification of patent documents.

  • Documents are shown as colored rectangles, labels as rounded rectangles.
  • Circles in the rounded rectangles indicate that the corresponding document has been assigned the label.
  • Arrows indicate a hierarchical structure between labels.

Project

The project structure is shown below:

.
├── HARNN
│   ├── train.py
│   ├── layers.py
│   ├── ham.py
│   ├── test.py
│   └── visualization.py
├── utils
│   ├── checkmate.py
│   ├── param_parser.py
│   └── data_helpers.py
├── data
│   ├── word2vec_100.model.* [Need Download]
│   ├── Test_sample.json
│   ├── Train_sample.json
│   └── Validation_sample.json
├── LICENSE
├── README.md
└── requirements.txt

Data

You can download the Patent Dataset used in the paper. The Word2vec model file (dim=100) has also been uploaded. Make sure both are placed under the /data folder.

⚠️ As for the Education Dataset, it may be subject to copyright protection under Chinese law, so detailed information is not provided.

:octocat: Text Segment

  1. You can use the nltk package if you are dealing with English text data.

  2. You can use the jieba package if you are dealing with Chinese text data (a tokenization sketch covering both follows below).
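The sketch below is minimal and not part of this repo; it assumes nltk's punkt and stopwords corpora have already been downloaded:

import jieba
import nltk
from nltk.corpus import stopwords

def tokenize_english(text):
    # Lowercase, keep alphabetic tokens, drop English stopwords.
    stop_words = set(stopwords.words("english"))
    return [w.lower() for w in nltk.word_tokenize(text)
            if w.isalpha() and w.lower() not in stop_words]

def tokenize_chinese(text):
    # jieba handles Chinese word segmentation; drop whitespace tokens.
    return [w for w in jieba.lcut(text) if w.strip()]

print(tokenize_english("A rear sight for a firearm has a peephole device."))
print(tokenize_chinese("分层多标签文本分类"))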

:octocat: Data Format

See the data format in the /data folder, which includes the sample data files. For example:

{"id": "3930316", 
"title": ["sighting", "firearm"], 
"abstract": ["rear", "sight", "firearm", "ha", "peephole", "device", "formed", "hollow", "tube", "end", ...], 
"section": [5], "subsection": [104], "group": [512], "subgroup": [6535], 
"labels": [5, 113, 649, 7333]}
  • id: the document ID.
  • title & abstract: the segmented words (after stopword removal).
  • section / subsection / group / subgroup: the first- / second- / third- / fourth-level category indices.
  • labels: all category indices combined, each with its level's index offset added (explained below).
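For a quick look at the sample files, here is a minimal loading sketch (assuming, as the samples suggest, one JSON object per line):

import json

with open('../data/Train_sample.json', 'r', encoding='utf-8') as f:
    for line in f:
        record = json.loads(line)
        print(record['id'], record['section'], record['labels'])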

:octocat: How to construct the data?

Take the sample of the Patent Dataset as an example; here is how to construct the label index. For the Patent dataset, the number of classes at each level is [9, 128, 661, 8364].

Step 1: The first level of the Patent dataset has 9 classes. Index these 9 classes first, like:

{"Chemistry": 0, "Physics": 1, "Electricity": 2, "XXX": 3, ..., "XXX": 8}

Step 2: Next, index the second level (128 classes in total), like:

{"Inorganic Chemistry": 0, "Organic Chemistry": 1, "Nuclear Physics": 2, "XXX": 3, ..., "XXX": 127}

Step 3: Then index the third level (661 classes in total), like:

{"Steroids": 0, "Peptides": 1, "Heterocyclic Compounds": 2, ..., "XXX": 660}

Step 4: If you have a fourth or even deeper level, index it in the same way.

Step 5: Now suppose you have the record mentioned before (id: 3930316):

{"id": "3930316", 
"title": ["sighting", "firearm"], 
"abstract": ["rear", "sight", "firearm", "ha", "peephole", "device", "formed", "hollow", "tube", "end", ...], 
"section": [5], "subsection": [104], "group": [512], "subgroup": [6535],
"labels": [5, 104+9, 512+9+128, 6535+9+128+661]}

Thus, the record should be constructed as follows:

{"id": "3930316", 
"title": ["sighting", "firearm"], 
"abstract": ["rear", "sight", "firearm", "ha", "peephole", "device", "formed", "hollow", "tube", "end", ...], 
"section": [5], "subsection": [104], "group": [512], "subgroup": [6535], 
"labels": [5, 113, 649, 7333]}

This repository can be used with other (text classification) datasets in two ways:

  1. Convert your dataset into the same format as the sample.
  2. Modify the data preprocessing code in data_helpers.py.

Which way is better depends on your data and task.

:octocat: Pre-trained Word Vectors

You can pre-train your word vectors (on your own corpus) in many ways:

  • Use the gensim package to pre-train them (a minimal sketch follows below).
  • Use the GloVe tools to pre-train them.
  • You can even use BERT to pre-train them.
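The gensim sketch below uses a toy corpus and saves the model under the file name the project expects; note that gensim 3.x (as in the requirements) uses the size= argument, which gensim 4.x renamed to vector_size=:

from gensim.models import Word2Vec

sentences = [["rear", "sight", "firearm"], ["hollow", "tube", "end"]]  # toy corpus
model = Word2Vec(sentences, size=100, window=5, min_count=1, workers=4)
model.save("../data/word2vec_100.model")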

Usage

See Usage.

Network Structure

Reference

If you want to follow the paper or use the code, please cite the following work:

@inproceedings{huang2019hierarchical,
  author    = {Wei Huang and
               Enhong Chen and
               Qi Liu and
               Yuying Chen and
               Zai Huang and
               Yang Liu and
               Zhou Zhao and
               Dan Zhang and
               Shijin Wang},
  title     = {Hierarchical Multi-label Text Classification: An Attention-based Recurrent Network Approach},
  booktitle = {Proceedings of the 28th {ACM} International Conference on Information and Knowledge Management, {CIKM} 2019, Beijing, China, November 3-7, 2019},
  pages     = {1051--1060},
  year      = {2019},
}

About Me

黄威,Randolph

SCU SE Bachelor; USTC CS Ph.D.

Email: [email protected]

My Blog: randolph.pro

LinkedIn: randolph's linkedin


hierarchical-multi-label-text-classification's Issues

Matching the test set to the predicted values

Hello, how do the labels predicted in the resulting predictions.json file correspond to the test set? And how can I view the true labels of the test set?

Raw Patent Dataset

Is it possible for you to provide a link to the raw patent dataset so that we can analyze the inner relationships within the hierarchical labels? We cannot extract a clear hierarchical label relationship from your processed data.

Thanks!

Why softmax is applied in the _local_layer function?

visual = tf.multiply(input_att_weight, tf.expand_dims(scores, -1))
visual = tf.nn.softmax(visual)
visual = tf.reduce_mean(visual, axis=1, name="visual")

May I ask why softmax is applied here? Based on my understanding of your paper, the calculated 'score' should be multiplied directly with 'W_att^h' and then averaged (mean) to generate 'visual' for the next layer. But in your code, an additional softmax is applied before the average (mean).

Please correct me if my understanding is wrong. Thanks!
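For what it's worth, a small NumPy check (a stand-in, not the repo's tensors) confirms that applying softmax before the mean generally gives different values than the mean alone, since softmax rescales each row to sum to 1:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

visual = np.array([[0.2, 0.5, 0.3],
                   [1.0, 2.0, 3.0]])
print(visual.mean(axis=0))            # mean across rows only
print(softmax(visual).mean(axis=0))   # softmax over each row, then mean across rows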

Prediction on the test set

Hello, how can I run prediction when the test set has no ground-truth labels? When I supplied dummy labels, the final predictions were all just the dummy label values. Thanks.

Using another pretrained word2vec

Hi,

I'm trying to experiment with your code on another dataset. I wanted to add a pretrained word2vec model. I have it in the form of a .txt file with a word in each row and the corresponding vector next to it (see below). How can I smoothly include that in your code?

nazwa -1.232841 -0.047110 -4.865466 ...
wrzesień -0.958457 -3.139306 -1.263893 ...

I put it here:

def load_data_and_labels(data_file, num_classes_list, total_classes, embedding_size, data_aug_flag):
    """
    Load research data from files, splits the data into words and generates labels.
    Return split sentences, labels and the max sentence length of the research data.

    Args:
        data_file: The research data
        num_classes_list: <list> The number of classes
        total_classes: The total number of classes
        embedding_size: The embedding size
        data_aug_flag: The flag of data augmented
    Returns:
        The class Data
    """
    word2vec_file = '../data/word2vec_' + str(embedding_size) + '.model'

    # Load word2vec model file
    if not os.path.isfile(word2vec_file):
        #######################################
        TEXT_DIR = '../data/wiki-lemmas-all-100-cbow-hs.txt'
        #######################################
        create_word2vec_model(embedding_size, TEXT_DIR)

But I think it's wrong. (Might be related to the other issue.)
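One hedged possibility (not the repo's API): load the .txt file with gensim instead of retraining. The described format lacks the word2vec header line, so it needs converting first (glove2word2vec exists in gensim 3.x; gensim 4.x can pass no_header=True instead):

from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec

# Prepend the "<vocab_size> <dim>" header that load_word2vec_format expects.
glove2word2vec('../data/wiki-lemmas-all-100-cbow-hs.txt',
               '../data/wiki-lemmas-all-100-cbow-hs.w2v.txt')
kv = KeyedVectors.load_word2vec_format(
    '../data/wiki-lemmas-all-100-cbow-hs.w2v.txt', binary=False)
print(kv['nazwa'][:5])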

Runtime error

I tried to run the code:

python train_harnn.py

and got the following error:

☛ Train or Restore?(T/R): t
Traceback (most recent call last):
  File "train_harnn.py", line 80, in <module>
    for attr in sorted(FLAGS.__dict__['__wrapped'])], dilim]))
  File "train_harnn.py", line 80, in <listcomp>
    for attr in sorted(FLAGS.__dict__['__wrapped'])], dilim]))
TypeError: unsupported format string passed to NoneType.__format__

Does the code need any command-line arguments?
What could be the possible reason for this error?

模型支持label标注到父类目的吗?

非常棒的工作,赞!
这里请教有一个问题,训练集(测试集)里,是否支持样本的label只标注到父节点的样本?比如类目是四级,但是存在部分样本只标注到了二级。

Running on new dataset

Hey,

first of all, sorry for the mess (I opened and closed an issue a couple of times).

Thanks for the repo; it looks good. I was trying to run it on my dataset with a newly pretrained word2vec. Everything runs smoothly until the actual training, where I get the following error. Any idea what the reason might be and what the solution is?

InvalidArgumentError: Expected size[1] in [0, 3], but got 5
[[node loss/Slice_15 (defined at /patent/patent_HMLTC/HARNN/text_harnn.py:165) ]]

Caused by op 'loss/Slice_15', defined at:
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py", line 505, in start
self.io_loop.start()
File "/usr/local/lib/python3.6/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.6/asyncio/base_events.py", line 438, in run_forever
self._run_once()
File "/usr/lib/python3.6/asyncio/base_events.py", line 1451, in _run_once
handle._run()
File "/usr/lib/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 758, in _run_callback
ret = callback()
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 1233, in inner
self.run()
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 1147, in run
yielded = self.gen.send(value)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 370, in dispatch_queue
yield self.process_one()
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 346, in wrapper
runner = Runner(result, future, yielded)
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 1080, in init
self.run()
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 1147, in run
yielded = self.gen.send(value)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 357, in process_one
yield gen.maybe_future(dispatch(*args))
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 534, in execute_request
user_expressions, allow_stdin,
File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2819, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2845, in _run_cell
return runner(coro)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
coro.send(None)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 3020, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 3185, in run_ast_nodes
if (yield from self.run_code(code, result)):
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 52, in
pretrained_embedding=pretrained_word2vec_matrix)
File "/patent/patent_HMLTC/HARNN/text_harnn.py", line 292, in init
violation_losses = _hierarchical_violation(self.first_scores, self.second_scores)
File "/patent/patent_HMLTC/HARNN/text_harnn.py", line 165, in _hierarchical_violation
current_child_scores = tf.slice(child_scores, [0, left_index], [batch_size, step])
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 707, in slice
return gen_array_ops.slice(input, begin, size, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 8236, in _slice
"Slice", input=input, begin=begin, size=size, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Expected size[1] in [0, 3], but got 5
[[node loss/Slice_15 (defined at /patent/patent_HMLTC/HARNN/text_harnn.py:165) ]]

Attention visuals tensor has exactly the same values

Hello sir, I appreciate your hard work. Now, straight to the issue.

The problem is that when I run the visualization.py file, only the attention scores associated with the first hierarchy level are different from 0. I checked the tensors of the other hierarchy levels, and each of them has exactly the same values, which is why those attention scores are equal to zero.
This problem persists across different models, trained with different parameters and on different data. Did I miss something?

Size of the validation set

Hello, for the patent dataset, roughly how large should the validation set be relative to the training set?

Why is a highway layer used in producing the global prediction?

# Fully Connected Layer
self.fc_out = _fc_layer(self.ham_out)
# Highway Layer
with tf.name_scope("highway"):
    self.highway = _highway_layer(self.fc_out, self.fc_out.get_shape()[1], num_layers=1, bias=0)

In the paper, to get the global predictions, an average-pooling operation is used to reduce the dimension, while in the code a fully connected layer is applied first and then a highway layer is used to enhance the representation. I'm a little confused by the code here; can you explain why?

Which settings need to change to train on a GPU?

Thanks for your work. I have successfully run the code on my machine, but it seems to run only on the CPU. I already set gpu_options.allow_growth=True, and I can see some GPU memory being used, but training still appears to happen on the CPU. Do I need to adjust any other configuration? Many thanks.
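A hedged diagnostic sketch (not from this repo) for confirming whether TensorFlow 1.x actually sees the GPU and places ops on it:

import tensorflow as tf

print(tf.test.is_gpu_available())  # False means ops will fall back to the CPU

config = tf.ConfigProto(log_device_placement=True)  # logs each op's device
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
    a = tf.random_normal([1024, 1024])
    sess.run(tf.matmul(a, a))

If is_gpu_available() returns False, the usual cause is a CPU-only TensorFlow build or mismatched CUDA/cuDNN versions rather than a training flag.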

Questions about the patent dataset

Hello,
In the patent dataset, what is the correspondence between the dataset's labels and the original patent data? The USPTO website gives the following four patent classification schemes (taking the patent with "id": "5973818" in the Test.json file as an example), but none of them seems to match the labels.

  1. Current U.S. Class: | 359/265; 351/44; 359/267; 359/275
  2. Current CPC Class: | G02F 1/163 (20130101)
  3. Current International Class: | G02F 1/01 (20060101); G02F 1/163 (20060101); G02F 001/15 (); G02F001/153 (); G02F 001/163 ()
  4. Field of Search: | ;359/265,267,275 ;345/239,105 ;351/44,45

Thank you for your answer!

Some problems for running the codes

Thanks for your great work! However, I ran into some problems when running the code. All the settings are kept the same as on GitHub. The errors are as below:
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(32, 356), b.shape=(356, 1024), m=32, n=1024, k=356
[[node Bi-lstm/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul (defined at /.conda/envs/tensorflow1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
[[zero_fraction_16/counts_to_fraction/truediv/_713]]
(1) Internal: Blas GEMM launch failed : a.shape=(32, 356), b.shape=(356, 1024), m=32, n=1024, k=356
[[node Bi-lstm/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul (defined at /.conda/envs/tensorflow1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
0 successful operations.
1 derived errors ignored.

A question about the data

Hello, after reading your code I have two questions:
1. What exactly does the title field in the data mean? When using my own data, can I leave out the title?
2. If I use my own data, do I need to prepare two data files, a content.txt and a JSON file in the required format that I write myself? It seems there is no script here for generating the JSON file. Thanks!

Questions about the raw patent dataset

Hello! Your paper was very inspiring, and I would like to explore patent classification as well. Looking at the files under data, they seem to contain only a few samples. Would it be convenient for you to provide the original training and test datasets? Do the original datasets contain author/company information? Thank you!

Evaluation Method

Hi @RandolphVI,

correct me if I'm wrong, but it seems like the evaluation method here includes predictions at all levels, not only at the final one. Thus I may have a classifier that does not provide any correct prediction at the 4th level (which is the only one I'm interested in) while at the same time showing a validation accuracy of 75% (because it gives a perfect match at levels 1, 2 and 3). Is that the case?

Regards,
White Rabit

Thanks for the excellent work! A few questions

1. How do I define more levels? For example, with 8 levels, how should the keys be named?
2. If I define 8 levels but some data only reaches level 6, what should I do? Leave the deeper levels empty?
3. For Chinese text, does the word2vec setup need to change? If I choose a Chinese dataset, could you suggest the necessary modifications?
Many thanks!

A question about the index offsets in the labels field

For a data sample like the following:
{"id": "1", "title": ["tokens"], "abstract": ["tokens"], "section": [1, 2], "subsection": [1, 2, 3, 4], "group": [1, 2, 3, 4], "labels": [1, 2, 1+N, 2+N, 3+N, 4+N, 1+N+M, 2+N+M, 3+N+M, 4+N+M]}
N and M are described here as the numbers of classes at level 2 and level 3, respectively. Isn't that description wrong? N should be the number of classes at level 1, and M the number of classes at level 2.

Also, in data_helper.py, in _create_onehot_labels (called from load_data_and_labels), shouldn't label[int(item)] = 1 be label[int(item)-1] = 1?
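For reference, a minimal NumPy sketch of the one-hot construction under discussion (the function name is taken from the issue; the zero-based indexing matches the sample data, where level-1 indices run 0 to 8):

import numpy as np

def create_onehot_labels(labels_index, num_labels):
    # Assumes zero-based global label indices, as in the sample record
    # ("labels": [5, 113, 649, 7333] with 9+128+661+8364 = 9162 classes).
    label = np.zeros(num_labels, dtype=np.float32)
    for item in labels_index:
        label[int(item)] = 1
    return label

onehot = create_onehot_labels([5, 113, 649, 7333], 9 + 128 + 661 + 8364)
print(onehot.sum())  # 4.0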

ValueError: Cannot create a tensor proto whose content is larger than 2GB.

Hi there,

I was just trying to re-run the code with the given default parameters and dataset, but with a different word2vec model. I'm using the word2vec-google-news-300 model, so the embedding dim in my case is 300. The final command I run:

python3 train_harnn.py --epochs 5 --batch-size 2 --embedding-dim 300 --embedding-type 0

But it throws the error below.

...
2024-04-19 11:49:46,589 - INFO - Loading data...
2024-04-19 11:49:46,589 - INFO - Data processing...
2024-04-19 11:49:46.734756: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-04-19 11:49:46.756399: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Traceback (most recent call last):
  File "train_harnn.py", line 275, in <module>
    train_harnn()
  File "train_harnn.py", line 55, in train_harnn
    harnn = TextHARNN(
  File "/home/ybkaratas/Desktop/HieararchMC/Hierarchical-Multi-Label-Text-Classification/HARNN/text_harnn.py", line 165, in __init__
    self.embedding = tf.constant(pretrained_embedding, dtype=tf.float32, name="embedding")
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 162, in constant_v1
    return _constant_impl(value, dtype, shape, name, verify_shape=verify_shape,
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 277, in _constant_impl
    const_tensor = ops._create_graph_constant(  # pylint: disable=protected-access
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1008, in _create_graph_constant
    tensor_util.make_tensor_proto(
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 585, in make_tensor_proto
    raise ValueError(
ValueError: Cannot create a tensor proto whose content is larger than 2GB.

Initially I thought it was due to insufficient memory, but I got the same error even when I reduced both the batch size and the training-data size. What could be the reason? Can you help me?

My hardware: Nvidia GeForce 3060 12GB
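A common workaround for this TF1 limit (a hedged sketch, not the repo's code): tf.constant embeds the whole matrix in the serialized GraphDef, which is capped at 2GB, so a 3M-word x 300-dim embedding can never fit regardless of batch size. Feeding the matrix through a placeholder-initialized variable avoids the limit:

import numpy as np
import tensorflow as tf

vocab_size, embedding_dim = 200000, 300
pretrained = np.zeros((vocab_size, embedding_dim), dtype=np.float32)  # stand-in

# The placeholder keeps the weights out of the serialized graph.
embedding_ph = tf.placeholder(tf.float32, shape=pretrained.shape)
embedding = tf.Variable(embedding_ph, trainable=False, name="embedding")

with tf.Session() as sess:
    sess.run(embedding.initializer, feed_dict={embedding_ph: pretrained})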
