
lip_ssl's Introduction

Self-supervised Structure-sensitive Learning (SSL)

Ke Gong, Xiaodan Liang, Xiaohui Shen, Liang Lin, "Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing", CVPR 2017.

Introduction

SSL is a state-of-the-art deep learning method for human parsing built on top of Caffe. This novel self-supervised structure-sensitive learning approach imposes human pose structure on parsing results without resorting to extra supervision (i.e., there is no need to specifically label human joints for model training). The self-supervised learning framework can be injected into any advanced neural network to help incorporate rich high-level knowledge about human joints from a global perspective and improve the parsing results.

This distribution provides a publicly available implementation of the key model ingredients reported in our paper accepted by CVPR 2017.

We have also introduced a Joint Human Parsing and Pose Estimation Network (JPPNet), accepted by T-PAMI 2018 (Paper and Code).

Please consult and consider citing the following papers:

@InProceedings{Gong_2017_CVPR,
  author = {Gong, Ke and Liang, Xiaodan and Zhang, Dongyu and Shen, Xiaohui and Lin, Liang},
  title = {Look Into Person: Self-Supervised Structure-Sensitive Learning and a New Benchmark for Human Parsing},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {July},
  year = {2017}
}
@article{liang2018look,
  title={Look into Person: Joint Body Parsing \& Pose Estimation Network and a New Benchmark},
  author={Liang, Xiaodan and Gong, Ke and Shen, Xiaohui and Lin, Liang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2018},
  publisher={IEEE}
}

Look into Person (LIP) Dataset

SSL is trained and evaluated on our LIP dataset for human parsing. Please check it for more details. The dataset is also available on Google Drive and Baidu Drive.

Pre-trained models

We have released our best-performing trained models on Google Drive and Baidu Drive.

Train and test

  1. Download the LIP dataset or prepare your own data.
  2. Put the images (.jpg) and segmentations (.png) into ssl/human/data/images and ssl/human/data/labels.
  3. Put the train, val and test lists into ssl/human/list. Each split needs a path list and an id list (e.g., train.txt and train_id.txt).
  4. Download the pre-trained model and put it into ssl/human/model/attention/. You can also refer to DeepLab for more models.
  5. Set up your init.caffemodel before training and test.caffemodel before testing. You can simply use a soft link (see the sketch after this list).
  6. The prototxt files for the network configuration are stored in ssl/human/config.
  7. In run_human.sh, set the value of RUN_TRAIN or RUN_TEST to train or test the model.
  8. After you run TEST, the computed features are saved in ssl/human/features. Run the provided MATLAB script show.m to generate visualizable results, then run the Python script test_human.py to evaluate the performance.
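
Below is a minimal setup sketch for steps 2-5. It assumes the ssl/human layout named above and uses a hypothetical file name for the downloaded weights; adjust both to your checkout.

```python
import os

root = "ssl/human"

# Steps 2-4: create the directories expected for images, labels, lists, models and features.
for sub in ("data/images", "data/labels", "list", "model/attention", "features"):
    os.makedirs(os.path.join(root, sub), exist_ok=True)

# Step 5: expose the downloaded weights as init.caffemodel (training) and
# test.caffemodel (testing) via soft links, as suggested above.
pretrained = os.path.join(root, "model/attention", "downloaded_weights.caffemodel")  # hypothetical file name
for link_name in ("init.caffemodel", "test.caffemodel"):
    link_path = os.path.join(root, "model/attention", link_name)
    if not os.path.lexists(link_path):
        os.symlink(os.path.abspath(pretrained), link_path)
```

With the links in place, toggle RUN_TRAIN or RUN_TEST in run_human.sh (step 7) and launch it.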

Related work

  • Joint Body Parsing & Pose Estimation Network JPPNet, T-PAMI 2018
  • Instance-level Human Parsing via Part Grouping Network PGN, ECCV 2018
  • Graphonomy: Universal Human Parsing via Graph Transfer Learning Graphonomy, CVPR 2019


lip_ssl's Issues

Unable to download your LIP dataset

When I click on the link to the dataset, it just leads me to some random Chinese page. Can you please tell me where to download your dataset? Thanks.

out of memory issue using GTX1080

I0524 00:14:45.317162 21868 net.cpp:816] Ignoring source layer accuracy_first_res1
I0524 00:14:45.317179 21868 net.cpp:816] Ignoring source layer loss_first_res075
I0524 00:14:45.317183 21868 net.cpp:816] Ignoring source layer accuracy_first_res075
I0524 00:14:45.317186 21868 net.cpp:816] Ignoring source layer label_shrink16
I0524 00:14:45.317189 21868 net.cpp:816] Ignoring source layer label_shrink16_label_shrink16_0_split
I0524 00:14:45.317193 21868 net.cpp:816] Ignoring source layer loss_first_res05
I0524 00:14:45.317195 21868 net.cpp:816] Ignoring source layer accuracy_first_res05
I0524 00:14:45.319630 21868 caffe.cpp:252] Running for 9999 iterations.
F0524 00:14:45.558805 21868 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
@ 0x7f0562f8cdaa (unknown)
@ 0x7f0562f8cce4 (unknown)
@ 0x7f0562f8c6e6 (unknown)
@ 0x7f0562f8f687 (unknown)
@ 0x7f056379ae91 caffe::SyncedMemory::to_gpu()
@ 0x7f056379a1f9 caffe::SyncedMemory::mutable_gpu_data()
@ 0x7f0563784822 caffe::Blob<>::mutable_gpu_data()
@ 0x7f05636ebe88 caffe::BaseConvolutionLayer<>::forward_gpu_gemm()
@ 0x7f05637aa9b6 caffe::ConvolutionLayer<>::Forward_gpu()
@ 0x7f0563726831 caffe::Net<>::ForwardFromTo()
@ 0x7f0563726ba7 caffe::Net<>::ForwardPrefilled()
@ 0x406d74 test()
@ 0x4059dc main
@ 0x7f0562297f45 (unknown)
@ 0x406111 (unknown)
@ (nil) (unknown)
run_human.sh: line 82: 21868 Aborted (core dumped) ${CMD}

Hi, I'm using a GTX 1080, and it seems I run into a CUDA out-of-memory issue even though my settings look correct.
Have you tried a GTX 1080, and is it possible to make the batch size smaller?

Is it possible to train with new data for sketch segmentation?

Hi. I want to apply this model to sketch segmentation: my input is a sketch and my expected output is a segmented version of that sketch. Is it possible to apply this model to my new data, and what do I need to change in the config for the training to be correct? So far my results have been wrong.
Thanks

ask for help

"Set up your init.caffemodel before training and test.caffemodel before testing. You can simply use a soft link."
Does this sentence mean that I first need to compile the Caffe inside code/?
The error that appears when compiling Caffe is:
too few arguments to function 'cudnnStatus_t cudnnSetPooling2dDescriptor(cudnnPoolingDescriptor_t, cudnnPoolingMode_t, cudnnNanPropagation_t, int, int, int, int, int, int)' pad_h, pad_w, stride_h, stride_w));

Error in Training

It's amazing work. I just followed your steps, and your pretrained model performs quite well on the test set.
But when I start training on the train and val sets by finetuning your pretrained model, I get this error:

I0516 17:22:28.534246 34577 layer_factory.hpp:77] Creating layer data
I0516 17:22:28.534312 34577 net.cpp:106] Creating Layer data
I0516 17:22:28.534332 34577 net.cpp:411] data -> data
I0516 17:22:28.534373 34577 net.cpp:411] data -> label
I0516 17:22:28.534395 34577 net.cpp:411] data -> (automatic)
I0516 17:22:28.534835 34577 image_seg_data_layer.cpp:46] Opening file human/list/train.txt
I0516 17:22:28.566068 34577 image_seg_data_layer.cpp:63] Shuffling data
I0516 17:22:28.569484 34577 image_seg_data_layer.cpp:68] A total of 40462 images.
I0516 17:22:28.584575 34577 image_seg_data_layer.cpp:137] output data size: 1,3,321,321
I0516 17:22:28.584602 34577 image_seg_data_layer.cpp:141] output label size: 1,1,321,321
I0516 17:22:28.584614 34577 image_seg_data_layer.cpp:145] output data_dim size: 1,1,1,2
I0516 17:22:28.594056 34577 net.cpp:150] Setting up data
I0516 17:22:28.594171 34577 net.cpp:157] Top shape: 1 3 321 321 (309123)
I0516 17:22:28.594243 34577 net.cpp:157] Top shape: 1 1 321 321 (103041)
I0516 17:22:28.594271 34577 net.cpp:157] Top shape: 1 1 1 2 (2)
I0516 17:22:28.594316 34577 net.cpp:165] Memory required for data: 1648664
I0516 17:22:28.594347 34577 layer_factory.hpp:77] Creating layer data_data_0_split
I0516 17:22:28.594403 34577 net.cpp:106] Creating Layer data_data_0_split
I0516 17:22:28.594442 34577 net.cpp:454] data_data_0_split <- data
I0516 17:22:28.594481 34577 net.cpp:411] data_data_0_split -> data_data_0_split_0
I0516 17:22:28.594521 34577 net.cpp:411] data_data_0_split -> data_data_0_split_1
I0516 17:22:28.594554 34577 net.cpp:411] data_data_0_split -> data_data_0_split_2
I0516 17:22:28.594708 34577 net.cpp:150] Setting up data_data_0_split
I0516 17:22:28.594743 34577 net.cpp:157] Top shape: 1 3 321 321 (309123)
I0516 17:22:28.594769 34577 net.cpp:157] Top shape: 1 3 321 321 (309123)
I0516 17:22:28.594792 34577 net.cpp:157] Top shape: 1 3 321 321 (309123)
I0516 17:22:28.594813 34577 net.cpp:165] Memory required for data: 5358140
I0516 17:22:28.594836 34577 layer_factory.hpp:77] Creating layer label_data_1_split
I0516 17:22:28.594862 34577 net.cpp:106] Creating Layer label_data_1_split
I0516 17:22:28.594884 34577 net.cpp:454] label_data_1_split <- label
I0516 17:22:28.594910 34577 net.cpp:411] label_data_1_split -> label_data_1_split_0
I0516 17:22:28.594944 34577 net.cpp:411] label_data_1_split -> label_data_1_split_1
I0516 17:22:28.595024 34577 net.cpp:150] Setting up label_data_1_split
I0516 17:22:28.595055 34577 net.cpp:157] Top shape: 1 1 321 321 (103041)
I0516 17:22:28.595078 34577 net.cpp:157] Top shape: 1 1 321 321 (103041)
I0516 17:22:28.595098 34577 net.cpp:165] Memory required for data: 6182468
I0516 17:22:28.595119 34577 layer_factory.hpp:77] Creating layer shrink_data05
I0516 17:22:28.595193 34577 net.cpp:106] Creating Layer shrink_data05
I0516 17:22:28.595221 34577 net.cpp:454] shrink_data05 <- data_data_0_split_0
I0516 17:22:28.595247 34577 net.cpp:411] shrink_data05 -> shrink_data05
I0516 17:22:28.595319 34577 net.cpp:150] Setting up shrink_data05
I0516 17:22:28.595353 34577 net.cpp:157] Top shape: 1 3 161 161 (77763)
I0516 17:22:28.595376 34577 net.cpp:165] Memory required for data: 6493520
I0516 17:22:28.595396 34577 layer_factory.hpp:77] Creating layer shrink_data075
I0516 17:22:28.595424 34577 net.cpp:106] Creating Layer shrink_data075
I0516 17:22:28.595445 34577 net.cpp:454] shrink_data075 <- data_data_0_split_1
I0516 17:22:28.595471 34577 net.cpp:411] shrink_data075 -> shrink_data075
I0516 17:22:28.595533 34577 net.cpp:150] Setting up shrink_data075
I0516 17:22:28.595563 34577 net.cpp:157] Top shape: 1 3 241 241 (174243)
I0516 17:22:28.595584 34577 net.cpp:165] Memory required for data: 7190492
I0516 17:22:28.595628 34577 layer_factory.hpp:77] Creating layer conv1_1
I0516 17:22:28.595676 34577 net.cpp:106] Creating Layer conv1_1
I0516 17:22:28.595702 34577 net.cpp:454] conv1_1 <- data_data_0_split_2
I0516 17:22:28.595729 34577 net.cpp:411] conv1_1 -> conv1_1
E0516 17:22:28.599076 34580 io.cpp:81] Could not open or find file human/data/
F0516 17:22:28.599761 34580 data_transformer.cpp:475] Check failed: img_height == seg_height (246 vs. 0)
*** Check failure stack trace: ***
@ 0x7f6489caddaa (unknown)
@ 0x7f6489cadce4 (unknown)
@ 0x7f6489cad6e6 (unknown)
@ 0x7f6489cb0687 (unknown)
@ 0x7f648a31d77d caffe::DataTransformer<>::TransformImgAndSeg()
@ 0x7f648a442c51 caffe::ImageSegDataLayer<>::load_batch()
@ 0x7f648a453acc caffe::BasePrefetchingDataLayer<>::InternalThreadEntry()
@ 0x7f648a351d80 caffe::InternalThread::entry()
@ 0x7f64818afa4a (unknown)
@ 0x7f647d414184 start_thread
@ 0x7f648908a37d (unknown)
@ (nil) (unknown)

Any advice would be nice : )

Questions about running the test code

1. The test_image part of the LIP dataset has no test_id.txt, only val_id.txt, and it is identical to the val_id.txt in trainVal. Is this normal?
2. If I only want to run testing, don't I just need the test set? I put the files from the test archive into human/data/images, put val_id.txt into the corresponding list directory, downloaded the pre-trained model, renamed it test.caffemodel and put it into the corresponding model/attention folder. But when running RUN_TEST in run_human.sh, it complained that val.txt was missing, so I generated val.txt from the file paths. This conflicts with step 3 above, which mentions test.txt and test_id.txt. Running it this way, the resulting loss is 0, and when I then run test_human.py it asks for the val images. Have I misunderstood something?

About LIP dataset

Since the images are cropped from the COCO dataset, I checked the cropped image sizes against the COCO bounding boxes. A COCO bbox is a tuple of four float values [x0, y0, H, W], but the images provided by the LIP dataset do not exactly match the bbox. For example, the image size of 393224_462133.png is (604, 426), while the bbox is [1.44, 37.39, 424.27, 602.61]. So how can the cropped image be restored to the original size?
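
A hedged sketch of one way to paste the crop back into a canvas of the original size. It assumes the first two bbox values are the crop's top-left offset (x0, y0) in the original COCO image and that the original image size is known; both the bbox convention and the size used below are assumptions taken from the issue above, not confirmed by the dataset authors.

```python
from PIL import Image

crop = Image.open("393224_462133.png")      # cropped LIP image quoted above
x0, y0 = 1.44, 37.39                        # bbox offsets quoted above (assumed to be the top-left corner)
orig_w, orig_h = 640, 480                   # hypothetical original COCO size; look it up in the COCO annotations

canvas = Image.new(crop.mode, (orig_w, orig_h))
canvas.paste(crop, (round(x0), round(y0)))  # place the crop back at its original offset
canvas.save("393224_462133_restored.png")
```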

Don't find train.txt

Hi, I downloaded the dataset; it has train_id.txt, but I can't find train.txt. Where is it? Should I create it myself?
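
A hedged sketch for generating train.txt from train_id.txt. It assumes a DeepLab-style list with one "image path label path" pair per line, relative to the data root; the exact paths and format are assumptions, so compare against the source/root_folder entries in the prototxt files under human/config before using it.

```python
# Assumed list format: "/images/<id>.jpg /labels/<id>.png" per line (verify against the prototxt).
with open("ssl/human/list/train_id.txt") as f:
    ids = [line.strip() for line in f if line.strip()]

with open("ssl/human/list/train.txt", "w") as out:
    for image_id in ids:
        out.write(f"/images/{image_id}.jpg /labels/{image_id}.png\n")
```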

"Check failed: matfp Error creating MAT file" during test process

When I test the model, I get this error when I run ./run_human.sh.

How should I deal with this problem?

...
...
F0912 10:56:18.593981  8397 matio_io.cpp:66] Check failed: matfp Error creating MAT file human/features/attentional/fc8_mask/labels/1270_1715217.png_blob_0.mat
*** Check failure stack trace: ***
    @     0x7fac995ea5cd  google::LogMessage::Fail()
    @     0x7fac995ec433  google::LogMessage::SendToLog()
    @     0x7fac995ea15b  google::LogMessage::Flush()
    @     0x7fac995ece1e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7fac99e17a7b  caffe::WriteBlobToMat<>()
    @     0x7fac99dc1092  caffe::MatWriteLayer<>::Forward_cpu()
    @     0x7fac99e46886  caffe::Net<>::ForwardFromTo()
    @     0x7fac99e46c47  caffe::Net<>::ForwardPrefilled()
    @           0x409115  test()
    @           0x4075c8  main
    @     0x7fac9809e830  __libc_start_main
    @           0x407d39  _start
    @              (nil)  (unknown)
Aborted (core dumped)
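
A hedged workaround sketch (an assumption about the cause, not a confirmed fix): the MAT-writing layer does not appear to create missing directories, so pre-creating the feature output folder referenced in the error message before running the test may resolve it.

```python
import os

# Adjust this to the directory shown in your own error message.
out_dir = "human/features/attentional/fc8_mask/labels"
os.makedirs(out_dir, exist_ok=True)
```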

different crop_size in TRAIN and TEST phase?

I observed that in $root_dir/LIP_SSL/human/config/attention, the crop_size values in train.prototxt and test.prototxt are not the same: training uses crop_size=321, while testing uses crop_size=640. Is this correct, and why is it reasonable? Were the accuracy results in the paper obtained with this setting? Thanks for your response.

"caffe.ImageDataParameter" has no field named "label_type"

Hello!
I get this error when trying to test the pretrained model. Could you help me figure it out please?

Testing net human/attention
Running /usr/bin/caffe.bin test              --model=human/config/attention/test_val.prototxt              --weights=human/model/attention/test.caffemodel              --iterations=10000
I1121 16:58:08.414149 24292 caffe.cpp:284] Use CPU.
[libprotobuf ERROR google/protobuf/text_format.cc:307] Error parsing text-format caffe.NetParameter: 14:15: Message type "caffe.ImageDataParameter" has no field named "label_type".
F1121 16:58:08.419540 24292 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: human/config/attention/test_val.prototxt

I believe the key part is this: "caffe.ImageDataParameter" has no field named "label_type".

This looks related to #22, but since that thread is closed with no posted solution, I'm opening a new issue.
Thank you for your time.

Incorrect output in test

Hi!

I'm trying to use your code to get segmentations.
In my input images there is usually one rather large person.
But the output segmentation mask is always filled with fives.

Can you tell me what I'm doing wrong?

Train model with new data get error

Hi, I tried to train a new model with my data. My data includes sketches and segmentation images.
I got this error when I ran run_human.sh:
F0417 02:14:02.873652 8924 seg_accuracy_layer.cpp:87] Unexpected label 241. num: 0. row: 1. col: 10

init.caffemodel

Hi,
I wanted to train the model on my custom dataset. Is there any source from which I can get the init.caffemodel file?

Thanks

ask

The archive only contains val_id.txt, not val.txt. Do I need to create it myself?

caffe and pytorch

Hello! Do you have a PyTorch version of this work? The model is rather complex, so I don't know how to convert it to PyTorch myself. Could you give me some help? Thank you very much!

Error in testing with pretrained model

I am trying to test with the pre-trained model linked in the README.

I downloaded the LIP dataset and placed the images, segmentations and lists as specified.
I downloaded the caffemodel and renamed it test.caffemodel.

I have an existing Caffe build on my machine (Windows 7, 64-bit) and am using it for testing. I changed the path in run_human.sh.
I get the following error when running run_human.sh:

Testing net human/attention
Running D://Projects//caffe//tools//Release//caffe.exe test --model=human/config/attention/test_val.prototxt --weights=human/model/attention/test.caffemodel --gpu=0 --iterations=1
I0404 20:37:13.862496  9404 caffe.cpp:276] Use GPU with device ID 0
I0404 20:37:13.940496  9404 caffe.cpp:280] GPU device name: GeForce GT 720
I0404 20:37:14.190094  9404 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
[libprotobuf ERROR C:\Users\guillaume\work\caffe-builder\build_v120_x64\packages\protobuf\protobuf_download-prefix\src\protobuf_download\src\google\protobuf\text_format.cc:298] Error parsing text-format caffe.NetParameter: 14:15: Message type "caffe.ImageDataParameter" has no field named "label_type".
*** Check failure stack trace: ***

error

In the process of training, I met this error:
F0307 17:08:57.373412 27727 data_transformer.cpp:475] Check failed: img_height == seg_height (409 vs. 0)

I don't know how to solve it; could you please give me some suggestions? Thank you very much!

questions for training

Hi, I have some questions about training. It seems that run_human.sh contains only one training step, but your paper describes two training steps. Also, the batch size in train.prototxt is 1, while it is 10 in your paper; which one is correct?

And I have downloaded the attention models from the DeepLab website. The zip file contains init.caffemodel and train2_iter_8000.caffemodel. Which one should be used as the init.caffemodel for training?

I'm looking forward to your reply.

follow your instructions but wrong answer

Hi, I followed your config and data, training from the initial state and using LIP_train_images, but the loss always stays around 5-10. I used the final caffemodel to test, but all the outputs are zeros.

error Class_id

Hi author,

Thanks for your project. I have a problem with the LIP dataset (image + mask).
For this image (LIP/TrainVal_images/TrainVal_images/train_images/124117_202251.jpg):
class_id = [ 2 5 13 14 15]
(image attached)

LIP/TrainVal_images/TrainVal_images/train_images/124122_1735422.jpg:
class_id = [ 2 5 8 9 13 17]
(image attached)

LIP/TrainVal_images/TrainVal_images/train_images/12413_1224729.jpg:
class_id = [ 1 2 7 9 14 18 19]
(image attached)

I followed your class ids, but maybe my mapping is not right:
1.Hat
2.Hair
3.Sunglasses
4.Upper-clothes
5.Dress
6.Coat
7.Socks
8.Pants
9.Glove
10.Scarf
11.Skirt
12.Jumpsuits
13.Face
14.Right-arm
15.Left-arm
16.Right-leg
17.Left-leg
18.Right-shoe
19.Left-shoe

Please help me

Thanks
Nam
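
A minimal inspection sketch for the question above. It assumes the LIP masks are single-channel PNGs whose pixel values are the class ids, with 0 reserved for background and ids 1-19 for the body parts (20 categories in total); the annotation path below is hypothetical. Use it to check which ids actually appear in a mask and compare them against the label list in the dataset documentation.

```python
import numpy as np
from PIL import Image

# Hypothetical annotation path; point it at the segmentation mask matching your image.
mask_path = "LIP/TrainVal_parsing_annotations/train_segmentations/124117_202251.png"

mask = np.array(Image.open(mask_path))
ids, counts = np.unique(mask, return_counts=True)
for class_id, count in zip(ids, counts):
    print(f"class id {int(class_id)}: {int(count)} pixels")
```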

error while running run_human.sh

I0517 08:18:45.982748 136 layer_factory.hpp:77] Creating layer data
I0517 08:18:45.982776 136 net.cpp:106] Creating Layer data
I0517 08:18:45.982784 136 net.cpp:411] data -> data
I0517 08:18:45.982805 136 net.cpp:411] data -> label
I0517 08:18:45.982815 136 net.cpp:411] data -> (automatic)
I0517 08:18:45.982834 136 image_seg_data_layer.cpp:46] Opening file human/list/val.txt
I0517 08:18:45.992491 136 image_seg_data_layer.cpp:68] A total of 10000 images.
E0517 08:18:45.992519 136 io.cpp:81] Could not open or find file human/data/images/100034_483681.jpg
F0517 08:18:45.992681 136 image_seg_data_layer.cpp:82] Check failed: cv_img.data Could not load images/100034_483681.jpg

Are the test results consistent with the values reported in the paper?

Hi, I've attempted to test your provided best caffemodel on the validation set.
According to the results you reported for SSL, the overall accuracy, mean accuracy and mIoU should be
84.36%, 54.94% and 44.73%, respectively, but using the provided code I only got
82.06%, 51.64% and 40.95%. I've also tried other methods (DeepLab v2, attention without SSL) following your training strategies, and all of them give lower results. Have there been any changes to the validation set? Thanks!

models trained on ATR dataset

Are there any pre-trained models trained on the ATR dataset?
The network presented here is only suitable for the LIP dataset, since ATR has 18 categories rather than LIP's 20.
Thank you very much!

Init model for training

Hi, is the init model for training taken from DeepLab-Attention-COCO (http://liangchiehchen.com/projects/DeepLab_Attention_COCO.html)?
Thanks

test pretrain model with customized data

Hi @Engineering-Course,
thank you for the great project. I've successfully compiled your Caffe version and tried to execute run_human.sh, but it reports an error in the image transformer:
data_transformer.cpp:474] Check failed: img_channels == data_channels (1 vs. 3)
I followed your instructions and created a val.txt listing my test images.
Sample lines in my txt file:
/images/23454.jpg
/images/3534.jpg
All of them are 3-channel RGB images, but I still get this error. May I ask what the reason is? It seems img_channels should also be 3?
Thank you if you can check this for me.
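
A hedged diagnostic sketch (an assumption about the cause, not a confirmed one): files that look like RGB can still be stored as single-channel or palette images, which would make the data layer load them with one channel. The snippet below checks every image in a list file and rewrites any non-RGB file as 3-channel RGB; the val.txt and human/data paths are taken from the issue above.

```python
import os
from PIL import Image

data_root = "human/data"
with open("human/list/val.txt") as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        path = os.path.join(data_root, parts[0].lstrip("/"))  # e.g. /images/23454.jpg
        img = Image.open(path)
        if img.mode != "RGB":
            print(f"{path} has mode {img.mode}; converting to RGB")
            img.convert("RGB").save(path)
```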

Git clone failed

Git clone fails with the below message

$ git clone --recursive https://github.com/Engineering-Course/LIP_SSL.git
Cloning into 'LIP_SSL'...
remote: Counting objects: 92, done.
remote: Total 92 (delta 0), reused 0 (delta 0), pack-reused 92
Unpacking objects: 100% (92/92), done.
Checking connectivity... done.
Submodule 'code' (git@github.com:Engineering-Course/caffe_ssl.git) registered for path 'code'
Cloning into 'code'...
Warning: Permanently added the RSA host key for IP address '192.30.253.112' to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Clone of 'git@github.com:Engineering-Course/caffe_ssl.git' into submodule path 'code' failed
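
A hedged workaround sketch: the 'code' submodule is registered with an SSH URL, which requires a GitHub SSH key, so a publickey error is expected for anonymous clones (this diagnosis is an assumption). Rewriting the submodule URL to HTTPS and re-syncing usually lets the submodule clone proceed.

```python
from pathlib import Path
import subprocess

repo = Path("LIP_SSL")
gitmodules = repo / ".gitmodules"

# Point the 'code' submodule at the HTTPS URL instead of SSH.
text = gitmodules.read_text()
gitmodules.write_text(text.replace(
    "git@github.com:Engineering-Course/caffe_ssl.git",
    "https://github.com/Engineering-Course/caffe_ssl.git"))

# Re-sync the submodule configuration and fetch it.
subprocess.run(["git", "submodule", "sync"], cwd=repo, check=True)
subprocess.run(["git", "submodule", "update", "--init", "--recursive"], cwd=repo, check=True)
```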
