
OverlapNet - Loop Closing for 3D LiDAR-based SLAM

OverlapNet was nominated for the Best System Paper Award at Robotics: Science and Systems (RSS) 2020.

This repo contains the code for our RSS2020 paper, OverlapNet.

OverlapNet is a modified Siamese network that predicts the overlap and relative yaw angle between a pair of range images generated from 3D LiDAR scans.

Developed by Xieyuanli Chen and Thomas Läbe.

Pipeline overview of OverlapNet.

Table of Contents

  1. Introduction
  2. Publication
  3. Logs
  4. Dependencies
  5. How to use
  6. Application
  7. License

Publication

If you use our implementation in your academic work, please cite the corresponding paper (PDF):

@inproceedings{chen2020rss,
  author = {X. Chen and T. L\"abe and A. Milioto and T. R\"ohling and O. Vysotska and A. Haag and J. Behley and C. Stachniss},
  title = {{OverlapNet: Loop Closing for LiDAR-based SLAM}},
  booktitle = {Proceedings of Robotics: Science and Systems (RSS)},
  year = {2020}
}

The extended journal version of OverlapNet is available here (PDF):

@article{chen2021auro,
  author = {X. Chen and T. L\"abe and A. Milioto and T. R\"ohling and J. Behley and C. Stachniss},
  title = {{OverlapNet: A Siamese Network for Computing LiDAR Scan Similarity with Applications to Loop Closing and Localization}},
  journal = {Autonomous Robots},
  year = {2021},
  volume = {46},
  pages = {61--81},
  doi = {10.1007/s10514-021-09999-0},
  issn = {1573-7527},
  url = {http://www.ipb.uni-bonn.de/pdfs/chen2021auro.pdf}
}

News and Logs

New Version

Building on OverlapTransformer, we propose a novel transformer network that exploits sequential LiDAR data to achieve better LiDAR place recognition (code).

We developed a novel lightweight neural network called OverlapTransformer, which exploits a transformer architecture and achieves fast execution, taking less than 4 ms per frame to estimate the similarity of LiDAR scans.

Version 1.2

Version 1.1

  • Added a method to the Infer class for inference with multiple frames versus multiple frames.
  • Updated TensorFlow version in dependencies.
  • Fixed bugs in generating ground truth overlap and yaw.
  • Added an application and a link to our overlap-based MCL implementation.

Version 1.0

Initial open-source release.

Dependencies

We use standalone Keras with a TensorFlow backend as the neural network library.

To train and test on a whole dataset, you need an NVIDIA GPU. However, the demos are still fast enough when running the neural network on a CPU.

To use a GPU, you first need to install the NVIDIA driver and CUDA, so have fun!

  • CUDA Installation guide: link

  • System dependencies:

    sudo apt-get update 
    sudo apt-get install -y python3-pip python3-tk
    sudo -H pip3 install --upgrade pip
  • Python dependencies (these may also work with versions other than those listed in the requirements file):

    sudo -H pip3 install -r requirements.txt

How to use

This repository contains the neural network for detecting loop closing candidates.

For a complete pipeline with online LiDAR preprocessing, you can find a fast implementation in our SuMa++.

In this repository, we provide demos to show the functionality. In addition, we explain how to train a model.

Demos

Demo 1: Generate different types of data from a LiDAR scan

To try demo 1, you can run the script directly with a single command:

python3 demo/demo1_gen_data.py

The generated data are stored in data/preprocess_data, and you will get a visualization like this:

For generating range and normal images, we also have a much faster implementation in C (with a Python interface) available in our repo https://github.com/PRBonn/overlap_localization (see src/prepare_training).
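The core idea behind this preprocessing is a spherical projection of the point cloud into a range image. Below is a minimal, self-contained sketch of that projection; the actual implementation (including normal images and a configurable field of view) lives in the demo and utility code, and the field-of-view defaults here are the usual Velodyne HDL-64 assumptions, not values taken from this repo's config:

    import numpy as np

    def range_projection(points, H=64, W=900, fov_up=3.0, fov_down=-25.0):
        """Project a point cloud (N, 3) into a spherical range image (H, W).

        fov_up/fov_down are in degrees; the defaults are typical values
        for a Velodyne HDL-64 and are an assumption, not this repo's config.
        """
        fov_up_rad, fov_down_rad = np.radians(fov_up), np.radians(fov_down)
        fov = abs(fov_up_rad) + abs(fov_down_rad)

        depth = np.linalg.norm(points, axis=1)            # range per point
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        yaw = -np.arctan2(y, x)                           # horizontal angle
        pitch = np.arcsin(z / np.maximum(depth, 1e-8))    # vertical angle

        # normalize angles to [0, 1] and scale to pixel coordinates
        u = 0.5 * (yaw / np.pi + 1.0) * W
        v = (1.0 - (pitch + abs(fov_down_rad)) / fov) * H
        u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
        v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

        # write points ordered by decreasing depth so closer points win
        order = np.argsort(depth)[::-1]
        image = np.full((H, W), -1.0, dtype=np.float32)   # -1 = empty pixel
        image[v[order], u[order]] = depth[order]
        return image

    # usage with a KITTI scan:
    # scan = np.fromfile('000000.bin', dtype=np.float32).reshape(-1, 4)
    # range_image = range_projection(scan[:, :3])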

Demo 2: Inferring overlap and relative yaw angle between two LiDAR scans

To run demo 2, you first need to download the pre-trained model.

Then you should

  • either copy it into the default location, the folder data, or
  • modify pretrained_weightsfilename in the config file config/network.yml accordingly.

Run the second demo script with a single command:

python3 demo/demo2_infer.py

You will get a visualization like this:
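The demo reports two quantities: an overlap score in [0, 1] and a relative yaw angle. In the paper, the yaw estimate comes from a head that scores discretized angle bins, so converting the network output into an angle is a simple argmax. A minimal illustrative sketch; the function name and exact bin layout are assumptions, not this repo's API:

    import numpy as np

    def yaw_from_head(yaw_scores, deg_per_bin=1.0):
        """Convert the yaw head's per-bin scores into an angle estimate.

        Assumes one score per discretized yaw bin (the paper uses 360 bins
        of 1 degree); this helper is illustrative, not the repo's API.
        """
        best_bin = int(np.argmax(yaw_scores))
        return best_bin * deg_per_bin  # relative yaw in degrees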

Demo 3: Loop closure detection

To run demo 3, you first need to download several files:

  • pre-trained model,
  • KITTI odometry data, for which we also provide the covariance information generated by the SLAM system,
  • pre-processed data.

If you follow the recommended data structure below, extract the downloaded data into the folder data.

Otherwise, you need to adjust the data paths in both config/network.yml and config/demo.yml accordingly.

Then run the third demo script with a single command:

python3 demo/demo3_lcd.py

You will get an animation like this:
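Conceptually, the loop closure detection reduces to thresholding the predicted overlaps: for the current scan, the network scores a set of earlier candidate scans, and the best-scoring candidate is accepted if its overlap is high enough. A minimal sketch of that decision logic (the threshold value and names are illustrative, not taken from config/demo.yml):

    import numpy as np

    def pick_loop_closure(candidate_ids, predicted_overlaps, threshold=0.3):
        """Return the candidate frame id with the highest predicted overlap,
        or None if no candidate exceeds the threshold.

        threshold=0.3 is an illustrative value, not the one used in demo 3.
        """
        best = int(np.argmax(predicted_overlaps))
        if predicted_overlaps[best] >= threshold:
            return candidate_ids[best]
        return None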

Demo 4: Generate ground truth overlap and yaw for training and testing

To run demo 4, you only need the raw KITTI odometry data. We use the same setup as in demo 3.

Run the fourth demo script with a single command:

python3 demo/demo4_gen_gt_files.py

This will generate the ground truth data in data/preprocess_data_demo/ground_truth and produce a plot like this:

The colors represent the ground truth overlap value of each frame with respect to the given current frame, which is located at (0.0, 0.0).
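For orientation, the ground truth overlap between two scans is computed by rendering both as range images in a common frame (using the relative pose from the SLAM poses) and counting the pixels whose ranges agree. A simplified sketch of that idea, reusing range_projection from the demo 1 sketch above; the exact threshold and the handling of invalid pixels in com_overlap_yaw differ, so treat this as an approximation:

    import numpy as np

    def overlap_between(points_current, points_reference, T_ref_from_cur,
                        epsilon=1.0):
        """Approximate overlap: fraction of pixels valid in both range
        images whose ranges agree within epsilon (in meters; this value
        is an assumption, not the one used in com_overlap_yaw)."""
        # move the current scan into the reference frame (4x4 pose matrix)
        homog = np.hstack([points_current,
                           np.ones((len(points_current), 1))])
        points_in_ref = (T_ref_from_cur @ homog.T).T[:, :3]

        ref_image = range_projection(points_reference)
        cur_image = range_projection(points_in_ref)

        valid = (ref_image > 0) & (cur_image > 0)
        agree = valid & (np.abs(ref_image - cur_image) < epsilon)
        return agree.sum() / max(valid.sum(), 1)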

Train and test a model

For a quick test of the training and testing procedures, you can use our preprocessed data as in demo 3.

We only provide the geometry-based preprocessed data, but it is also possible to generate the other inputs (semantics, intensity) yourself.

A simple example of generating different types of data from a LiDAR scan is given in demo 1.

For 3D LiDAR semantic segmentation, we provide a fast C++ inference library, rangenet_lib.

Data structure

For training a new model with OverlapNet, you first need to generate the preprocessed data and the ground truth overlap and yaw angles; examples of both are given in demo 1 and demo 4.

The recommended data structure is as follows:

data
    ├── 07
    │   ├── calib.txt
    │   ├── covariance.txt
    │   ├── poses.txt
    │   ├── depth
    │   │   ├── 000000.npy
    │   │   ├── 000001.npy
    │   │   └── ...
    │   ├── normal
    │   │   ├── 000000.npy
    │   │   ├── 000001.npy
    │   │   └── ...
    │   ├── velodyne
    │   │   ├── 000000.bin
    │   │   ├── 000001.bin
    │   │   └── ...
    │   └── ground_truth
    │       ├── ground_truth_overlap_yaw.npz
    │       ├── test_set.npz
    │       └── train_set.npz
    └── model_geo.weight
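Once generated, ground_truth_overlap_yaw.npz is a plain NumPy archive, so you can sanity-check it before training. A minimal sketch; the stored key names and the per-row layout are assumptions to verify against your own demo 4 output:

    import numpy as np

    # path follows the recommended data structure above
    archive = np.load('data/07/ground_truth/ground_truth_overlap_yaw.npz')

    print(archive.files)           # list the array keys actually stored
    gt = archive[archive.files[0]]
    print(gt.shape, gt.dtype)      # expect one row per scan pair, e.g.
                                   # (idx_1, idx_2, overlap, yaw) -- verify!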

Training

The code for training can be found in src/two_heads/training.py.

If you download our preprocessed data, please put it into the folder data. If you want to use another directory, change the parameter data_root_folder in the configuration file network.yml.

Notice that a default weight file is set in the configuration file via the parameter pretrained_weightsfilename. If you want to train a completely new model from scratch, leave this parameter empty; otherwise you will fine-tune the provided model.

Then you can start the training with

python3 src/two_heads/training.py config/network.yml

All configuration data is in the yml file. You will find path definitions and training parameters there. The main path settings are:

  • experiments_path: the folder where all the training data and results (log files, tensorboard logs, network weights) will be saved. Default is /tmp. Change this according to your needs.
  • data_root_folder: the dataset folder. It should contain the sequence folders of the dataset, e.g. 00, 01, .... For the provided preprocessed data, this is 07.

We provide tensorboard logs in experiments_path/testname/tblog for visualizing training and validation details.
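You can monitor a run while it trains, e.g. with (assuming the default experiments_path of /tmp; replace testname with the name of your experiment):

tensorboard --logdir /tmp/testname/tblog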

Testing

Once a model has been trained (i.e. a .weight file with the network weights is available), the performance of the network can be evaluated. You can start the testing script in the same manner as the training:

python3 src/two_heads/testing.py config/network.yml

The configuration file should have the following additional settings:

  • pretrained_weightsfilename: the filename of the trained network weights mentioned above.
  • testing_seqs: sequences to test on, e.g. 00 01. (Please comment out training_seqs.) The pairs on which the tests are computed come from the file ground_truth/ground_truth_overlap_yaw.npz. If you instead keep the parameter training_seqs, the validation is done on the validation sets of the sequences (ground_truth/validation_set.npz), which contain only a small amount of data used for validation during training.

Note that the provided pre-trained model and preprocessed ground truth were generated under the constraint that the current frame only searches for loop closures among previous frames.
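As a toy illustration of what the evaluation measures, you can compare predicted and ground truth values over a set of test pairs. The arrays below are placeholders, not testing.py's actual outputs or file formats; note the wrap-around handling for yaw errors:

    import numpy as np

    # placeholder values for a few test pairs (not real outputs)
    gt_overlap = np.array([0.8, 0.1, 0.4])
    pred_overlap = np.array([0.75, 0.2, 0.35])
    print('overlap MAE:', np.mean(np.abs(gt_overlap - pred_overlap)))

    gt_yaw = np.array([10.0, 350.0, 90.0])    # degrees
    pred_yaw = np.array([12.0, 355.0, 80.0])
    diff = np.abs(gt_yaw - pred_yaw) % 360.0
    yaw_err = np.minimum(diff, 360.0 - diff)  # wrap-around aware
    print('mean yaw error (deg):', yaw_err.mean())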

Application

Our overlap-based Monte Carlo Localization repo contains the code for our IROS 2020 paper: Learning an Overlap-based Observation Model for 3D LiDAR Localization.

It uses OverlapNet to train an observation model for Monte Carlo Localization and achieves global localization with 3D LiDAR scans.

License

Copyright 2020, Xieyuanli Chen, Thomas Läbe, Cyrill Stachniss, Photogrammetry and Robotics Lab, University of Bonn.

This project is free software made available under the MIT License. For details see the LICENSE file.


Issues

Datasets

How can we get the relevant data? I only see the data structure given :(

Performance on 16 beam lidars

Hi Chen, I am really impressed by OverlapNet's excellent performance, even when using only cheap geometric information from point clouds. One main concern of mine is how it works on sparser point clouds, e.g., those produced by a VLP-16. It is really common in industrial settings that you cannot afford 64-beam LiDARs.
Thank you for your great work; I am looking forward to your reply.

ground truth

Thank you! But I have a question about the loop ground truth. How do you get the ground truth for loop closure detection in KITTI? The code in this project seems to be for obtaining the ground truth values used to train the network.

Questions about Evaluation Performance

Hi! Thanks for sharing your work! This work is quite interesting.
However, I have some confusion about the experiments.

  1. How about the performance on KITTI 08? As we know, KITTI 08 has reverse loops, which are much more challenging than those in KITTI 00, 02, 05. I didn't see results on KITTI 08.
  2. Scan Context is a great loop closure detection method and achieves state-of-the-art performance on the KITTI dataset. Are you going to report the results of a comparison with Scan Context?
  3. As we know, it is important for an algorithm to be robust to viewpoint changes (rotation in yaw) and occlusions. I didn't see the relevant evaluation in the paper.

Looking forward to your reply!

Best,
Xin

Reproducing paper results

Dear authors,

Thank you for your work!
I'm trying to reproduce the results reported in the paper using the provided pre-trained model.
I generated the preprocessed data and the ground truth using the demo 1 and demo 4 scripts, and I'm testing the network with the testing.py file.

Since the pre-trained model uses only depth and normals, I expected to obtain a mean rotation error of ~2.97° (as reported in Table V).
However, I'm getting a mean error of 8.90°.

Generated histograms: (images omitted)

PS: To generate the ground truth I'm using the poses provided with the SemanticKITTI dataset.

Question about covariance ?

Dear Sir,
You mentioned that the covariance data are included in the odometry folder. I wonder how you obtained the covariance from the KITTI dataset? Or could you point me to a reference?

Thank you for your attention~

Some questions about training models

Thanks for your amazing work. I have some questions about overlapnet.

  1. I found that demo 4 only generates the overlaps between the first frame and the other frames, but ground_truth_mapping[:,0] is set to len(scan_paths)-1.
  2. Did you only use the overlaps between the first frame and the other frames during training, or did you use other sampling methods?
  3. There is a random rotation step in the training code, but this function is disabled by default in the configuration file. Does this have an impact on the results?
  4. When testing with the pre-trained model, we found that random rotation of the data has a significant impact on the results, yet you stated that OverlapNet is rotation invariant.

How to generate train_set

How do you generate the train_set for each sequence?
For example, sequence 00 contains 4541 scans, so the total number of overlap pairs is 4541 x 4541 = 20620681.
In your split_train_val.py, the training and testing ratios are 0.9 and 0.1, so the number of training samples should be 20620681 x 0.9 = 18558612.9.
But in your train_set for sequence 00, the size of overlaps is 90738 x 3.

Using overlapnet for a different dataset

Hi,
I am referring to Overlapnet for my research, I am using the Oxford Newer College dataset for my study.

The dataset itself comes in rosbag file, I successfully saved the pointcloud msgs into .bin file format same as in the KITTI odometry dataset. Also, I created semantic_probs by inferring using rangenet_lib. I have attached few outputs of demo-1 here.

000001

000401

001100

Could you please tell me how I can verify the correctness of the cues?

Reproducing results of Fig.5 in paper

Hi!
Thanks for your awesome work!
I have some questions. When you use RangeNet++ to get semantic cues, are the parameters of RangeNet++ the same for the KITTI sequences and the Ford Campus sequences?
If they are different, how should I adjust the parameters?
Do I need to train RangeNet++ on the Ford Campus dataset?

Reproducing results of Fig.7 in paper

Hi,

Thanks for your awesome work!

I am wondering how I can get evaluation results for OverlapNet like those in Fig. 7. I am planning to compare several other methods with OverlapNet, and they use top-k recall and PR curves as metrics.

Some questions.

Thank you for your very nice work.
This is not an issue with the code, but I have general questions about OverlapNet, and I couldn't wait for RSS to officially start :)
In particular, the definition of the overlap and the overall idea helped me.

The questions are:

  1. If we do not have a semantic channel (which is a very common situation so far), is a performance degradation to be expected, and if so, how much do you expect?
  2. Does the algorithm support overlap estimation (and loop detection) between the same sensor type mounted on different systems (e.g., robot 1 and robot 2), or between different sessions?
  3. I'd like to know how resistant the system is, empirically, to the presence of dynamic objects. I think deep learning-based loop detection lends itself to this kind of object-aware or object-robust behavior.

thank you!

Questions about overlapnet

This is a great study.
I want to compare my method with yours on another dataset.

  1. I don't want to train the model myself, though. Can I use your trained model for a direct comparison?
  2. The multiple-cue input has 4 channels; can I use only the first three channels for verification? Obtaining semantic information is complicated for me.

Thank you and great work!

Question about computing overlap ground truth

Hi, thanks for your great work!

I have a question while looking into how you generate the ground truth (demo 4).
In the paper, you mention that you use P_1 and P_2 and compute the overlap value with Equation 3, and the result is Figure 2 (a).
However, I am still not quite sure how you compute the overlap ground truth shown in Figure 2 (c).
Looking into the com_overlap_yaw() function, you seem to compare the same point cloud under different poses.
What confuses me is that reference_range and current_range use the same point cloud, not P_1 and P_2.

It would be great if you could explain more. Thanks!

FileNotFoundError: [Errno 2] No such file or directory: 'config/demo.yml'

When I follow the authors' instructions to run demo 1, I get the following error:

Traceback (most recent call last):
  File "demo1_gen_data.py", line 69, in <module>
    config = yaml.load(open(config_filename))
FileNotFoundError: [Errno 2] No such file or directory: 'config/demo.yml'

I solved it by changing the version of pyyaml:

pip install pyyaml==5.4.1

After doing this, there is still a warning:

demo/demo1_gen_data.py:69: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(open(config_filename))

But it produces the range image correctly.

Recording this for anyone who may run into the same problem. :)

Way to integrate with ROS?

Hello, I was wondering whether there is a way to integrate this work with ROS. Say we provide the point cloud messages (scans) on a topic, and each message is split into 3/4 sets of data (normal, range, intensity and semantic) as input (for testing) to the model, and the model publishes the predicted loop closure candidates as a ROS message.
Do you have any ideas about this, or do you know whether someone has already done something regarding this matter?

Some confusion about real overlap calculation

Dear author,
I had some problems running your code for the overlap ground truth calculation. In sequence 00, frame 0 and frame 4449 cover the same region, but their ground truth overlap is only 0.14. (screenshots omitted)
I look forward to your reply. Thank you.

Questions about my test

Hi, I tested OverlapNet on other datasets and the results are shown in the pictures below. The robot is in the same place with different orientations. I want to know whether the orientation affects the output, because in reality a robot often comes back to a place with a different orientation. (screenshots omitted)

Pytorch version training.

Thanks for your great work!
I want to compare my loop closure method with OverlapNet, but I have problems running the code. I use the PyTorch version, and I have already generated the depth, intensity, and normal data. I want to compute a score for every pair, but I have no model weights. So I want to train a model, but I cannot find which file generates 'overlaps/train_set.npz'. Could you help me, or could you provide a pre-trained model? Thanks.

Preprocessing time

Dear Authors,

In your paper, the time required to generate the geometric input features is reported to be around 10 ms.
However, with this code on my computer it takes more than 2 seconds to generate the normal image.

Did you use a different implementation to achieve the runtime reported in your paper, or am I doing something wrong?

Problem in the first version

I used Ubuntu 18.04 and ran it in the terminal. But when I run demo2_infer.py, I encounter the following problem:

2022-11-03 20:27:14.318140: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
  File "/home/ydragon/Downloads/OverlapNet-master/demo/demo2_infer.py", line 15, in <module>
    from infer import *
  File "/home/ydragon/Downloads/OverlapNet-master/demo/../src/two_heads/infer.py", line 11, in <module>
    import keras
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/utils/__init__.py", line 25, in <module>
    from .multi_gpu_utils import multi_gpu_model
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/utils/multi_gpu_utils.py", line 7, in <module>
    from ..layers.merge import concatenate
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/layers/__init__.py", line 4, in <module>
    from ..engine import Layer
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/engine/__init__.py", line 3, in <module>
    from .topology import InputSpec
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/engine/topology.py", line 18, in <module>
    from .. import initializers
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/initializers/__init__.py", line 124, in <module>
    populate_deserializable_objects()
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
    generic_utils.populate_dict_with_module_objects(
AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'

Strangely, I didn't encounter this problem in demo1_gen_data.py with "from utils import *" and "import keras". What is wrong in demo2_infer.py and infer.py? Could you help me solve it? Thank you very much.

Questions when Generating train_set

Dear Author,
Thank you for sharing your work; I enjoy your paper very much.
I found some problems when I tried to generate the train_set under the ground_truth folder:

  1. The length of the array "train_set" (which is the result I need) is always shortened greatly by the normalization step, showing that the number of overlaps between 0.4 and 0.5 is quite small. If my understanding of your code is right, the training set will be extremely small (which it shouldn't be). For example, sequence 05 has 2761 frames, but during the process the size of the normalized data (printed by the function normalize_data) is only 31.
  2. I downloaded your provided 'train_set' files linked from the GitHub page of OverlapTransformer (which states: "the overlaps folder of each sequence below data_root_folder is provided by the authors of OverlapNet here"). However, I found that the files are extremely large, causing slow training. For example, sequence 03 has only 801 frames. In theory, since demo 4 only calls the function com_overlap_yaw once with frame_idx = 0, the maximum length of train_set should be smaller than 801. However, when I check the length of the array, it is an astonishing 16218. I don't understand where that came from. I guess I've missed something, but I can't figure out what it is.

Looking forward to your reply!

Question about the semantic_prob data

Hi, I have a question about the semantic_prob data.

I would like to test with other scan data in KITTI, but I get an error from the semantic probs: "cannot reshape array of size # into shape (20)".

So I have the questions below.

  1. Are the label data in 'data/semantic_probs' the RangeNet++ results?

  2. If the answer to question 1 is yes, what is the difference between the RangeNet++ label results and the SemanticKITTI label data?

Thank you.

Question about how to judge a loop closure True Positive or not

Hi, thanks for your great work!

I have a question about how to judge whether a loop closure is a true positive or not. Some papers use the distance between two frames, counting a detection as a true positive if the distance is less than 3 or 4 meters. Does OverlapNet use this method, or does it count a detection as true if the overlap is larger than a threshold?

It would be great if you could explain more. Thanks!

Some confusion about normalize_data.py

Dear author, when generating the training and validation sets, there are three versions of normalize_data.py, as follows:

  1. in the TensorFlow version of OverlapNet: OverlapNet/src/utils/normalize_data.py
  2. in the PyTorch version of OverlapNet: OverlapNet/tools/utils/normalize_data.py
  3. in overlap_localization: overlap_localization/src/prepare_training/normalize_data.py

(screenshots omitted)

I want to know which version you used in the paper to keep the number of samples the same across different bins.

It would be great if you could explain more. Thank you very much.

Questions about training procedure

Hi, thank you for your interesting work.

I am trying to train a model in order to reproduce the results of your paper on the KITTI odometry dataset. I followed the steps described in this repository and trained the model with multiple KITTI sequences, also exploiting intensity and semantic information. However, the performance I obtained is not good.
Therefore my questions are:

  1. Are there network parameters that need to be changed with respect to the default ones?

  2. I noticed that in demo 4, when the ground truth is generated for sequence 07, a data normalization step is performed to balance the data with respect to the overlap rate. However, it seems that the provided example is calibrated for that run, so how can I perform this step when multiple sequences are considered?
     For example, some KITTI sequences have only a few samples with an overlap rate >0.5 (e.g., sequence 03 contains only one such sample), and at the moment I perform the balancing by considering the overlap distribution across all the KITTI sequences.

  3. Can you explain what the use_class_probabilities_pca parameter (network.yml) is?

If you could give me any suggestions, that would be great! :)

Thank you and great work!

Some errors encountered during training

Dear author,
when I try to train the model, I always encounter the following error:

Traceback (most recent call last):
  File "src/two_heads/training.py", line 351, in <module>
    model.save(weights_filename)
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/engine/topology.py", line 2580, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/models.py", line 111, in save_model
    'config': model.get_config()
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/engine/topology.py", line 2353, in get_config
    layer_config = layer.get_config()
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/layers/convolutional.py", line 471, in get_config
    config = super(Conv2D, self).get_config()
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/layers/convolutional.py", line 231, in get_config
    'bias_initializer': initializers.serialize(self.bias_initializer),
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/initializers/__init__.py", line 132, in serialize
    return generic_utils.serialize_keras_object(initializer)
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/utils/generic_utils.py", line 131, in serialize_keras_object
    'config': instance.get_config()
TypeError: get_config() missing 1 required positional argument: 'self'

Although I have tried various solutions, I still can't solve this problem. Thank you very much for your advice.
Best wishes.
