pglira / simpleicp

Implementations of a rather simple version of the Iterative Closest Point algorithm in various languages.

License: MIT License

simpleICP

This repo contains implementations of a rather simple version of the Iterative Closest Point (ICP) algorithm in various languages.

Currently, an implementation is available for:

Language   Code   Main dependencies
C++        Link   nanoflann, Eigen, cxxopts
Julia      Link   NearestNeighbors.jl
Matlab     Link   Statistics and Machine Learning Toolbox
Octave     Link   -
Python     Link   NumPy, SciPy, lmfit, pandas

I've tried to optimize the code for readability, i.e. the code structure is kept as simple as possible and tests are deliberately sparse.

The C++ version can be used through a command-line interface (CLI).

Also available at:

  • Matlab: View simpleICP on File Exchange
  • Python: Downloads

Features of the ICP algorithm

Basic features

The following basic features are implemented in all languages:

  • Usage of the signed point-to-plane distance (instead of the point-to-point distance) as error metric. Main reasons:

    • higher convergence speed, see e.g. here and here
    • better final point cloud alignment (under the assumption that both point clouds are differently sampled, i.e. no real point-to-point correspondences exist)
  • Estimation of a rigid-body transformation (rotation + translation) for the movable point cloud. The final transformation is given as homogeneous transformation matrix H:

    H = [R(0,0) R(0,1) R(0,2)   tx]
        [R(1,0) R(1,1) R(1,2)   ty]
        [R(2,0) R(2,1) R(2,2)   tz]
        [     0      0      0    1]
    

    where R is the rotation matrix and tx, ty, and tz are the components of the translation vector. Using H, the movable point cloud can be transformed with:

    Xt = H*X
    

    where X is a 4-by-n matrix holding in each column the homogeneous coordinates x, y, z, 1 of a single point, and Xt is the resulting 4-by-n matrix with the transformed points.

  • Selection of a fixed number of correspondences between the fixed and the movable point cloud. Default is correspondences = 1000.

  • Automatic rejection of potentially wrong correspondences on the basis of

    1. the median of absolute deviations. A correspondence i is rejected if |dist_i-median(dists)| > 3*sig_mad, where sig_mad = 1.4826*mad(dists).
    2. the planarity of the plane used to estimate the normal vector (see below). The planarity is defined as P = (ev2-ev3)/ev1 (ev1 >= ev2 >= ev3), where ev are the eigenvalues of the covariance matrix of the points used to estimate the normal vector. A correspondence i is rejected if P_i < min_planarity. Default is min_planarity = 0.3.
  • After each iteration a convergence criterion is tested: if the mean and the standard deviation of the point-to-plane distances change by less than min_change percent, the iteration is stopped. Default is min_change = 1.

  • The normal vector of the plane (needed to compute the point-to-plane distance) is estimated from the fixed point cloud using a fixed number of neighbors. Default is neighbors = 10.

  • The point clouds do not need to overlap completely, i.e. a partial overlap of the point clouds is allowed. An example of such a case is the Bunny dataset, see here. The initial overlapping area between two point clouds can be defined by the parameter max_overlap_distance. More specifically, correspondences are only selected among those points of the fixed point cloud whose initial distance to the nearest neighbor in the movable point cloud is <= max_overlap_distance.
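The two correspondence-rejection rules above can be sketched in a few lines of NumPy (standalone helper functions written for illustration only; the function names are not part of the package's actual API):

```python
import numpy as np

def reject_by_mad(dists):
    """Keep correspondence i if |dist_i - median(dists)| <= 3*sig_mad,
    with sig_mad = 1.4826 * mad(dists)."""
    med = np.median(dists)
    sig_mad = 1.4826 * np.median(np.abs(dists - med))
    return np.abs(dists - med) <= 3 * sig_mad

def planarity(neighbors_xyz):
    """P = (ev2 - ev3) / ev1 from the eigenvalues ev1 >= ev2 >= ev3 of the
    covariance matrix of the points used to estimate the normal vector."""
    C = np.cov(neighbors_xyz.T)
    ev = np.sort(np.linalg.eigvalsh(C))[::-1]  # sort descending: ev1 >= ev2 >= ev3
    return (ev[1] - ev[2]) / ev[0]
```

A well-spread, perfectly planar neighborhood yields P close to 1, while a collinear one yields P close to 0, which is why correspondences with P_i < min_planarity are rejected.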

Extended features

The extended features are currently not implemented in all languages. The differences are documented in the following table:

Feature                                               C++  Julia  Matlab  Octave  Python
observation of rigid-body transformation parameters  no   no     no      no      yes

Extended feature: observation of rigid-body transformation parameters

This is useful in at least these cases:

  1. If only a subset of the 6 rigid-body transformation parameters should be estimated. This can be accomplished by setting the weight of individual parameters to infinity, see example below.

  2. If all or a subset of the 6 rigid-body transformation parameters have been directly observed in any other way, e.g. by means of a manual measurement.

  3. If estimates for the rigid-body transformation parameters exist, e.g. from a previous run of simpleICP. In this case the observation weight should be set (according to the theory of least squares adjustment) to w = 1/observation_error^2, where observation_error is defined as std(observation_value). The observation error of all parameters is reported by simpleICP as "est.uncertainty" in the logging output.
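As a numeric illustration of this weight rule (the 0.005 below is a made-up value, not output of an actual run):

```python
# illustrative: a prior estimate of a translation component with std 0.005,
# e.g. taken from the "est.uncertainty" logging output of a previous run
observation_error = 0.005          # std(observation_value)
w = 1 / observation_error**2       # observation weight, approximately 40000
```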

This feature introduces two new parameters: rbp_observed_values and rbp_observation_weights. Both parameters have exactly 6 elements which correspond to the rigid-body transformation parameters in the following order:

  1. alpha1: rotation angle around the x-axis
  2. alpha2: rotation angle around the y-axis
  3. alpha3: rotation angle around the z-axis
  4. tx: x component of translation vector
  5. ty: y component of translation vector
  6. tz: z component of translation vector

The rigid-body transformation is defined in non-homogeneous coordinates as follows:

Xt = RX + t

where X and Xt are 3-by-n matrices holding in each column a point of the original and transformed movable point cloud, respectively, t is the translation vector (added column-wise), and R the rotation matrix. R is thereby defined as:

R = [ca2*ca3               -ca2*sa3                sa2    ]
    [ca1*sa3+sa1*sa2*ca3    ca1*ca3-sa1*sa2*sa3   -sa1*ca2]
    [sa1*sa3-ca1*sa2*ca3    sa1*ca3+ca1*sa2*sa3    ca1*ca2]

with the substitutions:

sa1 := sin(alpha1), ca1 := cos(alpha1)
sa2 := sin(alpha2), ca2 := cos(alpha2)
sa3 := sin(alpha3), ca3 := cos(alpha3)
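As a cross-check, the matrix above can be written directly in NumPy (a sketch with an assumed function name, not the package's internal implementation; angles in radians):

```python
import numpy as np

def euler_to_rotation_matrix(alpha1, alpha2, alpha3):
    """Build R from the three rotation angles, exactly as defined above."""
    sa1, ca1 = np.sin(alpha1), np.cos(alpha1)
    sa2, ca2 = np.sin(alpha2), np.cos(alpha2)
    sa3, ca3 = np.sin(alpha3), np.cos(alpha3)
    return np.array([
        [ca2 * ca3,                   -ca2 * sa3,                    sa2       ],
        [ca1 * sa3 + sa1 * sa2 * ca3,  ca1 * ca3 - sa1 * sa2 * sa3, -sa1 * ca2],
        [sa1 * sa3 - ca1 * sa2 * ca3,  sa1 * ca3 + ca1 * sa2 * sa3,  ca1 * ca2],
    ])
```

For any choice of angles the result is orthogonal with determinant 1, as required for a rigid-body rotation.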

The two parameters rbp_observed_values and rbp_observation_weights can be used to introduce an additional observation to the least squares optimization for each transformation parameter:

residual = observation_weight * (estimated_value - observed_value)

Example which demonstrates the most important combinations:

# parameters:              alpha1   alpha2   alpha3   tx      ty     tz
rbp_observed_values =     (10.0     0.0     -5.0      0.20   -0.15   0.0)
rbp_observation_weights = (100.0    0.0      0.0      40.0    40.0   inf)

Consequently:

  • alpha1: is observed to be 10 degrees with an observation weight of 100.

  • alpha2: is not observed since the corresponding weight is zero. However, the observed value is used as initial value for alpha2 in the non-linear least squares optimization.

  • alpha3: is also not observed, but has an initial value of -5 degrees.

  • tx: is observed to be 0.20 with an observation weight of 40.

  • ty: is observed to be -0.15 with an observation weight of 40.

  • tz: is observed to be 0 with an infinite observation weight, i.e. this parameter becomes a constant and is fixed to be exactly the observation value. Thus, in this case only 5 (out of 6) rigid-body transformation parameters are estimated.
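The following sketch shows how these two parameters could enter the optimization, reusing the values from the example above (illustrative only; observation_residuals is a hypothetical helper, not the package's internal code):

```python
import numpy as np

# values from the example above
rbp_observed_values     = np.array([10.0, 0.0, -5.0, 0.20, -0.15, 0.0])
rbp_observation_weights = np.array([100.0, 0.0, 0.0, 40.0, 40.0, np.inf])

def observation_residuals(estimated):
    """Additional residuals: observation_weight * (estimated - observed).
    Parameters with infinite weight are treated as constants."""
    fixed = np.isinf(rbp_observation_weights)
    # infinite weight: clamp the parameter to its observed value
    estimated = np.where(fixed, rbp_observed_values, estimated)
    free = ~fixed
    # zero-weight parameters contribute nothing (their observed value only
    # serves as initial value for the non-linear optimization)
    residuals = rbp_observation_weights[free] * (estimated[free] - rbp_observed_values[free])
    return residuals, estimated
```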

Output

All implementations generate the same screen output. This is an example from the C++ version for the Dragon dataset:

$ run_simpleicp.sh
Processing dataset "Dragon"
Create point cloud objects ...
Select points for correspondences in fixed point cloud ...
Estimate normals of selected points ...
Start iterations ...
Iteration | correspondences | mean(residuals) |  std(residuals)
   orig:0 |             767 |          0.0001 |          0.3203
        1 |             767 |         -0.0061 |          0.2531
        2 |             773 |         -0.0035 |          0.1669
        3 |             771 |         -0.0008 |          0.0835
        4 |             741 |         -0.0006 |          0.0196
        5 |             762 |          0.0000 |          0.0025
        6 |             775 |          0.0001 |          0.0022
Convergence criteria fulfilled -> stop iteration!
Estimated transformation matrix H:
[    0.998696     0.052621    -0.034179    -0.206737]
[   -0.052090     0.999028     0.020119    -0.408088]
[    0.034822    -0.018663     0.999436    -0.593361]
[    0.000000     0.000000     0.000000     1.000000]
Finished in 1.729 seconds!

Test data sets

The test data sets are included in the data subfolder. An example call for each language can be found in the run_simpleicp.* files, e.g. run_simpleicp.jl for the Julia version.

Dataset            pc1 (no_pts)  pc2 (no_pts)  Overlap          Source
Dragon             pc1 (100k)    pc2 (100k)    full overlap     The Stanford 3D Scanning Repository
Airborne Lidar     pc1 (1340k)   pc2 (1340k)   full overlap     Airborne Lidar flight campaign over Austrian Alps
Terrestrial Lidar  pc1 (1250k)   pc2 (1250k)   full overlap     Terrestrial Lidar point clouds of a stone block
Bunny              pc1 (21k)     pc2 (22k)     partial overlap  The Stanford 3D Scanning Repository

Benchmark

These are the runtimes on my PC for the data sets above:

Dataset            C++    Julia  Matlab  Octave*  Python
Dragon             0.16s  3.99s  1.34s   95.7s    4.51s
Airborne Lidar     3.98s  5.38s  15.08s  -        16.49s
Terrestrial Lidar  3.62s  5.22s  13.24s  -        14.45s
Bunny              0.13s  0.38s  0.37s   72.8s    4.20s

For all versions the same input parameters (correspondences, neighbors, ...) are used.

* Unfortunately, I haven't found a k-d tree implementation for Octave (it is not yet implemented in the Statistics package). Thus, a (very time-consuming!) exhaustive nearest-neighbor search is used instead. For the larger datasets the Octave timings are missing, as the distance matrix does not fit into memory.

References

Please cite related papers if you use this code:

@article{glira2015a,
  title={A Correspondence Framework for ALS Strip Adjustments based on Variants of the ICP Algorithm},
  author={Glira, Philipp and Pfeifer, Norbert and Briese, Christian and Ressl, Camillo},
  journal={Photogrammetrie-Fernerkundung-Geoinformation},
  volume={2015},
  number={4},
  pages={275--289},
  year={2015},
  publisher={E. Schweizerbart'sche Verlagsbuchhandlung}
}

Related projects

  • globalICP: A multi-scan ICP implementation for Matlab


simpleicp's People

Contributors

chriscline, mitjap, pglira


simpleicp's Issues

Not having the correct transformation Matrix

Hi, I'm trying to use your code on point clouds with few points (about 10-20 points per cloud). As a test, I use a .xyz file containing the 20 vertices of a dodecahedron (fixed point cloud). I created a second file from it where I shifted each x coordinate by adding +10 (movable point cloud).
I'm feeding the algorithm these two files, and it gives me back this matrix:
H =
[ 0.930487 0.366326 0.000000 -9.831626]
[ -0.366326 0.930487 0.000000 0.130998]
[ -0.000000 -0.000000 1.000000 0.000000]
[ 0.000000 0.000000 0.000000 1.000000]
I saw in the README that, to move a point onto the position of the fixed point cloud, I have to apply this operation:
Xt = H*X
So I'm doing:
X = [10.57735027; 0.57735027; 0.57735027; 1] (coordinates of the first point in the movable point cloud)
H =
[ 0.930487 0.366326 0.000000 -9.831626]
[ -0.366326 0.930487 0.000000 0.130998]
[ -0.000000 -0.000000 1.000000 0.000000]
[ 0.000000 0.000000 0.000000 1.000000]

But when I try it, I don't recover the correct point:
Xt = X*A
Xt = [0.2220
-3.2065
0.5774
1.0000]

If I understand correctly, I should get:
xt = 0.57735027 0.57735027 0.57735027 1, which is the coordinate of the first point in the fixed point cloud.

So I'm wondering whether the algorithm is not working properly, or whether I'm not using it correctly.

Here are the two point clouds and some screenshots.
dode1.xyz (fixed point cloud):
0.57735027 0.57735027 0.57735027
0.57735027 0.57735027 -0.57735027
0.57735027 -0.57735027 0.57735027
0.57735027 -0.57735027 -0.57735027
-0.57735027 0.57735027 0.57735027
-0.57735027 0.57735027 -0.57735027
-0.57735027 -0.57735027 0.57735027
-0.57735027 -0.57735027 -0.57735027
0.35682209 0.93417236 0.00000000
-0.35682209 0.93417236 0.00000000
0.35682209 -0.93417236 0.00000000
-0.35682209 -0.93417236 0.00000000
0.93417236 0.00000000 0.35682209
0.93417236 0.00000000 -0.35682209
-0.93417236 0.00000000 0.35682209
-0.93417236 0.00000000 -0.35682209
0.00000000 0.35682209 0.93417236
0.00000000 -0.35682209 0.93417236
0.00000000 0.35682209 -0.93417236
0.00000000 -0.35682209 -0.93417236

dode2.xyz (movable point cloud):
10.57735027 0.57735027 0.57735027
10.57735027 0.57735027 -0.57735027
10.57735027 -0.57735027 0.57735027
10.57735027 -0.57735027 -0.57735027
9,42264973 0.57735027 0.57735027
9,42264973 0.57735027 -0.57735027
9,42264973 -0.57735027 0.57735027
9,42264973 -0.57735027 -0.57735027
10.35682209 0.93417236 0.00000000
9,64317791 0.93417236 0.00000000
10.35682209 -0.93417236 0.00000000
9,64317791 -0.93417236 0.00000000
10.93417236 0.00000000 0.35682209
10.93417236 0.00000000 -0.35682209
9,06582764 0.00000000 0.35682209
9,06582764 0.00000000 -0.35682209
10.00000000 0.35682209 0.93417236
10.00000000 -0.35682209 0.93417236
10.00000000 0.35682209 -0.93417236
10.00000000 -0.35682209 -0.93417236

Here is an image of the result of the matrix multiplication (done in MATLAB Online):
[screenshot: multiplication of matrix for finding new points]

And here the algorithm:
[screenshot: Capture d'écran du 2023-05-03 16-53-10]

If you have any clue about it, it would help me!

[Python] Exception performing ICP

In some cases the algorithm crashes while performing ICP on two similar meshes. The same two point clouds (sets of vertices) can converge to a result, and sometimes crash when the mesh has another set of world transforms.

Error: Python: Traceback (most recent call last):
  File "/Users/__USER__/model.blend/Text", line 163, in <module>
  File "/Applications/Blender.app/Contents/Resources/3.2/python/lib/python3.10/site-packages/simpleicp/simpleicp.py", line 211, in run
    distance_residuals_new = optim.estimate_parameters()
  File "/Applications/Blender.app/Contents/Resources/3.2/python/lib/python3.10/site-packages/simpleicp/optimization.py", line 120, in estimate_parameters
    weighted_distance_residuals = self._optim_results.residual[
TypeError: 'float' object is not subscriptable

rejection of potentially wrong correspondences

Thank you for your work on simpleICP. I have used this method to register the 3D point clouds in the given datasets, and the results are good. However, when I use it to register 2D point clouds, the method can't obtain the right result. I have extended the 2D points (x, y) to 3D points (x, y, z) with z=0, but it still can't compute a meaningful planarity, which causes all selected correspondences to be rejected by the condition (P_i = (ev2-ev3)/ev1 and P_i < min_planarity).
I would like to know how planarity is defined when the dimension decreases to 2D, and what the rejection criterion for planarity should be in that case.

No transform for Lidar Scans

I am attempting to use this ICP algorithm as part of my SLAM model but every H found states my robot is not moving (H = I).

Setup: I have 2D Lidar with a 180 degree FOV that publishes a 150 point pointcloud @ 10Hz. I would like to find some H between each Lidar Scan (or at least between some) to correct my dynamics model.

Attached are two lidar scans taken consecutively (I had to convert them to .txt for upload). I have noticed that after the reject() function there are no correspondences left, so perhaps I just need to tune the parameters? I have tried adjusting the parameters a bit (like setting min_planarity arbitrarily high) but have had no luck.

I appreciate the help!

test_point_cloud (copy).txt
test_point_cloud_minus_one (copy).txt

Pytorch Point to Plane similar to SimpleICP

Hi, I tried out simpleICP and it seems to work very well for my problem. The only issue for me is that it's very slow due to not being batched (I need a distribution of ICP results). Do you know of any PyTorch implementations of point-to-plane ICP? I believe pytorch3d's implementation is point-to-point and produces much worse fits than simpleICP. If not, I am interested in porting this Python implementation to PyTorch to support batching.

module import results in ValueError

In Python 3.11.2, installed using pip install simpleicp, simply importing the module results in the following error:

>>> import simpleicp
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/iremaltan/.pyenv/versions/3.11.2/lib/python3.11/site-packages/simpleicp/__init__.py", line 6, in <module>
    from .simpleicp import SimpleICP
  File "/Users/iremaltan/.pyenv/versions/3.11.2/lib/python3.11/site-packages/simpleicp/simpleicp.py", line 17, in <module>
    from . import corrpts, mathutils, optimization, pointcloud
  File "/Users/iremaltan/.pyenv/versions/3.11.2/lib/python3.11/site-packages/simpleicp/optimization.py", line 329, in <module>
    @dataclass
     ^^^^^^^^^
  File "/Users/iremaltan/.pyenv/versions/3.11.2/lib/python3.11/dataclasses.py", line 1220, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "/Users/iremaltan/.pyenv/versions/3.11.2/lib/python3.11/dataclasses.py", line 1210, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/iremaltan/.pyenv/versions/3.11.2/lib/python3.11/dataclasses.py", line 958, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/iremaltan/.pyenv/versions/3.11.2/lib/python3.11/dataclasses.py", line 815, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'simpleicp.optimization.Parameter'> for field alpha1 is not allowed: use default_factory

The problem in the code seems to be in the RigidBodyParameters class definition, where alpha1 and the other fields default to an instance of the Parameter class, which is mutable.

Identity matrix when point clouds are planar?

I am feeding the algorithm two point clouds where all the points lie in the same plane. For example, these are subsets of both:

Cloud1:

79.9322  -29.1766         0
  79.0491  -29.9871         0
  77.9883  -30.5455         0
  76.8435   -30.901         0
  75.6688  -31.1422         0
   74.497  -31.3972         0
   73.326  -31.6563         0
  72.1549  -31.9149         0
  70.9831  -32.1699         0
  69.8104   -32.421         0
  68.6367  -32.6678         0
  67.4619  -32.9089         0
  66.2842  -33.1353         0
  65.1034  -33.3449         0

Cloud2:

 -53.1773    -46.1855           0
 -52.0631    -46.0045           0
 -50.9379    -45.8592           0
  -49.806    -45.7634           0
 -48.6745    -45.6633           0
 -47.5434    -45.5648           0
 -46.4184    -45.4074           0
 -45.2863      -45.32           0
 -44.1639    -45.1455           0
 -44.1639    -45.1455           0
 -42.9911    -45.0785           0
 -41.8164    -45.0597           0
 -40.6471    -44.9518           0
 -39.4825    -44.7987           0
  -38.313    -44.6947           0

The icp method for this data is returning the identity matrix, i.e.

Estimated transformation matrix H:
[    1.000000     0.000000     0.000000     0.000000]
[    0.000000     1.000000     0.000000     0.000000]
[    0.000000     0.000000     1.000000     0.000000]
[    0.000000     0.000000     0.000000     1.000000]

With iterations:

Iteration | correspondences | mean(residuals) |  std(residuals)
   orig:0 |            1582 |          0.0000 |          0.0000
        1 |            1582 |          0.0000 |          0.0000
        2 |            1582 |          0.0000 |          0.0000
        3 |            1582 |          0.0000 |          0.0000
        4 |            1582 |          0.0000 |          0.0000
        5 |            1582 |          0.0000 |          0.0000
        6 |            1582 |          0.0000 |          0.0000
        7 |            1582 |          0.0000 |          0.0000
        8 |            1582 |          0.0000 |          0.0000
        9 |            1582 |          0.0000 |          0.0000
       10 |            1582 |          0.0000 |          0.0000

I tried multiple different parameters and I always get the identity despite the point clouds being clearly different. I was wondering if you could help me identify the issue.

Unexpected keyword "workers" (scipy)

I installed the simpleicp package with pip (Python) and tested the code presented in the documentation, and ran into this problem. Can you please help me?
[screenshot of the error]

(Python) Singular matrix

Hi, thanks for offering this convenient implementation. However, I tried some random inputs and often found that it reported a singular matrix error. See below.
[screenshot of the error]

(C++) rotation in transformation matrix is incorrect

When building the transformation matrix from 'alpha1', 'alpha2', 'alpha3', the rotation part is not correct. 'alpha1', 'alpha2', 'alpha3' are Euler angles and cannot simply be inserted into the matrix like this:

H(0, 1) = -alpha3;

The rotation part of the matrix should instead be calculated the same way as in the Python version:

def euler_angles_to_rotation_matrix(

Backpropagation in Python

Hi, thanks for the implementation!

I hope to adopt your algorithm in a deep learning framework, which requires backpropagation in PyTorch. However, your algorithm is written in NumPy, so I suppose it does not accumulate gradients and cannot be used in deep learning projects?

Verbose option choice

It would be great to have the possibility to choose whether output is verbose or not in the Python version.

We should replace print with Python's logging functionality.

Does this implementation not work for 2d points?

I did some testing with the Python implementation and it seems to fail with 2D point clouds. Is this expected? If so, it would be good if that were spelt out more clearly in the docs; if it is there, I didn't see it. 2D support might be a helpful feature to have.

A PointCloud must have three columns

In [35]: pc_zfix = PointCloud(x_f[:,0:2], columns=["x", "y"])
---------------------------------------------------------------------------
PointCloudException                       Traceback (most recent call last)
Cell In[35], line 1
----> 1 pc_zfix = PointCloud(x_f[:,0:2], columns=["x", "y"])

File ~/mambaforge/envs/icp/lib/python3.10/site-packages/simpleicp/pointcloud.py:42, in PointCloud.__init__(self, remapping, *args, **kwargs)
     40 for coordinate in ("x", "y", "z"):
     41     if coordinate not in self:
---> 42         raise PointCloudException(
     43             f'Column "{coordinate}" is missing in DataFrame.'
     44         )
     46 self._num_points = len(self)
     48 if "selected" not in self:

PointCloudException: Column "z" is missing in DataFrame.

Setting Z to zero causes errors when running ICP

I can just make a dummy Z column and set all its values to zero, but when I do that I see a lot of "invalid value encountered in scalar divide" messages and it does not produce a meaningful answer. I tried this with both my own dataset and the bunny dataset.

(Python) Calculation of RMSE between point clouds

Hi!
I am attempting to detect differences between multiple clouds, and I am afraid I am a little confused: does your Python implementation generate any products that would allow me to calculate the RMSE between the final estimate given by the algorithm and my target cloud? What I am looking for are the inlier correspondences, and I am unsure how to generate or find them from the outputs.

Apologies if it is obvious; I would be very grateful if you could point it out to me, or, if there is a process involved, give me a rundown of what I need to do.

Cheers and thanks in advance,
J

using in unity

Hello, I want to aggregate depth scans from a game camera in Unity.

Do you have a C# implementation I could use, or is there any way I could use your code? I really need to do this and I don't understand the other tutorials out there.

Every 10 frames I would take a "picture" and get the depth from the camera, then turn that into a point cloud and use, for example, your implementation to build a point cloud of my scene.

I hope you can help me @pglira
