
ayooshkathuria / yolo_v3_tutorial_from_scratch

2.3K stars · 726 forks · 1.89 MB

Accompanying code for Paperspace tutorial series "How to Implement YOLO v3 Object Detector from Scratch"

Home Page: https://blog.paperspace.com/how-to-implement-a-yolo-object-detector-in-pytorch/

Python 100.00%
object-detection pytorch-implmention pytorch-tutorial yolo yolov3

yolo_v3_tutorial_from_scratch's People

Contributors

ayooshkathuria, e-sha


yolo_v3_tutorial_from_scratch's Issues

Different results using my own training model

Hi @ayooshkathuria, I recently trained my own model on ImageNet2015 using the C source code from the official site. When I tested the model with the official C code on some images, I got normal results, which means the model itself is fine. However, when I tested the same model with your code, the results were different and poor. I don't know why; I have already changed the .cfg file, the .weights file, and the class name file.
This is the result I got using the official C code:
[image]
But if I run your code with my model, no car is detected:
[image]

Different results from the original darknet

Hi, I have run this code and the original darknet on the same image, but I got different results. I don't understand why, because the cfg file and the weights file are the same.
This is the result from running the original darknet:
[image: predictions]

This is the result from running this code:
[image: det_000000]
Why?

Save detected video

How can I save the detected video? I am using video_detect.append(frame) instead of cv2.imshow("frame", frame) to collect the detected frames. I tried to combine them into a video with the following code:

video_detection_save = cv2.VideoWriter('video.avi', -1, 20, (416, 416))
for j in range(frames):
    video_detection_save.write(video_detect[j])

cv2.destroyAllWindows()
video_detection_save.release()

This is my loop over the frames:
while cap.isOpened():
    ret, frame = cap.read()

    if ret:
        img = prep_image(frame, inp_dim)
        # cv2.imshow("a", frame)
        im_dim = frame.shape[1], frame.shape[0]
        im_dim = torch.FloatTensor(im_dim).repeat(1, 2)

        if CUDA:
            im_dim = im_dim.cuda()
            img = img.cuda()

        with torch.no_grad():
            output = model(Variable(img, volatile=True), CUDA)
        output = write_results(output, confidence, num_classes, nms_conf=nms_thesh)

        if type(output) == int:
            frames += 1
            # cv2.imshow("frame", frame)
            video_detect.append(frame)
            key = cv2.waitKey(1)
            if key & 0xFF == ord('q'):
                break
            continue

        im_dim = im_dim.repeat(output.size(0), 1)
        scaling_factor = torch.min(416/im_dim, 1)[0].view(-1, 1)

        output[:, [1, 3]] -= (inp_dim - scaling_factor*im_dim[:, 0].view(-1, 1))/2
        output[:, [2, 4]] -= (inp_dim - scaling_factor*im_dim[:, 1].view(-1, 1))/2

        output[:, 1:5] /= scaling_factor

        for i in range(output.shape[0]):
            output[i, [1, 3]] = torch.clamp(output[i, [1, 3]], 0.0, im_dim[i, 0])
            output[i, [2, 4]] = torch.clamp(output[i, [2, 4]], 0.0, im_dim[i, 1])

        classes = load_classes('data/coco.names')
        colors = pkl.load(open("pallete", "rb"))

        list(map(lambda x: write(x, frame), output))

        video_detect.append(frame)
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q'):
            FPS = frames // (time.time() - start)
            break
        frames += 1

    else:
        FPS = frames // (time.time() - start)
        break

Please help me solve this problem.
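One likely culprit, offered as an assumption rather than a confirmed diagnosis: cv2.VideoWriter silently produces an empty file when the frame size given to its constructor does not match the frames actually written, and the frames collected above are at the capture resolution, not (416, 416). A minimal sketch of a writer that matches the capture (reusing cap and video_detect from the snippets above):

import cv2

fps = cap.get(cv2.CAP_PROP_FPS) or 20.0             # fall back to 20 if the source reports 0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))      # writer size must match the written frames
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"XVID")            # explicit codec instead of -1

writer = cv2.VideoWriter("video.avi", fourcc, fps, (width, height))
for frame in video_detect:                          # frames collected in the loop above
    writer.write(frame)
writer.release()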

Hard coded resolution in video.py and detect.py lead to wrong bounding boxes

Hi,

First of all, thanks for the great tutorial!

I think video.py, line 134, and detect.py, line 164:
scaling_factor = torch.min(416/im_dim,1)[0].view(-1,1)
are using a hard-coded value for the image resolution (see the default value of the resolution parameter).

When I used this implementation with my own YOLOv3 net, the bounding boxes were not drawn in the proper locations, since I had set the resolution parameter to 960.

Changing the lines to:
scaling_factor = torch.min(int(args.reso)/im_dim,1)[0].view(-1,1)
solved the problem for me.

Best Regards,

Oliver

Not able to trace this indexing error in detector. PyTorch 0.4.

Traceback (most recent call last):
  File "detector.py", line 177, in <module>
    im_dim_list = torch.index_select(im_dim_list, 0, output[:,0].long())
RuntimeError: index out of range at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/TH/generic/THTensorMath.c:343

Printing:
im_dim_list: tensor([[ 602., 452., 602., 452.]])
output: tensor([[ 4.0000, 67.9070, 164.1937, 174.7524, 386.2670, 0.9999, 0.9997, 16.0000]])

What am I missing?

Getting an error while running the tutorial code (part 2)

File "darknet.py", line 145, in create_modules
anchors = x["anchors"].split(",")
TypeError: string indices must be integers, not str

I get this error while running:
blocks = parse_cfg("cfg/yolov3.cfg")
print(create_modules(blocks))

ValueError: not enough values to unpack (expected 2, got 1)

Hi, I ran into trouble during chapter 2 of the tutorial.
When I reached this part:

=============================================
Testing the code

You can test your code by typing the following lines at the end of darknet.py and running the file.
blocks = parse_cfg("cfg/yolov3.cfg")
print(create_modules(blocks))

=============================================

an error occurred:

=============================================
File "darknet.py", line 147, in <module>
    blocks = parse_cfg("cfg/yolov3.cfg")
File "darknet.py", line 26, in parse_cfg
    key, value = line.split("=")
ValueError: not enough values to unpack (expected 2, got 1)
=============================================

I really need help, thanks.
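A hedged guess at the cause (an assumption, not a confirmed fix): this unpack error usually means a line that is not of the form key = value (a blank line, a comment, or a [section] header) reached the split. A minimal sketch of the filtering parse_cfg is meant to do before splitting:

with open("cfg/yolov3.cfg") as f:
    lines = [line.strip() for line in f]

# drop blank lines and comments before any splitting
lines = [line for line in lines if line and not line.startswith('#')]

for line in lines:
    if line.startswith('['):             # a section header such as [convolutional]
        continue                         # handled separately, never split on '='
    key, value = line.split("=", 1)      # split on the first '=' only
    key, value = key.strip(), value.strip()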

Problem with object detection in YOLO when changing the model

When I run detection with yolov3.cfg, yolov3.weights, and coco.names (80-class list), everything works well on video and images, and the accuracy is fine too. But if I change the cfg to yolov3-openimages.cfg, the weights to yolov3-openimages.weights, and the names to openimages.names (601-class list), the accuracy is very bad. In images and videos all classes are over-generalized: a bee is detected as an animal, a horse as an animal, cars as vehicles, and many other objects are not recognized at all.

I tried playing with the NMS IoU threshold, changing it from 0.2 to 0.9 (default 0.4), but without success. Does anyone have tips on why the accuracy is so bad with pretrained models and the config from the official repo?

RuntimeError in util.py: torch.cuda.FloatTensor

In the util.py file I am getting an error at the line

prediction[:,:,:2] += x_y_offset

saying that

Traceback (most recent call last):
  File "darknet.py", line 239, in <module>
    pred = model(inp, torch.cuda.is_available())
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "darknet.py", line 225, in forward
    x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)
  File "C:\Users\Siddharth Kale\Desktop\Coding Essentials\codes\Convoulution_Neural_Nets\yolo-pytorch\util.py", line 43, in predict_transform
    prediction[:,:,:2] += x_y_offset
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 'other'

I couldn't find any solutions; could someone suggest something to get rid of this error?
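A hedged sketch of the usual fix (an assumption based on the error message, not a verified patch): the offset and anchor tensors built inside predict_transform stay on the CPU while prediction is a CUDA tensor, so they need to be moved to the GPU before the in-place add:

# inside predict_transform, before the in-place addition
if CUDA:
    x_y_offset = x_y_offset.cuda()   # same device as `prediction`
    anchors = anchors.cuda()

prediction[:, :, :2] += x_y_offset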

Run the video.py code

Hi, I've got detect.py working, but I think I'm using the wrong arguments for video.py. Can someone show me an example? Thanks.

Can't test the forward pass

prediction[:,:,:2]+=x_y_offset
RuntimeError: The expanded size of the tensor (507) must match the existing size (13520) at non-singleton dimension 1

Difficulty understanding the anchors

Hello @ayooshkathuria,

Can you help explain the line below from yolov3.cfg?

anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326

I understand that there are three different detection layers, and that the last layer has a mask of 0, 1, 2, referring to the first three anchors. But what do these numbers represent? I suspect they are the dimensions of each anchor, i.e. the width and height (pw and ph) of the pre-defined default bounding boxes?

Please correct me if I am wrong. Thanks for your explanation.
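For illustration, a minimal sketch of one reading (an assumption, not an official answer): the nine numbers are (width, height) pairs in pixels at the network input resolution, paired up the way create_modules does:

anchors = [10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326]
anchor_pairs = [(anchors[i], anchors[i + 1]) for i in range(0, len(anchors), 2)]
# mask 0,1,2 selects [(10, 13), (16, 30), (33, 23)], the smallest boxes,
# used by the finest-resolution detection layer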

How can we train the model?

First, thanks a lot for writing the tutorial; it helped me understand the YOLO algorithm easily. Next: if I want to train the model myself, building on the work you have done, how can I do that? Can you give me some tips, i.e. how can I implement the backward process?

In the function prep_image

The code in the tutorial is different from the code here.

On GitHub:

img = (letterbox_image(img, (inp_dim, inp_dim)))

In the tutorial:

img = cv2.resize(img, (inp_dim, inp_dim))

This makes the scaling step slightly wrong (without the letterbox step, the bbox coordinate scaling has to be calculated differently).

Anyway, not a big deal, but it's easy to misunderstand, so I'm just noting it here :)
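For context, a minimal sketch of what letterbox_image does, reconstructed from this description (an approximation, not necessarily the repo's exact code): resize while keeping the aspect ratio, then pad the rest with gray so the coordinate rescaling in write_results stays valid:

import cv2
import numpy as np

def letterbox_image(img, inp_dim):
    img_w, img_h = img.shape[1], img.shape[0]
    w, h = inp_dim
    scale = min(w / img_w, h / img_h)                 # keep aspect ratio
    new_w, new_h = int(img_w * scale), int(img_h * scale)
    resized = cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_CUBIC)

    canvas = np.full((h, w, 3), 128, dtype=np.uint8)  # gray padding
    top, left = (h - new_h) // 2, (w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w, :] = resized
    return canvas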

Different anchors

Hi, thanks so much for the great tutorial! I am wondering whether anyone has tried a different number of anchors, rather than the three anchors 6,7,8 as in the cfg file? A RuntimeError is raised on the line below if a different number of anchors is used.

prediction = prediction.view(batch_size, bbox_attrs*num_anchors, grid_size*grid_size)

I am wondering how to solve this.

Update

Based on pjreddie/darknet#561, where the author says:

If you use a different number of anchors you have to figure out which layer you want to predict which anchors and the number of filters will depend on that distribution

CFG file input dimension [advice]

Hello everyone,
First, thanks to @ayooshkathuria for such a nice tutorial. Second, I would like to point out that the current version of the yolov3.cfg file from the YOLO repository (the one the tutorial recommends downloading in part 2) contains height=608, width=608. So you might encounter an error like this at some point:

RuntimeError: shape '[1, 255, 3025]' is invalid for input of size 689520

My suggestion is to change yolov3.cfg to height=416, width=416, as those are the dimensions used in the tutorial, particularly in line 14. Another solution is to resize the image to height=608 and width=608 when reading it, in the same line. A more general solution would be to read whatever dimensions are in yolov3.cfg and resize the image to those dimensions, as sketched below.

Cheers!
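A minimal sketch of that general solution, assuming the tutorial's parse_cfg output, where blocks[0] is the [net] section:

blocks = parse_cfg("cfg/yolov3.cfg")
net_info = blocks[0]                       # the [net] section holds the input size
inp_dim = int(net_info["height"])          # 608 in the current cfg, 416 in the tutorial
img = cv2.resize(img, (inp_dim, inp_dim))  # resize to whatever the cfg declares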

What is the purpose of adding 1 while calculating the box area?

In the IoU function

#Intersection area
inter_area = torch.clamp(inter_rect_x2 - inter_rect_x1 + 1, min=0) * torch.clamp(inter_rect_y2 - inter_rect_y1 + 1, min=0)
inter_areaa = torch.clamp(inter_rect_x2 - inter_rect_x1, min=0) * torch.clamp(inter_rect_y2 - inter_rect_y1, min=0)

What is the reason for the +1 when calculating the area?

Is it to avoid a division-by-zero error when calculating the IoU? But won't adding a constant to the area affect the value?
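One common reading, offered as an assumption rather than an authoritative answer: the +1 treats box coordinates as inclusive pixel indices, under which a box covering columns 3 through 5 is three pixels wide:

x1, x2 = 3, 5          # inclusive pixel columns covered by a box
width = x2 - x1 + 1    # 3 pixels, whereas x2 - x1 alone would give 2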

Train

Hello, I am a student from Taiwan. I am looking forward to the training code, please! ><

what does "pad = (kernel_size - 1) // 2" mean?

darknet.py, line 94.
I still don't get it. I googled a lot: in yolov3.cfg, padding always equals 1, and some say the padding is constant (links), so I tried padding = 1 in every loop, but that ended in an error. When I use the original code from darknet.py in this repository it works, but how does pad = (kernel_size - 1) // 2 work? Why?

if padding:
    pad = (kernel_size -1) // 2
else:
    pad = 0
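A hedged explanation (standard convolution arithmetic, not from the thread): in the cfg, pad=1 is a flag meaning "use same padding", not a pixel count. With stride 1, keeping the output the same size as the input requires kernel_size - 1 total padding, i.e. (kernel_size - 1) // 2 per side for odd kernels. A quick check for the kernel sizes YOLOv3 uses:

for k in (1, 3):                        # kernel sizes appearing in yolov3.cfg
    pad = (k - 1) // 2
    n = 13                              # any input spatial size
    out = (n + 2 * pad - k) // 1 + 1    # conv output size at stride 1
    print(k, pad, out)                  # out == n in both cases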

Hierarchical YOLO

Hi! Thanks for the work!

Is this hierarchical YOLO, like in the original paper?

Hard coded stride in create_modules for upsample

There is an error in darknet.py, line 118, at 0057114:

upsample = nn.Upsample(scale_factor = 2, mode = "nearest") should be upsample = nn.Upsample(scale_factor = stride, mode = "nearest")

This bug is silent since stride=2 in the cfg file, but if someone changed that value it would not be applied; the hard-coded 2 in create_modules would be used instead.

Help with an error in the tutorial code

Thanks for your from-scratch tutorial; it has helped me a lot.
In part 3 of the article, I copied the code you wrote, but I get an error:

model = Darknet("cfg/yolov3.cfg")
inp = get_test_input()
pred = model(inp)
print (pred)

TypeError: forward() missing 1 required positional argument: 'CUDA'

Also, I want to ask: is the training module the only remaining work?
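A minimal sketch of the likely fix, assuming the tutorial's Darknet.forward(self, x, CUDA) signature: pass the CUDA flag as the second argument.

model = Darknet("cfg/yolov3.cfg")
inp = get_test_input()
pred = model(inp, torch.cuda.is_available())   # forward() needs the CUDA flag
print(pred)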

Running the code gives an error

I use your code exactly as-is, without changing anything, but an error comes out. Here is the error info:

C:\Users\Max\Anaconda3\envs\Pytorch\lib\site-packages\torch\nn\modules\upsampling.py:122: UserWarning: nn.Upsampling is deprecated. Use nn.functional.interpolate instead.
    warnings.warn("nn.Upsampling is deprecated. Use nn.functional.interpolate instead.")

RuntimeError                              Traceback (most recent call last)
in ()
      1 model = Darknet("cfg/yolov3.cfg")
      2 inp = get_test_input()
----> 3 pred = model(inp, torch.cuda.is_available())
      4 print (pred)

~\Anaconda3\envs\Pytorch\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

in forward(self, x, CUDA)
    216                 #Transform
    217                 x = x.data
--> 218                 x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)
    219                 if not write:              #if no collector has been intialised.
    220                     detections = x

F:\condaDev\util.ipynb in predict_transform(prediction, inp_dim, anchors, num_classes, CUDA)

RuntimeError: invalid argument 2: size '[1 x 255 x 3025]' is invalid for input with 689520 elements at ..\aten\src\TH\THStorage.cpp:84

About the performance

Hi,
This is great work!
I wonder whether this implementation can reproduce the 33% mAP on COCO stated in the YOLOv3 paper?
I think it would be great to list the current performance of this code in README.md.

Thanks a lot!

torch.FloatTensor type error

Without any changes to the downloaded code, when I ran the command 'python detect.py --images imgs --det det', I got the errors below:

Loading network.....
Network successfully loaded

256.4507 62.3959 374.7849 120.3694 0.9988 0.9377 7.0000
265.0707 62.5722 379.6248 121.3135 0.9726 0.8795 7.0000
254.2707 69.4995 376.2362 125.0286 0.9040 0.9200 7.0000
58.2381 80.8954 312.3383 296.0082 0.9004 0.9983 1.0000
61.3210 58.3361 311.3177 316.3376 0.8639 0.9972 1.0000
89.2337 73.0579 308.4990 302.5144 0.9240 0.9960 1.0000
63.8950 89.8635 308.2446 312.9947 0.9908 0.9991 1.0000
54.9051 76.0959 312.2937 337.0284 0.9859 0.9981 1.0000
87.4243 84.2302 307.2956 313.9760 0.9861 0.9996 1.0000
66.4022 72.3304 323.7131 332.9052 0.5739 0.9988 1.0000
59.7964 112.0306 173.2647 379.6315 0.9257 0.9818 16.0000
66.6987 160.9137 173.3377 393.2019 0.9999 0.9980 16.0000
73.8193 154.1310 187.0550 391.4006 0.9704 0.9931 16.0000
63.6121 176.9761 171.9654 404.1670 0.9892 0.9789 16.0000
[torch.cuda.FloatTensor of size 14x7 (GPU 0)]

Traceback (most recent call last):
  File "detect.py", line 126, in <module>
    prediction = write_results(prediction, confidence, num_classes, nms_conf = nms_thesh)
  File "/media/lliu/hdd4/YOLO_v3_tutorial_from_scratch/YOLO_v3_tutorial_from_scratch-master/util.py", line 159, in write_results
    ious = bbox_iou(image_pred_class[i].unsqueeze(0), image_pred_class[i+1:])
  File "/media/lliu/hdd4/YOLO_v3_tutorial_from_scratch/YOLO_v3_tutorial_from_scratch-master/util.py", line 43, in bbox_iou
    iou = inter_area / (b1_area + b2_area - inter_area)
  File "/home/lliu/.conda/envs/pyTorch_35_new/lib/python3.5/site-packages/torch/tensor.py", line 300, in __sub__
    return self.sub(other)
TypeError: sub received an invalid combination of arguments - got (torch.FloatTensor), but expected one of:
  * (float value)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
  * (torch.cuda.FloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
  * (float value, torch.cuda.FloatTensor other)

After tracing it back, I found it's a bug at line 37 of util.py: all the variables before this line are on (GPU 0), whereas this line's output inter_area is not on (GPU 0), which causes the subsequent IoU calculation to fail (line 43).

Data inspection of inter_rect_x2; note the (GPU 0):

In [5]: inter_rect_x2
Out[5]:

307.2956
308.2446
308.2446
308.2446
308.2446
308.2446
[torch.cuda.FloatTensor of size 6 (GPU 0)]

Data inspection of inter_area:

In [4]: inter_area
Out[4]:

49504.1289
54990.4844
47005.5273
50822.8750
54990.4844
54428.5430
[torch.FloatTensor of size 6]

To fix the issue, I simply added the .cuda() function to the np calls in line 37:

inter_area = np.maximum(inter_rect_x2 - inter_rect_x1 + 1, 0).cuda()*np.maximum(inter_rect_y2 - inter_rect_y1 + 1, 0).cuda()

This seems to work for me; however, I don't know why only I hit this issue. I'm writing it down for others' reference.

BTW, I am using PyTorch 0.3, CUDA 8 and cuDNN 6.
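A hedged alternative (an assumption, not a patch tested on PyTorch 0.3): computing the intersection with torch.clamp keeps the result on the same device as its inputs, so no explicit .cuda() call is needed. As a drop-in for line 37 of util.py:

inter_area = torch.clamp(inter_rect_x2 - inter_rect_x1 + 1, min=0) * \
             torch.clamp(inter_rect_y2 - inter_rect_y1 + 1, min=0)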

Customizing the number of classes

Hi, I'm wondering how I would be able to modify this to train on my own 10-class dataset.

I tried changing classes to 10 in the three [yolo] sections of the cfg file, but I guess I also need to modify the darknet forward pass somewhere.

Thanks
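A hedged note (the standard darknet cfg convention, not specific to this repo): besides setting classes=10 in each [yolo] section, the [convolutional] layer immediately before each [yolo] block needs its filters adjusted, since each of its 3 anchors predicts 5 box attributes plus one score per class:

num_classes = 10
anchors_per_scale = 3
filters = anchors_per_scale * (5 + num_classes)   # 45: set this in the conv
print(filters)                                    # layer before each [yolo]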

YOLO v3 from scratch is slow

I checked this version of YOLO v3, but it is slow at forward time (6.509258985519409 s).
[image]
Thanks for your reply.

Error when loading the network

I trained my YOLOv3 model on my own datasets and got the .weights file. When I use darknet.py to load my .weights, an error occurs, as follows:

RuntimeError: invalid argument 2: size '[18 x 256 x 1 x 1]' is invalid for input with 4607 elements at ..\src\TH\THStorage.c:41

My PyTorch version is 0.4.0 on Windows.
What should I do to solve the problem? Thanks!

Store image paths portably across all OSes

det_names = pd.Series(imlist).apply(lambda x: "{}/det_{}".format(args.det,x.split("/")[-1]))

This line is right for Mac or Linux, but produces a wrong path on Windows.

Just change it to:

det_names = pd.Series(imlist).apply(lambda x: "{}/det_{}".format(args.det,x.split(os.sep)[-1]))

and add import os at the top of the file.

Then it runs well on all OSes.

About NMS

Why sort bboxes by the objectness score (Pobj) rather than by the individual class score (Pc)?
util.py, line 149:
conf_sort_index = torch.sort(image_pred_class[:,4], descending = True)[1]
From what I've read, NMS sorts bboxes by Pc, computes the IoU of the remaining bboxes against the top-Pc bbox, and then throws away bboxes whose IoU is bigger than the threshold.
But this code uses Pobj...

How to use the author's video.py code

I would be very grateful if someone could teach me how to use the author's video.py. Where should I put the video file? Should I create a folder named video and put the video file in it, or something else?

ISSUE: 'NoneType' object has no attribute 'shape'

When I ran detect.py, this error was raised in util.py:

img_w, img_h = img.shape[1], img.shape[0]
AttributeError: 'NoneType' object has no attribute 'shape'

I think it might be a problem with loading the image, but when I debug darknet.py, I do get a result for img.

Could anyone help me solve this problem? Thanks a lot.
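A hedged diagnostic (an assumption about the cause, with img_path standing in for whatever path is passed): cv2.imread returns None instead of raising when the path is wrong or the file is unreadable, and the failure only surfaces later as this AttributeError. A quick guard makes the real problem visible:

import cv2

img = cv2.imread(img_path)   # returns None on a bad path, with no exception
if img is None:
    raise FileNotFoundError("OpenCV could not read image: {}".format(img_path))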

YOLOv3 with PyTorch 1.x

Have you tested performance with PyTorch 1.x, or does it work only with older versions?

Detecting colours

Hello, I've got a quick question regarding colour detection.

Would it be possible to detect the dominant colour of each object that is detected?
I tried detecting the colour of the entire picture* (and it works), but I was wondering if it would be possible to detect the dominant/average colour of each detected object.
So if I have a picture of 4 chairs, each in a different colour, and the network correctly detects those 4 chairs, it could also report what colour each chair is.

*in util.py, under prep_image(img, inp_dim):
average_color = [img[:, :, i].mean() for i in range(img.shape[-1])]
This returns [float, float, float], which I can use with a range check to determine the colour, but I need it for each object in the image, not the entire image.

Any help would be appreciated!
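A minimal sketch of one way to do this (my own, assuming each output row follows the tutorial's layout [batch_idx, x1, y1, x2, y2, ...] after rescaling to image coordinates): crop each detected box out of the frame and average its pixels:

import numpy as np

def box_mean_color(frame, box):
    # box[1:5] holds x1, y1, x2, y2 in image coordinates (assumed layout)
    x1, y1, x2, y2 = (int(v) for v in box[1:5])
    crop = frame[max(y1, 0):y2, max(x1, 0):x2]
    return crop.reshape(-1, 3).mean(axis=0)   # mean B, G, R over the object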

Different results from the original darknet

I have run your code and darknet_alexey with the same images and the same weights, but I got different results. You had the same issue earlier and resolved it, but I still get the error with your updated repo. Can you please help me resolve this issue?

Originally posted by @priyankasin in #9 (comment)

Images not being saved

Hi. Thank you for this tutorial. I am running the detection.py file; the output listing the images and their labels shows up, but the images with the bounding boxes are not being saved in the det folder. I debugged the code part by part and corrected what was needed and what was raised in previous issues, but I still can't get the images saved. May I know what the problem is?

Error when testing the forward pass

When I test the forward pass, I get this error:

Traceback (most recent call last):
  File "darknet_debug.py", line 317, in <module>
    model = Darknet('cfg/yolov3.cfg')
  File "darknet_debug.py", line 172, in __init__
    self.net_info, self.module_list = create_modules(self.blocks)
  File "darknet_debug.py", line 154, in create_modules
    anchors = x["anchors"].split(",")
TypeError: string indices must be integers, not str

I don't think the error originated in my own code, because it is raised inside the code I copied from GitHub.
OS: Ubuntu 16.04

Clarification in first post

In your first post, you said:

The resultant predictions, bw and bh, are normalised by the height and width of the image. (Training labels are chosen this way). So, if the predictions bx and by for the box containing the dog are (0.3, 0.8), then the actual width and height on 13 x 13 feature map is (13 x 0.3, 13 x 0.8).

Shouldn't it actually be: "So, if the predictions bx and by for the box containing the dog are (0.3, 0.8), then the actual width and height on the actual image is (32 x 0.3, 32 x 0.8)"?

This is because you need to scale everything up by a factor of the stride (= 32). Or am I getting something wrong?

Bounding boxes not enclosing objects

Hi, thanks for the amazing tutorial! I followed each part of your tutorial, but when I use it to predict bounding boxes, the boxes don't enclose the objects well. In fact, the performance is really poor. Can you suggest why this might be happening? I am attaching an image. It also seems as if the IoU isn't working properly.
[image: det_dog-cycle-car]

Error doing detection using the Tiny YOLO network

I get this error:

Traceback (most recent call last):
  File "detect.py", line 171, in <module>
    model(get_test_input(inp_dim, CUDA), CUDA)
  File "/home/akshayj/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/akshayj/pytorch-yolo-v3/darknet.py", line 325, in forward
    x = torch.cat((map1, map2), 1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 416 and 832 in dimension 2 at /opt/conda/conda-bld/pytorch_1524580978845/work/aten/src/THC/generic/THCTensorMath.cu:111

Any idea why this error occurs? Thanks in advance.

About IoU computation

Why add 1? I need help...
util.py, lines 37 to 41:
b1_area = (b1_x2 - b1_x1 + 1) * (b1_y2 - b1_y1 + 1)
b2_area = (b2_x2 - b2_x1 + 1) * (b2_y2 - b2_y1 + 1)

Multi-class prediction

Hello,
it seems that the current code cannot predict multiple classes for the same box, like Women and Person at the same time. Is that right?

Thanks
