
models's Introduction

Caffe2 Model Repository

This is a repository of pre-trained Caffe2 models. Caffe2 itself can download or install these models on your machine.

Prerequisites

Install Caffe2 with Python bindings.

Download

To download a model locally, run

python -m caffe2.python.models.download squeezenet

which will create a folder squeezenet/ containing both an init_net.pb and predict_net.pb.
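
For reference, here is a minimal sketch of loading the downloaded protobufs with a Predictor, in the spirit of the Loading Pretrained Models tutorial; the 1x3x227x227 input shape for SqueezeNet and the random placeholder image are assumptions:

import numpy as np
from caffe2.python import workspace

with open("squeezenet/init_net.pb", "rb") as f:
    init_net = f.read()
with open("squeezenet/predict_net.pb", "rb") as f:
    predict_net = f.read()

# Build a predictor from the serialized protobufs and run it on a dummy image.
p = workspace.Predictor(init_net, predict_net)
img = np.random.rand(1, 3, 227, 227).astype(np.float32)  # placeholder input
results = p.run([img])
print(results[0].shape)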

Install

To install a model, run

python -m caffe2.python.models.download -i squeezenet

which will allow later imports of the model directly in Python:

from caffe2.python.models import squeezenet
print(squeezenet.init_net.name)
print(squeezenet.predict_net.name)

Subdirectories

To download a model in a subdirectory (for example, style transfer), run

python -m caffe2.python.models.download style_transfer/crayon

and this will create a folder style_transfer/crayon/ containing both an init_net.pb and predict_net.pb.

The same applies to the -i install option.

models's People

Contributors

bwasti, facebook-github-bot, harouwu, houseroad, newstzpz, orionr, sf-wind, shuolongbj


models's Issues

Why was the terminal eliminated?

I liked how in Caffe I could easily train a model from the terminal. I don't see that in Caffe2, nor a short example of how to do it in Python.

resnet50_quantized: errors in load and initialization

Symptom: When I try to load the resnet50_quantized network using the method from the "load pretrained models" tutorial (https://github.com/caffe2/tutorials/blob/master/Loading_Pretrained_Models.ipynb), I get a Cannot create operator of type 'Int8GivenTensorFill' on the device 'CPU' error.

Reproduction:

from caffe2.python import workspace

INIT_NET = '/home/ubuntu/segmentation_notebooks/resnet_quantized/resnet50_quantized_init_net.pb'
PREDICT_NET = '/home/ubuntu/segmentation_notebooks/resnet_quantized/resnet50_quantized_predict_net.pb'

# the predictor approach
# Read the contents of the input protobufs into local variables
with open(INIT_NET, "rb") as f:
    init_net = f.read()
with open(PREDICT_NET, "rb") as f:
    predict_net = f.read()

# Initialize the predictor from the input protobufs
p = workspace.Predictor(init_net, predict_net)

spew:

RuntimeError                              Traceback (most recent call last)
<ipython-input-7-5bf6f491eeeb> in <module>()
     10 
     11 # Initialize the predictor from the input protobufs
---> 12 p = workspace.Predictor(init_net, predict_net)

RuntimeError: [enforce fail at operator.cc:114] op. Cannot create operator of type 'Int8GivenTensorFill' on the device 'CPU'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: output: "gpu_0/conv1_w_0_int8" name: "" type: "Int8GivenTensorFill" arg { name: "shape" ints: 64 ints: 7 ints: 7 ints: 3 } arg { name: "values" s:

Here's what happens if I try to run it with the GPU:
code:

from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

workspace.ResetWorkspace()
device_opts = core.DeviceOption(caffe2_pb2.CUDA, 0)


init_def = caffe2_pb2.NetDef()
with open(INIT_NET, 'rb') as f:
    init_def.ParseFromString(f.read())
    init_def.device_option.CopyFrom(device_opts)
    workspace.RunNetOnce(init_def.SerializeToString())

net_def = caffe2_pb2.NetDef()
with open(PREDICT_NET, 'rb') as f:
    net_def.ParseFromString(f.read())
    net_def.device_option.CopyFrom(device_opts)
    workspace.CreateNet(net_def.SerializeToString())

name = net_def.name
out_name = net_def.external_output[-1];
in_name = net_def.external_input[0]

spew:

UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-8-29ddafde7509> in <module>()
      7     init_def.ParseFromString(f.read())
      8     init_def.device_option.CopyFrom(device_opts)
----> 9     workspace.RunNetOnce(init_def.SerializeToString())
     10 
     11 net_def = caffe2_pb2.NetDef()

/home/ubuntu/src/caffe2/build/caffe2/python/workspace.pyc in RunNetOnce(net)
    181         C.Workspace.current._last_failed_op_net_position,
    182         GetNetName(net),
--> 183         StringifyProto(net),
    184     )
    185 

/home/ubuntu/src/caffe2/build/caffe2/python/workspace.pyc in CallWithExceptionIntercept(func, op_id_fetcher, net_name, *args, **kwargs)
    168         op_id = op_id_fetcher()
    169         net_tracebacks = operator_tracebacks.get(net_name, None)
--> 170         print("Traceback for operator {} in network {}".format(op_id, net_name))
    171         if net_tracebacks and op_id in net_tracebacks:
    172             tb = net_tracebacks[op_id]

UnicodeDecodeError: 'ascii' codec can't decode byte 0xaf in position 21: ordinal not in range(128)

System information:

AWS p3.2xlarge in the Python environment caffe2_p27

FWIW, I get a similar error using my own from-source build of Caffe2 running with Python 3.6 on Ubuntu 16.04 and a GTX 1080.

Faster R-CNN cannot be inferred on GPU

Faster R-CNN cannot run inference on the GPU: the GenerateProposals op is not implemented for 'CUDA'.
Is there any plan to support all ops for GPU inference?

Error installing model

On Windows, the command 'python -m caffe2.python.models.download -i squeezenet' fails with the error "'module' object has no attribute 'symlink'". It is my understanding that symlink is not available on non-Unix OSes.

any quantize tool like tflite?

In TFLite, when I execute

import tensorflow as tf
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)

I get an 8-bit model that I can then move to mobile devices.

Are there any docs for Caffe2 like this? I can't find any example or tutorial on quantizing a model and executing it on mobile devices.
What's more, how do Caffe2 and PyTorch support 8-bit ops?

Timeline for raw caffe2 c++ to support FPN

Hi,

I would love to use e2e_faster_rcnn_R-50-FPN_1/2x for inference in caffe2 c++. I know that currently FPN is not yet supported. Do you guys have a timeline for when it would be supported?

Thanks a lot!

Missing softmax at the output layer of densenet121

It looks like the output layer of densenet121 is conv, see here

It does not happen for other CV models like resnet50, inception, etc.
Is there any particular reason it's missing?

I understand this should not impact classification, since the max value of the conv output still gives the top label, but it would be good to be consistent across these models.
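
If needed, here is a hedged sketch of applying the softmax yourself on the raw conv output; the helper is purely illustrative and the stand-in scores array is an assumption in place of the real densenet121 output:

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the raw (pre-softmax) scores.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

scores = np.random.randn(1000).astype(np.float32)  # stand-in for the flattened conv output
probs = softmax(scores)
top_label = int(np.argmax(probs))  # argmax is unchanged by the softmax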

Model quantization

Can someone share the details of how mobilenet_v2 or resnet50 were quantized?

Error when loading MaskRCNN2Go

When I run the run_eval.sh in the mask_rcnn_2go, the error message shows:

Traceback (most recent call last):
  File "code/eval_seg_cpu.py", line 193, in <module>
    main()
  File "code/eval_seg_cpu.py", line 188, in main
    net = load_model(args)
  File "code/eval_seg_cpu.py", line 73, in load_model
    args.net, args.init_net, is_run_init=True, is_create_net=True
  File "/home/thk/gitlab_proj/caffe2_models/mask_rcnn_2go/code/model_utils.py", line 19, in load_model_pb
    net.Proto().ParseFromString(open(net_file, "rb").read())
google.protobuf.message.DecodeError: Error parsing message

The content of my run_eval.sh is:

DATASET_IM_DIR="~/Downloads/val2014/" #"path_to_/coco_val2014"
DATASET_ANN="~/Downloads/instances_minival2014.json" #"path_to_/coco/instances_minival2014.json"

python code/eval_seg_cpu.py \
    --net "model/fp32/model.pb" \
    --init_net "model/fp32/model_init.pb" \
    --dataset "coco_2014_minival" \
    --dataset_dir "$DATASET_IM_DIR" \
    --dataset_ann "$DATASET_ANN" \
    --output_dir output \
    --min_size 320 \
    --max_size 640 \

load_model_pb() receives the net_file path correctly, so the error seems to be caused by the model file itself when it is loaded.

Are the models provided in mask_rcnn_2go bug-free, or did I do something wrong when running the sample code?
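
One thing worth checking (a guess, since this repository is distributed via Git LFS, see the Git LFS issue below): whether the .pb files on disk are still LFS pointer stubs rather than the real protobufs, which would produce exactly this DecodeError. A small sketch, with the path taken from the script above:

# If the file starts with a Git LFS pointer header, `git lfs pull` is needed
# to fetch the actual model binary.
with open("model/fp32/model.pb", "rb") as f:
    head = f.read(64)
if head.startswith(b"version https://git-lfs"):
    print("model.pb is a Git LFS pointer stub, not the real model; run `git lfs pull`.")
else:
    print("model.pb looks like a binary protobuf ({} bytes read).".format(len(head)))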

Runtime error in concat operator when converting pytorch model to caffe2

Error is in concat operator
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
25
26 # Run the Caffe2 net:
---> 27 c2_out = prepared_backend.run(W)[0]

RuntimeError: [enforce fail at concat_split_op.h:289] dim == dim_j. Expect dimension = 32 got 1 at axis = 0 for input: 1. The input tensors can only have different dimensions when arg 'add_axis' = 0 and along the axis = 1 <[32, 3, 256, 256]> vs <[1, 512, 8, 8]>.
Error from operator:
input: "0" input: "1" output: "24" output: "OC2_DUMMY_0" name: "" type: "Concat" arg { name: "axis" i: 1 } device_option { device_type: 0 device_id: 0 }frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f9cb682c441 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)

Normalization of inputs

Hi, I would like to use those models and I'm not sure about the expected normalization of the inputs: do they expect inputs in [0…1] or [0…255]?

Mobilenet_v2_quantized predict_net.pb file possible error

TL;DR

I can't run the net with Caffe2 following the 'Load Pretrained Net' tutorial; it keeps showing an error saying that 325 is not a CPU tensor. I fixed it by editing the deserialized ASCII prototxt and re-serializing it.


I had posted this issue here before.

https://discuss.pytorch.org/t/caffe2-mobilenetv2-quantized-using-caffe2-blobistensortype-blob-cpu-blob-is-not-a-cpu-tensor-325/29065

After some experiments, I've discovered the problem and come up w/ a fix.

After I deserialized the predict_net.pb file to ASCII prototxt, I found that at the end the network is supposed to output the blob softmax instead of an Int8CPUTensor called 325. The problem, although I'm not entirely sure at the source-code level, is probably that an Int8CPUTensor somehow fails the CAFFE_ENFORCE check.

When I changed the very last line of the dumped ASCII prototxt from external_output: 325 to external_output: softmax, everything worked just fine. So I'm thinking the file given in the official repo is not correct, at least for me.

I'm not sure if this happens to anyone else. Thought I'd put it here in case anyone encounters the same situation.
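
For anyone who prefers to patch the binary protobuf directly instead of round-tripping through ASCII, here is a sketch of the same fix; the blob names "325" and "softmax" come from the report above, and whether your copy of the model needs this at all is an assumption:

from caffe2.proto import caffe2_pb2

net = caffe2_pb2.NetDef()
with open("predict_net.pb", "rb") as f:
    net.ParseFromString(f.read())

# Point the last external output at the softmax blob instead of "325".
if net.external_output and net.external_output[-1] == "325":
    net.external_output[-1] = "softmax"

with open("predict_net_fixed.pb", "wb") as f:
    f.write(net.SerializeToString())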

Enforce fail, ParseProtobufFromLargeString

I'm trying to run the Ipython Notebook Load Pretrained Models tutorial using Squeezenet, but it's failing at p = workspace.Predictor(init_net, predict_net) with the error RuntimeError: [enforce fail at E:\caffe2\caffe2\python\pybind_state.cc:553] ParseProtobufFromLargeString(predict_net, &predict_net_).

I'm a little confused about which files I'm meant to be using here: the two mentioned in the tutorial and downloaded from an AWS address are init_net.pb and predict_net.pb, while on GitHub there's a file called exec_net.pb and no init_net.pb. init_net/exec_net also seem to be in a totally different format from predict_net.

There seems to be an empty line at the top of predict_net.pb, and removing it changes the error to RuntimeError: [enforce fail at E:\caffe2\caffe2\core\operator.cc:110] op. Cannot create operator of type '' on the device 'CPU'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: output: "conv1_w"

Is there an error in the file?

cannot run mobilenet_v2_quantized on pytorch/caffe2

I am trying to run mobilenet_v2_quantized on the pytorch/caffe2 repo, which supports int8.

After compiling PyTorch on a TX2 and running the model, I get the following error:


WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:caffe2.python.workspace:Original python traceback for operator `-2110861432` in network `mobilenet_v2_quant` in exception above (most recent call last):
Traceback (most recent call last):
  File "test_caffe2.py", line 470, in <module>
    net_def = createNet(pred_net)
  File "test_caffe2.py", line 406, in createNet
    workspace.CreateNet(net_def, overwrite=True)
  File "/home/gg/pytorch/build/caffe2/python/workspace.py", line 154, in CreateNet
    StringifyProto(net), overwrite,
  File "/home/gg/pytorch/build/caffe2/python/workspace.py", line 180, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at operator.cc:46] blob != nullptr. op NCHW2NHWC: Encountered a non-existing input blob: data
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0x7c (0x7f5ac89ed4 in /home/gg/pytorch/build/lib/libc10.so)
frame #1: caffe2::OperatorBase::OperatorBase(caffe2::OperatorDef const&, caffe2::Workspace*) + 0x560 (0x7f5bc5fa18 in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #2: <unknown function> + 0x13e5efc (0x7f5c088efc in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #3: std::_Function_handler<std::unique_ptr<caffe2::OperatorBase, std::default_delete<caffe2::OperatorBase> > (caffe2::OperatorDef const&, caffe2::Workspace*), std::unique_ptr<caffe2::OperatorBase, std::default_delete<caffe2::OperatorBase> > (*)(caffe2::OperatorDef const&, caffe2::Workspace*)>::_M_invoke(std::_Any_data const&, caffe2::OperatorDef const&, caffe2::Workspace*&&) + 0x34 (0x7f5cb5dd9c in /home/gg/pytorch/build/caffe2/python/caffe2_pybind11_state.cpython-35m-aarch64-linux-gnu.so)
frame #4: <unknown function> + 0xfbacfc (0x7f5bc5dcfc in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #5: <unknown function> + 0xfbcd0c (0x7f5bc5fd0c in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #6: caffe2::CreateOperator(caffe2::OperatorDef const&, caffe2::Workspace*, int) + 0x430 (0x7f5bc60898 in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #7: caffe2::SimpleNet::SimpleNet(std::shared_ptr<caffe2::NetDef const> const&, caffe2::Workspace*) + 0x3dc (0x7f5bcc016c in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #8: <unknown function> + 0x101eb3c (0x7f5bcc1b3c in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #9: <unknown function> + 0xfa5904 (0x7f5bc48904 in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #10: caffe2::CreateNet(std::shared_ptr<caffe2::NetDef const> const&, caffe2::Workspace*) + 0x90c (0x7f5bc98f94 in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #11: caffe2::Workspace::CreateNet(std::shared_ptr<caffe2::NetDef const> const&, bool) + 0x1e4 (0x7f5bcab824 in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #12: caffe2::Workspace::CreateNet(caffe2::NetDef const&, bool) + 0xa4 (0x7f5bcaca7c in /home/gg/pytorch/build/lib/libcaffe2.so)
frame #13: <unknown function> + 0x51060 (0x7f5cb55060 in /home/gg/pytorch/build/caffe2/python/caffe2_pybind11_state.cpython-35m-aarch64-linux-gnu.so)
frame #14: <unknown function> + 0x512d0 (0x7f5cb552d0 in /home/gg/pytorch/build/caffe2/python/caffe2_pybind11_state.cpython-35m-aarch64-linux-gnu.so)
frame #15: <unknown function> + 0x8edfc (0x7f5cb92dfc in /home/gg/pytorch/build/caffe2/python/caffe2_pybind11_state.cpython-35m-aarch64-linux-gnu.so)
<omitting python frames>

What is the problem?
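
Not an official answer, but the enforce failure says the external input blob data does not exist in the workspace when the net is created, so feeding a placeholder data blob first usually gets past this particular check. A sketch, where the file path and the 1x3x224x224 shape are assumptions:

import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import workspace

net_def = caffe2_pb2.NetDef()
with open("mobilenet_v2_quantized/predict_net.pb", "rb") as f:
    net_def.ParseFromString(f.read())

# CreateNet instantiates the operators, which check that every input blob
# already exists, so create the "data" blob first (this assumes the init net
# has already been run so the weight blobs exist).
workspace.FeedBlob("data", np.zeros((1, 3, 224, 224), dtype=np.float32))
workspace.CreateNet(net_def, overwrite=True)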

Error

What does this mean?

RuntimeError: [enforce fail at cross_entropy_op.cc:30] (label.ndim() == 1) || (label.ndim() == 2 && label.dim32(1) == 1). Error from operator:
input: "softmax" input: "label" output: "xent" name: "" type: "LabelCrossEntropy"

module 'caffe2.python.models.squeezenet' has no attribute 'init_net'

I downloaded init_net.pb and predict_net.pb for squeezenet and put them into "C:\Program Files\Caffe2\caffe2\python\models\squeezenet".
When I run the code (shown in a screenshot in the original issue), I get an error as in the picture. The problem has confused me for a long time!
How can I solve it? Thank you very much.

Git LFS Problem

During cloning, I faced an error:

batch response: This repository is over its data quota. Purchase more data packs to restore access.                                                                                                         
error: failed to fetch some objects from 'https://github.com/caffe2/models.git/info/lfs'

To reproduce:

GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/caffe2/models.git
cd models
git lfs install
git lfs pull --include mask_rcnn_2go/

Detectron e2e_faster_rcnn_R-50-C4_1x enforce fail at generate_proposals_op.cc:205

Hello. I am trying to run the Facebook Detectron network following the basic tutorial for loading and using a pre-trained network (swapping in the Detectron files rather than squeezenet), and results = p.run([img]) fails with the following error:

Traceback (most recent call last):
File "/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
results = p.run([img])

RuntimeError: [enforce fail at generate_proposals_op.cc:205] im_info_tensor.dims() == (vector{num_images, 3}). 0 vs 1 3 Error from operator:
input: "rpn_cls_probs" input: "rpn_bbox_pred" input: "im_info" input: "anchor" output: "rpn_rois" output: "rpn_roi_probs" name: "" type: "GenerateProposals" arg { name: "nms_thres" f: 0.7 } arg { name: "min_size" f: 0 } arg { name: "spatial_scale" f: 0.0625 } arg { name: "correct_transform_coords" i: 1 } arg { name: "post_nms_topN" i: 1000 } arg { name: "pre_nms_topN" i: 6000 }

Looking at the network definition, there seems to be an object called "im_info" that is supposedly externally generated. I however cannot figure out where or how this object should be initialized, or if it is actually being initialized. Either way, there seems to be an issue with verifying the size of the im_info_tensor blob when the data reaches the region proposal portion of the network.
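
For what it's worth, in Detectron-exported nets im_info is expected to be a (num_images, 3) float blob holding [height, width, scale] for each image, and the basic Predictor tutorial never feeds it, which is why the dims check sees 0 vs (1, 3). A hedged sketch of feeding it by hand (for example via the workspace, if you run the nets there instead of through a Predictor); the 600x800 size and scale of 1.0 are placeholder values:

import numpy as np
from caffe2.python import workspace

# One row per image: [height, width, scale].
im_info = np.array([[600.0, 800.0, 1.0]], dtype=np.float32)
workspace.FeedBlob("im_info", im_info)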

VGG16 pretrained model

Is there a way to get a pretrained VGG16 model in caffe2?

Also, is there an equivalent of Torch's loadcaffe for Caffe2 (basically, to convert existing Caffe models to Caffe2)?

Thanks

SSD and Faster RCNN

Is it planned to add Faster R-CNN and SSD (VGG/ResNet) models trained on MS COCO?

Retrain models

Hi all

Is it possible to retrain (also known as transfer learning) models like AlexNet?

Regards

permission denied when creating folders

I tried to follow the instructions for downloading models and installing them locally with
python -m caffe2.python.models.download -i squeezenet
but I got a permission-denied error like this:

Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/ubuntu/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/caffe2/python/models/download.py", line 174, in
downloadModel(model, args)
File "/usr/local/caffe2/python/models/download.py", line 136, in downloadModel
os.makedirs(model_folder)
File "/home/ubuntu/anaconda3/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/usr/local/caffe2/python/models/squeezenet'

It might be that Python is not authorized to create the directory: installing with -i writes into the Caffe2 installation under /usr/local/caffe2/python/models, which requires write permission there.

Reproducing on Android

Thank you for immense contribution! Could you please share details for transferring model to Android device, if one wish to reproduce given results on mobile device?

Wrong wire type error

DATASET_IM_DIR="path_to_/coco_val2014"
DATASET_ANN="path_to_/coco/instances_minival2014.json"

python code/eval_seg_cpu.py \
    --net "model/fp32/model.pb" \
    --init_net "model/fp32/model_init.pb" \
    --dataset "coco_2014_minival" \
    --dataset_dir "$DATASET_IM_DIR" \
    --dataset_ann "$DATASET_ANN" \
    --output_dir output \
    --min_size 320 \
    --max_size 640

It shows an error: wire_type is 6, which triggers 'Wrong wire type in tag.'

def _DecodeUnknownField(buffer, pos, wire_type):
    """Decode a unknown field.  Returns the UnknownField and new position."""

    if wire_type == wire_format.WIRETYPE_VARINT:
        (data, pos) = _DecodeVarint(buffer, pos)
    elif wire_type == wire_format.WIRETYPE_FIXED64:
        (data, pos) = _DecodeFixed64(buffer, pos)
    elif wire_type == wire_format.WIRETYPE_FIXED32:
        (data, pos) = _DecodeFixed32(buffer, pos)
    elif wire_type == wire_format.WIRETYPE_LENGTH_DELIMITED:
        (size, pos) = _DecodeVarint(buffer, pos)
        data = buffer[pos:pos+size].tobytes()
        pos += size
    elif wire_type == wire_format.WIRETYPE_START_GROUP:
        (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
    elif wire_type == wire_format.WIRETYPE_END_GROUP:
        return (0, -1)
    else:
        raise _DecodeError('Wrong wire type in tag.')

    return (data, pos)

Resnet-50 model accuracy

I downloaded the resnet-50 model and ran it, but I am not getting the correct result for any image. Can anyone help me with that? Any support will be appreciated.

how to run on gpu mode?

I run the following code with AlexNet:
p = workspace.Predictor(init_net, predict_net)
results = p.run([img])

It seems like Caffe2 runs on the CPU, because I cannot find the PID in nvidia-smi. I have already compiled Caffe2 in GPU mode.
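
In case it helps, the stock .pb files carry no device option, so a Predictor built from them runs on the CPU, which is presumably why nvidia-smi shows nothing. The usual pattern (same as the GPU snippet in the resnet50_quantized issue above) is to set a CUDA device option on both NetDefs and run them through the workspace. A sketch, where the file paths, the input blob name data, and the 224x224 shape are assumptions:

import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

device_opts = core.DeviceOption(caffe2_pb2.CUDA, 0)

init_def = caffe2_pb2.NetDef()
with open("alexnet/init_net.pb", "rb") as f:
    init_def.ParseFromString(f.read())
init_def.device_option.CopyFrom(device_opts)
workspace.RunNetOnce(init_def)

net_def = caffe2_pb2.NetDef()
with open("alexnet/predict_net.pb", "rb") as f:
    net_def.ParseFromString(f.read())
net_def.device_option.CopyFrom(device_opts)

# Feed the input on the GPU, then create and run the net there.
workspace.FeedBlob("data", np.zeros((1, 3, 224, 224), dtype=np.float32), device_option=device_opts)
workspace.CreateNet(net_def, overwrite=True)
workspace.RunNet(net_def.name)
results = workspace.FetchBlob(net_def.external_output[-1])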

pretrained MaskRCNN2GO

Has anyone trained the MaskRCNN2GO model?
And is it available for download (pre-trained)?

Thank you

ResNet50 pretrained model doesn't work well

I am really confused about this pretrained model.
I used the ILSVRC2012 validation set to test the pretrained models (AlexNet, GoogleNet, VGG), and they are pretty good.
But when it comes to ResNet50, the accuracy is 0.

Or is there something wrong I have done?
I am really looking forward to answers.

How to retrain the modelzoo model in Caffe2?

Is there any update/documentation on how to retrain (transfer learning) a pretrained model from the model zoo with a custom dataset in Caffe2?
I want to use the Mask R-CNN2Go model from the model zoo and retrain it with my own dataset; however, I don't see any documentation or tutorial on this topic.
Can somebody please point me in the right direction?
