
asanakoy / deeppose_tf


DeepPose implementation on TensorFlow. Original Paper http://arxiv.org/abs/1312.4659

License: Other

Shell 0.90% Python 99.10%
computer-vision deep-learning deep-neural-networks lsp-dataset mpii-dataset tensorflow

deeppose_tf's People

Contributors

asanakoy


deeppose_tf's Issues

Error during testing data.

I tried to test a model trained on the LSP dataset while training was still running:

wonjinlee@alpha:~/deeppose/out/lsp_alexnet_imagenet_small$ ls
checkpoint events.out.tfevents.1510238719.alpha
checkpoint-100000.data-00000-of-00001 params.dump_171108_222950.txt
checkpoint-100000.index params.dump_171108_223930.txt
checkpoint-100000.meta params.dump_171108_224108.txt
checkpoint-110000.data-00000-of-00001 params.dump_171108_224641.txt
checkpoint-110000.index params.dump_171109_002231.txt
checkpoint-110000.meta params.dump_171109_020558.txt
checkpoint-120000.data-00000-of-00001 params.dump_171109_034216.txt
checkpoint-120000.index params.dump_171109_043955.txt
checkpoint-120000.meta params.dump_171109_060922.txt
checkpoint-130000.data-00000-of-00001 params.dump_171109_061701.txt
checkpoint-130000.index params.dump_171109_145127.txt
checkpoint-130000.meta params.dump_171109_145344.txt
checkpoint-90000.data-00000-of-00001 params.dump_171109_145635.txt
checkpoint-90000.index params.dump_171109_170839.txt
checkpoint-90000.meta params.dump_171109_234514.txt

But I get the following error:

2017-11-10 17:42:08.970095: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Unable to open table file out/lsp_alexnet_imagenet_small/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1327, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1306, in _run_fn
status, run_metadata)
File "/usr/lib/python3.5/contextlib.py", line 66, in exit
next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file out/lsp_alexnet_imagenet_small/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[Node: save/RestoreV2_5 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_5/tensor_names, save/RestoreV2_5/shape_and_slices)]]
[[Node: save/RestoreV2/_37 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_74_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "tests/test_snapshot.py", line 116, in
main(dataset_name, snapshot_path)
File "tests/test_snapshot.py", line 79, in main
test_net(test_dataset, test_iterator, dataset_name, snapshot_path)
File "tests/test_snapshot.py", line 92, in test_net
gpu_memory_fraction=0.32) # Set how much GPU memory to reserve for the network
File "/home/wonjinlee/deeppose/scripts/regressionnet.py", line 94, in create_regression_net
saver.restore(net.sess, init_snapshot_path)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1560, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file out/lsp_alexnet_imagenet_small/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[Node: save/RestoreV2_5 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_5/tensor_names, save/RestoreV2_5/shape_and_slices)]]
[[Node: save/RestoreV2/_37 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_74_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Caused by op 'save/RestoreV2_5', defined at:
File "tests/test_snapshot.py", line 116, in
main(dataset_name, snapshot_path)
File "tests/test_snapshot.py", line 79, in main
test_net(test_dataset, test_iterator, dataset_name, snapshot_path)
File "tests/test_snapshot.py", line 92, in test_net
gpu_memory_fraction=0.32) # Set how much GPU memory to reserve for the network
File "/home/wonjinlee/deeppose/scripts/regressionnet.py", line 93, in create_regression_net
saver = tf.train.Saver()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1140, in init
self.build()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1172, in build
filename=self._filename)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 688, in build
restore_sequentially, reshape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 407, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 247, in restore_op
[spec.tensor.dtype])[0])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 663, in restore_v2
dtypes=dtypes, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1204, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

DataLossError (see above for traceback): Unable to open table file out/lsp_alexnet_imagenet_small/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
[[Node: save/RestoreV2_5 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_5/tensor_names, save/RestoreV2_5/shape_and_slices)]]
[[Node: save/RestoreV2/_37 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_74_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Why does this error happen?
Is it not possible to run the test while training is still in progress?
How can I resolve this error?
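
In TF 1.x this particular DataLossError ("not an sstable") usually means the path handed to Saver.restore points at the small "checkpoint" bookkeeping file rather than at an actual snapshot prefix such as checkpoint-130000. A minimal sketch of the intended call, assuming the model graph has already been built in the current session (the output directory is taken from the listing above):

    import tensorflow as tf

    out_dir = 'out/lsp_alexnet_imagenet_small'

    # latest_checkpoint() parses the "checkpoint" state file and returns the
    # prefix of the newest snapshot, e.g. '.../checkpoint-130000'.
    snapshot_prefix = tf.train.latest_checkpoint(out_dir)

    with tf.Session() as sess:
        saver = tf.train.Saver()   # assumes the model variables already exist in the graph
        saver.restore(sess, snapshot_prefix)

In other words, the snapshot path given to tests/test_snapshot.py should likely be something like out/lsp_alexnet_imagenet_small/checkpoint-130000, not the directory's "checkpoint" file itself.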

Always get the same prediction

Hi, after training on the LSP dataset,
when I output predictions for different images, the predicted joint locations are all very close to each other.
The reason could be that I only trained for 20,000 iterations.
Could you let me know a reasonable number of training iterations, or could you provide trained weights?
Thanks.
[screenshots: predicted joints for two different test images]

How do I use my own images for prediction?

Hello,

I was wondering if I could use my own images for prediction. I tried to run the test_snapshot.py. I'm looking for a result similar to #2 where the images are visualized. Could someone help me out?

Thank you.
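
The test script expects datasets prepared in the LSP/MPII format, but for a single custom image the preprocessing a DeepPose-style regressor needs can be sketched independently of this code base: crop the person's bounding box, resize it to the fixed network input, and keep the crop geometry so the predicted (crop-normalized) joints can be mapped back to the original image. The input size, the bbox convention and the normalization convention below are assumptions for illustration, not this project's API:

    import cv2
    import numpy as np

    def prepare_crop(image_path, bbox, input_size=227):
        # bbox = (x, y, w, h) around the person; input_size=227 assumes an
        # AlexNet-style backbone and may differ from what this repo uses.
        img = cv2.imread(image_path)                       # HxWx3, BGR
        x, y, w, h = bbox
        crop = img[y:y + h, x:x + w]
        crop = cv2.resize(crop, (input_size, input_size)).astype(np.float32)
        return crop, (x, y, w, h)

    def joints_to_image_coords(pred, crop_geom):
        # pred: (num_joints, 2) joint coordinates normalized to the crop; the exact
        # normalization must match the training code (assumed to be [0, 1] here).
        x, y, w, h = crop_geom
        pred = np.asarray(pred, dtype=np.float32)
        return pred * np.array([w, h]) + np.array([x, y])

With the crop fed through the restored network, joints_to_image_coords() recovers pixel coordinates that can be drawn on the original image for visualization, similar to #2.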

260k iterations, mPCP on LSP is close to 0.42 with ImageNet initialization

I trained on the LSP dataset with ImageNet initialization. After 260k iterations the train/pose_error is around [0.02, 0.03], but the mPCP score (0.42) is still far from the result listed in the table. It does increase, but pretty slowly.

Does this mean it will not converge this run? Do I need to restart the training? Thanks.

issue while importing scripts.config

After running the command python datasets/mpii_dataset.py I get the error "No module named scripts.config".
Here is what I got when I ran the command:
aniket@LAPTOP-DAK58UAQ:~/deeppose_tf$ python datasets/mpii_dataset.py
Traceback (most recent call last):
File "datasets/mpii_dataset.py", line 14, in
from scripts.config import *
ImportError: No module named scripts.config
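
This usually means the interpreter was started in a way that leaves the repository root off sys.path, so the scripts package cannot be found. Running from the repo root with PYTHONPATH=. set is the simplest fix; an equivalent (hypothetical) shim at the top of datasets/mpii_dataset.py would look like this, assuming the script sits one level below the repo root:

    import os
    import sys

    # Put the repository root (the directory that contains scripts/) on sys.path
    # before importing anything from scripts.*.
    REPO_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))
    if REPO_ROOT not in sys.path:
        sys.path.insert(0, REPO_ROOT)

    from scripts.config import *  # noqa: E402 -- must come after the path fix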

predicting specific parts

I am trying to create a system that detects only specific parts of human-like bodies, arms to be precise. Is it possible to train the model for this using images of such subjects?

How to realize next stages?

Hello, I noticed that you marked the code as implementing only the first stage.
However, there are three stages in total in the original paper. How can I realize the next two stages with your code? Should I just repeat the same code three times?
Looking forward to your reply, thank you!
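
For context, the cascade in the original paper is not three runs of the same script: stage 1 regresses all joints from the person crop, and every later stage trains a separate regressor per joint on a small sub-crop centred on the previous stage's estimate, predicting a displacement that refines that joint. A rough sketch of the inference loop (the function signatures are illustrative, not this repo's API):

    def cascade_predict(image, bbox, stage1_net, refine_nets, crop_fn, diam=0.2):
        # stage1_net(crop) -> (num_joints, 2) array of joint coordinates in image space.
        # refine_nets[s][j](crop) -> (2,) displacement for joint j at stage s+2.
        # crop_fn(image, box) -> fixed-size network input for box = (x, y, w, h).
        # diam scales the side length of the per-joint sub-crop.
        joints = stage1_net(crop_fn(image, bbox))       # stage 1: all joints at once
        x, y, w, h = bbox
        side = diam * max(w, h)
        for stage_nets in refine_nets:                  # stages 2..S: per-joint refinement
            for j, net_j in enumerate(stage_nets):
                jx, jy = joints[j]
                sub_box = (jx - side / 2.0, jy - side / 2.0, side, side)
                joints[j] = joints[j] + net_j(crop_fn(image, sub_box))
        return joints

So rather than repeating the existing code unchanged, the later stages need their own training data (sub-crops around the previous stage's noisy predictions) and their own regressors.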

Error while installing cupy

I tried to install CuPy with Python 2.7 on Windows 10, but I am getting the following error:

C:\Users\acer.LAPTOP-DAK58UAQ>pip2 install cupy
Collecting cupy
Downloading cupy-2.5.0.tar.gz (1.8MB)
100% |################################| 1.8MB 161kB/s
Complete output from command python setup.py egg_info:
**************************************************
*** WARNING: Cannot find nvToolsExt. nvtx was disabled.
**************************************************
Options: {'profile': False, 'annotate': False, 'linetrace': False, 'no_cuda': False}
Include directories: ['C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\include', 'C:\Program Files\NVIDIA Corporation\NvToolsExt\include']
Library directories: ['C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin', 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64', 'C:\Program Files\NVIDIA Corporation\NvToolsExt\lib\x64']
Microsoft Visual C++ 9.0 is required. Get it from http://aka.ms/vcpython27
**************************************************
*** WARNING: Include files not found: ['cublas_v2.h', 'cuda.h', 'cuda_profiler_api.h', 'cuda_runtime.h', 'curand.h', 'cusparse.h', 'nvrtc.h']
*** WARNING: Skip installing cuda support
*** WARNING: Check your CFLAGS environment variable
**************************************************
Traceback (most recent call last):
File "", line 1, in
File "c:\users\acer.laptop-dak58uaq\appdata\local\temp\pip-build-66dhvz\cupy\setup.py", line 32, in
ext_modules = cupy_setup_build.get_ext_modules()
File "cupy_setup_build.py", line 385, in get_ext_modules
extensions = make_extensions(arg_options, compiler, use_cython)
File "cupy_setup_build.py", line 275, in make_extensions
raise Exception('Your CUDA environment is invalid. '
Exception: Your CUDA environment is invalid. Please check above error log.

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in c:\users\acer.laptop-dak58uaq\appdata\local\temp\pip-build-66dhvz\cupy\

I have used CUDA 8 with OpenPose as well and it works fine for me, but in the case of DeepPose I am getting the above error.

Regarding Tensorflow 1.0

I am unable to find a TensorFlow 1.0 build for Python 2.7. How should I proceed?

Train error does not decrease.

Hi, I used train_lsp_alexnet_imagenet.sh to train, but the training error seems to get stuck around [0.1, 0.11] and does not decrease any further.

IndexError: list index out of range

  bash examples/train_lsp_alexnet_imagenet.sh 

Elapsed time for finding uninitialized variables: 0.42s
Elapsed time to init them: 0.16s
args.resume: False
args.snapshot: /home/wonjinlee/deeppose/weights/bvlc_alexnet.tf
Reading dataset from /home/wonjinlee/data/lsp/train_joints.csv
0it [00:00, ?it/s]
Traceback (most recent call last):
File "/home/wonjinlee/deeppose/scripts/train.py", line 233, in
main(sys.argv[1:])
File "/home/wonjinlee/deeppose/scripts/train.py", line 168, in main
downscale_height=args.downscale_height
File "/home/wonjinlee/deeppose/scripts/dataset.py", line 37, in init
self.load_images()
File "/home/wonjinlee/deeppose/scripts/dataset.py", line 129, in load_images
print('Joints shape:', self.joints[0][1].shape)
IndexError: list index out of range

How could I deal with this kind of error?
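
The "0it [00:00, ?it/s]" line shows that nothing at all was read from /home/wonjinlee/data/lsp/train_joints.csv, so self.joints is empty when load_images() tries to print the first entry's shape. Before re-running training it is worth checking that the dataset preparation step actually produced a non-empty CSV; a quick sanity check sketch (the path is taken from the log above):

    import csv
    import os

    csv_path = os.path.expanduser('~/data/lsp/train_joints.csv')

    # Fail with a clearer message than the IndexError above if the annotation
    # file is missing or empty.
    if not os.path.isfile(csv_path):
        raise IOError('Annotation file not found: {}'.format(csv_path))

    with open(csv_path) as f:
        num_rows = sum(1 for _ in csv.reader(f))

    print('{} rows in {}'.format(num_rows, csv_path))
    if num_rows == 0:
        raise ValueError('train_joints.csv is empty; re-run the dataset preparation scripts.')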

Error "Invalid argument" running on windows, python 3.5

I am trying to run the implementation with Python 3.5 on Windows 10.
I had to change some code from Python 2.x to Python 3.5, such as:
print "hello" ---> print("hello")
xrange() --> range()
izip --> zip
and some import styles.
With that, data preparation runs fine.
Then I tried to train the network on the LSP dataset with the pretrained AlexNet:
+ It loads the input data successfully.
+ Then the error occurs when it reaches this line in evaluate_pcp in regressionnet.py:
for i, batch in tqdm(enumerate(test_it), total=num_batches):
I think it comes from reading data with multiple worker processes, but I don't know where the bug comes from or how to debug it.
Please guide me on how I can track down the error, thank you.

The following is the stack trace:

Reading dataset from datasets/lsp_ext/train_joints.csv
8046it [00:43, 184.71it/s]G:\Pose\deeppose_tf-master\scripts\dataset.py:111: UserWarning: Skipping joint with incorrect joints coordinates. They are out of the image.
image: G:/Pose/deeppose_tf-master\datasets/lsp_ext\images\im08075.jpg, joint: [ 386.32211538  150.625     ], im.shape: (161, 241)
  'image: {}, joint: {}, im.shape: {}'.format(img_path, joints[i_joint], image_shape[:2]))
11000it [00:59, 186.13it/s]
Joints shape: (14, 2)
Reading dataset from datasets/lsp_ext/test_joints.csv
1000it [00:11, 83.85it/s]
Joints shape: (14, 2)
Reading dataset from datasets/lsp_ext/train_lsp_small_joints.csv
1000it [00:10, 99.88it/s]
Joints shape: (14, 2)
1000
<enumerate object at 0x000000002BE6D168>
  0%|                                                    | 0/8 [00:00<?, ?it/s]
Traceback (most recent call last):
File "scripts/train.py", line 242, in <module>
    main(sys.argv[1:])
  File "scripts/train.py", line 237, in main
    output_dir=args.o_dir
  File "scripts/train.py", line 77, in train_loop
    tag_prefix='test')
  File "G:\Pose\deeppose_tf-master\scripts\regressionnet.py", line 279, in evalu
ate_pcp
    for i, batch in tqdm(enumerate(test_it), total=num_batches):
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\site-package
s\tqdm\_tqdm.py", line 959, in __iter__
    for obj in iterable:
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\site-package
s\chainer\iterators\multiprocess_iterator.py", line 87, in __next__
    self._thread = self._prefetch_loop.launch_thread()
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\site-package
s\chainer\iterators\multiprocess_iterator.py", line 307, in launch_thread
    initargs=(self.dataset, self.mem_size, self.mem_bulk))
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\context.py", line 118, in Pool
    context=self.get_context())
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\pool.py", line 168, in __init__
    self._repopulate_pool()
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\pool.py", line 233, in _repopulate_pool
    w.start()
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\context.py", line 313, in _Popen
    return Popen(process_obj)
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\popen_spawn_win32.py", line 66, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
# OSError: [Errno 22] Invalid argument

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\spawn.py", line 106, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\icomlab\AppData\Local\Programs\Python\Python35\lib\multiprocess
ing\spawn.py", line 116, in _main
    self = pickle.load(from_parent)
# EOFError: Ran out of input
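
On Windows, multiprocessing spawns fresh interpreter processes instead of forking, so everything handed to chainer's MultiprocessIterator workers has to be pickled; the OSError: [Errno 22] raised inside ForkingPickler.dump suggests that pickling the whole in-memory dataset for the worker pool is what fails here. A pragmatic workaround sketch (the variable names and batch size are assumptions, and it does not change the repo's code) is to evaluate with a single-process iterator instead:

    from chainer import iterators

    # Single-process evaluation iterator: nothing has to be pickled to worker
    # processes, at the cost of slower data loading.
    test_it = iterators.SerialIterator(test_dataset, batch_size=128,
                                       repeat=False, shuffle=False)

Whether the slowdown matters depends on how often evaluate_pcp runs during training.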

Prediction for Real Time Video

Is it possible to use this for predicting an action from a real-time video feed (something like checking whether a person is running)?
I am very new to this field, so even a rough idea would be great.
Thanks

Test bbox extension range

In scripts/train.py

....
    print 'args.resume: {}\nargs.snapshot: {}'.format(args.resume, args.snapshot)
    bbox_extension_range = (args.bbox_extension_min, args.bbox_extension_max)
    if bbox_extension_range[0] is None or bbox_extension_range[1] is None:
        bbox_extension_range = None
        test_bbox_extension_range = None
    else:
        test_bbox_extension_range = (bbox_extension_range[1], bbox_extension_range[1])
...

Why do you use bbox_extension_range[1] twice on the last line; is it a typo?
Should it be:

...
 else:
        test_bbox_extension_range = (bbox_extension_range[0], bbox_extension_range[1])
...

Thanks.

Unable to download lsp dataset

--2017-10-14 17:34:56-- https://engineering.leeds.ac.uk/info/20132/school_of_computing
Resolving engineering.leeds.ac.uk (engineering.leeds.ac.uk)... 129.11.26.47
Connecting to engineering.leeds.ac.uk (engineering.leeds.ac.uk)|129.11.26.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘lsp_dataset_original.zip’

lsp_dataset_original.zip [ <=> ] 76.47K 19.8KB/s in 3.9s

2017-10-14 17:35:08 (19.8 KB/s) - ‘lsp_dataset_original.zip’ saved [78308]

Archive: lsp_dataset_original.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of lsp_dataset_original.zip or
lsp_dataset_original.zip.zip, and cannot find lsp_dataset_original.zip.ZIP, period.
mkdir: cannot create directory ‘lsp’: File exists
mv: cannot stat 'images': No such file or directory
mv: cannot stat 'joints.mat': No such file or directory
mv: cannot stat 'README.txt': No such file or directory
--2017-10-14 17:35:09-- http://www.comp.leeds.ac.uk/mat4saj/lspet_dataset.zip
Resolving www.comp.leeds.ac.uk (www.comp.leeds.ac.uk)... 129.11.133.104
Connecting to www.comp.leeds.ac.uk (www.comp.leeds.ac.uk)|129.11.133.104|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://engineering.leeds.ac.uk/info/20132/school_of_computing [following]
--2017-10-14 17:35:09-- https://engineering.leeds.ac.uk/info/20132/school_of_computing
Resolving engineering.leeds.ac.uk (engineering.leeds.ac.uk)... 129.11.26.47
Connecting to engineering.leeds.ac.uk (engineering.leeds.ac.uk)|129.11.26.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘lspet_dataset.zip’

lspet_dataset.zip [ <=> ] 76.47K 169KB/s in 0.5s

2017-10-14 17:35:11 (169 KB/s) - ‘lspet_dataset.zip’ saved [78308]

Archive: lspet_dataset.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of lspet_dataset.zip or
lspet_dataset.zip.zip, and cannot find lspet_dataset.zip.ZIP, period.
mkdir: cannot create directory ‘lsp_ext’: File exists
mv: cannot stat 'images': No such file or directory
mv: cannot stat 'joints.mat': No such file or directory
mv: cannot stat 'README.txt': No such file or directory
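
The wget output shows that both URLs now redirect to an HTML page (Length: unspecified [text/html], roughly 76 K saved), so unzip is being pointed at a web page rather than the dataset archives; the LSP and LSPET zips need to be fetched from wherever they are currently hosted and placed where the download script expects them. A small check sketch that catches this before the unzip/mv steps run (file names taken from the log above):

    import zipfile

    for archive in ('lsp_dataset_original.zip', 'lspet_dataset.zip'):
        # A real zip starts with the 'PK' magic bytes; the ~78 KB files saved by
        # the redirected downloads are HTML and fail this test.
        if not zipfile.is_zipfile(archive):
            raise ValueError('{} is not a zip archive; the download URL probably '
                             'redirected to an HTML page.'.format(archive))
        with zipfile.ZipFile(archive) as zf:
            print('{}: {} entries'.format(archive, len(zf.namelist())))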

Syntax error

I set up everything as you mentioned, but I got the following error (see attached screenshot).

not a bug... but a question

If I have to use this code to test on a non-MPII dataset that does not have any annotations (the positions of the various body parts), can it be done? Can I make some changes, feed in images, and then see whether the different parts of the body are identified?

the mpii dataset

The official MPII dataset is not split into train and test sets?
Do I have to split it myself?
Won't different people then end up with different train/test splits?
