
🧠⚙️ Standalone native implementation of the Web Neural Network API

License: Apache License 2.0


webnn-native's Introduction

CI status badges (Backend \ OS): null backend (unit tests); DirectMLX backend on Windows, with Node binding and memory-leak check; OpenVINO backend on Windows and Linux, with Node bindings; XNNPACK backend on Windows and Linux; oneDNN backend on Windows and Linux; MLAS backend on Windows; clang-format.

WebNN-native

WebNN-native is a native implementation of the Web Neural Network API.

It provides several building blocks:

  • WebNN C/C++ headers that applications and other building blocks use.
    • webnn.h, a one-to-one mapping of the WebNN IDL.
    • A C++ wrapper for webnn.h.
  • Backend implementations that use platforms' ML APIs:
    • DirectML on Windows 10
    • DirectMLX on Windows 10
    • OpenVINO on Windows 10 and Linux
    • oneDNN on Windows 10 and Linux
    • XNNPACK on Windows 10 and Linux
    • MLAS on Windows 10 and Linux
    • Other backends are to be added

WebNN-native uses the code of other open source projects:

  • The code generator and infrastructure code of Dawn project.
  • The DirectMLX and device wrapper of DirectML project.
  • The XNNPACK project.
  • The oneDNN project.
  • The MLAS project.

Build and Run

Install depot_tools

WebNN-native uses the Chromium build system and dependency management, so you need to install depot_tools and add it to your PATH.

Notes:

  • On Windows, you'll need to set the environment variable DEPOT_TOOLS_WIN_TOOLCHAIN=0. This tells depot_tools to use your locally installed version of Visual Studio (by default, depot_tools will try to download a Google-internal version).

Get the code

Get the source code as follows:

# Clone the repo as "webnn-native"
> git clone https://github.com/webmachinelearning/webnn-native.git webnn-native && cd webnn-native

# Bootstrap the gclient configuration
> cp scripts/standalone.gclient .gclient

# Fetch external dependencies and toolchains with gclient
> gclient sync

Setting up the build

Generate build files using gn args out/Debug or gn args out/Release.

A text editor will open so you can set build options; the most common is is_debug=true/false. Run gn args out/Release --list to see all the possible options.

To build with a backend, please set the corresponding option from the following table.

Backend Option
DirectML webnn_enable_dml=true
DirectMLX webnn_enable_dmlx=true
OpenVINO webnn_enable_openvino=true
XNNPACK webnn_enable_xnnpack=true
oneDNN webnn_enable_onednn=true
MLAS webnn_enable_mlas=true

Build

Then use ninja -C out/Release or ninja -C out/Debug to build WebNN-native.

Notes

  • To build with the XNNPACK backend, please build XNNPACK first, e.g. by ./scripts/build-local.sh. For a Windows build, supply -DCMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded$<$<CONFIG:Debug>:Debug>" to select the MSVC static runtime library.
  • To build with oneDNN backend, please build oneDNN first by following the build from source instructions.
  • To build with the MLAS backend, please build MLAS (part of ONNX Runtime) first by following "Build ONNX Runtime for inferencing", e.g. by .\build.bat --config Release --parallel --enable_msvc_static_runtime for a Windows build.

Run tests

Run unit tests:

> ./out/Release/webnn_unittests

Run end2end tests on a default device:

> ./out/Release/webnn_end2end_tests

You can also specify a device to run end2end tests on using the -d option, for example:

> ./out/Release/webnn_end2end_tests -d gpu

Currently "cpu", "gpu" and "default" are supported; more devices will be supported in the future.


Run examples

License

Apache License 2.0; please see LICENSE.

webnn-native's People

Contributors

anssiko, bbernhar, brucedai, fujunwei, honry, huningxin, lisa0314, miaobin, mingmingtasd, ravirajsitaram, vbenni, wangli69087


webnn-native's Issues

[DML] ResNet NCHW model tests don't meet the tolerance requirement

We defined the tolerance as < 0.005 here, but the results of the ResNet NCHW model tests only meet < 0.01, so we need to discuss how to solve this problem.

[----------] 3 tests from ResNetNchwTests
[ RUN      ] ResNetNchwTests.NchwTest0
Error: The output value at index 5 is expected as -9.77821, but got -9.77252
../../src/tests/end2end/models/ResNetNchw.cpp(32): error: Value of: utils::CheckValue(result, outputNpy.as_vec<float>())
  Actual: false
Expected: true
[  FAILED  ] ResNetNchwTests.NchwTest0 (3628 ms)
[ RUN      ] ResNetNchwTests.NchwTest1
Error: The output value at index 26 is expected as -19.3955, but got -19.3886
../../src/tests/end2end/models/ResNetNchw.cpp(32): error: Value of: utils::CheckValue(result, outputNpy.as_vec<float>())
  Actual: false
Expected: true
[  FAILED  ] ResNetNchwTests.NchwTest1 (3197 ms)
[ RUN      ] ResNetNchwTests.NchwTest2
Error: The output value at index 5 is expected as -8.86224, but got -8.8562
../../src/tests/end2end/models/ResNetNchw.cpp(32): error: Value of: utils::CheckValue(result, outputNpy.as_vec<float>())
  Actual: false
Expected: true
[  FAILED  ] ResNetNchwTests.NchwTest2 (3144 ms)
[----------] 3 tests from ResNetNchwTests (9969 ms total)
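For reference, the kind of absolute-tolerance comparison described above can be sketched as follows (the name and signature are illustrative, not the actual utils::CheckValue implementation):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of an absolute-tolerance check in the spirit of the
// test utility: every output element must be within `tolerance` of the
// expected value loaded from the .npy reference file.
bool CheckValues(const std::vector<float>& actual,
                 const std::vector<float>& expected,
                 float tolerance = 0.005f) {
    if (actual.size() != expected.size()) {
        return false;
    }
    for (std::size_t i = 0; i < actual.size(); ++i) {
        // e.g. |-9.77252 - (-9.77821)| = 0.00569, which exceeds 0.005
        // but would pass a 0.01 tolerance.
        if (std::abs(actual[i] - expected[i]) > tolerance) {
            return false;
        }
    }
    return true;
}
```

With the failing value from the log above, this check fails at 0.005 but passes at 0.01, which is exactly the gap the issue describes.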

[Node.js] Tests for cos, sin, tan can't meet the tolerance requirement

With the DML backend, running npm run test-ops through Node.js under webnn-native/node gives the results below:

test element-wise unary operations
    √ abs
    √ ceil
a(-0.3924400508403778) b(-0.39242464) delta(0.00001541084037781637)
    59) cos
    √ exp
    √ floor
    √ log
    √ neg
a(0.9874919056892395) b(0.98748258) delta(0.000009325689239503276)
    60) sin
a(6.260713577270508) b(6.26066374) delta(0.00004983727050777986)
    61) tan

Failed to run mobilenetv2_nchw.js model tests through node.js binding

I verified that, since this commit, npm run test-models fails under webnn-native/node. It seems that clamp causes a memory leak.

You can reproduce by these steps:

  1. Based on PR#105
  2. Modify webnn-polyfill commit id in DEPS as:
    'node/third_party/webnn-polyfill': { 'url': '{github_git}/webmachinelearning/webnn-polyfill.git@73ac6627c19b6e1f640b161166137c7b5e40698a' },
  3. gclient sync

cd webnn-native/node
npm install --webnn_native_lib_path="../out/Release"
npm run build --webnn_native_lib_path="../out/Release"
npm run test-models

[node][DML] Failed to run TinyYolov2 model tests with assertion error.

Following the WebNN-native Binding for Node.js README, running the TinyYolov2 model tests with the Node binding of the WebNN API on the DirectML backend fails with the following assertion errors:

test tinyYolov2 nchw
a(0.2928530275821686) b(0.2941698133945465) delta(0.0013167858123779297)
1) test_data_set_0
a(0.4636850357055664) b(0.4621993899345398) delta(0.0014856457710266113)
2) test_data_set_1
a(0.1699068546295166) b(0.16871242225170135) delta(0.0011944323778152466)
3) test_data_set_2

test tinyYolov2 nhwc
a(0.007876233197748661) b(0.002362951636314392) delta(0.005513281561434269)
4) test_data_set_0
a(-0.017894059419631958) b(-0.011623484082520008) delta(0.00627057533711195)
5) test_data_set_1
a(-0.13853715360164642) b(-0.14467230439186096) delta(0.006135150790214539)
6) test_data_set_2

This issue doesn't reproduce with the Node binding on the OpenVINO backend.

Test Device info:
ASUS ZenBook Flip S laptop -- CPU:11th Gen Intel i7-1165G7, GPU:Intel Xe Graphics (driver version 30.0.100.9684)

Windows specifications:
Edition:Windows 10 Version:2004 OS build:19041.1110

[OV] build warnings of ngraph_c_api

$ ninja -C out/Release/
ninja: Entering directory `out/Release/'
[1/15] ACTION //src/webnn_native:build_ngraph_c_api(//build/toolchain/linux:clang_x64)
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp: In function ‘IEStatusCode ngraph_constant(const tensor_desc_t*, const ie_blob_t*, ngraph_node_t**)’:
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp:168:21: warning: ignoring return value of ‘IEStatusCode ie_blob_get_buffer(const ie_blob_t*, ie_blob_buffer_t*)’, declared with attribute warn_unused_result [-Wunused-result]
   ie_blob_get_buffer(blob, &buffer);
   ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp: In function ‘IEStatusCode ngraph_average_pool(const ngraph_node_t*, const size_t*, uint32_t, const size_t*, uint32_t, const size_t*, uint32_t, ngraph_auto_pad, ngraph_node_t**)’:
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp:457:56: warning: ‘auto_pad’ may be used uninitialized in this function [-Wmaybe-uninitialized]
       true, ngraph::op::RoundingType::FLOOR, GetAutoPad(mode));
                                              ~~~~~~~~~~^~~~~~
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp: In function ‘IEStatusCode ngraph_max_pool(const ngraph_node_t*, const size_t*, uint32_t, const size_t*, uint32_t, const size_t*, uint32_t, ngraph_auto_pad, ngraph_node_t**)’:
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp:477:50: warning: ‘auto_pad’ may be used uninitialized in this function [-Wmaybe-uninitialized]
       ngraph::op::RoundingType::FLOOR, GetAutoPad(mode));
                                        ~~~~~~~~~~^~~~~~
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp: In function ‘IEStatusCode ngraph_convolution(const ngraph_node_t*, const ngraph_node_t*, const size_t*, uint32_t, const int32_t*, uint32_t, const size_t*, uint32_t, ngraph_auto_pad, ngraph_node_t**)’:
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp:498:35: warning: ‘auto_pad’ may be used uninitialized in this function [-Wmaybe-uninitialized]
       dilations_vector, GetAutoPad(mode));
                         ~~~~~~~~~~^~~~~~
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp: In function ‘IEStatusCode ngraph_group_convolution(const ngraph_node_t*, const ngraph_node_t*, const size_t*, uint32_t, const int32_t*, uint32_t, const size_t*, uint32_t, ngraph_auto_pad, ngraph_node_t**)’:
/home/nhu/code/webnn-native/third_party/openvino/ngraph_c_api/src/ngraph_c_api.cpp:519:35: warning: ‘auto_pad’ may be used uninitialized in this function [-Wmaybe-uninitialized]
       dilations_vector, GetAutoPad(mode));
                         ~~~~~~~~~~^~~~~~
[6/6] STAMP obj/all.stamp

[Node] App crashed on Windows with TFLite WebNN delegate multi-threading build

Reported by @Honry , thanks!

Test: https://honry.github.io/webnn-samples/semantic_segmentation/ + Electron.js + WebNN OV/DML

Relevant PR: Honry/webnn-samples#14

The root cause is related to an upstream issue with building Node.js addons against Electron on Windows: electron/electron#29893. We rely on accessing v8::SharedArrayBuffer::GetBackingStore() in fix #107; however, it turns out this doesn't work on Electron 13+ on Windows because Electron is built with clang/libc++. Eventually we should use N-API to access SharedArrayBuffer, but that capability depends on nodejs/node#23276.

In the short term, I would suggest disabling SharedArrayBuffer support until it is officially supported by N-API.

Implement the changes/additions of ops for style transfer models

Implement the spec change of webmachinelearning/webnn#123.

The changes include:

  • Extend the conv2d operation to support transposed convolution, an essential upsample tool for encoder-decoder models.
  • Add instanceNormalization in addition to batch-normalization. Instance-normalization is a fused operation for a normalization subgraph that computes the mean and variance values per-feature instance on the fly.
  • Replace the sqrt unary operation with a more generic pow binary operation used by the normalization process.
  • Add pad operation that supports all 4 padding modes found in various frameworks.
  • Add resample operation to support both upsampling and downsampling of feature instances. This operation is used in the ONNX version of the style-transfer models.

The corresponding webnn-polyfill issue: webmachinelearning/webnn-polyfill#34
The style transfer sample: https://webmachinelearning.github.io/webnn-samples/style_transfer/
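As a rough illustration of what the fused instanceNormalization operation computes (a sketch of the math only, assuming NCHW layout; this is not the WebNN-native or backend implementation):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Instance normalization over an NCHW tensor: each (batch, channel) plane
// is normalized with its own mean and variance, computed on the fly.
void InstanceNorm(std::vector<float>& data, std::size_t batches,
                  std::size_t channels, std::size_t height,
                  std::size_t width, float epsilon = 1e-5f) {
    const std::size_t plane = height * width;
    for (std::size_t n = 0; n < batches; ++n) {
        for (std::size_t c = 0; c < channels; ++c) {
            float* p = data.data() + (n * channels + c) * plane;
            float mean = 0.f;
            for (std::size_t i = 0; i < plane; ++i) mean += p[i];
            mean /= plane;
            float var = 0.f;
            for (std::size_t i = 0; i < plane; ++i)
                var += (p[i] - mean) * (p[i] - mean);
            var /= plane;
            const float invStd = 1.f / std::sqrt(var + epsilon);
            for (std::size_t i = 0; i < plane; ++i)
                p[i] = (p[i] - mean) * invStd;
        }
    }
}
```

After this pass, each feature-instance plane has approximately zero mean and unit variance, which is why the spec change treats it as a fused replacement for the normalization subgraph.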

[DML] 28 failed tests for conv2d with transpose option

[ FAILED ] Conv2dTests.Conv2dTransposeDefault
[ FAILED ] Conv2dTests.Conv2dTransposeNchwHwio
[ FAILED ] Conv2dTests.Conv2dTransposeNchwOhwi
[ FAILED ] Conv2dTests.Conv2dTransposeNchwIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeNhwcOihw
[ FAILED ] Conv2dTests.Conv2dTransposeNhwcHwio
[ FAILED ] Conv2dTests.Conv2dTransposeNhwcOhwi
[ FAILED ] Conv2dTests.Conv2dTransposeNhwcIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeDefault
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeNchwHwio
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeNchwOhwi
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeNchwIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeNhwcOihw
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeNhwcHwio
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeNhwcOhwi
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputShapeNhwcIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingDefault
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingNchwHwio
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingNchwOhwi
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingNchwIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingNhwcOihw
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingNhwcHwio
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingNhwcOhwi
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputPaddingNhwcIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeWithAutoPadSameUpperDefault
[ FAILED ] Conv2dTests.Conv2dTransposeWithAutoPadExplicitNhwcIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeWithAutoPadSameLowerNhwcIhwo
[ FAILED ] Conv2dTests.Conv2dTransposeWithOutputSizeIgnoredOutputPadding

Error message:

error: Value of: utils::CheckValue(result, expected.value)
  Actual: false
Expected: true

@miaobin PTAL, thanks.

[DML/IE] Re-implement and optimize clamp

Need to re-implement clamp with dml::Clip when the min and max parameters are both float values, as in the following description:

struct DML_ELEMENT_WISE_CLIP_OPERATOR_DESC {
  const DML_TENSOR_DESC *InputTensor;
  const DML_TENSOR_DESC *OutputTensor;
  const DML_SCALE_BIAS  *ScaleBias;
  FLOAT                 Min;
  FLOAT                 Max;
};

We also need to refactor the Clamp implementation on IE.
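For reference, the elementwise behavior that a clip operator with float Min/Max provides can be sketched as follows (an illustrative scalar reference, not the DirectML call itself):

```cpp
#include <algorithm>
#include <vector>

// Reference semantics of an elementwise clip/clamp with scalar float
// bounds: every element is constrained to the range [min, max].
std::vector<float> ClampReference(std::vector<float> values, float min,
                                  float max) {
    for (float& v : values) {
        v = std::clamp(v, min, max);  // std::clamp requires C++17
    }
    return values;
}
```

Dispatching a single clip operator with scalar bounds like this avoids materializing the min and max operands as extra constant nodes in the graph.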

Implement the ops for the first-wave vision models

According to the first-wave models, there are four vision models: SqueezeNet, MobileNetV2, ResNetV2 and TinyYOLOV2.

To support those models, five ops need to be implemented: batchNormalization, clamp, concat, gemm and leakyRelu.

These ops are supported by webnn-polyfill. It would be good to keep them aligned.

[OV] conv2d and pool2d tests crash for debug build

Build the debug version and run the conv2d or pool2d tests:

$ out/Debug/webnn_end2end_tests --gtest_filter=*Conv2dTests*
Note: Google Test filter = *Conv2dTests*
[==========] Running 66 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 66 tests from Conv2dTests
[ RUN      ] Conv2dTests.Conv2dWithPaddingDefault
webnn_end2end_tests: /home/nhu/code/webnn-native/third_party/openvino/ienn/src/ie_model.cpp:414: ie_operand_t* InferenceEngine::Model::AddConv2d(ie_operand_t*, ie_operand_t*, ie_conv2d_options_t*): Assertion `0' failed.
Aborted (core dumped)

The crash is caused by setting autoPad to explicit for conv2d and pool2d. The conv2d code handling the explicit case should break instead of falling through to the default case; the same applies to pool2d.
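Schematically, the fix amounts to giving the explicit case its own return/break instead of falling through to the asserting default (the enum and function names here are hypothetical; the real code lives in ienn's ie_model.cpp):

```cpp
#include <cassert>
#include <string>

// Illustrative sketch of the auto-pad mapping, not the actual ienn code.
enum class AutoPad { Explicit, SameUpper, SameLower };

std::string ToNgraphAutoPad(AutoPad pad) {
    switch (pad) {
        case AutoPad::Explicit:
            // The bug: this case fell through to the assert(0) below.
            // It must return (or break) here instead.
            return "EXPLICIT";
        case AutoPad::SameUpper:
            return "SAME_UPPER";
        case AutoPad::SameLower:
            return "SAME_LOWER";
    }
    assert(0);  // unreachable for valid inputs
    return "";
}
```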

[unitTests] GraphValidationTest.BuildCallBackSuccess failed

One case failed as below:

[ RUN      ] GraphValidationTest.BuildCallBackSuccess
unknown file: Failure

Unexpected mock function call - returning directly.
    Function call: Call(0, 0x558ee7392c20, NULL, 0x558ee738e6c0)
Google Mock tried the following 1 expectation, but it didn't match:

../../src/tests/unittests/validation/GraphValidationTests.cpp:65: EXPECT_CALL(*mockGraphBuildCallback, Call(MLBuildGraphStatus_Error, _, _, this))...
  Expected arg #0: is equal to 1
           Actual: 0
         Expected: to be called once
           Actual: never called - unsatisfied and active
../../src/tests/unittests/validation/GraphValidationTests.cpp:65: Failure
Actual function call count doesn't match EXPECT_CALL(*mockGraphBuildCallback, Call(MLBuildGraphStatus_Error, _, _, this))...
         Expected: to be called once
           Actual: never called - unsatisfied and active
[  FAILED  ] GraphValidationTest.BuildCallBackSuccess (9 ms)

[IE] Failed to build IE backend with OpenVINO 2021.4

Building ienn with INTEL_OPENVINO_DIR="/opt/intel/openvino_2021", which links to the latest 2021.4 version, fails. INTEL_OPENVINO_DIR="/opt/intel/openvino_2021.2.185" has been verified to work, so please wait for our support of 2021.4.

Implement WebNN specification in Chromium browser

WebNN-native is a standalone component implementing the Web Neural Network API, and we have exposed the JavaScript API through a Node.js binding. We are starting to integrate WebNN-native into the Chromium browser; there are two approaches to implementing IPC between the render process and the GPU process:

  1. Mojom
  2. Command Buffer

Mojom is simple, but interoperability with WebGPU/WebGL may be hard; the Command Buffer implementation is complex, as it is designed for web graphics.

webnn_end2end_tests with webnn_enable_dml fail

These tests report unhandled NOT_IMPL errors when using DML:

[ FAILED ] PowTests.Sqrt1d
[ FAILED ] PowTests.Sqrt3d
[ FAILED ] PowTests.Pow1d
[ FAILED ] PowTests.PowBroadcastScalar
[ FAILED ] PowTests.PowBroadcast1d

We should suppress them when failure is expected so we can keep the bots green. This is particularly important for WebNN integrations.

@huningxin @fujunwei

[Node] Invalid argument when passing SharedArrayBuffer as argument

Node.js throws an "Invalid argument" error when passing a typed array backed by a SharedArrayBuffer to the WebNN-native Node.js addon API, in particular builder.constant and graph.compute. SharedArrayBuffer is commonly used for web workers, so this issue impacts multi-threaded Wasm builds, e.g. TFLite Web and ONNX Runtime Web.

Run webnn-polyfill JS tests via node.js binding

There are 455 JS test cases covering APIs, ops and models for webnn-polyfill. Thanks to mocha, developers can also use npm test to run these tests with Node.js, e.g. for continuous integration.

To avoid duplicating and porting these tests to C++, it would be useful to run these JS tests against webnn-native. As webnn.h is a one-to-one mapping of the WebNN spec, it is possible to add a Node.js binding to webnn-native and expose the WebNN JS API through it.

[OV] Fail to build deeplab model with NHWC + OpenVino backend

Test case: #52

When testing the DeepLab sample with NHWC layout and the OpenVINO backend (Node.js Electron app), building the graph fails at the resample op at the following code line:

    const resample0 = this.builder_.resample(
        conv4, {sizes: [1, 65, 65, 256], mode: 'linear'});

https://github.com/webmachinelearning/webnn-samples/blob/master/semantic_segmentation/deeplabv3_mnv2_nhwc.js#L137.

This works on the DML backend, and the NCHW layout also works on the OpenVINO backend.

Error Log:

Error: Failed to build graph.
    at DeepLabV3MNV2Nhwc.build (deeplabv3_mnv2_nhwc.js:151)
    at main (main.js:334)
    at async window.onload (index.html?undefined:205)

Refactor code once a backend requires fusing clamp when creating the graph

We currently use a workaround to support fused clamp: OpenVINO can fuse clamp via its graph compiler, while DML doesn't support fusing clamp today, so we added a clamp node in GraphBuilder directly to ensure we can find the min and max operands from the graph. We need to refactor this code once a backend requires fusing clamp when creating the graph.

Unable to run webnn_end2end_tests.exe under third_party using webnn_enable_dml

I am trying to run webnn_end2end_tests.exe where the webnn_native component is built under the .\third_party sub-directory, as most Chromium components reside (e.g. third_party\Dawn).

But when I run webnn_end2end_tests.exe, built with webnn_enable_dml = true set in build_overrides/webnn_features.gni, I get the following output:

[==========] Running 194 tests from 32 test suites.
[----------] Global test environment set-up.
[----------] 3 tests from AddTests
[ RUN      ] AddTests.AddConstantAndInput
Detected memory leaks!
Dumping objects ->
{2566} normal block at 0x000001B988199940, 12 bytes long.
 Data: <            > 03 00 00 00 04 00 00 00 05 00 00 00 
{2564} normal block at 0x000001B98819A160, 12 bytes long.
 Data: <            > 03 00 00 00 04 00 00 00 05 00 00 00 
...

And the test run fails (it does not continue).

Any idea how to proceed (perhaps the memory-leak checking is causing an issue)?

@huningxin @fujunwei
