Facebook AI Performance Evaluation Platform (FAI-PEP) is a framework- and backend-agnostic benchmarking platform for comparing machine learning inference runtime metrics across a set of models and a variety of backends. It also provides a means to check for performance regressions on each commit. It is licensed under the Apache License 2.0; please refer to the LICENSE file for details.
Currently, the following performance metrics are collected:
- Delay: the latency of running the entire network and/or the delay of running each individual operator.
- Error: the error between the output values of running a model and the golden outputs.
- Energy/Power: the energy per inference and the average power of running the ML model on a phone with the battery disconnected (powered by an external power monitor).
- Other user-provided metrics: the harness can accept any metric that the user binary generates.
Machine learning is a rapidly evolving area with many moving parts: enhancements to new and existing frameworks, new hardware solutions, new software backends, and new models. With so many moving parts, it is difficult to quickly evaluate the performance of a machine learning model. However, such evaluation is vitally important for guiding resource allocation in:
- the development of the frameworks
- the optimization of the software backends
- the selection of the hardware solutions
- the iteration of the machine learning models
This project aims to achieve the following two goals:
- Easily evaluate the runtime performance of a selected model on all existing backends.
- Easily evaluate the runtime performance of a selected backend on all existing models.
The flow of benchmarking is illustrated in the following figure:
The flow is composed of three parts:
- A centralized model/benchmark specification
  - A fair input to the comparison
- A centralized benchmark driver with distributed benchmark execution
  - The same code base for all backends to reduce variation
  - Distributed execution due to the unique build/run environment for each backend
- A centralized data consumption
  - One stop to compare the performance
The currently supported frameworks are: Caffe2, TFLite.
The currently supported model formats are: Caffe2, TFLite.
The currently supported backends are: CPU, GPU, DSP, Android, iOS, and Linux-based systems.
The currently supported libraries are: Eigen, MKL, NNPACK, OpenGL, and CUDA.
The benchmark platform also provides a means to compare performance between commits and detect regressions. It uses an A/B testing methodology that compares the runtime difference between a newer commit (treatment) and an older commit (control). The metric of interest is the relative performance difference between the commits, since the backend platform's condition may differ from one time to another. Running the same tests on two different commit points at the same time removes most of the backend variation, and this method has been shown to improve the precision of detecting performance regressions.
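As a minimal sketch of the metric of interest (not the harness's actual implementation; the function and sample numbers below are illustrative only), the relative difference can be computed as follows:

def relative_difference(control_ms, treatment_ms):
    # Relative performance difference between a newer commit (treatment)
    # and an older commit (control), run back to back on the same device
    # so that most device-side variation cancels out.
    return (treatment_ms - control_ms) / control_ms

# Example: 25.2 ms (treatment) vs. 24.0 ms (control) is a 5% slowdown.
print(relative_difference(24.0, 25.2))  # ~0.05 (within floating-point rounding)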
The benchmarking codebase resides in the benchmarking directory. Inside, the frameworks directory contains all supported ML frameworks; a new framework can be added by creating a new directory, deriving from framework_base.py, and implementing all its methods. The platforms directory contains all supported ML backend platforms; a new backend can be added by creating a new directory, deriving from platform_base.py, and implementing all its methods.
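The extension pattern is, schematically, one of subclassing and overriding. The sketch below is purely illustrative: the class and method names are placeholders, and the real abstract interface should be taken from framework_base.py (and platform_base.py for backends).

# Hypothetical sketch only; the actual interface is defined in
# benchmarking/frameworks/framework_base.py and uses different method names.
from abc import ABC, abstractmethod

class FrameworkBaseSketch(ABC):
    @abstractmethod
    def run_benchmark(self, benchmark, platform):
        """Run one benchmark on the given platform and return metrics."""

class MyNewFramework(FrameworkBaseSketch):
    def run_benchmark(self, benchmark, platform):
        # Build the command line for this framework's benchmark binary,
        # execute it on the target platform, and parse its output into
        # {metric_name: value} pairs.
        return {"NET latency": 12.3}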
The model specifications reside in the specifications directory. Inside, the models directory contains all model and benchmarking specifications, organized by model format. The benchmarks directory contains sequences of benchmarks, also organized by model format. The frameworks directory contains custom build scripts for each framework.
The models and benchmarks are specified in JSON format. The file /specifications/models/caffe2/squeezenet/squeezenet.json is a good example to study in order to understand what data is specified.
A few key items in the specifications (a sketch of such a specification follows the list):
- The models are hosted in third-party storage. The download links and their MD5 hashes are specified. The benchmarking tool automatically downloads a model if it is not found in the local model cache. The MD5 hash of the cached model is computed and compared with the specified one; if they do not match, the model is downloaded again and the MD5 hash is recomputed. This way, if the model changes, you only need to update the specification and the new model is downloaded automatically.
- In the inputs field of tests, one may specify multiple shapes. This is shorthand indicating that the tests are benchmarked on all of the shapes in sequence.
- In some fields, such as identifier, you may find a string like {ID}. This is a placeholder that the benchmarking tool replaces in order to differentiate the multiple test runs specified in one test specification, as in the item above.
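A hypothetical, trimmed-down specification illustrating these items might look like the following (the field names are modeled loosely on the squeezenet example referenced above; treat that file as the authoritative schema):

{
  "model": {
    "name": "my_model",
    "format": "caffe2",
    "files": {
      "predict": {
        "filename": "model.pb",
        "location": "https://example.com/models/my_model/model.pb",
        "md5": "0123456789abcdef0123456789abcdef"
      }
    }
  },
  "tests": [
    {
      "identifier": "{ID}",
      "metric": "delay",
      "iter": 50,
      "warmup": 5,
      "inputs": {
        "data": {
          "shapes": [[1, 3, 224, 224], [2, 3, 224, 224]],
          "type": "float"
        }
      }
    }
  ]
}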
To run the benchmark, run run_bench.py with a model meta data or a benchmark meta data file. An example command (when running under the FAI-PEP directory) is the following:
benchmarking/run_bench.py -b specifications/models/caffe2/shufflenet/shufflenet.json
When you run the command for the first time, you are asked several questions. The answers to those questions, together with other sensible defaults, are saved in a config file: ~/.aibench/git/config.txt. You can edit this file to update your default arguments.
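Judging from user-reported configurations, the saved file is essentially a JSON map from argument names to values, roughly like the following (the paths are placeholders; check your own generated file for the exact keys):

{
  "--exec_dir": "/home/user/.aibench/git/exec",
  "--framework": "caffe2",
  "--local_reporter": "/home/user/.aibench/git/reporter",
  "--model_cache": "/home/user/.aibench/git/model_cache",
  "--platforms": "android",
  "--repo": "git",
  "--repo_dir": "/home/user/pytorch",
  "--root_model_dir": "/home/user/.aibench/git/root_model_dir",
  "--screen_reporter": null
}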
The arguments to the driver are as follows. It also accepts the arguments described in the following sections and passes them on to those scripts.
usage: run_bench.py [-h] [--reset_options]
Perform one benchmark run
optional arguments:
-h, --help show this help message and exit
--reset_options Reset all the options that are saved by default.
run_bench.py can be the single point of entry for both interactive and regression benchmark runs.
The harness.py script is the entry point for one benchmark run. It collects the runtime for an entire net and/or individual operators, and saves the data locally or pushes it to a remote server. The usage of the script is as follows:
usage: harness.py [-h] [--android_dir ANDROID_DIR] [--ios_dir IOS_DIR]
[--backend BACKEND] -b BENCHMARK_FILE
[--command_args COMMAND_ARGS] [--cooldown COOLDOWN]
[--device DEVICE] [-d DEVICES]
[--excluded_devices EXCLUDED_DEVICES] --framework
{caffe2,generic,oculus,tflite} --info INFO
[--local_reporter LOCAL_REPORTER]
[--monsoon_map MONSOON_MAP]
[--simple_local_reporter SIMPLE_LOCAL_REPORTER]
--model_cache MODEL_CACHE -p PLATFORM
[--platform_sig PLATFORM_SIG] [--program PROGRAM] [--reboot]
[--regressed_types REGRESSED_TYPES]
[--remote_reporter REMOTE_REPORTER]
[--remote_access_token REMOTE_ACCESS_TOKEN]
[--root_model_dir ROOT_MODEL_DIR]
[--run_type {benchmark,verify,regress}] [--screen_reporter]
[--simple_screen_reporter] [--set_freq SET_FREQ]
[--shared_libs SHARED_LIBS] [--string_map STRING_MAP]
[--timeout TIMEOUT] [--user_identifier USER_IDENTIFIER]
[--wipe_cache WIPE_CACHE]
[--hash_platform_mapping HASH_PLATFORM_MAPPING]
[--user_string USER_STRING]
Perform one benchmark run
optional arguments:
-h, --help show this help message and exit
--android_dir ANDROID_DIR
The directory in the android device all files are
pushed to.
--ios_dir IOS_DIR The directory in the ios device all files are pushed
to.
--backend BACKEND Specify the backend the test runs on.
-b BENCHMARK_FILE, --benchmark_file BENCHMARK_FILE
Specify the json file for the benchmark or a number of
benchmarks
--command_args COMMAND_ARGS
Specify optional command arguments that would go with
the main benchmark command
--cooldown COOLDOWN Specify the time interval between two test runs.
--device DEVICE The single device to run this benchmark on
-d DEVICES, --devices DEVICES
Specify the devices to run the benchmark, in a comma
separated list. The value is the device or device_hash
field of the meta info.
--excluded_devices EXCLUDED_DEVICES
Specify the devices that skip the benchmark, in a
comma separated list. The value is the device or
device_hash field of the meta info.
--framework {caffe2,generic,oculus,tflite}
Specify the framework to benchmark on.
--info INFO The json serialized options describing the control and
treatment.
--local_reporter LOCAL_REPORTER
Save the result to a directory specified by this
argument.
--monsoon_map MONSOON_MAP
Map the phone hash to the monsoon serial number.
--simple_local_reporter SIMPLE_LOCAL_REPORTER
Same as local reporter, but the directory hierarchy is
reduced.
--model_cache MODEL_CACHE
The local directory containing the cached models. It
should not be part of a git directory.
-p PLATFORM, --platform PLATFORM
Specify the platform to benchmark on. Use this flag if
the framework needs special compilation scripts. The
scripts are called build.sh saved in
specifications/frameworks/<framework>/<platform>
directory
--platform_sig PLATFORM_SIG
Specify the platform signature
--program PROGRAM The program to run on the platform.
--reboot Tries to reboot the devices before launching
benchmarks for one commit.
--regressed_types REGRESSED_TYPES
A json string that encodes the types of the regressed
tests.
--remote_reporter REMOTE_REPORTER
Save the result to a remote server. The style is
<domain_name>/<endpoint>|<category>
--remote_access_token REMOTE_ACCESS_TOKEN
The access token to access the remote server
--root_model_dir ROOT_MODEL_DIR
The root model directory if the meta data of the model
uses relative directory, i.e. the location field
starts with //
--run_type {benchmark,verify,regress}
The type of the current run. The allowed values are:
benchmark, the normal benchmark run; verify, the
benchmark is re-run to confirm a suspicious
regression; regress, the regression is confirmed.
--screen_reporter Display the summary of the benchmark result on screen.
--simple_screen_reporter
Display the result on screen with no post processing.
--set_freq SET_FREQ On rooted android phones, set the frequency of the
cores. The supported values are: max: set all cores to
the maximum frequency. min: set all cores to the
minimum frequency. mid: set all cores to the median
frequency.
--shared_libs SHARED_LIBS
Pass the shared libs that the framework depends on, in
a comma separated list.
--string_map STRING_MAP
A json string mapping tokens to replacement strings.
The tokens, surrounded by \{\}, when appearing in the
test fields of the json file, are to be replaced with
the mapped values.
--timeout TIMEOUT Specify a timeout running the test on the platforms.
The timeout value needs to be large enough so that the
low end devices can safely finish the execution in
normal conditions. Note, in A/B testing mode, the test
runs twice.
--user_identifier USER_IDENTIFIER
User can specify an identifier and that will be passed
to the output so that the result can be easily
identified.
--wipe_cache WIPE_CACHE
Specify whether to evict cache or not before running
--hash_platform_mapping HASH_PLATFORM_MAPPING
Specify the devices hash platform mapping json file.
--user_string USER_STRING
Specify the user running the test (to be passed to the
remote reporter).
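For reference, a hypothetical direct invocation might look like the one below. Normally run_bench.py or repo_driver.py assembles this command for you; in particular, the --info payload shown here is only an assumed shape for the control/treatment description, not a documented format:

benchmarking/harness.py -b specifications/models/caffe2/squeezenet/squeezenet.json \
  --framework caffe2 -p android --model_cache ~/.aibench/git/model_cache \
  --info '{"treatment": {"commit": "master"}}' \
  --local_reporter ~/.aibench/git/reporter --screen_reporter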
The repo_driver.py script is the entry point to run the benchmark continuously. It repeatedly pulls the framework from GitHub, builds the framework, and launches harness.py with the built benchmarking binaries.
The accepted arguments are as follows:
usage: repo_driver.py [-h] [--ab_testing] [--base_commit BASE_COMMIT]
[--branch BRANCH] [--commit COMMIT]
[--commit_file COMMIT_FILE] --exec_dir EXEC_DIR
--framework {caffe2,oculus,generic,tflite}
[--frameworks_dir FRAMEWORKS_DIR] [--interval INTERVAL]
--platforms PLATFORMS [--regression]
[--remote_repository REMOTE_REPOSITORY]
[--repo {git,hg}] --repo_dir REPO_DIR [--same_host]
[--status_file STATUS_FILE] [--step STEP]
Perform one benchmark run
optional arguments:
-h, --help show this help message and exit
--ab_testing Enable A/B testing in benchmark.
--base_commit BASE_COMMIT
In A/B testing, this is the control commit that is
used to compare against. If not specified, the default
is the first commit in the week in UTC timezone. Even
if specified, the control is the later of the
specified commit and the commit at the start of the
week.
--branch BRANCH The remote repository branch. Defaults to master
--commit COMMIT The commit this benchmark runs on. It can be a branch.
Defaults to master. If it is a commit hash, and
program runs on continuous mode, it is the starting
commit hash the regression runs on. The regression
runs on all commits starting from the specified
commit.
--commit_file COMMIT_FILE
The file saves the last commit hash that the
regression has finished. If this argument is specified
and is valid, the --commit has no use.
--exec_dir EXEC_DIR The executable is saved in the specified directory. If
an executable is found for a commit, no re-compilation
is performed. Instead, the previous compiled
executable is reused.
--framework {caffe2,oculus,generic,tflite}
Specify the framework to benchmark on.
--frameworks_dir FRAMEWORKS_DIR
Required. The root directory in which all frameworks
reside. Usually it is the
specifications/frameworks directory.
--interval INTERVAL The minimum time interval in seconds between two
benchmark runs.
--platforms PLATFORMS
Specify the platforms to benchmark on, in a comma
separated list. Use this flag if the framework needs
special compilation scripts. The scripts are called
build.sh saved in
specifications/frameworks/<framework>/<platforms>
directory
--regression Indicate whether this run detects regression.
--remote_repository REMOTE_REPOSITORY
The remote repository. Defaults to origin
--repo {git,hg} Specify the source control repo of the framework.
--repo_dir REPO_DIR Required. The base framework repo directory used for
benchmark.
--same_host Specify whether the build and benchmark run are on the
same host. If so, the build cannot be done in parallel
with the benchmark run.
--status_file STATUS_FILE
A file that informs the driver to stop running when
the content of the file is 0.
--step STEP Specify the number of commits we want to run the
benchmark once under continuous mode.
The repo_driver.py script can also take any arguments that are recognized by harness.py; they are passed through to it.
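A hypothetical continuous run, built only from the arguments shown in the usage above plus a benchmark file that is passed through to harness.py (the paths are placeholders):

benchmarking/repo_driver.py --framework caffe2 --platforms android \
  --repo_dir ~/pytorch --exec_dir ~/.aibench/git/exec \
  --frameworks_dir specifications/frameworks --interval 3600 --ab_testing \
  -b specifications/models/caffe2/squeezenet/squeezenet.json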
fai-pep's Issues
Path Manipulation
Allowing user input to control paths used in file system operations could enable an attacker to access or modify otherwise protected system resources
UnboundLocalError: local variable 'abs_name' referenced before assignment
When I first run the command,
python ${FAI_PEP_DIR}/benchmarking/run_bench.py -b "${BENCHMARK_FILE}" --config_dir "${CONFIG_DIR}"
I encounter this message:
"UnboundLocalError: local variable 'abs_name' referenced before assignment"
However, when I run the same command again, the error disappears. I suspect the .md5 auto-generation somehow broke the first time.
Contribution
Sorry for creating an issue in this repo. Are you working on Django?
When running multiple tests, failures can be "swallowed"
We run multiple tests from a single config and introduced an error in the second test in the group. This failure was not noticed because each test reuses the same temp directory, so report.json from the previous run is reused. That means that even though the run fails, we report data from the previous run.
What's the magic behind "--platform android"
Hello, I'm trying to get benchmarks from my phone running MobileNetV2.
I was reading this document and have one question.
Is just passing "--platform=android" enough to run the benchmark binaries? How does the executed binary know where my phone is? I thought I would need to set up something like "adb", but I couldn't find any mention of "adb". Now I am wondering if FAI-PEP includes some Android simulator that runs the binary.
Could you tell me what I should refer to if I want to run FAI-PEP on a phone I connect?
Change dictionary syntax to use get
FAI-PEP/ailab/benchmark/templatetags/nvd3_tags.py
Lines 44 to 54 in 6f74b68
The above syntax can be changed to the following.
kw_extra['x_is_date'] = kw_extra.get('x_is_date', False)
kw_extra['x_axis_format'] = kw_extra.get('x_axis_format', "%d %b %Y")
kw_extra['color_category'] = kw_extra.get('color_category', "category20")
kw_extra['tag_script_js'] = kw_extra.get('tag_script_js', True)
kw_extra['chart_attr'] = kw_extra.get('chart_attr', {})
Wiki needs update?
Hello all,
It seems that the Wiki of this project needs an update.
For example, "~/caffe2/pytorch" in https://github.com/facebook/FAI-PEP/wiki/Run-FAI-PEP-for-the-first-time is no longer valid because caffe2 is now part of pytorch.
If I use pytorch/caffe2 instead, the build.sh for caffe2 on Android cannot generate the caffe2_benchmark binary as expected for the latest pytorch repo.
Thank you!
Problems when running the experiment with docker
When trying to run the experiment with docker for TFLite example, I encountered the following error:
+ python /tmp/FAI-PEP/benchmarking/run_bench.py -b /tmp/FAI-PEP/specifications/models/tflite/mobilenet_v2/mobilenet_v2_0.35_96.json --config_dir /tmp/config
usage: run_bench.py [-h] [--app_id APP_ID] [-b BENCHMARK_FILE] [--lab]
[--logger_level {info,warning,error}] [--remote]
--root_model_dir ROOT_MODEL_DIR [--token TOKEN]
[-c CUSTOM_BINARY] [--pre_built_binary PRE_BUILT_BINARY]
[--user_string USER_STRING]
run_bench.py: error: argument --root_model_dir is require
Does anyone know how to resolve this? Did I miss anything before running the example?
Failed to run benchmark scripts in Android
Hi there,
I am new here. Following the tutorial, I typed:
benchmarking/run_bench.py -b specifications/models/caffe2/shufflenet/shufflenet.json --platforms android
After a long time compiling, all compile and link tasks finish in the build_android folder in my pytorch repo. But it throws an error:
cmake unknown rule to install xxxx
It looks like the caffe2_benchmark executable has been generated but failed to be copied to the install folder, so I manually copied it to the folder, namely:
/home/new/.aibench/git/exec/caffe2/android/2019/4/5/fefa6d305ea3e820afe64cec015d2f6746d9ca88
Then I modified repo_driver.py to avoid compiling again and ran the function _runBenchmarkSuites,
but it failed:
In file included from ../third_party/zstd/lib/common/pool.h:20:0,
from ../third_party/zstd/lib/common/pool.c:14:
../third_party/zstd/lib/common/zstd_internal.h:382:37: error: unknown type name ‘ZSTD_dictMode_e’; did you mean ‘FSE_decode_t’?
ZSTD_dictMode_e dictMode,
^~~~~~~~~~~~~~~
FSE_decode_t
Questions:
- Any suggestions on how to run the tutorial correctly?
- How can I avoid the long compilation each time I run
benchmarking/run_bench.py -b specifications/models/caffe2/shufflenet/shufflenet.json --platforms android
Thanks!
What should be the directory the framework repo resides for run_bench.py
I tried benchmarking/run_bench.py -b specifications/models/tflite/mobilenet_v2/mobilenet_v2_1.4_224.json
but I don't know what I should enter at the prompt "Please enter the directory the framework repo resides" - should it be the local PyTorch or TF Lite repo directory? Is there an example of this?
What is meant by "without a battery" in top level README.md ?
In top level README one of the bullets about performance metrics says,
"energy/power : the energy per inference and average power of running the the ML model on a phone without battery"
I assume the "the the" is a typing error that should be "the".
But what about "without battery"? If inference-engine hardware in mobile phones could truly run without a battery, that would be very low power indeed! Is this a typo, or if not, what is meant by it?
-- jS
Example usage for iOS
Can you show an example on how to use the system with iOS?
Does the system compile the .ipa that gets sent to the iOS device, or do we have to provide it?
Failed to run shuffleNet on host
I have tried to run shufflenet and modified the inputs to gpu_0/data, and I am getting the following issue:
INFO 12:24:56 subprocess_with_logger.py: 24: Running: /tmp/FAI-PEP/libraries/python/imagenet_test_map.py --image-dir /tmp/imagenet/val --label-file /tmp/FAI-PEP/libraries/python/labels.txt --output-image-file /tmp/tmpLwDpGk/caffe2/host/images.txt --output-label-file /tmp/tmpLwDpGk/caffe2/host/labels.txt --shuffle
INFO 12:24:56 subprocess_with_logger.py: 24: Running: awk (NR>050000/1000)&&(NR<=050000/1000+50000/1000) {print > "/tmp/tmpLwDpGk/caffe2/host/inputs/labels_0.txt"} /tmp/tmpLwDpGk/caffe2/host/labels.txt
INFO 12:24:56 subprocess_with_logger.py: 24: Running: /tmp/config/exec/caffe2/host/incremental/2019/5/23/90182a7332997fb0edf666abc4b554b83a1670d1/convert_image_to_tensor --input_image_file /tmp/tmpLwDpGk/caffe2/host/inputs/labels_0.txt --output_tensor /tmp/tmpLwDpGk/caffe2/host/images_tensor.pb --batch_size 1 --scale 256,-1 --crop 224,224 --preprocess normalize,mean,std --report_time json|Caffe2Observer
INFO 12:24:57 hdb.py: 27: push /tmp/tmpLwDpGk/caffe2/host/images_tensor.pb to /tmp/tmpJ2TNu5/6ea951fe0a41/images_tensor.pb
INFO 12:24:57 hdb.py: 27: push /tmp/config/model_cache/caffe2/shufflenet/model.pb to /tmp/tmpJ2TNu5/6ea951fe0a41/model.pb
INFO 12:24:57 hdb.py: 27: push /tmp/config/model_cache/caffe2/shufflenet/model_init.pb to /tmp/tmpJ2TNu5/6ea951fe0a41/model_init.pb
{u'softmax': u'/tmp/tmpJ2TNu5/6ea951fe0a41/output/softmax.txt'}
INFO 12:24:57 subprocess_with_logger.py: 24: Running: /tmp/tmpJ2TNu5/6ea951fe0a41/caffe2_benchmark --net /tmp/tmpJ2TNu5/6ea951fe0a41/model.pb --init_net /tmp/tmpJ2TNu5/6ea951fe0a41/model_init.pb --warmup 0 --iter 50 --input gpu_0/data --input_file /tmp/tmpJ2TNu5/6ea951fe0a41/images_tensor.pb --input_type float --output gpu_0/softmax --text_output true --output_folder /tmp/tmpJ2TNu5/6ea951fe0a41/output
/tmp/tmpLwDpGk/caffe2
/tmp/tmpLwDpGk/caffe2/output
INFO 12:25:00 hdb.py: 38: pull /tmp/tmpJ2TNu5/6ea951fe0a41/output/softmax.txt to /tmp/tmpLwDpGk/caffe2/output/softmax.txt
INFO 12:25:00 hdb.py: 40: directory /tmp/tmpJ2TNu5/6ea951fe0a41/output
INFO 12:25:00 hdb.py: 46: filenames /tmp/tmpJ2TNu5/6ea951fe0a41/output/
INFO 12:25:00 benchmark_driver.py: 64: Exception caught when running benchmark
INFO 12:25:00 benchmark_driver.py: 65: [Errno 2] No such file or directory: u'/tmp/tmpJ2TNu5/6ea951fe0a41/output/softmax.txt'
ERROR 12:25:00 benchmark_driver.py: 69: Traceback (most recent call last):
gpu_0 is prefixed to every node in the shufflenet checkpoint.
General tutorial on running FAI-PEP
It would be great if you could add a general tutorial which allows practitioners to benchmark all sorts of models. The idea behind FAI-PEP is really good but all tutorials are geared towards image-classification models.
[Proposal] Replace the bazel build command with the binary provided by the TensorFlow documentation
The bazel build command in specifications/frameworks/tflite/android/build.sh requires a lot of dependencies, like an appropriate version of bazel, the Android SDK, and the NDK. It's burdensome.
I found that the resulting binary is provided here, which is from https://www.tensorflow.org/lite/performance/measurement.
I commented out
# --config=android_arm \
# --cxxopt='--std=c++11' \
# tensorflow/lite/tools/benchmark:benchmark_model
these lines and saved the downloaded binary in {tensorflow_dir}/bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model
Then,
python ${FAI_PEP_DIR}/benchmarking/run_bench.py -b "${BENCHMARK_FILE}" --config_dir "${CONFIG_DIR}"
executed without problems.
Below is my configuration.
{
\"--commit\": \"master\",
\"--exec_dir\": \"${CONFIG_DIR}/exec\",
\"--framework\": \"tflite\",
\"--local_reporter\": \"${CONFIG_DIR}/reporter\",
\"--model_cache\": \"${CONFIG_DIR}/model_cache\",
\"--platforms\": \"android\",
\"--remote_repository\": \"origin\",
\"--repo\": \"git\",
\"--repo_dir\": \"${REPO_DIR}\",
\"--tmp_model_dir\": \"${CONFIG_DIR}/tmp_model_dir\",
\"--root_model_dir\": \"${CONFIG_DIR}/root_model_dir\",
\"--screen_reporter\": null
}
Is there a visualization component to understand models and their output?
Checking if the key exists in dictionary instead of using dict.get(key, None)
FAI-PEP/benchmarking/driver/benchmark_driver.py
Lines 35 to 36 in a26fa88
FAI-PEP/benchmarking/driver/benchmark_driver.py
Lines 42 to 43 in a26fa88
Some places in this file
FAI-PEP/benchmarking/driver/benchmark_driver.py
check whether a key is in the dictionary and then access it. It may be inefficient and less clean to go through the dictionary twice. Can I raise a PR to change these to
minfo["shared_libs"] = info.get("shared_libs", "")
cinfo["shared_libs"] = info.get("shared_libs", "")