
bobheadxi / gobenchdata


📉 Run Go benchmarks, publish results to an interactive web app, and check for performance regressions in your pull requests

Home Page: https://gobenchdata.bobheadxi.dev

License: MIT License

Go 63.67% Makefile 1.22% Dockerfile 0.78% Shell 4.60% JavaScript 0.49% HTML 0.98% CSS 0.04% Vue 12.14% TypeScript 15.80% SCSS 0.27%
benchmarking charts continuous-benchmarking continuous-benchmarks github-actions golang typescript vuejs

gobenchdata's Introduction

gobenchdata's People

Contributors

angeleneleow, bobheadxi, jrascagneres, nikolaydubina, piotrekmonko, srenatus


gobenchdata's Issues

Parsing of benchmark results with panic/timeout causes another panic

I'm not sure what the best course of action is, but panic doesn't seem right in any case:

go test ./internal/langserver/handlers \
    -bench=InitializeFolder_basic \
    -run=^# \
    -benchtime=60s \
    -timeout=30m | tee /home/runner/work/_temp/benchmarks.txt
goos: linux
goarch: amd64
pkg: github.com/hashicorp/terraform-ls/internal/langserver/handlers
cpu: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
BenchmarkInitializeFolder_basic/local-single-module-no-provider-2         	    2354	  29887759 ns/op	 1466327 B/op	   23874 allocs/op
BenchmarkInitializeFolder_basic/local-single-submodule-no-provider-2      	     471	 147021517 ns/op	 2709140 B/op	   40481 allocs/op
BenchmarkInitializeFolder_basic/local-single-module-random-2              	     510	 143119210 ns/op	 1911226 B/op	   27656 allocs/op
BenchmarkInitializeFolder_basic/local-single-module-aws-2                 	      51	1371407234 ns/op	105128302 B/op	  745133 allocs/op
BenchmarkInitializeFolder_basic/aws-consul-2                              	      42	1447751904 ns/op	127841282 B/op	 1055052 allocs/op
BenchmarkInitializeFolder_basic/aws-eks-2                                 	      28	2296038135 ns/op	199662055 B/op	 1481714 allocs/op
BenchmarkInitializeFolder_basic/aws-vpc-2                                 	      46	1530870496 ns/op	123258121 B/op	  862732 allocs/op
BenchmarkInitializeFolder_basic/google-project-2                          	      38	2780695563 ns/op	148429337 B/op	 1076845 allocs/op
BenchmarkInitializeFolder_basic/google-network-2                          	      28	2585281011 ns/op	133152685 B/op	  878487 allocs/op
BenchmarkInitializeFolder_basic/google-gke-2                              	      15	4587726688 ns/op	131472710 B/op	  844753 allocs/op
BenchmarkInitializeFolder_basic/k8s-metrics-server-2                      	      56	1301099738 ns/op	60277646 B/op	  378735 allocs/op
BenchmarkInitializeFolder_basic/k8s-dashboard-2                           	SIGQUIT: quit
PC=0x46ae5c m=0 sigcode=0

goroutine 181042 [running]:
runtime.memclrNoHeapPointers()

See whole log at https://github.com/hashicorp/terraform-ls/runs/6137664889?check_suite_focus=true#step:7:10

  BENCHDATA="go run go.bobheadxi.dev/gobenchdata"
  
  BENCH_PATH="/home/runner/work/_temp/benchmarks.txt"
  DATA_PATH="/home/runner/work/_temp/benchdata.json"
  RESULTS_PATH="/home/runner/work/_temp/benchdata-results.json"
  CHECKS_CONFIG_PATH="/home/runner/work/terraform-ls/terraform-ls/.github/gobenchdata-checks.yml"
  
  cat $BENCH_PATH | $BENCHDATA --json ${DATA_PATH} -v "${GITHUB_SHA}" -t "ref=${GITHUB_REF}"
  
  $BENCHDATA checks eval \
    ${DATA_PATH} \
    ${DATA_PATH} \
    --checks.config ${CHECKS_CONFIG_PATH}
panic: BenchmarkInitializeFolder_basic/k8s-dashboard-2: could not parse run: strconv.Atoi: parsing "SIGQUIT: quit": invalid syntax (line: BenchmarkInitializeFolder_basic/k8s-dashboard-2                               SIGQUIT: quit)

goroutine 1 [running]:
main.main()
    /home/runner/go/pkg/mod/go.bobheadxi.dev/[email protected]/main.go:230 +0x135d
exit status 2

tracking issue: gobenchdata v1

If you use gobenchdata, I'd love to hear from you! Please feel free to respond to this issue or reach out to me at [email protected]

Previews are now available! See #38 for more details

Won't Change

  • benchmark storage format will stay the same - all existing run histories will work with v1 without a change
    • by extension, existing generated web apps will continue to work

Will Change

Improvements for CI Usage

โš ๏ธ to continue using the gobenchdata Action as-is when I release v1, you will need to pin your action version to bobheadxi/[email protected]!

  • #28 and related output (check status? PR comments? generate image, a la #32?)
    • at the moment my thinking is: all the output is available as JSON through some means, so advanced reports like PR comments and automated image generation+updates should be handled in separate actions (this one already does a lot as it is, since it includes the publishing mechanism)
  • More generic usage: allow pushing to different branches/repos
  • Follow GitHub Actions versioning conventions (i.e. use bobheadxi/gobenchdata@v1)

Revamped Visualization

Better charts, more control over what gets charted (chart groups, more places to put descriptions, better ways to select groups of benchmarks)

CLI improvements

Drafts

improved chart configuration

Right now config is purely by command-line flags - would a yml-based configuration file make sense to introduce more options?

as of v0.4.1:

โฏ gobenchdata-web help                 
gobenchdata-web generates a simple website for visualizing gobenchdata benchmarks.

usage:

  gobenchdata-web [flags]

other commands:

  version        show gobenchdata version
  help           show help text

flags:

      --benchmarks-file string    path to file where benchmarks are saved (default "benchmarks.json")
  -i, --canonical-import string   canonical import path for package, eg 'go.bobheadxi.dev/gobenchdata' (used to generate links to source code)
  -c, --charts-types string       additional chart types to generate (comma-separated, in addition to 'ns/op') (default "bytes/op,allocs/op")
      --desc string               a description to include in the generated web app
  -o, --out string                directory to output website in (default ".")
  -s, --source string             source repository for package, eg 'github.com/bobheadxi/gobenchdata'
      --title string              title for generated website (default "gobenchdata continuous benchmarks")
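
As a rough sketch of what a YAML-backed configuration could map to, here is a hypothetical Go struct that mirrors the flags above; the field names, yaml tags, and the gobenchdata-web.yml file name are invented for illustration and are not an actual gobenchdata configuration format:

// Hypothetical sketch only: a config struct mirroring the gobenchdata-web flags
// above, loaded from a YAML file. Field names and the file name are invented
// here and do not reflect an actual gobenchdata configuration format.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type WebConfig struct {
	Title           string   `yaml:"title"`
	Description     string   `yaml:"description"`
	BenchmarksFile  string   `yaml:"benchmarksFile"`
	CanonicalImport string   `yaml:"canonicalImport"`
	Source          string   `yaml:"source"`
	ChartTypes      []string `yaml:"chartTypes"`
	Out             string   `yaml:"out"`
}

func main() {
	b, err := os.ReadFile("gobenchdata-web.yml") // hypothetical config file name
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg WebConfig
	if err := yaml.Unmarshal(b, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("loaded config: %+v\n", cfg)
}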

Option to have graph for each benchmark

The per-package graphs work well for scenarios such as the one used for the demo web app; however, in cases where there are many more benchmarks, the graphs don't scale well, as shown in this screenshot:
(screenshot)

I'd like to be able to generate a web app to display a graph for each benchmark individually. If you're happy for this to be an option I'd be happy to work on a PR.

Ah, yeah if you don't use the GitHub Actions actor (GITHUB_ACTOR) which is available in an Action run, new commits can trigger new builds.

Unfortunately this can't really be resolved by gobenchdata - you'll have to either:

  • use the default actor, such that the commit author shows up like this (sorry, I don't have a gobenchdata-relevant example)
  • push to an alternative branch or repo, like gh-pages

Originally posted by @bobheadxi in https://github.com/bobheadxi/gobenchdata/issues/43#issuecomment-647562271

Problem on second run: destination path '.' already exists and is not an empty directory

First of all thanks for this amazing GitHub action!

I'm trying to publish benchmark results to a separate repository. I already fixed the problem with the standard GitHub token, and I'm using a personal one with write permissions.

I'm having a problem when executing a second run of the benchmarks. If there is a benchmarks.json file already on the remote repository, the process fails with exit status 128: fatal: destination path '.' already exists and is not an empty directory.

These are my workflow config files:

Main benchmark.yml file:

name: run benchmarks
on:
  workflow_call:
    inputs:
      publish:
        required: true
        type: boolean
      test-flags:
        required: true
        type: string
jobs:
  benchmarks:
    runs-on: self-hosted
    steps:
    - name: checkout
      uses: actions/checkout@v2
    - uses: actions/setup-go@v2
      with: 
        go-version: "1.20"
    - name: "gobenchdata publish: ${{ inputs.publish }}"
      run: go run go.bobheadxi.dev/gobenchdata@v1 action
      env:
        GITHUB_TOKEN: ${{ secrets.WRITE_TOKEN }}
        INPUT_PRUNE_COUNT: 30
        INPUT_GO_TEST_FLAGS: ${{ inputs.test-flags }}
        INPUT_PUBLISH: ${{ inputs.publish }}
        INPUT_PUBLISH_REPO: ajnavarro/gno-benchmarks
        INPUT_PUBLISH_BRANCH: gh-pages
        INPUT_BENCHMARKS_OUT: benchmarks.json
        INPUT_CHECKS: ${{ !inputs.publish }}
        INPUT_CHECKS_CONFIG: .benchmarks/gobenchdata-checks.yml

benchmark-check.yml that is running on every PR:

name: run benchmarks on every PR

on:
  pull_request:
jobs:
  check:
    uses: ./.github/workflows/benchmark.yml
    secrets: inherit
    with:
      publish: false
      test-flags: "-short -run=^$"

benchmark-publish.yml file that will run every day with main changes:

name: run benchmarks on main branch every day

on:
  workflow_dispatch:
  schedule:
    - cron:  '0 0 * * *' # run on default branch every day
jobs:
  publish:
    uses: ./.github/workflows/benchmark.yml
    secrets: inherit
    with:
      publish: true
      test-flags: "-short -run=^$" # TODO: remove short flag

I really appreciate any help you can provide. Thanks!

add ability to fail check if benchmark regression is outside range

While reading /r/Golang I noticed a new post about cob, a continuous benchmark tool focused on performing checks during PR or commit builds with Travis CI, GitHub Actions, and CircleCI to see whether benchmark performance has regressed outside of an acceptable range. If the regression is outside that range, the CI build fails. It might be useful to integrate this kind of functionality into gobenchdata. If so, I can take a stab at implementing it.
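
As a rough illustration of the kind of gate being described (this is not cob's or gobenchdata's implementation; the threshold and values below are made up), the check boils down to computing a percentage difference against the base run and failing when it exceeds a limit:

// Rough illustration only, not cob's or gobenchdata's implementation: fail the
// build when the current ns/op regresses more than a threshold percentage
// relative to the base run. Values and threshold below are made up.
package main

import (
	"fmt"
	"os"
)

// percentDiff returns the percentage change from base to current.
func percentDiff(base, current float64) float64 {
	return (current - base) / base * 100
}

func main() {
	const maxRegressionPct = 20.0 // hypothetical acceptable range

	base, current := 1500.0, 1950.0 // ns/op from the base and current runs
	diff := percentDiff(base, current)
	fmt.Printf("ns/op changed by %.1f%%\n", diff)
	if diff > maxRegressionPct {
		fmt.Println("benchmark regression outside acceptable range, failing build")
		os.Exit(1)
	}
}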

checks always compare against first recorded benchmark

So on every merge, we'll update and commit benchmarks.json in the benchmarks branch of the repo. It works well, I can see one new entry for every merged commit. ✅

For every PR, we're running checks against the recorded benchmarks.json file. The checks run, but they always use the first recorded commit as the base, which doesn't seem right to me:

📊 Running benchmarks...
detected 2 benchmark suites
successfully output results as json to '/tmp/gobenchdata/benchmarks.json'

📚 Checking out StyraInc/yadda@benchmarks...
branch 'benchmarks' set up to track 'origin/benchmarks'.

🔎 Evaluating results against base runs...
report output written to /tmp/gobenchdata/checks-results.json
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

๐Ÿ“ Generating checks report...
|    |                   BASE                   |                 CURRENT                  | PASSED CHECKS | FAILED CHECKS | TOTAL |
|----|------------------------------------------|------------------------------------------|---------------|---------------|-------|
| ✅ | 2cb3161f3907093c4a5cbd9cf2641b92243051ce | 8f83bf07bd43eb463b5488f30dea5ea5fa098c78 |             3 |             0 |     3 |

Am I holding the tool wrong? I'm quite confused about this one 😅
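
For reference on what the base could mean here: the published history is a JSON array of runs, each carrying a Unix Date field (see the sample input in the Mem.BytesPerOp issue below). A minimal sketch, assuming that format, of selecting the most recent run as the comparison base rather than the first recorded one:

// Sketch only, assuming the run-history JSON format shown in a later issue
// (an array of runs with a Unix "Date" field): pick the most recent run as
// the base for comparison rather than the first recorded one.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"sort"
)

type Run struct {
	Date   int64             `json:"Date"`
	Suites []json.RawMessage `json:"Suites"`
}

func main() {
	b, err := os.ReadFile("benchmarks.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var runs []Run
	if err := json.Unmarshal(b, &runs); err != nil || len(runs) == 0 {
		fmt.Fprintln(os.Stderr, "no runs found:", err)
		os.Exit(1)
	}
	// Sort newest first and use the first entry as the base run.
	sort.Slice(runs, func(i, j int) bool { return runs[i].Date > runs[j].Date })
	fmt.Printf("base run date: %d (%d suites)\n", runs[0].Date, len(runs[0].Suites))
}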

cannot find GOROOT directory

GitHub Action complains: go: cannot find GOROOT directory: /opt/hostedtoolcache/go/1.14.1/x64.
Log:

2020-06-22T13:06:46.8623176Z ========================
2020-06-22T13:06:46.8626203Z /bin/gobenchdata
2020-06-22T13:06:46.8631710Z gobenchdata version unknown
2020-06-22T13:06:46.8654639Z INPUT_GO_BENCHMARKS=.
2020-06-22T13:06:46.8655465Z INPUT_GO_TEST_FLAGS=-cpu 1,2
2020-06-22T13:06:46.8655599Z INPUT_PUBLISH_BRANCH=dev
2020-06-22T13:06:46.8655716Z INPUT_PRUNE_COUNT=30
2020-06-22T13:06:46.8655836Z INPUT_CHECKS_CONFIG=.github/gobench.yml
2020-06-22T13:06:46.8655952Z INPUT_SUBDIRECTORY=.
2020-06-22T13:06:46.8656066Z INPUT_CHECKS=true
2020-06-22T13:06:46.8656328Z INPUT_PUBLISH_REPO=ironpeakservices/micro-domain
2020-06-22T13:06:46.8656458Z INPUT_GIT_COMMIT_MESSAGE=chore: add new benchmark
2020-06-22T13:06:46.8656576Z INPUT_PUBLISH=true
2020-06-22T13:06:46.8656689Z INPUT_GO_TEST_PKGS=./...
2020-06-22T13:06:46.8656798Z INPUT_BENCHMARKS_OUT=.github/benchmarks.json
2020-06-22T13:06:46.8658780Z GITHUB_ACTOR=hazcod
2020-06-22T13:06:46.8658913Z GITHUB_WORKSPACE=/github/workspace
2020-06-22T13:06:46.8659367Z GITHUB_REPOSITORY=ironpeakservices/micro-domain
2020-06-22T13:06:46.8659496Z GITHUB_SHA=90100fcb04f70612ac97ed374841de48e4d279df
2020-06-22T13:06:46.8659619Z GITHUB_REF=refs/pull/690/merge
2020-06-22T13:06:46.8659747Z ========================
2020-06-22T13:06:46.8701051Z 
2020-06-22T13:06:46.8702245Z 📊 Running benchmarks...
2020-06-22T13:06:46.8754991Z go: cannot find GOROOT directory: /opt/hostedtoolcache/go/1.14.1/x64
2020-06-22T13:06:46.8786763Z detected 0 benchmark suites
2020-06-22T13:06:46.8791656Z successfully output results as json to '/tmp/gobenchdata/benchmarks.json'
2020-06-22T13:06:46.8801012Z 
2020-06-22T13:06:46.8803094Z 📚 Checking out ironpeakservices/micro-domain@dev...
2020-06-22T13:06:46.8813842Z Cloning into '.'...
2020-06-22T13:06:48.0722298Z Switched to a new branch 'dev'
2020-06-22T13:06:48.0728086Z Branch 'dev' set up to track remote branch 'dev' from 'origin'.
2020-06-22T13:06:48.0728191Z 
2020-06-22T13:06:48.0728784Z 🔎 Evaluating results against base runs...
2020-06-22T13:06:48.0803675Z panic: unexpected end of JSON input
2020-06-22T13:06:48.0803787Z 
2020-06-22T13:06:48.0803903Z goroutine 1 [running]:
2020-06-22T13:06:48.0804030Z main.load(0xc0000bfde8, 0x2, 0x2, 0x0, 0x0, 0x0)
2020-06-22T13:06:48.0804461Z 	/tmp/build/io.go:143 +0x275
2020-06-22T13:06:48.0804566Z main.main()
2020-06-22T13:06:48.0804713Z 	/tmp/build/main.go:162 +0xb4d

failing benchmarks ignored

I've just observed this locally:

$ go test -run=- -bench=BenchmarkMonster ./pkg/rego_vm 
--- FAIL: BenchmarkMonster
    monster_test.go:125: open monster.rego: no such file or directory
FAIL
exit status 1
FAIL    github.com/some/monster    0.230s
FAIL
$ echo $?
1
$ go test -run=- -bench=BenchmarkMonster ./pkg/rego_vm | gobenchdata --json test.json -v version
detected 0 benchmark suites
successfully output results as json to 'test.json'

So one obvious workaround is to add a test stage that'll run the benchmarks before the gobenchdata run. It would mean we run the benchmarks twice.

An alternative might be to add pipefail to the set line in entrypoint.sh?

Another alternative would be to have the action consume a file, so we could run the benchmarks once manually, save the result, and feed the result into gobenchdata.

What do you think?
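
A rough Go sketch of the file-based alternative (this is not how the action currently works; the paths, flags passed to go test, and the version string are illustrative): run the benchmarks once, stop if go test exits non-zero, and only then feed the saved output to gobenchdata.

// Rough sketch of the "consume a file" alternative (not how the action
// currently works): run the benchmarks once, fail if go test fails, and only
// then pass the saved output to gobenchdata. Paths and flags are illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := os.Create("bench.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	bench := exec.Command("go", "test", "-run=-", "-bench=.", "-benchmem", "./...")
	bench.Stdout = out
	bench.Stderr = os.Stderr
	benchErr := bench.Run()
	out.Close()
	if benchErr != nil {
		fmt.Fprintln(os.Stderr, "benchmarks failed, not publishing:", benchErr)
		os.Exit(1)
	}

	in, err := os.Open("bench.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer in.Close()

	// Flags match the gobenchdata usage shown elsewhere in these issues.
	publish := exec.Command("gobenchdata", "--json", "benchmarks.json", "-v", "local")
	publish.Stdin = in
	publish.Stdout = os.Stdout
	publish.Stderr = os.Stderr
	if err := publish.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}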

Clicking on chart doesn't open commit version

I've updated my template to version 1.1.0 by re-running generate,

gobenchdata web generate .

However, clicking points on the graph no longer launches the repo URL.

Running serve exhibits the same behavior,

gobenchdata web serve

Are changes to the config required to upgrade the web template?

Regression checks

Allow the action to check for regressions and fail the build if there are any

Skip stdout which is not benchmark result, or provide better error message

After enabling gobenchdata in a GitHub action with the out-of-the-box example yml file, the benchmark failed with the following error:

📊 Running benchmarks...
panic: cpu: Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz: could not parse run: strconv.Atoi: parsing "": invalid syntax (line: cpu: Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz)

goroutine 1 [running]:
main.main()
	/tmp/build/main.go:230 +0x19b3

For a minute I was wondering if this was the repo under test invoking panic, caused by an error in my GitHub Action yml file, or a bug in gobenchdata. I can see now this error is returned by this line of code:

return nil, fmt.Errorf("%s: could not parse run: %w (line: %s)", bench.Name, err, line)

Instead of the following code to parse the line:

split := strings.Split(line, "\t")

How about using a regex that matches the expected benchmark output? Maybe skip lines that don't match the expected output? Define a config parameter in the yml file to decide whether to skip or not? Extra output may be generated by go test:

#go test -bench=.
goos: linux
goarch: amd64
pkg: github.com/CiscoM31/godata
cpu: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz

The cpu line above causes the panic
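
A minimal sketch of the regex-based filtering suggested above (this is not gobenchdata's parser, and the pattern is illustrative rather than exhaustive): match only completed benchmark result lines and skip everything else, such as the goos/goarch/pkg/cpu headers.

// Rough sketch of the regex idea suggested above; not gobenchdata's parser,
// and the pattern below is illustrative rather than exhaustive.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches completed benchmark result lines such as:
//   BenchmarkFoo-8   1000000   1045 ns/op   112 B/op   3 allocs/op
var benchLine = regexp.MustCompile(`^Benchmark\S+\s+\d+\s+[\d.]+ ns/op`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if !benchLine.MatchString(line) {
			// goos/goarch/pkg/cpu headers and other noise: skip instead of panicking.
			continue
		}
		fmt.Println("benchmark line:", line)
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}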

[Checks] Mem.BytesPerOp not supported

The following check tests for a regression in Mem.BytesPerOp. However, the result is always 0.

checks:
- name: BytesPerOp
  package: .
  benchmarks: [ '.' ]
  diff: (current.Mem.BytesPerOp - base.Mem.BytesPerOp) / base.Mem.BytesPerOp * 100
  thresholds:
    max: 25

Results:

{
  "Status": "pass",
  "Checks": {
    "BytesPerOp": {
      "Status": "pass",
      "Diffs": [
        {
          "Status": "pass",
          "Package": "...",
          "Benchmark": "...",
          "Value": 0
        }
      ],
      "Thresholds": {
        "Min": null,
        "Max": 25
      }
    }
  }
}

Sample input:

[
  {
    "Date": 1651034107,
    "Suites": [
      {
        "Goos": "linux",
        "Goarch": "amd64",
        "Pkg": "...",
        "Benchmarks": [
          {
            "Name": "...",
            "Runs": 5,
            "NsPerOp": 111754445,
            "Mem": {
              "BytesPerOp": 14284630,
              "AllocsPerOp": 503144,
              "MBPerSec": 0
            }
          }
        ]
      }
    ]
  }
]

usage with private repo?

Hello! This is a fine tool, and I've enjoyed using it so far, thank you 🎈

What I didn't get to work is the publishing of the processed benchmark results into another git branch of a private github repo. The workflow run ends in

Cloning into '.'...
remote: Invalid username or password.
fatal: Authentication failed for 'https://github.com/cant/tell.git/'
exit status 1

My workflow definition is this:

      - name: gobenchdata publish
        run: go run go.bobheadxi.dev/gobenchdata@v1 action
        env:
          INPUT_GO_TEST_FLAGS: -timeout=120m
          INPUT_PUBLISH: true
          INPUT_PUBLISH_BRANCH: benchmarks
          GITHUB_TOKEN: ${{ secrets.ACCESS_TOKEN }}
        timeout-minutes: 120

I also got this in the yaml workflow file:

permissions:
  # contents permission to update benchmark contents in 'benchmarks' branch
  contents: write

and I'm not entirely sure if "write" implies "read" or if the problem is something else.

FWIW looking at entrypoint.sh, I've found that it's making a call like

git clone https://${GITHUB_ACTOR}:${GITHUB_TOKEN}@github.com/${INPUT_PUBLISH_REPO}.git .

which doesn't seem entirely correct to me -- the GITHUB_ACTOR is the committer of the last commit, but the GITHUB_TOKEN doesn't belong to them. Is that a problem?

Thanks!

Monorepo Support

SUBDIRECTORY is useful when we have a project in another directory. However, it doesn't work with multiple directories. It'd be nice to add something like SUBDIRECTORIES to use this tool in monorepo projects.

Example Layout: https://github.com/gofiber/template
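
A rough sketch of what something like SUBDIRECTORIES could do (purely hypothetical; the variable name and behaviour are not implemented in gobenchdata): run the benchmark step once per listed directory.

// Purely hypothetical sketch of a SUBDIRECTORIES-style behaviour: run the
// go test benchmark step once in each listed directory of a monorepo.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// e.g. SUBDIRECTORIES="ace,amber,django" in a gofiber/template-style layout
	dirs := strings.Split(os.Getenv("SUBDIRECTORIES"), ",")
	for _, dir := range dirs {
		dir = strings.TrimSpace(dir)
		if dir == "" {
			continue
		}
		cmd := exec.Command("go", "test", "-run=-", "-bench=.", "-benchmem", "./...")
		cmd.Dir = dir
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		fmt.Println("running benchmarks in", dir)
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "benchmarks failed in", dir, ":", err)
			os.Exit(1)
		}
	}
}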

x-axis labels missing

The x-axis labels appear to be missing. I'm wondering if it's possible to enable the commit hash to appear as the x-axis labels?

before (screenshot)

after (screenshot)

panic: could not parse runs: strconv.Atoi: parsing "2020-04-25T22:43:13.377Z": invalid syntax

Version Information

go: 1.14.2
gobenchdata: v0.5.1

Bug Description

I'm using gobenchdata in a private GitHub repository, and as far as I'm aware, GitHub Actions have problems integrating with private GitHub repositories. As such, my process for using gobenchdata is a bit different than normal, and consists of the following steps during release builds:

  • Manually run benchmark with go test -bench=. --benchtime 1000x ./benchmarks/releasemark/... -benchmem > bench.txt
  • Feed that into gobenchdata with cat bench.txt | gobenchdata --json release/tex-gobenchdata-$(git describe --tags).json
  • Upload the generated json along with the binaries built during release

Up until today this has been working without issues beyond the manual labour; however, today it failed with the error:

panic: could not parse runs: strconv.Atoi: parsing "2020-04-25T22:43:13.377Z": invalid syntax

Log 1

go test -bench=. --benchtime 1000x ./benchmarks/releasemark/... -benchmem > bench.txt
cat bench.txt | gobenchdata --json release/tex-gobenchdata-`git describe --tags`.json
panic: BenchmarkTemporalX_DownloadMulti-2: could not parse runs: strconv.Atoi: parsing "2020-04-25T22:43:13.377Z": invalid syntax
goroutine 1 [running]:
main.main()
	/home/travis/gopath/pkg/mod/go.bobheadxi.dev/[email protected]/main.go:66 +0xa37
Makefile:152: recipe for target 'release-bench' failed
make: *** [release-bench] Error 2
The command "if [ "$TRAVIS_OS_NAME" = "linux" ]; then make release-bench ; fi" failed and exited with 2 during .
Your build has been stopped.

Log 2

I tried downgrading to v0.5.0 and got the same error:

go test -bench=. --benchtime 1000x ./benchmarks/releasemark/... -benchmem > bench.txt
cat bench.txt | gobenchdata --json release/tex-gobenchdata-`git describe --tags`.json
panic: BenchmarkTemporalX_DownloadMulti-2: could not parse runs: strconv.Atoi: parsing "2020-04-25T23:45:42.353Z": invalid syntax
goroutine 1 [running]:
main.main()
	/home/travis/gopath/pkg/mod/go.bobheadxi.dev/[email protected]/main.go:66 +0xa37
Makefile:152: recipe for target 'release-bench' failed
make: *** [release-bench] Error 2
The command "if [ "$TRAVIS_OS_NAME" = "linux" ]; then make release-bench ; fi" failed and exited with 2 during .
