
AppleAccelerate.jl's Introduction

AppleAccelerate.jl

This package provides a Julia interface to parts of the macOS Accelerate framework. At the moment, it provides:

  1. Access to Accelerate BLAS and LAPACK using the libblastrampoline framework, and
  2. An interface to the array-oriented functions, which provide vectorised forms of many common mathematical operations.

In some cases performance is significantly better than the standard libm functions, though there can be some loss of accuracy.

OS Requirements

macOS 13.4 is required to run AppleAccelerate.jl, in particular for the libblastrampoline forwarding. On older macOS versions, your mileage may vary.

Supported Functions

The following functions are supported:

  • Rounding: ceil, floor, trunc, round
  • Logarithmic: exp, exp2, expm1, log, log1p, log2, log10
  • Trigonometric: sin, sinpi, cos, cospi, tan, tanpi, asin, acos, atan, atan2, cis
  • Hyperbolic: sinh, cosh, tanh, asinh, acosh, atanh
  • Convolution: conv, xcorr
  • Other: sqrt, copysign, exponent, abs, rem

Note that there are some slight differences from the behaviour in Base (a brief illustration follows this list):

  • No DomainErrors are raised; instead, NaN values are returned.
  • round breaks ties (values with a fractional part of 0.5) by choosing the nearest even value.
  • exponent returns a floating point value of the same type (instead of an Int).
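
For example, an illustrative sketch of the first point (the exact printed value is not guaranteed):

using AppleAccelerate

# Base.sqrt(-1.0) would throw a DomainError; the Accelerate array version
# returns NaN instead, as noted above.
AppleAccelerate.sqrt([-1.0])    # expected: 1-element Vector{Float64} containing NaN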

Some additional functions are also available (a short usage sketch follows this list):

  • rec(x): reciprocal (1.0 ./ x)
  • rsqrt(x): reciprocal square-root (1.0 ./ sqrt(x))
  • pow(x,y): power (x .^ y in Base)
  • fdiv(x,y): divide (x ./ y in Base)
  • sincos(x): returns (sin(x), cos(x))
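
A short usage sketch based on the descriptions above (function names as listed; outputs omitted):

using AppleAccelerate

x = [1.0, 4.0, 9.0]
y = [2.0, 2.0, 2.0]

AppleAccelerate.rec(x)       # elementwise reciprocal, 1.0 ./ x
AppleAccelerate.rsqrt(x)     # elementwise 1.0 ./ sqrt.(x)
AppleAccelerate.pow(x, y)    # elementwise x .^ y
AppleAccelerate.fdiv(x, y)   # elementwise x ./ y
AppleAccelerate.sincos(x)    # the tuple (sin.(x), cos.(x))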

Example

To use the Accelerate BLAS and LAPACK, simply load the package:

julia> peakflops(4096)
3.6024175318268243e11

julia> using AppleAccelerate

julia> peakflops(4096)
5.832806459434183e11

To avoid naming conflicts with Base, methods are not exported and so need to be accessed via the namespace:

using AppleAccelerate
using BenchmarkTools
X = randn(1_000_000);
@btime exp.($X); # standard libm function
@btime AppleAccelerate.exp($X); # Accelerate array-oriented function

The @replaceBase macro replaces the relevant Base methods directly:

@btime sin.($X); # standard libm function
AppleAccelerate.@replaceBase sin cos tan
@btime sin($X);  # will use AppleAccelerate methods for vectorised operations

X = randn(1_000_000);
Y = fill(3.0, 1_000_000);
@btime $X .^ $Y;
AppleAccelerate.@replaceBase(^, /) # use parenthesised form for infix ops
@btime $X ^ $Y;

Output arrays can be specified as first arguments of the functions suffixed with !:

out = zeros(Float64, 1_000_000)
@btime AppleAccelerate.exp!($out, $X)

Warning: no dimension checks are performed on the ! functions, so ensure your input and output arrays are of the same length.

Operations can be performed in-place by specifying the output array as the input array (e.g. AppleAccelerate.exp!(X,X)). This is not mentioned in the Accelerate docs, but this comment by one of the authors indicates that it is safe.
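
For example, to overwrite X with its exponentials without allocating a second array:

X = randn(1_000_000)
AppleAccelerate.exp!(X, X)   # X now holds exp of its previous contents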

AppleAccelerate.jl's Issues

Cholesky performance could (perhaps) be better

I'm using an M2 Max. On matmul, Accelerate gives good performance compared to OpenBLAS:

julia> peakflops(4000)
2.423671700050223e11

julia> using AppleAccelerate

julia> peakflops(4000)
4.325152941800755e11

On Cholesky decomposition, Accelerate has roughly the same performance as OpenBLAS, perhaps because OpenBLAS has a multi-threaded Cholesky while Apple is probably only getting multi-threading from Level 3 BLAS calls inside the stock LAPACK Cholesky.

julia> a = rand(12000,12000); ata = a'a;

julia> using LinearAlgebra

julia> BLAS.set_num_threads(8)

julia> @time cholesky(ata);
  2.618964 seconds (3 allocations: 1.073 GiB, 0.53% gc time)

julia> using AppleAccelerate

julia> @time cholesky(ata);
  2.674928 seconds (3 allocations: 1.073 GiB, 0.11% gc time)

no library present

I am not sure if this is a newbie error, but my files look like this:

shell> ls /System/Library/Frameworks/Accelerate.framework/
Frameworks	Resources	Versions

This may be the reason why the following occurs; I am not sure why AppleAccelerate does not error upon loading.

julia> @btime AppleAccelerate.exp($X);
ERROR: could not load symbol "vvexp":

Some functions broke on the latest 13b7

Hi! Maybe that's on my end, but...

After updating to the latest beta (beta 7) of macOS 13, the package throws an error like could not load symbol "vvsin".

Minimal working example:

using AppleAccelerate
x = rand(10, 1);
AppleAccelerate.sin(x)

it throws

ERROR: could not load symbol "vvsin":
dlsym(0x39e90f318, vvsin): invalid handle

Unsatisfiable requirements

I was trying to add this package on Julia 1.1.1, and got the following error message:

(v1.1) pkg> add AppleAccelerate
 Resolving package versions...
ERROR: Unsatisfiable requirements detected for package AppleAccelerate [13e28ba4]:
 AppleAccelerate [13e28ba4] log:
 ├─possible versions are: [0.1.0-0.1.1, 0.2.0-0.2.1] or uninstalled
 ├─restricted to versions * by an explicit requirement, leaving only versions [0.1.0-0.1.1, 0.2.0-0.2.1]
 └─restricted by julia compatibility requirements to versions: uninstalled — no versions left

Numerical uncertainty causing Travis CI to fail

When cloning the repo onto a new machine, I noticed the failing Travis CI badge and decided to take a look. As you may know, the failure is a test not generating the correct expected output; in this case, the sin() function is causing Travis CI to fail. I diff'ed the expected and computed arrays and the differences are of the order of 1e-16 and are "randomly" distributed; this would suggest that this build error is just floating-point inaccuracies.

Considering that roughly() calls Base.isapprox(), are you aware of a method that we could implement to both fix this test error and prevent similar test errors from arising? I'd be happy to make a PR with a solution, but beyond writing a custom floating-point test suite with a slightly larger acceptable error, I don't know a method of fixing this.
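
One possible approach (a sketch using the Test standard library with an explicit tolerance, rather than the package's existing roughly()-based checks; the tolerance value is only a suggestion):

using Test
using AppleAccelerate

X = randn(1_000)
expected = sin.(X)                  # reference values from Base
computed = AppleAccelerate.sin(X)   # values from Accelerate

# Allow last-bit (~1e-16) differences instead of requiring exact equality.
@test all(isapprox.(computed, expected; atol=1e-14))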

Failed to precompile AppleAccelerate

Hello

I did a Pkg.add of the package in Julia 0.6.
It gives me the following errors:

julia> using AppleAccelerate
INFO: Precompiling module AppleAccelerate.

ERROR: LoadError: LoadError: UndefVarError: symbol not defined
Stacktrace:
 [1] macro expansion at /Users/david/.julia/v0.6/AppleAccelerate/src/Array.jl:10 [inlined]
 [2] anonymous at ./<missing>:?
 [3] include_from_node1(::String) at ./loading.jl:569
 [4] include(::String) at ./sysimg.jl:14
 [5] include_from_node1(::String) at ./loading.jl:569
 [6] include(::String) at ./sysimg.jl:14
 [7] anonymous at ./<missing>:2
while loading /Users/david/.julia/v0.6/AppleAccelerate/src/Array.jl, in expression starting on line 3
while loading /Users/david/.julia/v0.6/AppleAccelerate/src/AppleAccelerate.jl, in expression starting on line 10
ERROR: Failed to precompile AppleAccelerate to /Users/david/.julia/lib/v0.6/AppleAccelerate.ji.
Stacktrace:
 [1] compilecache(::String) at ./loading.jl:703
 [2] _require(::Symbol) at ./loading.jl:4

Transitioning to a multi-file package

Do you have any thoughts on transitioning to a multi-file module instead of keeping everything in AppleAccelerate.jl?

With the addition of conv/xcorr, and with the upcoming completion of the FFT and biquad/FIR functions that I have been working on, the file is getting excessively large. It might be worth breaking it up now, as opposed to much later.

My initial impression would be to do something like:

  1. Trigonometry.jl
  2. Math.jl - log/exp other future additions. (Would this be better as Array.jl?)
  3. FFT.jl - with the huge number of FFT variants that Accelerate provides (1D and 2D), these probably deserve their own file.
  4. Filtering.jl - conv/xcorr, biquad/FIR (There may be a better name for this file....)
  5. Util.jl - replaceBase, round, trunc

Any thoughts on the above decision, and/or the distribution of functions among the files? FFT and Filtering could also be combined into a DSP.jl... that might be the cleanest way to go right now.

Improved @replaceBase method

As the number of functions that we support continues to grow, it seems like the current method of manually updating the hard-coded values in @replaceBase is not going to scale well, and is somewhat error prone; I've forgotten to update it twice when adding new functions.

I wanted to start a discussion about finding a more scalable/cleaner method of collecting the functions that we wish to support in @replaceBase. My first thought is a macro @register that we could wrap around any function definitions that we wish to make "replaceable", i.e.

@register .+ function vadd(X::Vector{T}, Y::Vector{T})
    ## definition of function
end

This would store the Base function .+ and the Accelerate function vadd in a module-wide dictionary indexed by the Base function name. We could then use this in @replaceBase instead of the hardcoded values.

Does anyone see any issues with the above methodology, or have an alternative suggestion?
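
For concreteness, a minimal sketch of what such a @register macro might look like (hypothetical names, simplified to a plain function name rather than the .+ operator; not the package's actual implementation):

using AppleAccelerate

const REGISTERED = Dict{Symbol,Symbol}()   # Base function name => Accelerate function name

macro register(basename, funcdef)
    # `funcdef` is expected to be a `function accel_name(args...) ... end` definition
    accel_name = funcdef.args[1].args[1]
    quote
        REGISTERED[$(QuoteNode(basename))] = $(QuoteNode(accel_name))
        $(esc(funcdef))
    end
end

# Usage: record that `vsin` is the replacement for `Base.sin`
@register sin function vsin(X::Vector{Float64})
    AppleAccelerate.sin(X)
end

The @replaceBase macro could then iterate over REGISTERED instead of its hard-coded list.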

Accelerate Functions

Below is a list of Apple Accelerate functions that should be implemented in AppleAccelerate.jl. The list is not exhaustive but covers most of the functionality of Accelerate. If you are reading this and are interested in contributing to AppleAccelerate.jl, find a function that you need in your own code, or that you want to contribute, and make a PR. Each function has a simple name first, followed by the corresponding Accelerate name; please use judgement when naming functions, as this list is not 100% accurate. A sketch of what a hand-written wrapper might look like follows the list.

vDSP:

  • abs - vDSP_vabs (Float32, Float64, Int32, ComplexFloat32, ComplexFloat64)
  • nabs - vDSP_vnabs (Float32, Float64)
  • neg - vDSP_vneg (Float32, Float64, ComplexFloat32, ComplexFloat64)
  • ramp - vDSP_vramp (Float32, Float64)
  • rampmul - vDSP_vrampmul (Float32, Float64)
  • rampmul2 - vDSP_vrampmul2 (Float32, Float64)
  • sqr - vDSP_vsq (Float32, Float64, ComplexFloat32, ComplexFloat64)
  • ssqr - vDSP_vssq (Float32, Float64)
  • sqradd - vDSP_zvmagsD (ComplexFloat32, ComplexFloat64)
  • normalize - vDSP_normalize (Float32, Float64)
  • polar - vDSP_polar (ComplexFloat32, ComplexFloat64)
  • rect - vDSP_rect (ComplexFloat32, ComplexFloat64)
  • db - vDSP_vdbcon (ComplexFloat32, ComplexFloat64)
  • frac - vDSP_vfrac (Float32, Float64)
  • conj - vDSP_zvconj (ComplexFloat32, ComplexFloat64)
  • phase - vDSP_zvphase (ComplexFloat32, ComplexFloat64)
  • clip - vDSP_vclip (Float32, Float64)
  • iclip - vDSP_viclip (Float32, Float64)
  • thresh - vDSP_vthr (Float32, Float64)
  • threshzero - vDSP_vthres (Float32, Float64)
  • compress - vDSP_vcmprs (Float32, Float64)
  • reverse - vDSP_vrvrs (Float32, Float64)
  • copy - vDSP_zvmov (ComplexFloat32, ComplexFloat64)
  • zcross - vDSP_nzcros (Float32, Float64)
  • avg - vDSP_vavlin (Float32, Float64)
  • lerp - vDSP_vlint (Float32, Float64)
  • runsum - vDSP_vrsum (Float32, Float64)
  • simpson - vDSP_vsimps (Float32, Float64)
  • trapezoid - vDSP_vtrapz (Float32, Float64)
  • swsum - vDSP_vswsum (Float32, Float64)
  • swmax - vDSP_vswmax (Float32, Float64)
  • double - vDSP_vspdp (Float32)
  • single - vDSP_vdpsp (Float64)
  • vadd - vDSP_vsadd (Int32, Float32, Float64)
  • vdiv - vDSP_vsdiv (Int32, Float32, Float64)
  • sdiv - vDSP_svdiv (Float32, Float64)
  • vsmul - vDSP_vsmul (Float32, Float64, ComplexFloat32, ComplexFloat64)
  • vsmsa - vDSP_vsmsa (Float32, Float64)
  • vadd - vDSP_vadd (Float32, Float64)
  • vsub - vDSP_vsub (Float32, Float64)
  • vaddsub - vDSP_vaddsub (Float32, Float64)
  • vam - vDSP_vam (Float32, Float64)
  • vsbm - vDSP_vsbm (Float32, Float64)
  • vaam - vDSP_vaam (Float32, Float64)
  • vsbsbm - vDSP_vsbsbm (Float32, Float64)
  • vasbm - vDSP_vasbm (Float32, Float64)
  • vasm - vDSP_vasm (Float32, Float64)
  • vsbsm - vDSP_vsbsm (Float32, Float64)
  • vasm - vDSP_vasm (Float32, Float64)
  • vmsa - vDSP_vsma (Float32, Float64)
  • vdiv - vDSP_vdiv (Float32, Float64, ComplexFloat32, ComplexFloat64)
  • vmul - vDSP_vmul (Float32, Float64, ComplexFloat32, ComplexFloat64)
  • max - vDSP_vmax (Float32, Float64)
  • maxmag - vDSP_vmaxmg (Float32, Float64)
  • min - vDSP_min (Float32, Float64)
  • minmag - vDSP_minmg (Float32, Float64)
  • dist - vDSP_dist (Float32, Float64)
  • distsqr - vDSP_distancesq (Float32, Float64)
  • vlerp - vDSP_vintb (Float32, Float64)
  • vqlerp - vDSP_vqint (Float32, Float64)
  • vpoly - vDSP_vpoly (Float32, Float64)
  • pythagoras - vDSP_vpythg (Float32, Float64)
  • extrema - vDSP_venvlp (Float32, Float64)
  • merge - vDSP_vtmerg (Float32, Float64)
  • spectra - vDSP_zaspec (Float32, Float64)
  • coherence - vDSP_zcoher (Float32, Float64)
  • transferfunction - vDSP_ztrans (Float32, Float64)
  • rfilter - vDSP_deq22 (Float32, Float64)
  • dot - vDSP_dotpr (Float32, Float64, ComplexFloat32, ComplexFloat64)
  • max - vDSP_maxv (Float32, Float64)
  • maxi - vDSP_maxvi (Float32, Float64)
  • min - vDSP_minv (Float32, Float64)
  • mini - vDSP_minvi (Float32, Float64)
  • mean - vDSP_meanv (Float32, Float64)
  • meansqr - vDSP_meansqv (Float32, Float64)
  • sum - vDSP_sve (Float32, Float64)
  • fft - vDSP_fft* (Float32, Float64, ComplexFloat32, ComplexFloat64, 1D, 2D, in-place, out-of-place)
  • dft - vDSP_DFT (Float32, Float64)
  • FIR - vDSP_desamp (Float32, Float64, ComplexFloat32, ComplexFloat64)
  • conv - vDSP_zconv (ComplexFloat32, ComplexFloat64)
  • wiener - vDSP_wiener (Float32, Float64)
  • blackman - vDSP_blkman_window (Float32, Float64)
  • hamming - vDSP_hamm_window (Float32, Float64)
  • hann - vDSP_hann_window (Float32, Float64)
  • biquadm - vDSP_biquadm* (Float32, Float64)
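
As a starting point for contributions, a hypothetical hand-written wrapper for one of these (vDSP_vabsD, the double-precision variant of vDSP_vabs) might look roughly like the following, assuming the standard C signature void vDSP_vabsD(const double *A, vDSP_Stride IA, double *C, vDSP_Stride IC, vDSP_Length N); the package's actual wrappers may differ:

const libacc = "/System/Library/Frameworks/Accelerate.framework/Accelerate"

# out[i] = |X[i]| over contiguous vectors (unit strides)
function vabs!(out::Vector{Float64}, X::Vector{Float64})
    length(out) == length(X) || throw(DimensionMismatch("input and output lengths differ"))
    ccall((:vDSP_vabsD, libacc), Cvoid,
          (Ptr{Float64}, Clong, Ptr{Float64}, Clong, Culong),
          X, 1, out, 1, length(X))
    return out
end

vabs(X::Vector{Float64}) = vabs!(similar(X), X)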

Can't install on Big Sur

Hi,
The AppleAccelerate package can't be used in Julia on macOS Big Sur:

julia> using AppleAccelerate
ERROR: LoadError: Accelerate framework not found at /System/Library/Frameworks/Accelerate.framework/Accelerate

I only found aliases named Frameworks and Resources under /System/Library/Frameworks/Accelerate.framework.

Thanks for any advice!

Warning: Method definition overwritten on the same line

This package gives warnings during precompilation on Julia nightly:

julia> using AppleAccelerate
[ Info: Precompiling AppleAccelerate [13e28ba4-7ad8-5781-acae-3021b1ed3924]
WARNING: Method definition blackman(Int64) in module AppleAccelerate at /Users/me/.julia/packages/AppleAccelerate/lc8gu/src/DSP.jl:227 overwritten on the same line (check for duplicate calls to `include`).
  ** incremental compilation may be fatally broken for this module **

WARNING: Method definition blackman(Int64, DataType) in module AppleAccelerate at /Users/me/.julia/packages/AppleAccelerate/lc8gu/src/DSP.jl:227 overwritten on the same line (check for duplicate calls to `include`).
  ** incremental compilation may be fatally broken for this module **

[...and 6 more ...]

┌ Warning: The call to compilecache failed to create a usable precompiled cache file for AppleAccelerate [13e28ba4-7ad8-5781-acae-3021b1ed3924]
│   exception = Required dependency Statistics [10745b16-79ce-11e8-11f9-7d13ad32a3b2] failed to load from a cache file.
└ @ Base loading.jl:1043

julia> VERSION
v"1.6.0-DEV.707"

It looks like this is because some of the method definitions inside a loop don't depend on the loop variables (T, suff):

https://github.com/JuliaMath/AppleAccelerate.jl/blob/c1fc438b40a87682c8194704f1246a928ea7445e/src/DSP.jl#L219-L230
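
Illustratively (hypothetical function names, not the package's actual code), the problematic pattern is:

for (T, suff) in ((Float64, "D"), (Float32, ""))
    @eval begin
        # Uses the loop variable T, so each iteration defines a distinct method:
        accel_eltype(x::Vector{$T}) = $T

        # Uses neither T nor suff, so every iteration re-emits the same method on the
        # same source line, overwriting the previous definition; this is what triggers
        # the precompilation warning:
        blackman_window(n::Int) = n
    end
end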

Neural Net functions

I see here [https://developer.apple.com/documentation/accelerate/1642537-bnnsfiltercreateconvolutionlayer?language=objc#declarations] that Accelerate includes some convolution routines for neural networks:

BNNSFilterCreateConvolutionLayer

They don't seem to be in AppleAccelerate at the moment. Any plans to include them?

Failed Precompilation after macOS Ventura Update - Could Not Load Symbol "vvsin"

Since updating to macOS Ventura I get this issue when calling AppleAccelerate functions:

ERROR: could not load symbol "vvsin":
dlsym(0x3a2d8ff48, vvsin): invalid handle
Stacktrace:
[1] dlsym(hnd::Ptr{Nothing}, s::String; throw_error::Bool)
@ Base.Libc.Libdl ./libdl.jl:59
[2] dlsym
@ ./libdl.jl:56 [inlined]
[3] get_fptr
@ ~/.julia/packages/AppleAccelerate/UAidl/src/AppleAccelerate.jl:13 [inlined]
[4] sin!(out::Matrix{Float64}, X::Matrix{Float64})
@ AppleAccelerate ~/.julia/packages/AppleAccelerate/UAidl/src/Array.jl:30

Split LinearAlgebra capabilities from other Math capabilities

I think it would make sense to separate this into two packages: move the BLAS/LAPACK functionality into a separate package in JuliaLinearAlgebra called AppleAccelerateLinalg, and keep this one as is (but it can depend on AppleAccelerateLinalg).

Otherwise, people who want the BLAS capabilities also have to opt into all the other things this package provides.

cc @staticfloat

AppleAccelerate.jl severely decreases the performance of SVD

Hi!

I noticed that after loading AppleAccelerate.jl, SVD performance is severely degraded:

julia> using BenchmarkTools, LinearAlgebra

julia> A = rand(5, 5)
5×5 Matrix{Float64}:
 0.210478  0.485058    0.893071  0.0038541  0.242167
 0.618708  0.880626    0.35424   0.572958   0.721676
 0.539943  0.00980111  0.232398  0.220709   0.196985
 0.19735   0.441403    0.696092  0.527777   0.342491
 0.658019  0.397196    0.212173  0.518869   0.521641

julia> @btime svd($A)
  2.958 μs (7 allocations: 4.05 KiB)
SVD{Float64, Float64, Matrix{Float64}, Vector{Float64}}
U factor:
5×5 Matrix{Float64}:
 -0.379855  -0.756404   0.155695    0.472175    0.190718
 -0.633198   0.298961  -0.517143    0.298206   -0.39156
 -0.233213   0.192713   0.808918    0.056695   -0.500909
 -0.439546  -0.33197   -0.0529993  -0.826642   -0.102225
 -0.455171   0.437188   0.226196   -0.0396693   0.74091
singular values:
5-element Vector{Float64}:
 2.2379393452621
 0.8199698332648618
 0.4548207142220164
 0.32431597799931705
 0.019407475774695128
Vt factor:
5×5 Matrix{Float64}:
 -0.439643  -0.499995   -0.455901   -0.394957   -0.439184
  0.42926   -0.0910059  -0.808755    0.320191    0.225492
  0.633132  -0.671715    0.340672   -0.0610589  -0.1678
  0.386216   0.343976   -0.133626   -0.83768     0.113812
 -0.269129  -0.414974    0.0644648  -0.189869    0.845671

julia> using AppleAccelerate

julia> @btime svd($A)
  5.639 μs (7 allocations: 4.05 KiB)
SVD{Float64, Float64, Matrix{Float64}, Vector{Float64}}
U factor:
5×5 Matrix{Float64}:
 -0.379855  -0.756404   0.155695    0.472175    0.190718
 -0.633198   0.298961  -0.517143    0.298206   -0.39156
 -0.233213   0.192713   0.808918    0.056695   -0.500909
 -0.439546  -0.33197   -0.0529993  -0.826642   -0.102225
 -0.455171   0.437188   0.226196   -0.0396693   0.74091
singular values:
5-element Vector{Float64}:
 2.2379393452620997
 0.8199698332648626
 0.45482071422201675
 0.32431597799931694
 0.01940747577469517
Vt factor:
5×5 Matrix{Float64}:
 -0.439643  -0.499995   -0.455901   -0.394957   -0.439184
  0.42926   -0.0910059  -0.808755    0.320191    0.225492
  0.633132  -0.671715    0.340672   -0.0610589  -0.1678
  0.386216   0.343976   -0.133626   -0.83768     0.113812
 -0.269129  -0.414974    0.0644648  -0.189869    0.845671

System information:

  • M1 Ultra.
  • macOS 14.5.
  • Julia 1.10.4

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Move to JuliaLang

Shall we move this to JuliaLang, so that openlibm, Yeppp, VML and AppleAccelerate are all together and we can build a common interface, as you referred to on julia-users?

Perhaps a separate organization?

`peakflops` not showing any difference with loading `AppleAccelerate`

see e.g.

julia> peakflops(4096)
6.095799831981197e10

julia> using AppleAccelerate

julia> peakflops(4096)
6.073612665240428e10

but calling the explicit functions in AppleAccelerate does seem to work and results in a speed difference:

julia> using BenchmarkTools

julia> X = randn(1_000_000);

julia> @btime exp.($X); # standard libm function
  5.061 ms (2 allocations: 7.63 MiB)

julia> @btime AppleAccelerate.exp($X);
  2.003 ms (2 allocations: 7.63 MiB)

System info:

julia> versioninfo()
Julia Version 1.9.3
Commit bed2cd540a1 (2023-08-24 14:43 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: macOS (arm64-apple-darwin22.4.0)
  CPU: 8 × Apple M1
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, apple-m1)
  Threads: 1 on 4 virtual cores

AppleAccelerate v0.4.0

Support for SVD

Hi,

Thanks for putting this together. I cannot see whether SVD is supported in this library - do we know if it is supported, please?

Thanks
