Comments (25)

grovesNL commented on July 17, 2024

Ecosystem: Ecosystems exist around humans and the languages they write. There is no large popular repository of shaders in a non-human-readable format.

There are repositories in several shading languages such as HLSL and GLSL. A non-human-readable language designed to act as a compile target could allow these ecosystems to converge. It could also drive innovation in this area, such as the creation of entirely new shading languages.

Documentation Accuracy: Documentation (à la MDN) will present concepts in the language authors are writing in, not the language the browser ingests. The additional level of indirection between what documentation describes and what the browser is actually doing will make the concepts more difficult to understand for web authors.

How is this different from Metal and D3D12 ingesting AIR and DXIL, respectively? It seems like the same type of indirection.

With a common non-human-readable language (which acts as a compile target), documentation could express concepts through multiple high-level languages and the web author could choose whichever language is familiar.

Security: No non-human-writable shading language meets the same security claims as the above human-writable shading language.

A non-human-readable shading language could meet the same security claims as a human-readable shading language.

Debugability: Source maps in the only non-human-writable language browsers directly ingest (WebAssembly) still are not implemented in any major browser. Debugging a view of source code, without direct access to the original source, is difficult for browsers to perform.

Source maps are intended to provide direct access to the original source code. It doesn't seem anyone has suggested for browsers to attempt decompilation to an unknown source language.

Ownership: The WebGPU Community Group does not own any non-human-writable shading languages. Any time the Community Group desires to make a modification to the language, they would need to ask a separate group, perhaps in a separate standards body.

This issue still exists with Secure HLSL unless the WebGPU Community Group owns HLSL as well. The languages may diverge whenever new concepts are added to HLSL.

Portability: An additional compile step adds one more place where behavior deviances can occur. The same shader run through different compilers may have different characteristics (either semantic differences or performance differences). Directly ingesting the exact program the author writes means that this whole class of variance is eliminated.

The human-readable language will still eventually be compiled to a non-human-readable language regardless (at least for Metal, Vulkan, and D3D12). So instead of the user compiling the human-readable language, the user has to rely on the browser to perform this compile step instead.

Instead, if the web author controls the compile step, then the author doesn't have to be as concerned about various browsers compiling the human-readable language differently.

from gpuweb.

grovesNL commented on July 17, 2024

@grorg

If an application is going to offline compile their shaders, and WebGPU will accept the shaders in the format they compile to, and there is no functional, performance or security difference between formats, then surely it doesn't actually matter what the format is, right?

I think it depends on the capabilities of the offline compilation/optimization. Would there also be HLSL to HLSL offline compilation? It seems unlikely that HLSL could benefit from offline optimizations to the same extent as a lower level language.

can the format be implemented with interoperability, no loss of performance, functionality or security?

With regards to performance, it depends what the runtime performance trade-off will be. More effort will be required to parse, optimize, and compile HLSL than SPIR-V, because some of this work is done upfront for SPIR-V (trading additional initial compilation for a faster runtime). Based on the shader compilation performance issues in WebGL, I feel that runtime compilation should be as fast as possible in WebGPU.
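As a rough illustration of how much of that work is front-loaded in a binary IR: a SPIR-V consumer can reject malformed input after decoding a fixed five-word header, before any real parsing begins. This is a minimal sketch (not from this thread) using the real SPIR-V header layout:

```python
import struct

SPIRV_MAGIC = 0x07230203  # magic number, first word of every SPIR-V module

def read_spirv_header(blob: bytes):
    """Decode the fixed five-word SPIR-V module header.

    Returns ((major, minor), generator, id_bound), or raises ValueError
    if the blob cannot be a SPIR-V module.
    """
    if len(blob) < 20 or len(blob) % 4 != 0:
        raise ValueError("not a SPIR-V module: bad length")
    magic, version, generator, bound, _schema = struct.unpack("<5I", blob[:20])
    if magic != SPIRV_MAGIC:
        raise ValueError("not a SPIR-V module: bad magic number")
    major, minor = (version >> 16) & 0xFF, (version >> 8) & 0xFF
    return (major, minor), generator, bound

# A hand-assembled header for an (empty) SPIR-V 1.0 module:
# magic, version, generator, id bound, schema.
header = struct.pack("<5I", SPIRV_MAGIC, 0x00010000, 0, 1, 0)
print(read_spirv_header(header))  # ((1, 0), 0, 1)
```

Text front ends, by contrast, must lex and parse the whole source before they can reject anything.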

As well, if SPIR-V were used, in the future there could be a very efficient SPIR-V to LLVM transformation to generate AIR and DXIL directly (both are LLVM-based). This could potentially allow skipping or simplifying some LLVM optimization passes for HLSL and MSL; instead SPIR-V could undergo these passes offline. It seems unlikely that HLSL would be able to provide similar benefits in this case.

Another point worth mentioning is the existing and future compiler ecosystem. As of today, it appears that more efforts exist to compile source languages to SPIR-V than to HLSL. I believe this is because:

  • It's more difficult to compile to HLSL than SPIR-V because SPIR-V was designed as a compile target.
  • SPIR-V already has an open specification.

devshgraphicsprogramming commented on July 17, 2024

As the language in which humans write shaders changes, the non-human-representation of the language must also change. Similarly, changing the non-human-representation of the language means that a similar change should be made in the language humans are writing. This makes it difficult to evolve WebGPU.

I don't see SPIR-V having difficulties with evolving.

Ecosystems exist around humans and the languages they write. There is no large popular repository of shaders in a non-human-readable format.

Because you can trivially compile the existing popular repositories to a non-human-readable format; it's the same reason you don't see popular repositories of unlinked GCC 6.3 or MSVC 2015 object files.

I don't understand how this matters to the discussion. As long as repositories exist in HLSL people can use that and compile down to SPIR-V.

@Kangz backs me up.

No non-human-writable shading language meets the same security claims as the above human-writable shading language.

Elaborate? Because w.r.t. SPIR-V it may be the complete opposite.

"The above human-writable shading language": you are comparing the security of existing IR shading languages to the security of a not-yet-created, not-yet-specified, not-yet-implemented, not-yet-tested shading language.
That is not an apples-to-apples comparison.

Yes, could. None currently do.

I'm just going to quote @grorg to make my point stick more.

Source maps in the only non-human-writable language browsers directly ingest (WebAssembly) still are not implemented in any major browser. Debugging a view of source code, without direct access to the original source, is difficult for browsers to perform.

For debugging graphics and shaders you need a proper tool like AMD's GPU profilers, PIX, Nsight, or at the very least RenderDoc... you can't actually step through shaders unless and until you CPU-emulate them.

These are the tools that we graphics developers use and are used to, and the working group had better look into integrating with them, as we certainly won't be debugging our shaders and render passes in-browser.

So while source maps would be useful for tools like RenderDoc if WebGPU were supported there like any other graphics API, being forced to debug outside of these tools would make us hate WebGPU.

The WebGPU Community Group does not own any non-human-writable shading languages. Any time the Community Group desires to make a modification to the language, they would need to ask a separate group, perhaps in a separate standards body. Such a process would make it more difficult and slower to modify. Not all of the members of the WebGPU Community Group are present in other standards groups.

I've heard membership in Khronos and getting extensions rubber-stamped is pretty straightforward; even a GSoC student got it, along with a VENDOR_ID for his CPU Vulkan implementation project, Kazan.

This issue still exists with Secure HLSL unless the WebGPU Community Group owns HLSL as well. The languages may diverge whenever new concepts are added to HLSL.

Ditto.

Outside of the core structure everything can be done in this group: extending the functionality of SPIR-V will be possible with "extended instruction sets" like for the GLSL builtins and adding constraints on SPIR-V will be possible as part of the execution environment.

This is a very good point. Also, in Vulkan/SPIR-V it is easier (compared to OpenGL) to get extensions approved that change existing functionality, as opposed to only adding functionality.

This is because extensions have to be queried and explicitly enabled at runtime (Vulkan) or at shader compilation (SPIR-V), as opposed to OpenGL land, where if an implementation lists an extension at runtime, bam, you're using it, whether you like it or not.
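That explicit opt-in is visible in the SPIR-V module itself: a module declares every capability it uses up front with OpCapability instructions, so a consumer can reject anything it doesn't support before doing any further work. A minimal Python sketch of such a scan; the module bytes are hand-assembled for illustration:

```python
import struct

OP_CAPABILITY = 17  # OpCapability opcode; these come first in a module
CAP_SHADER = 1      # the "Shader" capability enum value

def declared_capabilities(blob: bytes):
    """List the capability enums a SPIR-V module declares up front."""
    words = struct.unpack(f"<{len(blob) // 4}I", blob)
    i, caps = 5, []                        # skip the five-word header
    while i < len(words):
        word_count, opcode = words[i] >> 16, words[i] & 0xFFFF
        if opcode != OP_CAPABILITY:        # capabilities precede everything else
            break
        caps.append(words[i + 1])
        i += word_count
    return caps

# Hand-assembled module: header plus one "OpCapability Shader" (2 words).
module = struct.pack("<7I", 0x07230203, 0x00010000, 0, 1, 0,
                     (2 << 16) | OP_CAPABILITY, CAP_SHADER)
print(declared_capabilities(module))  # [1]
```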

The Working Group would own Secure HLSL

And then it ceases to be HLSL, so it won't really be compatible with HLSL, and you lose the whole "let's stay compatible with the copious amounts of shaders already written in HLSL" benefit.

An additional compile step adds one more place where behavior deviances can occur. The same shader run through different compilers may have different characteristics (either semantic differences or performance differences). Directly ingesting the exact program the author writes means that this whole class of variance is eliminated.

I don't see how a human-readable language, needing a parse and an AST followed by a round of cross-compilation with different cross-compilers depending on the backend platform, would have less deviance than a native IR being fed to the driver or to an IR-to-IR (SPIR-V to DXIL/AIR) compiler.

Your Secure/Web HLSL to SPIR-V/DXIL/AIR compiler would most probably end up being a {S/W}HLSL to "some IR" compiler, followed by a "some IR" to SPIR-V/DXIL/AIR compiler.

I see far more deviance in this path.

Why require all users to use heavier JIT parsing/compilation during runtime if they would prefer to pay the cost of compilation and optimization upfront? Web authors are already using solutions such as Prepack today to move some of the runtime cost to initial compilation cost.

Agreed.

This is extremely different. Metal's AIR is not an API. Apple does not publish a specification for it. The API for Metal is Metal Shading Language, and is what all the documentation is written in. Yes, Metal does allow direct ingestion of AIR, but that's tied to the toolchain. Also, it allows direct ingestion of Metal Shading Language code.

Maybe it's time for Apple to play nice in the computer graphics domain? I would be happy even if they made their own closed extended-SPIR-V-to-AIR compiler, although some sort of specification for AIR would be a good call on their part.

Finally I wholeheartedly agree with everything @grovesNL wrote.

kainino0x commented on July 17, 2024

I haven't read this thread in depth yet, but I need to make an important point before we go off the rails on this topic for the nth time.

Yes, could. None currently do.

While this is true, no human-writable language meets these security claims either. Nothing currently meets them. It is a moot point to say "Future Secure HLSL will be able to do this, but today's SPIR-V can't do it." We can make new versions of both HLSL and SPIR-V that add these security claims. The only debate here is whether "Secure HLSL" or "Secure SPIR-V" is better. And if they both provide the same security guarantees, the only security-related metric (AFAIK) is how performant the resulting codegen will be (i.e., whether the extra info gained by ingesting "Secure HLSL" over "Secure SPIR-V" gives us useful optimizations).

This is a question we simply cannot discuss without the "Secure HLSL" explainer doc. There is no point in discussing it until we all understand the propositions in question.

grorg commented on July 17, 2024

(This comment is probably more appropriate in issue #42, but I'll put it here anyway.)

If an application is going to offline compile their shaders, and WebGPU will accept the shaders in the format they compile to, and there is no functional, performance or security difference between formats, then surely it doesn't actually matter what the format is, right?

e.g. webGPUCompile -i myshader.glsl -o myshader.webgpu

You're not going to look at the .webgpu file to check the SPIR-V, or the HLSL, or whatever it is, because it doesn't matter to you. As long as WebGPU accepts it interoperably, you're happy.

At which point the argument boils down to:

  • can the format be implemented with interoperability, no loss of performance, functionality or security?

  • are there other benefits to the format, such as human-readability and easy online compilation?

litherum commented on July 17, 2024

@kainino0x Yes, you're right.

A few weeks ago, Microsoft publicly stated that they would be comfortable donating HLSL (the language itself and some set of associated tools) to be forked by this CG.

I don't want to be misleading: HLSL for WebGPU would definitely be a fork of HLSL, and it would necessarily require specifying additional requirements on the language that are not currently present. The fork would be a subset, owned by this CG, that both specifies some currently underspecified pieces of the language and removes some features, to improve security and make the language more extensible in the future.

We still need to write a comprehensive spec for HLSL for WebGPU. (Which makes me wonder: if we're paying this cost anyway, why aren't we going to add slices, generics, and pointers? But it seems that ship has already sailed.)

Coder-256 commented on July 17, 2024

@litherum Why was this closed, has consensus been reached?

donny-dont commented on July 17, 2024

If there is no non-human-writable shading language ingestible at the JS level, will there be one at the C API level?

litherum commented on July 17, 2024

WebGPU has no (official) C API. It is an API for the web, and we've agreed that we want to pursue making a WebAssembly API.

In the group, we haven't discussed the option of making the ingested shading language different depending on which API language you're using. Doing so would be unfortunate because the two languages are unrelated.

donny-dont commented on July 17, 2024

I'm speaking more in terms of glShaderBinary vs. glShaderSource, as some platforms don't allow runtime compilation of shader source but do allow offline compilation.

litherum commented on July 17, 2024

Shader compilation needs to occur regardless of which shading language (or runtime API language) is used.

Something like getShaderBinary is only useful if the website is able to provide the resulting machine code back through the API later. This is out of the question because it would be a back door around all the security measures the API needs to enforce.

Also, even being able to read the raw machine code would leak more information about your hardware and driver than we would accept.

This is the same policy as compiled JavaScript or WebAssembly.

grorg commented on July 17, 2024

Ecosystem: Ecosystems exist around humans and the languages they write. There is no large popular repository of shaders in a non-human-readable format.

There are repositories in several shading languages such as HLSL and GLSL. A non-human-readable language designed to act as a compile target could allow these ecosystems to converge. It could also drive innovation in this area, such as the creation of entirely new shading languages.

This is not unique to non-human-readable languages. A human-readable language can be a compile target, e.g. the many languages that compile to JavaScript.

We've already agreed that it will be possible to translate between these shading languages while retaining functionality. There are already tools that do this. So I think this argument applies equally to both human and non-human languages.

Documentation Accuracy: Documentation (à la MDN) will present concepts in the language authors are writing in, not the language the browser ingests. The additional level of indirection between what documentation describes and what the browser is actually doing will make the concepts more difficult to understand for web authors.

How is this different from Metal and D3D12 ingesting AIR and DXIL, respectively? It seems like the same type of indirection.

This is extremely different. Metal's AIR is not an API. Apple does not publish a specification for it. The API for Metal is Metal Shading Language, and is what all the documentation is written in. Yes, Metal does allow direct ingestion of AIR, but that's tied to the toolchain. Also, it allows direct ingestion of Metal Shading Language code.

With a common non-human-readable language (which acts as a compile target), documentation could express concepts through multiple high-level languages and the web author could choose whichever language is familiar.

I think this would be terrible. You should use the most common language to document your technology. It would be as if MDN started documenting everything in CoffeeScript.

Security: No non-human-writable shading language meets the same security claims as the above human-writable shading language.

A non-human-readable shading language could meet the same security claims as a human-readable shading language.

Yes, could. None currently do.

Debugability: Source maps in the only non-human-writable language browsers directly ingest (WebAssembly) still are not implemented in any major browser. Debugging a view of source code, without direct access to the original source, is difficult for browsers to perform.

Source maps are intended to provide direct access to the original source code. It doesn't seem anyone has suggested for browsers to attempt decompilation to an unknown source language.

This is not the point. WebAssembly is hard to debug because it uses a non-human-writable language, and no implementation has worked out a good way to provide a way to debug by looking at the source language. We're trying to avoid this problem.

Ownership: The WebGPU Community Group does not own any non-human-writable shading languages. Any time the Community Group desires to make a modification to the language, they would need to ask a separate group, perhaps in a separate standards body.

This issue still exists with Secure HLSL unless the WebGPU Community Group owns HLSL as well. The languages may diverge whenever new concepts are added to HLSL.

The Working Group would own Secure HLSL.

Portability: An additional compile step adds one more place where behavior deviances can occur. The same shader run through different compilers may have different characteristics (either semantic differences or performance differences). Directly ingesting the exact program the author writes means that this whole class of variance is eliminated.

The human-readable language will still eventually be compiled to a non-human-readable language regardless (at least for Metal, Vulkan, and D3D12). So instead of the user compiling the human-readable language, the user has to rely on the browser to perform this compile step instead.

Yes, exactly.

Instead, if the web author controls the compile step, then the author doesn't have to be as concerned about various browsers compiling the human-readable language differently.

But it means the browser engines have much less opportunity to optimize the program. The huge improvements in JavaScript performance, across all browser engines, show the benefit of this approach.

grorg commented on July 17, 2024

@kainino0x - Yes, good point.

litherum commented on July 17, 2024

I thought we agreed that we'll be pursuing pure HLSL, not Secure HLSL? This doesn't obviate our need for a spec or test suite, but it does obviate our need for a Secure HLSL explainer.

Kangz commented on July 17, 2024

I think we agreed to pursue "an" HLSL and Secure HLSL will hopefully be a better version of HLSL.

That said until the explainer for Secure HLSL is published, let's assume here that the high-level language will be HLSL and hope that most of the discussion will translate easily to Secure HLSL.

kainino0x commented on July 17, 2024

@litherum you're right. Since it sounds like, for now, we will be discussing "Today's HLSL but with a spec" vs "Today's SPIR-V" there are some points above that we all need to consider again in that context. (Especially the Security bullet point.)

litherum commented on July 17, 2024

(I say this because unmodified HLSL is unsuitable for the Web, for many reasons)

grovesNL commented on July 17, 2024

@grorg

This is not unique to non-human-readable languages. A human-readable language can be a compile target e.g. the many examples that compile to JavaScript.

That's true; however, a language designed as a compile target (human-readable or not) would greatly simplify compilation to that language. Statically compiled languages compile to JavaScript only because there was no alternative until WebAssembly.

This is extremely different. Metal's AIR is not an API. Apple does not publish a specification for it. The API for Metal is Metal Shading Language, and is what all the documentation is written in. Yes, Metal does allow direct ingestion of AIR, but that's tied to the toolchain. Also, it allows direct ingestion of Metal Shading Language code.

The point is that there is already a level of indirection, regardless of whether it's provided by an owned toolchain itself or documented/available from an external toolchain. DXIL is already moving in that direction by open sourcing some of the toolchain, so perhaps that is a better example.

I think this would be terrible. You should use the most common language to document your technology. It would be as if MDN started documenting everything in CoffeeScript.

It's just another option that would be (more easily) available. If the WebGPU Community Group recommends HLSL then documentation could be written only in HLSL. Although it does seem beneficial for authors to be able to view code samples in either HLSL or GLSL if they'd like, for example. Unity and other scriptable engines usually provide something similar to this.

Yes, could. None currently do.

HLSL doesn't have those security claims either. This point needs clarification before it can be debated.

This is not the point. WebAssembly is hard to debug because it uses a non-human-writable language, and no implementation has worked out a good way to provide a way to debug by looking at the source language. We're trying to avoid this problem.

It's unclear what "debug" means in this context. Source maps will already display the original source code for debugging. Do you mean editing shader source code and hot-recompiling within the browser? That seems better handled outside of the browser.

The Working Group would own Secure HLSL.

Yes, but not HLSL, which could diverge from Secure HLSL.

But it means the browser engines have much less opportunity to optimize the program. The huge improvements in JavaScript performance, across all browser engines, show the benefit of this approach.

Why require all users to use heavier JIT parsing/compilation during runtime if they would prefer to pay the cost of compilation and optimization upfront? Web authors are already using solutions such as Prepack today to move some of the runtime cost to initial compilation cost.

Kangz commented on July 17, 2024
  • Consistency: As the language in which humans write shaders changes, the non-human-representation of the language must also change. Similarly, changing the non-human-representation of the language means that a similar change should be made in the language humans are writing. This makes it difficult to evolve WebGPU.

A lot of the changes you would make to a high-level language would be around syntax, such as adding generics, pointers, etc. The point of an intermediate language like SPIR-V (or LLVM IR) is that many features of the high-level language are already compiled down to simpler constructs that are easier to work with. It is true, however minor, that adding new builtin variables and functions will require changing both the intermediate language and the high-level one.

  • Ecosystem: Ecosystems exist around humans and the languages they write. There is no large popular repository of shaders in a non-human-readable format.

I don't understand how this matters to the discussion. As long as repositories exist in HLSL people can use that and compile down to SPIR-V.

  • Documentation Accuracy: Documentation (à la MDN) will present concepts in the language authors are writing in, not the language the browser ingests. The additional level of indirection between what documentation describes and what the browser is actually doing will make the concepts more difficult to understand for web authors.

This is why we think HLSL should be a "blessed" language so that it is used consistently in the documentation. Overall I think this is minor because people using WebGPU are not your everyday JavaScript developers, but are supposed to know about GPU stuff.

  • Ownership: The WebGPU Community Group does not own any non-human-writable shading languages. Any time the Community Group desires to make a modification to the language, they would need to ask a separate group, perhaps in a separate standards body. Such a process would make it more difficult and slower to modify. Not all of the members of the WebGPU Community Group are present in other standards groups.

This is a real concern. I think the core structure of SPIR-V is solid enough that we won't need to change it, which makes it unnecessary for everyone to be present in that standards group. Also, we will be happy to upstream any issues we find and make sure they are addressed.

Outside of the core structure everything can be done in this group: extending the functionality of SPIR-V will be possible with "extended instruction sets" like for the GLSL builtins and adding constraints on SPIR-V will be possible as part of the execution environment.
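For concreteness, those extended instruction sets are imported by name inside the module itself (the GLSL builtins live in the "GLSL.std.450" set), so new builtins can be added without touching SPIR-V's core encoding. A small Python sketch that lists the sets a module imports; the module bytes are hand-assembled for illustration:

```python
import struct

OP_EXT_INST_IMPORT = 11  # OpExtInstImport: imports an extended instruction set

def imported_instruction_sets(blob: bytes):
    """Return the names of extended instruction sets a SPIR-V module imports."""
    words = struct.unpack(f"<{len(blob) // 4}I", blob)
    i, names = 5, []                       # skip the five-word header
    while i < len(words):
        word_count, opcode = words[i] >> 16, words[i] & 0xFFFF
        if opcode == OP_EXT_INST_IMPORT:
            raw = struct.pack(f"<{word_count - 2}I", *words[i + 2:i + word_count])
            names.append(raw.split(b"\x00", 1)[0].decode())
        i += word_count
    return names

# Hand-assembled module: header plus OpExtInstImport %1 "GLSL.std.450".
name = b"GLSL.std.450\x00\x00\x00\x00"     # null-terminated, padded to a word
inst = struct.pack("<2I", (6 << 16) | OP_EXT_INST_IMPORT, 1) + name
module = struct.pack("<5I", 0x07230203, 0x00010000, 0, 1, 0) + inst
print(imported_instruction_sets(module))  # ['GLSL.std.450']
```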

  • Portability: An additional compile step adds one more place where behavior deviances can occur. The same shader run through different compilers may have different characteristics (either semantic differences or performance differences). Directly ingesting the exact program the author writes means that this whole class of variance is eliminated.

Most shaders will have been compiled offline and applications will be tested against these offline compiled shaders. So the portability of the HLSL -> SPIR-V compilation doesn't matter.

Even for runtime compiled shaders, this doesn't matter because an application's compiler would be deterministic (one would hope) and produce the same SPIR-V for a given HLSL snippet.
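As a hedged illustration of how that determinism can be exploited, an application can content-address its compiled shaders: the same source plus the same compiler yields the same key, so the artifact the application tested against can be cached or shipped instead of recompiled. The compiler version strings below are placeholders:

```python
import hashlib

def artifact_key(hlsl_source: str, compiler_version: str) -> str:
    """Content-address a compiled shader: same source and same compiler
    version produce the same key, so a tested artifact can be looked up
    instead of recompiled."""
    h = hashlib.sha256()
    h.update(compiler_version.encode())
    h.update(b"\x00")                      # separator between the two fields
    h.update(hlsl_source.encode())
    return h.hexdigest()

src = "float4 main() : SV_Target { return float4(1, 0, 0, 1); }"
assert artifact_key(src, "compiler-v1") == artifact_key(src, "compiler-v1")
assert artifact_key(src, "compiler-v1") != artifact_key(src, "compiler-v2")
```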

othermaciej commented on July 17, 2024

I think it depends on the capabilities of the offline compilation/optimization. Would there also be HLSL to HLSL offline compilation? It seems unlikely that HLSL could benefit from offline optimizations to the same extent as a lower level language.

There are certainly source-to-source optimizations that are valuable and worth doing for other languages, such as JavaScript. Ultimately, we're sending something to a compiler that compiles to the GPU's native instruction set, so some low-level optimizations can't be done up front, and anything else you do ahead of time is to some extent guessing at the behavior of that later compiler. In this aspect, I don't think HLSL and SPIR-V are materially different as targets for higher-level ahead-of-time optimization.

What you describe with LLVM for offline optimization is likely not practical. Many interesting optimizations would destroy the information needed to prove safety guarantees. Ingesting SPIR-V and ingesting arbitrary LLVM bitcode are not the same thing.

With regards to performance, it depends what the runtime performance trade-off will be. More effort will be required to parse, optimize, and compile HLSL than SPIR-V, because some of this work is done upfront for SPIR-V

This is a question that can only be answered by measuring, not speculating. I'd expect the difference in parsing+validation time to be negligible compared to native shader compile time. And because SPIR-V requires you to verify some things that are manifest for HLSL (for example, that control flow is block structured), it's not even a given that the total additional cost for ingesting SPIR-V will be lower.
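To make the verification point concrete: under Vulkan-style structured control flow, every OpBranchConditional must be immediately preceded by an OpSelectionMerge or OpLoopMerge, and a consumer ingesting SPIR-V has to check this, whereas HLSL's if/for syntax makes it structurally impossible to violate. A toy Python sketch of that single rule, over opcode names rather than real binary:

```python
MERGE_OPS = {"OpSelectionMerge", "OpLoopMerge"}

def branches_are_structured(opcodes):
    """Toy check of one structured-control-flow rule: every
    OpBranchConditional must be immediately preceded by a merge instruction."""
    for i, op in enumerate(opcodes):
        if op == "OpBranchConditional":
            if i == 0 or opcodes[i - 1] not in MERGE_OPS:
                return False
    return True

structured = ["OpLabel", "OpSelectionMerge", "OpBranchConditional"]
unstructured = ["OpLabel", "OpBranchConditional"]  # no merge: must be rejected
print(branches_are_structured(structured), branches_are_structured(unstructured))
# True False
```

A real validator checks many more rules (dominance, merge-block nesting, etc.); the point is only that such checks are extra ingestion work.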

grovesNL commented on July 17, 2024

@othermaciej

In this aspect, I don't think HLSL and SPIR-V are materially different as targets for higher-level ahead-of-time optimization.

What you describe with LLVM for offline optimization is likely not practical.

It's possible that these shaders wouldn't greatly benefit from AOT optimization. However, if there is potential, I don't think we should leave the possibility of performance gains on the table. Ideally we could measure expected use cases, as you said.

othermaciej commented on July 17, 2024

@grovesNL I agree that measurement is the best final source of truth. I think in this case, the burden of proof would be on you to show an AOT optimization that helps even after running through the full native shader compiler pipeline, and which could be done on SPIR-V but not HLSL. Otherwise you're asking everyone else to prove the negative that no such optimization exists, which can't be done by measurement alone.

The reason for my intuition:

  • An LLVM based back end would still need to run the full optimizer pipeline whether or not the code it's presented has been AOT optimized. It has no way to know which optimizations have been run if it's still presented with code in the same format.
  • The only way to get a win out of AOT optimization with LLVM, when using a system that ultimately compiles to native, would be to find an optimization that the native compiler won't do (maybe too expensive to do at runtime), which is effective when run very early in the optimization pipeline, which won't be undone by later passes, and which still results in valid SPIR-V. That seems fairly unlikely, other than for fairly high-level transforms.
  • Such high-level transforms are very likely to also be doable with HLSL as input and output.
  • It's worth noting that it's perfectly possible for LLVM to have back ends that output a high-level language; Emscripten is a well-known example. So even if LLVM had the exact right optimization, there's not much reason to think it could do it while outputting SPIR-V but not while outputting HLSL.

However, I'd accept a concrete counter-example as evidence that my intuition is wrong.

kainino0x commented on July 17, 2024

Consensus was reached at last week's face-to-face meeting (minutes TBA) to develop a proposed language which is human-authorable but defined directly by its translation to SPIR-V. (This is at the specification level, so as always this is an "as-if" - implementations aren't required to actually go through SPIR-V.)

devshgraphicsprogramming commented on July 17, 2024

Can we still send SPIR-V to WebGPU?

kainino0x commented on July 17, 2024

We will produce tooling to convert SPIR-V into the proposed language.
