
victordibia / neuralqa

233 stars · 32 forks · 31.04 MB

NeuralQA: A Usable Library for Question Answering on Large Datasets with BERT

Home Page: https://victordibia.github.io/neuralqa/

License: MIT License

Languages: HTML 3.99% · CSS 9.00% · JavaScript 51.40% · Python 33.99% · Dockerfile 0.44% · SCSS 1.18%
Topics: bert-model, deep-learning, elastic-search, information-retrieval, natural-language-processing

neuralqa's People

Contributors

andrewrreed · dependabot[bot] · victordibia


neuralqa's Issues

Visual communication of model status (loading)

  • Some reader requests can take longer than others (e.g. when reading long documents); implement a better "loading" visual cue. Right now the UI just says "asking BERT for answers"; a more prominent progress bar might work better.

Add documentation on how to compile and test locally

It would be great to add instructions on how to compile and test the program on a local machine (without having to pip install neuralqa). Instructions should cover:

  • how to build the React front-end app
  • how to build and run the Python-based server

Retrieval username & password ignored

I specified an Elasticsearch retriever that requires authentication, with the username and password in the options section of the retrieval config. When I try to run a query, the app fails with:

Error Fetching Passages. ConnectionError(<urllib3.connection.HTTPConnection object at 0x15cd3a490>: Failed to establish a new connection: [Errno 61] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x15cd3a490>: Failed to establish a new connection: [Errno 61] Connection refused)

Looking at the code, it appears that the username and password are not used anywhere.

http_auth=('user', 'secret') needs to be passed to the Elasticsearch client in ElasticSearchRetriever - I will submit a pull request.
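
For reference, a minimal sketch of what the fix might look like with the elasticsearch-py client (the options dict and its keys here are illustrative, not NeuralQA's actual code):

from elasticsearch import Elasticsearch

# Hypothetical sketch: pass the configured credentials through to the
# client instead of dropping them.
options = {"host": "localhost", "port": 9200,
           "username": "user", "password": "secret"}

es = Elasticsearch(
    [{"host": options["host"], "port": options["port"]}],
    http_auth=(options["username"], options["password"]),
)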

v0.0.1alpha Overall Roadmap

v0.0.1

Backend

  • Add sample questions/passages from yaml backend

  • RelSnip toggle, also visualize RelSnip

  • Tests for bad yaml

  • Revise probability scoring function (explore guidance from HF pipelines)

  • Load state from local storage

  • Query expansion methods

    • Word2Vec
    • MLM
  • endpoint access control?

  • Index interface refactor

    • Index query config files, i.e. for each index, which fields should be used for queries, etc.
    • Support for Solr indexes
  • Test suite

    • models
    • index
    • ui
    • expander
    • explainer
  • Fix search index command-line credentials interface; move index params strictly to config.yaml

Deployment/Testing CI/CD

  • Add build-passing, test-passing, and doc-passing badges to keep track of build errors, tests, and docs

Endpoint Access Control

Currently, there are no built-in security checks to manage access to the NeuralQA REST endpoint. At a minimum, we want to be able to enable/disable open access to the REST endpoint when we launch the UI.

  • Implement access control (e.g. API key or username/password) for calls to the endpoints (a minimal sketch follows below)
  • Find a clean way to let only the front-end UI component authenticate
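
A minimal sketch of what API-key access control could look like with FastAPI (which the server already uses); the key source, header name, and handler body are illustrative assumptions:

from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security.api_key import APIKeyHeader

API_KEY = "change-me"  # in practice this would come from config.yaml
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

async def require_api_key(api_key: str = Security(api_key_header)):
    # Reject requests that do not carry the expected key.
    if api_key != API_KEY:
        raise HTTPException(status_code=403, detail="Invalid or missing API key")

app = FastAPI()

@app.post("/api/answers", dependencies=[Depends(require_api_key)])
async def get_answers():
    return {"answers": []}  # placeholder handler body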


Pytorch in requirements?

Both torch and torchvision are listed in requirements.txt and setup.py, but they don't seem to be used anywhere. Is there a particular reason for having both torch and tensorflow in the requirements?

'TFBertEmbeddings' object has no attribute 'word_embeddings'

Hi, when I run the code model.bert.embeddings.word_embeddings,
I get the error: 'TFBertEmbeddings' object has no attribute 'word_embeddings'.
Has anyone had the same problem? Could you give me some insight into how to fix it?
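
This attribute was restructured in newer transformers releases. A version-portable workaround (a sketch, assuming a TF BERT model) is to go through the public get_input_embeddings() accessor instead of reaching into internal attributes:

from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")

# Ask the model for its input-embedding layer via the public API.
embedding_layer = model.get_input_embeddings()

# The underlying weight matrix; any Keras layer exposes its variables
# through .weights, whatever attribute name the version uses internally.
word_embeddings = embedding_layer.weights[0]
print(word_embeddings.shape)  # (vocab_size, hidden_size)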

Models in Memory

Figure out the right way to handle which models to hold in memory

  • Best way to hold models in memory and configure this from config.yaml

  • Use the readerPool class to manage which models are loaded in memory (a minimal sketch follows below)
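
A sketch of what an LRU-style reader pool might look like (hypothetical; the actual readerPool class and the load_fn hook are illustrative):

from collections import OrderedDict

class ReaderPool:
    def __init__(self, load_fn, max_models=2):
        self.load_fn = load_fn        # e.g. lambda name: BertReader(name)
        self.max_models = max_models  # would come from config.yaml
        self._pool = OrderedDict()    # model name -> loaded model

    def get(self, name):
        if name in self._pool:
            self._pool.move_to_end(name)    # mark as recently used
            return self._pool[name]
        if len(self._pool) >= self.max_models:
            self._pool.popitem(last=False)  # evict least recently used
        self._pool[name] = self.load_fn(name)
        return self._pool[name]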

Improve Explanations: Better Explanation Visualization, More Explanation Methods

Improved Explanation Visualization

Currently, we use a simple color-density approach to visualize importance. Early feedback suggests this helps the user immediately see the most important words/tokens, but it does not offer quantities or further interaction (e.g. top n). Can we make this better or provide better alternatives?

  • convert explanations to a single modal view: switcher between visualization types
  • bar + density visualization for easier comparisons similar to what was done here?
  • Top n important words: show only highlights for the top x most important words?

More Explanation Methods

Currently, explanations are based on vanilla gradients (a sketch of this approach follows the list below). We might want to explore:

  • Integrated gradients (maybe explore)
  • SmoothGrad
  • Grad-CAM (maybe)
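
For context, a minimal sketch of the vanilla-gradient approach, assuming a transformers TF question-answering model and input_ids of shape (1, seq_len); exact call signatures vary across transformers versions:

import tensorflow as tf

def vanilla_gradient_saliency(model, input_ids, attention_mask):
    # Look up the raw word embeddings so we can take gradients w.r.t.
    # them rather than the (non-differentiable) token ids.
    emb_matrix = model.get_input_embeddings().weights[0]
    embeddings = tf.gather(emb_matrix, input_ids)
    with tf.GradientTape() as tape:
        tape.watch(embeddings)
        outputs = model({"inputs_embeds": embeddings,
                         "attention_mask": attention_mask})
        # Explain the score of the most likely answer-start position.
        target = tf.reduce_max(outputs.start_logits, axis=1)
    grads = tape.gradient(target, embeddings)
    # Collapse the hidden dimension: L2 norm per token, scaled to [0, 1].
    scores = tf.norm(grads, axis=-1)[0]
    return scores / tf.reduce_max(scores)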


Results in NeuralQA inconsistent with same model running on HF

I've tested a model that I've deployed on NeuralQA vs. the same model deployed on HF, and noticed that the same inputs yield different outputs even though it's the exact same model. This can of course be attributed to a few things, but I can't seem to identify the culprit.

Here's the context:

Question:
Are your handsets locked or unlocked?

Corpus:
['No, all our handsets are unlocked.','Since your SIM isn’t working in your handset while other SIM cards are, it might be an issue with your handset provider; or the mobile phone could be locked, meaning it only accepts SIM cards from a particular service provider. Please contact the handset dealer for more assistance.']

The following returns 'unlocked', which is the correct response:
See Demo on HuggingFace

I've configured the exact same model in NeuralQA (with RelSnip disabled) and the result is 'locked', even though I'm feeding in exactly the same inputs.
Here is my log:

0:No, all our handsets are unlocked.
[{'answer': 'unlocked', 'took': 0.35032129287719727, 'start_probability': '0.92030567', 'end_probability': '0.00026586326', 'probability': '0.460418697912246', 'question': 'Are your handsets locked or unlocked?', 'context': 'no, all our handsets are unlocked '}]
1:Since your SIM isn’t working in your handset while other SIM cards are, it might be an issue with your handset provider; or the mobile phone could be locked, meaning it only accepts SIM cards from a particular service provider. Please contact the handset dealer for more assistance.
[{'answer': 'locked', 'took': 0.5319299697875977, 'start_probability': '0.9462091', 'end_probability': '0.007203659', 'probability': '0.48030819557607174', 'question': 'Are your handsets locked or unlocked?', 'context': 'since your sim isn ’ t working in your handset while other sim cards are, it might be an issue with your handset provider ; or the mobile phone could be locked , meaning it only accepts sim cards from a particular service provider. please contact the handset dealer for more assistance'}]

As you can see, the 2nd answer gets a higher probability, but that doesn't really make sense since it's exactly the same model.
The main difference is that NeuralQA feeds each corpus entry to the model independently, while in the HF example we feed the entire corpus.

Any ideas on why this is happening?
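
One plausible culprit is the span-scoring function. The HF question-answering pipeline softmaxes the start and end logits and scores each candidate span by the product of the two probabilities; a scheme that combines them differently (e.g. averaging, which the probability values in the log above appear consistent with) will rank answers differently. A minimal sketch of HF-style scoring, for comparison:

import numpy as np

def hf_style_span_score(start_logits, end_logits, max_answer_len=15):
    # Softmax each logit vector independently.
    p_start = np.exp(start_logits - start_logits.max())
    p_start /= p_start.sum()
    p_end = np.exp(end_logits - end_logits.max())
    p_end /= p_end.sum()
    # Joint score for every (start, end) pair is the product of the two.
    scores = np.outer(p_start, p_end)
    # Keep only spans with start <= end and length <= max_answer_len.
    scores = np.triu(scores) - np.triu(scores, max_answer_len)
    start, end = np.unravel_index(scores.argmax(), scores.shape)
    return start, end, scores[start, end]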

Add documentation for customizing retrieval process

Firstly, thank you for creating one of the most useful projects I have seen for Q&A and NLP. It would be great to add documentation for configuring retrieval with:
1. our own data imported into Elasticsearch
2. custom retrievers.
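
As a starting point, a minimal sketch of importing your own documents into an Elasticsearch index that a retriever could then point at (index name and fields are illustrative):

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch([{"host": "localhost", "port": 9200}])

docs = [
    {"title": "FAQ: unlocking", "body": "No, all our handsets are unlocked."},
    {"title": "FAQ: SIM issues", "body": "Contact the handset dealer."},
]

# Bulk-index the documents into an index named "myindex".
helpers.bulk(es, ({"_index": "myindex", "_source": d} for d in docs))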

Enable stemmer filter in elasticsearch index

I think it would be a good idea to update data_utils.py to include a stemming filter by default when creating Elasticsearch indices. This would tremendously improve the results returned by ES.
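
A sketch of what that could look like at index-creation time (settings are illustrative; Elasticsearch's default "stemmer" token filter is the English stemmer):

from elasticsearch import Elasticsearch

es = Elasticsearch([{"host": "localhost", "port": 9200}])

# Create an index whose default analyzer lowercases tokens and applies
# an English stemmer before indexing.
es.indices.create(
    index="cases",
    body={
        "settings": {
            "analysis": {
                "analyzer": {
                    "default": {
                        "type": "custom",
                        "tokenizer": "standard",
                        "filter": ["lowercase", "stemmer"],
                    }
                }
            }
        }
    },
)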

Query Expansion - User in the loop design

  • Formalize definitions of QE (if QE is applied, then display/visualize it). Add a dropdown to select which BERT model is used for expansion.

  • Create a separate endpoint for expansion. This workflow decouples expansion from QA, making it an optional "user in the loop" step. Currently, we add an expand button, which the user can click to generate potential expansion terms.

  • If QE is selected, show/highlight which terms were added to the query.

  • Include explanation visualization: show which token was expanded and why, i.e. a chart/map that shows how each token was expanded.

  • Add logic to the backend to detect named entities and not expand them.

[screenshot: example query-expansion UI]

In the example screenshot implementation above, some terms are not expanded while others are (blue). By default, named entities (and POS categories such as proper nouns) are not expanded. A sketch of the MLM expansion idea follows below.
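
A minimal sketch of MLM-based expansion using the transformers fill-mask pipeline (assuming a recent transformers version; the skip set stands in for detected named entities):

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def expand_query(tokens, skip=frozenset(), top_k=3):
    expansions = {}
    for i, token in enumerate(tokens):
        if token in skip:  # e.g. named entities stay unexpanded
            continue
        # Mask one token at a time and collect high-probability fills.
        masked = tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:]
        preds = fill_mask(" ".join(masked), top_k=top_k)
        expansions[token] = [p["token_str"] for p in preds]
    return expansions

print(expand_query("what is the capital of france".split(), skip={"france"}))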

RuntimeError on a fresh install

(Seemingly) I successfully installed NeuralQA with the following command: pip3 install neuralqa (based on the information on this page: https://victordibia.com/neuralqa/index.html). But when I try to start the UI with the following command: neuralqa ui --port 5000, I just get a lot of errors, ending with "RuntimeError: random_device could not be read". What can I do, and what should I check? I did not find anything in the docs or in the FAQ...
Thanks
log.txt

v0.0.1alpha*

v0.0.1

  • Backend

    • Support YAML Configuration for Entire Application

      • Manual dataset
      • Supported indexes
      • Token Stride
      • Passage highlighting
      • Base path
      • Show/hide manual samples.
      • Elastic server params (hostname and port; default localhost:9200)
      • UI Application port for Flask
      • Support for NGINX deployment (optional)
      • Automatic index generation from files. Supported file types
        • JSON files
        • JSONL files
    • Add explanation utils - v0 gradient explanations

    • Refactor application core modules from utils into individual folders and classes - ui, models, index, expander, explainer, data

  • If RelSnip is used, return RelSnip passage as part of answer result

    • Include front end toggle for this and yaml file config.
  • UI

    • Support searching from a given index
    • Get config data from back end
    • Show or hide retrieved passages and highlights
      • If this is off, disable highlights altogether
    • Show or hide attention weights.
    • Visualize model attention weights explanations via saliency maps
    • Add information on where each answer came from, e.g. "we searched 5 passages and found 2 answers across 1 document"
    • Add QA samples to config.yaml
  • Make NeuralQA pip-installable and runnable from the command line.

    • neuralqa start -config config.yaml

Transformers 4.x Compatibility Issue

Issue

NeuralQA's requirements do not pin package versions; instead they use >= to pull the latest version of each library. With the latest release of transformers, there is a compatibility issue between NeuralQA, TensorFlow, and Transformers.
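
A short-term mitigation is to pin upper bounds in setup.py / requirements.txt until compatibility is verified; a sketch (the exact bounds are illustrative, not tested values):

from setuptools import find_packages, setup

setup(
    name="neuralqa",
    packages=find_packages(),
    install_requires=[
        "transformers>=3.0.0,<4.0.0",  # avoid the 4.x breaking changes
        "tensorflow>=2.1.0,<2.4.0",    # keep a version the readers run on
    ],
)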

UI Error

[screenshot: UI error in the NeuralQA interface]

Error Message

INFO:     127.0.0.1:53966 - "POST /api/answers HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 394, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 190, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 376, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 190, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 188, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 135, in run_endpoint_function
    return await dependant.call(**values)
  File "/usr/local/lib/python3.8/site-packages/neuralqa/server/routehandlers.py", line 45, in get_answers
    answers = self.reader_pool.model.answer_question(
  File "/usr/local/lib/python3.8/site-packages/neuralqa/reader/bertreader.py", line 111, in answer_question
    answer = self.get_chunk_answer_span(model_input)
  File "/usr/local/lib/python3.8/site-packages/neuralqa/reader/bertreader.py", line 26, in get_chunk_answer_span
    answer_start, answer_end = self.get_best_start_end_position(
  File "/usr/local/lib/python3.8/site-packages/neuralqa/reader/bertreader.py", line 18, in get_best_start_end_position
    answer_start = tf.argmax(start_scores, axis=1).numpy()[0]
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 173, in argmax_v2
    return gen_math_ops.arg_max(input, axis, name=name, output_type=output_type)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 837, in arg_max
    return arg_max_eager_fallback(
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 872, in arg_max_eager_fallback
    _result = _execute.execute(b"ArgMax", 1, inputs=_inputs_flat, attrs=_attrs,
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: Could not find valid device for node.
Node:{{node ArgMax}}
All kernels registered for op ArgMax :
  device='XLA_CPU'; output_type in [DT_INT32, DT_INT64]; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='XLA_CPU_JIT'; output_type in [DT_INT32, DT_INT64]; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='CPU'; T in [DT_INT64]; output_type in [DT_INT64]
  device='CPU'; T in [DT_INT64]; output_type in [DT_INT32]
  device='CPU'; T in [DT_INT32]; output_type in [DT_INT64]
  device='CPU'; T in [DT_INT32]; output_type in [DT_INT32]
  device='CPU'; T in [DT_UINT16]; output_type in [DT_INT64]
  device='CPU'; T in [DT_UINT16]; output_type in [DT_INT32]
  device='CPU'; T in [DT_INT16]; output_type in [DT_INT64]
  device='CPU'; T in [DT_INT16]; output_type in [DT_INT32]
  device='CPU'; T in [DT_UINT8]; output_type in [DT_INT64]
  device='CPU'; T in [DT_UINT8]; output_type in [DT_INT32]
  device='CPU'; T in [DT_INT8]; output_type in [DT_INT64]
  device='CPU'; T in [DT_INT8]; output_type in [DT_INT32]
  device='CPU'; T in [DT_HALF]; output_type in [DT_INT64]
  device='CPU'; T in [DT_HALF]; output_type in [DT_INT32]
  device='CPU'; T in [DT_BFLOAT16]; output_type in [DT_INT64]
  device='CPU'; T in [DT_BFLOAT16]; output_type in [DT_INT32]
  device='CPU'; T in [DT_FLOAT]; output_type in [DT_INT64]
  device='CPU'; T in [DT_FLOAT]; output_type in [DT_INT32]
  device='CPU'; T in [DT_DOUBLE]; output_type in [DT_INT64]
  device='CPU'; T in [DT_DOUBLE]; output_type in [DT_INT32]
 [Op:ArgMax]
