
diffgram / diffgram

Stars: 1.8K · Watchers: 29 · Forks: 114 · Size: 57.42 MB


Home Page: https://diffgram.com

License: Other

Languages: Dockerfile 0.05%, Python 45.26%, Shell 0.01%, JavaScript 6.68%, HTML 0.03%, Vue 41.36%, CSS 0.38%, TypeScript 6.22%, Mako 0.01%, Makefile 0.02%
Topics: annotation, annotation-tool, training-data, video-annotation, data-annotation, kubernetes, data-science, data-analytics, image-annotation, machine-learning

diffgram's Introduction

Docs · Diffgram.com · Request Slack Invite

News

Sept 28, 2023: New Diffgram license version 2 (DLv2), featuring a new contributor license (CL) available at no financial cost. MSA customers will receive a financial credit for all contributions.

The AI Datastore

The AI Datastore for Schemas, BLOBs, and Predictions. Use with your apps or integrate built-in Human Supervision, Data Workflow, and UI Catalog to get the most value out of your AI Data.

Learn more

Use Cases

  • Use with your AI Apps - One place for Compliant PII AI data.
  • Human Supervision (Data Labeling) - Label all media types and scale your annotation.
  • AI Data Application Workflow - Move data between your AI Apps and control your AI through a friendly UI/UX experience.
  • UI Catalog - Visually Explore your AI Datastore.

Data

Diffgram is installed by you, so you keep control over your data.

Supervision (Data Labeling) Media Types

A popular use case is human supervision (data labeling).

Getting Started

Watch the Video Explainer and read the commercial open source license.

More

Commercial firms have been using Diffgram since 2018, and we continue to stay up to date with the latest advances. Diffgram has 706 tests (E2E, unit, etc.), and we care greatly about quality.

diffgram's People

Contributors

0o001, anthony-sarkis, dependabot[bot], francescov1, kant, kyoto7250, mathijsnl, oss-maintainer-12, pjestrada, ppolxda, ravi-dhime, sergzak022, steveoliver, sureshvytla, vitalii-bulyzhyn, zayec77


diffgram's Issues

Images core process_profile_image is not using Data_tools()

In the context of users on cloud providers other than GCP:

The functions process_profile_image and process_image_generic are not using the Data_Tools() class to upload profile pictures. This makes them throw exceptions when a provider other than GCP is configured.
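
A minimal sketch of the fix, assuming the Data_Tools() factory resolves a provider-specific client (the factory shape and upload method name below are assumptions for illustration, not the exact Diffgram API):

from shared import data_tools_core  # import path as seen in this repo's tracebacks

def process_profile_image(blob_path: str, image_bytes: bytes):
    # Resolve GCP, S3, or Azure from settings instead of constructing a GCP client directly
    data_tools = data_tools_core.Data_tools().data_tools  # assumed factory shape
    data_tools.upload_from_string(blob_path, image_bytes)  # assumed method name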

Copy Schema (Labels, attributes, spatial templates) to a new project

Desire

As a user, I may wish to have multiple projects that use the same Schema.

Workaround

Edit: The Diffgram export includes large aspects of the Schema; that file can be re-ingested using the "import diffgram format" option in the wizard. This achieves much of this ticket's goal.

Otherwise, the current workaround is to manually recreate the Schema, or to use the API/SDK.
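
For reference, the API route could look roughly like the sketch below. The endpoints and payload shapes are hypothetical, for illustration only; it just shows the general shape of the workaround: read the schema from a source project and recreate each label in the target project.

import requests

BASE = "https://diffgram.example.com/api"  # hypothetical host and paths

def copy_labels(source_project: str, target_project: str, auth: tuple):
    labels = requests.get(f"{BASE}/project/{source_project}/labels", auth = auth).json()
    for label in labels:
        requests.post(
            f"{BASE}/project/{target_project}/labels",
            json = {"name": label["name"], "attributes": label.get("attributes", [])},
            auth = auth)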

Context

The context here is that currently you can attach different labels to different tasks inside a project, so this is most applicable to users with tens of thousands of tasks (tens of projects and hundreds of task sets).

Brainstorm

One approach would be to first implement V1 of the Schema concept (an abstract set, like a dataset, that contains the various labels, attributes, spatial templates, label-specific permissions, etc.).

Then we can more easily work with multiple Schemas in the same project, and also do things like copy or share Schemas between projects. I like the idea of doing this at the schema level since it would be simpler for a user to think about, and probably simpler to implement.

Docker error on installing development with remote PostgreSQL

Launching Diffgram...
WARNING: The CLOUD_STORAGE_BUCKET variable is not set. Defaulting to a blank string.
WARNING: The ML__CLOUD_STORAGE_BUCKET variable is not set. Defaulting to a blank string.
WARNING: The DIFFGRAM_AWS_ACCESS_KEY_ID variable is not set. Defaulting to a blank string.
WARNING: The DIFFGRAM_AWS_ACCESS_KEY_SECRET variable is not set. Defaulting to a blank string.
WARNING: The DIFFGRAM_S3_BUCKET_NAME variable is not set. Defaulting to a blank string.
WARNING: The ML__DIFFGRAM_S3_BUCKET_NAME variable is not set. Defaulting to a blank string.
WARNING: The MAILGUN_KEY variable is not set. Defaulting to a blank string.
Docker Compose is now in the Docker CLI, try docker compose up
Pulling db (scratch:)...
ERROR: 'scratch' is a reserved name
Diffgram Successfully Launched!
View the Web UI at: http://localhost:8085

The install.py script also ignores the error and assumes launching was successful.
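
A minimal sketch of how install.py could fail fast instead (assuming it shells out to Docker Compose; the exact launch command in install.py may differ):

import subprocess
import sys

result = subprocess.run(["docker", "compose", "up", "-d"],
                        capture_output = True, text = True)
if result.returncode != 0:
    print("Diffgram failed to launch:")
    print(result.stderr)
    sys.exit(result.returncode)
print("Diffgram Successfully Launched!")
print("View the Web UI at: http://localhost:8085")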

"starting annotating" path - migration issue

Workaround: the following are some of the known workarounds

  1. Launch from Job list
  2. Click any "view" task; pagination and next task work as expected
  3. Share task / any direct task links all work as expected

Description:
In enterprise we have an "Apply" process. This was not migrated to open source.
Let's look at replacing this button with a "first available task" flow in open source, or something like that.


Better handle multiple installations

Context: a new person may install multiple times, and then the user is deleted.
So when they run docker compose up again, the user is gone and the install appears to still not be working.

Add Support for Data Tools In Semantic Segmentation

The file semantic_segmentation.py still requires a bit of refactoring to stop using GCP directly and instead use data_tools to manage cloud operations. Not a huge priority since it's not used that much, but leaving it here for reference.

Support more popular formats for mapping off the shelf

If possible, set this up in such a way that the mapping can be defined from a front-end template and saved, e.g. so someone can set up COCO and other existing formats as pre-defined templates (which a user could then edit).

Select the key that has the data, instead of assuming the instances are at the root (or something like that).

This is probably worth building to be user-configurable, e.g. we provide a generic COCO template and someone with a "near-COCO" format can adapt it easily; see the sketch below.

More generally, let's double down on the import wizard because people seem to really like it.
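
A minimal sketch of what a user-editable mapping template could look like (all key names below are illustrative, not an existing Diffgram structure):

# A generic COCO-style template the front end could ship and let users edit
COCO_TEMPLATE = {
    "instance_list_key": "annotations",  # select the key that has the data, not the root
    "label_key": "category_id",
    "bbox_key": "bbox",                  # [x, y, width, height]
}

def extract_instances(raw: dict, template: dict = COCO_TEMPLATE) -> list:
    return [
        {"label": item[template["label_key"]], "bbox": item[template["bbox_key"]]}
        for item in raw.get(template["instance_list_key"], [])
    ]

A "near-COCO" format could then be supported by saving a copy of this template with the changed keys.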

Pipeline Feature Requests


1. Click a dataset to view Data Explorer of it.

User goal: visually inspecting information (and further filtering / extraction?), starting with the work at a certain known stage, e.g. "As a user I want to inspect the output of QA step 2."

Ideas:

  • Open in a full-screen panel (like the import wizard)
  • Open in a new tab
  • Navigate to it directly

I like the idea of maintaining context, since a user may want to skim through multiple sets in one session.

2. See file counts at each stage.

There's a LOT more we could do here, but just to start, seeing file counts would help better visualize the work in progress.

Other ideas?

If you want something further here on the pipeline context please feel free to add to this issue or create a new issue :)

[low priority] Scroll / cloud file selector blocks continue button

Reproduce:
On the new import wizard, select cloud, then select a folder that causes expansion of the table.

Observations:
When I select a folder that causes the selector to expand (desired behavior), the "continue" button goes below the fold. When I try to scroll down, for some reason it fails; I have to close the import menu to see the continue button.

If I had a really long root list, I'm not sure it would be possible to get to continue.



Expected (showing the button): ![image](https://user-images.githubusercontent.com/18080164/120874044-99eadf80-c559-11eb-818a-d82647f5c3de.png)

Pagination error on File List

Context of Exception

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
at default_metadata (/app/methods/source_control/file/file_browser.py:599)
at __init__ (/app/methods/source_control/file/file_browser.py:524)
at view_file_list_web_route (/app/methods/source_control/file/file_browser.py:496)
at inner (/app/shared/permissions/project_permissions.py:64)
at dispatch_request (/usr/local/lib/python3.7/site-packages/flask/app.py:1935)
at full_dispatch_request (/usr/local/lib/python3.7/site-packages/flask/app.py:1949)
at reraise (/usr/local/lib/python3.7/site-packages/flask/_compat.py:39)
at handle_user_exception (/usr/local/lib/python3.7/site-packages/flask/app.py:1820)
at full_dispatch_request (/usr/local/lib/python3.7/site-packages/flask/app.py:1951)
at wsgi_app (/usr/local/lib/python3.7/site-packages/flask/app.py:2446)

The main issue is that the frontend is not sending a start_index value. It seems to happen only when changing a dataset in the file explorer. However, on the new refactored branch the issue seems to be gone.

I'm linking the decouple media PR to close this when it's merged.
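
A minimal defensive fix on the backend could look like this (function name illustrative), defaulting to the first page when the value is missing:

def normalize_start_index(request_data: dict) -> int:
    """Default to 0 when the frontend omits start_index (avoids int(None))."""
    raw = request_data.get("start_index")
    return int(raw) if raw is not None else 0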

Idea: Process Media Supervisor

In the context of the process media pipeline failing or halting, I think it could be a good idea to have a process supervisor that checks something like this:

if number_of_input_items_pending_processing > 0 and more than 30 mins without decreasing:
    restart walrus

That way we have a self-healing mechanism if the process media thread dies and stops processing items for whatever reason.
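
A minimal sketch of such a supervisor loop; pending_count() and restart_walrus() are illustrative callables, not existing Diffgram functions:

import time

STALL_SECONDS = 30 * 60  # 30 minutes without the queue decreasing

def supervise(pending_count, restart_walrus, poll_seconds = 60):
    last_count = pending_count()
    last_progress = time.time()
    while True:
        time.sleep(poll_seconds)
        count = pending_count()
        if count == 0 or count < last_count:
            last_progress = time.time()  # queue is draining, all good
        elif time.time() - last_progress > STALL_SECONDS:
            restart_walrus()             # self-heal a dead processing thread
            last_progress = time.time()
        last_count = count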

Stronger Backend Connection Test at Installation

Context: we need write permissions (5 permissions total in the AWS context, for example).

Recently a connection test was added. Let's add more checks at installation to avoid harder to debug issues later.
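
For the AWS context, one such check could verify the bucket is actually writable rather than just reachable. A minimal sketch (bucket name handling and error reporting would need to fit the real installer):

import uuid
import boto3

def check_s3_write(bucket: str) -> bool:
    s3 = boto3.client("s3")
    key = f"diffgram-install-check/{uuid.uuid4()}"
    try:
        s3.put_object(Bucket = bucket, Key = key, Body = b"ok")  # needs s3:PutObject
        s3.delete_object(Bucket = bucket, Key = key)             # needs s3:DeleteObject
        return True
    except Exception:
        return False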

Labels appear not to load (both regular view and explorer)

Context: New View Only for Public Projects

For this new view
https://diffgram.com/studio/annotate/coco-dataset

While testing from incognito, the labels don't appear to load.

I'm not sure if this is related, but the auto-suggest is then getting a 500 and not showing the labels.

Interestingly, refreshing the page causes the labels to load.

But the explorer continues to throw a 500. This is the traceback:

File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/app/shared/permissions/project_permissions.py", line 64, in inner
    return func(*args, **kwds)
  File "/app/methods/query_engine/query_suggest.py", line 40, in query_suggest_web
    client_id = request.authorization.get('username', None)
AttributeError: 'NoneType' object has no attribute 'get'

Maybe there are some assumptions about the availability of request.authorization in the new "allow if public" context?
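
If that is the cause, a minimal guard in query_suggest_web could look like this (request.authorization is None for anonymous visitors, so avoid calling .get() on None):

from flask import request

def get_client_id():
    auth = request.authorization
    return auth.get('username') if auth else None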

tests issue after merge

It was showing as passing here: #53
But when merged to master it now shows as failing.
I'm not sure what happened. Could I get your help please, Pablo?

Better surface permission errors on UI in Super Admin context

In the new open source installation context, the default first user is a super admin.

Various things can cause the user's permissions to be stale and throw a 403.
In general we handle this for regular users with regular JS error handling and a message.

In the new super admin context, though, we don't handle it as well, which makes the error silent from the UI perspective, e.g. the label list appears stuck on loading, reports, etc.

We already have some logic for auto log out, so that could work. Maybe we should switch (back?) to auto log out on 403 permission errors (not 4xx in general, just 403).

cocoSsd issue - Userscript

In the object detection example, as well as in my local development environment, the script throws the following exception: Error: Cannot find TensorFlow.js. If you are using a <script> tag, please also include @tensorflow/tfjs on the page before using this model.

Support Large Uploads (Multi GB) on Local Machines

Some users are having trouble uploading large videos either via azure connectors or via local uploads.

Some reasons for the local case:

  • Dropzone chunks may be too big and sometimes time out (when the Wi-Fi connection is not that good).
  • Same thing with the Azure connector: since the video is being downloaded from Azure, high bandwidth is required to avoid request timeouts.

Possible Solutions:

  • Chunk downloads to local machine
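
A minimal sketch of chunked downloading with requests (URL, destination, and chunk size are illustrative), so one slow request never has to hold the whole file:

import requests

def download_in_chunks(url: str, dest_path: str, chunk_size: int = 8 * 1024 * 1024):
    with requests.get(url, stream = True, timeout = 60) as resp:
        resp.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size = chunk_size):
                f.write(chunk)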

Exception "File /etc/gcp/sa_credentials.json was not found" while using Azure Storage Account

Hello.

I am using the Helm chart provided in the other repository for installing diffgram in a Kubernetes cluster. The problem is that the two containers (diffgram-default and diffgram-walrus) cannot find the file /etc/gcp/sa_credentials.json. Since I am using an Azure Storage Account to store the static files, this error should not occur. Anyway, I wonder if I am doing this right, so I will paste the values.yaml file and the log of the exception that I obtained.

values.yaml:

# Default values for diffgram.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# The Diffgram Version. Whenever a new update arrives, this will be changed.
diffgramVersion: latest

# Either 'opencore' or 'enterprise'. Please note that selecting 'enterprise'
# requires that you also set imagePullCredentials.gcrCredentials.
diffgramEdition: opencore

# Set this to your public domain where you want diffgram to be.
# This must be a domain name and not a public IP address.
# The chart will generate TLS certificates for the provided domain if useCertManager is 'true'
diffgramDomain: *HIDDEN*

# Set this to true if you want to use cert manager for TLS certificates generation.
useCertManager: false

dbSettings:
  # Specify How the DB Service should be created
  # - local: use a Postgres Image and Service (No external service) Recommended only for non-production environments.
  # - rds: use an ExternalService with an AWS RDS instance. If you set this you need to provide the rdsEndpoint field.
  # - azure: use an ExternalService with an Azure Postgres instance. If you set this you need to provide the azureSqlEndpoint field.
  # - gcsql: use an ExternalService with a Google Cloud SQL instance. If you set this you need to provide the gcSqlEndpoint field.
  dbProvider: local
  rdsEndpoint: none
  azureSqlEndpoint: none
  gcsqlEndpoint: none
  dbUser: *HIDDEN*
  dbName: *HIDDEN*
  dbPassword: *HIDDEN*
  # For the local postgres DB. Does not have effect on RDS or GCP services.
  storageAmount: 5Gi

# All the Secrets Used in Diffgram.
diffgramSecrets:
  STRIPE_API_KEY: none
  DIFFGRAM_AWS_ACCESS_KEY_ID: none
  DIFFGRAM_AWS_ACCESS_KEY_SECRET: none
  _ANALYTICS_WRITE_KEY: provided_by_diffgram_team
  MAILGUN_KEY: provided_by_diffgram_team
  HUB_SPOT_KEY: provided_by_diffgram_team
  SECRET_KEY: provided_by_diffgram_team
  INTER_SERVICE_SECRET: provided_by_diffgram_team
  # Use diffgram-postgres, postgres-rds-service depending on which DB service you set on dbSettings
  USER_PASSWORDS_SECRET: provided_by_diffgram_team
  # The service account JSON for GCP Static Storage Encoded in Base64.
  SERVICE_ACCOUNT_JSON_B64: put_your_gcp_secret_in_base_64_here
  DIFFGRAM_AZURE_CONNECTION_STRING: *HIDDEN*

diffgramSettings:
  USERDOMAIN: kubernetes
  DIFFGRAM_SYSTEM_MODE: production
  DIFFGRAM_STATIC_STORAGE_PROVIDER: azure
  DIFFGRAM_S3_BUCKET_NAME: none
  DIFFGRAM_AZURE_CONTAINER_NAME: diffgram-dev
  ML__DIFFGRAM_AZURE_CONTAINER_NAME: diffgram-dev
  ML__DIFFGRAM_S3_BUCKET_NAME: diffgram-testing
  CLOUD_STORAGE_BUCKET: diffgram-testing
  ML__CLOUD_STORAGE_BUCKET: diffgram-testing
  GOOGLE_APPLICATION_CREDENTIALS: /etc/gcp/sa_credentials.json # Check the volume in deployment.yaml and service_account_secret.yaml
  SERVICE_ACCOUNT_FULL_PATH: /etc/gcp/sa_credentials.json
  
  # Set this value if you want to use GCP as your storage. Put your json service account encoded in base 64
  SERVICE_ACCOUNT_JSON_B64: none

  SERVICE_ACCOUNT: sa_credentials.json

imagePullCredentials:
  # The service account with permissions to pull from the GCR Repository. [Should be Provided by Diffgram Team.]
  gcrCredentials: provided_by_diffgram_team

# The service for API calls.
# These are minimal defaults. Please feel free to change them as you start having more usage
defaultService:
  numReplicas: 1
  requests:
    cpu: "2.0"
    memory: "2G"
  limits:
    cpu: "2.0"
    memory: "2G"
# The service for the UI frontend.
# These are minimal defaults. Please feel free to change them as you start having more usage
frontendService:
  numReplicas: 1
  requests:
    cpu: "1.0"
    memory: "2G"
  limits:
    cpu: "1.0"
    memory: "2G"
# The service for video processing. This is where the heavy processing takes place.
# These are minimal defaults. Please feel free to change them as you start having more usage
walrusService:
  numReplicas: 1
  requests:
    cpu: "1.0"
    memory: "2G"
  limits:
    cpu: "1.0"
    memory: "2G"

The logs of the container (the error is the same on both):

[2021-06-25 10:49:00 +0000] [8] [INFO] Starting gunicorn 20.0.4
[2021-06-25 10:49:00 +0000] [8] [INFO] Listening at: http://0.0.0.0:8080 (8)
[2021-06-25 10:49:00 +0000] [8] [INFO] Using worker: sync
[2021-06-25 10:49:00 +0000] [11] [INFO] Booting worker with pid: 11
[2021-06-25 10:49:00 +0000] [12] [INFO] Booting worker with pid: 12
[2021-06-25 10:49:00 +0000] [13] [INFO] Booting worker with pid: 13
[2021-06-25 10:49:00 +0000] [14] [INFO] Booting worker with pid: 14
[2021-06-25 10:49:01 +0000] [15] [INFO] Booting worker with pid: 15
[2021-06-25 10:49:05 +0000] [11] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python3.6/dist-packages/gunicorn/workers/base.py", line 119, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python3.6/dist-packages/gunicorn/workers/base.py", line 144, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/wsgiapp.py", line 49, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python3.6/dist-packages/gunicorn/util.py", line 358, in import_app
    mod = importlib.import_module(module)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/app/main.py", line 37, in <module>
    import shared.database_setup_supporting
  File "/app/shared/database_setup_supporting.py", line 2, in <module>
    from shared.database.discussion.discussion_comment import DiscussionComment
  File "/app/shared/database/discussion/discussion_comment.py", line 2, in <module>
    from shared.database.common import *
  File "/app/shared/database/common.py", line 41, in <module>
    from shared import data_tools_core
  File "/app/shared/data_tools_core.py", line 3, in <module>
    from shared.data_tools_core_gcp import DataToolsGCP
  File "/app/shared/data_tools_core_gcp.py", line 15, in <module>
    logger = get_shared_logger()
  File "/app/shared/shared_logger.py", line 8, in get_shared_logger
    shared_abstract_logger.configure_concrete_logger(system_mode=settings.DIFFGRAM_SYSTEM_MODE)
  File "/app/shared/utils/logging.py", line 53, in configure_concrete_logger
    self.logger = self.configure_gcp_logger()
  File "/app/shared/utils/logging.py", line 72, in configure_gcp_logger
    logging_client = gcp_logging.Client()
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/logging/client.py", line 127, in __init__
    project=project, credentials=credentials, _http=_http
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/client.py", line 318, in __init__
    _ClientProjectMixin.__init__(self, project=project, credentials=credentials)
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/client.py", line 266, in __init__
    project = self._determine_default(project)
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/client.py", line 285, in _determine_default
    return _determine_default_project(project)
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/_helpers.py", line 186, in _determine_default_project
    _, project = google.auth.default()
  File "/usr/local/lib/python3.6/dist-packages/google/auth/_default.py", line 454, in default
    credentials, project_id = checker()
  File "/usr/local/lib/python3.6/dist-packages/google/auth/_default.py", line 222, in _get_explicit_environ_credentials
    os.environ[environment_vars.CREDENTIALS]
  File "/usr/local/lib/python3.6/dist-packages/google/auth/_default.py", line 108, in load_credentials_from_file
    "File {} was not found.".format(filename)
google.auth.exceptions.DefaultCredentialsError: File /etc/gcp/sa_credentials.json was not found.
No module named 'shared.settings.secrets'
[2021-06-25 10:49:05 +0000] [11] [INFO] Worker exiting (pid: 11)
PROCESS_MEDIA_TRY_BLOCK_ON True
PROCESS_MEDIA_REMOTE_QUEUE_ON True
PROCESS_MEDIA_ENQUEUE_LOCALLY_IMMEDIATELY False
DIFFGRAM_SYSTEM_MODE DIFFGRAM_SYSTEM_MODE  production
DATABASE_URL DATABASE_URL  True

I specified the variable SERVICE_ACCOUNT_FULL_PATH with a dummy path since that is recommended in issue #68. Also, I followed this tutorial: https://medium.com/diffgram/tutorial-installing-diffgram-on-azure-aks-b9447685e271

Appreciate any help.
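
For reference, the traceback shows shared/data_tools_core.py importing DataToolsGCP unconditionally, which is what forces GCP credentials even on Azure installs. A sketch of a provider guard (the Azure module name below is an assumption):

import os

provider = os.environ.get("DIFFGRAM_STATIC_STORAGE_PROVIDER", "gcp")

if provider == "gcp":
    from shared.data_tools_core_gcp import DataToolsGCP  # requires GCP credentials
    data_tools = DataToolsGCP()
elif provider == "azure":
    from shared.data_tools_core_azure import DataToolsAzure  # assumed module name
    data_tools = DataToolsAzure()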

Default report templates

Create default report templates for a newly created database automatically

Context

  1. We have reports on Diffgram.com. These are created as database objects.
  2. In open source, these are not created by default

Steps

  1. Determine a good place to put them, e.g. perhaps in install.py, the database setup, etc.

Change The DB Creation to use Alembic in the Testing context.

We are still using setup_supporting.py to create the tables for unit tests and e2e tests. We need to call Alembic here to be consistent with the schema in both contexts, and to surface schema errors during unit tests if there are any problems with Alembic.
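
A minimal sketch of invoking Alembic from the test setup (assuming a repo-root alembic.ini; the config path may differ):

from alembic import command
from alembic.config import Config

def migrate_test_db(database_url: str):
    cfg = Config("alembic.ini")  # assumed location of the Alembic config
    cfg.set_main_option("sqlalchemy.url", database_url)
    command.upgrade(cfg, "head")  # apply all migrations, same path as production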

getting started -- docker noob

Hi. I have what I imagine is a very basic question for many/all of you reading this.

I am on Windows 10. I have WSL2 running, using Ubuntu 18.04. I installed Docker and made sure of the appropriate WSL2-related settings.

However, I'm not sure what to do next! Lol. Where do I paste the code you give for installing, found on the README page?

Any help would be so appreciated. I've searched and am coming up empty + confused.

Deprecate FPS Conversion Rate Feature

This feature was developed earlier, and with time we've realized it is not adding much value to the system while adding some unnecessary complexity.

So the main idea would be to remove the logic related to the fps_conversion_ratio value in video.py and leave it at its default.

Example related files:

if fps == 0 or fps == original_fps:

v-model="project.settings_input_video_fps"

We can discuss re-adding it if there is a need in the future, but most likely we won't see the need for this any time soon.

Task Template cards with long names create text overflow.


The main issue is when we add a word that is super long and has no spaces. Usually this comes up when people name their task template with underscores (super_long_name_with_no_spaces). That will cause the card text to overflow.

Deferring a task is adding to the completed count.

On this function:

We are deferring the task but for some reason adding to the completed count. I think this should be changed, as it can cause confusion in the completion statistics.

Also, in the function archive_related_tasks: when we archive a task that is complete, we should also update the overall completion count by calling refresh_stat_count_tasks.

Same thing with change_status:

def change_status(self):

We can be archiving a completed task there too, so the task count needs to be refreshed as well; see the sketch below.
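
A minimal sketch of the archive-side fix (the task/job object shape and call signature are assumptions; refresh_stat_count_tasks is the method named in this issue):

def archive_task(task, session):
    """Archive a task; if it was complete, keep the completed count in sync."""
    was_complete = (task.status == "complete")
    task.archived = True
    if was_complete:
        task.job.refresh_stat_count_tasks(session = session)  # assumed call shape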

Option to create labels automatically at magic import stage

Context: a user may desire to have all the new labels created automatically.

This is an option because the validation at this step is still valuable.

For example, maybe it could say:
"We discovered X new labels (show a list of them). Would you like to automatically create them? (yes/no)"
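
A minimal sketch of that flow (label names and the create call are illustrative placeholders):

def diff_labels(imported_labels: set, existing_labels: set) -> set:
    return imported_labels - existing_labels

new_labels = diff_labels({"car", "truck", "bus"}, {"car"})
if new_labels:
    print(f"We discovered {len(new_labels)} new labels: {sorted(new_labels)}")
    answer = input("Would you like to automatically create them? (yes/no) ")
    if answer.strip().lower() == "yes":
        for name in sorted(new_labels):
            print(f"creating label: {name}")  # placeholder for the create-label API call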
