l1-node's Introduction

Saturn L1 Node 🪐

Saturn L1 nodes are CDN edge caches in the outermost layer of the Filecoin Saturn network. L1 nodes serve CAR files to retrieval clients as requested by their CIDs. Cache misses are served by the IPFS Network and Filecoin Storage Providers.

Saturn is live. Do you have a server that meets the minimum hardware requirements? If so, follow the setup instructions below to get started. You can run an L1 node, contribute bandwidth to the network, and earn Filecoin (FIL) today.

Beyond running a node, Saturn is a community-run project and we'd love for you to get involved. Come say hi in #filecoin-saturn on Filecoin Slack! 👋


Requirements

General requirements

  • Filecoin wallet address
  • Email address

Node hardware requirements

  • Linux server with a static public IPv4 address in a unique /24 CIDR block
  • Root access / passwordless sudo user (How to)
  • Ports 80 and 443 free and public to the internet
  • Docker installed (Instructions here) with Docker Compose v2
  • CPU with 6 cores (12+ cores recommended). CPU Mark of 8,000+ (20,000+ recommended)
  • 10Gbps upload link minimum¹ (Why 10Gbps?)
  • 32GB RAM minimum (128GB+ recommended)
  • 4TB SSD storage minimum (16TB+ NVMe recommended)²

Only one node per physical host is allowed. If you want to run multiple nodes, you need to run them on dedicated hardware.
Multi-noding (sharing CPU, RAM, uplink, or storage among nodes) is not allowed.

¹ The more you can serve → greater FIL earnings
² Bigger disk → bigger cache → greater FIL earnings
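
A rough way to sanity-check a server against these minimums from the shell (a sketch only; it is not an authoritative check):

    nproc                                          # CPU core count (6 minimum, 12+ recommended)
    free -g | awk '/^Mem:/ {print $2 " GB RAM"}'   # 32 GB minimum, 128 GB+ recommended
    df -h ${SATURN_HOME:-$HOME}                    # storage backing the Saturn volume (4 TB minimum)
    # Upload capacity can be measured with the speedtest CLI (see the speedtest note in the setup section).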

Set up a node

If you want to switch your node from test net to main net, or vice versa, see Switch networks below.

  1. Install

  2. Change directory to $SATURN_HOME (default: $HOME) to download the required files

    cd ${SATURN_HOME:-$HOME}
  3. Download the .env file:

    curl -s https://raw.githubusercontent.com/filecoin-saturn/L1-node/main/.env -o .env

    Then edit it with the text editor of your choice (e.g. nano or vim on Linux). Note: the .env file does not take precedence over env variables set locally.

    • Set the mandatory FIL_WALLET_ADDRESS and NODE_OPERATOR_EMAIL environment variables in the .env file.
    • Set the SATURN_NETWORK environment variable in the .env file.
      • To join Saturn's Main network and earn FIL rewards, make sure to set SATURN_NETWORK to main.
      • To join Saturn's Test network, which doesn't earn FIL rewards, set SATURN_NETWORK to test. Note that this is the default value!
    • By default, the Saturn volume is mounted from $HOME. This can be changed by setting the $SATURN_HOME environment variable.

    An example .env is shown after these steps.

  4. Download the docker-compose.yml file:

    curl -s https://raw.githubusercontent.com/filecoin-saturn/L1-node/main/docker-compose.yml -o docker-compose.yml
  5. Download the docker_compose_update.sh script and make it executable:

    curl -s https://raw.githubusercontent.com/filecoin-saturn/L1-node/main/docker_compose_update.sh -o docker_compose_update.sh
    chmod +x docker_compose_update.sh
  6. Set up the cron job to auto-update the docker-compose.yml file:

    (crontab -l 2>/dev/null; echo "*/5 * * * * cd $SATURN_HOME && sh docker_compose_update.sh >> docker_compose_update.log 2>&1") | crontab -
  7. Launch it:

    sudo docker compose up -d
  8. Check logs with docker logs -f -n 100 saturn-node

  9. Check for any errors. Registration will happen automatically and the node will restart once it receives its TLS certificate

In most instances, speedtest picks a nearby, well-connected server, but for small networks it can pick a poor one. If the upload speed reported by speedtest seems low, you may want to set SPEEDTEST_SERVER_CONFIG to point to a different public speedtest server: install the speedtest CLI on the host, list the IDs of nearby servers with speedtest --servers, and then set SPEEDTEST_SERVER_CONFIG="--server-id=XXXXX" (see the sketch below).
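
For example (a sketch; XXXXX is a placeholder for the server ID you pick from the list, and where you set the variable depends on your setup, e.g. the .env file or your shell environment):

    speedtest --servers                          # list the IDs of nearby public speedtest servers
    # then pin the node's speedtest to a well-connected one:
    SPEEDTEST_SERVER_CONFIG="--server-id=XXXXX"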

Update a node

We are using a Watchtower container to update the saturn-node container. Your node will be updated automatically. You can see the update logs with docker logs -f saturn-watchtower. Make sure to set up the cron job to auto-update docker-compose.yml as well (see above).
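
A quick way to confirm the updater is in place (sketch):

    sudo docker ps --filter name=saturn-watchtower    # the watchtower container should be listed as Up
    sudo docker logs -f saturn-watchtower             # follow update activity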

Set up a node with Ansible

From here:

"Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates."

This playbook is meant as a bare-bones approach to setting up an L1. It simply automates running the steps described above. A consequence of this is that, when run, it will restart a crashed L1 node Docker container. It also includes a basic approach to server hardening, which is by no means thorough.

Note: The security of your servers is your responsibility. You should do your own research to ensure your server follows security best practices.

If you're looking for a playbook which covers server hardening, monitoring and logging please check out https://github.com/hyphacoop/ansible-saturn-l1.

Currently, this playbook runs on the following Linux distros:

  • Ubuntu
  • Debian
  • CentOS

These instructions are to be run on a machine with Ansible >= 2.14 installed. This machine is known as your control node, and it should not be the one running your L1 node.

Most commands are run as root and your ssh user should have root access on the target machine.

  1. Install the required Ansible modules:

    ansible-galaxy collection install community.docker
  2. Clone this repository and cd into it.

  3. For target host connectivity, ssh keys are recommended, and this playbook can help you with that.

    Note: Using the playbook for this is completely optional.

    1. Make sure you have configured ansible_user and ansible_ssh_pass for your target host in your inventory file. See more here.
    2. Set up an authorized_keys file with your public ssh keys in the cloned repository root.
    3. Run ansible-playbook -i <path_to_your_inventory> -l <host_label> --skip-tags=config,harden,run playbooks/l1.yaml
  4. Ensure your control node has ssh access to your target machine(s).

    • Make sure to specify which hosts you want to provision in your inventory file.

    ansible -vvv -i <path_to_your_inventory> <host_label> -m ping
  5. Replace the environment variable values where appropriate and export them.

    • If you are joining the Main network, set SATURN_NETWORK to main.
    • If you are switching networks, check Switch networks and rerun steps 5 and 6.
    • You can define a host-specific SATURN_HOME by setting a saturn_root variable for that host in your inventory file. See more here.

    export FIL_WALLET_ADDRESS=<your_fil_wallet_address>; export NODE_OPERATOR_EMAIL=<your_email>; export SATURN_NETWORK=test
  6. Run the playbook.

    • Feel free to use host labels to filter them or to deploy incrementally.
    • We're skipping the bootstrap play by default, as it deals with setting authorized ssh keys on the target machine. See step 3 for more info.

    ansible-playbook -i <path_to_your_inventory> -l <host_label> --skip-tags=bootstrap playbooks/l1.yaml

    • To skip the hardening step, run this instead:

    ansible-playbook -i <path_to_your_inventory> -l <host_label> --skip-tags=bootstrap,harden playbooks/l1.yaml

Stopping a node

To gracefully stop a node and not receive a penalty, run:

  sudo docker stop --time 1800 saturn-node

or if you are in your $SATURN_HOME folder with the Saturn docker-compose.yml file:

  sudo docker compose down

We are setting the stop_signal and the stop_grace_period in our Docker compose file to avoid issues. If you have a custom setup, make sure to send a SIGTERM to your node and wait at least 30 minutes for it to drain.
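
For example, in a custom (non-compose) setup, a rough equivalent is (a sketch, assuming the container is named saturn-node):

    sudo docker kill --signal=SIGTERM saturn-node    # start draining the node
    sudo docker wait saturn-node                     # block until it exits on its own (can take up to ~30 minutes)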

Switch networks between test net and main net

If you want to switch your node from Saturn's test network (aka test) to Saturn's main network (aka main), or vice versa, follow these steps (a combined shell sketch follows the list):

  1. Gracefully halt your node as explained in Stopping a node.
  2. Set the network environment variable SATURN_NETWORK to main, or test, in your $SATURN_HOME/.env file (default: $HOME/.env).
  3. Delete contents in $SATURN_HOME/shared/ssl (default: $HOME/shared/ssl).
  4. Start the node again with docker compose -f $SATURN_HOME/docker-compose.yml up -d.
  5. Check logs with docker logs -f saturn-node
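
A combined sketch of the switch, assuming the defaults and switching to main (edit the .env manually if you prefer not to use sed):

    sudo docker stop --time 1800 saturn-node
    sed -i 's/^SATURN_NETWORK=.*/SATURN_NETWORK=main/' ${SATURN_HOME:-$HOME}/.env
    rm -rf ${SATURN_HOME:-$HOME}/shared/ssl/*
    sudo docker compose -f ${SATURN_HOME:-$HOME}/docker-compose.yml up -d
    sudo docker logs -f saturn-node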

Node operator guide

For answers to common questions about operating a node, see the L1 node FAQ page.

Network Uptime Requirement

To be eligible for the end of month earnings on the Saturn Network, nodes must satisfy a minimum uptime requirement each month. Having an uptime requirement has the following benefits:

  • Addresses node churn on the network. Node churn reduces the performance and stability of the network. Having a minimum uptime requirement incentivizes nodes to remain online for longer periods.
  • Certain malicious nodes on the network run on compromised cloud accounts. Those usually have a lifetime of ~3 days. The uptime requirement ensures that those nodes do not receive earnings.

The current uptime requirement is as follows:

  • For a node operator to qualify for earnings at the end of the month, their node must be online for at least 14 days during the month. It is important to note that earnings for nodes STILL get calculated before the node satisfies the uptime requirement.
  • When a node first registers during a given month, its uptime requirement check will be postponed to the following month to give each node a fair opportunity to meet the uptime requirement. Its earnings will remain pending until it has satisfied the uptime requirement. If the node satisfies the uptime requirement during the same month it registers, it will be eligible to receive earnings for that month.
  • Nodes that do not satisfy the uptime requirement in the months after their registration month will forfeit their earnings for the given month. Their forfeited earnings will be added to the network reward pool for the following month.
  • The uptime requirement must be satisfied on a monthly basis after the registration month is over.

Note: Beginning July 1st, 2023, Saturn's monthly node uptime requirement increased from 7 days to 14 days. This means that for a node to qualify for earnings on August 1st, 2023, that node must have been online and operational for at least 14 contiguous days in July.

Read more about Saturn's node uptime requirement in the docs, here.

Obtaining a Filecoin wallet address

You need to own a Filecoin wallet to receive FIL payouts.

  • Official Filecoin wallet documentation

  • If you have an account on a Centralized Exchange (Coinbase, Binance, etc.) that supports Filecoin, go through the steps to deposit Filecoin and you'll be given a wallet address. This is recommended if you don't want to manage your wallet's seed phrase.

  • Web wallets

  • Desktop wallets

  • Mobile wallets

⚠️ Please follow crypto wallet best practices. Never share your seed phrase with anyone or enter it into websites. The Saturn team will never DM you or ask you to verify/validate/upgrade your wallet. If you need assistance, please ask in public channels such as the #filecoin-saturn Slack.

Claiming your earnings

Each month, your node's earnings, in FIL, are calculated by the network based on various factors such as the amount of bandwidth it served, the number of requests it handled, its performance metrics like TTFB and upload speed, and its availability and uptime. These earnings are then sent to a payout FVM smart contract by the 7th day of the following month. For example, earnings for December 2022 would be transferred to a payout smart contract by January 7th, 2023.

After your node's earnings are in the payout FVM smart contract, you can claim them on payouts.saturn.tech. Claiming your earnings moves the Filecoin your node(s) earned from the smart contract to your personal Filecoin wallet.

Node monitoring

Community Tools

These Saturn tools are maintained by community members outside the Saturn core team.

  • https://github.com/31z4/saturn-moonlet - Self-hosted Saturn monitoring using Prometheus and Grafana. View detailed, real-time and historical data on your nodes and earnings, setup alerts, and more.

License

Dual-licensed under MIT + Apache 2.0

l1-node's People

Contributors

31z4, ameanasad, anomalroil, criadoperez, diegorbaquero, enoldev, gruns, guanzo, hannahhoward, holdenk, joaosa, juliangruber, kwypchlo, kylehuntsman, misilva73, omahs, rvagg, tchardin, up_the_irons, vorburger, willscott


l1-node's Issues

Remove express

L1 doesn't need a full web framework. Express should be removed.

New IP Geolocation for IP

ip: '45.141.104.45',
loc: '52.092058, 5.119483',
org: 'AS57866 Fusix Networks B.V.',
city: 'Utrecht',
postal: '3512 JE',
region: 'Utrecht',
country: 'Netherlands',
timezone: 'Europe/Amsterdam',
countryCode: 'NL'
ip: '45.141.104.46',
loc: '52.092058, 5.119483',
org: 'AS57866 Fusix Networks B.V.',
city: 'Utrecht',
postal: '3512 JE',
region: 'Utrecht',
country: 'Netherlands',
timezone: 'Europe/Amsterdam',
countryCode: 'NL'

Geolocation has changed to wrong location

Been running my node since Sunrise and it has always reported to San Diego, but now it has switched to Irvine, which is not really close.

6fadfadc 🌅️ L1 v465 38.xxx.xxx.236 Cogent Communications, Irvine, United States (US)

implement http3

http3 could be a huge, quick bang for our buck to significantly improve ttfb without adding more nodes/PoPs

adding http3 support should be broken into three pieces:

  • do a quick benchmark of nginx's tcp+tls vs http3. for example, fire up an aws ec2 instance far away and run two web servers there, one standard tcp+tls with nginx and the other with quic/http3. then benchmark how long it takes to establish connections to each of those web servers in a browser that supports http3, like chrome (https://caniuse.com/http3)

    http3 should be much faster. but is it? this can also serve as a quick test-bed of the variegated http3 tools, libraries, and software that exist right now. see below (a rough curl timing sketch also follows this list)

    ping @DiegoRBaquero to get a production ssl cert to use

  • understand the architecture of the l1 and investigate the http3 landscape to determine the best tool, library, or software to add http3 support to the l1. various tools:

    discuss viable avenues for http3 implementation with @gruns and @DiegoRBaquero

    note here that, for simplicity, any non-nginx http3 implementation will likely be replaced with nginx's native http3 support once ready, so long as nginx's implementation suffices. so we shouldn't get too crazy adding http3 support wrt the amount of time, effort, or complexity undertaken here

  • once http3 has shown itself superior and we've determined the implementation battleplan, add http3 support to the l1 node and ship it, baby 🚀
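
a rough starting point for the benchmark in the first bullet, assuming a curl build with HTTP/3 enabled and a test endpoint you control (test-l1.example.com is a placeholder):

    # compare connection + TLS + TTFB timings over HTTP/2 (TCP+TLS) vs HTTP/3 (QUIC)
    curl --http2 -so /dev/null -w 'connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s\n' https://test-l1.example.com/
    curl --http3 -so /dev/null -w 'connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s\n' https://test-l1.example.com/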

Cluster Node processes

If the container has multiple CPUs, multiple Node processes should be spawned to handle proxied cache misses from nginx

Finally test bandwidth

Can bandwidth testing be left to the end, so that no additional traffic is wasted if the other hardware requirements are not met?
Because the Docker container restarts automatically, it would be too expensive and unnecessary to run a bandwidth test every time it starts.

// Excerpt from the stats flow: the speedtest runs on the initial pass,
// then its result is checked together with the other requirements.
if (NODE_VERSION !== DEV_VERSION && initial) {
  let speedtest
  try {
    speedtest = await getSpeedtest()
  } catch (err) {
    debug(`Error while performing speedtest: ${err.name} ${err.message}`)
  }
  Object.assign(stats, { speedtest })
}

verifyRequirements(requirements, stats)

function verifyRequirements (requirements, stats) {
  const { minCPUCores, minMemoryGB, minUploadSpeedMbps, minDiskGB } = requirements

  if (stats.cpuStats.numCPUs < minCPUCores) {
    throw new Error(`Not enough CPU cores. Required: ${minCPUCores}, current: ${stats.cpuStats.numCPUs}`)
  }

  if (stats.memoryStats.totalMemory < minMemoryGB) {
    throw new Error(`Not enough memory. Required: ${minMemoryGB} GB, available: ${stats.memoryStats.totalMemory}`)
  }

  if (stats.speedtest?.upload.bandwidth < (minUploadSpeedMbps * 1_000_000 / 8)) {
    throw new Error(`Not enough upload speed. Required: ${minUploadSpeedMbps} Mbps, current: ${stats.speedtest.upload.bandwidth / 1_000_000 * 8} Mbps`)
  }

  if (stats.diskStats.totalDisk < minDiskGB) {
    throw new Error(`Not enough disk space. Required: ${minDiskGB} GB, available: ${stats.diskStats.totalDisk}`)
  }

  debug('All requirements met')
}

check and enforce hardware requirements on setup

on l1 node startup, check if the node satisfies the minimum hardware requirements. and, if not:

  • inform the user that their node needs to meet the minimum hardware requirements to participate in saturn
  • enumerate each minimum hardware requirement that wasn't met so they have immediate, actionable feedback on what hardware needs to be upgraded to meet the minimum hardware requirements

open question:

  • where/how are the hardware requirements stored? are they baked into the l1? or retrieved, like in json form, from the orchestrator?

Trying to deploy using docker-compose and crash due to SIGCHLD Error in NGINX?!

My docker-compose.yml is as follows. After starting it (testing on normal docker, not swarm) I'm getting this error:

filecoin-saturn1        | 2022-11-14T15:02:32.111Z node-"main":server shim process running
filecoin-saturn1        | 2022-11-14T15:02:32.610Z node-"main":log-ingestor Reading nginx log file
filecoin-saturn1        | 2022/11/14 15:03:31 [notice] 43#43: http file cache: /usr/src/app/shared/nginx_cache 0.000M, bsize: 4096
filecoin-saturn1        | 2022/11/14 15:03:32 [notice] 28#28: signal 17 (SIGCHLD) received from 43
filecoin-saturn1        | 2022/11/14 15:03:32 [notice] 28#28: cache loader process 43 exited with code 0
filecoin-saturn1        | 2022/11/14 15:03:32 [notice] 28#28: signal 29 (SIGIO) received

environment:
NODE_EMAIL and WALLET_FIL are set using docker-env.

version: '3.5'

volumes:
  filecoin-saturn-node-storage:
    external: true
    name: filecoin-saturn-node-storage

services:
  filecoin-saturn-node:
    # Read this for Filecoin-Saturn Token
    # https://strn.network/#setupyournode
    # https://github.com/filecoin-saturn/L1-node#set-up-a-node
    image: ghcr.io/filecoin-saturn/l1-node:main
    container_name: filecoin-saturn
    restart: unless-stopped
    volumes:
      - filecoin-saturn-node-storage:/usr/src/app/shared
    ports:
      - 80:80/tcp
      - 443:443/tcp
    ulimits:
      nofile:
        soft: "1000000"
        hard: "1000000"
    environment:
      - SATURN_NETWORK="main"
      - ORCHESTRATOR_REGISTRATION="true"
      - IPFS_GATEWAY_ORIGIN="https://ipfs.io"
      - NODE_OPERATOR_EMAIL="${NODE_EMAIL}"
      - FIL_WALLET_ADDRESS="${WALLET_FIL}"

re-register in interval

In case, for whatever reason, the orchestrator fails to connect for a second and the gateway gets removed, the gateway should re-register on an interval to get re-added

`update.sh` should revert if the container fails to start

If the newly downloaded image fails to start, the updater script should find the latest working version and run that instead. Simply running the previous version won't suffice, as multiple images in a row may be broken. It should keep iterating previous versions until docker run succeeds.
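
a rough sketch of that rollback loop (not the actual update.sh; the docker run line is simplified, and a real invocation needs the node's usual ports, volumes, and env; CANDIDATE_TAGS is a hypothetical list of previously pulled image tags, newest first):

    for tag in $CANDIDATE_TAGS; do
      sudo docker rm -f saturn-node 2>/dev/null
      sudo docker run -d --name saturn-node "ghcr.io/filecoin-saturn/l1-node:$tag" || continue
      sleep 30
      if [ "$(sudo docker inspect -f '{{.State.Running}}' saturn-node)" = "true" ]; then
        echo "saturn-node is running image tag $tag"
        break
      fi
    done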

Avoid restart loop

  • Use restart on failure with a max
  • Change registration failures to a non-error exit so it doesn't restart

mirror grafana dashboard for test net

once http3 has landed in the test network (#42):

  • mirror the production grafana (https://protocollabs.grafana.net/d/_l58mGiVz/saturn-aggregated-metrics?orgId=1&refresh=1h)

  • mirror grafana in such a way as to avoid future copy+pasta. in other words, when we push a change to grafana, that change should be reflected in both the prod and test grafana dashboards without further user action

    what we want to avoid is manual copy+pasta+deploy to mirror prod and test with every change ad nauseam into the future

@AmeanAsad what's the best way to bundle the two dashboards together? ideally they both live under the same url just in different tabs. but because grafana doesn't have tabs, what's the next best thing?

what we want to avoid is two different grafana urls to remember, one for prod and one for test

add brotli compression support

brotli is widely supported in browsers (https://caniuse.com/brotli) and sufficiently superior to gzip that we should use it over gzip when supported

see https://docs.nginx.com/nginx/admin-guide/dynamic-modules/brotli/ and http://wiki.centos-webpanel.com/enabling-brotli-compression-on-nginx

there has to be some way to add brotli support without nginx plus... but if nginx plus keeps being an impediment to our various usage, i.e. death by a thousand cuts, perhaps it's worth migrating to another http server in the future (😢)

update script expects specific user configuration

the update script expects to be run as a user, non-interactively, with password-less sudo access.

That's not a default configuration on most linux installs, so some description of this expectation should be included in the install notes.

preferably we'd support saturn with non-root docker. is there a reason that the image currently needs root docker?
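
for reference, the usual way to grant that (a sketch; "saturn" is a hypothetical username, and sudoers changes should normally go through visudo):

    echo 'saturn ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/saturn   # passwordless sudo for the saturn user
    sudo chmod 0440 /etc/sudoers.d/saturn                                   # sudoers files must not be world-writable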

explain why 10gbps upload is required in the docs and/or faq

this comes up over and over again. best to address it head on

one idea:

we could/should also note that the 10gbps can be burstable; 10gbps dedicated isn't required

the things to note:

  1. many users around the world have 1gbps download. it only takes a handful of 1gbps users to saturate and monopolize a 1gbps upload server
  2. better to start high and lower than start low and raise. with the latter, we'd have to kick nodes from the network
  3. we're looking into adjusting server requirements based on geography, knowing 10gbps is not the same price around the world

if I (@gruns) have time, I'll whip up the FAQ answer and add it

Dashboard to view earnings

Node operators will want to see their earnings on a daily basis. Since they do not initially persist all the logs of the retrievals they serve, they will need to query the ledger or the log DB to see their earnings aggregated over a time period.

Add a flush limit to access_log

My node had not been committing retrieval logs, and I did a lot of testing to track this down.

After checking and testing, I finally determined that it was because of the buffer set by access_log. Could you please add a time limit to flush it to disk? Otherwise, logs may not be written to disk for a long time when traffic is low, and the buffered access logs will be lost once Docker restarts.
In the shared.conf:

access_log  /var/log/nginx/node-access.log node buffer=256k;

Add a flush limit, e.g.:

access_log  /var/log/nginx/node-access.log node buffer=256k flush=3m;

Replace update.sh with an external dependency

The less code we have to maintain, the less complexity we have to focus on and the better it will be in the long run.

This principle can be applied to update.sh which can be replaced by watchtower, for instance.

Since we can't have all nodes rebooting at the same time, as that would kill the network, we can use lifecycle hooks to wait a random time, as it is currently done here.

This guide offers a good way to get started. This is another good ref.

One should note that watchtower does not support rollbacks. Doing so would be ideal, as per this issue.

add nodeId as a response header

for now, saturn-pop like

saturn-pop: <nodeId>

we shouldn't use the X- prefix, e.g. no x-saturn-pop; that's deprecated. see https://www.rfc-editor.org/rfc/rfc6648

in the future, we'll add additional response headers. like cache hit/miss status. cloudflare does this:

$ curl -I "https://imgflip.com/i/6eqsf6" | grep -i status
cf-cache-status: HIT

of note for context: the ipfs gws already include a number of headers. ex:

$ curl -I "https://ipfs.io/ipfs/Qme7ss3ARVgxv6rXqVPiikMJ8u2NLgmgszg13pYrDKEoiu" | grep -i ipfs
x-ipfs-gateway-host: ipfs-bank16-sv15
x-ipfs-path: /ipfs/Qme7ss3ARVgxv6rXqVPiikMJ8u2NLgmgszg13pYrDKEoiu
x-ipfs-pop: ipfs-bank16-sv15
x-ipfs-lb-pop: gateway-bank1-sv15

ZeroSSL error

  node-test:tls Received status 400 with body: { success: false, message: 'ZeroSSL blocked at this moment. Please stop node and try again in a few hours.' } +0ms

investigate filled host logs to arrive at a solution

  1. /var/log/journal/* is systemd logs. what, exactly, is in those logs? normal humdrum stuff or errors of some kind we should fix?

depending on the answer above

  2. how should we help the node operator not fill their logs from running an l1? while host logs are the operator's responsibility, if saturn fills them up, helping them manage the host logs is our duty and responsibility
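
one possible mitigation to evaluate (a sketch, not a decision from this issue): inspect and cap journald's disk usage on the host

    journalctl --disk-usage                 # how much space the journal currently uses
    sudo journalctl --vacuum-size=2G        # prune archived journal files down to ~2 GB
    # or cap it persistently by setting SystemMaxUse=2G in /etc/systemd/journald.conf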

Disable orchestrator registration with a CLI flag

Currently the node exits if it can't register with the orchestrator.
It doesn't need to register when it's being developed or tested in isolation.

Something like node index.js --no-register, node index.js --register=false

add documentation beyond node setup to the readme for node operators

add documentation about running a node beyond the l1 setup instructions to the readme

added documentation should include:
