
Apache Fluo Muchos

Home Page: https://fluo.apache.org

License: Apache License 2.0

Topics: fluo, big-data, accumulo, aws, azure, ansible, hacktoberfest

fluo-muchos's Introduction

Muchos


Muchos automates setting up Apache Accumulo or Apache Fluo (and their dependencies) on a cluster.

Muchos makes it easy to launch a cluster in Amazon's EC2 or Microsoft Azure and deploy Accumulo or Fluo to it. Muchos enables developers to experiment with Accumulo or Fluo in a realistic, distributed environment. Muchos installs all software using tarball distributions, which makes it easy to experiment with the latest versions of Accumulo, Hadoop, Zookeeper, etc. without waiting for downstream packaging.

Muchos is not recommended at this time for production environments as it has no support for updating and upgrading dependencies. It also has a wipe command that is great for testing but dangerous for production environments.

Muchos is structured into two high level components:

  • Ansible scripts that install and configure Fluo and its dependencies on a cluster.
  • Python scripts that push the Ansible scripts from a local development machine to a cluster and run them. These Python scripts can also optionally launch a cluster in EC2 using boto or in Azure using Azure CLI.

Check out Uno for setting up Accumulo or Fluo on a single machine.

Requirements

Common

Muchos requires the following common components for installation and setup:

  • Python 3 with a virtual environment set up. Create a Python 3 environment and switch to it. (The CI tests use Python 3.9, but later versions should work as well. If you encounter problems, please file an issue.)
cd ~
python3.9 -m venv env
source env/bin/activate
  • ssh-agent installed, running, and configured for agent forwarding. Note that this may also require creating an SSH public/private key pair (a minimal example follows this list).
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
  • Git (current version).
  • Install the required Python libraries by executing the pip install -r lib/requirements.txt command.
  • Install common Ansible collections by executing the install-ansible-collections script.
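
If you do not already have an SSH key pair, a minimal sketch of creating one and loading it into the agent (assuming the default ~/.ssh/id_rsa path used above) is:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa   # Create a new key pair (choose a passphrase when prompted)
eval $(ssh-agent -s)                         # Start ssh-agent if it is not already running
ssh-add ~/.ssh/id_rsa                        # Load the key into the agent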

EC2

Muchos requires the following for EC2 installations:

  • awscli (version 2) & boto3 libraries - Install using pip3 install awscli2 boto3 --upgrade
  • Note: if using Ubuntu you may need to install botocore separately using pip3 install awscli boto3 botocore
  • An AWS account with your SSH public key uploaded. When you configure muchos.props, set key.name to the name of your key pair in AWS.
  • ~/.aws configured on your machine. It can be created manually or by running aws configure.
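
For example, a minimal sketch of preparing the AWS credentials used by Muchos:

aws configure                                # Prompts for access key, secret key, and default region; writes ~/.aws

Then, in conf/muchos.props, set key.name to the key pair you uploaded, e.g. key.name = my-ec2-key (my-ec2-key is a hypothetical name; use your own).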

Azure

Muchos requires the following for Azure installations:

  • Azure CLI must be installed, configured and authenticated to an Azure subscription. It is recommended to use Azure CLI 2.50 or later.
  • An Azure account with permissions to either use existing or create new Resource Groups, Virtual Networks, and Subnets.
  • A machine that can connect to Azure to securely deploy the cluster.
  • Install the Ansible collection for Azure, and associated prerequisites within the Python virtual environment, by executing the install-ansible-for-azure script.
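
A minimal sketch of authenticating the Azure CLI and selecting a subscription (the subscription ID below is a placeholder):

az login                                                              # Authenticate to Azure
az account set --subscription 00000000-0000-0000-0000-000000000000   # Placeholder subscription ID
az account show                                                       # Confirm the active subscription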

When running Muchos under Ubuntu 18.04, check out these tips.

Quickstart

The following commands will install Muchos, launch a cluster, and setup/run Accumulo:

git clone https://github.com/apache/fluo-muchos.git

cd fluo-muchos/
cp conf/muchos.props.example conf/muchos.props
vim conf/muchos.props                                   # Edit to configure Muchos cluster
./bin/muchos launch -c mycluster                        # Launches Muchos cluster in EC2 or Azure
./bin/muchos setup                                      # Set up cluster and start Accumulo

The launch command will create a cluster with the name specified in the command (e.g. 'mycluster'). The setup command can be run repeatedly to fix any failures and will not repeat successful operations.

After your cluster is launched, SSH to it using the following command:

./bin/muchos ssh

Run the following command to terminate your cluster. WARNING: All cluster data will be lost.

./bin/muchos terminate

Please continue reading for more detailed Muchos instructions.

Launching an EC2 cluster

Before launching a cluster, you will need to complete the requirements above, clone the Muchos repo, and create muchos.props. If you want to give others access to your cluster, add their public keys to a file named keys in your conf/ directory. During the setup of your cluster, this file will be appended on each node to the ~/.ssh/authorized_keys file for the user set by the cluster.username property.
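
For example, assuming a teammate's public key has been saved locally as teammate_id_rsa.pub (a hypothetical filename), it can be added like this:

cat teammate_id_rsa.pub >> conf/keys          # One public key per line; appended to authorized_keys on each node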

Configuring the AMI

You might also need to configure the aws_ami property in muchos.props. By default, Muchos uses a Fedora 35 image for EC2, and the aws_ami property is set to that Fedora 35 AMI in us-east-1. You will need to change this value if a newer image has been released or if you are running in a different region than us-east-1.
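
For example, assuming the property lives in the ec2 section of conf/muchos.props, the entry looks roughly like this (the AMI ID shown is a placeholder; substitute the ID of a Fedora image in your region):

[ec2]
aws_ami = ami-0123456789abcdef0               # Placeholder AMI ID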

Launching the cluster

After following the steps above, run the following command to launch an EC2 cluster called mycluster:

./bin/muchos launch -c mycluster

After your cluster has launched, you do not have to specify a cluster anymore using -c (unless you have multiple clusters running).

Run the following command to confirm that you can ssh to the leader node:

./bin/muchos ssh

You can check the status of the nodes using the EC2 Dashboard or by running the following command:

./bin/muchos status

Launching an Azure cluster

Before launching a cluster, you will need to complete the requirements for Azure above, clone the Muchos repo, and create your conf/muchos.props file by making a copy of the muchos.props example. If you want to give others access to your cluster, add their public keys to a file named keys in your conf/ directory. During the setup of your cluster, this file will be appended on each node to the ~/.ssh/authorized_keys file for the user set by the cluster.username property. You will also need to ensure you have authenticated to Azure and set the target subscription using the Azure CLI.

Muchos by default uses an AlmaLinux 9 image that is hosted in the Azure marketplace. The Azure Linux Agent is already pre-installed on the Azure Marketplace images and is typically available from the distribution's package repository.

Edit the values in the sections within muchos.props as described below (a sketch of the edited sections appears after these lists). Under the general section, edit the following values as per your configuration:

  • cluster_type = azure
  • cluster_user should be set to the name of the administrative user
  • proxy_hostname (optional) is the name of the machine which has access to the cluster VNET

Under the azure section, edit the following values as per your configuration:

  • azure_subscription_id to provide the Azure subscription GUID
  • resource_group to provide the resource-group name for the cluster deployment. A new resource group with this name will be created if it doesn't already exist
  • vnet to provide the name of the VNET that your cluster nodes should use. A new VNET with this name will be created if it doesn't already exist
  • subnet to provide a name for the subnet within which the cluster resources will be deployed
  • use_multiple_vmss allows you to configure VMs with different CPU, memory, and disk configurations for leaders and workers. To learn more about this feature, please refer to the doc.
  • azure_image_reference allows you to specify the Azure image SKU in the format shown below.
    offer|publisher|sku|version|image_id|
    Example: almalinux-x86_64|almalinux|9-gen2|latest||
    For more information on using other images, refer to Azure images.
  • azure_proxy_image_reference allows you to specify the Azure image SKU that will be used for the optional proxy machine. If this property is not specified, then the value of azure_image_reference will be used instead.
  • numnodes to change the cluster size in terms of number of nodes deployed
  • data_disk_count to specify how many persistent data disks are attached to each node and will be used by HDFS. If you would prefer to use ephemeral storage for Azure clusters, please follow these steps.
  • vm_sku to specify the VM size to use. You can choose from the available VM sizes.
  • use_adlsg2 to use Azure Data Lake Storage (ADLS) Gen2 as the datastore for Accumulo. See the ADLS Gen2 doc for how to set up ADLS Gen2 as the datastore for Accumulo.
  • az_oms_integration_needed to set up a Log Analytics workspace, dashboards, and Azure Monitor Workbooks. See the Azure docs on creating a Log Analytics workspace, creating and sharing dashboards, and Azure Monitor Workbooks.
  • az_use_app_insights to configure Azure Application Insights with your setup and activate the Application Insights Java agent for the manager and tablet servers. Customize applicationinsights.json to meet your needs before executing muchos setup.

Please refer to the muchos.props example for the full list of Azure-specific configurations - some of which have supplementary comments.

For Azure, the [nodes] section is auto-populated with the hostnames and their default roles.
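
Putting the above together, a sketch of the edited sections might look like the following (all values are illustrative placeholders; keep any properties not shown at their defaults from the example file):

[general]
cluster_type = azure
cluster_user = azureuser                      # Placeholder administrative user name
proxy_hostname = proxy                        # Optional; machine with access to the cluster VNET

[azure]
azure_subscription_id = 00000000-0000-0000-0000-000000000000
resource_group = my-muchos-rg                 # Created if it does not already exist
vnet = my-muchos-vnet                         # Created if it does not already exist
subnet = my-muchos-subnet
numnodes = 8                                  # Illustrative cluster size
data_disk_count = 3                           # Persistent data disks per node, used by HDFS
vm_sku = Standard_D8s_v3                      # Illustrative VM size
azure_image_reference = almalinux-x86_64|almalinux|9-gen2|latest||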

After following the steps above, run the following command to launch an Azure VMSS cluster called mycluster (where 'mycluster' is the name assigned to your cluster):

./bin/muchos launch -c mycluster                        # Launches Muchos cluster in Azure

Set up the cluster

Once your cluster is built in EC2 or Azure, the ./bin/muchos setup command will set up your cluster and start Hadoop, Zookeeper & Accumulo. It will download release tarballs of Fluo, Accumulo, Hadoop, etc. The versions of these tarballs are specified in muchos.props and can be changed if desired.

Optionally, Muchos can set up the cluster using an Accumulo or Fluo tarball that is placed in the conf/upload directory of Muchos. This option is only necessary if you want to use an unreleased version of Fluo or Accumulo. Before running the muchos setup command, you should confirm that the hash (typically SHA-512 or SHA-256) of your tarball matches what is set in conf/checksums. Run the command shasum -a 512 /path/to/tarball on your tarball to determine its hash. The entry in conf/checksums can optionally include the algorithm as a prefix. If the algorithm is not specified, Muchos will infer the algorithm based on the length of the hash. Currently Muchos supports using sha512 / sha384 / sha256 / sha224 / sha1 / md5 hashes for the checksum.
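
For example, assuming an unreleased Accumulo tarball named accumulo-2.1.4-SNAPSHOT-bin.tar.gz (a hypothetical filename) has been placed in conf/upload, its hash can be computed and compared against conf/checksums like this:

shasum -a 512 conf/upload/accumulo-2.1.4-SNAPSHOT-bin.tar.gz   # Compute the SHA-512 hash of the tarball
grep accumulo conf/checksums                                   # Compare against the existing (or newly added) entry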

The muchos setup command will install and start Accumulo, Hadoop, and Zookeeper. The optional services below will only be set up if configured in the [nodes] section of muchos.props:

  1. fluo - Fluo only needs to be installed and configured on a single node in your cluster as Fluo applications are run in YARN. If set as a service, muchos setup will install and partially configure Fluo but not start it. To finish setup, follow the steps in the 'Run a Fluo application' section below.

  2. metrics - The Metrics service installs and configures collectd, InfluxDB and Grafana. Cluster metrics are sent to InfluxDB using collectd and are viewable in Grafana. If Fluo is running, its metrics will also be viewable in Grafana.

  3. spark - If specified on a node, Apache Spark will be installed on all nodes and the Spark History server will be run on this node.

  4. mesosmaster - If specified, a Mesos master will be started on this node and Mesos slaves will be started on all worker nodes. The Mesos status page will be viewable at http://<MESOS_MASTER_NODE>:5050/. Marathon will also be started on this node and will be viewable at http://<MESOS_MASTER_NODE>:8080/.

  5. client - Used to specify a client node where no services are run but libraries are installed to run Accumulo/Hadoop clients.

  6. swarmmanager - Sets up Docker swarm with the manager on this node and joins all worker nodes to this swarm. When this is set, docker will be installed on all nodes of the cluster. It is recommended that the swarm manager is specified on a worker node as it runs docker containers. Check out Portainer if you want to run a management UI for your swarm cluster.

  7. elkserver - Sets up the Elasticsearch, Logstash, and Kibana stack. This allows logging data to be searched, analyzed, and visualized in real time.
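
As an illustration, a [nodes] section that assigns some of these optional services might look like the following sketch (hostnames and role assignments are placeholders; see muchos.props.example for the exact role names supported by your version):

[nodes]
leader1 = namenode,resourcemanager,accumulomaster,zookeeper
leader2 = metrics,fluo
worker1 = worker
worker2 = worker
worker3 = worker,swarmmanager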

If you run the muchos setup command and a failure occurs, you can repeat the command until setup completes. Any work that was successfully completed will not be repeated. While some setup steps can take over a minute, use ctrl-c to stop setup if it hangs for a long time. Just remember to run muchos setup again to finish setup.

Manage the cluster

The setup command is idempotent. It can be run again on a working cluster. It will not change the cluster if everything is configured and running correctly. If a process has stopped, the setup command will restart the process.

The ./bin/muchos wipe command can be used to wipe all data from the cluster and kill any running processes. After running the wipe command, run the setup command to start a fresh cluster.

If you set proxy_socks_port in your muchos.props, a SOCKS proxy will be created on that port when you use muchos ssh to connect to your cluster. If you add a proxy management tool to your browser and whitelist http://leader*, http://worker* and http://metrics* to redirect traffic to your proxy, you can view the monitoring & status pages below in your browser. Please note: the hosts in the URLs below match the configuration in [nodes] of muchos.props.example and may be different for your cluster.
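
A minimal sketch of enabling the proxy (the port number is an arbitrary example):

proxy_socks_port = 38585                      # In the general section of conf/muchos.props; any free local port works

After running ./bin/muchos ssh, point your browser's proxy tool at the SOCKS proxy on localhost:38585 to reach the cluster web UIs.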

Run a Fluo application

Running an example Fluo application like WebIndex, Phrasecount, or Stresso is easy with Muchos as it configures your shell with common environment variables. To run an example application, SSH to a node on the cluster where Fluo is installed and clone the example repo:

./bin/muchos ssh                      # SSH to cluster proxy node
ssh <node where Fluo is installed>    # The node where Fluo is installed is determined by the Muchos config
hub clone apache/fluo-examples        # Clone repo of Fluo example applications. Press enter for user/password.

Start the example application using its provided scripts. To show how simple this can be, commands to run the WebIndex application are shown below. Read the WebIndex README to learn more before running these commands.

cd fluo-examples/webindex
./bin/webindex init                   # Initialize and start webindex Fluo application
./bin/webindex getpaths 2015-18       # Retrieves CommonCrawl paths file for 2015-18 crawl
./bin/webindex load-s3 2015-18 0-9    # Load 10 files into Fluo in the 0-9 range of 2015-18 crawl
./bin/webindex ui                     # Runs the WebIndex UI

If you have your own application to run, you can follow the Fluo application instructions to configure, initialize, and start your application. To automate these steps, you can mimic the scripts of example Fluo applications above.

Customize your cluster

After ./bin/muchos setup is run, users can install additional software on the cluster using their own Ansible playbooks. In their own playbooks, users can reference any configuration in the Ansible inventory file at /etc/ansible/hosts which is set up by Muchos on the proxy node. The inventory file lists the hosts for services on the cluster such as the Zookeeper nodes, Namenode, Accumulo master, etc. It also has variables in the [all:vars] section that contain settings that may be useful in user playbooks. It is recommended that any user-defined Ansible playbooks should be managed in their own git repository (see mikewalch/muchos-custom for an example).
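
For example, once ./bin/muchos setup has completed, a quick way to exercise the generated inventory from the proxy node is shown below (the playbook filename is hypothetical; check /etc/ansible/hosts for the groups your cluster defines):

./bin/muchos ssh                              # SSH to the proxy node
ansible all -m ping                           # Ping every host in the default inventory at /etc/ansible/hosts
ansible-playbook ~/my-extra-software.yml      # Run your own playbook against the same inventory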

High-Availability (optional)

Additionally, Muchos can be configured to provide High-Availability for HDFS & Accumulo components. By default, this feature is off; however, it can be turned on by editing the following settings in muchos.props under the general section as shown below:

hdfs_ha = True                        # default is False
nameservice_id = muchoshacluster      # Logical name for the cluster, no special characters

Before enabling HA, it is strongly recommended that you read the Apache docs for HDFS HA & Accumulo HA.

Also, in the [nodes] section of muchos.props, ensure the journalnode and zkfc services are configured to run (see the sketch below).

When hdfs_ha is True, it also enables HA resource managers for YARN. To utilize this feature, specify resourcemanager for multiple leader nodes in the [nodes] section.
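
A sketch of the leader entries in [nodes] for an HA setup might look like the following (hostnames and role combinations are illustrative; consult muchos.props.example for the exact role names supported by your version):

leader1 = namenode,resourcemanager,zookeeper,journalnode,zkfc
leader2 = namenode,resourcemanager,zookeeper,journalnode,zkfc
leader3 = zookeeper,journalnode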

Terminating your cluster

If you launched your cluster, run the following command to terminate your cluster. WARNING - All data on your cluster will be lost:

./bin/muchos terminate

Automatic shutdown of clusters

With the default configuration, clusters will not shut down automatically after a delay, and the default shutdown behavior is to stop the nodes. If you would like your cluster to terminate after 8 hours, set the following configuration in muchos.props:

shutdown_delay_minutes = 480
shutdown_behavior = terminate

If you decide later to cancel the shutdown, run muchos cancel_shutdown.

Retrieving cluster configuration

The config command allows you to retrieve cluster configuration for your own scripts:

$ ./bin/muchos config -p leader.public.ip
10.10.10.10
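
For example, the output can be captured in a shell script to copy files to the cluster (the jar name and user are placeholders; use the user set by cluster.username):

LEADER_IP=$(./bin/muchos config -p leader.public.ip)   # Capture the leader's public IP
scp my-app.jar myuser@$LEADER_IP:                      # Copy a file to the leader node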

Contributions

We welcome contributions to the project. These notes should be helpful.

Powered by

Muchos is powered by the following projects:

  • boto - Python library used by muchos launch to start a cluster in AWS EC2.
  • ansible - Cluster management tool that is used by muchos setup to install, configure, and start Fluo, Accumulo, Hadoop, etc on an existing EC2 or bare metal cluster.
  • azure-cli - The Azure CLI is a command-line tool for managing Azure resources.
  • ansible-azure - Ansible includes a suite of modules for interacting with Azure Resource Manager.


fluo-muchos's Issues

Can not run stress test init map reduce job

Seeing the following when trying to run the stress test map reduce job. Not sure if this is a problem w/ the stress test or a misconfiguration of M/R.

15/02/24 20:37:05 WARN mapred.LocalJobRunner: job_local516094321_0002
java.lang.Exception: java.lang.RuntimeException: java.io.FileNotFoundException: file:/tmp/hadoop-ec2-user/mapred/local/1424810224594/splits.txt (No such file or directory)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: file:/tmp/hadoop-ec2-user/mapred/local/1424810224594/splits.txt (No such file or directory)
    at org.apache.accumulo.core.client.mapreduce.lib.partition.RangePartitioner.getPartition(RangePartitioner.java:55)
    at org.apache.accumulo.core.client.mapreduce.lib.partition.RangePartitioner.getPartition(RangePartitioner.java:43)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:712)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at io.fluo.stress.trie.Init$InitMapper.map(Init.java:81)
    at io.fluo.stress.trie.Init$InitMapper.map(Init.java:62)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: file:/tmp/hadoop-ec2-user/mapred/local/1424810224594/splits.txt (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:146)
    at java.io.FileInputStream.<init>(FileInputStream.java:101)
    at org.apache.accumulo.core.client.mapreduce.lib.partition.RangePartitioner.getCutPoints(RangePartitioner.java:92)
    at org.apache.accumulo.core.client.mapreduce.lib.partition.RangePartitioner.getPartition(RangePartitioner.java:53)
    ... 15 more

Put fluo-cluster stuff in single dir on cluster

After running fluo-deploy setup and ssh'ing into the cluster, the install dir looks like the following. Thinking it would be nice to put fluo-deploy dirs in a fluo-deploy dir.

[ec2-user@leader install]$ ls
accumulo-1.6.1  bin  conf  data  fluo-1.0.0-beta-1-SNAPSHOT  hadoop-2.6.0  zookeeper-3.4.6

Automate the set up of Graphite

It would be nice if the fluo-ec2 script could set up graphite and have Fluo workers automatically send metrics to it. It is very helpful to have Graphite set up when testing.

Avoid uploading Fluo distribution to EC2 cluster

While users should still have the option of uploading a Fluo distribution from their machine to the EC2 cluster, fluo-deploy should default to downloading the distribution tarball from maven central (using the version specified by users). If a snapshot is specified, the latest snapshot distribution could be downloaded using a maven command to grab the snapshot.

Network errors while running Spark application using fluo-deploy on AWS

While running the init Spark application in WebIndex on AWS using fluo-deploy, I lost an executor which had the following error repeated many times:

ERROR org.apache.spark.network.server.TransportRequestHandler: Error sending result ChunkFetchSuccess

It looks like this might be fixed by setting spark.shuffle.blockTransferService to nio in spark-defaults.conf.

For more info, see http://stackoverflow.com/questions/29781489/apache-spark-network-errors-between-executors

Support Accumulo snapshot versions

These scripts could be used for Accumulo testing. To do this, snapshot versions of Accumulo would need to be supported.

A possible way to do this is to expect the accumulo tarball to be in the tarball dir if the accumulo version contains snapshot.

Check test script return codes

fluo-deploy test should check the return codes of the user test scripts that it calls. If a non-zero return code is encountered, it should stop.

Refer to 'leader' node as 'proxy' node

When the fluo-deploy script was first created, it referred to the node that directs the installation of the cluster as the leader. This worked well until the leader was split into three nodes: leader1, leader2, & leader3. It would be nicer to change the naming to proxy node in the code, configuration, and documentation.

Add leader SSH retry loop to fluo-deploy script

There are two places in the fluo-deploy script where an SSH connection is made to the leader, but the leader may not be up and running yet. At these points in the script, an SSH retry loop should be added to confirm that the leader is up and running before continuing.

Organize configuration files and templates

Currently, all configuration files and templates are in fluo-cluster/conf. It would be nice to separate them by component and create the following structure:

fluo-cluster/conf/hadoop
fluo-cluster/conf/accumulo
fluo-cluster/conf/zookeeper
fluo-cluster/conf/fluo
fluo-cluster/conf/graphite

Redirect test output and nohup test

When a test is run w/ fluo-deploy, any output from the test comes back over ssh. It would be better to redirect this output to out and err files and run the process w/ nohup. We do not want the test to stop if the ssh connection goes down.

support multiple zookeeper servers

Tried running multiple zookeeper servers and it did not start a quorum.

Something like the following may be needed in zoo.cfg. An id file may also be needed in the data dir.

server.1=<zk server node 1>:2888:3888
server.2=<zk server node 2>:2888:3888
server.3=<zk server node 3>:2888:3000

Support two instance types

fluo-deploy currently supports an arbitrary number of instance types. As discussed in #13, having instance types with different numbers of ephemeral drives is problematic for datanodes. The intent behind supporting multiple instance types is to have different instance types for worker nodes and coordinator nodes. If the scripts and configs were aligned with this intent, it would simplify the schism introduced in #13 between the possibility of many instance types and ephemeral storage.

Enable usage of hostnames in Hadoop core-site.xml & yarn-site.xml

I tried to change the configuration in yarn-site.xml & core-site.xml to use hostnames for the NameNode & ResourceManager. However, these hostnames are not resolving correctly in Java all of the time. This is causing the fluo yarn start command to fail with the following error:

Caused by: java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "leader2":8032; java.net.UnknownHostException; For more details see:  http://wiki.apache.org/hadoop/UnknownHost

To prevent this issue, yarn-site.xml & core-site.xml use IP addresses for now.

A long-term fix will probably require fixing DNS lookups on the EC2 cluster by fixing /etc/resolv.conf.

Create 'config' command to retrieve cluster configuration

I would like to script running the stress test using fluo-deploy. I need a way to scp jars to the cluster. If fluo-deploy provided an easy way to get the leader IP, that would make this easier. Maybe something like the following:

LEADER_IP=`fluo-deploy leader-ip`
scp fluo-stress.jar ec2-user@$LEADER_IP:

Improve for Accumulo testing

Was using fluo-deploy to test Accumulo 1.7.0 RCs. Need to make the following improvements

  • If local accumulo file exists, just use it (not just snapshot files)
  • Need to write accumulo logs to ephemeral storage or change logging config. Default config filled up root drive.

Append user supplied keys file to authorized_keys file on all hosts

When an EC2 cluster is launched, each instance is loaded with the key of the user that launched the cluster. This key is set by key.name in fluo-deploy.props. If a user wants another developer to look at their cluster, they will need to add the developer's key to every node in the cluster. It would be better if users could specify a keys file that is automatically appended to the authorized_keys file of every cluster that they launch using fluo-deploy.

Install maven and git

Having fluo-deploy install maven and git on the cluster would make it easier to write scripts that automatically clone, build, and run fluo applications.

Yarn application logs not in usual place

Noticed that yarn application logs were ending up directly under the hadoop dir for some reason. For example: /home/ec2-user/install/hadoop-2.6.0/application_1424895718075_0001
