
Code for our paper: Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation

Home Page: https://nsidn98.github.io/InforMARL/

License: MIT License

Python 97.16% Shell 2.84%
graph-neural-networks graphneuralnetwork multiagent-reinforcement-learning navigation reinforcement-learning reinforcement-learning-algorithms reinforcement-learning-environments icml-2023

informarl's Introduction

InforMARL

Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation

Project Status: Active – The project has reached a stable, usable state and is being actively developed. License: MIT

A graph neural network framework for multi-agent reinforcement learning with limited local observability for each agent. This is an official implementation of the model described in:

"Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation",

Siddharth Nayak, Kenneth Choi, Wenqi Ding, Sydney Dolan, Karthik Gopalakrishnan, Hamsa Balakrishnan

April 2023 - The paper was accepted to ICML 2023! See you in Honolulu in July 2023.

Dec 2022 - Presented a short version of this paper at the Strategic Multi-Agent Interactions: Game Theory for Robot Learning and Decision Making Workshop at CoRL in Auckland. You can find the recording here.

Please let us know if anything here is not working as expected, and feel free to create new issues with any questions.

Abstract:

We consider the problem of multi-agent navigation and collision avoidance when observations are limited to the local neighborhood of each agent. We propose InforMARL, a novel architecture for multi-agent reinforcement learning (MARL) which uses local information intelligently to compute paths for all the agents in a decentralized manner. Specifically, InforMARL aggregates information about the local neighborhood of agents for both the actor and the critic using a graph neural network and can be used in conjunction with any standard MARL algorithm. We show that (1) in training, InforMARL has better sample efficiency and performance than baseline approaches, despite using less information, and (2) in testing, it scales well to environments with arbitrary numbers of agents and obstacles.


Overview of InforMARL: (i) Environment: The agents are depicted by green circles, the goals are depicted by red rectangles, and the unknown obstacles are depicted by gray circles. $x^{i}_{\mathrm{agg}}$ represents the aggregated information from the neighborhood, which is the output of the GNN. A graph is created by connecting entities within the sensing radius of the agents. (ii) Information Aggregation: Each agent's observation is concatenated with $x^{i}_{\mathrm{agg}}$. The inter-agent edges are bidirectional, while the edges between agents and non-agent entities are unidirectional. (iii) Graph Information Aggregation: The aggregated vector from all the agents is averaged to get $X_{\mathrm{agg}}$. (iv) Actor-Critic: The concatenated vector $[o^{i}, x^{i}_{\mathrm{agg}}]$ is fed into the actor network to get the action, and $X_{\mathrm{agg}}$ is fed into the critic network to get the state-action values.
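
For readers who want to map this description onto code, below is a minimal, self-contained sketch of the data flow only, not the repository's actual model: the GNN is replaced by a simple masked mean over neighbors, and the class name, layer sizes, and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class ToyInforMARL(nn.Module):
    """Illustrative sketch of the InforMARL data flow (not the actual model):
    neighborhood aggregation followed by actor and critic heads."""

    def __init__(self, obs_dim: int, node_dim: int, hid: int = 64, act_dim: int = 5):
        super().__init__()
        self.embed = nn.Linear(node_dim, hid)  # per-node embedding (stand-in for the GNN)
        self.actor = nn.Sequential(
            nn.Linear(obs_dim + hid, hid), nn.ReLU(), nn.Linear(hid, act_dim)
        )
        self.critic = nn.Sequential(
            nn.Linear(hid, hid), nn.ReLU(), nn.Linear(hid, 1)
        )

    def forward(self, obs, node_feats, adj):
        # obs:        (n_agents, obs_dim)            local observations o^i
        # node_feats: (n_agents, n_nodes, node_dim)  node features of agent i's graph
        # adj:        (n_agents, n_nodes)            1 if the node is within agent i's sensing radius
        h = self.embed(node_feats)                            # embed every node
        mask = adj.unsqueeze(-1)                              # zero out non-neighbors
        x_agg = (h * mask).sum(1) / mask.sum(1).clamp(min=1)  # x^i_agg: masked mean over neighbors
        logits = self.actor(torch.cat([obs, x_agg], dim=-1))  # actor input: [o^i, x^i_agg]
        X_agg = x_agg.mean(dim=0, keepdim=True)               # X_agg: average over agents
        value = self.critic(X_agg)                            # critic input: X_agg
        return logits, value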

Usage:

To train InforMARL:

python -u onpolicy/scripts/train_mpe.py --use_valuenorm --use_popart \
--project_name "informarl" \
--env_name "GraphMPE" \
--algorithm_name "rmappo" \
--seed 0 \
--experiment_name "informarl" \
--scenario_name "navigation_graph" \
--num_agents 3 \
--collision_rew 5 \
--n_training_threads 1 --n_rollout_threads 128 \
--num_mini_batch 1 \
--episode_length 25 \
--num_env_steps 2000000 \
--ppo_epoch 10 --use_ReLU --gain 0.01 --lr 7e-4 --critic_lr 7e-4 \
--user_name "marl" \
--use_cent_obs "False" \
--graph_feat_type "relative" \
--auto_mini_batch_size --target_mini_batch_size 128

Graph Neural Network Compatible Navigation Environment:

We also provide code for the navigation environment, which is compatible with graph neural networks.

Note: More thorough documentation will be up soon.

python multiagent/custom_scenarios/navigation_graph.py

from typing import Optional

from multiagent.environment import MultiAgentGraphEnv
from multiagent.policy import InteractivePolicy
# the Scenario class used below is defined in multiagent/custom_scenarios/navigation_graph.py
from multiagent.custom_scenarios.navigation_graph import Scenario

# makeshift argparser
class Args:
    def __init__(self):
        self.num_agents:int=3
        self.world_size=2
        self.num_scripted_agents=0
        self.num_obstacles:int=3
        self.collaborative:bool=False 
        self.max_speed:Optional[float]=2
        self.collision_rew:float=5
        self.goal_rew:float=5
        self.min_dist_thresh:float=0.1
        self.use_dones:bool=False
        self.episode_length:int=25
        self.max_edge_dist:float=1
        self.graph_feat_type:str='global'
args = Args()

scenario = Scenario()
# create world
world = scenario.make_world(args)
# create multiagent environment
env = MultiAgentGraphEnv(world=world, reset_callback=scenario.reset_world, 
                    reward_callback=scenario.reward, 
                    observation_callback=scenario.observation, 
                    graph_observation_callback=scenario.graph_observation,
                    info_callback=scenario.info_callback, 
                    done_callback=scenario.done,
                    id_callback=scenario.get_id,
                    update_graph=scenario.update_graph,
                    shared_viewer=False)
# render call to create viewer window
env.render()
# create interactive policies for each agent
policies = [InteractivePolicy(env,i) for i in range(env.n)]
# execution loop
obs_n, agent_id_n, node_obs_n, adj_n = env.reset()
stp=0
while True:
    # query for action from each agent's policy
    act_n = []

    for i, policy in enumerate(policies):
        act_n.append(policy.action(obs_n[i]))
    # step environment
    obs_n, agent_id_n, node_obs_n, adj_n, reward_n, done_n, info_n = env.step(act_n)

    # render all agent views
    env.render()

Here env.reset() returns obs_n, agent_id_n, node_obs_n, adj_n (a quick shape check is sketched after the list below), where:

  • obs_n: Includes the local observations (position, velocity, relative goal position) of each agent.
  • agent_id_n: Includes the 'ID' of each agent. This can be used to query any agent-specific features in the replay buffer.
  • node_obs_n: Includes node observations for the graph formed with respect to each agent $i$. Each node can be any entity in the environment, namely an agent, a goal, or an obstacle. The node features include the relative position and relative velocity of the entity, and the relative position of the goal associated with that entity.
  • adj_n: Includes the adjacency matrices of the graphs formed.
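
Continuing the interactive example above, a quick way to sanity-check these return values is to print their shapes (a minimal sketch; the sizes depend on num_agents, num_obstacles, and the scenario configuration, so the shapes in the comments are expectations rather than guarantees):

import numpy as np

# Inspect what env.reset() returned in the example above.
print("obs_n:      ", np.asarray(obs_n).shape)       # roughly (n_agents, obs_dim)
print("agent_id_n: ", np.asarray(agent_id_n).shape)  # roughly (n_agents, 1)
print("node_obs_n: ", np.asarray(node_obs_n).shape)  # roughly (n_agents, n_nodes, node_feat_dim)
print("adj_n:      ", np.asarray(adj_n).shape)       # roughly (n_agents, n_nodes, n_nodes)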

This can also be used with an environment wrapper (a sketch of constructing `all_args` by hand follows the snippet):

from multiagent.MPE_env import GraphMPEEnv
# all_args can be pulled from config.py; also refer to `onpolicy/scripts/train_mpe.py`
env = GraphMPEEnv(all_args)
obs_n, agent_id_n, node_obs_n, adj_n = env.reset()
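
The `all_args` object is just a flat namespace of the flags from the training command above. For quick experiments you could also build one by hand; the snippet below is an illustrative, incomplete sketch (the wrapper expects the full set of flags defined in config.py and onpolicy/scripts/train_mpe.py, so mirror those files rather than this shortened list):

import argparse

# Hypothetical, shortened field list for illustration only; the real config
# defines many more flags (see config.py and onpolicy/scripts/train_mpe.py).
all_args = argparse.Namespace(
    env_name="GraphMPE",
    scenario_name="navigation_graph",
    num_agents=3,
    num_obstacles=3,
    collision_rew=5,
    episode_length=25,
    graph_feat_type="relative",
)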

Dependencies:

  • Multiagent-particle-envs: We have pulled the relevant folder from the repo and modified it (a quick post-install import check is sketched after this list).
    • pip install gym==0.10.5 (newer versions also seem to work)
    • pip install numpy-stl
    • torch==1.11.0
    • torch-geometric==2.0.4
    • torch-scatter==2.0.8
    • torch-sparse==0.6.12
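
After installing, a quick import check catches most build mismatches between torch and the torch-* extensions (a minimal sketch):

# Verify that the graph libraries import cleanly and report their versions
import torch
import torch_geometric
import torch_scatter
import torch_sparse

print("torch:          ", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("torch_scatter:  ", torch_scatter.__version__)
print("torch_sparse:   ", torch_sparse.__version__)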

Baseline Sources

We compare our method with other MARL baselines.

Troubleshooting:

  • OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.: Install nomkl by running conda install nomkl

  • AttributeError: dlsym(RTLD_DEFAULT, CFStringCreateWithCString): symbol not found: This issue arises on macOS Big Sur. A hacky fix is to revert the pyglet version to the maintenance branch using pip install --user --upgrade git+http://github.com/pyglet/[email protected]

  • AttributeError: 'NoneType' object has no attribute 'origin': This error arises while using torch-geometric with CUDA. Uninstall torch_geometric, torch-cluster, torch-scatter, torch-sparse, and torch-spline-conv, then re-install using the commands below (a snippet to check your local torch/CUDA versions follows the commands):

    TORCH="1.8.0"
    CUDA="cu102"
    pip install --no-index torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html --user
    pip install --no-index torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html --user
    pip install torch-geometric --user
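
To pick the TORCH and CUDA strings for the wheel index above, you can check what is installed locally (a minimal sketch):

# Report the local torch build so TORCH/CUDA in the pip commands can be matched
import torch

print("torch version: ", torch.__version__)       # e.g. 1.8.0
print("built for CUDA:", torch.version.cuda)      # e.g. 10.2 -> use cu102
print("CUDA available:", torch.cuda.is_available())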
    

Questions/Requests

Please file an issue if you have any questions or requests about the code or the paper. If you prefer your question to be private, you can alternatively email me at [email protected]

Citation

If you found this codebase useful in your research, please consider citing

@article{nayak22informarl,
  doi = {10.48550/ARXIV.2211.02127},
  url = {https://arxiv.org/abs/2211.02127},
  author = {Nayak, Siddharth and Choi, Kenneth and Ding, Wenqi and Dolan, Sydney and Gopalakrishnan, Karthik and Balakrishnan, Hamsa},
  keywords = {Multiagent Systems (cs.MA), Artificial Intelligence (cs.AI), Robotics (cs.RO), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

Contributing

We would love to make more scenarios from the multi-agent particle environment compatible with graph neural networks and would be happy to accept PRs.

License

MIT License


informarl's Issues

Can I add a custom environment?

I would like to ask whether the current code framework supports customized 3D simulation scenes. If so, could you tell me which files need to be modified? I would be very grateful for your reply.

Edge formation

Hello, thank you for your help and patience while I learn from this project. I had one more question for you.

While training and evaluating, I see that edges are created not only between agents and other entities, but also between the entities themselves (e.g. goal to goal). I wanted to ask about the benefits of doing this, if there are any. I imagine that if we were to increase the number of agents, and therefore the number of goal positions and obstacles, the number of edges created could grow very quickly.

In your tests during the creation of this project, did you notice any benefits to creating these edges between non-agent entities, or does this feature not impact performance in a meaningful way and is it just a visual representation?

Comparison algorithms

If I want to compare different algorithms on the same scenario, how should I modify the code so that it supports different algorithms?

Traceback (most recent call last)

Hello, thank you so much for open-sourcing such a great project; I've learned a lot from it and really appreciate it! But I encountered the following error while running the code. Could you please take a look? Thank you!
Traceback (most recent call last):
File "/home/ljy/.conda/envs/infornmarl/lib/python3.8/multiprocessing/process.py", line 313, in _bootstrap
self.run()
File "/home/ljy/.conda/envs/infornmarl/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/ljy/InforNMARL/onpolicy/envs/env_wrappers.py", line 941, in graphworker
ob, ag_id, node_ob, adj, reward, done, info = env.step(data)
File "/home/ljy/InforNMARL/multiagent/environment.py", line 703, in step
self.update_graph(self.world)
File "/home/ljy/InforNMARL/multiagent/custom_scenarios/navigation_graph.py", line 400, in update_graph
sparse_connect = sparse.csr_matrix(connect)
File "/home/ljy/.conda/envs/infornmarl/lib/python3.8/site-packages/scipy/sparse/_compressed.py", line 85, in init
self._coo_container(arg1, dtype=dtype)
File "/home/ljy/.conda/envs/infornmarl/lib/python3.8/site-packages/scipy/sparse/_coo.py", line 130, in init
if isinstance(arg1, tuple):
TypeError: init() takes 1 positional argument but 3 were given

Local vs Global Information

Hello again,
I wanted to ask about the node observations within this project. I see that although the agents are only meant to have access to the information within their visual radius (and their neighboring agents), they do have access to the location of their goal.

My question is, do they obtain the position of their goals through local observations or are they given this piece of global information from the beginning of each episode?

I am specifically referring to the _get_entity_feat_relative function in navigation_graph.py where the agents calculate the distance to their goal positions here:

goal_pos = world.get_entity("landmark", entity.id).state.p_pos

Env Setup

Could you please tell me which version of Python you used in your own environment, or could you please write down how to set up the environment and how to reproduce the results in the paper, if possible? Thanks.

Problems encountered when running with Windows?

After configuring the environment as required, I encountered this problem when running the scenario multiagent/custom_scenarios/navigation_graph.py on Windows, but it runs normally on Linux. I would like to ask specifically: is this currently not supported on Windows?

Traceback (most recent call last):
File "D:\code\DRL\InforMARL\multiagent\custom_scenarios\navigation_graph.py", line 566, in
policies = [InteractivePolicy(env, i) for i in range(env.n)]
File "D:\code\DRL\InforMARL\multiagent\custom_scenarios\navigation_graph.py", line 566, in
policies = [InteractivePolicy(env, i) for i in range(env.n)]
File "D:\code\DRL\InforMARL\multiagent\policy.py", line 26, in init
env.viewers[agent_index].window.on_key_press = self.key_press
AttributeError: 'NoneType' object has no attribute 'window'

More information about running this

Hi, I was interested in cloning and running the code on my Windows machine. Is there a readme file that can guide me? I tried doing it with conda and was able to get all the requirements working, but I get stuck on the wandb login. I don't know what I am doing wrong, and I don't know if I should create a new account or do something else.

python -u onpolicy/scripts/train_mpe.py --use_valuenorm --use_popart --project_name "informarl" --env_name "GraphMPE" --algorithm_name "rmappo" --seed 0 --experiment_name "informarl" --scenario_name "navigation_graph" --num_agents 3 --collision_rew 5 --n_training_threads 1 --n_rollout_threads 128 --num_mini_batch 1 --episode_length 25 --num_env_steps 2000000 --ppo_epoch 10 --use_ReLU --gain 0.01 --lr 7e-4 --critic_lr 7e-4 --user_name "marl" --use_cent_obs "False" --graph_feat_type "relative" --auto_mini_batch_size --target_mini_batch_size 128

Running this in the terminal gives me 3 options for wandb. I have made an account there, but it's not linking:

(screenshot of the wandb login prompt)

Understanding of graph neural networks?

Regarding onpolicy, I would like to ask how the GNN is actually exercised when running the code. At present, I only see the two algorithms rnn-mappo and mappo in train.py, so does it support gnn-mappo, and what is the specific connection between them? Of course, I know that the modeling is done via graph-based navigation; I just want to know which specific files are involved after running this script:

python -u onpolicy/scripts/train_mpe.py --use_valuenorm --use_popart --project_name "informarl" --env_name "GraphMPE" --algorithm_name "rmappo" --seed 0 --experiment_name "informarl" --scenario_name "navigation_graph" --num_agents 3 --collision_rew 5 --n_training_threads 1 --n_rollout_threads 128 --num_mini_batch 1 --episode_length 25 --num_env_steps 2000000 --ppo_epoch 10 --use_ReLU --gain 0.01 --lr 7e-4 --critic_lr 7e-4 --user_name "marl" --use_cent_obs "False" --graph_feat_type "relative" --auto_mini_batch_size --target_mini_batch_size 128

Installation details

Which Python version did you use?
Could you please detail the installation dependencies and procedure?
Thanks.

information aggregation

Hello, the information aggregation in your project is based on UniMP, and the environment has different types of entities. Have you considered using a heterogeneous graph?

torch error

OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "D:\Anaconda\envs\InforMARL\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies.

I get this error when running:
python -u onpolicy/scripts/train_mpe.py --use_valuenorm --use_popart --project_name "informarl" --env_name "GraphMPE" --algorithm_name "rmappo" --seed 0 --experiment_name "informarl" --scenario_name "navigation_graph" --num_agents 3 --collision_rew 5 --n_training_threads 1 --n_rollout_threads 32 --num_mini_batch 1 --episode_length 25 --num_env_steps 200000 --ppo_epoch 10 --use_ReLU --gain 0.01 --lr 7e-4 --critic_lr 7e-4 --user_name "marl" --use_cent_obs "False" --graph_feat_type "relative" --auto_mini_batch_size --target_mini_batch_size 32

Have some questions about the scene?

What is the size unit of the map in the current scene design?
What distance does the agent move at each step, and in what unit?
Is the current map continuous or grid-based?

dependencies errors and wandb

Sorry, I'm not familiar with this kind of code, so the questions below may be naive.

  1. The torch versions in requirements.txt and in the dependencies of MAPE differ: 1.8.0 vs 1.11.0. So should I install 1.11.0?

  2. When I pip install the libraries, the torch-* packages do not install properly.

  3. I'm not familiar with wandb. Do you know if the following error is usual?
    wandb: ERROR Error while calling W&B API: permission denied (<Response [403]>)
    Should I get your permission? Or do you mean I should turn off wandb?

Thanks for your help.

Understanding the effects of n_rollout_threads

Hello again,

I have been using this repo for studying and testing for some time now (using the navigation_graph scenario) and earlier this week I ran into an interesting issue.

To make testing/debugging easier I set n_rollout_threads to 1 to remove the parallel processing, which makes it a little easier to track environment data over time.

However, when you do this, the network completely loses its ability to converge towards any kind of optimal policy, and I do not understand why, since the parallel processing should only affect the speed of training and not its effectiveness.

For more context, here is the command I used to remove parallel processing, where n_rollout_threads=1 and auto_mini_batch_size has been removed in order to maintain the target batch size of 128:

python -u onpolicy/scripts/train_mpe.py --use_valuenorm --use_popart --project_name "GNN-Testing" --env_name "GraphMPE" --algorithm_name "rmappo" --seed 0 --experiment_name "baseline" --scenario_name "navigation_graph" --num_agents 3 --collision_rew 5 --n_training_threads 1 --n_rollout_threads 1 --num_mini_batch 1 --episode_length 25 --num_env_steps 2000000 --ppo_epoch 10 --use_ReLU --gain 0.01 --lr 7e-4 --critic_lr 7e-4 --user_name "......" --use_cent_obs "False" --graph_feat_type "relative" --target_mini_batch_size 128 --use_wandb "False"

Here is the command I typically use, where training converges successfully:

python -u onpolicy/scripts/train_mpe.py --use_valuenorm --use_popart --project_name "GNN-Testing" --env_name "GraphMPE" --algorithm_name "rmappo" --seed 0 --experiment_name "baseline" --scenario_name "navigation_graph" --num_agents 3 --collision_rew 5 --n_training_threads 1 --n_rollout_threads 128 --num_mini_batch 1 --episode_length 25 --num_env_steps 2000000 --ppo_epoch 10 --use_ReLU --gain 0.01 --lr 7e-4 --critic_lr 7e-4 --user_name "......." --use_cent_obs "False" --graph_feat_type "relative" --auto_mini_batch_size --target_mini_batch_size 128 --use_wandb "False"

Using the first command, the average reward over time isn't even oscillating; it is just random.

Do you have any insight on why this is happening?

Formation, Coverage and Line

Hello, thanks again for your open-source project. I have another question: how can I test the performance of the algorithm in the three scenarios of Formation, Coverage, and Line?

Scalability of InforMARL

Hello @nsidn98, thank you for open-sourcing the paper code.
When I tried to run the program to reproduce the experimental results in the paper, I couldn't find the part of the experimental code for scalability.
The setting of the experiment is mentioned in the paper:

The number of obstacles in the environment is randomly chosen from (0, 10) at the beginning of the episode.

But I can't find anything in the code about random obstacles.
Can you tell me how to set this up, or which parts of the code I should change?
Thank you.

why the computation between query and key is not transposed for one of them?

Hi, I'd like to ask why the computation between the query and the key is not transposed for one of them, which seems inconsistent with the description in the original article.

A Minor question about env.reset

Hi @nsidn98 Thank you for all your kind responses. Those have helped me a lot.

I successfully got training results, but I wonder whether the environment is reset after the end of an episode. In my opinion, the environment should be reset after each episode ends.

According to the code below, which is from graph_mpe_runner.py, env.reset is only called once, when `run' is called.

I wonder if it is okay that env.reset is called only once for the whole training run, even though there are 128 (= --n_rollout_threads) environments running in parallel. If that is so, then only 128 rollouts are done without an environment reset.

The answers that I want to hear are as follows:

  1. Is the environment reset called only once for the entire training?
  2. If so, are the rollouts done 128 times for one training run? Also, according to the code below, there is a for loop whose range is `self.episode_length'. Is it okay that env.reset is not called after the end of an episode?

Thank you!

def run(self):
        self.warmup()   

        start = time.time()
        episodes = int(self.num_env_steps) // self.episode_length // self.n_rollout_threads
        
        # This is where the episodes are actually run.
        for episode in range(episodes):
            if self.use_linear_lr_decay:
                self.trainer.policy.lr_decay(episode, episodes)

            for step in range(self.episode_length):
                # Sample actions
                values, actions, action_log_probs, rnn_states, \
                    rnn_states_critic, actions_env = self.collect(step)
                    
                # Obs reward and next obs
                obs, agent_id, node_obs, adj, rewards, \
                    dones, infos = self.envs.step(actions_env)

                data = (obs, agent_id, node_obs, adj, agent_id, rewards, 
                        dones, infos, values, actions, action_log_probs, 
                        rnn_states, rnn_states_critic)

                # insert data into buffer
                self.insert(data)

            # compute return and update network
            self.compute()
            train_infos = self.train()
            
            # post process
            total_num_steps = (episode + 1) * self.episode_length * self.n_rollout_threads
            
            # save model
            if (episode % self.save_interval == 0 or episode == episodes - 1):
                self.save()

            # log information
            if episode % self.log_interval == 0:
                end = time.time()

                env_infos = self.process_infos(infos)

                avg_ep_rew = np.mean(self.buffer.rewards) * self.episode_length
                train_infos["average_episode_rewards"] = avg_ep_rew
                print(f"Average episode rewards is {avg_ep_rew:.3f} \t"
                    f"Total timesteps: {total_num_steps} \t "
                    f"Percentage complete {total_num_steps / self.num_env_steps * 100:.3f}")
                self.log_train(train_infos, total_num_steps)
                self.log_env(env_infos, total_num_steps)

            # eval
            if episode % self.eval_interval == 0 and self.use_eval:
                self.eval(total_num_steps)

Node observation representation

Hello, in this project I see that the purpose is to show the effectiveness of the GNN when agents only have the information from their local field of vision (plus the messages). However, I noticed in the navigation_graph.py file that when we form node_obs and the adj matrix, we do so by calculating the distance or relative distance between all entities and the ego agent. I thought this was a little contradictory, since agents wouldn't know the relative positions of entities other than their own goal positions, so I wanted to ask for some clarification.

So, do you only use this during training, or do you follow a similar process when evaluating the learned policy after training?

Also, I am trying to understand what the numbers in node_obs represent within the agent's field of vision, and how the adjacency matrix is used as part of the input to the neural network when the agent hasn't 'seen' the entities it is calculating relative distances for.

def _get_entity_feat_global(self, entity: Entity, world: World) -> arr:
