initial-h / alphazero_gomoku_mpi

An asynchronous/parallel method of AlphaGo Zero algorithm with Gomoku

Python 100.00%
alphazero alphazero-gomoku parallel mpi4py tensorflow alphago mcts gomoku tensorlayer tree-search

alphazero_gomoku_mpi's Introduction

AlphaZero-Gomoku-MPI

Updates

  • 2019.03.05 -- uploaded a 15x15 board model
  • Please download it and try it yourself. If you have questions or ideas about AlphaZero and MCTS, feel free to open an issue and maybe we can make some improvements together.

Overview

This repo is based on junxiaosong/AlphaZero_Gomoku; I'm sincerely grateful for that project.

What I did:

  • Implemented an asynchronous/parallel self-play training pipeline, following AlphaGo Zero's approach
  • Wrote a root-parallel MCTS for playing against the model (the processes vote on a move in an ensemble fashion; a minimal sketch follows this list)
  • Used a ResNet architecture for the model and added a transfer-learning API to train a larger-board model from a smaller board's model (a pre-training-like approach that saves time)
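Below is a minimal, hypothetical sketch of the root-parallel voting idea: each MPI process runs its own search from the same position, the visit counts are summed across processes, and the move with the most total visits is played. The function and variable names are illustrative and are not the repo's actual API.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

def run_local_search(board_size=11 * 11, n_playout=400):
    # stand-in for one process's MCTS; returns visit counts per move
    return np.random.randint(0, n_playout, size=board_size)

local_visits = run_local_search()
total_visits = np.zeros_like(local_visits)
comm.Allreduce(local_visits, total_visits, op=MPI.SUM)  # ensemble "vote"
move = int(np.argmax(total_visits))
if comm.Get_rank() == 0:
    print('voted move:', move)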

Strength

  • The current model is trained on an 11x11 board and uses 400 playouts per move at test time
  • It wins consistently whether it plays black or white
  • Against Gomocup AIs it ranks roughly 20th-30th in some rough tests
  • When I play white I cannot beat the AI; when I play black I end up with a tie or a loss most of the time

References

Blog

Installation Dependencies

  • Python 3 (mine: 3.6.8)
  • tensorflow>=1.8.0 (mine: 1.12.0)
  • tensorlayer>=1.8.5 (mine: 1.10.1)
  • mpi4py (for parallel training and play) (mine: 2.0.0)
  • pygame (for the GUI) (mine: 1.9.6)

How to Install

To install tensorflow/tensorlayer/pygame:

pip install tensorflow
pip install tensorlayer
pip install pygame

To install mpi4py, click here

For mpi4py on Windows, click here

How to Run

  • Play against the AI
python human_play.py
  • Play against the parallel AI (-np sets the number of processes; watch out for OOM!)
mpiexec -np 3 python -u human_play_mpi.py
  • Train from scratch
python train.py
  • Train in parallel
mpiexec -np 43 python -u train_mpi.py

Algorithm

The algorithm is almost identical to AlphaGo Zero, except that APV-MCTS is not implemented. Slides can be found in the dir demo/slides.

Details

Most settings are the same as in AlphaGo Zero; details follow:

  • Network Structure

    • The current model uses 19 residual blocks; more blocks mean more accurate predictions but slower speed

    • The number of filters in each convolutional layer is shown in the picture below

  • Feature Planes

    • In the AlphaGo Zero paper there are 17 feature planes: 8 for the current player's stones, 8 for the opponent's stones, and a final plane indicating the colour to play
    • Here I use only 4 planes for each player; this can easily be changed in game_board.py
  • Dirichlet Noise

    • I add Dirichlet noise at every node, unlike the paper, which adds noise only at the root node. I guess AlphaGo Zero discards the whole tree after each move and rebuilds a new one, whereas here I keep the subtree under the chosen action, so it is a little different (a minimal sketch appears at the end of this Details section)
    • The weighting between prior probabilities and noise is unchanged here (0.75/0.25), though I think 0.8/0.2 or even 0.9/0.1 might be better because noise is added at every node
  • Parameters in Detail

    • I try to keep the original parameters from the AlphaGo Zero paper, so as to test their generalization. I also take training time and my computer configuration into account.

      Parameters        Gomoku setting      AlphaGo Zero
      MPI num           43                  -
      c_puct            5                   5
      n_playout         400                 1600
      blocks            19                  19/39
      buffer size       500,000 (data)      500,000 (games)
      batch_size        512                 2048
      lr                0.001               annealed
      optimizer         Adam                SGD with momentum
      dirichlet noise   0.3                 0.03
      weight of noise   0.25                0.25
      first n move      12                  30
  • Training details

    • I trained the model for about 100,000 games, which took roughly 800 hours
    • Computer configuration: 2 CPUs and 2 GTX 1080 Ti GPUs
    • The computation gap with DeepMind is easy to see; people with more resources can take this further
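As referenced in the Dirichlet Noise item above, here is a minimal, hypothetical sketch of mixing Dirichlet noise into the priors whenever a node is expanded (not only at the root). The class and function names are illustrative and do not match the repo's exact API.

import numpy as np

class TreeNode:
    """Minimal stand-in for an MCTS tree node."""
    def __init__(self, parent, prior):
        self.parent, self.prior, self.children = parent, prior, {}

def expand_with_noise(node, action_priors, eps=0.25, alpha=0.3):
    # action_priors: iterable of (action, prior_prob) pairs from the policy net
    actions, priors = zip(*action_priors)
    noise = np.random.dirichlet(alpha * np.ones(len(priors)))
    for a, p, n in zip(actions, priors, noise):
        # mix noise into every expansion, not only at the root
        node.children[a] = TreeNode(parent=node, prior=(1 - eps) * p + eps * n)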

Some Tips

  • Network
    • ZeroPadding with Input : Sometimes when playing against the AI, it is unaware of the danger at the edge of the board even when I have three or four in a row there. Zero-padding the input data can mitigate the problem
    • Put the network on GPU : If the network is shallow, it hardly matters whether CPU or GPU is used; otherwise self-play is faster on GPU
  • Dirichlet Noise
    • Add Noise in Node : In junxiaosong/AlphaZero_Gomoku, noise is added outside the tree, somewhat like DQN's ε-greedy. That works when I test on 6x6 and 8x8 boards, but problems appear on 11x11. After long training on 11x11, the black player always plays its first stone in the center with policy probability equal to 1. It is perfectly rational for black to play there, but the white player then never sees kifu where the first stone is played anywhere else. So when I play black against the AI and open somewhere other than the center, the AI plays very poorly because it has never seen such positions at all. Adding noise inside the node mitigates the problem
    • Smaller Weight for Noise : As said above, I think 0.8/0.2 or even 0.9/0.1 may be a better split between prior probabilities and noise, because noise is added at every node
  • Randomness
    • Dihedral Reflection or Rotation : When using the network to output probabilities/values, it is better to do as the paper says: "The leaf node s_L is added to a queue for neural network evaluation, (d_i(p),v)=f_θ(d_i(s_L)), where d_i is a dihedral reflection or rotation selected uniformly at random from i in [1..8]" (see the sketch after this tips list)
    • Add Randomness when Testing : I apply a dihedral reflection or rotation also when playing against the model, so that it does not play the same game every time
  • Tradeoffs
    • Network Depth : If the network is too shallow, the loss increases; if it is too deep, training and testing are slow. (My network is still a little slow in play; I think 9 blocks would probably be enough)
    • Buffer Size : If the buffer is small, the network fits it easily but performance cannot be guaranteed because it learns from so little data. If it is too large, much longer training time and a deeper network are needed
    • Playout Number : With few playouts a self-play game finishes quickly but kifu quality cannot be guaranteed; with more playouts you get better kifu but each game takes longer
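A minimal, hypothetical sketch of the dihedral trick referenced above: transform the input planes by one of the eight rotations/reflections before the network call, then map the policy back to the original orientation. It assumes planes shaped (channels, height, width) and an illustrative policy_value_fn; it is not the repo's exact code.

import numpy as np

def dihedral_evaluate(policy_value_fn, planes, board_width):
    k = np.random.randint(4)              # rotate by k * 90 degrees
    flip = np.random.randint(2)           # optionally reflect horizontally
    x = np.rot90(planes, k, axes=(1, 2))
    if flip:
        x = np.flip(x, axis=2)
    probs, value = policy_value_fn(x)     # network sees the transformed board
    probs = probs.reshape(board_width, board_width)
    if flip:                              # undo the reflection, then the rotation
        probs = np.flip(probs, axis=1)
    probs = np.rot90(probs, -k)
    return probs.flatten(), value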

Future Work

  • Continue training (on a larger board) and increase the playout number
  • Try other parameters for better performance
  • Alter the network structure
  • Alter the feature planes
  • Implement APV-MCTS
  • Train under standard/renju rules

alphazero_gomoku_mpi's People

Contributors

initial-h


alphazero_gomoku_mpi's Issues

The AI playing black does not open at the center point

I trained on a 13x13 board. Even after a long time, the AI playing black still does not place its first stone in the center. The probability output is as follows:
value [[0.80382586]]
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.233 0.0 0.0 0.0 0.256 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.236 0.0 0.0 0.0 0.276 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

I made the model smaller and lowered the noise a bit (see below), but the problem persists. Do you know what might be going on?
Code that lowers the noise:
dirichlet_noise = np.random.dirichlet(0.25 * np.ones(length))  # was 0.3
self._children[action_priors[i][0]] = TreeNode(self, 0.8*action_priors[i][1] + 0.2*dirichlet_noise[i])  # was 0.75/0.25

A question about τ

The AlphaZero paper says the search probabilities π are returned proportional to N^(1/τ),
but in your program τ does not enter the returned probabilities; it is only used when selecting the action. Doesn't that make τ meaningless?

p = softmax(1.0 / 1.0 * np.log(np.array(visits) + 1e-10))
move_probs[list(acts)] = p
# return the prob with temp=1

return move, move_probs
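For context, a small hedged sketch of how a temperature could be folded into the returned probabilities, since softmax((1/τ)·log N) is proportional to N^(1/τ); the snippet above simply hard-codes τ = 1 for the returned distribution. Names here are illustrative.

import numpy as np

def softmax(x):
    x = np.exp(x - np.max(x))
    return x / x.sum()

def visit_probs(visits, temp=1.0):
    # pi(a) proportional to N(a)^(1/temp)
    return softmax((1.0 / temp) * np.log(np.asarray(visits, dtype=float) + 1e-10))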

Some questions about the differences between this project and the original

1. This is an issue from the original AlphaZero_Gomoku:
junxiaosong/AlphaZero_Gomoku#54
Beyond that issue, many people have pointed out that even after training many games, the original model still makes obvious mistakes near the edge of the board (for example, it will not block an opponent's open three on the edge, or it misses a forced win on the edge).
In my own tests with a model trained for 1,500 games on a 9x9 board, increasing the playout count (to above 5,000 in my tests) slightly mitigates this, but it still cannot guarantee that the AI will block an open three on the edge while the main fight is still in the center of the board. And at 5,000 playouts a single move already takes nearly half a minute, so further increasing playouts is unlikely to solve the problem.
This issue suggests the phenomenon may be caused by the CNN's zero padding, and that adding an edge feature plane can fix it:
junxiaosong/AlphaZero_Gomoku#68

I noticed your model does not seem to have this problem, but I am not sure whether that is because of the residual network. Did you add feature planes on top of the original, or did you only replace the CNN with a residual network?

2.
I noticed your model may play different moves in identical positions, while the original model always plays the same move in the same position, so I would like to ask how your final policy π is determined.

3.
It seems I can run human_play.py without installing mpi4py, so is mpi4py only required for training?

Finally, thank you for taking the time to answer.

Ubuntu make tutorial

Hello,
Could you provide a make tutorial for Ubuntu 16.04? Mine fails with the errors below.

-- Caffe2: CUDA detected: 10.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda-10.1/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-10.1
-- Caffe2: Header version is: 10.1
-- Found cuDNN: v7.6.4 (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
-- Autodetected CUDA architecture(s): 6.1
-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
CMake Warning (dev) at /usr/local/share/cmake-3.16/Modules/UseSWIG.cmake:607 (message):
Policy CMP0078 is not set: UseSWIG generates standard target names. Run
"cmake --help-policy CMP0078" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.

Call Stack (most recent call first):
CMakeLists.txt:31 (swig_add_library)
This warning is for project developers. Use -Wno-dev to suppress it.

CMake Warning (dev) at /usr/local/share/cmake-3.16/Modules/UseSWIG.cmake:460 (message):
Policy CMP0086 is not set: UseSWIG honors SWIG_MODULE_NAME via -module
flag. Run "cmake --help-policy CMP0086" for policy details. Use the
cmake_policy command to set the policy and suppress this warning.

Call Stack (most recent call first):
/usr/local/share/cmake-3.16/Modules/UseSWIG.cmake:702 (SWIG_ADD_SOURCE_TO_MODULE)
CMakeLists.txt:31 (swig_add_library)
This warning is for project developers. Use -Wno-dev to suppress it.

-- Configuring done
-- Generating done
-- Build files have been written to: /home/snail/Desktop/alpha-zero-gomoku-master/build
[ 16%] Built target library_swig_compilation
[ 33%] Building CXX object CMakeFiles/_library.dir/src/libtorch.cpp.o
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp: In constructor ‘NeuralNetwork::NeuralNetwork(std::__cxx11::string, bool, unsigned int)’:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: error: no matching function for call to ‘std::shared_ptr<torch::jit::script::Module>::shared_ptr(torch::jit::script::Module)’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:327:7: note: candidate: std::shared_ptr<_Tp>::shared_ptr(const std::weak_ptr<_Tp>&, std::nothrow_t) [with _Tp = torch::jit::script::Module]
shared_ptr(const weak_ptr<_Tp>& __r, std::nothrow_t)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:327:7: note: candidate expects 2 arguments, 1 provided
/usr/include/c++/6/bits/shared_ptr.h:317:2: note: candidate: template<class _Alloc, class ... _Args> std::shared_ptr<_Tp>::shared_ptr(std::_Sp_make_shared_tag, const _Alloc&, _Args&& ...)
shared_ptr(_Sp_make_shared_tag __tag, const _Alloc& __a,
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:317:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: candidate expects at least 2 arguments, 1 provided
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:269:17: note: candidate: constexpr std::shared_ptr<_Tp>::shared_ptr(std::nullptr_t) [with _Tp = torch::jit::script::Module; std::nullptr_t = std::nullptr_t]
constexpr shared_ptr(nullptr_t) noexcept : shared_ptr() { }
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:269:17: note: no known conversion for argument 1 from ‘torch::jit::script::Module’ to ‘std::nullptr_t’
/usr/include/c++/6/bits/shared_ptr.h:262:2: note: candidate: template<class _Tp1, class _Del, class> std::shared_ptr<_Tp>::shared_ptr(std::unique_ptr<_Up, _Ep>&&)
shared_ptr(std::unique_ptr<_Tp1, _Del>&& __r)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:262:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: ‘torch::jit::script::Module’ is not derived from ‘std::unique_ptr<_Tp, _Dp>’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:255:2: note: candidate: template std::shared_ptr<_Tp>::shared_ptr(std::auto_ptr<_Up>&&)
shared_ptr(std::auto_ptr<_Tp1>&& __r);
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:255:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: ‘torch::jit::script::Module’ is not derived from ‘std::auto_ptr<_Up>’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:250:11: note: candidate: template std::shared_ptr<_Tp>::shared_ptr(const std::weak_ptr<_Tp1>&)
explicit shared_ptr(const weak_ptr<_Tp1>& __r)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:250:11: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: ‘torch::jit::script::Module’ is not derived from ‘const std::weak_ptr<_Tp>’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:238:2: note: candidate: template<class _Tp1, class> std::shared_ptr<_Tp>::shared_ptr(std::shared_ptr<_Tp1>&&)
shared_ptr(shared_ptr<_Tp1>&& __r) noexcept
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:238:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: ‘torch::jit::script::Module’ is not derived from ‘std::shared_ptr<_Tp1>’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:229:7: note: candidate: std::shared_ptr<_Tp>::shared_ptr(std::shared_ptr<_Tp>&&) [with _Tp = torch::jit::script::Module]
shared_ptr(shared_ptr&& __r) noexcept
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:229:7: note: no known conversion for argument 1 from ‘torch::jit::script::Module’ to ‘std::shared_ptr<torch::jit::script::Module>&&’
/usr/include/c++/6/bits/shared_ptr.h:221:2: note: candidate: template<class _Tp1, class> std::shared_ptr<_Tp>::shared_ptr(const std::shared_ptr<_Tp1>&)
shared_ptr(const shared_ptr<_Tp1>& __r) noexcept
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:221:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: ‘torch::jit::script::Module’ is not derived from ‘const std::shared_ptr<_Tp1>’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:210:2: note: candidate: template std::shared_ptr<_Tp>::shared_ptr(const std::shared_ptr<_Tp1>&, _Tp*)
shared_ptr(const shared_ptr<_Tp1>& __r, _Tp* __p) noexcept
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:210:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: ‘torch::jit::script::Module’ is not derived from ‘const std::shared_ptr<_Tp1>’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:188:2: note: candidate: template<class _Deleter, class _Alloc> std::shared_ptr<_Tp>::shared_ptr(std::nullptr_t, _Deleter, _Alloc)
shared_ptr(nullptr_t __p, _Deleter __d, _Alloc __a)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:188:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: candidate expects 3 arguments, 1 provided
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:169:2: note: candidate: template<class _Tp1, class _Deleter, class _Alloc> std::shared_ptr<_Tp>::shared_ptr(_Tp1*, _Deleter, _Alloc)
shared_ptr(_Tp1* __p, _Deleter __d, _Alloc __a)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:169:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: mismatched types ‘_Tp1*’ and ‘torch::jit::script::Module’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:150:2: note: candidate: template std::shared_ptr<_Tp>::shared_ptr(std::nullptr_t, _Deleter)
shared_ptr(nullptr_t __p, _Deleter __d)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:150:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: candidate expects 2 arguments, 1 provided
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:133:2: note: candidate: template<class _Tp1, class _Deleter> std::shared_ptr<_Tp>::shared_ptr(_Tp1*, _Deleter)
shared_ptr(_Tp1* __p, _Deleter __d)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:133:2: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: mismatched types ‘_Tp1*’ and ‘torch::jit::script::Module’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:116:11: note: candidate: template std::shared_ptr<_Tp>::shared_ptr(_Tp1*)
explicit shared_ptr(_Tp1* __p)
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:116:11: note: template argument deduction/substitution failed:
/home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:15:19: note: mismatched types ‘_Tp1*’ and ‘torch::jit::script::Module’
loop(nullptr) {
^
In file included from /usr/include/c++/6/memory:82:0,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/c10/core/Allocator.h:4,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/ATen/ATen.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/libtorch/include/torch/script.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/./src/libtorch.h:3,
from /home/snail/Desktop/alpha-zero-gomoku-master/src/libtorch.cpp:1:
/usr/include/c++/6/bits/shared_ptr.h:107:7: note: candidate: std::shared_ptr<_Tp>::shared_ptr(const std::shared_ptr<_Tp>&) [with _Tp = torch::jit::script::Module]
shared_ptr(const shared_ptr&) noexcept = default;
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:107:7: note: no known conversion for argument 1 from ‘torch::jit::script::Module’ to ‘const std::shared_ptr<torch::jit::script::Module>&’
/usr/include/c++/6/bits/shared_ptr.h:104:17: note: candidate: constexpr std::shared_ptr<_Tp>::shared_ptr() [with _Tp = torch::jit::script::Module]
constexpr shared_ptr() noexcept
^~~~~~~~~~
/usr/include/c++/6/bits/shared_ptr.h:104:17: note: candidate expects 0 arguments, 1 provided
CMakeFiles/_library.dir/build.make:88: recipe for target 'CMakeFiles/_library.dir/src/libtorch.cpp.o' failed
make[2]: *** [CMakeFiles/_library.dir/src/libtorch.cpp.o] Error 1
CMakeFiles/Makefile2:76: recipe for target 'CMakeFiles/_library.dir/all' failed
make[1]: *** [CMakeFiles/_library.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

GPU utilization in the MPI version

Out of interest I also wrote a single-machine parallel C++ version of AlphaZero.
It trained for 12 hours on a 15x15 board, about 3,500 games (1,600 MCTS simulations per move, on a single GTX 1070).
Testing shows the playing strength is decent; I estimate full training would take several days to a week.
https://github.com/hijkzzz/alpha-zero-gomoku

I'm very interested in this MPI version and have looked into how it works.
I would like to ask: with this multi-process MPI approach, how high can GPU utilization get, and does GPU memory get tight?
In my tests, the main bottleneck of AlphaZero is the neural-network prediction step inside MCTS; if GPU utilization cannot be raised, overall efficiency is hard to improve.

A question about experience replay

Hi, as I understand it, AlphaZero is an on-policy algorithm, and on-policy algorithms should not use experience replay, yet I see experience replay being used in the code. I would like to know whether my understanding is correct.

# slice one mini-batch out of the buffer of self-play data
mini_batch = tmp_buffer[i*self.batch_size:(i+1)*self.batch_size]
state_batch = [data[0] for data in mini_batch]       # board feature planes
mcts_probs_batch = [data[1] for data in mini_batch]  # MCTS visit-count probabilities
winner_batch = [data[2] for data in mini_batch]      # game outcome from the player's view

Probability display

Currently the probabilities are printed on the command line; without coordinates they are hard to read.
How can this be changed?
A suggestion: could the pop-up board panel show the probability the AI assigns to every legal move (with the magnitude indicated by color intensity)?
Many thanks.

Some questions about the model

Hello, I'm a beginner. Since I don't know TensorFlow well, I want to rewrite the network in PyTorch. While studying and porting the code, I have some questions about the model saved in the multi-process version. Thanks!

1.

if len(self.data_buffer) > self.batch_size*5:
    # training
    # print('`'*50+'data buffer length:{}'.format(len(self.data_buffer)))
    # print()
    print_out = True
    if print_out:
        # print some training information
        print('now time : {}'.format((time.time()-start_time)/3600))
        print('training ...')
        print()
    loss, entropy = self.policy_update(print_out=print_out)

    # save model to tmp dir, wait for evaluating
    self.policy_value_net.save_model('tmp/best_policy.model')

In the single-process version the model is saved once every 50 games, while in the multi-process version the model is saved after every game. Is that understanding correct?

2. About save_numpy and load_numpy: why does the multi-process version save these, what are they for, and what happens if those parameters are not loaded during policy_evaluate?

3. if os.path.exists('tmp/best_policy.model.index'):
What is the .model.index file for, and how is it related to the .model model file?

reset the MCTS during self-play

Hi, in line 391 of game_board.py, I see you reset the MCTS when the self-play game terminates. I know this is exactly the same code in the referenced repo https://github.com/junxiaosong/AlphaZero_Gomoku. However, is this really the expected behavior? You reset the search tree after every single self-play game? Should you remove this line instead? And reset the search tree after every K self-play games (K is large enough to ensure you'll not exceed the memory limit)? Because if you reset the search tree after every single self-play game, you'll lose a lot of useful information stored in the search tree that might be useful for future games.

player.reset_player()

Questions about the number of training games and the network size

Hello, I'd like to ask about your 15x15 model: how large is the network, roughly how many games was it trained for, and what playout number was used during training? Roughly how many games would it take to get a model that can beat Yixin? Thanks for your reply.

laptop for this project

Hi, I saw you can run around 1e5 training games in 800h, but it would take me a year to run the same number of training games for a similar strategy game (connect-4) on my personal laptop. Could you please tell me your computer/laptop's brand and the OS for the machine?
Thanks.

Training on a 15x15 board

Sorry to bother you.
My original board size was 9x9, but now I have changed the board to 15x15.
How should I go about retraining it?

About black's first move and forbidden-move rules

Hello, I'd like to extend your code: for example, forcing black's first stone to be played exactly in the center, and adding some forbidden-move (renju) rules so the AI can self-play under those restrictions.

From other issues I learned that increasing the noise can raise the probability of the first move being in the center, but that is obviously still not deterministic. Is it possible to enforce in code that the first move must be the center (considering only a 15x15 board)?

As for forbidden moves: since Gomoku itself is a first-player-win game, forbidden-move rules ought to be added. Can I simply add extra rules on top of the original five-in-a-row win check, for example ruling that black immediately loses when it plays a forbidden move?

Looking forward to your reply.

The c_puct parameter

I have a question about the c_puct parameter. I checked the original paper and some online material, and I could not find the concrete c_puct setting in the paper, yet your README lists the paper's c_puct as 5.
I also looked at other repos where c_puct is set to 1.5 or 1. Have you tried other c_puct values? Is 5 optimal?
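For reference, this is the standard PUCT selection score used in AlphaZero-style MCTS, which is what c_puct scales: a larger c_puct weights the prior/exploration term more relative to the averaged action value Q. The function below is only an illustrative sketch, not this repo's code.

import numpy as np

def puct_score(q, prior, n_parent, n_child, c_puct=5.0):
    # exploration bonus grows with the prior and the parent's visit count,
    # and shrinks as this child gets visited more often
    u = c_puct * prior * np.sqrt(n_parent) / (1 + n_child)
    return q + u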

A small question about policy_value_net_tensorlayer.py

self.restore_params = []
for params in self.network_params:
    # print(params, '**'*100)
    # keep only the conv / resnet / batch-norm / flatten parameters for restoring
    if ('conv2d' in params.name) or ('resnet' in params.name) or ('bn' in params.name) or ('flatten_layer' in params.name):
        self.restore_params.append(params)

When building restore_params, why is dense_layer not added?

Training time

How long did it take you to train the 15x15 model? The model seems to play well.

Some questions about train.py

Hello, I'd like to ask about the single-process train.py.
I found that before data_buffer fills up, tmp_buffer grows as the number of training games increases, and the number of steps depends on the size of tmp_buffer. In other words, before data_buffer is full, the number of steps in policy_update() keeps increasing, and so does the time each update takes. Also, the data in tmp_buffer is not saved after each training run.

Does that mean training 1,000 games in one run takes a different amount of time than training ten runs of 100 games, each resuming from the previous run's weights?
Both train 1,000 games in total; which gives better results?

Error when using the provided model

Running python human_play.py with tensorflow 1.14.0 and tensorlayer 1.8.5 gives the following error:

2019-09-25 16:45:35.017514: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key model/conv2d_1/W_conv2d not found in checkpoint
Traceback (most recent call last):
File "/app/tangbh/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/app/tangbh/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/app/tangbh/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: Key model/conv2d_1/W_conv2d not found in checkpoint
[[{{node save/RestoreV2}}]]
(1) Not found: Key model/conv2d_1/W_conv2d not found in checkpoint
[[{{node save/RestoreV2}}]]
[[save/RestoreV2/_675]]
0 successful operations.
0 derived errors ignored.

A model I trained myself does not produce this error. Could it be a version problem?

The MPI version's strength is not improving

Hello. I recently trained the multi-process version for over a thousand games. When the loss dropped to about 3.1 it stopped decreasing, and the self-play record stayed at 5:5 for a long time, yet in actual play the model is still quite weak.

My parameters are the same as in the source code, and I trained fewer than 2,000 games. I don't know whether something went wrong with the model; for now I have raised the playout number from 400 to 1,600 and am continuing to train; if there is no improvement I may retrain from scratch.

Have you run into anything similar?
Looking forward to your reply.

Hi, I'm the person who e-mailed you three days ago.

You said "leave the question here", so I'm writing the question here.

After I mailed you, I compared mcts_alphazero.py and mcts_pure.py as far as I could.

I found some differences: in mcts_alphazero, more variables are added (like the Dirichlet noise and the temperature).

As far as I understand, variables like the Dirichlet noise and temperature in mcts_alphazero are devices that let it explore more than mcts_pure...

Are there any other differences between them? Am I understanding this right?

Is it feasible to change the value head into a multi-head output like DQN's?

By imitating your framework my experiments have had some success; thank you very much.
I have a few ideas I'd like to discuss with you:

Change the value head's output layer to be like DQN's (one Q value per action). The value could then be computed in several ways:

  1. Each node's value is the maximum of the multiple Q values produced by the Q network
  2. Each node's value is the sum of the Q values weighted by the policy head's probabilities
  3. Each node's value is the Q value of the action with the highest policy-head probability

Going further, drop the policy head entirely and obtain the action probabilities from a softmax over the child nodes' Q values (a small sketch of these options follows).
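A minimal numpy sketch of the three options above plus the softmax idea, using hypothetical q_values and policy_probs arrays purely for illustration:

import numpy as np

q_values = np.array([0.1, -0.2, 0.4])      # hypothetical per-action Q from a DQN-style head
policy_probs = np.array([0.2, 0.3, 0.5])   # hypothetical per-action policy probabilities

value_1 = q_values.max()                           # option 1: max over Q
value_2 = float(np.dot(policy_probs, q_values))    # option 2: policy-weighted sum of Q
value_3 = q_values[policy_probs.argmax()]          # option 3: Q of the most likely action

# without a policy head: action probabilities from a softmax over child Q values
probs_from_q = np.exp(q_values) / np.exp(q_values).sum()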

Some questions about the MPI version

Hello, while comparing the MPI version and the single-process version, I found a notable difference.

In the single-process version, training pits current_mcts_player against test_player.
In the multi-process version, besides that matchup, there is also current_mcts_player playing against mcts_player_oppo, which looks more like "continuously surpassing oneself".

This difference also seems to make the CPU run unusually hot with the MPI version: with 3 processes, CPU usage is 80% but the heat is considerable...

So may I ask what the reasoning is behind this difference between the single-process and multi-process versions?

Thanks, this is the most useful framework I have used. I ported the model to Keras. One question: late in training, episode_len keeps shrinking, to around 9-15

I trained this project successfully after porting the model to Keras. My question: late in training, episode_len gets smaller and smaller, down to around 9-15. My understanding is that the further training goes, the better the model should be, so self-play games should last more moves and episode_len should grow. Why does it shrink, even though the model still plays well? I'm puzzled by this. Is it because temp is 1 during first_n_moves?

Beginner asking for help with MPI inter-process communication

Hello, I've been studying your project recently and there's something I can't figure out. To implement batched prediction, how can I use mpi4py to build a buffer queue and check whether new elements have arrived? I tried send and recv, but recv blocks the process when there is nothing to receive. How can I keep executing the following code when nothing has arrived? Many thanks.
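One common pattern is to poll with MPI.Comm.Iprobe, which returns immediately, and only call recv when a message is actually waiting. Below is a minimal, hypothetical sketch; the tag, loop bounds, and message contents are illustrative and not this repo's protocol.

from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
TAG_STATE = 11  # illustrative tag for board states

if rank == 0:
    # collector: poll for incoming states without blocking
    buffered = []
    for _ in range(100):                     # bounded loop, just for the sketch
        while comm.Iprobe(source=MPI.ANY_SOURCE, tag=TAG_STATE):
            buffered.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_STATE))
        if buffered:
            print('batch of', len(buffered), 'states ready for prediction')
            buffered.clear()
        time.sleep(0.01)                     # do other work instead of busy-waiting
else:
    comm.send({'board': 'dummy state'}, dest=0, tag=TAG_STATE)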

Some questions about MCTS_Pure

Hello, I'd like to ask what MCTS_Pure is for.
From my study of the code these days, in the single-process version it serves as the opponent during kifu generation, while in the multi-process version it is used only as the opponent during evaluation.

So what exactly is its role? Why is it the opponent in the single-process version but only used for evaluation in the multi-process version? Does pure_mcts_playout_num affect its strength, and is larger always better?

Looking forward to your reply.

A question about white's annealing temperature

In Gomoku without forbidden moves or a swap rule, black has a large advantage, whereas in Go, even one or two bad moves in the first 40 can still be recovered over a game of at least two or three hundred moves. If white still leans toward exploration, won't it fall behind too quickly, so that games end early and the value of the self-play kifu drops?
In other words, would letting black explore new openings in the first 12 moves while white always plays its strongest response, partly offsetting black's first-move advantage, improve the quality of the self-play kifu? Looking forward to your answer, thanks!

I have a question about entropy

I understand that entropy is simply a monitoring metric and that well-trained models have small entropy. So is entropy effectively a measure of the variance?

A basic question about single-GPU training

Hello, I'd like to ask: in your code, cuda defaults to True for the three processes with rank 0, 1, and 2, but when there is only one GPU, is there a way to let multiple processes share that one GPU? I didn't see the code specify how much GPU memory or compute each process may use; will TensorFlow automatically allocate memory and compute resources to each graph?
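For what it's worth, in TensorFlow 1.x each process normally grabs nearly all GPU memory by default; several MPI processes can share one GPU if each session is created with a capped or on-demand memory allocation, roughly as in the hedged sketch below (not this repo's actual configuration):

import tensorflow as tf

# per-process session config so several processes fit on one GPU
config = tf.ConfigProto()
config.gpu_options.allow_growth = True                     # allocate memory on demand
config.gpu_options.per_process_gpu_memory_fraction = 0.3   # or hard-cap each process
sess = tf.Session(config=config)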

Transfer learning is amazing, interesting!

I used the trained 11x11 model to train a 15x15 board; after only two days it already plays very well, at least I can no longer beat it.
Also, I slightly modified the TensorFlow file and removed the tensorlayer parts.

Problem with continuing training from the provided model

Training from the model you provided, it seems to have gone off the rails: it went from a strong player straight to a beginner.
What should I do in this situation?
The learning rate is 2e-3, play_out is 200, and first_moves is 1.
Thanks for your work, by the way.

Hello, I have a question I'd like to ask

Hello, I'd like to ask about expand in mcts_alphazero.py: what is the meaning and effect of the Dirichlet noise parameter, and what happens if the noise is added in different places?
For example, you set it to 0.3 for the 11x11 board; what is that value based on? Your comment says a 20x20 board should use 0.03; is that value derived mathematically, or tuned within a rough range?
I'm a first-year undergraduate very interested in reinforcement learning; my knowledge is limited, so my question may be basic, but I really can't figure out what this parameter means, while I can follow most of the rest of the code. Could you explain it? Thanks a lot.

Tricks Discussion

First of all, thank you. It's really nice to see so many people on GitHub with the same interest who actually put it into practice, and you are the most responsive repo owner I've come across :D

I have a few questions about the tricks you use:

  1. I think there are three ways to add noise to increase game diversity: a) when computing the score in select; b) when assigning priors in expand; c) when choosing the move in play. You use the second, and I wonder whether you have tried the others. The second is the only one that changes the data in the tree itself; doesn't that make the training process messy?
    In fact, in my own attempts (my code is written from scratch, with only some algorithmic steps referencing various repos, so my results may differ from yours), the second converges the slowest and is the first to get stuck in a local minimum... though from the results you describe, it seems to have the higher ceiling

  2. I read your code and didn't seem to find any root-parallel part, only multi-process data collection. Did I miss it? Please point it out

  3. Is constant evaluation necessary? According to alpha-zero-general, keeping only the latest model seems to converge faster.

In fact, I first tried training connect-four and everything went smoothly: by the time the policy loss reached 2.1 I was already losing more than winning against it. Then I extended it to five-in-a-row, and even after trying many different settings, including network depth, history buffer length, game noise, and even hyperparameters like c_puct and tau, it still can't beat me even as the first player. Have you hit similar pitfalls? Incidentally, during training I often run into saddle points where the loss stops decreasing and then slowly rises if I keep training, so I have to stop and restart from a checkpoint. My policy loss ends up oscillating between 2.0 and 2.2 and won't go lower no matter what I tune.

Thanks!

Porting to PyTorch

Hello, I'm a beginner. I'd like to convert your TensorFlow model to PyTorch. What do I need to learn? I already have some basics in both TensorFlow and PyTorch, but there are still many places I don't understand. Thanks!

Some confusion about Dihedral Reflection or Rotation and the node probability p

Hello! I'm impressed by your parallel implementation, but I still have a couple of points of confusion I'd like to ask about. First, the Dihedral Reflection or Rotation part of the implementation: I've read it many times and still don't quite understand it. Does it randomly rotate the board before assigning probabilities to the nodes? From the paper to the code I've never fully understood this part; at first I thought it was simply training-data augmentation, but after reading your code it seems to be a separate step. Second, about the probability p assigned to tree nodes: as far as I can tell it is not re-normalized over the legal actions (the probabilities over a node's legal children don't sum to 1). Does this not matter, or did I read it too carelessly? I'd appreciate some pointers!
