
netmamba's Introduction

NetMamba

Efficient Network Traffic Classification via Pre-training Unidirectional Mamba

Tongze Wang, Xiaohui Xie, Wenduo Wang, Chuyi Wang, Youjian Zhao, Yong Cui

arXiv paper (arXiv:2405.11449)

Overview

Environment Setup

  • Create python environment
    • conda create -n NetMamba python=3.10.13
    • conda activate NetMamba
  • Install PyTorch 2.1.1+cu121 and torchvision 0.16.1+cu121 (the versions used in our experiments)
    • pip install torch==2.1.1 torchvision==0.16.1 --index-url https://download.pytorch.org/whl/cu121
  • Install Mamba 1.1.1
    • cd mamba-1p1p1
    • pip install -e .
  • Install other dependent libraries
    • pip install -r requirements.txt
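
As a quick optional sanity check after installation (a minimal sketch, assuming the bundled mamba-1p1p1 package built successfully), the following should run without errors on a CUDA machine:

import torch
from mamba_ssm import Mamba  # provided by the editable mamba-1p1p1 install

print(torch.__version__)           # expect 2.1.1+cu121
print(torch.cuda.is_available())   # expect True
layer = Mamba(d_model=256).cuda()  # a single Mamba mixer layer
out = layer(torch.randn(1, 8, 256, device="cuda"))
print(out.shape)                   # torch.Size([1, 8, 256])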

Data Preparation

Download our processed datasets

For simplicity, you can download the processed datasets used in our experiments from Google Drive.

Each dataset is organized into the following structure:

.
|-- train
|   |-- Category 1
|   |   |-- Sample 1
|   |   |-- Sample 2
|   |   |-- ...
|   |   `-- Sample M
|   |-- Category 2
|   |-- ...
|   `-- Category N
|-- test
`-- valid
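
The training logs quoted in the issues below show that each split is loaded as a standard torchvision ImageFolder; here is a minimal loading sketch (the <your-dataset-dir> placeholder is yours to fill in):

import torchvision.transforms as T
from torchvision.datasets import ImageFolder

# Transform matching the one printed in the training logs
transform = T.Compose([
    T.Grayscale(num_output_channels=1),
    T.ToTensor(),
    T.Normalize(mean=[0.5], std=[0.5]),
])
train_set = ImageFolder("<your-dataset-dir>/train", transform=transform)
print(len(train_set), train_set.classes)  # sample count and category names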

Process your own datasets

If you'd like to generate customized datasets, please refer to the preprocessing scripts provided in the dataset directory. Note that you will need to change several file paths accordingly.

Run NetMamba

  • Run pre-training:
CUDA_VISIBLE_DEVICES=0 python src/pre-train.py \
    --batch_size 128 \
    --blr 1e-3 \
    --steps 150000 \
    --mask_ratio 0.9 \
    --data_path <your-dataset-dir> \
    --output_dir <your-output-dir> \
    --log_dir <your-output-dir> \
    --model net_mamba_pretrain \
    --no_amp
  • Run fine-tuning (including evaluation):
CUDA_VISIBLE_DEVICES=0 python src/fine-tune.py \
    --blr 2e-3 \
    --epochs 120 \
    --nb_classes <num-class> \
    --finetune <pretrain-checkpoint-path> \
    --data_path <your-dataset-dir> \
    --output_dir <your-output-dir> \
    --log_dir <your-output-dir> \
    --model net_mamba_classifier \
    --no_amp

Note that you should replace each variable in the < > format with your actual value.

Checkpoint

The pre-trained checkpoint of NetMamba is available for download from our Hugging Face repo. Feel free to access it at your convenience. If you require any other checkpoints, please contact us via email.
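
To take a quick look at a downloaded checkpoint before passing it to --finetune (a minimal sketch; the exact key layout inside the file is an assumption, not documented here):

import torch

ckpt = torch.load("<pretrain-checkpoint-path>", map_location="cpu")
print(ckpt.keys())  # a 'model' state-dict entry is typical for scripts of this kind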

Citation

@misc{wang2024netmamba,
      title={NetMamba: Efficient Network Traffic Classification via Pre-training Unidirectional Mamba}, 
      author={Tongze Wang and Xiaohui Xie and Wenduo Wang and Chuyi Wang and Youjian Zhao and Yong Cui},
      year={2024},
      eprint={2405.11449},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

netmamba's People

Contributors

wangtz19


netmamba's Issues

About Dataset Usage in NetMamba

Hello,

I have been exploring your work on NetMamba and found it very impressive. I noticed that while you have included ET-bert in the comparison models, the CSTNET-TLS 1.3 dataset, which ET-bert has been tested on, is not used in your experiments.

Could you please share the reason for not using the CSTNET-TLS 1.3 dataset in your work? Understanding your rationale would be very helpful for my research.

Thank you for your time and for sharing this excellent work with the community!

RuntimeError: CUDA error: no kernel image is available for execution on the device

Ubuntu 22.04, CUDA 12.1


CUDA available: True
Number of CUDA devices: 1
Device 0: NVIDIA GeForce GTX 1080 Ti
Current device: 0
Current device name: NVIDIA GeForce GTX 1080 Ti
CUDA version: 12.1
CUDNN version: 8902


(NetMamba) clb@MS-7C82:~/桌面/NetMamba$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0


(NetMamba) clb@MS-7C82:~/桌面/NetMamba$ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=1 python src/pre-train.py --batch_size 32 --blr 1e-3 --steps 150000 --mask_ratio 0.9 --data_path dataset/root/Vim/dataset/ISCXVPN2016/dataset_sampled --model net_mamba_pretrain --no_amp --num_workers 0
Not using distributed mode
[11:13:47.785222] job dir: /home/clb/桌面/NetMamba/src
[11:13:47.785270] Namespace(batch_size=32,
steps=150000,
epochs=400,
save_steps_freq=10000,
accum_iter=1,
model='net_mamba_pretrain',
input_size=40,
mask_ratio=0.9,
norm_pix_loss=False,
weight_decay=0.05,
lr=None,
blr=0.001,
min_lr=0.0,
warmup_epochs=25,
data_path='dataset/root/Vim/dataset/ISCXVPN2016/dataset_sampled',
output_dir='./output/pretrain',
log_dir='./output/pretrain',
device='cuda',
seed=0,
resume='',
num_workers=0,
pin_mem=True,
world_size=1,
local_rank=-1,
dist_on_itp=False,
dist_url='env://',
if_amp=False,
pop=False,
pop_loss_weight=0.01,
distributed=False)
[11:13:47.811319] Dataset ImageFolder
Number of datapoints: 13281
Root location: dataset/root/Vim/dataset/ISCXVPN2016/dataset_sampled/train
StandardTransform
Transform: Compose(
Grayscale(num_output_channels=1)
ToTensor()
Normalize(mean=[0.5], std=[0.5])
)
[11:13:47.811398] Sampler_train = <torch.utils.data.distributed.DistributedSampler object at 0x7253e03cb310>
[11:13:47.927809] Model = NetMamba(
(patch_embed): StrideEmbed(
(proj): Conv1d(1, 256, kernel_size=(4,), stride=(4,))
)
(drop_path): DropPath()
(blocks): ModuleList(
(0-1): 2 x Block(
(mixer): Mamba(
(in_proj): Linear(in_features=256, out_features=1024, bias=False)
(conv1d): Conv1d(512, 512, kernel_size=(4,), stride=(1,), padding=(3,), groups=512)
(act): SiLU()
(x_proj): Linear(in_features=512, out_features=48, bias=False)
(dt_proj): Linear(in_features=16, out_features=512, bias=True)
(out_proj): Linear(in_features=512, out_features=256, bias=False)
)
(norm): RMSNorm()
(drop_path): Identity()
)
(2-3): 2 x Block(
(mixer): Mamba(
(in_proj): Linear(in_features=256, out_features=1024, bias=False)
(conv1d): Conv1d(512, 512, kernel_size=(4,), stride=(1,), padding=(3,), groups=512)
(act): SiLU()
(x_proj): Linear(in_features=512, out_features=48, bias=False)
(dt_proj): Linear(in_features=16, out_features=512, bias=True)
(out_proj): Linear(in_features=512, out_features=256, bias=False)
)
(norm): RMSNorm()
(drop_path): DropPath()
)
)
(norm_f): RMSNorm()
(decoder_embed): Linear(in_features=256, out_features=128, bias=True)
(decoder_blocks): ModuleList(
(0-1): 2 x Block(
(mixer): Mamba(
(in_proj): Linear(in_features=128, out_features=512, bias=False)
(conv1d): Conv1d(256, 256, kernel_size=(4,), stride=(1,), padding=(3,), groups=256)
(act): SiLU()
(x_proj): Linear(in_features=256, out_features=40, bias=False)
(dt_proj): Linear(in_features=8, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=128, bias=False)
)
(norm): RMSNorm()
(drop_path): Identity()
)
)
(decoder_norm_f): RMSNorm()
(decoder_pred): Linear(in_features=128, out_features=4, bias=True)
)
[11:13:47.928055] trainable params: 2174724 || all params: 2174724 || trainable%: 100.0000
[11:13:47.928089] base lr: 1.00e-03
[11:13:47.928098] actual lr: 1.25e-04
[11:13:47.928104] accumulate grad iterations: 1
[11:13:47.928110] effective batch size: 32
[11:13:47.928466] AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.95)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
lr: 0.000125
maximize: False
weight_decay: 0.0

Parameter Group 1
amsgrad: False
betas: (0.9, 0.95)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
lr: 0.000125
maximize: False
weight_decay: 0.05
)
[11:13:47.928530] Start training for 150000 steps
[11:13:47.928979] log_dir: ./output/pretrain
Traceback (most recent call last):
File "/home/clb/桌面/NetMamba/src/pre-train.py", line 236, in
main(args)
File "/home/clb/桌面/NetMamba/src/pre-train.py", line 205, in main
train_stats = pretrain_one_epoch(
File "/home/clb/桌面/NetMamba/src/engine.py", line 48, in pretrain_one_epoch
loss, _, _ = model(samples, mask_ratio=args.mask_ratio)
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/clb/桌面/NetMamba/src/models_net_mamba.py", line 264, in forward
latent, mask, ids_restore = self.forward_encoder(imgs,
File "/home/clb/桌面/NetMamba/src/models_net_mamba.py", line 188, in forward_encoder
hidden_states, residual = blk(hidden_states, residual)
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/clb/桌面/NetMamba/src/models_mamba.py", line 94, in forward
hidden_states = self.mixer(hidden_states, inference_params=inference_params)
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/clb/桌面/NetMamba/mamba-1p1p1/mamba_ssm/modules/mamba_simple.py", line 249, in forward
out = mamba_inner_fn(
File "/home/clb/桌面/NetMamba/mamba-1p1p1/mamba_ssm/ops/selective_scan_interface.py", line 612, in mamba_inner_fn
return MambaInnerFn.apply(xz, conv1d_weight, conv1d_bias, x_proj_weight, delta_proj_weight,
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/autograd/function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/clb/anaconda3/envs/NetMamba/lib/python3.10/site-packages/torch/cuda/amp/autocast_mode.py", line 113, in decorate_fwd
return fwd(*args, **kwargs)
File "/home/clb/桌面/NetMamba/mamba-1p1p1/mamba_ssm/ops/selective_scan_interface.py", line 318, in forward
conv1d_out = causal_conv1d_cuda.causal_conv1d_fwd(x, conv1d_weight, conv1d_bias, None, True)
RuntimeError: CUDA error: no kernel image is available for execution on the device
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

May I ask what is causing this error?
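
One plausible cause (an assumption on our part, not an official answer): the precompiled causal-conv1d / mamba-ssm CUDA kernels target newer architectures than the GTX 1080 Ti's Pascal generation, so no kernel image matches the device. A quick way to see what the GPU reports:

import torch

# The GTX 1080 Ti reports compute capability (6, 1); the Mamba kernels are
# typically built for (7, 0) and newer, which would explain this error.
print(torch.cuda.get_device_capability(0))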

TypeError in models_mamba.py: Mamba.__init__() got an unexpected keyword argument 'init_layer_scale'

def create_block(
    d_model,
    ssm_cfg=None,
    norm_epsilon=1e-5,
    drop_path=0.,
    rms_norm=False,
    residual_in_fp32=False,
    fused_add_norm=False,
    layer_idx=None,
    device=None,
    dtype=None,
    if_bimamba=False,
    bimamba_type="none",
    if_devide_out=False,
    init_layer_scale=None,
):
    if if_bimamba:
        bimamba_type = "v1"
    if ssm_cfg is None:
        ssm_cfg = {}
    factory_kwargs = {"device": device, "dtype": dtype}
    mixer_cls = partial(Mamba, layer_idx=layer_idx, bimamba_type=bimamba_type, if_devide_out=if_devide_out, init_layer_scale=init_layer_scale, **ssm_cfg, **factory_kwargs)

In the mixer_cls = partial(Mamba, ...) line above, the Mamba class being instantiated has no bimamba_type, if_devide_out, or init_layer_scale parameters.
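
A plausible diagnosis (an assumption, not an official answer): a stock mamba-ssm from PyPI is shadowing the repo's bundled mamba-1p1p1 fork, whose Mamba variant does accept these extra keyword arguments. A quick check:

import inspect
import mamba_ssm
from mamba_ssm import Mamba

# Should point into the repo's mamba-1p1p1 directory, not site-packages.
print(mamba_ssm.__file__)
# The bundled fork lists bimamba_type, if_devide_out and init_layer_scale
# among its parameters; the upstream PyPI package does not.
print(inspect.signature(Mamba.__init__))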

AttributeError during fine-tuning: 'NetMamba' object has no attribute 'decoder_pos_embed'

I followed the README (CUDA 12.1, GPU 2080 Ti). Pre-training completed, but fine-tuning fails with the error below. Could this be a problem with my package installation? Thanks.

(NetMamba) root@autodl-container-b9b842b3fc-38b33995:/autodl-tmp/NetMamba# CUDA_VISIBLE_DEVICES=0 python src/fine-tune.py --blr 2e-3 --epochs 120 --nb_classes 7 --finetune output/pretrain/checkpoint-step10000.pth --data_path dataset/ISCXVPN2016 --model net_mamba_classifier --no_amp
Not using distributed mode
[10:24:46.114657] job dir: /root/autodl-tmp/NetMamba/src
[10:24:46.114849] Namespace(batch_size=64,
epochs=120,
save_steps_freq=5000,
accum_iter=1,
model='net_mamba_classifier',
input_size=40,
drop_path=0.1,
clip_grad=None,
weight_decay=0.05,
lr=None,
blr=0.002,
layer_decay=0.75,
min_lr=1e-06,
warmup_epochs=20,
color_jitter=None,
aa='rand-m9-mstd0.5-inc1',
smoothing=0.1,
reprob=0.25,
remode='pixel',
recount=1,
resplit=False,
mixup=0,
cutmix=0,
cutmix_minmax=None,
mixup_prob=1.0,
mixup_switch_prob=0.5,
mixup_mode='batch',
finetune='output/pretrain/checkpoint-step10000.pth',
data_path='dataset/ISCXVPN2016',
nb_classes=7,
output_dir='./output/finetune',
log_dir='./output/finetune',
device='cuda',
seed=0,
resume='',
start_epoch=0,
eval=False,
dist_eval=False,
num_workers=10,
pin_mem=True,
world_size=1,
local_rank=-1,
dist_on_itp=False,
dist_url='env://',
if_amp=False,
distributed=False)
[10:24:46.161752] Dataset ImageFolder
Number of datapoints: 13281
Root location: dataset/ISCXVPN2016/train
StandardTransform
Transform: Compose(
Grayscale(num_output_channels=1)
ToTensor()
Normalize(mean=[0.5], std=[0.5])
)
[10:24:46.166717] Dataset ImageFolder
Number of datapoints: 1383
Root location: dataset/ISCXVPN2016/valid
StandardTransform
Transform: Compose(
Grayscale(num_output_channels=1)
ToTensor()
Normalize(mean=[0.5], std=[0.5])
)
[10:24:46.171635] Dataset ImageFolder
Number of datapoints: 1384
Root location: dataset/ISCXVPN2016/test
StandardTransform
Transform: Compose(
Grayscale(num_output_channels=1)
ToTensor()
Normalize(mean=[0.5], std=[0.5])
)
[10:24:46.171723] Sampler_train = <torch.utils.data.distributed.DistributedSampler object at 0x7fc93b98fa00>
Traceback (most recent call last):
File "/root/autodl-tmp/NetMamba/src/fine-tune.py", line 397, in
main(args)
File "/root/autodl-tmp/NetMamba/src/fine-tune.py", line 234, in main
model = models_net_mamba.__dict__[args.model](
File "/root/autodl-tmp/NetMamba/src/models_net_mamba.py", line 275, in net_mamba_classifier
model = NetMamba(
File "/root/autodl-tmp/NetMamba/src/models_net_mamba.py", line 91, in init
self.initialize_weights()
File "/root/autodl-tmp/NetMamba/src/models_net_mamba.py", line 96, in initialize_weights
trunc_normal_(self.decoder_pos_embed, std=.02)
File "/root/miniconda3/envs/NetMamba/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in getattr
raise AttributeError(f"'{type(self).name}' object has no attribute '{name}'")
AttributeError: 'NetMamba' object has no attribute 'decoder_pos_embed'
(NetMamba) root@autodl-container-b9b842b3fc-38b33995:/autodl-tmp/NetMamba# pip list
Package Version Editable project location


absl-py 2.1.0
addict 2.4.0
aiohttp 3.9.1
aiosignal 1.3.1
alembic 1.13.0
asttokens 2.4.1
async-timeout 4.0.3
attrs 23.1.0
Automat 22.10.0
blinker 1.7.0
buildtools 1.0.6
causal-conv1d 1.1.0
certifi 2023.11.17
charset-normalizer 3.3.2
click 8.1.7
cloudpickle 3.0.0
comm 0.2.2
constantly 23.10.4
contourpy 1.2.0
cycler 0.12.1
databricks-cli 0.18.0
datasets 2.15.0
debugpy 1.8.1
decorator 5.1.1
dill 0.3.7
docker 6.1.3
docopt 0.6.2
einops 0.7.0
entrypoints 0.4
exceptiongroup 1.2.0
executing 2.0.1
filelock 3.13.1
Flask 3.0.0
fonttools 4.46.0
frozenlist 1.4.0
fsspec 2023.10.0
furl 2.1.3
fvcore 0.1.5.post20221221
gitdb 4.0.11
GitPython 3.1.40
greenlet 3.0.2
grpcio 1.62.1
gunicorn 21.2.0
huggingface-hub 0.19.4
hyperlink 21.0.0
idna 3.6
importlib-metadata 7.0.0
incremental 22.10.0
iopath 0.1.10
ipykernel 6.29.4
ipython 8.22.2
itsdangerous 2.1.2
jedi 0.19.1
Jinja2 3.1.2
joblib 1.3.2
jupyter_client 8.6.1
jupyter_core 5.7.2
kiwisolver 1.4.5
Mako 1.3.0
mamba_ssm 1.1.1 /root/autodl-tmp/NetMamba/mamba-1p1p1
Markdown 3.5.1
MarkupSafe 2.1.3
matplotlib 3.8.2
matplotlib-inline 0.1.6
mlflow 2.9.1
mmcv 1.3.8
mmsegmentation 0.14.1
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
nest-asyncio 1.6.0
networkx 3.2.1
ninja 1.11.1.1
numpy 1.26.2
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.18.1
nvidia-nvjitlink-cu12 12.4.99
nvidia-nvtx-cu12 12.1.105
oauthlib 3.2.2
opencv-python 4.8.1.78
orderedmultidict 1.0.1
packaging 23.2
pandas 2.1.3
parso 0.8.3
pexpect 4.9.0
Pillow 10.1.0
pip 24.0
platformdirs 4.1.0
portalocker 2.8.2
prettytable 3.9.0
prompt-toolkit 3.0.43
protobuf 4.25.1
psutil 5.9.8
ptyprocess 0.7.0
pure-eval 0.2.2
pyarrow 14.0.1
pyarrow-hotfix 0.6
Pygments 2.17.2
PyJWT 2.8.0
pyparsing 3.1.1
python-dateutil 2.8.2
python-hostlist 1.23.0
pytz 2023.3.post1
PyYAML 6.0.1
pyzmq 25.1.2
querystring-parser 1.2.4
redo 2.0.4
regex 2023.10.3
requests 2.31.0
safetensors 0.4.1
scapy 2.5.0
scikit-learn 1.3.2
scipy 1.11.4
setuptools 69.5.1
simplejson 3.19.2
six 1.16.0
smmap 5.0.1
SQLAlchemy 2.0.23
sqlparse 0.4.4
stack-data 0.6.3
sympy 1.12
tabulate 0.9.0
tensorboard 2.16.2
tensorboard-data-server 0.7.2
termcolor 2.4.0
thop 0.1.1.post2209072238
threadpoolctl 3.2.0
timm 0.4.12
tokenizers 0.15.0
tomli 2.0.1
torch 2.1.1+cu121
torchaudio 2.1.1
torchprofile 0.0.4
torchvision 0.16.1+cu121
tornado 6.4
tqdm 4.66.1
traitlets 5.14.2
transformers 4.35.2
triton 2.1.0
Twisted 24.3.0
typing_extensions 4.8.0
tzdata 2023.3
urllib3 2.1.0
wcwidth 0.2.12
websocket-client 1.7.0
Werkzeug 3.0.1
wheel 0.43.0
xxhash 3.4.1
yacs 0.1.8
yapf 0.40.2
yarl 1.9.4
zipp 3.17.0
zope.interface 6.2

Mamba Inference

This question concerns Mamba itself rather than NetMamba.
During Mamba inference, does the decoder stage need to take all of the preceding tokens as input?
In Transformers, the KV cache saves a great deal of computation; how does Mamba handle this?
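
For context, a minimal sketch of why Mamba needs no growing cache (illustrative names, not the mamba_ssm API): the selective SSM carries a fixed-size recurrent state, so decoding consumes one token at a time and updates that state in place rather than attending over all previous tokens. In the bundled mamba-1p1p1 source, this is what inference_params caches: a small convolution state and SSM state per layer, so per-token cost stays constant.

import torch

# Illustrative single-channel selective-SSM recurrence:
#   h_t = A_bar * h_{t-1} + B_bar * x_t ;   y_t = C_t . h_t
d_state = 16
h = torch.zeros(d_state)         # fixed-size state, independent of sequence length
A_bar = torch.rand(d_state)      # discretized dynamics (input-dependent in Mamba)
for x_t in torch.randn(100):     # stream tokens one at a time
    B_bar = torch.randn(d_state)
    C_t = torch.randn(d_state)
    h = A_bar * h + B_bar * x_t  # O(d_state) work per token, no KV cache needed
    y_t = torch.dot(C_t, h)      # output depends only on the current state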

NetMamba Framework

Could you please tell us which software or website you used to draw the NetMamba architecture diagram in your paper? It looks beautiful!


Failed to split pcap files with python dataset_x.py

Hello, when I use python dataset_x.py to process the datasets, the traffic cannot be split correctly.

I first ran python dataset_iscx_tor2016.py on a Linux server, and every log file reported "pcap_tool/splitter: error while loading shared libraries: libpcap.so.0.8: cannot open shared object file: No such file or directory". The fixes I found online were unusable because I lack the required permissions.

I then ran python dataset_iscx_tor2016.py on my local Windows machine, and every .log file reported "'F:/traffic' is not recognized as an internal or external command, operable program or batch file."

How should I resolve this?

As shown in the screenshots below, running python dataset_iscx_tor2016.py produces a "flows" folder, but the per-category subfolders inside it are all empty, and the log files all report the "'F:/traffic' is not recognized..." error.
(screenshots: Snipaste_2024-06-13_22-05-42, Snipaste_2024-06-13_22-14-09)
