
dconnnet's Introduction

Directional Connectivity-based Segmentation of Medical Images

PyTorch implementation of the CVPR 2023 paper "Directional Connectivity-based Segmentation of Medical Images" [paper].

For another simple connectivity-based method, please also check out BiconNet.


Requirements

PyTorch 1.7.0+cu110

Code Structure

The main structure, important files, and key functions of this repository are as follows:


  - train.py: the main file. Define your parameters, GPU selection, etc.
  - solver.py: the training and testing details.
  - connect_loss.py: the loss function for DconnNet
    * connectivity_matrix: converts segmentation masks to connectivity masks (a toy sketch of the idea follows this list)
    * Bilateral_voting: performs bilateral voting and converts the connectivity-based output into a segmentation map

  - data_loader: your data loader files and, if needed, the SDL weights for your dataset
  - model: the DconnNet model files
  - scripts: scripts for training the different datasets
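
To give a feel for what connectivity_matrix produces, here is a toy sketch of the idea (not the repository's exact channel ordering or implementation): each of the 8 channels marks whether a pixel and its neighbour in one of the 8 directions both belong to the foreground.

import torch

def connectivity_from_mask(mask):
    # mask: binary tensor of shape (B, H, W); returns (B, 8, H, W)
    B, H, W = mask.shape
    padded = torch.zeros(B, H + 2, W + 2, dtype=mask.dtype)
    padded[:, 1:H + 1, 1:W + 1] = mask
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]    # the 8 neighbour directions (dy, dx)
    conn = torch.zeros(B, 8, H, W, dtype=mask.dtype)
    for d, (dy, dx) in enumerate(offsets):
        neighbour = padded[:, 1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        conn[:, d] = mask * neighbour        # connected iff both pixels are foreground
    return conn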

Implementation

Train on the datasets in the paper.

For the training details of each dataset, please check the scripts in scripts/.

Please store each dataset using the following paths:

Retouch

/retouch
  /Cirrus ### device, same for Spectralis and Topcon
    /train
      /TRAIN002 ### volume id
        /mask ### store .png masks here
        /orig ### store .png images here
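
As an illustration of how this layout can be traversed, here is a hypothetical helper (it assumes each mask shares its file name with the corresponding image; the repository's own Retouch loader in data_loader/ is the reference):

import os
from glob import glob

def list_retouch_pairs(root="/retouch", device="Cirrus", split="train"):
    # walk /retouch/<device>/<split>/<volume id>/{orig,mask} and pair slices with masks
    pairs = []
    for vol_dir in sorted(glob(os.path.join(root, device, split, "*"))):
        for img_path in sorted(glob(os.path.join(vol_dir, "orig", "*.png"))):
            mask_path = os.path.join(vol_dir, "mask", os.path.basename(img_path))
            pairs.append((img_path, mask_path))
    return pairs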

ISIC2018

The resized data we used and the training pipeline in our paper follow this site, with the following data layout and hyperparameters:

/ISIC2018_npy_all_224_320
  /image
  /label

Image size: (224, 320)
Batch size: 10
Epochs: 200
Starting lr: 1e-4
Optimizer: Adam with weight decay 1e-8
LR scheduler: CosineAnnealingWarmRestarts (T_0=15, T_mult=2, eta_min=1e-5).
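
A minimal sketch of this optimisation setup, assuming model, train_loader, and criterion (the connectivity loss) already exist; the authoritative settings live in train.py, solver.py, and scripts/:

import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-8)
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=15, T_mult=2, eta_min=1e-5)

for epoch in range(200):                      # 200 epochs
    for images, masks in train_loader:        # batch size 10, images resized to (224, 320)
        output, aux_out = model(images)       # DconnNet returns a main and an auxiliary prediction
        loss = criterion(output, masks) + criterion(aux_out, masks)   # illustrative; exact combination as in solver.py
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()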

However, different settings of the ISIC data (e.g., different sizes or pipelines) might yield different results. We recommend trying DconnNet on your own ISIC data setting, following the guide in the next section.

CHASEDB1

/CHASEDB1
  /img
  /gt

Train on your own dataset using this code.

  1. Write your own dataloader (a minimal skeleton is sketched after this list).
  2. Plug your dataloader into the main() function of train.py. If you need k-fold cross-validation, use exp_id to specify your sub-folds.
  3. Specify your network settings in train.py.
  4. Run: python train.py
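
A hypothetical skeleton for step 1 (paths, transforms, and the mask encoding are placeholders; only the (image, mask) return convention mirrors the loaders in data_loader/):

import os
from glob import glob

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, img_dir, mask_dir, size=(224, 320)):   # size is (H, W)
        self.img_paths = sorted(glob(os.path.join(img_dir, "*.png")))
        self.mask_paths = sorted(glob(os.path.join(mask_dir, "*.png")))
        self.size = size

    def __len__(self):
        return len(self.img_paths)

    def __getitem__(self, idx):
        img = Image.open(self.img_paths[idx]).convert("RGB").resize(self.size[::-1])
        mask = Image.open(self.mask_paths[idx]).convert("L").resize(self.size[::-1], Image.NEAREST)
        img = torch.from_numpy(np.array(img)).permute(2, 0, 1).float() / 255.0
        mask = torch.from_numpy(np.array(mask)).long()         # integer class labels per pixel
        return img, mask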

Train DconnNet in your own codebase.

Important: please follow these steps to ensure a correct implementation:

  1. Get our model files from /model

  2. In the training phase, please use connect_loss.py as the loss function

    • for single-class, use connect_loss.single_class_forward
    • for general multi-class, use connect_loss.multi_class_forward
  3. In the testing phase, please follow our official procedure in test_epoch of /solver.py based on the number of your classes.

    • for single-class, the final predictions are obtained by sigmoid --> threshold --> Bilateral_voting
    • for general multi-class, the final predictions are obtained by Bilateral_voting --> topK (softmax + topK)
    • you may also need to create two variables, hori_translation and verti_translation, for matrix shifting in this step; you can follow our code or customize your own shifting method. A hedged sketch of the single-class procedure is given below.
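
Below is a minimal sketch of the single-class testing order of operations (sigmoid --> threshold --> Bilateral_voting). The shift-matrix construction, the (B, class, 8, H, W) reshape, and especially the Bilateral_voting signature and return values are assumptions here and should be checked against test_epoch in solver.py.

import torch
from connect_loss import Bilateral_voting   # provided by this repository

def predict_single_class(net, image, threshold=0.5):
    # image: (1, 3, H, W); DconnNet returns (c_map, aux), with c_map of shape (B, 8, H, W) for one class
    c_map, _ = net(image)
    B, _, H, W = c_map.shape
    # one-pixel shift matrices used by Bilateral_voting ("matrix shifting"); construction assumed from solver.py
    hori_translation = torch.zeros(B, 1, W, W)
    for i in range(W - 1):
        hori_translation[:, :, i, i + 1] = 1.0
    verti_translation = torch.zeros(B, 1, H, H)
    for j in range(H - 1):
        verti_translation[:, :, j, j + 1] = 1.0
    prob = torch.sigmoid(c_map)                      # 1) sigmoid
    conn = (prob > threshold).float()                # 2) threshold
    conn = conn.view(B, 1, 8, H, W)                  # (B, class=1, 8, H, W)
    # 3) bilateral voting; assumed to return (segmentation map, vote map) -- check solver.py
    pred, _ = Bilateral_voting(conn, hori_translation, verti_translation)
    return pred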

Notes on the code

Please always make sure the dimensions of your data are correct. For example, in connect_loss.py, we specify the expected shape of each tensor in the comments. When an issue occurs, please check the dimensions first.

Pretrained model

The pretrained model and predictions can be downloaded here.

If using the SDL loss:

Please pre-calculate the mask-size distribution and save it as a .npy file (i.e., the pos_cnt.npy used in the Solver) with shape (C, N), where C is the number of classes and N is the number of samples (images). For example, index (1, 10) stores the mask size (pixel count) of the second class in image 11. A minimal sketch is given below.
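
A sketch of that pre-computation (the label path and the class encoding, one integer per class starting at 1, are assumptions to adapt to your own dataset):

import numpy as np
from glob import glob
from PIL import Image

mask_paths = sorted(glob("your_dataset/label/*.png"))     # N training masks (hypothetical path)
num_classes = 3                                            # C (assumption; set to your class count)

pos_cnt = np.zeros((num_classes, len(mask_paths)), dtype=np.int64)
for n, path in enumerate(mask_paths):
    mask = np.array(Image.open(path))
    for c in range(num_classes):
        pos_cnt[c, n] = int((mask == c + 1).sum())         # pixel count of class c+1 in image n

np.save("pos_cnt.npy", pos_cnt)                            # loaded by the Solver when SDL is enabled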

Citation

If you find this work useful in your research, please consider citing:

Z. Yang and S. Farsiu, "Directional Connectivity-based Segmentation of Medical Images," in CVPR, 2023, pp. 11525-11535.


dconnnet's Issues

Prediction results

Our number of classes is 1, and the final predicted output has shape [1, 8, 320, 320]. Which of the 8 channels should we choose as the predicted result?

About the disentanglement of directional subspace

In the paper related to this code, Figure 10 shows different t-SNE results.
Do you have any code here for visualizing the output of the SDEs as 8 categories? I would like to test it locally.
Thank you!

RuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 5 is not equal to len(dims) = 4

When I run train.py on the ReTouch dataset, I get the following error:

/anaconda3/envs/pt1130cu117py39/lib/python3.9/site-packages/apex-0.1-py3.9.egg/apex/init.py:68: DeprecatedFeatureWarning: apex.amp is deprecated and will be removed by the end of February 2023. Use PyTorch AMP
/anaconda3/envs/pt1130cu117py39/lib/python3.9/site-packages/torch/nn/functional.py:1967: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
Traceback (most recent call last):
File "/home/lgc/pycharm_code/DconnNet-main/train.py", line 151, in
main(args)
File "/home/lgc/pycharm_code/DconnNet-main/train.py", line 147, in main
solver.train(model, train_loader, val_loader,exp_id+1, num_epochs=args.epochs)
File "/home/lgc/pycharm_code/DconnNet-main/solver.py", line 147, in train
loss_main = self.loss_func(output, y)
File "/anaconda3/envs/pt1130cu117py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lgc/pycharm_code/DconnNet-main/connect_loss.py", line 172, in forward
loss = self.multi_class_forward(c_map, target)
File "/home/lgc/pycharm_code/DconnNet-main/connect_loss.py", line 185, in multi_class_forward
onehotmask = onehotmask.permute(0,3,1,2)
RuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 5 is not equal to len(dims) = 4

Questions about training time.

I reproduced the model from this article and loaded my own dataset for verification. There are 3,062 samples in total. The batch size is set to 8, and it takes up to an hour to train one epoch. Is this normal? How long does it take you to train one epoch? Thank you very much.


Experimental details about T-SNE

Hello, could you please elaborate on the experimental details of the t-SNE plots in Figure 1 and Figure 10, especially the (B, C, H, W) settings?

Training error

Running python train.py gives:
Traceback (most recent call last):
File "train.py", line 150, in
main(args)
File "train.py", line 146, in main
solver.train(model, train_loader, val_loader,exp_id+1, num_epochs=args.epochs)
File "/root/autodl-tmp/DconnNet-main/solver.py", line 147, in train
loss_main = self.loss_func(output, y)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/autodl-tmp/DconnNet-main/connect_loss.py", line 172, in forward
loss = self.multi_class_forward(c_map, target)
File "/root/autodl-tmp/DconnNet-main/connect_loss.py", line 184, in multi_class_forward
onehotmask = onehotmask.permute(0,3,1,2)
RuntimeError: number of dims don't match in permute

a typo

DconnNet/connect_loss.py

Lines 243 to 249 in 70562e4

### get edges gt###
class_conn = con_target.view([c_map.shape[0],self.args.num_class,8,c_map.shape[2],c_map.shape[3]])
sum_conn = torch.sum(class_conn,dim=2)
### get edges gt###
class_conn = con_target.view([c_map.shape[0],self.args.num_class,8,c_map.shape[2],c_map.shape[3]])
sum_conn = torch.sum(class_conn,dim=2)

This part repeats the one above.

Question about t-SNE

Hi, I have a question about how Figures 2 and 10, which both use t-SNE, were produced. In particular, for Figure 10, how do you get the colors of the points in Figure 10(b)?

IndexError: list index out of range

Hi, I have read your paper and want to reproduce your experiment. I followed your README, downloaded the ISIC2018 dataset, and put it in the data_loader folder. Since I'm just getting started, I don't know why the following error occurred when I ran train.py:
Traceback (most recent call last):
File "C:\Users\TNC\Coderlife\DconnNet-main\train.py", line 151, in
main(args)
File "C:\Users\TNC\Coderlife\DconnNet-main\train.py", line 113, in main
test_root = [pat_ls[i] for i in test_id]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\TNC\Coderlife\DconnNet-main\train.py", line 113, in
test_root = [pat_ls[i] for i in test_id]
~~~~~~^^^
IndexError: list index out of range
I would sincerely appreciate it if you could answer me.

Help! About the input for training masks

Thank you for your code and work. I copied your model into my training code, but an error occurred during training.
connect_loss.py, around line 200:
class_pred = c_map.view([c_map.shape[0],1, 8, c_map.shape[2],c_map.shape[3]])
RuntimeError: shape '[1, 1, 8, 256, 256]' is invalid for input of size 65536.
My dataset's image annotations have shape (1, 1, 256, 256), but they need to be forcibly reshaped to (1, 8, 256, 256). So I would like to ask what shape the input annotation should have and how to set it up.
By the way, DconnNet.py returns (cls_pred, mapped_c5). I know cls_pred is the model's prediction, but what is the meaning of mapped_c5, and is it useful in the subsequent training and tests?

Regarding the visualization of latent space and feature maps

Hello, I have two questions. In Figure 1 of this paper, which layer's feature output in the code corresponds to the feature map displayed in the "connectivity-based models" section? And is it simply plotted using plt, or are there other details involved? Thank you!

Low Dice on CHASEDB1

I attempted to train the model on the CHASEDB1 dataset, but the resulting Dice score is significantly lower than expected, at around 0.23. In contrast, using the official pretrained model for testing yields a Dice score of around 0.85.

Environment

  • Windows 11
  • Python 3.10
  • PyTorch: 2.3.0+cu118 (Note: The requirement specifies torch1.7.0+cu110, which does not support RTX 40 GPUs)
  • I replaced the apex amp with PyTorch amp since I could not install apex properly

The following arguments were used during my training (running in PyCharm):

--dataset chase
--data_root ./dataset/CHASEDB1
--resize 960 960
--num-class 1
--batch-size 4
--epochs 130
--lr 0.0038
--lr-update poly
--folds 5

Resize Error

Hello,

I want to use DconnNet on another dataset (ACDC) for medical heart segmentation. For that I implemented a get_Dataset function for ACDC, similar to get_Dataset_CHASE. But for this dataset I want to change the resize hyperparameter in train.py to 250x250 instead of 960x960. Then an error occurs with the dimensions in model/DconnNet.py in the forward pass. I printed out the mismatching dimensions, but I don't know how to fix it.

Train batch number: 8
Test batch number: 10
Petrain Model Have been loaded!
START TRAIN.

Dimensions of tensor fb5(r5): torch.Size([5, 256, 8, 8])
Dimensions of tensor c4: torch.Size([5, 256, 7, 7])

Traceback (most recent call last):
File "/misc/usrhomes/d1488/train.py", line 172, in
main(args)
File "/misc/usrhomes/d1488/train.py", line 167, in main
solver.train(model, train_loader, val_loader, exp_id + 1, num_epochs=args.epochs) # call of the solver's train method
File "/misc/usrhomes/d1488/solver.py", line 152, in train
output, aux_out = net(X) # the model (net) is called with the input X to produce predictions (output) and auxiliary outputs (aux_out)
File "/no_backups/d1488/.pyenv/versions/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/misc/usrhomes/d1488/model/DconnNet.py", line 96, in forward
d4=self.relu(self.fb5(r5)+c4) #256
RuntimeError: The size of tensor a (8) must match the size of tensor b (7) at non-singleton dimension 3

Can you please help me with this, or describe what else I should do so that the input data can be resized to 250x250? I'm grateful for any help.

Problems with saving models

Hi, when saving the model the way you do in your code and then loading the trained model with "test_only", why does it behave the same as a randomly initialised model, with all metrics (on the ISIC2018 dataset) close to 0 except for accuracy?

retouch data preprocessing

Hello author, after downloading the dataset from the official website, how should I preprocess it? Where are the training lists? I'm very sorry to disturb you.

RuntimeError when using ISIC

G:\soft\Ano\envs\test1\python.exe G:\code\DconnNet-main\train.py
Train batch number: 182
Test batch number: 518
Petrain Model Have been loaded!
START TRAIN.
Selected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.

Defaults for this optimization level are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O2
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : True
master_weights : True
loss_scale : dynamic
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")
G:\soft\Ano\envs\test1\lib\site-packages\torch\nn\functional.py:1639: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
torch.Size([10, 8, 224, 320])
C:/cb/pytorch_1000000000000/work/aten/src/ATen/native/cuda/Loss.cu:102: block: [38,0,0], thread: [32,0,0] Assertion input_val >= zero && input_val <= one failed.
[... the same assertion repeated for many more blocks and threads ...]
Traceback (most recent call last):
File "G:\code\DconnNet-main\train.py", line 154, in
main(args)
File "G:\code\DconnNet-main\train.py", line 150, in main
solver.train(model, train_loader, val_loader,exp_id+1, num_epochs=args.epochs)
File "G:\code\DconnNet-main\solver.py", line 148, in train
loss_aux = self.loss_func(aux_out, y)
File "G:\soft\Ano\envs\test1\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "G:\code\DconnNet-main\connect_loss.py", line 171, in forward
loss = self.single_class_forward(c_map, target)
File "G:\code\DconnNet-main\connect_loss.py", line 237, in single_class_forward
con_target = connectivity_matrix(target,self.args.num_class)#(B, 8, H, W)
File "G:\code\DconnNet-main\connect_loss.py", line 23, in connectivity_matrix
conn = torch.zeros([batch, class_num*8, rows, cols]).cuda()
RuntimeError: CUDA error: device-side assert triggered

Process finished with exit code 1

Could you please explain the cause of this problem and how to solve it?

IndexError: index 112 is out of bounds for dimension 0 with size 50

When I run train.py on the ReTouch dataset, I get the following error:

Traceback (most recent call last):
File "train.py", line 151, in
main(args)
File "train.py", line 147, in main
solver.train(model, train_loader, val_loader,exp_id+1, num_epochs=args.epochs)
File "/home/DconnNet/solver.py", line 146, in train
loss_main = self.loss_func(output, y)
File "/home/.conda/envs/python36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/DconnNet/connect_loss.py", line 173, in forward
loss = self.multi_class_forward(c_map, target)
File "/home/DconnNet/connect_loss.py", line 215, in multi_class_forward
dice_l += self.dice_loss(pred[:,j,:,:], onehotmask[:,j],j-1)
File "/home/DconnNet/connect_loss.py", line 122, in call
b = self.soft_dice_loss(y_true, y_pred,class_i)
File "/home/DconnNet/connect_loss.py", line 117, in soft_dice_loss
loss = self.soft_dice_coeff(y_true, y_pred,class_i)
File "/home/DconnNet/connect_loss.py", line 112, in soft_dice_coeff
weight = density_weight(self.bin_wide[class_i], i,self.density[class_i])
File "/home/DconnNet/connect_loss.py", line 141, in density_weight
selected_density = [density[index[i].long()] for i in range(gt_cnt.shape[0])]
File "/home/DconnNet/connect_loss.py", line 141, in
selected_density = [density[index[i].long()] for i in range(gt_cnt.shape[0])]
IndexError: index 112 is out of bounds for dimension 0 with size 50

Where am I going wrong?

Error

[screenshot of the error attached]
Hello sir, I can't deal with this error; please help me.

My reproduction on the ChaseDB1 dataset failed to reach the reported results

Hello, I tried to train DconnNet on the ChaseDB1 dataset with the code in this repo. However, I fell short of the result reported in the paper by a margin.

I referred to this script to run the code: https://github.com/Zyun-Y/DconnNet/blob/main/scripts/chasedb1_train.sh

Here are the evaluation results on 5 folds, together with the mean result:

   fold      dice    cldice        β0        β1
     1  0.825640  0.829551  0.355556  0.028148
     2  0.833390  0.846933  0.302963  0.122222
     3  0.805985  0.821772  0.369630  0.180741
     4  0.786474  0.828176  0.352593  0.136296
     5  0.784559  0.827419  0.276667  0.031111
  mean  0.807210  0.830770  0.331481  0.099704

Note that I used the same computation method for dice and cldice as provided in this repo. Since you did not offer complete code for the Betti number computation (the directory 'Betti_Compute/' and the Gudhi package are missing), I used my own instead. It can be seen that only the cldice (0.831) and β0 (0.331) are comparable to the reported values (0.833 and 0.341, respectively).

For β1, I don't understand how you could reach such a large value (above 1), because you calculated the overall Betti error over a series of (65, 65) patches and reported the mean value. Most of the patches iterated over the (960, 960) image cannot form any loops, so I believe the final β1 score should be relatively small.

Could you please shed some light on why this might be the case? Thanks!

How to test and output the binary results (maps)?

Hi Professor Yang, training works fine, but I do not know how to test the model and generate the binary predictions. After training, I tried passing the .pth file to the '--test_only' parameter in train.py. However, it outputs low DSC scores and does not output the binary predicted results (images).
If possible, please help me. Thanks!

multiclass prediction wrong

Hello,

I got such great help from you last time, could you help me with this issue too? I want to do an image classification with three classes. The ground truth is a 2D png image where the three classes are marked with different gray values (class 1: white, class2: light gray, class3: dark gray). I have no error message, but the problem is that only class 1 (white area) is recognized. So for class 1 DSC is really good, but for class 2 and 3 the classification does not work.
DSC: Class1: 0.8673 Class2: 0.001 Class3: 5.4148e-08

Please find attached the GT, the image and the predictions for the 3 classes.

Patient50_class1_pred
Patient50_class2_pred
Patient50_class3_pred
Patient50slice5_image
Patient50slice5_mask

I am wondering if the input has the correct format. What kind of input do I need for a multiclass classification? Or what else can be the reason that only class 1 is recognized?

Thank you in advance!

How does Bilateral_voting work?

Hi author, in the Bilateral_voting function, I am confused by code like the following
left =(torch.bmm(c_map[:,:,3].contiguous().view(-1,row,column),hori_translation.transpose(3,2).view(-1,column,column))).view(batch, class_num,row,column)
According to my current understanding, I think it is calculating the term X_{9-j}(x+a, y+b) from equation (11). May I ask exactly what it is doing? (Due to my limited coding ability, I hope the explanation can be as detailed as possible.) I would be grateful for any advice.

What are the details of training on the ISIC2018 dataset?

I want to study your paper more deeply through the code, and I chose the ISIC2018 dataset you provide. But I ran into the dimension error below. Could you provide the details of training on ISIC2018 as you did for the other datasets? I would appreciate it a lot.


Traceback (most recent call last):
File "../DconnNet/train.py", line 156, in
main(args)
File "../DconnNet/train.py", line 148, in main
solver.train(model, train_loader, val_loader,exp_id+1, num_epochs=args.epochs)
File "../DconnNet/solver.py", line 147, in train
loss_main = self.loss_func(output, y)
File "../anaconda3/envs/Dcon/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "../DconnNet/connect_loss.py", line 172, in forward
loss = self.multi_class_forward(c_map, target)
File "../DconnNet/connect_loss.py", line 184, in multi_class_forward
onehotmask = onehotmask.permute(0,3,1,2)
RuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 5 is not equal to len(dims) = 4

Regarding the RETOUCH dataset

The MICCAI 2017 RETOUCH retinal fluid dataset has three parts, with each part containing 8 volumes. How are the training and validation sets determined for this dataset?


question about the distribution of retouch dataset

assert 'retouch' in self.args.dataset, 'Please input the calculated distribution data of your own dataset, if you are now using Retouch'

Hello! I am writing to ask how to calculate the distribution of my own dataset. Could you take retouch as an example?
