okunator / cellseg_models.pytorch

Encoder-Decoder Cell and Nuclei segmentation models

Home Page: https://github.com/okunator/cellseg_models.pytorch

License: MIT License

Topics: accelerate, ai, cell-segmentation, cellpose, digital-pathology, hovernet, microscopy, ml, nuclei-segmentation, omnipose

cellseg_models.pytorch's Introduction


Python library for 2D cell/nuclei instance segmentation models written with PyTorch.


Introduction

cellseg-models.pytorch is a library built on PyTorch that contains multi-task encoder-decoder architectures along with dedicated post-processing methods for segmenting cell/nuclei instances. As the name suggests, this library is heavily inspired by the segmentation_models.pytorch library for semantic segmentation.

What's new? πŸ“’

  • You can now use any pre-trained image encoder from the timm library as the model backbone, provided that it implements the forward_intermediates method (most timm encoders do); see the sketch after this list.
  • New example notebooks show how to finetune Cellpose and Stardist with the new state-of-the-art foundation model backbones: UNI from MahmoodLab and Prov-GigaPath from Microsoft Research. Check out the notebooks here (UNI) and here (Prov-GigaPath).
  • NOTE: These foundation models are distributed under restrictive licenses, and you need to agree to each model's terms to get access to the weights. Once you have been granted access, you can run the notebooks above. You can request access on the model pages for UNI and Prov-GigaPath. These models may only be used for non-commercial, academic research purposes with proper attribution. Be sure that you have read and understood the terms before requesting access.
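
A quick way to check whether a given timm encoder qualifies is to test for the forward_intermediates method. A minimal sketch (the encoder name is just an example):

import timm

# Most timm encoders expose `forward_intermediates`, which is what this
# library relies on for multi-scale feature extraction.
encoder = timm.create_model("convnext_small", pretrained=False)
print(hasattr(encoder, "forward_intermediates"))  # True for most timm models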

Features 🌟

  • High level API to define cell/nuclei instance segmentation models.
  • 6 cell/nuclei instance segmentation model architectures, with more to come.
  • Open source datasets for training and benchmarking.
  • Flexibility to modify the components of the model architectures.
  • Sliding window inference for large images.
  • Multi-GPU inference.
  • All model architectures can be extended to panoptic segmentation.
  • Popular training losses and benchmarking metrics.
  • Benchmarking utilities for both model latency & segmentation performance.
  • Regularization techniques to tackle batch effects/domain shifts, such as strong augmentation, spectral decoupling, and label smoothing.
  • Example notebooks to train models with lightning or accelerate.
  • Example notebooks to finetune models with foundation model backbones such as UNI, Prov-GigaPath, and DINOv2.

Installation πŸ› οΈ

pip install cellseg-models-pytorch
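
To verify the installation, import the package (a minimal sketch; the __version__ attribute is assumed here):

import cellseg_models_pytorch as csmp

print(csmp.__version__)  # assumes the package exposes __version__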

Models πŸ€–

Model Paper
[1] HoVer-Net https://www.sciencedirect.com/science/article/pii/S1361841519301045?via%3Dihub
[2] Cellpose https://www.nature.com/articles/s41592-020-01018-x
[3] Omnipose https://www.biorxiv.org/content/10.1101/2021.11.03.467199v2
[4] Stardist https://arxiv.org/abs/1806.03535
[5] CellVit-SAM https://arxiv.org/abs/2306.15350
[6] CPP-Net https://arxiv.org/abs/2102.06867

Datasets

Dataset Paper
[7, 8] Pannuke https://arxiv.org/abs/2003.10778, https://link.springer.com/chapter/10.1007/978-3-030-23937-4_2

Notebook examples πŸ‘‡

Finetuning CellPose with UNI backbone
  • Finetuning CellPose with UNI. Here we finetune the CellPose multi-class nuclei segmentation model with the UNI foundation-model image encoder as the backbone (see UNI). Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained (with checkpointing) using accelerate by Hugging Face. NOTE: you need to have been granted access to the UNI weights and agreed to the model's terms to run the notebook.
Finetuning Stardist with Prov-GigaPath backbone
  • Finetuning Stardist with Prov-GigaPath. Here we finetune the Stardist multi-class nuclei segmentation model with the Prov-GigaPath foundation-model image encoder as the backbone (see Prov-GigaPath). Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained (with checkpointing) using accelerate by Hugging Face. NOTE: you need to have been granted access to the Prov-GigaPath weights and agreed to the model's terms to run the notebook.
Finetuning CellPose with DINOv2 backbone
  • Finetuning CellPose with DINOv2 backbone for Pannuke. Here we finetune the CellPose multi-class nuclei segmentation model with an LVD-142M pretrained DINOv2 backbone. Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained (with checkpointing) using lightning.
Finetuning CellVit-SAM with Pannuke
  • Finetuning CellVit-SAM with Pannuke. Here we finetune the CellVit-SAM multi-class nuclei segmentation model with an SA-1B pretrained SAM image encoder as the backbone (see SAM). The encoder is a transformer-based ViTDet model. Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained (with checkpointing) using accelerate by Hugging Face.
Training Hover-Net with Pannuke
  • Training Hover-Net with Pannuke. Here we train the Hover-Net nuclei segmentation model with an ImageNet pretrained resnet50 backbone from the timm library. Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained using lightning.
Training Stardist with Pannuke
  • Training Stardist with Pannuke. Here we train the Stardist multi-class nuclei segmentation model with an ImageNet pretrained efficientnetv2_s backbone from the timm library. Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained using lightning.
Training CellPose with Pannuke
  • Training CellPose with Pannuke. Here we train the CellPose multi-class nuclei segmentation model with an ImageNet pretrained convnext_small backbone from the timm library. Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained (with checkpointing) using accelerate by Hugging Face.
Training OmniPose with Pannuke
  • Training OmniPose with Pannuke. Here we train the OmniPose multi-class nuclei segmentation model with an ImageNet pretrained focalnet_small_lrf backbone from the timm library. Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained (with checkpointing) using accelerate by Hugging Face.
Training CPP-Net with Pannuke
  • Training CPP-Net with Pannuke. Here we train the CPP-Net multi-class nuclei segmentation model with an ImageNet pretrained efficientnetv2_s backbone from the timm library. Pannuke folds 1 & 2 are used for training and fold 3 for validation. The model is trained using lightning.
Benchmarking Cellpose Trained on Pannuke

Code Examples πŸ’»

Define Cellpose for cell segmentation.

import cellseg_models_pytorch as csmp
import torch

model = csmp.models.cellpose_base(type_classes=5)
x = torch.rand([1, 3, 256, 256])

# NOTE: the outputs still need post-processing.
y = model(x) # {"cellpose": [1, 2, 256, 256], "type": [1, 5, 256, 256]}

Define Cellpose for cell and tissue area segmentation (Panoptic segmentation).

import cellseg_models_pytorch as csmp
import torch

model = csmp.models.cellpose_plus(type_classes=5, sem_classes=3)
x = torch.rand([1, 3, 256, 256])

# NOTE: the outputs still need post-processing.
y = model(x) # {"cellpose": [1, 2, 256, 256], "type": [1, 5, 256, 256], "sem": [1, 3, 256, 256]}

Define panoptic Cellpose model with more flexibility.

import cellseg_models_pytorch as csmp
import torch

# the model will include two decoder branches.
decoders = ("cellpose", "sem")

# and in total three segmentation heads emerging from the decoders.
heads = {
    "cellpose": {"cellpose": 2, "type": 5},
    "sem": {"sem": 3}
}

model = csmp.CellPoseUnet(
    decoders=decoders,                   # cellpose and semantic decoders
    heads=heads,                         # three output heads
    depth=5,                             # encoder depth
    out_channels=(256, 128, 64, 32, 16), # num out channels at each decoder stage
    layer_depths=(4, 4, 4, 4, 4),        # num of conv blocks at each decoder layer
    style_channels=256,                  # num of style vector channels
    enc_name="resnet50",                 # timm encoder
    enc_pretrain=True,                   # imagenet pretrained encoder
    long_skip="unetpp",                  # unet++ long skips ("unet", "unetpp", "unet3p")
    merge_policy="sum",                  # merge policy for long skips ("cat", "sum")
    short_skip="residual",               # residual short skips ("basic", "residual", "dense")
    normalization="bcn",                 # batch-channel-normalization.
    activation="gelu",                   # gelu activation.
    convolution="wsconv",                # weight standardized conv.
    attention="se",                      # squeeze-and-excitation attention.
    pre_activate=False,                  # normalization and activation after convolution.
)

x = torch.rand([1, 3, 256, 256])

# NOTE: the outputs still need post-processing.
y = model(x) # {"cellpose": [1, 2, 256, 256], "type": [1, 5, 256, 256], "sem": [1, 3, 256, 256]}

Run HoVer-Net inference and post-processing with a sliding window approach.

import cellseg_models_pytorch as csmp

# define the model
model = csmp.models.hovernet_base(type_classes=5)

# define the final activations for each model output
out_activations = {"hovernet": "tanh", "type": "softmax", "inst": "softmax"}

# define whether to down-weight the predictions at the image boundaries.
# typically, models perform poorest at image boundaries, and with
# overlapping patches this causes artifacts that can be mitigated by
# down-weighting the predictions near the boundaries
out_boundary_weights = {"hovernet": True, "type": False, "inst": False}

# define the inferer
inferer = csmp.inference.SlidingWindowInferer(
    model=model,
    input_folder="/path/to/images/",
    checkpoint_path="/path/to/model/weights/",
    out_activations=out_activations,
    out_boundary_weights=out_boundary_weights,
    instance_postproc="hovernet",               # THE POST-PROCESSING METHOD
    normalization="percentile",                 # same normalization as in training
    patch_size=(256, 256),
    stride=128,
    padding=80,
    batch_size=8,
)

inferer.infer()

inferer.out_masks
# {"image1" :{"inst": [H, W], "type": [H, W]}, ..., "imageN" :{"inst": [H, W], "type": [H, W]}}

Models API

Generally, the model-building API enables the effortless creation of hard-parameter-sharing multi-task encoder-decoder CNN architectures. The general architectural schema is illustrated in the image below.

[Figure: general multi-task encoder-decoder architecture schema]

Class API

The class API enables the most flexibility in defining different model architectures. It borrows a lot from the segmentation_models.pytorch model API.

Model classes:

  • csmp.CellPoseUnet
  • csmp.StarDistUnet
  • csmp.HoverNet
  • csmp.CellVitSAM

All of the models contain:

  • model.encoder - pretrained timm backbone for feature extraction.
  • model.{decoder_name}_decoder - models can have multiple decoders with unique names.
  • model.{head_name}_seg_head - model decoders can have multiple segmentation heads with unique names.
  • model.forward(x) - forward pass.
  • model.forward_features(x) - forward pass of the encoder and decoders; returns the encoder and decoder features. See the sketch below.
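
A minimal sketch of accessing these attributes, using the cellpose_base model from the examples above (the exact return structure of forward_features is version-dependent):

import cellseg_models_pytorch as csmp
import torch

model = csmp.models.cellpose_base(type_classes=5)
print(type(model.encoder))           # the timm backbone
print(type(model.cellpose_decoder))  # decoder named "cellpose"
print(type(model.type_seg_head))     # segmentation head named "type"

x = torch.rand([1, 3, 256, 256])
# returns encoder and decoder features; treat the structure as a sketch
feats = model.forward_features(x)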

Defining your own multi-task architecture

For example, to define a multi-task architecture that has resnet50 encoder, four decoders, and 5 output heads with CellPoseUnet architectural components, we could do this:

import cellseg_models_pytorch as csmp
import torch

model = csmp.CellPoseUnet(
    decoders=("cellpose", "dist", "contour", "sem"),
    heads={
        "cellpose": {"type": 5, "cellpose": 2},
        "dist": {"dist": 1},
        "contour": {"contour": 1},
        "sem": {"sem": 4}
    },
)

x = torch.rand([1, 3, 256, 256])
model(x)
# {
#   "cellpose": [1, 2, 256, 256],
#   "type": [1, 5, 256, 256],
#   "dist": [1, 1, 256, 256],
#   "contour": [1, 1, 256, 256],
#   "sem": [1, 4, 256, 256]
# }
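
A quick sanity check that every declared head appears in the output dict:

y = model(x)
assert set(y.keys()) == {"cellpose", "type", "dist", "contour", "sem"}
for name, out in y.items():
    print(name, tuple(out.shape))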

Function API

With the function API, you can build models with minimal effort by calling the functions listed below. Under the hood, the function API simply calls the above classes with pre-defined decoder and head names. The training and post-processing tools of this library are built around these names, so the function API is the recommended way to build models, even though it is more rigid than the class API. In practice, the function API only lacks the ability to define the model's output tasks; everything else matches the class API.

Model function | Output names | Task
csmp.models.cellpose_base | "type", "cellpose" | instance segmentation
csmp.models.cellpose_plus | "type", "cellpose", "sem" | panoptic segmentation
csmp.models.omnipose_base | "type", "omnipose" | instance segmentation
csmp.models.omnipose_plus | "type", "omnipose", "sem" | panoptic segmentation
csmp.models.hovernet_base | "type", "inst", "hovernet" | instance segmentation
csmp.models.hovernet_plus | "type", "inst", "hovernet", "sem" | panoptic segmentation
csmp.models.hovernet_small | "type", "hovernet" | instance segmentation
csmp.models.hovernet_small_plus | "type", "hovernet", "sem" | panoptic segmentation
csmp.models.stardist_base | "stardist", "dist" | binary instance segmentation
csmp.models.stardist_base_multiclass | "stardist", "dist", "type" | instance segmentation
csmp.models.stardist_plus | "stardist", "dist", "type", "sem" | panoptic segmentation
csmp.models.cppnet_base | "stardist_refined", "dist" | binary instance segmentation
csmp.models.cppnet_base_multiclass | "stardist_refined", "dist", "type" | instance segmentation
csmp.models.cppnet_plus | "stardist_refined", "dist", "type", "sem" | panoptic segmentation
csmp.models.cellvit_sam_base | "type", "inst", "hovernet" | instance segmentation
csmp.models.cellvit_sam_plus | "type", "inst", "hovernet", "sem" | panoptic segmentation
csmp.models.cellvit_sam_small | "type", "hovernet" | instance segmentation
csmp.models.cellvit_sam_small_plus | "type", "hovernet", "sem" | panoptic segmentation
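
For example, a panoptic HoVer-Net via the function API. A sketch, assuming hovernet_plus accepts the same type_classes/sem_classes keywords as cellpose_plus above:

import cellseg_models_pytorch as csmp
import torch

# keyword names assumed to match cellpose_plus above
model = csmp.models.hovernet_plus(type_classes=5, sem_classes=3)
x = torch.rand([1, 3, 256, 256])
y = model(x)  # expected keys per the table: "hovernet", "inst", "type", "sem"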

References

  • [1] Graham, S., Vu, Q. D., Raza, S. E. A., Azam, A., Tsang, Y.-W., Kwak, J. T., & Rajpoot, N. (2019). HoVer-Net: Simultaneous Segmentation and Classification of Nuclei in Multi-Tissue Histology Images. Medical Image Analysis.
  • [2] Stringer, C., Wang, T., Michaelos, M., & Pachitariu, M. (2021). Cellpose: a generalist algorithm for cellular segmentation. Nature Methods, 18, 100-106.
  • [3] Cutler, K. J., Stringer, C., Wiggins, P. A., & Mougous, J. D. (2022). Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation. bioRxiv. https://doi.org/10.1101/2021.11.03.467199
  • [4] Schmidt, U., Weigert, M., Broaddus, C., & Myers, G. (2018). Cell Detection with Star-Convex Polygons. In Medical Image Computing and Computer Assisted Intervention - MICCAI 2018, Granada, Spain, Proceedings, Part II (pp. 265-273).
  • [5] HΓΆrst, F., Rempe, M., Heine, L., Seibold, C., Keyl, J., Baldini, G., Ugurel, S., Siveke, J., GrΓΌnwald, B., Egger, J., & Kleesiek, J. (2023). CellViT: Vision Transformers for Precise Cell Segmentation and Classification. arXiv. https://doi.org/10.48550/ARXIV.2306.15350
  • [6] Chen, S., Ding, C., Liu, M., Cheng, J., & Tao, D. (2023). CPP-Net: Context-Aware Polygon Proposal Network for Nucleus Segmentation. IEEE Transactions on Image Processing, 32, 980-994. https://doi.org/10.1109/tip.2023.3237013
  • [7] Gamper, J., Koohbanani, N., Benet, K., Khuram, A., & Rajpoot, N. (2019). PanNuke: An Open Pan-Cancer Histology Dataset for Nuclei Instance Segmentation and Classification. In European Congress on Digital Pathology (pp. 11-19).
  • [8] Gamper, J., Koohbanani, N., Graham, S., Jahanifar, M., Khurram, S., Azam, A., Hewitt, K., & Rajpoot, N. (2020). PanNuke Dataset Extension, Insights and Baselines. arXiv preprint arXiv:2003.10778.
  • [9] Graham, S., Jahanifar, M., Azam, A., Nimir, M., Tsang, Y.-W., Dodd, K., Hero, E., Sahota, H., Tank, A., Benes, K., & others (2021). Lizard: A Large-Scale Dataset for Colonic Nuclear Instance Segmentation and Classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 684-693).

Citation

@misc{csmp2022,
    title={{cellseg_models.pytorch}: Cell/Nuclei Segmentation Models and Benchmark.},
    author={Oskari Lehtonen},
    howpublished = {\url{https://github.com/okunator/cellseg_models.pytorch}},
    doi = {10.5281/zenodo.7064617},
    year={2022}
}

License

This project is distributed under the MIT License.

The project contains code adapted from the original cell segmentation repositories and 3rd-party libraries, all of which have permissive licenses.

If you find this library useful in your project, it is your responsibility to ensure you comply with the conditions of any dependent licenses. Please open an issue if you think something is missing regarding the licenses.

Contributors

okunator, dependabot[bot]

cellseg_models.pytorch's Issues

Training Problem

MisconfigurationException: You passed accelerator='gpu', but you didn't pass gpus to Trainer.
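
A hedged pointer: recent PyTorch Lightning versions replaced the gpus argument with devices, so the accelerator flag must be paired with it, e.g.:

import pytorch_lightning as pl  # newer releases: `import lightning.pytorch as pl`

# `devices` replaces the removed `gpus` argument in recent Lightning versions
trainer = pl.Trainer(accelerator="gpu", devices=1)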

using pretrained weights

Hello, this looks like a great tool! I'm trying to implement it and was hoping to start by running inference on some MoNuSAC data using the HoVer-Net model. HoVer-Net has pretrained weights on their website, so I downloaded them (they strangely appear to be in a tar file), but they are not compatible with this setup. How can I get a compatible pretrained weights file for HoVer-Net MoNuSAC, or make the tar file compatible with PyTorch state-dict loading?

Data preparation issue

AttributeError: type object 'FileHandler' has no attribute 'write_mask'

I got this issue while running lizard_module.prepare_data().

Validation loss goes to infinity

Hi okunator

Thanks for this fantastic package.

I tried playing with the hovernet example notebook (ran it as is); it seems that the validation loss goes to infinity in the first epoch.

Have you had this issue before?

pannuke_datamodule.py differs between the installed package and the GitHub code.

This is the code in cellseg_models_pytorch.datamodules.pannuke_datamodule.py in the installed-package directory:

[screenshot of the installed package code]

This is the code in cellseg_models_pytorch.datamodules.pannuke_datamodule.py in your GitHub code:

[screenshot of the GitHub code]

I installed your awesome package, cellseg_models_pytorch, and am checking that it runs via 'pannuke_nuclei_segmentation_cellpose.ipynb'. I ran into the issue above. Could you check that your Jupyter notebook examples work?

AttributeError: type object 'FileHandler' has no attribute 'read_mask'

when I run lizard_nuclei_segmentation_cellpose.ipynb (https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/lizard_nuclei_segmentation_cellpose.ipynb):

Found all folds. Skip downloading.
Splitting the files into train, valid, and test sets.
Patch the data... This will take a while...
Extracting train patches to folders..: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 70/70 [15:35<00:00, 13.36s/it, # of extracted tiles 2451]
Extracting valid patches to folders..: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 70/70 [17:03<00:00, 14.62s/it, # of extracted tiles 2748]
Extracting test patches to folders..: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 98/98 [13:35<00:00, 8.32s/it, # of extracted tiles 2627]

AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_7576\871233218.py in
10 im2 = csmp.utils.FileHandler.read_img(imgs[50])
11 im3 = csmp.utils.FileHandler.read_img(imgs[300])
---> 12 mask1 = csmp.utils.FileHandler.read_mask(masks[0], return_all=True)
13 mask2 = csmp.utils.FileHandler.read_mask(masks[50], return_all=True)
14 mask3 = csmp.utils.FileHandler.read_mask(masks[300], return_all=True)

AttributeError: type object 'FileHandler' has no attribute 'read_mask'

How do I fix this issue?
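
A hedged suggestion, based on the 'Data preparation issue' report below where upgrading fixed a similar missing-attribute error on FileHandler:

pip install -U cellseg-models-pytorch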

Training Example

Thank you for your generous sharing! Could you give more training examples for the Hovernet and Omnipose models? I'm confused about their default settings in this code; e.g., the loss defined for hovernet seems different from your setting.

Segmentation output issues

Hello. I have been trying to use this library to test a few segmentation models on some internal data. I am working with large tiff images of size 30000x30000. Due to the image size I decided to use the SlidingWindowInferer with HoverNet, but it was not able to run on 1 GPU and kept hitting out-of-memory errors.

I then broke the image into individual patches and saved them as separate files. Due to some normalization the matrix values are float; if I convert them to integer, all the values become 0, resulting in an empty patch. I save the patches with the tifffile package (preserving the floats) and also load them with tifffile, since opencv cannot open them, then feed the paths to the ResizeInferer.

While the ResizeInferer runs, the HoverNet segmentation comes out completely empty, and the Cellpose and Stardist segmentations are very similar, although I do see more segmented objects. The behavior is very odd; directly after segmentation with Stardist, the individual patches look like the following:

[screenshot of Stardist patch output]

Here are some of the parameters I use for Stardist (note: some of these are for other preprocessing or postprocessing steps):
params:
out_activations: '{"dist": None, "stardist": None}'
out_boundary_weights: '{"dist": False, "stardist": True}'
resize: '(256,256)'
overlap: 248
patch: 256
instance_postproc: 'stardist'
padding: '64'
batch_size: '1'
downsample_factor: 1
n_channels: 3
n_rays: '4'

Questions:

  1. Is it possible to use the Sliding Window Inferer on 1 gpu, if so what are some key considerations to take when setting the params to allow for this? Any tips would be a great help!
  2. I have checked the input images and they seem to be set up properly when inputted, and yet I still receive such odd results (shared screenshot). Do you have any recommendations on things to check for this?
  3. Do all the models need to be trained beforehand, or do the base model versions in the package ship with pretrained weights that can be used directly without training?
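
A hedged sketch for the float-to-integer problem described above: rescale float patches to [0, 255] before casting to uint8, since a direct integer cast truncates normalized values in [0, 1) to zero and yields empty patches:

import numpy as np
import tifffile

def to_uint8(patch: np.ndarray) -> np.ndarray:
    # rescale to [0, 1] first; astype(int) on normalized floats zeroes them out
    p = patch.astype(np.float32)
    p = (p - p.min()) / max(float(p.max() - p.min()), 1e-8)
    return (p * 255.0).astype(np.uint8)

patch = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in for a real patch
tifffile.imwrite("patch.tiff", to_uint8(patch))  # hypothetical filename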

Could you provide pretrained weight?

Hi all,

Thanks for your great work! Could you provide the pretrained weights so users can run inference directly without re-training the model?

Best!

Data preparation issue

I had the same issue as the person who opened the 'Data preparation issue' above: "AttributeError: type object 'FileHandler' has no attribute 'write_mask'". After upgrading the cellseg_models_pytorch module I no longer get that error, but after running the code in the first cell I get blank files in train/images & train/labels.

I need help please

Hello, I am a student in the College of Engineering trying to learn, and I am now trying to run the code, but it does not work. Would it be possible to contact you privately to ask about a few things?
Thank you
