
autodistill / autodistill-llava

6 stars · 5 watchers · 2 forks · 18 KB

LLaVA base model for use with Autodistill.

Home Page: https://docs.autodistill.com

License: Apache License 2.0

Languages: Python 89.95%, Makefile 10.05%
Topics: autodistill, computer-vision, llava, multimodal-llm

autodistill-llava's Introduction

Autodistill LLaVA Module

This repository contains the code supporting the LLaVA base model for use with Autodistill.

LLaVA is a multi-modal language model with object detection capabilities. You can use LLaVA with autodistill for object detection. Learn more about LLaVA 1.5, the most recent version of LLaVA at the time this package was released.

Read the full Autodistill documentation.

Read the LLaVA Autodistill documentation.

Installation

To use LLaVA with autodistill, you need to install the following dependency:

pip3 install autodistill-llava

Quickstart

from autodistill.detection import CaptionOntology
from autodistill_llava import LLaVA

# define an ontology to map class names to our LLaVA prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = LLaVA(
    ontology=CaptionOntology(
        {
            "a forklift": "forklift"
        }
    )
)
base_model.label("./context_images", extension=".jpeg")

License

This model is licensed under an Apache 2.0 License.

๐Ÿ† Contributing

We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!

autodistill-llava's People

Contributors

0asa, capjamesg


Forkers

0asa, samuel5106

autodistill-llava's Issues

Error in Kaggle

Hello!
I want to test LLaVA for auto-distillation, but I got this error:

TypeError: 'NoneType' object is not subscriptable

Minimal code to reproduce the error:

from autodistill.detection import CaptionOntology

ontology = CaptionOntology({
    "car": "small_car",
    "motorbike": "bike",
    "bus": "bus"
})

from autodistill_llava import LLaVA

base_model = LLaVA(ontology=ontology)
dataset = base_model.label(
    input_folder='/kaggle/input/distillation-test/traffic_dataset/test',
    extension=".jpg",
    output_folder='/kaggle/working/images'
)

Full error:

TypeError                                 Traceback (most recent call last)
Cell In[7], line 4
      1 from autodistill_llava import LLaVA
      3 base_model = LLaVA(ontology=ontology)
----> 4 dataset = base_model.label(
      5     input_folder='/kaggle/input/distillation-test/traffic_dataset/test',
      6     extension=".jpg",
      7     output_folder='/kaggle/working/images'
      8 )

File /opt/conda/lib/python3.10/site-packages/autodistill/detection/detection_base_model.py:52, in DetectionBaseModel.label(self, input_folder, extension, output_folder, human_in_the_loop, roboflow_project, roboflow_tags)
     50     f_path_short = os.path.basename(f_path)
     51     images_map[f_path_short] = image.copy()
---> 52     detections = self.predict(f_path)
     53     detections_map[f_path_short] = detections
     55 dataset = sv.DetectionDataset(
     56     self.ontology.classes(), images_map, detections_map
     57 )

File /opt/conda/lib/python3.10/site-packages/autodistill_llava/model.py:140, in LLaVA.predict(self, input)
    137 streamer = TextStreamer(self.tokenizer, skip_prompt=True, skip_special_tokens=True)
    139 with torch.inference_mode():
--> 140     output_ids = self.model.generate(
    141         input_ids,
    142         images=image_tensor,
    143         do_sample=True,
    144         temperature=0.2,
    145         max_new_tokens=512,
    146         streamer=streamer,
    147         use_cache=True,
    148         stopping_criteria=[stopping_criteria])
    150 outputs = self.tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip()
    152 self.conv.messages[-1][-1] = outputs

File /opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:1588, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)
   1580     input_ids, model_kwargs = self._expand_inputs_for_generation(
   1581         input_ids=input_ids,
   1582         expand_size=generation_config.num_return_sequences,
   1583         is_encoder_decoder=self.config.is_encoder_decoder,
   1584         **model_kwargs,
   1585     )
   1587     # 13. run sample
-> 1588     return self.sample(
   1589         input_ids,
   1590         logits_processor=logits_processor,
   1591         logits_warper=logits_warper,
   1592         stopping_criteria=stopping_criteria,
   1593         pad_token_id=generation_config.pad_token_id,
   1594         eos_token_id=generation_config.eos_token_id,
   1595         output_scores=generation_config.output_scores,
   1596         return_dict_in_generate=generation_config.return_dict_in_generate,
   1597         synced_gpus=synced_gpus,
   1598         streamer=streamer,
   1599         **model_kwargs,
   1600     )
   1602 elif is_beam_gen_mode:
   1603     if generation_config.num_return_sequences > generation_config.num_beams:

File /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:2642, in GenerationMixin.sample(self, input_ids, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
   2639 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
   2641 # forward pass to get next token
-> 2642 outputs = self(
   2643     **model_inputs,
   2644     return_dict=True,
   2645     output_attentions=output_attentions,
   2646     output_hidden_states=output_hidden_states,
   2647 )
   2649 if synced_gpus and this_peer_finished:
   2650     continue  # don't waste resources running the code we don't need

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
    163         output = old_forward(*args, **kwargs)
    164 else:
--> 165     output = old_forward(*args, **kwargs)
    166 return module._hf_hook.post_forward(module, output)

File ~/.autodistill/LLaVA/llava/model/language_model/llava_llama.py:79, in LlavaLlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, images, return_dict)
     56 def forward(
     57     self,
     58     input_ids: torch.LongTensor = None,
   (...)
     68     return_dict: Optional[bool] = None,
     69 ) -> Union[Tuple, CausalLMOutputWithPast]:
     71     if inputs_embeds is None:
     72         (
     73             input_ids,
     74             position_ids,
     75             attention_mask,
     76             past_key_values,
     77             inputs_embeds,
     78             labels
---> 79         ) = self.prepare_inputs_labels_for_multimodal(
     80             input_ids,
     81             position_ids,
     82             attention_mask,
     83             past_key_values,
     84             labels,
     85             images
     86         )
     88     return super().forward(
     89         input_ids=input_ids,
     90         attention_mask=attention_mask,
   (...)
     98         return_dict=return_dict
     99     )

File ~/.autodistill/LLaVA/llava/model/llava_arch.py:121, in LlavaMetaForCausalLM.prepare_inputs_labels_for_multimodal(self, input_ids, position_ids, attention_mask, past_key_values, labels, images)
    119     image_features = [x.flatten(0, 1).to(self.device) for x in image_features]
    120 else:
--> 121     image_features = self.encode_images(images).to(self.device)
    123 # TODO: image start / end is not implemented here to support pretraining.
    124 if getattr(self.config, 'tune_mm_mlp_adapter', False) and getattr(self.config, 'mm_use_im_start_end', False):

File ~/.autodistill/LLaVA/llava/model/llava_arch.py:96, in LlavaMetaForCausalLM.encode_images(self, images)
     94 def encode_images(self, images):
     95     image_features = self.get_model().get_vision_tower()(images)
---> 96     image_features = self.get_model().mm_projector(image_features)
     97     return image_features

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
    163         output = old_forward(*args, **kwargs)
    164 else:
--> 165     output = old_forward(*args, **kwargs)
    166 return module._hf_hook.post_forward(module, output)

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py:217, in Sequential.forward(self, input)
    215 def forward(self, input):
    216     for module in self:
--> 217         input = module(input)
    218     return input

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
    163         output = old_forward(*args, **kwargs)
    164 else:
--> 165     output = old_forward(*args, **kwargs)
    166 return module._hf_hook.post_forward(module, output)

File /opt/conda/lib/python3.10/site-packages/bitsandbytes/nn/modules.py:441, in Linear8bitLt.forward(self, x)
    438 if self.bias is not None and self.bias.dtype != x.dtype:
    439     self.bias.data = self.bias.data.to(x.dtype)
--> 441 out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
    443 if not self.state.has_fp16_weights:
    444     if self.state.CB is not None and self.state.CxB is not None:
    445         # we converted 8-bit row major to turing/ampere format in the first inference pass
    446         # we no longer need the row-major weight

File /opt/conda/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:563, in matmul(A, B, out, state, threshold, bias)
    561 if threshold > 0.0:
    562     state.threshold = threshold
--> 563 return MatMul8bitLt.apply(A, B, out, bias, state)

File /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py:506, in Function.apply(cls, *args, **kwargs)
    503 if not torch._C._are_functorch_transforms_active():
    504     # See NOTE: [functorch vjp and autograd interaction]
    505     args = _functorch.utils.unwrap_dead_wrappers(args)
--> 506     return super().apply(*args, **kwargs)  # type: ignore[misc]
    508 if cls.setup_context == _SingleLevelFunction.setup_context:
    509     raise RuntimeError(
    510         'In order to use an autograd.Function with functorch transforms '
    511         '(vmap, grad, jvp, jacrev, ...), it must override the setup_context '
    512         'staticmethod. For more details, please see '
    513         'https://pytorch.org/docs/master/notes/extending.func.html')

File /opt/conda/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:384, in MatMul8bitLt.forward(ctx, A, B, out, bias, state)
    382     outliers = F.extract_outliers(state.CxB, state.SB, state.idx.int())
    383 else:
--> 384     outliers = state.CB[:, state.idx.long()].clone()
    386 state.subB = (outliers * state.SCB.view(-1, 1) / 127.0).t().contiguous().to(A.dtype)
    387 CA[:, state.idx.long()] = 0

TypeError: 'NoneType' object is not subscriptable

xyxy coordinates are not absolute

First of all, thanks for creating the repo!

While playing around with autodistill, I noticed that LLaVA always returns relative coordinates, which can't be displayed by supervision out of the box. I guess these just need to be converted to absolute values.

This should do the job:

from PIL import Image

image = Image.open(SAMPLE_IMAGE)

# Transform detection bounding boxes from relative (0-1) to absolute pixel coordinates
for detection in results.xyxy:
    detection[0] *= image.width
    detection[1] *= image.height
    detection[2] *= image.width
    detection[3] *= image.height
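
For reference, here is a hedged sketch of the same fix wrapped into a reusable helper that rescales an sv.Detections object in place; the to_absolute name is illustrative and not part of the package:

import numpy as np
import supervision as sv
from PIL import Image

def to_absolute(detections: sv.Detections, image: Image.Image) -> sv.Detections:
    # scale relative (0-1) xyxy coordinates up to pixel coordinates
    scale = np.array([image.width, image.height, image.width, image.height])
    detections.xyxy = detections.xyxy * scale
    return detections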

Errors on Apple M1

Hello,

While running the code on an Apple M1, I get the following errors:

/Users/csv610/Projects/CompVis/ObjectDetection/AutoDistill/autodistillenv/lib/python3.11/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit_grad_fp32'
Traceback (most recent call last):
  File "/Users/csv610/Projects/CompVis/ObjectDetection/AutoDistill/LLAVA-1.5/autodistill-llava/genlabels.py", line 2, in <module>
    from autodistill_llava import LLaVA
  File "/Users/csv610/Projects/CompVis/ObjectDetection/AutoDistill/LLAVA-1.5/autodistill-llava/autodistill_llava/__init__.py", line 1, in <module>
    from autodistill_llava.model import LLaVA
  File "/Users/csv610/Projects/CompVis/ObjectDetection/AutoDistill/LLAVA-1.5/autodistill-llava/autodistill_llava/model.py", line 46, in <module>
    from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
  File "/Users/csv610/.autodistill/LLaVA/llava/__init__.py", line 1, in <module>
    from .model import LlavaLlamaForCausalLM
  File "/Users/csv610/.autodistill/LLaVA/llava/model/__init__.py", line 2, in <module>
    from .language_model.llava_mpt import LlavaMPTForCausalLM, LlavaMPTConfig
  File "/Users/csv610/.autodistill/LLaVA/llava/model/language_model/llava_mpt.py", line 26, in <module>
    from .mpt.modeling_mpt import MPTConfig, MPTForCausalLM, MPTModel
  File "/Users/csv610/.autodistill/LLaVA/llava/model/language_model/mpt/modeling_mpt.py", line 19, in <module>
    from .hf_prefixlm_converter import add_bidirectional_mask_if_missing, convert_hf_causal_lm_to_prefix_lm
  File "/Users/csv610/.autodistill/LLaVA/llava/model/language_model/mpt/hf_prefixlm_converter.py", line 15, in <module>
    from transformers.models.bloom.modeling_bloom import _expand_mask as _expand_mask_bloom
ImportError: cannot import name '_expand_mask' from 'transformers.models.bloom.modeling_bloom' (/Users/csv610/Projects/CompVis/ObjectDetection/AutoDistill/autodistillenv/lib/python3.11/site-packages/transformers/models/bloom/modeling_bloom.py)
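
This import error usually means the installed transformers version is newer than the API the vendored LLaVA code expects: _expand_mask was removed from transformers.models.bloom.modeling_bloom in later transformers releases. A hedged workaround is to pin an older transformers version; the exact pin below is an assumption (the upstream LLaVA repository pinned transformers 4.31.0 around the time of the LLaVA 1.5 release):

pip3 install "transformers==4.31.0"

If that pin conflicts with other packages, the underlying requirement is simply a transformers release that still exports _expand_mask from the Bloom modeling module.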

Running on M1

Hello,

I get the following error on the Apple M1. I tried to change fp16 to fp32 and changed cuda to cpu in the code.
File "/Users/Projects/CompVis/ObjectDetection/AutoDistill/autodistillenv/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'

batch predictions

The autodistill sample takes one image at a time. Is there a way to predict on a batch to maximize GPU utilization?

NameError: name 'CaptionOntology' is not defined

I am trying to use this tool and run a simple script to test it out, but I get this error. Am I missing an import or something?
Sorry if this is some oversight, but I could use some help.

from autodistill_llava import LLaVA #, CaptionOntology

ontology=CaptionOntology(
NameError: name 'CaptionOntology' is not defined
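
CaptionOntology lives in the core autodistill package, not in autodistill_llava, so it needs its own import. A minimal corrected snippet, matching the usage shown in the Quickstart and in the Kaggle issue above:

from autodistill.detection import CaptionOntology
from autodistill_llava import LLaVA

base_model = LLaVA(
    ontology=CaptionOntology({"a forklift": "forklift"})
)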

AttributeError: 'numpy.ndarray' object has no attribute 'read'


AttributeError                            Traceback (most recent call last)
File ~/.pyenv/versions/3.8.0/lib/python3.8/site-packages/PIL/Image.py:3222, in open(fp, mode, formats)
   3221 try:
-> 3222     fp.seek(0)
   3223 except (AttributeError, io.UnsupportedOperation):

AttributeError: 'numpy.ndarray' object has no attribute 'seek'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
Cell In[2], line 18
      8 base_model = LLaVA(
      9     ontology=CaptionOntology(
     10         {
   (...)
     13     )
     14 )
     16 # base_model.label("/home/aiteam/data1/samuel/test_images_for_each_model/may2_test_images_for_ilava", extension=".jpg")
---> 18 base_model.label(input_folder="/home/aiteam/data1/samuel/test_images_for_each_model/may2_test_images_for_ilava", extension=".jpg")

File ~/.pyenv/versions/3.8.0/lib/python3.8/site-packages/autodistill/detection/detection_base_model.py:99, in DetectionBaseModel.label(self, input_folder, extension, output_folder, human_in_the_loop, roboflow_project, roboflow_tags, sahi, record_confidence, nms_settings)
     97         detections = slicer(image)
     98     else:
---> 99         detections = self.predict(image)
    101     if nms_settings == NmsSetting.CLASS_SPECIFIC:
    102         detections = detections.with_nms()

File ~/.pyenv/versions/3.8.0/lib/python3.8/site-packages/autodistill_llava/model.py:97, in LLaVA.predict(self, input)
     96 def predict(self, input: str) -> sv.Detections:
---> 97     image = Image.open(input)
     99     # image = Image.fromarray(input)
    101     ImageInfo = namedtuple('ImageInfo', ['image_aspect_ratio'])

File ~/.pyenv/versions/3.8.0/lib/python3.8/site-packages/PIL/Image.py:3224, in open(fp, mode, formats)
   3222     fp.seek(0)
   3223 except (AttributeError, io.UnsupportedOperation):
-> 3224     fp = io.BytesIO(fp.read())
   3225     exclusive_fp = True
   3227 prefix = fp.read(16)

AttributeError: 'numpy.ndarray' object has no attribute 'read'
Collecting transformers==4.38
Downloading transformers-4.38.0-py3-none-any.whl (8.5 MB)
|████████████████████████████████| 8.5 MB 1.1 MB/s
Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (0.22.2)
Requirement already satisfied: filelock in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (3.14.0)
Collecting tokenizers<0.19,>=0.14
Downloading tokenizers-0.15.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
|████████████████████████████████| 3.6 MB 28.4 MB/s
Requirement already satisfied: tqdm>=4.27 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (4.66.2)
Requirement already satisfied: regex!=2019.12.17 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (2024.4.28)
Requirement already satisfied: packaging>=20.0 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (24.0)
Requirement already satisfied: requests in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (2.31.0)
Requirement already satisfied: safetensors>=0.4.1 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (0.4.3)
Requirement already satisfied: numpy>=1.17 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (1.24.4)
Requirement already satisfied: pyyaml>=5.1 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from transformers==4.38) (6.0.1)
Requirement already satisfied: fsspec>=2023.5.0 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from huggingface-hub<1.0,>=0.19.3->transformers==4.38) (2024.3.1)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from huggingface-hub<1.0,>=0.19.3->transformers==4.38) (4.11.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from requests->transformers==4.38) (3.3.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from requests->transformers==4.38) (1.26.18)
Requirement already satisfied: certifi>=2017.4.17 in /home/aiteam/.pyenv/versions/3.8.0/lib/python3.8/site-packages (from requests->transformers==4.38) (2023.7.22)
Requirement already sa

[Screenshot attachment: LLAVA_code_ss]
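
A hedged reading of the traceback above: newer autodistill versions pass an already-decoded numpy array into predict(), while this module's predict() calls Image.open() and therefore expects a file path. Below is a minimal sketch of a loader that accepts either input type (the load_image helper name is illustrative; an Image.fromarray call already appears commented out in the installed model.py):

import numpy as np
from PIL import Image

def load_image(input):
    # accept either a file path or an already-decoded numpy array
    if isinstance(input, np.ndarray):
        return Image.fromarray(input)
    return Image.open(input)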
