
golbstein / keras-segmentation-deeplab-v3.1


An awesome semantic segmentation model that runs in real time

Python 18.01% Jupyter Notebook 81.99%
crf-segmentations keras keras-deeplab keras-implementations keras-subpixel segmentation semantic-segmentation

keras-segmentation-deeplab-v3.1's People

Contributors

golbstein, wenda-wu


keras-segmentation-deeplab-v3.1's Issues

AttributeError: 'ProgbarLogger' object has no attribute 'log_values'

AttributeError Traceback (most recent call last)
in ()
28
29 SegClass.set_num_epochs(10)
---> 30 history = SegClass.train_generator(model, train_generator, valid_generator, callbacks, mp = True)

/kaggle/input/Keras-segmentation-deeplab-v3.1/utils.py in train_generator(self, model, train_generator, valid_generator, callbacks, mp)
216 validation_steps=len(valid_generator),
217 max_queue_size=10,
--> 218 workers=workers, use_multiprocessing=mp)
219 return h
220

/opt/conda/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name + ' call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

/opt/conda/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1416 If all outputs in the model are named,
1417 you can also pass a dictionary
-> 1418 mapping output names to Numpy arrays.
1419 sample_weight: Optional array of the same length as x, containing
1420 weights to apply to the model's loss for each sample.

/opt/conda/lib/python3.6/site-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
249 for l, o in zip(out_labels, val_outs):
250 epoch_logs['val_' + l] = o
--> 251
252 if callbacks.model.stop_training:
253 break

/opt/conda/lib/python3.6/site-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
77 self._t_enter_batch = time.time()
78 # Batch is ending, calculate batch time
---> 79 self._delta_t_batch = time.time() - self._t_enter_batch
80
81 logs = logs or {}

/opt/conda/lib/python3.6/site-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
336 # Arguments
337 epoch: integer, index of epoch.
--> 338 logs: dict, metric results for this training epoch, and for the
339 validation epoch if validation is performed. Validation result keys
340 are prefixed with val_.

AttributeError: 'ProgbarLogger' object has no attribute 'log_values'
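
For reference, this particular error usually means that no batch was ever run, so the ProgbarLogger never set log_values (most commonly because a generator is empty and steps_per_epoch resolves to 0). A minimal pre-flight check, assuming the generators implement __len__ as Keras Sequence objects do:

# Hypothetical pre-flight check: an empty generator (wrong dataset path,
# too aggressive validation_split, ...) makes fit_generator fail exactly like this.
assert len(train_generator) > 0, 'train_generator yields no batches'
assert len(valid_generator) > 0, 'valid_generator yields no batches'

SegClass.set_num_epochs(10)
history = SegClass.train_generator(model, train_generator, valid_generator, callbacks, mp=True)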

Adapting to Cityscapes version

Hi! Thank you for the amazing work you have done converting the model to a Keras implementation, as well as the training and validation process. It was really helpful.

However, I am experiencing trouble adapting this repository to use the cityscapes format (for a dataset of mine that uses the cityscapes training format). After including the cityscapes weights and doing some initial predictions, the results come with a lot of noise and error, for instance:

(Attached screenshot: Screenshot from 2021-02-19 15-28-11)

On the top, the first two images show the results before and after applying the CRF (sometimes it comes out even noisier), and on the bottom is the mapping of each label color to its category. As you can see, it doesn't predict very well.

Any idea why that might be?

model performance

Hi Golbstein,
Could you share some training results for this Keras model? I downloaded your model and trained it on the Pascal VOC dataset with a 6 GB GPU. I could only use a batch size of two due to hardware limitations. However, training takes very long, and I could not get results as good as the TensorFlow version of this model.
Could you kindly share some results from your training?
thanks,
Leon

Xception_weights

@Golbstein Thanks for sharing your code; It's really helpful for me and I'm learning a lot! Can you maybe give me advice on where to find the weights for pre-trained DLV3+ with Xception backbone?

Code Debugging

Thanks for your great efforts; I was able to use your code to build a new structure on top of DeepLabv3+ easily. But I did find a few bugs in the code, which I located by debugging line by line.
In my own project I corrected these bugs, but I also reshaped the structure of the code a bit, such as defining more functions for readability and modifying certain functions.
However, I am not sure whether it is appropriate to commit my modifications, since they may break the interface of your project.
Would you give me a little advice? Thanks! By the way, I will finish the whole debugging pass in a few weeks.

xception_subpixel.h5 file

Hi

I found this repo while searching for DeepLabv3+ improvement results.

It runs successfully (without training for now, using the pre-trained files from the weights folder).

I don't have any experience with subpixel.

Where can I find, or how can I generate/train, the xception_subpixel.h5 file? (If I understand correctly, it should give the best results.)

For now I use mobilenetv2_subpixel.h5.

Edit:

I've successfully generated both the subpixel and original files after training, but when I tried to predict using these files I got nothing.
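
For what it's worth, here is a hedged sketch of how xception_subpixel.h5 could be generated, assuming the SegModel API shown in the other issues here and the weights/{backbone}_{net}.h5 naming used in utils.py; the training setup itself (losses, generators) stays as in segmentation.ipynb:

from keras.callbacks import ModelCheckpoint
from utils import *  # provides SegModel

backbone = 'xception'
n_classes = 21
SegClass = SegModel(image_size=(320, 320))
model = SegClass.create_seg_model(net='subpixel', n=n_classes, load_weights=False,
                                  multi_gpu=False, backbone=backbone)

# Save checkpoints under the name create_seg_model expects to load later
checkpoint = ModelCheckpoint('weights/{}_{}.h5'.format(backbone, 'subpixel'),
                             save_weights_only=True, save_best_only=True, verbose=1)
# ...compile with the repo's losses/metrics and call SegClass.train_generator(...)
# with [checkpoint] among the callbacks, as in segmentation.ipynb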

predict

Can you provide a predict.py? The repo doesn't contain one, so we can't use the model to predict.
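
In the meantime, here is a hedged sketch of a minimal predict.py, assuming the SegModel API used in the other issues, a 320x320 input size, and a mobilenetv2 'original' weights file; any preprocessing the training generator applies (normalization, histogram equalization) would have to be mirrored here as well:

import numpy as np
from PIL import Image
from utils import *  # provides SegModel

backbone = 'mobilenetv2'
n_classes = 21
image_size = (320, 320)

SegClass = SegModel(image_size=image_size)
model = SegClass.create_seg_model(net='original', n=n_classes, load_weights=False,
                                  multi_gpu=False, backbone=backbone)
model.load_weights('weights/mobilenetv2_original.h5')

img = np.array(Image.open('example.jpg').resize(image_size), dtype=np.float32)
pred = model.predict(img[None, ...])
# argmax over classes; the reshape assumes the output is either (1, H, W, n) or flattened (1, H*W, n)
mask = np.argmax(pred, axis=-1).reshape(image_size)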

ValueError when setting 'better_model' to True

When I set better_model = True and run create_seg_model, I encounter the following error:
InvalidArgumentError Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1575 try:
-> 1576 c_op = c_api.TF_FinishOperation(op_desc)
1577 except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 1 and 1344. Shapes are [1,1,256,21] and [1344,256,1,1]. for 'Assign_813' (op: 'Assign') with input shapes: [1,1,256,21], [1344,256,1,1].

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
in ()
1 if better_model:
----> 2 model = SegClass.create_seg_model(net='subpixel',n=n_classes, load_weights=True, multi_gpu=False, backbone=backbone)
3 else:
4 model = SegClass.create_seg_model(net='original',n=n_classes, load_weights=True, multi_gpu=False, backbone=backbone)
5

~/project/Keras-segmentation-deeplab-v3.1/utils.py in create_seg_model(self, net, n, backbone, load_weights, multi_gpu)
157 backbone=backbone, OS=8, alpha=1)
158 if load_weights:
--> 159 model.load_weights('weights/{}_{}.h5'.format(backbone, net))
160
161 base_model = Model(model.input, model.layers[-5].output)

~/anaconda3/lib/python3.6/site-packages/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
1159 else:
1160 saving.load_weights_from_hdf5_group(
-> 1161 f, self.layers, reshape=reshape)
1162
1163 def _updated_config(self):

~/anaconda3/lib/python3.6/site-packages/keras/engine/saving.py in load_weights_from_hdf5_group(f, layers, reshape)
926 ' elements.')
927 weight_value_tuples += zip(symbolic_weights, weight_values)
--> 928 K.batch_set_value(weight_value_tuples)
929
930

~/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in batch_set_value(tuples)
2433 assign_placeholder = tf.placeholder(tf_dtype,
2434 shape=value.shape)
-> 2435 assign_op = x.assign(assign_placeholder)
2436 x._assign_placeholder = assign_placeholder
2437 x._assign_op = assign_op

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py in assign(self, value, use_locking)
643 the assignment has completed.
644 """
--> 645 return state_ops.assign(self._variable, value, use_locking=use_locking)
646
647 def assign_add(self, delta, use_locking=False):

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py in assign(ref, value, validate_shape, use_locking, name)
214 return gen_state_ops.assign(
215 ref, value, use_locking=use_locking, name=name,
--> 216 validate_shape=validate_shape)
217 return ref.assign(value, name=name)
218

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py in assign(ref, value, validate_shape, use_locking, name)
58 _, _, _op = _op_def_lib._apply_op_helper(
59 "Assign", ref=ref, value=value, validate_shape=validate_shape,
---> 60 use_locking=use_locking, name=name)
61 _result = _op.outputs[:]
62 _inputs_flat = _op.inputs

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
785 op = g.create_op(op_type_name, inputs, output_types, name=scope,
786 input_types=input_types, attrs=attr_protos,
--> 787 op_def=op_def)
788 return output_structure, op_def.is_stateful, op
789

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
452 'in a future version' if date is None else ('after %s' % date),
453 instructions)
--> 454 return func(*args, **kwargs)
455 return tf_decorator.make_decorator(func, new_func, 'deprecated',
456 _add_deprecated_arg_notice_to_docstring(

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in create_op(failed resolving arguments)
3153 input_types=input_types,
3154 original_op=self._default_original_op,
-> 3155 op_def=op_def)
3156 self._create_op_helper(ret, compute_device=compute_device)
3157 return ret

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
1729 op_def, inputs, node_def.attr)
1730 self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1731 control_input_ops)
1732
1733 # Initialize self._outputs.

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1577 except errors.InvalidArgumentError as e:
1578 # Convert to ValueError for backwards compatibility.
-> 1579 raise ValueError(str(e))
1580
1581 return c_op

ValueError: Dimension 0 in both shapes must be equal, but are 1 and 1344. Shapes are [1,1,256,21] and [1344,256,1,1]. for 'Assign_813' (op: 'Assign') with input shapes: [1,1,256,21], [1344,256,1,1].

Could you help me with this problem? Thank you!
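
One hedged observation: 1344 = 21 * 64, i.e. the number of classes times the square of an upscale factor of 8, so the saved subpixel weights seem to expect a different final-layer shape than the freshly built model (possibly related to the Subpixel filters issue reported further down). As a workaround sketch, the load_weights signature visible in your traceback accepts by_name and skip_mismatch, so the incompatible layer can be skipped and left at its random initialization (it then needs retraining):

model = SegClass.create_seg_model(net='subpixel', n=n_classes, load_weights=False,
                                  multi_gpu=False, backbone=backbone)
model.load_weights('weights/{}_{}.h5'.format(backbone, 'subpixel'),
                   by_name=True, skip_mismatch=True)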

Train on a custom dataset

Hello,

Is it possible to train this model on a dataset other than VOC, or at least fine-tune it? Have you tried something similar?

As mentioned in this repository, there is a problem with Keras implementations and deeplab model training/fine-tuning.
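
It should be possible in principle; here is a hedged sketch, assuming the SegModel API used elsewhere in these issues and a dataset that follows the same folder layout as VOC (n_classes and the freezing cut-off below are hypothetical):

from utils import *  # provides SegModel

n_classes = 4  # hypothetical: number of classes in your own dataset
SegClass = SegModel(dataset='path/to/your/dataset', image_size=(320, 320))
model = SegClass.create_seg_model(net='original', n=n_classes, load_weights=False,
                                  multi_gpu=False, backbone='mobilenetv2')

# Optional fine-tuning: freeze most of the backbone, train only the last layers
for layer in model.layers[:-10]:  # hypothetical cut-off, adjust as needed
    layer.trainable = False

# then compile with the repo's losses/metrics and train with
# SegClass.create_generators(...) / SegClass.train_generator(...) as in segmentation.ipynb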

RecursionError when trying to save 'model' and its 'trained-weights' into a single .h5 file

Hi, I have trained the DeepLab model on my custom dataset, and I'm able to load the saved weights into the model architecture.
Now I am trying to save this architecture with its loaded weights into a single .h5 file. However, when I do this I get the error
RecursionError: maximum recursion depth exceeded while calling a Python object

The following is the code I'm using:
from utils import *
backbone = 'mobilenetv2'
n_classes=2
image_size=(512,512)
SegClass = SegModel(image_size=image_size)
model1 = SegClass.create_seg_model(net='original',n=n_classes, load_weights=False, multi_gpu=False, backbone=backbone)
model1.load_weights('weights/Saved_Weights-02.h5')
model1.save('Complete_Model.h5')

I'm getting the RecursionError when the last line, model1.save('Complete_Model.h5'), is executed.
How can I resolve this issue so that both the model architecture and its weights are stored in a single file?
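
Two hedged workarounds that are commonly used for this: raise Python's recursion limit before calling save, or store the architecture and the weights in separate files (the JSON route needs custom_objects for any custom layers, e.g. the Subpixel layer mentioned in other issues, when reloading):

import sys

# Workaround 1: deep nested models can exceed the default recursion limit on save
sys.setrecursionlimit(10000)
model1.save('Complete_Model.h5')

# Workaround 2: keep architecture and weights apart
with open('Complete_Model.json', 'w') as f:
    f.write(model1.to_json())
model1.save_weights('Complete_Model_weights.h5')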

Data generator: where are data augmentation and adaptive pixel weights used?

Hi, I am new to Python and Keras, so your code really helps me a lot. When running segmentation.ipynb I got an error in data generation during training, so I did some debugging.
The error happened in the middle of training, sometimes at step 76, sometimes at step 99, as if some particular picture were bad, so I turned off shuffling. It was still random.
error:

Epoch 1/10
429/1238 [=========>....................] - ETA: 7:43 - loss: 0.2252 - Jaccard: 0.6086 - sparse_accuracy_ignoring_last_label: 0.9520multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/DATA/liutian/envs/mask/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/keras/utils/data_utils.py", line 401, in get_index
return _SHARED_SEQUENCES[uid][i]
File "/home/DATA/liutian/tmp/deeplab/Keras-segmentation-deeplab-v3.1-master/utils.py", line 362, in getitem
class_weights = class_weight.compute_class_weight('balanced', u_classes, valid_pixels)
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/sklearn/utils/class_weight.py", line 55, in compute_class_weight
weight = recip_freq[le.transform(classes)]
IndexError: arrays used as indices must be of integer (or boolean) type
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/DATA/liutian/tmp/deeplab/Keras-segmentation-deeplab-v3.1-master/mainslic.py", line 83, in
history = SegClass.train_generator(model, train_generator, valid_generator, callbacks, mp=True)
File "/home/DATA/liutian/tmp/deeplab/Keras-segmentation-deeplab-v3.1-master/utils.py", line 226, in train_generator
workers=workers, use_multiprocessing=mp)
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/keras/engine/training_generator.py", line 181, in fit_generator
generator_output = next(output_generator)
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/keras/utils/data_utils.py", line 601, in get
six.reraise(*sys.exc_info())
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/DATA/liutian/envs/mask/lib/python3.6/site-packages/keras/utils/data_utils.py", line 595, in get
inputs = self.queue.get(block=True).get()
File "/home/DATA/liutian/envs/mask/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
IndexError: arrays used as indices must be of integer (or boolean) type

Process finished with exit code 1

Then I commented out:

valid_pixels = self.F[n][self.Y[n] != self.n_classes]  # get all pixels (bg and foreground) that aren't void
u_classes = np.unique(valid_pixels)
class_weights = class_weight.compute_class_weight('balanced', u_classes, valid_pixels)
class_weights = {class_id: w for class_id, w in zip(u_classes, class_weights)}
if len(class_weights) == 1:  # no bg\no fg
    if 1 in u_classes:
        class_weights[0] = 0.
    else:
        class_weights[1] = 0.
elif not len(class_weights):
    class_weights[0] = 0.
    class_weights[1] = 0.

sw_valid = np.ones(y.shape)
np.putmask(sw_valid, self.Y[n] == 0, class_weights[0])  # background weights
np.putmask(sw_valid, self.F[n], class_weights[1])  # foreground weights
np.putmask(sw_valid, self.Y[n] == self.n_classes, 0)
self.F_SW[n] = sw_valid

and I noticed that self.SW is just in [0, 1] and is not a weight that takes the relative pixel counts into account; also, the model only has one input layer.
sample_dict = {'pred_mask': self.SW}
I changed 'return self.X, self.Y, sample_dict' to
return self.X, self.Y

and it works. So it seems that this code is not used, right?

Data augmentation seems not to be working, because the number of steps in an epoch equals the number of images.
How can I make it work? Or is it already working and I missed it?
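
On the IndexError itself: sklearn's compute_class_weight uses the label values as array indices, so it chokes on the float-typed pixels coming out of self.Y/self.F, and degenerate crops with no or a single valid class can also break the call. A hedged guard sketch that keeps the adaptive weights for normal crops and mirrors the original fallbacks, rather than removing the weighting entirely:

valid_pixels = self.F[n][self.Y[n] != self.n_classes].astype(int)  # int labels for sklearn
u_classes = np.unique(valid_pixels)
if len(u_classes) > 1:
    cw = class_weight.compute_class_weight('balanced', u_classes, valid_pixels)
    class_weights = {class_id: w for class_id, w in zip(u_classes, cw)}
elif len(u_classes) == 1:
    # only one of bg/fg is visible in this crop: give the missing one weight 0
    class_weights = {1: 1., 0: 0.} if 1 in u_classes else {0: 1., 1: 0.}
else:
    class_weights = {0: 0., 1: 0.}  # no valid pixels at all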

Problem with train_generator: list index out of range

Hey Golbstein,
thank you for your code. I am trying to implement it, but I get an error:

Traceback (most recent call last):
  File "segmentation.py", line 65, in <module>
    rotation=False, zoom=0.1, validation_split = .15, seed = 7, do_ahisteq = False)
  File "utils.py", line 206, in create_generators
    validation_split = validation_split, seed = seed)
  File "utils.py", line 263, in __init__
    self.label_path_list = [self.label_path_list[j] for j in x]
  File "utils.py", line 263, in <listcomp>
    self.label_path_list = [self.label_path_list[j] for j in x]
IndexError: list index out of range

Do you know why?
In your code in utils.py you use the folder "SegmentationClassAug" at line 247, but there is no folder with this name in VOC2012. There are folders named "SegmentationClass" and "SegmentationObject"; I used "SegmentationClass". I also created the folder "train", which you use at line 246, and copied into it all the images that were in JPEGImages before.
Maybe this has to do with the error I got.
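
The index error does look like a mismatch between the image list and the label list. A hedged sanity check, assuming the folder layout described above (VOCdevkit/VOC2012 with JPEGImages/train and SegmentationClassAug):

import os

dataset = 'VOCdevkit/VOC2012'
images = {os.path.splitext(f)[0] for f in os.listdir(os.path.join(dataset, 'JPEGImages', 'train'))}
labels = {os.path.splitext(f)[0] for f in os.listdir(os.path.join(dataset, 'SegmentationClassAug'))}

print('images without masks:', sorted(images - labels)[:10])
print('masks without images:', sorted(labels - images)[:10])

Note that SegmentationClass contains masks for only a small subset of JPEGImages, so copying all of JPEGImages into train while using SegmentationClass would leave most images without labels, which could explain the out-of-range index.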

ValueError: Shapes are incompatible when saving and loading model using Subpixel

Generating the model that uses the Subpixel layer produces an error when loading from a saved checkpoint. The problem is located in this line:

config['filters']= int(config['filters'] / self.r*self.r)

That can be solved by adding parentheses around the r product, like this:

config['filters']= int(config['filters'] / (self.r*self.r))
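
For context, a hedged sketch of how the corrected line might sit in the layer's get_config; the superclass call and the attribute name self.r are assumptions (not necessarily the repository's exact code), and popping 'rank' with a default also sidesteps the tf.keras 1.14 problem mentioned below:

def get_config(self):
    config = super(Subpixel, self).get_config()      # assumed superclass call
    config.pop('rank', None)                         # 'rank' is absent in tf.keras 1.14, so use a default
    config['filters'] = int(config['filters'] / (self.r * self.r))  # parentheses fix
    config['r'] = self.r
    return config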

Additionally, using TensorFlow 1.14 and the tf.keras included in that version, the same method raises an error because the key 'rank' does not exist in the Conv2D layer's config. The problematic line is:

model only predicts one label after training

Hi, Golbstein
With the weights you provided, the IoU and predictions are all good (so I suppose the generator is right?). But with the weights I trained myself, the model just predicts 0 all the time.
During training the accuracy gets stuck at a very low value in the first epoch and never changes. The accuracy only differs when the batch size differs.
np.unique(model1.predict(x)) returns only one value, 0.042, with my trained weights.
In the generator:
np.unique(self.Y[0,:]) gives array([ 0., 1., 15., 21.], dtype=float32)
np.unique(self.SW[0,:]) gives array([0., 0.37763503, 4.679469, 7.2337375], dtype=float32); they match like {0: 0.377, 1: 7.23, 15: 4.67, 21: 0.}
I visualized self.X, self.Y, and self.SW; SW looks like the label, and the image matches the label. The dtypes are all float (including self.X).
I can't figure out what the problem is. Only the generator needs to be changed, right?

xception_pretrained_weights

Hello Golbstein,

I am implementing your training code. Could you please publish the pretrained weights of your implementation with the 'xception' backbone?

Thanks in advance.

Error running segmentation.ipynb

OS: Ubuntu 16.04
ENV: Conda
TF: 1.12.0 (GPU)

When running the notebook unmodified (except for the PATH variable) I get the following error.

TypeError Traceback (most recent call last)
in ()
7
8 model.compile(optimizer=Adam(lr=7e-4, epsilon=1e-8, decay=1e-6), sample_weight_mode="temporal",
----> 9 loss=losses, metrics=metrics)
10 print('Weights path:', SegClass.modelpath)

/home/sam/miniconda3/envs/masknet/lib/python2.7/site-packages/keras/engine/training.pyc in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
449 output_metrics = nested_metrics[i]
450 output_weighted_metrics = nested_weighted_metrics[i]
--> 451 handle_metrics(output_metrics)
452 handle_metrics(output_weighted_metrics, weights=weights)
453

/home/sam/miniconda3/envs/masknet/lib/python2.7/site-packages/keras/engine/training.pyc in handle_metrics(metrics, weights)
418 metric_result = weighted_metric_fn(y_true, y_pred,
419 weights=weights,
--> 420 mask=masks[i])
421
422 # Append to self.metrics_names, self.metric_tensors,

/home/sam/miniconda3/envs/masknet/lib/python2.7/site-packages/keras/engine/training_utils.pyc in weighted(y_true, y_pred, weights, mask)
402 """
403 # score_array has ndim >= 2
--> 404 score_array = fn(y_true, y_pred)
405 if mask is not None:
406 # Cast the mask to floatX to avoid float64 upcasting in Theano

/home/sam/Code/Keras-segmentation-deeplab-v3.1/utils.py in Jaccard(y_true, y_pred)
139 epochs = 20
140 batch_size = 16
--> 141 def __init__(self, dataset='VOCdevkit/VOC2012', image_size=(320,320)):
142 self.sz = image_size
143 self.mainpath = dataset

/home/sam/miniconda3/envs/masknet/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.pyc in is_nan(x, name)
3930 if _ctx is None or not _ctx._eager_context.is_eager:
3931 _, _, _op = _op_def_lib._apply_op_helper(
-> 3932 "IsNan", x=x, name=name)
3933 _result = _op.outputs[:]
3934 _inputs_flat = _op.inputs

/home/sam/miniconda3/envs/masknet/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.pyc in _apply_op_helper(self, op_type_name, name, **keywords)
607 _SatisfiesTypeConstraint(base_type,
608 _Attr(op_def, input_arg.type_attr),
--> 609 param_name=input_name)
610 attrs[input_arg.type_attr] = attr_value
611 inferred_from[input_arg.type_attr] = input_name

/home/sam/miniconda3/envs/masknet/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.pyc in _SatisfiesTypeConstraint(dtype, attr_def, param_name)
58 "allowed values: %s" %
59 (param_name, dtypes.as_dtype(dtype).name,
---> 60 ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
61
62

TypeError: Value passed to parameter 'x' has DataType int32 not in list of allowed values: bfloat16, float16, float32, float64
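
The failing op is IsNan, which only accepts float tensors, while the sparse integer ground-truth labels arrive as int32. A hedged sketch of a Jaccard/mean-IoU metric that casts its inputs up front (a generic replacement, not the repository's exact implementation; the background and void handling here are assumptions):

import keras.backend as K
import tensorflow as tf

def Jaccard(y_true, y_pred, n_classes=21):
    # cast/flatten first so no float-only op (e.g. tf.is_nan) ever sees int32
    y_true = K.cast(K.flatten(y_true), 'int64')
    y_pred = K.flatten(K.argmax(y_pred, axis=-1))
    ious = []
    for c in range(1, n_classes):                    # assumption: skip background class 0
        true_c = K.cast(K.equal(y_true, c), 'float32')
        pred_c = K.cast(K.equal(y_pred, c), 'float32')
        inter = K.sum(true_c * pred_c)
        union = K.sum(true_c) + K.sum(pred_c) - inter
        ious.append(inter / (union + K.epsilon()))   # epsilon avoids NaN for absent classes
    return tf.add_n(ious) / float(n_classes - 1)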

error while using Jaccard loss

When I try to train with the given loss, the following error occurs: ValueError: An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
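
That gradient error usually comes from building a loss on top of non-differentiable ops such as K.argmax. Here is a hedged sketch of a soft (differentiable) Jaccard loss that works directly on the softmax probabilities, assuming sparse integer labels and a channels-last softmax output; this is a generic substitute, not the repository's loss:

import keras.backend as K

def soft_jaccard_loss(y_true, y_pred, smooth=1.0):
    n_classes = K.int_shape(y_pred)[-1]
    y_true = K.one_hot(K.cast(K.flatten(y_true), 'int32'), n_classes)   # (pixels, classes)
    y_pred = K.reshape(y_pred, (-1, n_classes))
    intersection = K.sum(y_true * y_pred, axis=0)
    union = K.sum(y_true + y_pred, axis=0) - intersection
    jaccard = (intersection + smooth) / (union + smooth)
    return 1.0 - K.mean(jaccard)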

Train assumptions

Hi
I tried to train on the augmented pascal_voc dataset, unsuccessfully; the h5 files were generated (both original and subpixel), but they don't predict anything.

These are my assumptions/my understanding; any comments are welcome:

  1. backbone chooses between 'mobilenetv2' and 'xception'

  2. better_model chooses between the original and the subpixel model

  3. weights is the pretrained model to load as a base, 'pascal_voc' or None

  4. load_weights is for loading weights during create_seg_model; for training it should be False

  5. Data:
    under JPEGImages/train you need the jpgs you want to train on
    under SegmentationClassAug you should have the masks in png format, 2D with the class number at each pixel (see the attached example and the sanity-check sketch at the end of this issue)
    both JPEGImages/train and SegmentationClassAug must contain the same pairs, for example 2007_000032.png and 2007_000032.jpg

Still, the generated h5 files are not good.

Any idea? Am I missing something?

(Attached: example mask 2007_000032)
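
For point 5, here is a hedged sanity check of the mask format, assuming the standard VOC layout (single-channel class-index masks, with 255 used for void/border pixels):

import numpy as np
from PIL import Image

mask = np.array(Image.open('VOCdevkit/VOC2012/SegmentationClassAug/2007_000032.png'))
print('shape:', mask.shape)                 # expect 2D, i.e. (height, width)
print('values present:', np.unique(mask))   # expect class indices in [0, n_classes), plus possibly 255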

Xception backbone

Did you train the Xception_subpixel network and can you make it available? I wanted to test it.
