42-ai / bootcamp_machine-learning

Bootcamp to learn basics in Machine Learning

License: Other

Languages: Dockerfile 0.02%, Makefile 0.42%, TeX 99.55%
Topics: artificial-intelligence, learning, machine-learning, machine-learning-practice

bootcamp_machine-learning's People

Contributors

a-mahla, afoures, atrudel, bcarlier75, blopax, ezalos, fxbabin, karocyt, madvid, matboivin, maximechoulika, plumeroberts, ppeigne, qfeuilla, tflahaul


bootcamp_machine-learning's Issues

Day01 - Linear Regression

Day01: Linear Regression

Algorithm

  • Linear Cost Function
  • Linear Regression
  • Regularized Linear Regression (not present in the latest version, 13/01)
  • Normal Equation

Model Evaluation

  • Mean Square Error
  • Root Mean Square Error
  • Mean Absolute Error (replaced by the R² score)

edit [date:09/01/2020]

fix/phase1:

ex00:

  • difference in resource name (file vs. citation in the .md)
  • name the columns of the data in the resource file and in the example
  • formula
  • docstring coherence with the others
  • verification of the example after formula correction

ex01:

  • docstring coherence with the others
  • generator method differs from the one in ex00
  • split the exercise? or add more intermediate steps
  • add a remark if we get nan or inf with the fit method (information about the learning rate)
  • verification of the example after formula correction

ex02:

  • docstring coherence with the others
  • verification of the example after formula correction

ex03:

  • description_data.csv is empty
  • no description_data.txt
  • more precision in the hint
  • remark clarifying that the methods must not be modified, though manipulation of the data is expected
  • modify the dataset to allow a learning rate of 1e-4 or 1e-5
  • verification of the example after formula correction

phase2/day04

Exercises :

  • Logistic Regression
  • Polynomial hypothesis
  • Questions
  • Sigmoid
  • Regularized Linear Cost function
  • Regularized Linear Gradient
  • Ridge Regression
  • Regularized Logistic Cost function
  • Regularized Logistic Gradient
  • Regularized Logistic Regression
  • Questions
  • Bonus 1: Stochastic Gradient Descent

d00 ex01 Mean

  • Day: 00
  • Exercise: 01 (Mean)

The function to implement uses a function f, but this function is nowhere to be found in our prototype.

[screenshot of the exercise subject, not preserved]
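
A minimal sketch of how the formula and the prototype could be reconciled, assuming f is meant to be an optional transform applied to each element (the name and the default are assumptions, not the subject's prototype):

def mean(x, f=lambda v: v):
    """Return (1/m) * sum(f(x_i)); f defaults to the identity."""
    if not x:
        return None
    return sum(f(xi) for xi in x) / len(x)

print(mean([1, 2, 3, 4]))         # 2.5
print(mean([1, 2, 3, 4], f=abs))  # 2.5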

Naming variable in function

Days: 00 & 01

General remarks:
I think it would be good to name variables according to code-environment conventions rather than mathematical notation. In code, uppercase names are by convention reserved for classes, not variables; it would be good to keep that rule.

I think the prototype of a function expecting a matrix should look like def my_function(x, mat, vec) rather than def my_function(x, X, Y): the variable names should directly help the user.
I hope my English is not too ugly.
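
For illustration, the suggested convention could look like this (a hypothetical function, shown for naming only):

import numpy as np

# Lowercase, descriptive names ('mat' for the matrix, 'vec' for the vector)
# instead of the mathematical X and Y.
def scaled_product(x, mat, vec):
    return x * np.dot(mat, vec)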

add example to ex00

  • Day: 01
  • Exercise: 00

Adding an example for this exercise would be a good option; it gives a better view for the non-mathematical coder :)

import numpy as np

def predict(theta, mat):
    """Prepend an intercept column of ones to mat and return mat . theta."""
    if theta.shape[0] < 2 or theta.shape[0] - 1 != mat.shape[1]:
        print("Incompatible dimension match between 'mat' and 'theta'.")
        return None
    col_1 = np.ones((mat.shape[0], 1))
    mat = np.concatenate((col_1, mat), axis=1)
    return np.dot(mat, theta)

if __name__ == "__main__":
    theta1 = np.array([[2.], [4.]])
    mat1 = np.array([[0.], [1.], [2.], [3.], [4.]])
    print(predict(theta1, mat1))
    # array([[2.], [6.], [10.], [14.], [18.]])

    theta2 = np.array([[2.]])
    mat2 = np.array([[1], [2], [3], [5], [8]])
    print(predict(theta2, mat2))
    # Incompatible dimension match between 'mat' and 'theta'.
    # None

    theta3 = np.array([[0.05], [1.], [1.], [1.]])
    mat3 = np.array([[0.2, 2., 20.], [0.4, 4., 40.], [0.6, 6., 60.], [0.8, 8., 80.]])
    print(predict(theta3, mat3))
    # array([[22.25], [44.45], [66.65], [88.85]])

    theta4 = np.array([[0.05], [1.], [2.]])
    mat4 = np.array([[0.1, 1., 10.], [0.2, 2., 20.], [0.3, 3., 30.], [0.4, 4., 40.]])
    print(predict(theta4, mat4))
    # Incompatible dimension match between 'mat' and 'theta'.
    # None

Day00 - Updates post-testing

ex00:

  • 1. in the example, the returned value is not explicitly a float

ex01:

  • 1. Could the symbol μ be used rather than "mu" in the definition of the terms of the expression?
  • 2. "Return None if f is not a valid function", although mean does not take a function f as a parameter

ex02:

  • 1. "Return None if f is not a valid function" tandis que variance ne prend pas de fonction f en parametre
  • 2. xj non defini lors de la definiton des termes de l'expression ?

ex03:

  • 1. "Return None if f is not a valid function" tandis que std ne prend pas de fonction f en parametre

ex04:

  • 1. in the example, the returned value is not explicitly a float

ex08:

  • 1. "Files to turn in : vec_mse.p": oubli du 'y' dans le nom du fichier a rendre

ex09:

  • 1. Nothing under "Forbidden functions"

ex11:

  • 1. the notation of the formula is confusing; the use of j is unclear; I think it should be made explicit
  • 2. state that the "inverted delta" sign is a nabla, to make searching easier for future learners

ex12:

  • 1. in the formula, a capital X comes out of nowhere
  • 2. in the examples, the results are equal to half of what we obtain (and we obtain the same thing in ex11)

clarification on formula

  • Day: 00
  • Exercise: 08

Hello, I think the formula needs a clarification.

Screenshots
[screenshot of the formula and rendered equation, not preserved]
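
For reference, the vectorized MSE that this exercise (vec_mse) presumably targets can be written as follows (an assumption, based on the iterative formula given for day00 ex07):

$$ \text{MSE}(\hat{y}, y) = \frac{1}{m} (\hat{y} - y)^{T} (\hat{y} - y) $$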

X, Y function parameters inverted

  • Day: 03
  • Exercise: 02

Parameters x and y are inverted with respect to the given examples.

error in example section ex 06

  • Day: 00
  • Exercise: 06

Hello again, I see an error in the Examples section of mat_mat_prod; it says

Z = array([[ -6, -1, -8, 7, -8],...,[ -9, -4, -10, -3, 6]])

Instead of

Z = numpy.array([[ -6, -1, -8, 7, -8],...,[ -9, -4, -10, -3, 6]])

phase2/day02

Exercises :

  • Univariate Linear Regression
  • Questions
  • Multivariate hypothesis
  • Multivariate gradient
  • Questions
  • Multivariate Gradient Descent
  • Multivariate Linear Regression
  • Practicing Multivariate Linear Regression
  • Polynomial hypothesis
  • DataSpliter
  • Questions
  • Bonus: Derive gradient with partial derivatives

Day00 - First page

The first page of the subject: you have written "Day01" instead of "Day00" :)
[screenshot of the first page, not preserved]

Day03 - Ex06-07 Typo error

  • Day: 03
  • Exercise: 06-07

In the examples:

>>> vec_reg_logistic_grad(Y, X, Z,, 1)
array([ 6.69780169, -0.33235792, 2.71787754])
>>> vec_reg_logistic_grad(Y, X, Z,, 0.5)
array([ 6.61208741, -0.3680722, 2.74073468])
>>> vec_reg_logistic_grad(Y, X, Z,, 0.0)
array([ 6.52637312, -0.40378649, 2.76359183])

There are extra commas after the Z variable.

Maybe an error in the example

  • Day: 00
  • Exercise: 11

Here is my code for gradient:

import numpy as np

def gradient(x, y, theta):
    # Guard against empty inputs and incompatible shapes.
    if (len(y) == 0 or len(x) == 0 or len(theta) == 0
            or y.shape[0] != x.shape[0] or x.shape[1] != theta.shape[0]):
        return None
    res = []
    for j in range(len(theta)):
        # Partial derivative w.r.t. theta_j, averaged over the m examples.
        line = sum((theta.dot(x[i]) - y[i]) * x[i][j] for i in range(len(x))) / len(x)
        res.append(line)
    return np.array(res)

and I don't get the same result as in the example:

array([ -18.67857143,   91.57142857, -196.5])

mine is twice as big:

array([ -37.35714286,  183.14285714, -393. ])
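
A plausible explanation (an assumption, since the exercise's formula is not reproduced here): the expected output corresponds to a $\frac{1}{2m}$ factor rather than the $\frac{1}{m}$ used in the code above, which gives exactly half the values:

$$ \nabla(J)_j = \frac{1}{2m} \sum_{i=1}^{m} \left( \theta \cdot x^{(i)} - y^{(i)} \right) x_j^{(i)} $$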

|Day00| |ex05| : Wrong function name in the example.

  • Day: 00
  • Exercise: 05

(Should I have written this in English? If so, I can rewrite it if needed.)

In the example of the exercise, a function "mat_vec_mult" is used; however, this function exists neither in the subject of this exercise nor in the previous ones, and seems to simply be the "mat_vec_prod" function (the one we must implement in this exercise) left unrenamed, since it gives the same results as the example:
(here is my test with my implementation of the "right" function)

W = np.array([
    [-8, 8, -6, 14, 14, -9, -4],
    [2, -11, -2, -11, 14, -2, 14],
    [-13, -2, -5, 3, -8, -4, 13],
    [2, 13, -14, -15, -14, -15, 13],
    [2, -1, 12, 3, -7, -3, -6]])
Y = np.array([2, 14, -13, 5, 12, 4, -19]).reshape((7, 1))
print(mat_vec_prod(W, Y))

Output:
[[ 452]
 [-285]
 [-333]
 [-182]
 [-133]]

And here is a screenshot of the ex05 example (circled in red):
[screenshot not preserved]

My implementation of the function (in case the problem comes from my side; if so, sorry):

import numpy as np


def dot(x, y):
    """Dot product of two equal-length vectors; None on size mismatch."""
    try:
        assert len(x) == len(y)
        res = 0
        for i in range(len(x)):
            res += x[i] * y[i]
        return res
    except AssertionError:
        return None


def mat_vec_prod(x, y):
    """Matrix-vector product computed row by row with dot()."""
    try:
        assert len(x[0]) == len(y)
        return np.asarray([dot(x[i], y) for i in range(len(x))])
    except AssertionError:
        return None

Formula of hypothesis function

  • Day: 01
  • Exercise: 00

I don't understand the hypothesis function well.

Screenshots
[screenshot of the formula, not preserved]

Is it this?

[rendered equation, not preserved]

Where X is the vector [rendered equation, not preserved] and [rendered equation, not preserved]

Wrong Example

  • Day: 00
  • Exercise: 09

in the example section there is

linear_mse(X, Y, Z) 

but it should be

linear_mse(Y, X, Z) 

because the shapes don't match

Day04 - Regularization and feature engineering

Day04: Regularization and feature engineering

Maths

  • Ridge iterative
    • subject
    • examples
  • Ridge vectorized
    • subject
    • examples
  • Regularized MSE
    • subject
    • examples
  • Regularized Linear Gradient - iterative
    • subject
    • examples
  • Regularized Linear Gradient - vectorized
    • subject
    • examples
  • Regularized Logistic Loss
    • subject
    • examples
  • Regularized Logistic Gradient - iterative
    • subject
    • examples
  • Regularized Logistic Gradient - vectorized
    • subject
    • examples

Algorithm

  • Ridge gradient descent
    • subject
    • examples
  • Regularized Logistic gradient descent
    • subject
    • examples

Feature Engineering

  • z-score standardization
    • subject
    • examples
  • min-max standardization
    • subject
    • examples
  • polynomial features + interaction terms
    • subject
    • examples

Day00 - Mathematical Delights

Day00: Mathematical Delights

  • sum

  • mean

  • variance

  • standard deviation

  • vectors dot product

  • matrix-vector multiplication

  • matrix-matrix multiplication

  • mean squared error - iterative

  • mean squared error - vectorized

  • mean squared error as linear cost function - iterative

  • mean squared error as linear cost function - vectorized

  • linear gradient - iterative

  • linear gradient - vectorized

Day02 - Updates post-testing

Global notes:

I agree with a previous issue from bcarlier: it's not great to do exercises twice, especially because the vectorized version is the one we think of first! So maybe make it a single exercise that asks for both methods?
Otherwise, a really cool day with really easy exercises; it is really comforting to finish an exercise in less than 5 minutes!

ex01:

  • 1.

We encourage you to get a look at this section of the Cross entropy Wikipedia.

--> maybe make it apparent that "this section" is a link

ex02:

  • 1. - x are the variables of your models
    + x are the variables of your model

ex03:

  • 1. - predicted outputs (formula below).
    + predicted outputs
    there is no formula below (copy-pasted from ex01)

One directory per day

  • Day: day*
  • Exercise: ex*

It would be preferable to have only one directory per day, with all exercise files in that directory.
That way, we could import previous exercises, as is frequently recommended in the exercises!

Day02 - Updates post-testing

ex00:

  • 1. "Turn-in directory" and not "turning"

ex01:

  • 1. "Turn-in directory" and not "turning"
  • 2. "*" is usually used for the convolution product; prefer a dot (\cdot) in the formula

ex02:

  • 1. "Turn-in directory" and not "turning"
  • 2. After the instructions for the function it is written:
    "x width will be the number of coefficients or number of coefficients + 1 if you choose to add an intercept ..."
    I think it should be:
    "x width will be the number of coefficients or number of coefficients - 1 if you choose to add an intercept ..."
    because if we decide to add an intercept, the length of theta is (width of X) + 1, not the opposite.
  • 3. The formula of the gradient might be confusing; maybe use a notation closer to the one of day 00

ex03:

  • 1. "Turn-in directory" and not "turning"
  • 2. "*" is usually used for the convolution product; prefer a dot (\cdot) in the formula
  • 3. In the description of the formula, there is no definition of h below, as is written

ex04:

  • 1. "Turn-in directory" and not "turning"

ex05:

  • 1. "Turn-in directory" and not "turning"

ex06:

  • 1. "Turn-in directory" and not "turning"

ex07:

  • 1. "Turn-in directory" and not "turning"

ex08:

  • 1. "Turn-in directory" and not "turning"

ex09:

  • 1. "Turn-in directory" and not "turning"

ex10:

  • 1. "Turn-in directory" and not "turning"

Day00/ex02 - Explain shortly what is Variance

  • Day: 00
  • Exercise: 02 - Variance

In order to avoid coding a function that isn't understood, the purpose of some formulas could be explained.

You must implement the following formula as a function:
$$ \sigma^2 = \frac{1}{m} \sum_{i = 1}^{m} (x_i - \frac{1}{m} \sum_{j = 1}^{m} x_j)^{2} $$

Maybe explain what is variance, why it is used (gaussian curve, std)?
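
As an illustration of the formula above, a minimal sketch (not the expected solution; assumes a plain list of numbers as input):

def variance(x):
    """sigma^2 = (1/m) * sum_i (x_i - mu)^2, where mu is the mean of x."""
    if len(x) == 0:
        return None
    m = len(x)
    mu = sum(x) / m                      # the inner sum is just the mean
    return sum((xi - mu) ** 2 for xi in x) / m

print(variance([1, 2, 3, 4, 5]))  # 2.0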

phase2/day01

Exercises :

  • Prediction
  • Cost Function
  • Question
  • Iterative Linear Gradient
  • Vectorized Linear Gradient
  • Gradient Descent
  • Univariate Linear Regression
  • Practice
  • Question 1
  • Normalization 1
  • Normalization 2
  • Question 2
  • Bonus: Derive gradient from cost function

np or numpy

  • Day: 00
  • Exercise: all

In the Examples section, you use numpy.array() for the first half, then np.array afterwards;
maybe you should make it explicit that np comes from import numpy as np?

Day03 - Updates post-testing

ex02:

  • 1. in the prototype of the reg_mse function, parameters x and y are inverted.

So it does not match the examples:

ex03:

  • 1. in the prototype of the reg_linear_grad function, parameters x and y are inverted.

So it does not match the examples:

There is also an extra parameter alpha in the function description:

ex04:

  • 1. in the prototype of the vec_reg_linear_grad function, parameters x and y are inverted. Moreover, the parameter theta is missing.

So it does not match the examples:

There is also an extra parameter alpha in the function description:

ex06:

  • 1. in the prototype of the reg_logistic_grad function, parameters x and y are inverted.

You can notice in the example that the parameters X and Y are also inverted, so it matches the prototype of reg_logistic_grad, but it is pretty clumsy (and there is an extra comma):

There is also an extra parameter alpha in the function description:

Day02 Ex09 Wrong objectives

  • Day: 02
  • Exercise: 09

In the 'Objectives' part it says :

The goal of this exercise is to recreate the function recall_score of sklearn.metrics and to
learn what represents the recall and how to measure it.

But these were the objectives of the previous exercise (ex08: recall_score) rather than a description of the F-score.
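
For reference, the F1 score this exercise presumably covers combines precision and recall:

$$ F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} $$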

Day00/ex07 - Explain shortly what is Mean Squared Error

  • Day: 00
  • Exercise: 07 - Mean Squared Error

In order to avoid coding a function that isn't understood, the purpose of some formulas could be explained.

You must implement the following formula as a function:
$$ \frac{1}{m}\sum_{i=1}^{m}(\hat{y}_i - y_i)^2 $$

Maybe explain the purpose of mean square error?
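
As an illustration, a minimal sketch of the formula above (assumes y_hat and y are equal-shaped numpy arrays):

import numpy as np

def mse(y_hat, y):
    """(1/m) * sum_i (y_hat_i - y_i)^2: the average squared prediction error."""
    if y_hat.shape != y.shape or y.size == 0:
        return None
    return float(np.mean((y_hat - y) ** 2))

print(mse(np.array([2., 6., 10.]), np.array([2., 5., 12.])))  # (0+1+4)/3 = 1.666...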

Formula in variance

  • Day: 00
  • Exercise: 02

Hello, in the formula there are two sums with the same summation index, which is a bit confusing.

Screenshot
[screenshot of the formula, not preserved]

Proposed fix:
$$ \sigma^2 = \frac{1}{m} \sum_{i = 1}^{m} \left( x_i - \frac{1}{m} \sum_{j = 1}^{m} x_j \right)^{2} $$

error in example section

  • Day: 01
  • Exercise: 00

Hello, the name of the file isn't the same as in the resources:

data = pd.read_csv("are_blue_pills_magics.csv") 

and

Xpill = np.array(data[micrograms]).reshape(-1,1) 
Yscore = np.array(data[Score]).reshape(-1,1)

should be

Xpill = np.array(data["micrograms"]).reshape(-1,1) 
Yscore = np.array(data["Score"]).reshape(-1,1)

clarification on forbidden func

  • Day: 00
  • Exercise: 00

Hello, the "Forbidden functions" section says *.sum() isn't authorized, but is it possible to use the built-in sum() like this?

Examples

def sum_(x, f):
    return sum(f(i) for i in x)

Little issues in day01 ex01 ex02 ex03 ex04

  • Day: 01
  • Exercise: 01, 02, 03, 04
  • ex01: issue in the cost_elem docstring

Description:
Calculates all the elements 0.5M(y_pred - y)^2 of the cost function.

instead of (0.5/M)*(y_pred - y)^2

  • ex02

missing arguments in fit_:

def fit_(theta, X, Y):

instead of def fit_(theta, X, Y, alpha, n_cycle):

  • ex03
    missing arguments in fit_:

def fit_(theta, X, Y):

instead of def fit_(theta, X, Y, alpha, n_cycle):

  • ex03
    the values given in the examples theta1, predict_(theta1, X1), theta2, predict_(theta2, X2)

theta1
array([[2.0023..],[3.9991..]])
predict_(theta1, X1)
array([2.0023..], [6.002..], [10.0007..], [13.99988..], [17.9990..])

correspond to self.theta -= (0.5*alpha/M)*correct
instead of self.theta -= (alpha/M)*correct

  • ex04
    errors in the 2 examples with mse_

print(linear_model1.mse_(Xpill, Yscore))
57.60304285714282

instead of print(linear_model1.mse_(Y_model1, Yscore))
and the same for model2

  • ex05
    same error as in ex04 in the examples with mse_

phase2/day00

Exercises :

  • The Vector
  • The Matrix
  • TinyStatistician
  • Simple Prediction
  • Add Intercept
  • Prediction
  • Practice 1
  • Cost function 1
  • Cost function 2
  • Practice 2
  • Questions
  • Bonus 1: Other cost functions
  • Bonus 2: KNN

day 02 - ex02 : precision of dimension of X

  • Day: 02
  • Exercise: 02

After the instructions for the function it is written:
"x width will be the number of coefficients or number of coefficients + 1 if you choose to add an intercept ..."
I think it should be:
"x width will be the number of coefficients or number of coefficients - 1 if you choose to add an intercept ..."
because if we decide to add an intercept, the length of theta is (width of X) + 1, not the opposite. (See the quick shape check below.)
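
A quick shape check illustrating the point (hypothetical numbers):

import numpy as np

x = np.random.rand(4, 3)                      # 4 examples, x width = 3
x_prime = np.hstack((np.ones((4, 1)), x))     # intercept column added
theta = np.zeros((4, 1))                      # 3 + 1 = 4 coefficients
print(x_prime.dot(theta).shape)               # (4, 1): shapes are compatible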

Day01 - Updates post-testing

ex00:

  • 1. Remove one equals sign from the hypothesis function.
  • 2. It is very unclear that the class accepts a theta parameter, and also that we are expected to try feeding it different thetas ourselves.

ex01:

  • 1.

"The graph with the data, the hypothesis obtained via linear gradient desent versus age (see example figure 1)"

No figure in the exercise has a legend stipulating its number; it might be preferable to use:

"Figure 1: The graph with the data (...)"

Moreover there is a typo in these sentences, "desent" -> "descent"

  • 2. Inconsistent use of variable names in the examples (MyRL_age becomes MyLRage):

ex02:

  • 1.
    - def normalequation(self, x, y))
    + def normalequation(self, x, y):
  • 2. It would be preferable to break to a new line between … and …

ex03:

  • 1. The formulation is awkward

Maybe:

[...], with a large power thrust comes a risk of spacecraft crash

  • 2.
    There is a typo: "exloses" -> "explodes"

  • 3.
    There is also a typo: "you have q really nice (...)" -> "you have a really nice (...)"

phase2/day03

Exercises :

  • Multivariate Linear Regression
  • DataSpliter
  • Questions
  • Sigmoid
  • Logistic Hypothesis
  • Logistic Cost Function - iterative
  • Logistic Cost Function - vectorized
  • Logistic Gradient - iterative
  • Logistic Gradient - vectorized
  • Questions
  • Logistic Regression
  • Multiclass Logistic Regression
  • Bonus 1: Other metrics
  • Bonus 2: Bagging

Wrong prototype in fit (MyRidge)

  • Day: 03
  • Exercise: 08

There are missing parameters in the following prototype of fit_:

fit_(self, lambda=1.0, max_iter=1000, tol=0.001):

It should include the training set and the true results.
Example

fit_(self, x, y, alpha, lambda=1.0, max_iter=1000[, tol=0.001]):
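
A sketch of what the corrected method could look like (an illustration only, not the subject's solution; note that lambda is a reserved word in Python, so lambda_ is assumed, and theta_0 is left out of the penalty):

import numpy as np

class MyRidge:
    def __init__(self, theta, alpha=0.001, max_iter=1000):
        self.theta = theta
        self.alpha = alpha
        self.max_iter = max_iter

    def fit_(self, x, y, lambda_=1.0, tol=0.001):
        """Ridge gradient descent on training set x with true results y."""
        m = x.shape[0]
        x1 = np.hstack((np.ones((m, 1)), x))   # add the intercept column
        for _ in range(self.max_iter):
            theta_p = self.theta.copy()
            theta_p[0] = 0.0                   # do not penalize theta_0
            grad = (x1.T.dot(x1.dot(self.theta) - y) + lambda_ * theta_p) / m
            self.theta = self.theta - self.alpha * grad
            if np.linalg.norm(grad) < tol:     # simple stopping criterion
                break
        return self.theta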

Day03 - Decision Tree

Day03: Decision Tree

Math

  • Gini Impurity
  • Shannon's Entropy
  • Information Gain (w/ Gini impurity, Entropy, or variance)

Algorithm

  • Decision Tree for classification
  • Entropy
  • Gini
  • Decision Tree for regression
  • Variance
  • Feature importance

Model Evaluation

  • Cross Validation

Vectorize logistic loss function

  • Day: 02
  • Exercise: 03

It's a bit hard to find the vectorized form of the logistic loss function;
I think it would be easier to give it, or to give a way to calculate it.
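
For reference, a standard vectorized form (a sketch under the assumption that x already includes the intercept column and that y and theta are column vectors; the eps guard against log(0) is an addition of this sketch):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vec_log_loss(y, x, theta, eps=1e-15):
    """J = -(1/m) * sum(y * log(h) + (1 - y) * log(1 - h)), h = sigmoid(x . theta)."""
    m = y.shape[0]
    h = sigmoid(x.dot(theta))
    return float((-(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))).sum() / m)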

Day02 - Logistic Regression

Day02: Logistic Regression

Math

  • Sigmoid
  • Logistic Cost Function
  • Logistic Gradient Descent
  • Vectorized Logistic Cost Function
  • Vectorised Logistic Gradient Descent

Algorithm

  • Logistic Regression

Model Evaluation

  • Accuracy
  • Precision
  • Recall
  • F1 Score
  • Confusion matrix

Todo

  • Check together if the content and format of the day is good

ALL DAYS - ADD Resources LINK

The resources needed to do the exercises must be given clearly in the DayXX.md and in the relevant exercise sections.

Day00:

  • Day section
  • Exercise section

Day01:

  • Day section
  • Exercise section

Day02:

  • Day section
  • Exercise section

Day03:

  • Day section
  • Exercise section

Day04:

  • Day section
  • Exercise section

Create a readme

Hi all! Just to let you know, I am going to create a README for the repo.

Rush00 - Classification on MNIST handwritten digits database.

Rush: Classification on MNIST handwritten digits database.

The students have to build the best classifier for handwritten digit recognition. They will have access to two datasets.

  • Dataset1: a sample from MNIST with two missing classes (e.g. no '8' and '6')
  • Dataset2: a bigger (and complete) but 'corrupted' sample from MNIST, with missing or aberrant values.

In order to get the best performance, the students will have to use both datasets, which means:

* experiment with several ways to find out the missing classes:

  • KNN, unsupervised learning

* experiment with several ways to deal with missing/aberrant values (see the sketch after this list), e.g.:

  • replacement by mean/median/mode
  • smarter replacement: KNN, RF, etc.

They will also have to consider the use of mixed classifiers, such as:

  • bagged classifiers
  • random forest
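
A minimal sketch of the replacement strategies above using scikit-learn's imputers (an illustration only; the toy matrix is hypothetical, not one of the rush datasets):

import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Toy matrix standing in for the 'corrupted' sample (np.nan marks missing values).
X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])

print(SimpleImputer(strategy="mean").fit_transform(X))  # replacement by column mean
print(KNNImputer(n_neighbors=2).fit_transform(X))       # smarter, KNN-based replacement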

Inconsistent definition regularized linear gradient

  • Day: 03
  • Exercise: 03

THETA is given as "a vector of dimension n * 1",
although the definition is given for (n+1) terms (0 to n).

Examples
The examples are also given with the same dimension for X and THETA.
The same result as in the examples can be obtained using only the general formula (1...n).
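
For context, the usual form of the regularized linear gradient the exercise presumably targets (an assumption; $\theta_0$ is excluded from the penalty, which is why $\theta$ has $n + 1$ terms while the regularization runs over $1 \dots n$):

$$ \nabla J(\theta) = \frac{1}{m} \left( X'^{T} (X'\theta - y) + \lambda \theta' \right), \qquad \theta' = (0, \theta_1, \dots, \theta_n)^{T} $$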
