bioc-proloc-hyperlopit-workflow's People

Contributors: laurentgatto, lgatto, lmsimp

bioc-proloc-hyperlopit-workflow's Issues

Using updateFvarLabels

@ClaireMulvey had a very good question regarding the code chunk where we update the feature variable names:

fvarLabels(hyperLOPIT2015ms3r1)[3:5] <- paste0(fvarLabels(hyperLOPIT2015ms3r1)[3:5], 1)
fvarLabels(hyperLOPIT2015ms3r2)[3:5] <- paste0(fvarLabels(hyperLOPIT2015ms3r2)[3:5], 2)
fData(hyperLOPIT2015ms3r1) <- fData(hyperLOPIT2015ms3r1)[1:5] 
fData(hyperLOPIT2015ms3r2) <- fData(hyperLOPIT2015ms3r2)[3:5]

where she asks if one could use updateFvarLabels using, for example

updateFvarLabels(hyperLOPIT2015ms2r1, label = "rep1")

The reason we don't do that is that updateFvarLabels updates all the names, including EntryName and ProteinDescription, which would end up as EntryName_Rep1 and ProteinDescription_Rep1 (a bit ugly, but it would work). I have however changed the code chunk to use updateFvarLabels for the second replicate.
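For reference, the suggested call would look like this (a sketch; note that updateFvarLabels suffixes every feature variable name, as discussed above):

```r
library("MSnbase")
## Suffix all feature variable names of replicate 1 with "rep1";
## this renames every fvarLabel, including EntryName and
## ProteinDescription, as noted above
hyperLOPIT2015ms3r1 <- updateFvarLabels(hyperLOPIT2015ms3r1, label = "rep1")
fvarLabels(hyperLOPIT2015ms3r1)
```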

Thanks for the suggestion!

Overview figure

We could prepare an overview figure of the workflow and refer to the specific sections.

On using filterNA

Comment from @ClaireMulvey

Can you use filterNA() to allow for some MVs? E.g. allow for 2 missing values per replicate? How would you do this?

The easiest way to do that is to filter before combining, setting the proportion of allowed missing values to 2/10 (using pNA to control the proportion of NA values).
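Concretely, a sketch (using the replicate object names from earlier in the thread):

```r
## Allow at most 2 missing values out of 10 channels per feature,
## filtering each replicate before combining
r1 <- filterNA(hyperLOPIT2015ms3r1, pNA = 2/10)
r2 <- filterNA(hyperLOPIT2015ms3r2, pNA = 2/10)
```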

Is it possible to say that such filtering to allow for a proportion of missing values should ideally be done on a PSM level and then show the code you would use to reassemble from the PSM to the protein level file?

Yes, ideally filtering and imputation should be done at the PSM/peptide level. That is because when combining into proteins, there is an implicit imputation that takes place (using zero, or the mean) that is often not acknowledged but might not be a good option at all. For details, see

Lazar C, Gatto L, Ferro M, Bruley C, Burger T. Accounting for the Multiple Natures of Missing Values in Label-Free Quantitative Proteomics Data Sets to Compare Imputation Strategies. J Proteome Res. 2016 Apr 1;15(4):1116-25. PMID: 26906401.
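Reassembling from the PSM to the protein level after filtering could look like this (a sketch; the `psms` object and its `ProteinAccession` feature variable are hypothetical, and the aggregation function is just an example):

```r
## Filter missing values at the PSM level first, then aggregate
## PSMs into protein-level intensities by protein accession
## (see ?combineFeatures for the available aggregation functions)
psms <- filterNA(psms, pNA = 2/10)
prots <- combineFeatures(psms,
                         groupBy = fData(psms)$ProteinAccession,
                         fun = "median")
```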

Topics to add

  • FeaturesOfInterest
  • plot3D
  • QSep
  • GO annotations
  • Future directions (multi/trans- localisation, automation, ...)

Questions/checks in the manuscript

On figure 7, we plot the profile of the mitochondrial and peroxisome markers to highlight the differences in profiles between these two sets of markers along the 6th and 7th channels, as represented above along the 7th PC on the PCA plot.

This is confusing, as the channels and indices on the y axis of plotDist don't match. Also, I have the feeling that it's not really along the 12th and 13th columns (TMT channels 6 and 7), but the 13th and 14th columns, that the data differ most.

how to store data

options:

  • add results from the phenoDisco run, TL and SVM classification to a new object called hl and add it to pRolocdata? Then load it using the data function? The downside is that we would then have multiple datasets in pRolocdata containing the same quantitation information
  • add results as columns to hyperLOPIT2015. I would rather not do this, as the fData gets bigger and then there are many similar columns
  • add an .rda or other file type to extdata in pRolocdata and load it using the dir function with an appropriate read function? I think this is the best option?

@lgatto or other suggestions?

Use Bioconductor release 3.3

The manuscript/vignette should be built using the recent (yesterday) release (stable) version 3.3, as that is what readers/users will be assumed to use. To make use of that version, you will need R 3.3.

Which markers

In the tl object (but see #15), we use at times SVM.marker.set and markers, as produced by addMarkers. The latter contains ER and Golgi as separate groups, and the Golgi markers are all over the place. Two possibilities:

  1. We use the former, possibly renaming it to markers
  2. We use the latter, and drop Golgi markers, explaining there is no Golgi in these cells.

TL

Highlight where TL is useful at the beginning of the section. We say towards the end that it is particularly useful for datasets with low cluster resolution; it would be good to emphasise this where TL is first introduced.

Plotting all organelle profiles

Question from @ClaireMulvey

How would you adapt this code to plot all or individual organelle markers in the same colour as they appear on the PCA plot, against the grey background of all unselected proteins (and unknowns), potentially with a protein of interest highlighted?
Basically, I mean exactly what can be done easily in the pRolocGUI, but using R instead, as it isn't possible to export high res figs from the GUI?

library(pRoloc)
library(pRolocdata)
data(hyperLOPIT2015)
o <- order(hyperLOPIT2015$Iodixonal.Density) ## reorder
cl <- getMarkerClasses(hyperLOPIT2015) ## class names
cols <- getStockcol() ## colours, as used by plot2D


par(mfrow = c(4, 4))
for (i in seq_along(cl)) {    
    plotDist(hyperLOPIT2015[, o],
             markers = getMarkers(hyperLOPIT2015, verbose = FALSE) == cl[i],
             mcol = cols[i],
             type = "l")
    title(cl[i])
}

Which produces:

![allplotdists](https://cloud.githubusercontent.com/assets/384198/20843994/ac3bb5f2-b8b5-11e6-9c72-bca03d93f6ea.png)

errors with the proof from f1000

Nearly all figures do not follow on from the code chunks. Lots of code chunks are in the wrong place and don't follow on from the text.

The use of addMarkers in the workflow

library('pRoloc')
library('pRolocdata')

## Create new `MSnSet`
f0 <- dir(system.file("extdata", package = "pRolocdata"), full.names = TRUE, 
          pattern = "hyperLOPIT-SIData-ms3-rep12-intersect.csv")
lopit2015 <- readMSnSet2(f0, ecol = c(8:27), fnames = 1, skip = 1)

## Clean `fData` cols 
fData(lopit2015) <- fData(lopit2015)[, c(2, 8, 11)]

## get markers
mrk <- pRolocmarkers(species = "mmus")
lopit2015 <- addMarkers(lopit2015, markers = mrk)

## Of course `featureNames` do not match, but trying to update `featureNames` is not possible because of protein grouping and non-unique IDs

featureNames(lopit2015) <- fData(lopit2015)[, 1]
Error in `row.names<-.data.frame`(`*tmp*`, value = c(1185L, 3057L, 3399L,  : 
  duplicate 'row.names' are not allowed
In addition: Warning message:
non-unique values when setting 'row.names': 'ACTB_MOUSE', 'AP1B1_MOUSE', 'AT1A1_MOUSE', 'AT2A2_MOUSE', 'BAIP2_MOUSE', 'CLAP2_MOUSE', 'CNNM4_MOUSE', 'CTNA1_MOUSE', 'CTNB1_MOUSE', 'CTND1_MOUSE', 'CYFP1_MOUSE', 'DAAM1_MOUSE', 'DNM1L_MOUSE', 'DOCK6_MOUSE', 'DYST_MOUSE', 'E41L3_MOUSE', 'EPHB2_MOUSE', 'EPHB4_MOUSE', 'EPN3_MOUSE', 'GANAB_MOUSE', 'GBB2_MOUSE', 'glutamine-hydrolyzing', 'GNAI3_MOUSE', 'GNAS2_MOUSE', 'HNRPC_MOUSE', 'HNRPK_MOUSE', 'IMMT_MOUSE', 'KC1G1_MOUSE', 'KPYM_MOUSE', 'MACF1_MOUSE', 'MPRIP_MOUSE', 'MYO1C_MOUSE', 'MYOF_MOUSE', 'NADP', 'NCAM1_MOUSE', 'PKP4_MOUSE', 'PLAK_MOUSE', 'PLXA1_MOUSE', 'PVRL2_MOUSE', 'RAB1A_MOUSE', 'RAB6A_MOUSE', 'RADI_MOUSE', 'RALA_MOUSE', 'RAP1B_MOUSE', 'RAP2B_MOUSE', 'RAP2C_MOUSE', 'RASH_MOUSE', 'RASN_MOUSE', 'RB11B_MOUSE', 'RRAS2_MOUSE', 'RS27A_MOUSE' [... truncated]

@lgatto This is annoying. Now if I want to use addMarkers I will need to write a few more lines of code to make the identifiers unique. This is not ideal for the workflow. What do you think? Not use addMarkers but highlight that it's available?

Not a model example, change getEcols?

@lgatto Annoyingly, the combined hyperLOPIT data is not a model dataset to use in terms of default names and functions to create an MSnSet. For example, I cannot use getEcols as there are two header rows in the .csv file. Also, there is no column called markers. Of course, this does not matter, and it may well be the case that one has several header rows. I wonder whether we should change getEcols to allow reading column names from a second or other header row?

f0 <- dir(system.file("extdata", package = "pRolocdata"), 
          full.names = TRUE, 
          pattern = "hyperLOPIT-SIData-ms3-rep12-intersect")

## getEcols does not work for this dataset
strsplit(readLines(f0, 2)[2], ",")[[1]]

lopit2015 <- readMSnSet2(f0, ecol = c(7:26), row.names = 1, skip = 1,
                         header = TRUE, stringsAsFactors=FALSE)
plot2D(lopit2015, fcol = "SVM.marker.set")

Fractions vs channels

I think we should clarify fractions (along the gradient) and channels (the final columns in the data).

Daniel's review comments to address

Breckels et al. have written a very nice piece on analysing appropriate proteomics data for subcellular localisation. I particularly like the "workshop characteristics" of the text, which allow a novice but interested reader to work through the analysis stepwise and reproduce the results described therein. The authors took great care in keeping this ideal up throughout their text, and this is also where I have my greatest reservation about the manuscript in its present form: a reader cannot work through all the code presented in the manuscript, since there are at least two situations where a readily available HPC and quite some time are required. This kind of leaves a dent in my impression. However, given this can be resolved, as well as some typos, the workflow report is superb.

Major comments:

  • Next to reducing the dimensions of data for visualisation, PCA also offers a way to understand how the variability is distributed across the multidimensional data, by providing linear combinations of the variables which then constitute the actual PCs. On that note, it would be nice to mention this in the Visualising markers section on page 16, where PC7 explains little variability, but due to the correct weighting of the variables we do get a separation between mitochondrial and peroxisome markers. This can then be further motivated with Figure 9, where we can probably see that the weights for the fractions where the two localisations differ are larger than otherwise.

We have added a paragraph to the 'Visualising markers' section of the manuscript reiterating the purpose of PCA and motivating the choice of looking at PCs 1 and 7. Figure 9 now follows on from this (now Figure 8), along with the corresponding code and an explanation of the plotDist function.

  • I was unable to reproduce Figure 13 comparing the two MSnSets. While I was able to look at each set separately using pRolocVis(hllst@x[[i]]), where i is 1 or 2, I only got an error using the code from the manuscript. When using remap = FALSE it actually works, but since this barely makes sense it is of no use; I mention it just as a hint for debugging.
Subsetting MSnSetList to their common feature names
5032 features in common
Remapping data to the same PC space
Error in (function (od, vd)  : 
  object and replacement value dimnames differ
Error in pRolocVis_compare(object, ...) : object 'idDT' not found

We cannot reproduce this error. Have you updated to the latest version of R and the latest version of pRolocGUI? If you still get this error message, could you please post this as an issue on the pRolocGUI GitHub page along with your sessionInfo() output, and we will certainly attempt to solve this.

  • You really need to make the results from the phenoDisco classification available too. It is super disappointing that one cannot continue reproducing the code from page 23 on, because it takes 24 hours to compute it using 40 cores…

The results are already available as an RDS file stored in pRolocdata. This is what is called in the manuscript under the hood:

f0 <- dir(extdatadir, full.names = TRUE, pattern = "bpw-pdres.rds")
pdres <- readRDS(f0)
hl <- addMarkers(hl, pdres, mcol = "pd", verbose = FALSE)

We have made this code available in the manuscript in an appendix so users can continue to produce the exact plots as they see in this workflow.

  • The above comment is of course also true for the KNN TL Optimisation on page 33 - this needs to be downloadable, since not everyone has access to Cambridge’s HPC and probably even less have 76 hours to spare.

As for the phenoDisco and SVM analyses, the TL results are stored as an RDS file in pRolocdata and are loaded in the background. We have added the code required to the appendix so that users can load the results directly.

  • Your comment on the increase suitability of classification instead of clustering (when additional information on classes is available) at the bottom of page 35 could be more pronounced - for educational reasons.

To address the above comment on suitability we have added a few additional points on the challenges of using clustering for this type of data.

We generally find supervised learning more suited to the task of protein localisation prediction, in which we use high-quality curated marker proteins to build a classifier, instead of using an entirely unsupervised approach to look for clusters and then look for enrichment of organelles and complexes. In the latter, we do not make good use of valuable prior knowledge, and in our experience unsupervised clustering can be extremely difficult due to (i) the loose definition of what constitutes a cluster (for example whether it is defined by the quantitative data or the localisation information), (ii) the influence of the algorithm's assumptions on the cluster identification (for example parametric or non-parametric) and (iii) poor estimates of the number of clusters that may appear in the data.

Minor comments:

  • I was not able to naïvely reproduce the workflow from the R commands in the article due to an error installing pRolocdata on a Windows machine. On OS X it was smooth.

We didn't experience any problems installing pRolocdata on Windows. Please re-try the installation and let us know if you still have any issues by opening an issue on GitHub or posting on the Bioconductor support site.

  • On page 10 line 2 there is a ‘to’ missing.

In this version we currently can't find the missing 'to'.

  • I never came across the verb imputate in the context of missing values, I guess the proper term is impute.

This has been changed to read "We can impute missing data..."

  • On page 11 the image2 function is called after the filterNA function a couple of lines above. This however would result in an only black heat map (since there are no more missing). The image2 function should be called before the filterNA function. Since the reader does not see the chunk options, it could be puzzling.

This was an editing mistake and has now been rectified.

  • For completeness' sake there should also be an install.packages(c("hexbin", "rgl")) somewhere to generate the second PCA plot and the 3D plot. Moreover, Mac users will need to install XQuartz to use rgl properly.

A footnote has been added to tell users that the rgl package may need to be installed with install.packages("rgl"), and that Mac users may need to install XQuartz if it is not already installed.

  • On page 14 the plotting code chunk is off track - in the middle of the marker sets output.

This has now been rectified.

  • On page 18: …wanted to highlight a proteins with the… -> lose the a and later in the sentence there is a ‘create a’ too many.

These typos have been rectified.

  • Direct comparisons of individual channels in replicated experiments do not provide…

This typo has been rectified.

  • You may want to consider adding a layout(1) or similar, after changing the mfrow argument of the parameters to accommodate 2 panels, such that the uncanny reader does not get confused.

We would prefer to keep the code as it is and not introduce more noise with calls to other functions such as layout. The workflow is not aimed at teaching R. Users should have some basic knowledge of R before tackling this tutorial.

  • I would prefer links to referred sections of the text, but that may be personal taste…
    This is a comment for f1000. We can not control the linking of sections in the final version.

  • Page 23: One should note that the decreasing the GS, and increasing the … at least one the too many, probably two.

We have reworded this sentence as requested.

  • On page 25: We find the general tendancy to be that it is not the choice … tendency?

This typo has been rectified.

  • On page 28 you refer to ‘…the code chunk below…’ for Figure 17, however, the following code chunk is generating Figure 16 (which is above and btw not referenced in the text). Maybe force your figures a little to float where you want them/refer to them.

We have now referenced Figure 16 in the text and made sure that the code chunks and figures follow inline where they are referenced in the text.

  • On page 28: …by extracting the median or 3rd quantile score per organelle… do you mean quartile? Otherwise I do not follow.

Thank you, yes this is a typo and has now been changed to 'quartile'.

  • On page 32: …package to query the relevent database … relevant?

This typo has been rectified.

  • On page 32 - there is something wrong with this sentence: To remove the 4 classes and create a new column of markers in the feature data called tlmarkers to use for the analysis:

This sentence is not needed here and so it has now been removed as it essentially reiterates what is said in the above paragraph.

  • On page 34: From examining the parameter seach plots as described in section Optimisation… search!

This typo has been rectified.

  • On page 36: …and later reload the object using save. -> that would be ‘load’ then!

This typo has been rectified.

  • On page 38 - I fully agree with the following sentence, but right after the updating comment it kind of seems ‘misplaced’? Maybe add a title like ‘Getting help’?

We have changed the title of this section to 'Session information and getting help' to clarify this section of the tutorial.

renamed lopit2016 to hl

Did this because it was confusing to call that variable lopit2016 while the two replicates, used to illustrate combine, are called hyperLOPIT2015. And hl is shorter to type.

@lmsimp - please close the issue once read.

Confusion between naming MSnSets

Reminder to check the use of hl and hyperLOPIT2015 as examples. One is combined from loading two independent replicates; the other is loaded from a csv that has already been combined.

Using a seed before svmOptimisation

@ClaireMulvey asked about using a seed when optimising the parameters. This would be done as shown below. I used 123 as the seed (and would take note of it), but any integer can be used (and there's no need to sample it at random).

set.seed(123)
params <- svmOptimisation(hl, ...)

Saving the params saves me from re-running the optimisation routine and allows me to inspect them again at a later date. If I wanted to re-run it and get the same results, I would first set the seed to 123 again.
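Saving and re-loading the parameters could look like this (the file name is just an example):

```r
## Persist the optimisation results to avoid re-running the routine
saveRDS(params, "svm-params.rds")
## Later, or in another session
params <- readRDS("svm-params.rds")
```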

I don't think that this is necessary, however. The optimisation routine is repeated 100 times (that's the default) and we assume the actual folds don't matter. If they did, i.e. if different seeds gave different and incompatible best parameters, we would have an issue, and would need to increase the number of iterations and make sure the results converge. We have never observed such an issue.

Leonard's review comments to address

This manuscript describes a Bioconductor workflow for analyzing subcellular proteomics data. It is very detailed and comprehensive and will be useful for others in the field.

A few comments:

  • Some clearer statement early on would help to clarify for readers what types of data this works with. I know that the authors indicate that the example they use is 10-plex TMT and that it can be used with label-free or other labels, but that is not what I am referring to. Rather, it is the structure of the experiment. That is, one needs systematic quantitative data on all the different relevant fractions from a cell, as opposed to someone who perhaps did a differential centrifugation experiment to isolate a couple of fractions and then wants to apply this (my understanding is that this latter example would not be usable).

  • How do the authors recommend collapsing replicates? This could be covered in the section dedicated to the Compare function. Two replicates will (almost) never agree 100% so how are discrepancies handled?

QC PCA

The un-annotated PCA plot in the QC section could use transparency or the new hexbin method.

Combining

  • Take the two replicates from the .xls spreadsheet with the paper and show how to combine
  • Show how to split the combined dataset

Replicates

We probably need a paragraph about replicates, whether they are combined, analysed/classified independently, how are they normalised, ...

Funding

@lmsimp - could you update the acknowledgement.md file with the correct grant code.

Would like to use addGoAnnotations

@lgatto Do you think it would be nice to use this in this workflow? It would fit in nicely in the markers section, where I add markers from (1) pRolocmarkers and then (2) show the curated markers. The restriction, of course, is that this is not in the stable version of pRoloc, and it's a lot of code to backport... not to mention not good practice.

Add pd and tl results to pRolocdata?

In the manuscript we load the results of the phenoDisco and TL algorithms under the hood:

PD

f0 <- dir(extdatadir, full.names = TRUE,
          pattern = "bpw-pdres.rds")

TL

tlfile <- dir(extdatadir, full.names = TRUE,
              pattern = "bpw-tlopt.rds")

SVM

svmf <- dir(extdatadir, full.names = TRUE,
              pattern = "bpw-svmopt.rds") 

I do agree with Daniel that it is annoying that, when you follow the manuscript, you can't produce the relevant plots yourself because the objects are not directly accessible by following the code.

What can we do @lgatto? Do you think we should add a column to the hyperLOPIT MSnSet in pRolocdata for each of these results? Or directly show how to load these results using the dir command? The latter is messy.

Loading the optimisation results is tricky, though; they can't be stored in the MSnSet, at least not the TL results.

README

Says something about Software Carpentry - I think this is a mistake?

Fig 6

@lgatto can you regenerate this figure so that the x-axis labels on the LHS plot are rotated (las = 2) and use the same font size as the plot on the RHS?

Claire's comments

@ClaireMulvey's comment

  • Are you planning to mention pRolocGUI in this workflow – it would be nice for readers to be made aware of the GUI and associated apps.
  • In the section called “The use-case: prediction sub-cellular localisation in pluripotent embryonic mouse stem cells” it looks as though SPS-MS3 has a typo in it – the 3 hasn't superscripted correctly?
  • In the Infrastructure section, it might be a good idea to briefly describe what you define as feature meta-data and sample meta-data.
  • In the last part of the Importing Data section, it would be nice to have a little more detailed explanation of the code you have used. Simple things like what the $ represents would be very helpful for the beginner and helps us learn a bit of basic R..!

Normalisation section:

  • Now I am just being greedy, but you mention different normalisation technique are available in pRoloc – mean, median scaling, variance stabilisation etc…. Could you mention briefly why different methods should be used and also could you show how to write the different methods i.e. instead of:
hl <- normalise(hl, method = "sum")
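For illustration, the alternatives could be shown along these lines (a sketch; each call is an alternative to method = "sum", not a pipeline — see ?normalise for the full list of methods):

```r
## Alternatives to method = "sum"; apply one of them, not all
normalise(hl, method = "max")        # scale each feature by its maximum
normalise(hl, method = "quantiles")  # quantile normalisation
normalise(hl, method = "vsn")        # variance stabilisation (needs the vsn package)
```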

In the Missing Data section,

  • Could you show an example of how to use the impute() function? Up to how many missing values are acceptable to impute? Or is it always preferable to filterNA() instead?
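A minimal usage sketch for impute (the method choice is an example; see ?impute for the available methods):

```r
## Impute remaining missing values by k-nearest neighbour averaging
hl <- impute(hl, method = "knn")
```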

In the Quality Control section,

  • could you show an example of plotting lower dimensions with the dims () function?
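Assuming this refers to the dims argument of plot2D, a sketch:

```r
## Visualise principal components 1 and 3 instead of the default 1 and 2
plot2D(hl, dims = c(1, 3))
```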

In the Markers section:

  • Typo? In the text at the start you say – “The mouse dataset used here has Uniprot IDs stored as the featureNames (see head(featureNames(lopit2016)))” – should this msnset be called hl instead of lopit2016?
  • The human marker set Katerina and I are using now has a lot more proteins and is a lot better than Andy's old marker set.. do you want me to send it to you? - to be followed up here.
  • Maybe you could mention in the marker section how to import your own specific marker set or how to update a marker list with new additions.
  • It might also be good to show how to plot the reporter ion series for markers using plotDist()?
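On importing one's own marker set, a sketch (the file name and column layout are hypothetical; addMarkers expects a character vector of marker classes named by feature names):

```r
## Read a two-column csv (feature name, marker class) and add it
## as a new feature variable
mymrk <- read.csv("my-markers.csv", row.names = 1)
mrkvec <- setNames(as.character(mymrk[, 1]), rownames(mymrk))
hl <- addMarkers(hl, markers = mrkvec, mcol = "mymarkers")
```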

Replication section:

  • Does the scatterplot for Channel 10 show that everything EXCEPT the chromatin has a reproducible distribution for the chromatin prep?!! Oh dear....Sigh…!
  • It would be nice to show the code you used to generate the linear regression scatter plots and the reporter ion distributions – or is it too complex for beginners?

In Optimisation section:

  • can you mention from the example you have plotted which are the best parameters to use from this test example? I am unclear of how to interpret the cost/sigma results

It might also be nice to mention some other basic tools such as:

  • how to plot a particular protein of interest on the pca and pull out its reporter ion series - addressed with pRolocVis
  • How to plot some organelles and not others – previously Lisa showed me - addressed with pRolocVis
  • how to use fDataToUnknown() ??
  • Is there a methodical way to make cutoffs for SVM scores?
  • How do you pull out the classified proteins list?
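On fDataToUnknown, typical usage would be along these lines (a sketch; it replaces a given value in a feature variable, here empty strings, by "unknown"):

```r
## Replace empty entries in the markers column with "unknown"
hl <- fDataToUnknown(hl, fcol = "markers", from = "", to = "unknown")
```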

clustering

Mention somewhere that we can do clustering, but that a totally unsupervised approach is not an appropriate/adequate data analysis for spatial proteomics. It is still useful... for clustering of markers, ref mrkHClust.
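Clustering of the marker profiles could be illustrated with mrkHClust; a sketch:

```r
## Hierarchical clustering of the average marker profiles
mrkHClust(hl, fcol = "markers")
```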

Repeat TL with updated GO MSnSet

The latest pRoloc version (added in 2b878e604bb1835f0b93f88a2227f35a9c4eedc4) chunks the input before querying Biomart. It's probably worth repeating the TL analysis. Maybe we could use a few more classes, and possibly complement those that don't have 13 markers with some assignments from the paper.

errors in 2nd proof from f1000

  • Page 18 - The code for plot3D is wrong. It should be
plot3D(hl, dims = c(1, 2, 7))
  • Page 19 - there is an erroneous plot3D ... line of code at the end of this section. It needs to be deleted.

  • The bottom of page 19 the output of the call to foi13s is missing and has been erroneously placed on page 20 at the end of the section. It should follow after foi13s e.g.

foi13s
## Traceable object of class "FeaturesOfInterest"
## Created on Tue May 22 16:07:42 2018
## Description:
##  13S condensin
## 4 features of interest:
##   Q8CG48, Q8CG47, Q8K2Z4, Q8C156
  • page 31 the output does not follow the code chunks here, please see original submission.
