Comments (16)
I did not filter genes with zero expression
from cogaps.
Good morning,
Sorry for the delay. After removing zero-variance and zero-expression genes with:
#drop zero expression genes (rowVars below needs the matrixStats package)
library(matrixStats)
row_sums <- rowSums(Hoxd10.mat)
#logical indexing avoids dropping every row when no gene is all-zero
Hoxd10mat_filtered <- Hoxd10.mat[row_sums > 0, ]
#drop zero variance genes
row_var <- rowVars(Hoxd10mat_filtered)
Hoxd10mat_filtered <- Hoxd10mat_filtered[row_var > 0, ]
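As a quick sanity check (my addition, not part of the original message), one can verify that no zero-expression or zero-variance genes remain and that every value is finite, since non-finite entries are another way to end up with a NaN ChiSq:

```r
# Hypothetical helper: verify a filtered expression matrix before running
# CoGAPS. Uses base R only (apply + var instead of matrixStats::rowVars).
check_filtered <- function(mat) {
  stopifnot(all(rowSums(mat) > 0))        # no zero-expression genes
  stopifnot(all(apply(mat, 1, var) > 0))  # no zero-variance genes
  stopifnot(all(is.finite(mat)))          # no NA/NaN/Inf entries
  invisible(TRUE)
}

# check_filtered(Hoxd10mat_filtered)
```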
I reran CoGAPS with and without sparse optimization. With sparse optimization it still gave a -nan ChiSq:
> Hoxd10_mat <- readRDS('~Hoxd10mat_filtered.RDS')
> params <- CogapsParams(nPatterns=5, nIterations=30000, seed=42, sparseOptimization=TRUE, distributed="genome-wide")
> params <- setDistributedParams(params, nSets=5)
setting distributed parameters - call this again if you change nPatterns
> Hoxd10_matnp5 <- CoGAPS(Hoxd10_mat, params)
This is CoGAPS version 3.19.1
Running genome-wide CoGAPS on Hoxd10_mat (27277 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization TRUE
distributed genome-wide
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
-- Distributed CoGAPS Parameters --
nSets 5
cut 5
minNS 3
maxNS 8
Creating subsets...
set sizes (min, mean, max): (5455, 5455.4, 5457)
Running Across Subsets...
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
worker 1 is starting!
worker 2 is starting!
worker 4 is starting!
worker 3 is starting!
worker 5 is starting!
-- Equilibration Phase --
1000 of 30000, Atoms: 16620(A), 1167(P), ChiSq: -nan, Time: 00:00:52 / 01:28:04
...
30000 of 30000, Atoms: 25197(A), 1328(P), ChiSq: -nan, Time: 00:41:15 / 01:28:38
-- Sampling Phase --
1000 of 30000, Atoms: 25152(A), 1329(P), ChiSq: -nan, Time: 00:42:42 / 01:28:29
...
30000 of 30000, Atoms: 25114(A), 1344(P), ChiSq: -nan, Time: 01:24:45 / 01:24:45
worker 1 is finished! Time: 01:24:45
worker 2 is finished! Time: 01:24:49
worker 3 is finished! Time: 01:25:26
worker 5 is finished! Time: 01:28:35
worker 4 is finished! Time: 01:30:20
Matching Patterns Across Subsets...
Running Final Stage...
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
worker 1 is starting!
worker 2 is starting!
worker 5 is starting!
worker 3 is starting!
worker 4 is starting!
-- Equilibration Phase --
1000 of 30000, Atoms: 12640(A), 0(P), ChiSq: -nan, Time: 00:00:34 / 00:57:35
...
30000 of 30000, Atoms: 19022(A), 0(P), ChiSq: -nan, Time: 00:25:15 / 00:54:15
-- Sampling Phase --
1000 of 30000, Atoms: 19126(A), 0(P), ChiSq: -nan, Time: 00:26:07 / 00:54:07
...
30000 of 30000, Atoms: 18941(A), 0(P), ChiSq: -nan, Time: 00:51:43 / 00:51:43
worker 1 is finished! Time: 00:51:43
worker 3 is finished! Time: 00:52:06
worker 5 is finished! Time: 00:52:45
worker 2 is finished! Time: 00:52:56
worker 4 is finished! Time: 00:53:10
Warning message:
In checkInputs(data, uncertainty, allParams) :
running distributed cogaps without mtx/tsv/csv/gct data
Running without sparse optimization, everything looked normal:
> Hoxd10_mat <- readRDS('~Hoxd10mat_filtered.RDS')
> params <- CogapsParams(nPatterns=5, nIterations=30000, seed=42, distributed="genome-wide")
> params <- setDistributedParams(params, nSets=5)
setting distributed parameters - call this again if you change nPatterns
> Hoxd10_matnp5 <- CoGAPS(Hoxd10_mat, params)
This is CoGAPS version 3.19.1
Running genome-wide CoGAPS on Hoxd10_mat (27277 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization FALSE
distributed genome-wide
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
-- Distributed CoGAPS Parameters --
nSets 5
cut 5
minNS 3
maxNS 8
Creating subsets...
set sizes (min, mean, max): (5455, 5455.4, 5457)
Running Across Subsets...
worker 2 is starting!
Data Model: Dense, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
worker 1 is starting!
worker 4 is starting!
worker 3 is starting!
worker 5 is starting!
-- Equilibration Phase --
1000 of 30000, Atoms: 5351(A), 1181(P), ChiSq: 6223970, Time: 00:01:14 / 02:05:20
...
30000 of 30000, Atoms: 12093(A), 2528(P), ChiSq: 5907389, Time: 00:55:27 / 01:59:09
-- Sampling Phase --
1000 of 30000, Atoms: 12133(A), 2524(P), ChiSq: 5907130, Time: 00:57:20 / 01:58:48
...
22000 of 30000, Atoms: 12115(A), 2522(P), ChiSq: 5906996, Time: 01:38:09 / 01:54:53
worker 5 is finished! Time: 01:39:32
23000 of 30000, Atoms: 12027(A), 2495(P), ChiSq: 5907204, Time: 01:40:02 / 01:54:40
24000 of 30000, Atoms: 12127(A), 2530(P), ChiSq: 5906893, Time: 01:41:46 / 01:54:16
25000 of 30000, Atoms: 12067(A), 2549(P), ChiSq: 5907026, Time: 01:43:30 / 01:53:54
26000 of 30000, Atoms: 12093(A), 2520(P), ChiSq: 5907063, Time: 01:45:13 / 01:53:31
worker 4 is finished! Time: 01:46:33
27000 of 30000, Atoms: 12146(A), 2533(P), ChiSq: 5907351, Time: 01:46:57 / 01:53:09
worker 3 is finished! Time: 01:48:18
28000 of 30000, Atoms: 12094(A), 2504(P), ChiSq: 5906844, Time: 01:48:37 / 01:52:44
worker 2 is finished! Time: 01:49:34
29000 of 30000, Atoms: 12145(A), 2491(P), ChiSq: 5907016, Time: 01:50:00 / 01:52:03
30000 of 30000, Atoms: 12155(A), 2493(P), ChiSq: 5907008, Time: 01:51:34 / 01:51:34
worker 1 is finished! Time: 01:51:34
Matching Patterns Across Subsets...
Running Final Stage...
worker 5 is starting!
worker 3 is starting!
worker 4 is starting!
worker 2 is starting!
Data Model: Dense, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
worker 1 is starting!
-- Equilibration Phase --
1000 of 30000, Atoms: 6726(A), 0(P), ChiSq: 18248056, Time: 00:00:08 / 00:13:33
...
30000 of 30000, Atoms: 11486(A), 0(P), ChiSq: 18248056, Time: 00:07:05 / 00:15:13
-- Sampling Phase --
1000 of 30000, Atoms: 11367(A), 0(P), ChiSq: 18248056, Time: 00:07:20 / 00:15:11
...
30000 of 30000, Atoms: 11467(A), 0(P), ChiSq: 18248056, Time: 00:14:32 / 00:14:32
worker 1 is finished! Time: 00:14:32
worker 2 is finished! Time: 00:15:12
worker 5 is finished! Time: 00:16:59
worker 3 is finished! Time: 00:17:06
worker 4 is finished! Time: 00:17:30
Warning message:
In checkInputs(data, uncertainty, allParams) :
running distributed cogaps without mtx/tsv/csv/gct data
This time, each run generated the same number of patterns; however, the values differed:
> range(sparseTRUE@featureLoadings)
[1] 0.000000 9.607491
> range(sparseFALSE@featureLoadings)
[1] 7.684777e-09 5.845305e+00
patternMarkers with threshold = "all" also did not work on the CoGAPS object generated with sparseOptimization = TRUE, but it did work on the object generated without sparse optimization.
from cogaps.
ChiSq is still not NaN if we run on a simulated matrix with the exact same dimensions and parameters:
c <- 380
r <- 27277
simdata <- matrix(runif(r*c), nrow=r, ncol=c)
params <- CogapsParams(nPatterns=5, nIterations=30000, seed=42, distributed="genome-wide", sparseOptimization=TRUE)
params <- setDistributedParams(params, nSets=5)
res <- CoGAPS(simdata, params = params, outputFrequency = 1000)
This is CoGAPS version 3.22.0
Running genome-wide CoGAPS on simdata (27277 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization TRUE
distributed genome-wide
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
-- Distributed CoGAPS Parameters --
nSets 5
cut 5
minNS 3
maxNS 8
Creating subsets...
set sizes (min, mean, max): (5455, 5455.4, 5457)
Running Across Subsets...
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
worker 1 is starting!
worker 3 is starting!
-- Equilibration Phase --
worker 4 is starting!
worker 2 is starting!
worker 5 is starting!
1000 of 30000, Atoms: 28296(A), 1016(P), ChiSq: 171672640, Time: 00:00:55 / 01:33:09
2000 of 30000, Atoms: 31014(A), 983(P), ChiSq: 171320192, Time: 00:02:02 / 01:32:27
3000 of 30000, Atoms: 32194(A), 934(P), ChiSq: 171210704, Time: 00:03:07 / 01:28:59
4000 of 30000, Atoms: 33225(A), 922(P), ChiSq: 171167024, Time: 00:04:12 / 01:26:24
from cogaps.
Making data 50% sparse still runs fine. @rpalaganas what's the sparsity of your data?
c <- 380
r <- 27277
dense <- runif(r*c)
sparse <- sample(c(dense, rep(0, length(dense))),
size = length(dense), replace = T)
sum(sparse==0)/length(sparse)
simdata <- matrix(sparse, nrow=r, ncol=c)
params <- CogapsParams(nPatterns = 5, nIterations = 30000, seed = 42, distributed = "genome-wide",
sparseOptimization = TRUE)
params <- setDistributedParams(params, nSets=5)
res <- CoGAPS(simdata, params = params, outputFrequency = 1000)
This is CoGAPS version 3.22.0
Running genome-wide CoGAPS on simdata (27277 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization TRUE
distributed genome-wide
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
-- Distributed CoGAPS Parameters --
nSets 5
cut 5
minNS 3
maxNS 8
Creating subsets...
set sizes (min, mean, max): (5455, 5455.4, 5457)
Running Across Subsets...
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
worker 1 is starting!
-- Equilibration Phase --
worker 2 is starting!
worker 4 is starting!
worker 3 is starting!
worker 5 is starting!
1000 of 30000, Atoms: 28519(A), 990(P), ChiSq: 171619760, Time: 00:00:56 / 01:34:51
2000 of 30000, Atoms: 30973(A), 967(P), ChiSq: 171261920, Time: 00:02:04 / 01:33:58
from cogaps.
The sparsity of the matrix that gave the -nans is 0.71.
coop::sparsity(Hoxd10) #0.7125972
I also do not get -nan ChiSq when testing a matrix that is almost exactly as sparse.
x <- matrix(0, 380, 27277)
x[sample(length(x), size = round(0.29 * length(x)))] <- 1
coop::sparsity(x) #0.7099992
params <- CogapsParams(nPatterns = 5, nIterations = 30000, seed = 42, distributed = "genome-wide",
sparseOptimization = TRUE)
params <- setDistributedParams(params, nSets=5)
res <- CoGAPS(t(x), params = params, outputFrequency = 1000)
This is CoGAPS version 3.21.5
Running genome-wide CoGAPS on t(x) (27277 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization TRUE
distributed genome-wide
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
-- Distributed CoGAPS Parameters --
nSets 5
cut 5
minNS 3
maxNS 8
Creating subsets...
set sizes (min, mean, max): (5455, 5455.4, 5457)
Running Across Subsets...
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data... worker 5 is starting!
worker 4 is starting!
worker 2 is starting!
Done! (00:00:00)
worker 1 is starting!
-- Equilibration Phase --
worker 3 is starting!
1000 of 30000, Atoms: 10004(A), 1354(P), ChiSq: 42395544, Time: 00:00:15 / 00:25:24
2000 of 30000, Atoms: 12855(A), 1780(P), ChiSq: 42189580, Time: 00:00:38 / 00:28:47
from cogaps.
I thought something might be wrong in some genes' distributions, so I used this function to remove genes that would yield a NaN ChiSq in the results.
failRemoveRowsAll <- function(data) {
  i <- 1
  j <- 6  # to keep genes > patterns
  failnames <- c()
  params <- CogapsParams(nPatterns = 5, nIterations = 10, seed = 1,
                         sparseOptimization = TRUE)
  while (j <= nrow(data)) {
    res <- CoGAPS(data[c(i:j), ], params = params, outputFrequency = 10,
                  messages = FALSE)
    if (sum(is.na(res@metadata$chisq)) > 0) {
      failname <- rownames(data)[j]
      message(failname, ", at: ", j, " fails: ", length(failnames))
      failnames <- c(failnames, failname)
      data <- data[-c(j), ]
    } else {
      j <- j + 1
    }
  }
  return(failnames)
}
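To see what the scan is doing without paying for a CoGAPS run per window, here is a mock version (entirely my illustration: the real test is the NaN ChiSq check above, while the mock flags non-finite values instead):

```r
# Mock of the scan above: replace the per-window CoGAPS run with a cheap
# predicate (here: "window contains a non-finite value"). The control flow
# is the same: grow the window by one row; on failure, record and drop the
# newest row and retry. Note: as in the original, a bad row among the first
# 5 rows would cause good rows to be dropped instead.
scan_remove_rows <- function(data, fails = function(w) any(!is.finite(w))) {
  j <- 6  # keep genes > patterns, as in the original
  failnames <- c()
  while (j <= nrow(data)) {
    if (fails(data[1:j, , drop = FALSE])) {
      failnames <- c(failnames, rownames(data)[j])
      data <- data[-j, , drop = FALSE]
    } else {
      j <- j + 1
    }
  }
  failnames
}

m <- matrix(1, nrow = 10, ncol = 3, dimnames = list(paste0("g", 1:10), NULL))
m[8, 2] <- NA
scan_remove_rows(m)  # "g8"
```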
Afterwards, the ChiSq is no longer NaN, but the value itself is huge:
This is CoGAPS version 3.22.0
Running Standard CoGAPS on hoxdata[!(rownames(hoxdata) %in% failed_by_all), ] (28470 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 10000
seed 1
sparseOptimization TRUE
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
-- Equilibration Phase --
1000 of 10000, Atoms: 108128(A), 625(P), ChiSq: 1839920873269298576727893606400, Time: 00:01:01 / 00:30:39
2000 of 10000, Atoms: 115884(A), 588(P), ChiSq: 2292072881336368004474865713152, Time: 00:02:15 / 00:30:21
3000 of 10000, Atoms: 120147(A), 593(P), ChiSq: 2783612750416574213534292901888, Time: 00:03:29 / 00:29:30
4000 of 10000, Atoms: 123364(A), 572(P), ChiSq: 2550784517891173166148455759872, Time: 00:04:44 / 00:28:53
5000 of 10000, Atoms: 126268(A), 568(P), ChiSq: 2541503896605446561631530123264, Time: 00:06:01 / 00:28:30
6000 of 10000, Atoms: 126646(A), 563(P), ChiSq: 2905690684144169274910709907456, Time: 00:07:20 / 00:28:16
7000 of 10000, Atoms: 126687(A), 555(P), ChiSq: 2674804590947979129294038237184, Time: 00:08:38 / 00:27:57
8000 of 10000, Atoms: 126574(A), 561(P), ChiSq: 3147559868411558326579587186688, Time: 00:09:56 / 00:27:41
9000 of 10000, Atoms: 126684(A), 553(P), ChiSq: 2698749482425631185700149788672, Time: 00:11:13 / 00:27:23
10000 of 10000, Atoms: 126485(A), 553(P), ChiSq: 2957405810624188978088997027840, Time: 00:12:30 / 00:27:06
-- Sampling Phase --
1000 of 10000, Atoms: 126720(A), 556(P), ChiSq: 2646788641772774809142103638016, Time: 00:13:47 / 00:26:51
2000 of 10000, Atoms: 126701(A), 559(P), ChiSq: 2966075017676645483900815015936, Time: 00:15:05 / 00:26:40
3000 of 10000, Atoms: 126748(A), 561(P), ChiSq: 2824885175666762749901468073984, Time: 00:16:23 / 00:26:29
4000 of 10000, Atoms: 126825(A), 548(P), ChiSq: 2910313314246920713217492647936, Time: 00:17:41 / 00:26:19
5000 of 10000, Atoms: 126857(A), 542(P), ChiSq: 3040585044948408827482774437888, Time: 00:18:58 / 00:26:08
6000 of 10000, Atoms: 126800(A), 551(P), ChiSq: 3114951210047638030192943824896, Time: 00:20:16 / 00:25:59
7000 of 10000, Atoms: 126686(A), 548(P), ChiSq: 2923385429134413698483590529024, Time: 00:21:33 / 00:25:49
8000 of 10000, Atoms: 126779(A), 559(P), ChiSq: 2855560157182209446923168907264, Time: 00:22:51 / 00:25:41
9000 of 10000, Atoms: 126407(A), 551(P), ChiSq: 2890611147933206197900012421120, Time: 00:24:09 / 00:25:34
10000 of 10000, Atoms: 126802(A), 552(P), ChiSq: 2753871361865324913892758913024, Time: 00:25:26 / 00:25:26
compared to results on the same data with sparseOptimization = FALSE:
-- Equilibration Phase --
10000 of 10000, Atoms: 57241(A), 3038(P), ChiSq: 26866870, Time: 00:29:54 / 01:04:51
So the sparse sampler cannot find a proper solution? Btw, is it normal for ChiSq to increase over iterations?
from cogaps.
hey all-- a few notes regarding this issue
@rpalaganas since the run is distributed, the P atoms being 0 in the second phase is correct: at that point one matrix has been fixed and CoGAPS is learning the cognate (A) matrix
from cogaps.
Is this data more than 80% sparse?
from cogaps.
Is this data more than 80% sparse?
Slightly less, ~71% sparse
from cogaps.
so, we have technically addressed all the points raised in the issue report:
- ChiSq value was -nan: these -nans appear because the actual ChiSq values are too large to represent, as can be confirmed here
- during the equilibration phase, the P matrix was 0: this is expected, as in a distributed run one of the matrices is fixed (see comment above)
- sparseOptimization = TRUE gave 5 patterns while sparseOptimization = FALSE gave 6 patterns: the number of patterns returned can differ from the number requested in distributed mode, since the final set is a superset of the patterns matched across nSets, controlled by the maxNS parameter. We may want to set maxNS to nPatterns by default to avoid this confusion.
The unsolved problem motivated by this issue is why the ChiSq is so large for this dataset compared to a simulated dataset with similar dimensions and sparsity, as demonstrated here.
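One way to start chasing that down (my suggestion, not from the thread) is to compare the tail of the real data's value distribution against the simulated data's: ChiSq is a sum of squared residuals scaled by an uncertainty matrix (by default derived from the data, I believe roughly 10% of each value with a small floor), so a handful of extreme entries can dominate or overflow the statistic:

```r
# Hypothetical diagnostic: tail quantiles of a matrix's values. A heavy
# right tail in the real data (absent from the uniform simulated data)
# would explain an enormous or overflowing ChiSq.
tail_quantiles <- function(mat) {
  quantile(as.vector(mat), c(0.5, 0.9, 0.99, 0.999, 1))
}

# tail_quantiles(hoxdata)  # real data from the thread (name assumed)
# tail_quantiles(simdata)  # simulated data from the earlier comment
```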
from cogaps.
Interestingly, the bootstrapped version of the original data also fails:
> resamp <- sample(hoxdata, size = length(hoxdata), replace = T)
> resamp <- matrix(resamp, ncol = 380)
> params <- CogapsParams(nPatterns=5, nIterations=30000, seed=42, sparseOptimization=TRUE, distributed="genome-wide")
> params <- setDistributedParams(params, nSets=5)
setting distributed parameters - call this again if you change nPatterns
> res <- CoGAPS(resamp, params)
This is CoGAPS version 3.23.1
Running genome-wide CoGAPS on resamp (30407 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization TRUE
distributed genome-wide
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
-- Distributed CoGAPS Parameters --
nSets 5
cut 5
minNS 3
maxNS 8
Creating subsets...
set sizes (min, mean, max): (6081, 6081.4, 6083)
Running Across Subsets...
Data Model: Sparse, Normal
Sampler Type: Sequential
worker 3 is starting!
worker 4 is starting!
Loading Data...Done! (00:00:00)
worker 1 is starting!
-- Equilibration Phase --
worker 5 is starting!
worker 2 is starting!
1000 of 30000, Atoms: 24149(A), 137(P), ChiSq: nan, Time: 00:00:15 / 00:25:24
from cogaps.
Also, sparseOptimization=TRUE fails in non-distributed mode:
#sparse optimized and not distributed
params <- CogapsParams(nPatterns=5, nIterations=30000, seed=42, sparseOptimization=TRUE)
params <- setDistributedParams(params, nSets=5)
res <- CoGAPS(hoxdata, params)
This is CoGAPS version 3.22.0
Running Standard CoGAPS on hoxdata (30407 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization TRUE
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
-- Equilibration Phase --
1000 of 30000, Atoms: 86929(A), 369(P), ChiSq: nan, Time: 00:00:38 / 01:04:21
2000 of 30000, Atoms: 99716(A), 375(P), ChiSq: nan, Time: 00:01:29 / 01:07:26
3000 of 30000, Atoms: 103343(A), 386(P), ChiSq: nan, Time: 00:02:23 / 01:08:03
from cogaps.
Interestingly, sampling from a histogram does not fail:
#sample from histogram of data
hox_hist <- hist(hoxdata, breaks = 100, plot = FALSE)
hox_sim <- sample(hox_hist$mids, size = length(hoxdata),
replace = T, prob = hox_hist$density)
hox_sim <- matrix(jitter(hox_sim), ncol = 380)
params <- CogapsParams(nPatterns=5, nIterations=30000, seed=42, sparseOptimization=TRUE)
res <- CoGAPS(hox_sim, params)
This is CoGAPS version 3.22.0
Running Standard CoGAPS on hox_sim (30407 genes and 380 samples) with parameters:
-- Standard Parameters --
nPatterns 5
nIterations 30000
seed 42
sparseOptimization TRUE
-- Sparsity Parameters --
alpha 0.01
maxGibbsMass 100
Data Model: Sparse, Normal
Sampler Type: Sequential
Loading Data...Done! (00:00:00)
-- Equilibration Phase --
1000 of 30000, Atoms: 73945(A), 1873(P), ChiSq: 221560368, Time: 00:05:48 / 09:49:26
2000 of 30000, Atoms: 83256(A), 1960(P), ChiSq: 220883600, Time: 00:11:54 / 09:01:04
3000 of 30000, Atoms: 94329(A), 1951(P), ChiSq: 220712496, Time: 00:19:01 / 09:03:02
4000 of 30000, Atoms: 103541(A), 1927(P), ChiSq: 220577920, Time: 00:26:22 / 09:02:24
5000 of 30000, Atoms: 111937(A), 1922(P), ChiSq: 220456768, Time: 01:02:02 / 16:30:34
^C
from cogaps.