
Comments (9)

seunghoon001 commented on July 23, 2024

Thank you so much for your solution. We really appreciate it. Now I feel that I can move away from STATA for good! :)


lrberge commented on July 23, 2024

Hi, and thanks for using it!

Actually it's not a bug.

As you can see in the second model, t1, the t-statistics reflect the fact that the variables are collinear. The thing is that one year for each variable would have been removed had the "level of collinearity" reached a certain threshold -- and that threshold was not met only because of numerical precision issues.

In the package, where interactions are concerned, the choice has been made to let the researcher decide which value is set as the reference (a robust automatic procedure to take out references is actually not easy to implement).
The recommended way is to use the fixest function i() to create interactions. The basic syntax is i(var, f, ref) where f is a variable that will be treated as a factor. So in your example it could be (using different data to make it easily replicable):

library(fixest)
data(base_did)
base = base_did

base$x2 = rnorm(nrow(base))
base$x3 = rnorm(nrow(base))

feols(y ~ i(x1:x2, period, 5) | id + period, base)
#> OLS estimation, Dep. Var.: y
#> Observations: 1,080 
#> Fixed-effects: id: 108,  period: 10
#> Standard-errors: Clustered (id) 
#>                     Estimate Std. Error   t value   Pr(>|t|)    
#> x1 * x2:period::1  -0.147262   0.203154 -0.724878 0.46870400    
#> x1 * x2:period::2  -0.121188   0.230306 -0.526204 0.59886900    
#> x1 * x2:period::3   0.060376   0.212584  0.284012 0.77646300    
#> x1 * x2:period::4   0.286134   0.246789  1.159400 0.24657300    
#> x1 * x2:period::6   0.029323   0.251893  0.116412 0.90735100    
#> x1 * x2:period::7  -0.596675   0.124337 -4.798800 0.00000185 ***
#> x1 * x2:period::8   0.511662   0.227328  2.250800 0.02462800 *  
#> x1 * x2:period::9  -0.040499   0.224877 -0.180096 0.85711600    
#> x1 * x2:period::10  0.123815   0.237599  0.521109 0.60241200    
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> Log-likelihood: -3,237.80   Adj. R2: 0.18228 
#>                           R2-Within: 0.02447

where the 5th period is taken as a reference.

[Warning: unconventional syntax.] If you find this syntax too "heavy", i() can also be written with the var::f(ref) syntax. Once the reference is fixed, you can store it in a macro and use the macro variable instead.
Here's an example:

setFixest_fml(..year = ~period(5))
feols(y ~ x1 + x2:x3::..year + x2::..year + x3::..year, base)
#> OLS estimation, Dep. Var.: y
#> Observations: 1,080 
#> Standard-errors: Standard 
#>               Estimate Std. Error   t value  Pr(>|t|)    
#> (Intercept)   1.966200   0.150835 13.036000 < 2.2e-16 ***
#> x1            0.976243   0.050534 19.318000 < 2.2e-16 ***
#> x2:period::1  0.029526   0.445853  0.066225  0.947212    
#> x2:period::2 -0.011703   0.455886 -0.025671  0.979525    
#> x2:period::3  0.126219   0.463717  0.272189   0.78553    
#> x2:period::4  0.428151   0.480398  0.891242  0.373003    
#> x2:period::6  0.010826   0.509080  0.021266  0.983038    
#> x2:period::7 -0.338409   0.474770 -0.712783  0.476138    
#> ... 21 coefficients remaining (display them with summary() or use argument n)
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> Log-likelihood: -3,231.99   Adj. R2: 0.26569 

Hope this helps!


lrberge commented on July 23, 2024

I'll get back to you when I have the time for the second issue (of precision).


lrberge commented on July 23, 2024

So I've looked at the precision issue (and by the way: thanks for giving the data set and a replicable example!!!).

A collinearity issue

It's clearly a collinearity issue, and, if I interpret it correctly, your suggestion would then be to automatically remove more variables.

First, note that for the variables that are not in an over-identified system, the results are correct:

# t0_bis: same estimation as t0 but with default value for `fixef.tol`.
# t0_ter: same estimation as t0 but with year fixed-effects (fixef.tol = default).

etable(t0, t0_bis, t0_ter, order = c("cb", "ntr"))
#>                                              t0              t0_bis            t0_ter
#> cb:factor(year)1998:ntr_gap    -0.4389 (0.3771)    -0.4389 (0.3771)  -0.4389 (0.3771)
#> cb:factor(year)1999:ntr_gap     0.5092 (0.4555)     0.5092 (0.4554)   0.5092 (0.4554)
#> cb:factor(year)2000:ntr_gap     -0.1313 (0.331)    -0.1313 (0.3311)  -0.1313 (0.3311)
#> cb:factor(year)2001:ntr_gap    -0.1955 (0.3691)     -0.1955 (0.369)  -0.1955 (0.3689)
#> cb:factor(year)2002:ntr_gap   -0.08861 (0.3209)   -0.08861 (0.3208) -0.08861 (0.3208)
#> cb:factor(year)1998           0.04421 (0.04504)     0.2718 (1821.1)   0.1567 (1382.6)
#> cb:factor(year)1999          -0.06991 (0.06375)     0.1576 (1821.1)  0.04262 (1382.6)
#> cb:factor(year)2000          0.006705 (0.05175)     0.2343 (1821.1)   0.1192 (1382.5)
#> cb:factor(year)2001           0.02484 (0.03433)     0.2524 (1821.1)   0.1374 (1382.6)
#> cb:factor(year)2002                                 0.2276 (1821.1)   0.1125 (1382.6)
#> factor(year)1998:ntr_gap        0.4223 (0.3814)     0.3422 (2545.6)  -0.2151 (2993.8)
#> factor(year)1999:ntr_gap       -0.2985 (0.3845)    -0.3785 (2545.5)  -0.9358 (2993.9)
#> factor(year)2000:ntr_gap        0.2931 (0.3532)      0.213 (2545.7)  -0.3443 (2993.8)
#> factor(year)2001:ntr_gap        0.1527 (0.2861)    0.07258 (2545.5)  -0.4847 (2993.8)
#> factor(year)2002:ntr_gap                          -0.08008 (2545.6)  -0.6374 (2993.8)
#> factor(year)1998             -0.03024 (0.04338)  -0.03024 (0.04338)                  
#> factor(year)1999              0.06211 (0.06241)   0.06211 (0.06241)                  
#> factor(year)2000             -0.03133 (0.05028)  -0.03133 (0.05028)                  
#> factor(year)2001            -0.008621 (0.03181) -0.008621 (0.03181)                  
#> Fixed-Effects:              ------------------- ------------------- -----------------
#> bank_id                                     Yes                 Yes               Yes
#> cz90                                        Yes                 Yes               Yes
#> year                                         No                  No               Yes
#> ___________________________ ___________________ ___________________ _________________
#> Observations                             60,816              60,816            60,816
#> S.E. type: Clustered                by: bank_id         by: bank_id       by: bank_id
#> R2                                       0.2273              0.2273            0.2273
#> Within R2                               0.00183             0.00183             5e-04

So for factor(year)dddd (in t0 & t0_bis only) and factor(year)dddd:ntr_gap:cb the results are identical across estimations: same coefficients and same standard-errors.
Of course that's not the case for variables in a collinear system. And collinearity can be spotted by the extremely large standard-errors.

Back to the question

You are suggesting that my algorithm gives wrong results with the default values. Well, I would retort that you are giving it a badly identified system in the first place! :-)
Remember that when variables are removed because of collinearity, it's only for convenience. Since the system is over-identified, the results cannot be unique by construction. The fact that different software packages use the same rule of thumb to remove collinear variables doesn't make it the only valid result.
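
For concreteness, here is a small base-R sketch (made-up data, nothing to do with your estimation) of why the solution of a collinear system is just a convention: which variable ends up being dropped depends only on the order in which the columns are entered.

set.seed(1)
x1 = rnorm(50)
x2 = rnorm(50)
x3 = x1 + x2                 # x3 is collinear with x1 and x2 by construction
y  = x1 + rnorm(50)

coef(lm(y ~ x1 + x2 + x3))   # x3 is dropped (its coefficient is NA)
coef(lm(y ~ x3 + x2 + x1))   # now x1 is dropped: a different, equally valid solution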

So what to do with collinear variables? Here's my point of view:

  • if collinear variables are used as controls, just forget about them and interpret only the coefficients of interest. These collinear variables don't affect them.
  • if the coefficients of interest are among the collinear variables, then there is a problem with the design of the study: it's up to the researcher to understand why they are collinear and which variables to remove in order to have a proper interpretation.

To conclude, I am rather against automating the removal of collinear variables further. For me, only the second point that I mentioned matters, and, in my opinion, it should absolutely not be automated since it's part of a research decision.

Of course, I'm open to arguments! Let me know!

Other points

By the way, I'll implement a diff-like operator for panel data in the future (it would be useful to lighten the creation of the dep. var.).
I'll also add a user-level argument collin.tol to control the threshold for collinear variable removal (which is different from fixef.tol).


seunghoon001 commented on July 23, 2024

Thanks for your quick response! You are doing a great service to the community, and I just hope that you will find my feedback helpful.

I agree that it would be ideal for users to decide which variables to drop. The problem in this case is that we have to drop two levels to avoid the collinearity problem, and there is no easy way to specify those two levels. In my example, "t0" automatically drops two levels (year = 1997 and 2002) to avoid the collinearity problem. The non-working "t1" specification drops only one level (year = 1997), and we could not find an easy way to make it drop two specific levels. (Sorry for our ignorance.)

While I agree with your view that the user has to decide which levels to drop, I still think it is an issue that feols generates different results depending on whether a fixed effect is absorbed or included as a regressor.

Thank you again for your generous contribution to the community!


lrberge commented on July 23, 2024

I agree that if two equivalent models yield (partially) different solutions, well, that's unsettling. So I completely understand your point, but sometimes there's just no solution ;-). Here's a proof.

Setup

Assume X is a set of collinear variables, Z1 and Z2 are two sets of fixed-effects, and y is the dependent variable.

There are two estimations. In estimation a we introduce Z1 as dummy variables and Z2 are "absorbed" (i.e. y, X and Z1 are first orthogonally projected on Z2). In estimation b both Z1 and Z2 are absorbed (i.e. y and X are orthogonally projected on Z1 and Z2).

The second stages of the estimations are the OLS estimation of y_proj_a on X_proj_a + Z1_proj_a for estimation a, and y_proj_b on X_proj_b for estimation b.
Now you can reapply the Frisch-Waugh-Lovell theorem in estimation a: you project y_proj_a and X_proj_a on Z1_proj_a.

You end up with the equivalent estimation of y_dble_proj_a on X_dble_proj_a. And these two values are identical to y_proj_b and X_proj_b, up to rounding errors. Note that X_dble_proj_a and X_proj_b will never be exactly identical (i.e. up to an infinity of digits) because they are constructed using different processes.
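
To make this concrete, here is a minimal base-R sketch of the Frisch-Waugh-Lovell equivalence (simulated data, a single regressor X for simplicity, and plain lm() rather than fixest internals): the slope on X is the same whether the fixed-effects enter as dummies or are projected out first.

set.seed(1)
n  = 500
Z1 = factor(sample(letters[1:5], n, replace = TRUE))   # first set of fixed-effects
Z2 = factor(sample(LETTERS[1:4], n, replace = TRUE))   # second set of fixed-effects
X  = rnorm(n)
y  = 2 * X + as.integer(Z1) + as.integer(Z2) + rnorm(n)

# Estimation with the fixed-effects included as dummies
coef(lm(y ~ X + Z1 + Z2))["X"]

# FWL: project y and X on the dummies, then regress residual on residual
y_proj = resid(lm(y ~ Z1 + Z2))
X_proj = resid(lm(X ~ Z1 + Z2))
coef(lm(y_proj ~ X_proj))["X_proj"]
# the two slopes coincide up to numerical precision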

Now that the stage is set, back to collinearity.
Consider a QR decomposition of X_proj_b with the following rule of thumb: when the norm of the current variable used as a pivot is lower than the threshold epsilon, it is removed because of collinearity.
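
Here is a hedged sketch of that rule of thumb using base R's qr() (fixest's internal tolerance logic may of course differ): a nearly collinear column is flagged as soon as its pivot falls below the tolerance.

set.seed(1)
x1 = rnorm(100)
x2 = rnorm(100)
x3 = x1 + x2 + rnorm(100, sd = 1e-9)   # collinear up to tiny numerical noise
X  = cbind(x1, x2, x3)

qr(X, tol = 1e-7)$rank    # 2: x3 falls below the threshold and would be removed
qr(X, tol = 1e-16)$rank   # 3: with a (much too) small threshold, x3 is kept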

Proposition

For any non-infinite level of numerical precision used to compute X_dble_proj_a and X_proj_b, and for any threshold epsilon > 0, you can find a set of variables X such that the number of variables removed due to collinearity in estimation a is different from the number of variables removed in estimation b.

Consequences

This means that increasing the numerical precision fixef.tol in your case leads to the same number of variables being removed. But even at maximum precision, there exist data sets for which the number of variables removed will differ.
Similarly, in your case, taking a higher collinearity threshold (epsilon) would lead to the same variables being removed. But there exist other data sets for which the number of variables removed will differ.

Bottom line: there's no general solution. For sure, increasing the level of precision fixef.tol can help because it would reduce the number of occurrences. But in my opinion it's too big a price to pay for what I consider a mild problem.
Stated differently: setting fixef.tol close to 0 would slow down all estimations, while the benefit would accrue only to users estimating mis-specified models. That's not a good trade-off imo!

Ad hoc solutions

  1. Only one level is actually dropped (because the lags in the dep. var. cause 1997 to disappear automatically).
  2. In any case, you can use i(var, year, drop = c(1997, 2002)) to drop the years 1997 and 2002 from the interaction.

Last words. That's a huge effort indeed, thanks for the kind words, that's appreciated! :-)


lrberge commented on July 23, 2024

So I introduced in the dev version the new argument collin.tol that controls the threshold for collinear variable removal.

Following my original remark, the problem is with collinearity and not with the precision with which the FEs are computed.

Here's an illustration:

# We start with the same data as in the original post

library(data.table)
df_panel = panel(as.data.table(df), panel.id = c('bank_id_cz', 'year'))
df_panel[, diff_depsum_cz := log(depsum_cz) - log(l(depsum_cz))]
setFixest_notes(FALSE)
 
# Estimations without changing the default for fixef.tol
t0_bis = feols(diff_depsum_cz ~ i(I(cb*ntr_gap), year) + i(ntr_gap, year) + i(cb, year) + factor(year) | bank_id + cz90, data = df_panel)
t1_bis = feols(diff_depsum_cz ~ i(I(cb*ntr_gap), year) + i(ntr_gap, year) + i(cb, year) | bank_id + cz90 + year, data = df_panel)

#
# Looking at the effect of collin.tol
#

# Now only changing the value of the new argument collin.tol 
t0_ter = update(t0_bis, collin.tol = 1e-4)
t1_ter = update(t1_bis, collin.tol = 1e-4)
 
etable(t0_bis, t1_bis, t0_ter, t1_ter)
#>                                       t0_bis            t1_bis             t0_ter             t1_ter
#> I(cb*ntr_gap):year::1998    -0.4389 (0.3771)  -0.4389 (0.3771)   -0.4389 (0.3772)   -0.4389 (0.3771)
#> I(cb*ntr_gap):year::1999     0.5092 (0.4554)   0.5092 (0.4554)    0.5092 (0.4554)    0.5092 (0.4554)
#> I(cb*ntr_gap):year::2000    -0.1313 (0.3311)  -0.1313 (0.3311)   -0.1313 (0.3311)   -0.1313 (0.3311)
#> I(cb*ntr_gap):year::2001     -0.1955 (0.369)  -0.1955 (0.3689)    -0.1955 (0.369)    -0.1955 (0.369)
#> I(cb*ntr_gap):year::2002   -0.08861 (0.3208) -0.08861 (0.3208)  -0.08861 (0.3209)  -0.08861 (0.3209)
#> ntr_gap:year::1998           0.3422 (2545.6)  -0.2151 (2993.8)    0.4223 (0.3813)    0.4223 (0.3813)
#> ntr_gap:year::1999          -0.3785 (2545.5)  -0.9358 (2993.9)   -0.2985 (0.3844)   -0.2985 (0.3844)
#> ntr_gap:year::2000            0.213 (2545.7)  -0.3443 (2993.8)    0.2931 (0.3533)    0.2931 (0.3533)
#> ntr_gap:year::2001          0.07258 (2545.5)  -0.4847 (2993.8)    0.1527 (0.2861)    0.1527 (0.2861)
#> ntr_gap:year::2002         -0.08008 (2545.6)  -0.6374 (2993.8)                                      
#> cb:year::1998                0.2718 (1821.1)   0.1567 (1382.6)  0.04421 (0.04503)  0.04421 (0.04503)
#> cb:year::1999                0.1576 (1821.1)  0.04262 (1382.6) -0.06991 (0.06372) -0.06991 (0.06373)
#> cb:year::2000                0.2343 (1821.1)   0.1192 (1382.6) 0.006705 (0.05176) 0.006705 (0.05176)
#> cb:year::2001                0.2524 (1821.1)   0.1374 (1382.6)  0.02484 (0.03432)  0.02484 (0.03432)
#> cb:year::2002                0.2276 (1821.1)   0.1125 (1382.6)                                      
#> factor(year)1998          -0.03024 (0.04338)                   -0.03024 (0.04337)                   
#> factor(year)1999           0.06211 (0.06241)                    0.06211 (0.06239)                   
#> factor(year)2000          -0.03133 (0.05028)                   -0.03133 (0.05029)                   
#> factor(year)2001         -0.008621 (0.03181)                   -0.008621 (0.0318)                   
#> Fixed-Effects:           ------------------- ----------------- ------------------ ------------------
#> bank_id                                 Yes               Yes                Yes                 Yes
#> cz90                                    Yes               Yes                Yes                 Yes
#> year                                     No               Yes                 No                 Yes
#> ________________________ ___________________ _________________ __________________ __________________
#> Observations                          60,816            60,816             60,816             60,816
#> S.E. type: Clustered             by: bank_id       by: bank_id        by: bank_id        by: bank_id
#> R2                                    0.2273            0.2273             0.2273             0.2273
#> Within R2                            0.00183             5e-04            0.00183              5e-04

#
# And now looking at the effect of fixef.tol
#

t0_quar = update(t0_ter, fixef.tol = 1e-11)
t1_quar = update(t1_ter, fixef.tol = 1e-11)

etable(t0_ter, t1_ter, t0_quar, t1_quar)
#>                                      t0_ter             t1_ter             t0_quar            t1_quar
#> I(cb*ntr_gap):year::1998   -0.4389 (0.3772)   -0.4389 (0.3771)    -0.4389 (0.3771)   -0.4389 (0.3771)
#> I(cb*ntr_gap):year::1999    0.5092 (0.4554)    0.5092 (0.4554)     0.5092 (0.4555)    0.5092 (0.4555)
#> I(cb*ntr_gap):year::2000   -0.1313 (0.3311)   -0.1313 (0.3311)     -0.1313 (0.331)    -0.1313 (0.331)
#> I(cb*ntr_gap):year::2001    -0.1955 (0.369)    -0.1955 (0.369)    -0.1955 (0.3691)   -0.1955 (0.3691)
#> I(cb*ntr_gap):year::2002  -0.08861 (0.3209)  -0.08861 (0.3209)   -0.08861 (0.3209)  -0.08861 (0.3209)
#> ntr_gap:year::1998          0.4223 (0.3813)    0.4223 (0.3813)     0.4223 (0.3814)    0.4223 (0.3814)
#> ntr_gap:year::1999         -0.2985 (0.3844)   -0.2985 (0.3844)    -0.2985 (0.3845)   -0.2985 (0.3845)
#> ntr_gap:year::2000          0.2931 (0.3533)    0.2931 (0.3533)     0.2931 (0.3532)    0.2931 (0.3532)
#> ntr_gap:year::2001          0.1527 (0.2861)    0.1527 (0.2861)     0.1527 (0.2861)    0.1527 (0.2861)
#> cb:year::1998             0.04421 (0.04503)  0.04421 (0.04503)   0.04421 (0.04504)  0.04421 (0.04504)
#> cb:year::1999            -0.06991 (0.06372) -0.06991 (0.06373)  -0.06991 (0.06375) -0.06991 (0.06375)
#> cb:year::2000            0.006705 (0.05176) 0.006705 (0.05176)  0.006705 (0.05175) 0.006705 (0.05175)
#> cb:year::2001             0.02484 (0.03432)  0.02484 (0.03432)   0.02484 (0.03433)  0.02484 (0.03433)
#> factor(year)1998         -0.03024 (0.04337)                     -0.03024 (0.04338)                   
#> factor(year)1999          0.06211 (0.06239)                      0.06211 (0.06241)                   
#> factor(year)2000         -0.03133 (0.05029)                     -0.03133 (0.05028)                   
#> factor(year)2001         -0.008621 (0.0318)                    -0.008621 (0.03181)                   
#> Fixed-Effects:           ------------------ ------------------ ------------------- ------------------
#> bank_id                                Yes                Yes                 Yes                 Yes
#> cz90                                   Yes                Yes                 Yes                 Yes
#> year                                    No                Yes                  No                 Yes
#> ________________________ __________________ __________________ ___________________ __________________
#> Observations                         60,816             60,816              60,816             60,816
#> S.E. type: Clustered            by: bank_id        by: bank_id         by: bank_id        by: bank_id
#> R2                                   0.2273             0.2273              0.2273             0.2273
#> Within R2                           0.00183              5e-04             0.00183              5e-04

So as you can see: the coefs/standard-errors of the non-collinear variables are never modified across all estimations.
By giving a higher value to collin.tol, you obtain the results you wanted, without modifying the value of fixef.tol.
Increasing the precision by setting fixef.tol = 1e-11 does not modify the coefs/SEs of any variable here. In your original estimation it did modify the coefs/SEs of the variables in a collinear system, but only through the choice of which collinear variables to remove.

On interacting interactions

There was a bug when one wanted to interact an interaction: i.e. i(cb:ntr_gap, year) was problematic. The workaround is to use I(cb*ntr_gap) instead. I will fix it properly in the future (to avoid needing I()), but for now I've added a very precise error message which includes the workaround.

Thanks again for your report, the argument collin.tol seems to me a useful addition.


lrberge commented on July 23, 2024

Quick update. I've sent version 0.7.0 to CRAN (not yet accepted).

I implemented two things that could be of interest to you:

  • the diff operator d()
  • you can create variables as factors with i()

Here's an example:

# Original estimation
t0 = feols(log(depsum_cz) - log(l(depsum_cz)) ~ i(cb*ntr_gap, year) + i(ntr_gap, year) + i(cb, year) + factor(year) | bank_id + cz90, data = df_panel, collin.tol = 1e-4)

# New estimation where:
# - d() is used to create the dep.var.
# - i(year) is used in place of factor(year)
t1 = feols(d(log(depsum_cz)) ~ i(cb*ntr_gap, year) + i(ntr_gap, year) + i(cb, year) + i(year) | bank_id + cz90, data = df_panel, collin.tol = 1e-4)

# Results are the same
etable(t0, t1)
#>                                                       t0                  t1
#> Dependent Var.:       log(depsum_cz)-log(l(depsum_cz,1)) d(log(depsum_cz),1)
#>                                                                             
#> cb*ntr_gap:year::1998                   -0.4389 (0.3772)    -0.4389 (0.3772)
#> cb*ntr_gap:year::1999                    0.5092 (0.4554)     0.5092 (0.4554)
#> cb*ntr_gap:year::2000                   -0.1313 (0.3311)    -0.1313 (0.3311)
#> cb*ntr_gap:year::2001                    -0.1955 (0.369)     -0.1955 (0.369)
#> cb*ntr_gap:year::2002                  -0.08861 (0.3209)   -0.08861 (0.3209)
#> ntr_gap:year::1998                       0.4223 (0.3813)     0.4223 (0.3813)
#> ntr_gap:year::1999                      -0.2985 (0.3844)    -0.2985 (0.3844)
#> ntr_gap:year::2000                       0.2931 (0.3533)     0.2931 (0.3533)
#> ntr_gap:year::2001                       0.1527 (0.2861)     0.1527 (0.2861)
#> cb:year::1998                          0.04421 (0.04503)   0.04421 (0.04503)
#> cb:year::1999                         -0.06991 (0.06372)  -0.06991 (0.06372)
#> cb:year::2000                         0.006705 (0.05176)  0.006705 (0.05176)
#> cb:year::2001                          0.02484 (0.03432)   0.02484 (0.03432)
#> factor(year)1998                      -0.03024 (0.04337)                    
#> factor(year)1999                       0.06211 (0.06239)                    
#> factor(year)2000                      -0.03133 (0.05029)                    
#> factor(year)2001                      -0.008621 (0.0318)                    
#> year::1998                                                -0.03024 (0.04337)
#> year::1999                                                 0.06211 (0.06239)
#> year::2000                                                -0.03133 (0.05029)
#> year::2001                                                -0.008621 (0.0318)
#> Fixed-Effects:           ------------------------------- -------------------
#> bank_id                                             Yes                  Yes
#> cz90                                                Yes                  Yes
#> _____________________ __________________________________ ___________________
#> S.E. type: Clustered                         by: bank_id         by: bank_id
#> Observations                                      60,816              60,816
#> R2                                                0.2273              0.2273
#> Within R2                                        0.00183             0.00183

Note that using i() is substantially faster than using factor().
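
If you want to check that claim on your machine, a rough comparison on the built-in base_did data could look like the following (timings depend on the machine and the fixest version):

library(fixest)
data(base_did)

# Timing i() versus factor() for the period dummies
system.time(replicate(50, feols(y ~ x1 + i(period) | id, base_did)))
system.time(replicate(50, feols(y ~ x1 + factor(period) | id, base_did)))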

Thanks again for your report!


seunghoon001 commented on July 23, 2024

Wow, this is super nice! I love this update. Thank you so much again for your great contribution to the community!

