
causal_impact's Introduction

Python Causal Impact


Causal inference using Bayesian structural time-series models. This package aims to provide a Python equivalent of the R CausalImpact package by Google. Please refer to the package itself, its documentation, or the related publication (Brodersen et al., Annals of Applied Statistics, 2015) for more information.

Setup

Simply install from pip:

pip install causal-impact

Example

Suppose we have a DataFrame data recording daily measures for three different markets y, x1 and x2, for t = 0..365. The y time series in data is the one we will be modeling, while the other columns (x1 and x2 here) will be used as a set of control time series.

>>> data
      y       x1      x2
  0   1735.01 1014.44 1005.87
  1   1709.54 1012.63 1008.18
  2   1772.95 1039.04 1024.21
...   ...     ...     ...

At t = date_inter = 280, a marketing campaign (the intervention) is run for market y. We want to understand the impact of that campaign on our measure.
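To try the example end to end, data in this shape can be simulated; the numbers below are made up purely for illustration:

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n, date_inter = 366, 280                   # t = 0..365, intervention at t = 280
x1 = 1000 + rng.normal(0, 5, n).cumsum()   # control market 1
x2 = 1000 + rng.normal(0, 5, n).cumsum()   # control market 2
y = 1.7 * x1 + rng.normal(0, 10, n)        # modeled market
y[date_inter:] += 50                       # simulated campaign effect
data = pd.DataFrame({'y': y, 'x1': x1, 'x2': x2})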

from causal_impact import CausalImpact

ci = CausalImpact(data, date_inter, n_seasons=7)
ci.run(max_iter=1000)
ci.plot()

After fitting the model and estimating what the y time series would have been without any intervention, this will typically produce the following plots: [impact plot]

If you need access to the data behind the plots for further analysis, you can use the ci.result attribute (a pandas.DataFrame). Alternatively, you can call

result = ci.run(return_df=True)

and skip the plotting step.
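For instance, with standard pandas calls (the exact columns of ci.result depend on the package version):

result = ci.result                         # pandas.DataFrame behind the plots
print(result.head())
result.to_csv('causal_impact_result.csv')  # export for further analysis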

Issues and improvements

This package is still under development. Feel free to contribute through GitHub by sending pull requests or reporting issues.


causal_impact's Issues

Exception raised in plot. Row-Labels not set

I followed your doc and ran into this

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-68-af50b5359315> in <module>()
      3 ci = CausalImpact(causal2, date_interv)
      4 ci.run()
----> 5 ci.plot()

python3.5/site-packages/causal_impact/causal_impact.py in plot(self)
    110         pred = self.fit.get_prediction()
    111         pre_model = pred.predicted_mean
--> 112         pre_lower = pred.conf_int()['lower y'].values
    113         pre_upper = pred.conf_int()['upper y'].values
    114         pre_model[:min_t] = np.nan

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices

As far as I could understand by looking inside the statsmodels code, the error is due to pred.conf_int() returning an array instead of a DataFrame, and that happens because pred.row_labels is None.

Reproducible code:

import pandas
from causal_impact import CausalImpact

df = pandas.DataFrame(
    {'y': [150, 200, 225, 150, 175],
     'x1': [150, 249, 150, 125, 325],
     'x2': [275, 125, 249, 275, 250]}
)
date_interv = 2
ci = CausalImpact(df, date_interv)
ci.run()
ci.plot()
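Until this is fixed in the package, a possible workaround (my own sketch, not a confirmed fix) is to handle both return types of conf_int(), since statsmodels returns a plain (n, 2) ndarray of [lower, upper] bounds when row_labels is None:

import pandas as pd

pred_ci = pred.conf_int()
if isinstance(pred_ci, pd.DataFrame):
    pre_lower = pred_ci['lower y'].values
    pre_upper = pred_ci['upper y'].values
else:
    # plain ndarray when pred.row_labels is None
    pre_lower = pred_ci[:, 0]
    pre_upper = pred_ci[:, 1]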

Summary data

Hello Thomas,

Is there a way to get the summary data (as with the R package)?

Thank you!

Python 3 support?

I was able to get this working in Python 3 by changing .iteritems() to .items()
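For reference, assuming these were dict calls, the incompatibility is the Python 3 removal of iteritems():

d = {'a': 1, 'b': 2}
# Python 2 only:
#   for k, v in d.iteritems(): ...
# Python 2 and 3:
for k, v in d.items():
    print(k, v)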

Confidence intervals differ from R CausalImpact package

Hi Thomas, thanks for working on this! I just wanted to flag that your package doesn't produce results comparable to the R CausalImpact package yet, for the same reason that @bange83 identified in jamalsenouci/causalimpact#7

To reproduce:

from causal_impact.causal_impact import CausalImpact
import pandas as pd
import sys
from io import StringIO

DATA = """
t,y,x1,x2\n
2016-02-20 22:41:20,110.0,134.0,128.0\n
2016-02-20 22:41:30,125.0,134.0,128.0\n
2016-02-20 22:41:40,123.0,134.0,128.0\n
2016-02-20 22:41:50,128.0,134.0,128.0\n
2016-02-20 22:42:00,114.0,134.0,128.0\n
2016-02-20 22:42:10,125.0,133.0,128.0\n
2016-02-20 22:42:20,119.0,133.0,128.0\n
2016-02-20 22:42:30,121.0,133.0,128.0\n
2016-02-20 22:42:40,139.0,133.0,128.0\n
2016-02-20 22:42:50,107.0,133.0,128.0\n
2016-02-20 22:43:00,115.0,132.0,128.0\n
2016-02-20 22:43:10,91.0,132.0,128.0\n
2016-02-20 22:43:20,107.0,132.0,128.0\n
2016-02-20 22:43:30,124.0,132.0,128.0\n
2016-02-20 22:43:40,116.0,131.0,128.0\n
2016-02-20 22:43:50,110.0,131.0,128.0\n
2016-02-20 22:44:00,100.0,131.0,128.0\n
2016-02-20 22:44:10,110.0,131.0,128.0\n
2016-02-20 22:44:20,113.0,129.0,128.0\n
2016-02-20 22:44:30,103.0,129.0,128.0\n
2016-02-20 22:44:40,117.0,129.0,128.0\n
2016-02-20 22:44:50,125.0,129.0,128.0\n
2016-02-20 22:45:00,115.0,129.0,128.0\n
2016-02-20 22:45:10,114.0,128.0,128.0\n
2016-02-20 22:45:20,138.0,128.0,128.0\n
2016-02-20 22:45:30,117.0,128.0,128.0\n
2016-02-20 22:45:40,104.0,128.0,128.0\n
2016-02-20 22:45:50,123.0,128.0,128.0\n
2016-02-20 22:46:00,122.0,128.0,128.0\n
2016-02-20 22:46:10,150.0,128.0,128.0\n
2016-02-20 22:46:20,127.0,128.0,128.0\n
2016-02-20 22:46:30,139.0,128.0,128.0\n
2016-02-20 22:46:40,139.0,127.0,127.0\n
2016-02-20 22:46:50,109.0,127.0,127.0\n
2016-02-20 22:47:00,107.0,127.0,127.0\n
2016-02-20 22:47:10,94.0,127.0,127.0\n
2016-02-20 22:47:20,112.0,127.0,127.0\n
2016-02-20 22:47:30,107.0,127.0,127.0\n
2016-02-20 22:47:40,126.0,127.0,127.0\n
2016-02-20 22:47:50,114.0,127.0,127.0\n
2016-02-20 22:48:00,129.0,127.0,127.0\n
2016-02-20 22:48:10,113.0,126.0,127.0\n
2016-02-20 22:48:20,114.0,126.0,127.0\n
2016-02-20 22:48:30,116.0,126.0,127.0\n
2016-02-20 22:48:40,110.0,125.0,126.0\n
2016-02-20 22:48:50,131.0,125.0,126.0\n
2016-02-20 22:49:00,109.0,125.0,126.0\n
2016-02-20 22:49:10,114.0,125.0,127.0\n
2016-02-20 22:49:20,116.0,125.0,126.0\n
2016-02-20 22:49:30,113.0,124.0,125.0\n
2016-02-20 22:49:40,108.0,124.0,125.0\n
2016-02-20 22:49:50,120.0,124.0,125.0\n
2016-02-20 22:50:00,106.0,123.0,125.0\n
2016-02-20 22:50:10,123.0,123.0,125.0\n
2016-02-20 22:50:20,123.0,123.0,124.0\n
2016-02-20 22:50:30,135.0,123.0,124.0\n
2016-02-20 22:50:40,127.0,123.0,124.0\n
2016-02-20 22:50:50,140.0,123.0,123.0\n
2016-02-20 22:51:00,139.0,123.0,123.0\n
2016-02-20 22:51:10,137.0,123.0,123.0\n
2016-02-20 22:51:20,123.0,123.0,123.0\n
2016-02-20 22:51:30,160.0,122.0,123.0\n
2016-02-20 22:51:40,173.0,122.0,123.0\n
2016-02-20 22:51:50,236.0,122.0,123.0\n
2016-02-20 22:52:00,233.0,122.0,123.0\n
2016-02-20 22:52:10,193.0,122.0,123.0\n
2016-02-20 22:52:20,169.0,122.0,123.0\n
2016-02-20 22:52:30,167.0,122.0,123.0\n
2016-02-20 22:52:40,172.0,121.0,123.0\n
2016-02-20 22:52:50,148.0,121.0,123.0\n
2016-02-20 22:53:00,125.0,121.0,123.0\n
2016-02-20 22:53:10,132.0,121.0,123.0\n
2016-02-20 22:53:20,165.0,121.0,123.0\n
2016-02-20 22:53:30,154.0,120.0,123.0\n
2016-02-20 22:53:40,158.0,120.0,123.0\n
2016-02-20 22:53:50,135.0,120.0,123.0\n
2016-02-20 22:54:00,145.0,120.0,123.0\n
2016-02-20 22:54:10,163.0,119.0,122.0\n
2016-02-20 22:54:20,146.0,119.0,122.0\n
2016-02-20 22:54:30,120.0,119.0,121.0\n
2016-02-20 22:54:40,149.0,118.0,121.0\n
2016-02-20 22:54:50,140.0,118.0,121.0\n
2016-02-20 22:55:00,150.0,117.0,121.0\n
2016-02-20 22:55:10,133.0,117.0,120.0\n
2016-02-20 22:55:20,143.0,117.0,120.0\n
2016-02-20 22:55:30,145.0,117.0,120.0\n
2016-02-20 22:55:40,145.0,117.0,120.0\n
2016-02-20 22:55:50,176.0,117.0,120.0\n
2016-02-20 22:56:00,134.0,117.0,120.0\n
2016-02-20 22:56:10,147.0,117.0,120.0\n
2016-02-20 22:56:20,131.0,117.0,120.0"""

df = pd.read_csv(StringIO(DATA))
df["t"] = pd.to_datetime(df["t"])
df.index = df["t"]
del df["t"]

ci = CausalImpact(df, pd.to_datetime('2016-02-20 22:51:20'))
ci.run()
ci.plot()

[plot: result from this Python package]

Compare to the R CausalImpact result (see referenced issue for code):
[plot: result from the R CausalImpact package]

Result of the Python package is different from the R package result

Dear tcassou,

Thanks for translating the R package into this nice Python package. I ran a random dataset of 71 x 2 values through both the Python package and the R package, but the outcomes of the predictions differ. Have you ever encountered this before, and do you have any idea what might be the reason for this behaviour?

Thanks in advance.

[plot: Python package result]
[plot: R package result]

Different confidence intervals for the cumulative effect

Hi, I used this library for an analysis and compared the results with the original R library.
I found that the confidence intervals (especially for the cumulative sum) differ strongly from the original library.
Could you please explain why?

ValueError: zero-size array to reduction operation maximum which has no identity

Error with simple dataset:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 from causal_impact import CausalImpact
      2 ci = CausalImpact(test, 120, n_seasons=7)
----> 3 ci.run(max_iter=1000)
      4 ci.plot()

~/anaconda3/envs/python3/lib/python3.6/site-packages/causal_impact/causal_impact.py in run(self, max_iter, return_df)
     69             exog=self.data.loc[:self._inter_index - 1, self._reg_cols()].values,
     70             level='local linear trend',
---> 71             seasonal=self.n_seasons,
     72         )
     73         self._fit = self._model.fit(maxiter=max_iter)

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/tsa/statespace/structural.py in __init__(self, endog, level, trend, seasonal, freq_seasonal, cycle, autoregressive, exog, irregular, stochastic_level, stochastic_trend, stochastic_seasonal, stochastic_freq_seasonal, stochastic_cycle, damped_cycle, cycle_period_bounds, mle_regression, use_exact_diffuse, **kwargs)
    571         # Setup the representation
    572         super(UnobservedComponents, self).__init__(
--> 573             endog, k_states, k_posdef=k_posdef, exog=exog, **kwargs
    574         )
    575         self.setup()

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/tsa/statespace/mlemodel.py in __init__(self, endog, k_states, exog, dates, freq, **kwargs)
    136         super(MLEModel, self).__init__(endog=endog, exog=exog,
    137                                        dates=dates, freq=freq,
--> 138                                        missing='none')
    139
    140         # Store kwargs to recreate model

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/tsa/base/tsa_model.py in __init__(self, endog, exog, dates, freq, missing, **kwargs)
     45                  missing='none', **kwargs):
     46         super(TimeSeriesModel, self).__init__(endog, exog, missing=missing,
---> 47                                               **kwargs)
     48
     49         # Date handling in indexes

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/base/model.py in __init__(self, endog, exog, **kwargs)
    234
    235     def __init__(self, endog, exog=None, **kwargs):
--> 236         super(LikelihoodModel, self).__init__(endog, exog, **kwargs)
    237         self.initialize()
    238

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/base/model.py in __init__(self, endog, exog, **kwargs)
     75         hasconst = kwargs.pop('hasconst', None)
     76         self.data = self._handle_data(endog, exog, missing, hasconst,
---> 77                                       **kwargs)
     78         self.k_constant = self.data.k_constant
     79         self.exog = self.data.exog

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/base/model.py in _handle_data(self, endog, exog, missing, hasconst, **kwargs)
     98
     99     def _handle_data(self, endog, exog, missing, hasconst, **kwargs):
--> 100         data = handle_data(endog, exog, missing, hasconst, **kwargs)
    101         # kwargs arrays could have changed, easier to just attach here
    102         for key in kwargs:

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/base/data.py in handle_data(endog, exog, missing, hasconst, **kwargs)
    670     klass = handle_data_class_factory(endog, exog)
    671     return klass(endog, exog=exog, missing=missing, hasconst=hasconst,
--> 672                  **kwargs)

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/base/data.py in __init__(self, endog, exog, missing, hasconst, **kwargs)
     85         self.const_idx = None
     86         self.k_constant = 0
---> 87         self._handle_constant(hasconst)
     88         self._check_integrity()
     89         self._cache = {}

~/anaconda3/envs/python3/lib/python3.6/site-packages/statsmodels/base/data.py in _handle_constant(self, hasconst)
    175             (np.ones(self.exog.shape[0]), self.exog))
    176         rank_augm = np.linalg.matrix_rank(augmented_exog)
--> 177         rank_orig = np.linalg.matrix_rank(self.exog)
    178         self.k_constant = int(rank_orig == rank_augm)
    179         self.const_idx = None

<__array_function__ internals> in matrix_rank(*args, **kwargs)

~/anaconda3/envs/python3/lib/python3.6/site-packages/numpy/linalg/linalg.py in matrix_rank(M, tol, hermitian)
   1902     S = svd(M, compute_uv=False, hermitian=hermitian)
   1903     if tol is None:
-> 1904         tol = S.max(axis=-1, keepdims=True) * max(M.shape[-2:]) * finfo(S.dtype).eps
   1905     else:
   1906         tol = asarray(tol)[..., newaxis]

~/anaconda3/envs/python3/lib/python3.6/site-packages/numpy/core/_methods.py in _amax(a, axis, out, keepdims, initial, where)
     37 def _amax(a, axis=None, out=None, keepdims=False,
     38           initial=_NoValue, where=True):
---> 39     return umr_maximum(a, axis, None, out, keepdims, initial, where)
     40
     41 def _amin(a, axis=None, out=None, keepdims=False,

ValueError: zero-size array to reduction operation maximum which has no identity
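One thing worth checking (a guess from the traceback, not a confirmed diagnosis): run() builds the exogenous regressors from the rows before the intervention index, so if that slice comes out empty, statsmodels receives a zero-size array. A quick sanity check before calling run():

# Hypothetical check: the pre-intervention slice must be non-empty and the
# intervention index (here 120) must match the DataFrame's index type.
assert 120 in test.index, "intervention index not found in the data index"
assert len(test.loc[:120 - 1]) > 0, "no data before the intervention"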

"unhashable type: 'slice'" error when running ci.plot()

I just downloaded Causal Impact from GitHub and was attempting a test run with the data and code below on Python 3.6.3.

I'm getting an "unhashable type: 'slice'" error when attempting the ci.plot() command, though ci.plot_components() works without issue.

The error seems to come from line 118 of causal_impact.py, in plot(self).

Not sure if anyone else has run into this and/or has a workaround, but figured I'd post since I haven't seen much documentation on this online.

========Code============
Source: #5. The script and data are identical to the reproduction code in the confidence-interval issue above.

using datetime indexes

As a user of causal-impact who is unfamiliar with reading code as documentation: is there a way to demonstrate how to run the library using a pandas DataFrame with a DatetimeIndex?
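Borrowing from the reproduction scripts in the issues above, here is a minimal sketch (synthetic data; whether every pandas index type is supported is a question for the maintainer):

import numpy as np
import pandas as pd
from causal_impact import CausalImpact

rng = np.random.default_rng(0)
idx = pd.date_range("2018-01-01", periods=100, freq="D")
x1 = 100 + rng.normal(0, 1, 100).cumsum()  # control series
y = 1.5 * x1 + rng.normal(0, 1, 100)       # modeled series
y[70:] += 10                               # simulated intervention effect
df = pd.DataFrame({"y": y, "x1": x1}, index=idx)

# The intervention date is passed as a Timestamp matching the index.
ci = CausalImpact(df, pd.Timestamp("2018-03-12"), n_seasons=7)
ci.run()
ci.plot()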

Feature Request: small multiples option for top row

Hi @tcassou. Thanks for taking the time to port Causal Impact over to Python. One aspect I like is that, unlike the R version out of the box, you are plotting all of the time series. It got me thinking that for time series whose lines sit on top of each other a lot, an option to plot each series in its own small chart would let the trends be compared side by side (a sketch follows below). I'd like to submit a pull request for this functionality.
[example image]
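A minimal small-multiples sketch with matplotlib (my own illustration, not the package's API):

import matplotlib.pyplot as plt
import pandas as pd

def plot_small_multiples(df: pd.DataFrame):
    """Plot each column of df on its own axis, sharing the x axis."""
    fig, axes = plt.subplots(len(df.columns), 1, sharex=True, squeeze=False,
                             figsize=(8, 2 * len(df.columns)))
    for ax, col in zip(axes[:, 0], df.columns):
        ax.plot(df.index, df[col])
        ax.set_ylabel(col)
    plt.tight_layout()
    plt.show()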
