aewallin / allantools
Allan deviation and related time & frequency statistics library in Python
License: GNU Lesser General Public License v3.0
test_gradev.py:20: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use array.size > 0
to check that an array is not empty.
if i_gap[0]:
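A minimal sketch of the suggested fix (the data and variable names are illustrative): test the index array's size instead of its truth value:

```python
import numpy as np

data = np.array([1.0, np.nan, 3.0])       # illustrative gapped data
i_gap = np.where(np.isnan(data))          # tuple of index arrays
# `if i_gap[0]:` is ambiguous for a numpy array; test the size instead
if i_gap[0].size > 0:
    first_gap = i_gap[0][0]
```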
Is it possible to add __version__
to the package?
tox seems to be a solution for testing on multiple python versions:
https://tox.readthedocs.org/en/latest/
at least Ubuntu 16.04 LTS looks like it is based on Python 2.7, and will probably stay that way until it is replaced in 2018.
On the other hand many people are already on python 3.
So automated tox testing + documentation on how to do it would be a welcome enhancement.
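A minimal tox.ini sketch for this (the environment list and the use of pytest are assumptions, not the project's current setup):

```ini
[tox]
envlist = py27, py34, py35

[testenv]
deps =
    numpy
    scipy
    pytest
commands = pytest tests/
```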
We now have a dependency on Cython in setup.py
Fewer dependencies and less complexity are better, so are there any published benchmarks showing improved performance with Cython?
If no significant speedup (2x?) can be measured, I suggest removing Cython.
tau_generator() needs a fix so that it can handle all the possible inputs.
Currently I get:
if not np.any(taus): # empty or no tau-list supplied
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 1840, in any
return arr.any(axis=axis, out=out)
File "/usr/lib/python2.7/dist-packages/numpy/core/_methods.py", line 33, in _any
keepdims=keepdims)
TypeError: cannot perform reduce with flexible type
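The root cause is that np.any() cannot reduce over a string array (e.g. when taus="all" or taus=["octave"] is supplied). A defensive sketch, with the helper name and exact semantics as assumptions:

```python
import numpy as np

def normalize_taus(taus):
    """Sketch of defensive input handling for tau_generator()."""
    if taus is None:
        return None                      # let the caller pick a default
    if isinstance(taus, str):
        return taus                      # keyword: "all", "octave", "decade"
    taus = np.asarray(taus)
    if taus.dtype.kind in "US":          # string array, e.g. ["octave"]
        return str(taus.ravel()[0])
    # np.any() on a string array raises "cannot perform reduce with
    # flexible type", so only now is it safe to test for emptiness
    if not np.any(taus):
        return None                      # empty or no tau-list supplied
    return taus.astype(float)
```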
If the user asks for a deviation calculation, but the dataset is so short that we have only N=1 in the calculation of the deviation, reject it.
version 2018.03
issue: an error propagates from tau_generator() to the taus_valid1 comparison when using "octave" for taus; the call is error-free when passing "decade" as taus.
log:
File "GT9000show.py", line 68, in dev
self.devs[devtype].compute(devtype)
File "D:\ffrank\Programmes\WinPython3.6.5.0\python-3.6.5.amd64\lib\site-packages\allantools\dataset.py", line 143, in compute
data_type=self.inp["data_type"], taus=self.inp["taus"])
File "D:\ffrank\Programmes\WinPython3.6.5.0\python-3.6.5.amd64\lib\site-packages\allantools\allantools.py", line 146, in tdev
(taus, md, mde, ns) = mdev(phase, rate=rate, taus=taus)
File "D:\ffrank\Programmes\WinPython3.6.5.0\python-3.6.5.amd64\lib\site-packages\allantools\allantools.py", line 197, in mdev
(phase, ms, taus_used) = tau_generator(phase, rate, taus=taus)
File "D:\ffrank\Programmes\WinPython3.6.5.0\python-3.6.5.amd64\lib\site-packages\allantools\allantools.py", line 1386, in tau_generator
taus_valid1 = taus < (1 / float(rate)) * float(len(data))
TypeError: '<' not supported between instances of 'numpy.ndarray' and 'float'
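Here a string keyword (e.g. taus="octave") appears to survive until the numeric comparison on line 1386. A sketch of a guard that fails with a clearer message (the function name and its placement are assumptions for illustration):

```python
import numpy as np

def taus_within_data(taus, rate, n):
    # guard the numeric comparison in tau_generator() so a string dtype
    # (e.g. taus="octave" that slipped through) fails loudly instead of
    # raising the opaque "'<' not supported" TypeError
    taus = np.asarray(taus)
    if taus.dtype.kind not in "fiu":
        raise ValueError("taus must be numeric here; keyword taus such as "
                         "'octave' must be expanded before this point")
    return taus[taus < (1.0 / float(rate)) * n]
```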
Hi
When calculating MTIE for a data array 436365 elements long, I got an error saying:
Traceback (most recent call last):
File "C:/Work/Projects/TimeErrorSimulationGUI/SupportingFiles/tDEVCalcCheck.py", line 137, in
tauArr, mtieAllanTools, a, b = allantools.mtie(freqTrend, 1 / 0.033, 'freq', taus=tauArr)
File "C:\Work\Projects\TimeErrorSimulationGUI\GUI\allantools.py", line 1086, in mtie
rw = mtie_rolling_window(phase, int(mj + 1))
File "C:\Work\Projects\TimeErrorSimulationGUI\GUI\allantools.py", line 1053, in mtie_rolling_window
return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
File "C:\Users\antonp\AppData\Local\Programs\Python\Python36-32\lib\site-packages\numpy\lib\stride_tricks.py", line 102, in as_strided
array = np.asarray(DummyArray(interface, base=x))
File "C:\Users\antonp\AppData\Local\Programs\Python\Python36-32\lib\site-packages\numpy\core\numeric.py", line 492, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: array is too big; arr.size * arr.dtype.itemsize
is larger than the maximum possible size.
The error happens when the window size reaches 1515 elements and the software tries to create a 436365 x 1515 array. Python 3.6.5 (32-bit) was used.
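Quick arithmetic confirms this is a plain memory-size problem on a 32-bit interpreter (assuming float64 phase data):

```python
# 436365 windows x 1515 samples per window, 8 bytes per float64 element
n_windows, window, itemsize = 436365, 1515, 8
total_bytes = n_windows * window * itemsize
print(total_bytes / 1e9)   # roughly 5.3 GB, far beyond a 32-bit address space
```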
In order to overcome the issue I've added a piece of code which kicks in when the original code hits the size limit (see the except part of the try/except block below) and continues the calculation:
```python
def mtie(data, rate=1.0, data_type="phase", taus=None):
    """ Maximum Time Interval Error.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Notes
    -----
    this seems to correspond to Stable32 setting "Fast(u)"
    Stable32 also has "Decade" and "Octave" modes where the
    dataset is extended somehow?
    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)
    devs = np.zeros_like(taus_used)
    deverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):
        try:
            rw = mtie_rolling_window(phase, int(mj + 1))
            win_max = np.max(rw, axis=1)
            win_min = np.min(rw, axis=1)
            tie = win_max - win_min
            dev = np.max(tie)
        except:
            # fall back to a sliding-window scan when the strided
            # array would be too large to allocate
            if int(mj + 1) < 1:
                raise ValueError("`window` must be at least 1.")
            if int(mj + 1) > phase.shape[-1]:
                raise ValueError("`window` is too long.")
            mj = int(mj)
            currMax = np.max(phase[0:mj])
            currMin = np.min(phase[0:mj])
            dev = currMax - currMin
            for winStartIdx in range(1, int(phase.shape[0] - mj)):
                winEndIdx = mj + winStartIdx
                if currMax == phase[winStartIdx - 1]:
                    currMax = np.max(phase[winStartIdx:winEndIdx])
                elif currMax < phase[winEndIdx]:
                    currMax = phase[winEndIdx]
                if currMin == phase[winStartIdx - 1]:
                    currMin = np.min(phase[winStartIdx:winEndIdx])
                elif currMin > phase[winEndIdx]:
                    currMin = phase[winEndIdx]
                if dev < currMax - currMin:
                    dev = currMax - currMin
        ncount = phase.shape[0] - mj
        devs[idx] = dev
        deverrs[idx] = dev / np.sqrt(ncount)
        ns[idx] = ncount
    return remove_small_ns(taus_used, devs, deverrs, ns)
```
The gap resistant ADEV functions have some issues that should be easy to fix and a more serious issue concerning frequency data.
The easy to fix issues:
The more important issue is that gradev() doesn't work properly for frequency data. If the input data is a frequency array with NaN values, the frequency2phase() function is called and it sets all phase values to NaN after the first NaN value occurs in the frequency array. That's a property of np.cumsum(). If you are unlucky and the first value is a NaN you lose all data in this step. There is a function called trim_data() that trims NaNs from the beginning of an array, but it is not used anymore (since the API change).
Additionally, since the API change, the gradev example uses data_type='freq' and the example doesn't work very well due to the issue discussed above.
The core issue here is that for frequency data with gaps, the frequency data can not be converted to phase data and the gradev must be calculated based on the equations for the adev for fractional frequency data.
At the moment all *dev calculations in allantools are based on phase data. What do you think about *dev calculations based on frequency data?
As discussed in I. Sesia and P. Tavella, Metrologia, 45 (2008) p. 134, using the phase data for the adev calculation is preferred in the presence of data gaps, but sometimes there is no choice.
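The np.cumsum() behavior described above is easy to demonstrate:

```python
import numpy as np

freq = np.array([1.0, 2.0, np.nan, 4.0])   # frequency data with one gap
phase = np.cumsum(freq)
# every element from the first NaN onward is NaN, so all phase data
# after the gap is lost in the frequency-to-phase conversion
```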
It would be very helpful if a new release were made on pypi.python.org. It would enable Python 3 compatibility.
stable32-like plotting functions (#83) are now in /examples - these could be merged with the existing Plot() object - perhaps with a few style-options to the user.
Use of special characters in pylab (or matplotlib?) - like tau and sigma - seems to require python3. If someone is eager to support python 2.x also then this would be a slight improvement.
test_gradev.py::test_gradev
/home/travis/build/aewallin/allantools/allantools/allantools.py:1272: RuntimeWarning: invalid value encountered in double_scalars
dev = np.sqrt(s / (2.0 * n)) / mj * rate
Implement allan deviation calculation for time-stamped data with missing datapoints and/or outliers (removed by MAD test).
Reference:
Sesia et al. "Application of the Dynamic Allan Variance for the Characterization of Space Clock Behavior" (IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 47, NO. 2 APRIL 2011)
Matlab code for this is also available.
I'm not sure how the different files should import each other.
I've now moved the CI, EDF, and noise-ID to a separate file ci.py, but this seems to break the build.
An example would be helpful on how to package a project with multiple files...
We are now including README.rst into the documentation - this is fine since we don't want to repeat the contents of the readme in the docs.
would it be possible to get the headings of README.rst into the TOC of the generated documentation?
The new TOC would have four main sections:
(perhaps the References section from the readme could also be in the main docs, not in the readme)
Tried installing allantools on Windows using Python 3.4. There was a print function error (looked like Python 2) on line 1116 of /allantools/allantools.py
Change
print o_taus, o_n, o_dev
to
print(o_taus, o_n, o_dev)
and then the install worked fine and everything seems to operate OK.
I implemented a first try at htotdev() following the description in NIST SP 1065.
The algorithm seems very similar to mtotdev(), but using frequency input data where mtotdev uses phase input data.
However the numbers produced by the algorithm seem incorrect. Should be fixed.
The API change caused a bug in three_cornered_hat_phase() which was not found by any of the tests. We probably have zero coverage of this function with the current tests.
jleute has an implementation of the Kasdin & Walter algorithm for noise with different colors
https://github.com/jleute/colorednoise
this unifies all the noise-types under one algorithm
Please allow the user to supply kwargs to the plot method of the allantools.Plot class:

```python
def plot(self, atDataset, errorbars=False, grid=False, **kwargs):
    """overriding the implementation from allantools

    Args:
        atDataset:
        errorbars:
        grid:

    Returns:
    """
    if errorbars:
        self.ax.errorbar(atDataset.out["taus"],
                         atDataset.out["stat"],
                         yerr=atDataset.out["stat_err"],
                         **kwargs)
    else:
        self.ax.plot(atDataset.out["taus"],
                     atDataset.out["stat"],
                     **kwargs)
    self.ax.set_xlabel("Tau")
    self.ax.set_ylabel(atDataset.out["stat_id"])
    self.ax.grid(grid, which="minor", ls="-", color='0.65')
    self.ax.grid(grid, which="major", ls="-", color='0.25')
```
Are there any major issues, questions, or changes that need to be fixed before the next release 2016.4?
I think we have made good progress and want the new API out as soon as possible.
please discuss below.
Hi guys, I'm working with python3.4, pip 1.5.6, setuptools 41.0.1, and I'm getting the following error when trying to install allantools:
$ python3 --version
Python 3.4.2
$ pip3 --version
pip 1.5.6 from /usr/lib/python3/dist-packages (python 3.4)
$ python3 -c 'import setuptools;print(setuptools.__version__)'
41.0.1
$ pip3 install allantools
Downloading/unpacking allantools
Downloading AllanTools-2018.3.tar.gz
Running setup.py (path:/tmp/pip-build-6u1cwn7x/allantools/setup.py) egg_info for package allantools
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-build-6u1cwn7x/allantools/setup.py", line 26, in <module>
long_description=open('README.rst', 'r').read(),
File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1578: ordinal not in range(128)
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-build-6u1cwn7x/allantools/setup.py", line 26, in <module>
long_description=open('README.rst', 'r').read(),
File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1578: ordinal not in range(128)
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-6u1cwn7x/allantools
Storing debug log for failure in /root/.pip/pip.log
So, the workaround that I found is to install the code directly from GitHub, this way:
$pip3 install git+http://[email protected]/aewallin/allantools.git
Downloading/unpacking git+http://[email protected]/aewallin/allantools.git
Cloning http://[email protected]/aewallin/allantools.git to /tmp/pip-7vzv6uo0-build
Running setup.py (path:/tmp/pip-7vzv6uo0-build/setup.py) egg_info for package from git+http://[email protected]/aewallin/allantools.git
/tmp/pip-7vzv6uo0-build/setup.py:12: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
Requirement already satisfied (use --upgrade to upgrade): numpy in /usr/lib/python3/dist-packages (from AllanTools==2018.8.1)
Requirement already satisfied (use --upgrade to upgrade): scipy in /usr/lib/python3/dist-packages (from AllanTools==2018.8.1)
Installing collected packages: AllanTools
Running setup.py install for AllanTools
Could not find .egg-info directory in install record for AllanTools==2018.8.1 from git+http://[email protected]/aewallin/allantools.git
Successfully installed AllanTools
Cleaning up...
I'm not able to update the Python version, and I'm currently at the latest versions of pip and setuptools for Python 3.4, so to keep Python 3.4 compatibility it would be nice if you could release a new PyPI version.
Looking at the code for tierms_phase, it looks like the deverr output is always zero. I think line 464 should be

```python
deverrs.append(tie / math.sqrt(ncount))
```

instead of

```python
deverrs.append(dev / math.sqrt(ncount))
```

as the dev variable is never set to a useful value there. The new method would be:
```python
def tierms_phase(phase, rate, taus):
    """ TIE rms """
    rate = float(rate)
    (m, taus_used) = tau_m(phase, rate, taus)
    count = len(phase)
    devs = []
    deverrs = []
    ns = []
    for mj in m:
        ncount = 0
        tie = []
        for i in range(count - mj):
            phases = [phase[i], phase[i + mj]]  # pair of phases at distance mj from each other
            tie.append(max(phases) - min(phases))  # phase error
            ncount += 1
        # RMS of tie vector
        tie = [pow(x, 2) for x in tie]  # square
        tie = numpy.mean(tie)           # mean
        tie = math.sqrt(tie)            # root
        devs.append(tie)
        deverrs.append(tie / math.sqrt(ncount))
        ns.append(ncount)
    return remove_small_ns(taus_used, devs, deverrs, ns)
```
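As a side note, the inner loop above vectorizes well with numpy, since max(pair) - min(pair) for a pair of values is just their absolute difference. A sketch of the core computation (the helper name is illustrative and it has not been validated against the reference test values):

```python
import numpy as np

def tierms_core(phase, mj):
    """RMS time interval error at lag mj, vectorized."""
    phase = np.asarray(phase, dtype=float)
    # |x[i+mj] - x[i]| equals max-min of each pair in the loop above
    tie = np.abs(phase[mj:] - phase[:-mj])
    ncount = tie.size
    dev = np.sqrt(np.mean(tie ** 2))
    return dev, dev / np.sqrt(ncount), ncount
```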
Because scipy is very slow to compile and install, it takes quite a bit of time to test allantools in virtual environments (e.g. travis-ci or virtualenv).
There is only one function from scipy used: scipy.stats.chi2.ppf()
I tried writing this function using only numpy, but the implementation fails in some corner cases, see:
http://www.anderswallin.net/2016/05/scipy-stats-chi2-ppf-without-scipy/
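For reference, scipy.stats.chi2.ppf() computes the inverse of the chi-squared CDF. At two degrees of freedom there is a hand-checkable closed form; this special case is for illustration only and is no substitute for the general function:

```python
import math

def chi2_ppf_df2(p):
    # inverse of the chi-squared CDF for df=2, where CDF(x) = 1 - exp(-x/2)
    return -2.0 * math.log(1.0 - p)
```

For example chi2_ppf_df2(0.5) gives about 1.386, matching scipy.stats.chi2.ppf(0.5, 2).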
It is probably time for a new release (see #88) - sometime towards the end of July 2019.
This issue is to discuss what we want to include (or not) before the release.
@aewallin to-do list
issues (to be) fixed:
General to-do list:
please add your ideas/comments/to-do-lists below - thanks.
We should add confidence intervals (errorbars) for the computed deviations.
For most common deviations the equivalent degrees of freedom are already computed in uncertainty_estimate()
However it looks like this will require a change in the API, both for the calling signature, which is now:
mdev(data, rate=1.0, data_type="phase", taus=None)
to something like:
mdev(data, rate=1.0, data_type="phase", taus=None, ci=None)
where the ci parameter would be used to select between simple 1/sqrt(N) errorbars and the more complicated calculation based on edf. In theory one could allow asymmetric confidence intervals, but perhaps this is not required? For example ci=0.683 would give standard error (1-sigma) confidence intervals.
Also on the output-side we need an API change since the return values are now:
(taus, devs, deverrs, ns)
A straightforward change would be to add the lower and upper confidence intervals:
(taus, devs, devs_lo, devs_hi, ns)
But I wonder if this starts to be too many variables to remember when calling the function.
An alternative would be a DevResult object where the outputs would have names:
myresult = allantools.adev(...)
myresult.taus
myresult.devs
myresult.devs_lo
..and so on...
any other ideas or comments on this?
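One way to sketch the DevResult idea (the field names are suggestions, not an agreed API; dataclasses needs Python 3.7+, and a collections.namedtuple would work similarly on older versions):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DevResult:
    taus: np.ndarray
    devs: np.ndarray
    devs_lo: np.ndarray
    devs_hi: np.ndarray
    ns: np.ndarray

    def __iter__(self):
        # keeps tuple-style unpacking working for old calling code
        return iter((self.taus, self.devs, self.devs_lo, self.devs_hi, self.ns))
```

Old code could still do `(taus, devs, lo, hi, ns) = allantools.adev(...)` while new code uses the named attributes.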
The travis build/test shows a warning, probably related to passing NaN or negative values to numpy.sqrt():
tests/gradev/test_gradev.py::test_gradev
/home/travis/build/aewallin/allantools/allantools/allantools.py:1272: RuntimeWarning: invalid value encountered in double_scalars
dev = np.sqrt(s / (2.0 * n)) / mj * rate
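A sketch of silencing the warning at its source by checking the argument before the square root (the values below are illustrative of what gapped data can produce, not taken from the test):

```python
import numpy as np

# with gapped data, floating-point cancellation can leave s slightly
# negative (or NaN), which makes np.sqrt() emit the RuntimeWarning
s, n, mj, rate = -1e-22, 5, 2.0, 1.0
arg = s / (2.0 * n)
if np.isnan(arg) or arg < 0:
    dev = np.nan          # explicit NaN instead of a RuntimeWarning
else:
    dev = np.sqrt(arg) / mj * rate
```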
Hi,
Great project! I was wondering if there's any interest in adding robust statistics methods to remove outliers or detect frequency jumps.
Hi,
we stumbled upon what seems to be an overwriting bug:
when creating two different datasets with = at.Dataset(...), and then calling the compute jobs on the two datasets with .compute('tdev', ...), the second computed deviation job overwrites the first, i.e. we end up with exactly the same deviation values.
However, when doing dataset1 = at.Dataset(...); dataset1.compute() and then dataset2 = at.Dataset(...); dataset2.compute(),
the computed deviation jobs & values don't overwrite each other.
This behavior could be reproduced with: AllanTools 2018.03, Python 3, Ubuntu 64-bit, and AllanTools 2018.11, Python 3, Windows 7 64-bit.
Greetings from LNE-Syrte, Paris
Florian, Mikkel
I was unable to follow the example in the getting started page (using Version 2016.11) upon first review of this code:
https://github.com/aewallin/allantools/blob/master/docs/getting_started.rst
Specifically the "Frequency data example".
Upon reviewing the code it looks like the correct expression (using the earlier API that is still valid) should be:
From (sic):
(t2, ad, ade, adn) = allantools.oadev(freqyency=y, rate=r, taus=t)
To:
(t2, ad, ade, adn) = allantools.oadev(y, rate=r, data_type="freq", taus=t)
The other examples shown may have a similar issue. Also it would be good to show on this page how the new API can be used.
For some reason the 'autofunction' documentation that is generated OK by 'make html' offline does not show up on readthedocs.
(is it because we are now using latex math in the docstrings?)
Changing licence to "GNU Lesser General Public License (LGPL) version 2.1 or later" would allow wider usage of allantools while still requiring distributing changes to the library. Example LGPL usage can be seen at FFmpeg.
Hello,
Are there any tests in the test suite that validate the stochastically generated noise against the Adev plots?
For example (half-pseudocode):
samples = int(1e6)
data = noise.white(samples) * noise_magnitude
tau, dev, _, _ =
# fit a line of slope -1/2 to log(tau), log(dev)
# todo...
assert line.y_intercept == noise_magnitude
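A numpy-only sketch of such a test, using numpy's own generator in place of allantools.noise and a simple non-overlapping ADEV, so it only illustrates the slope check rather than exercising the library code itself:

```python
import numpy as np

def simple_adev(y, m):
    """Non-overlapping Allan deviation of frequency data y at averaging factor m."""
    n = (len(y) // m) * m
    ybar = y[:n].reshape(-1, m).mean(axis=1)        # m-sample averages
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(42)
y = rng.standard_normal(2**20)                      # white frequency noise
slope = np.log10(simple_adev(y, 100) / simple_adev(y, 1)) / np.log10(100)
# white FM gives sigma_y(tau) ~ tau^(-1/2), so slope should be close to -0.5
```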
tau=[] is set by default; this is then checked later with
if not tau:
which works for python lists, but not for numpy arrays.
This should be changed so that the user can supply either a python list or a numpy array (such as numpy.linspace() or numpy.logspace()).
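A sketch of an emptiness check that works for both (the helper name is illustrative):

```python
import numpy as np

def no_taus_supplied(taus):
    # works for python lists and numpy arrays alike, where `not taus`
    # is ambiguous; None also counts as "not supplied"
    return taus is None or np.size(taus) == 0
```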
A colleague discovered a problem when using 2016.4 under windows :
Collecting allantools
  Using cached AllanTools-2016.4.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "", line 1, in
      File "c:\temp\pip-build-e48r1_et\allantools\setup.py", line 10, in
        pkginfo = json.load(open(pkginfo_path))
      File "D:\Programmes\WinPython-64bit-3.5.1.3\python-3.5.1.amd64\lib\json\__init__.py", line 265, in load
        return loads(fp.read(),
      File "D:\Programmes\WinPython-64bit-3.5.1.3\python-3.5.1.amd64\lib\encodings\cp1252.py", line 23, in decode
        return codecs.charmap_decode(input,self.errors,decoding_table)[0]
    UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 430: character maps to <undefined>
The problem lies in allantools_info.json: I copy-pasted into it some UTF-8 double quotes that the windows encoding cp1252 does not know how to convert. This is probably a cp1252 bug, but an easy work-around is to replace them with regular ascii quotes.
Pull request incoming in a few seconds, I just describe it here in case someone looks for help...
I have a dataset. Plot it with Tom van Baak's adev5 and it looks clean. Plot it with allantools.oadev() and there is junk on the right edge of the data. I have seen this junk on other people's adev plots; not sure what is happening.
# python three-cornered-hat-demo.py
allatools three-cornered-hat demo
Traceback (most recent call last):
File "three-cornered-hat-demo.py", line 62, in <module>
(taus,devA) = allantools.three_cornered_hat_phase(phaseAB,phaseBC,phaseCA,rate,t, allantools.mdev)
ValueError: too many values to unpack
This is on current git head.
Many times we require conversion of time-stamps from one format to another.
I have been using some versions of "gpstime" and "jdutil" for GPS-timescale and Modified Julian Date conversions.
gps-functions e.g. here: https://github.com/aewallin/ppp-tools/blob/master/gpstime.py
JD/MJD functions e.g. here: https://gist.github.com/jiffyclub/1294443 or here: https://github.com/aewallin/ppp-tools/blob/master/jdutil.py
These could be included in allantools, perhaps as an extended subclass of datetime.
I have tried to configure readthedocs using a new webhook - but it is uncertain if it will work after Jan 1st 2019.
This issue is to track if there are problems with the readthedocs build in Feb 2019 - or if the docs still update after each commit/merge as usual.
If someone wants to look at loop-unrolling and more efficient use of numpy then mtotdev() is a good candidate. htotdev() is very similar.
It is now working and producing correct bias-uncorrected output as far as I can tell. However the implementation is very slow.
We have some old pure python implementations in allantools_pure_python.py - so if the current code is replaced the old implementation can be moved there.
I have looked in the documentation, but can find no definition of 'phase data'. What should it look like?
For example, I have data from a TIC about a PPS input:
0.999999994060
0.999999994632
1.000000014844
0.999999994485
0.999999995617
0.999999995685
1.000000017018
Is that phase data? Or should it be in degrees, radians, percentages, or?
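If those numbers are interval readings between consecutive PPS edges in seconds (an assumption on my part), then phase data would be the accumulated time error in seconds, not degrees or radians; a sketch:

```python
import numpy as np

# TIC readings of consecutive PPS intervals, in seconds (from the post above)
intervals = np.array([0.999999994060, 0.999999994632, 1.000000014844,
                      0.999999994485, 0.999999995617, 0.999999995685,
                      1.000000017018])
phase = np.cumsum(intervals - 1.0)   # accumulated time error vs the nominal 1 s
```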
I have received a request for changing the license of AllanTools from GPL to LGPL to allow for more flexible integration of the library into e.g. monitoring-dashboards for time/frequency fiber-links.
The license change can be discussed below - or you can simply reply "ok" or "yes" if you do not see any problems with the license change.
I will implement the license change in about one week starting Monday 2018-03-26 and follow that with a new release - unless there are issues emerging here that should be handled first.
We should avoid duplicating information, but we now have mostly the same information in README.md which is used for the github frontpage, as well as docs/index.rst which is used to generate the sphinx documentation frontpage.
is there a solution that uses one single source file for both?
On some (newer?) python versions the test_tau_reduction fails with an output of
E IndexError: boolean index did not match indexed array along dimension 0; dimension is 42 but corresponding boolean dimension is 41
the problem seems to be a rounding error in tau_reduction() around line 1411 of allantools.py
It's easy enough to correct on my end, but it seems inconsistent.
https://github.com/aewallin/allantools/blob/master/tests/functional_tests/test_noise.py
I ran into problems while trying to run few-months-old code using allantools. It seems the latest release is backwards incompatible, but the documentation makes no mention of this. Furthermore, the Jupyter notebook examples are not updated to reflect this.
When defining the phase2frequency function, its name has been misspelled:
def phase2freqeuncy( phasedata, rate):
instead of:
def phase2frequency( phasedata, rate):
test_tau_generator.py::test_tau_generator_octave
/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/numpy/core/function_base.py:231: DeprecationWarning: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.
y = linspace(start, stop, num=num, endpoint=endpoint)
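The warning comes from passing a numpy float as linspace's num argument; an explicit int cast avoids it:

```python
import numpy as np

num = np.float64(42.0)                     # e.g. a computed point count
y = np.linspace(0.0, 1.0, num=int(num))   # explicit cast silences the warning
```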
On my machine "python setup.py test" seems to run a lot of numpy tests which are completely unrelated to allantools.
This takes a long time and produces a lot of errors; we should get rid of it.
output:
allantools$ python setup.py test
============================= test session starts ==============================
platform linux2 -- Python 2.7.6 -- pytest-2.5.1
collected 7086 items / 334 errors / 2 skipped
For coverage testing, and testing in general, the CS5071A dataset takes a long time to test. Maybe remove it or store it as an example?