andykee / lentil

Heart-healthy physical optics

Home Page: https://andykee.github.io/lentil/
License: Other
It would be nice to allow in-place operations on the Wavefront object. The two places this needs some development work are the Plane.multiply() method (Wavefront.__imul__() will need updating too) and the Wavefront.propagate_image() method.
The general syntax should be:
def function(x, ..., inplace=True):
    if inplace:
        out = x
    else:
        out = x.copy()
    ...
    return out
Some thought should also be put into what the default behavior should be for all inplace
operations.
This won't necessarily increase performance, but will make the user interface a bit more natural.
This new method should return a plane that is effectively a read-only static version of the parent plane
There was an issue with plane transformations (#4) that is now OBE because the nature of propagations is changing with v0.6.0.
This doesn't resolve the fact that we need to figure out the right approach for performing plane transformations with pre-existing tilts.
It is very inefficient to repeatedly calculate dynamic plane attributes (think phase for a segmented system). It would be nice if there were a freeze() method that returns a new "frozen" Plane with all static attributes derived from the parent plane. This method should operate on a standard set of attributes (probably defined in a private list) and also accept *args so that additional attributes can be added on the fly as needed.
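A minimal sketch of what freeze() could look like, using a stand-in Plane class (the attribute list name _freeze_attrs is hypothetical, not lentil API):

```python
import copy

class Plane:
    """Stand-in for lentil.Plane, for illustration only."""

    # default set of attributes to snapshot; name is hypothetical
    _freeze_attrs = ['amplitude', 'phase', 'mask']

    def __init__(self, amplitude=1, phase=0, mask=None):
        self.amplitude = amplitude
        self.phase = phase
        self.mask = mask

    def freeze(self, *args):
        """Return a new Plane whose attributes are static snapshots of
        the (possibly dynamic) attributes of the parent plane."""
        frozen = copy.copy(self)
        for attr in list(self._freeze_attrs) + list(args):
            # evaluate the attribute once and pin the result
            setattr(frozen, attr, copy.deepcopy(getattr(self, attr)))
        return frozen

p = Plane(amplitude=2, phase=0.5)
static = p.freeze()
```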
Image only accepts shape and pixelscale at the moment. This should be expanded to include other Plane parameters that make sense. At a minimum, I would expect amplitude and phase.
I have no recollection of what these were for. Dead code as far as I'm concerned.
Plane transformations (Rotate and Flip) are only applied to the Wavefront's data attribute at the moment. This is fine, unless the Wavefront's tilt attribute is populated. This is pretty clear from the example below:
import lentil
pupil = lentil.Pupil(amplitude=lentil.util.circle((256, 256), 128), diameter=1,
                     focal_length=10, pixelscale=1/256)
detector = lentil.Detector(pixelscale=5e-6, shape=(1024, 1024))
psf = lentil.propagate([pupil, detector], wave=650e-9, npix=(64, 64))
tilt = lentil.Tilt(x=10e-6, y=0)
psf = lentil.propagate([tilt, pupil, detector], wave=650e-9, npix=(64, 64))
psf = lentil.propagate([tilt, pupil, lentil.Rotate(angle=30), detector], wave=650e-9, npix=(64, 64))
I'm not entirely sure how to deal with this yet, but I wanted to capture the problem before I forgot.
There's no reason we can't rescale differently in row vs. col (or x vs. y). I think the change is as simple as sanitizing scale to accept a single value or multiple values, and updating the following block to use the correct indices:
if shape is None:
    shape = np.ceil((img.shape[0]*scale, img.shape[1]*scale)).astype(int)
else:
    if np.isscalar(shape):
        shape = np.ceil((shape*scale, shape*scale)).astype(int)
    else:
        shape = np.ceil((shape[0]*scale, shape[1]*scale)).astype(int)

x = (np.arange(shape[1], dtype=np.float64) - shape[1]/2.)/scale + img.shape[1]/2.
y = (np.arange(shape[0], dtype=np.float64) - shape[0]/2.)/scale + img.shape[0]/2.
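A sketch of what sanitizing scale could look like (sanitize_scale is a hypothetical helper name), with the shape computation then using the matching index for each axis:

```python
import numpy as np

def sanitize_scale(scale):
    """Expand a scalar or 2-element scale into per-axis (row, col) factors."""
    scale = np.asarray(scale, dtype=np.float64)
    if scale.ndim == 0:
        scale = np.array([scale, scale])
    if scale.shape != (2,):
        raise ValueError('scale must be a scalar or a 2-element sequence')
    return scale

# the shape computation then indexes scale per axis
img_shape = (100, 200)
scale = sanitize_scale((2, 0.5))
shape = np.ceil((img_shape[0]*scale[0], img_shape[1]*scale[1])).astype(int)
```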
There are situations where it may be preferable to use the good old FFT to perform far field propagations. This should be fairly straightforward, but a few things to consider:
Both np.random.poisson and np.random.normal are undefined for negative lambda or scale values. Since this function is expecting counts, and it doesn't make sense to have negative counts, I think we should check img for positivity and raise a ValueError if img has negative values.
One of the key missing bits of functionality in the propagator is to allow arbitrary tilts to be specified. The driving use-case for this at the moment is simulating pointing error that causes the PSF to wander around the focal plane from frame to frame when the line of sight stability isn't great.
I think a lot of the base functionality is there, but there are a couple of gotchas:
Right now the input wavefront inherits its shape from the first plane in planes. This will break because Tilt's shape is None. I think the right way to deal with this is to perform a pre-propagation step in Propagate's __enter__() method that determines the correct shape from the list of planes. It already has to iterate through all the planes to set up caching anyway, so this is the natural place to set the shape too.
Eventually we'll have to think about what to do with all the accumulated tilt if the last propagation isn't from a Pupil to a Detector. We'll get an early look at how this may play out when we try Tilt -> Pupil -> Grism -> Detector propagations.
The intended usage is something along the lines of:
planes = [Tilt(x=1e-6, y=-3e-6), pupil, detector]
The interface should allow for specifying either a static offset or providing a function that returns a shift every time it is called. This may result in the development of a RandomTilt object.
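A hypothetical sketch of what a RandomTilt could look like (the class name comes from the note above, but the constructor parameters and normal distribution are assumptions, not lentil API):

```python
import numpy as np

class RandomTilt:
    """Draws a fresh (x, y) tilt each time it is called, e.g. to model
    frame-to-frame line of sight jitter."""

    def __init__(self, scale, seed=None):
        self.scale = scale  # 1-sigma tilt magnitude
        self.rng = np.random.default_rng(seed)

    def __call__(self):
        x, y = self.rng.normal(scale=self.scale, size=2)
        return x, y

tilt = RandomTilt(scale=1e-6, seed=0)
x1, y1 = tilt()
x2, y2 = tilt()
```

A propagation loop could then call tilt() once per frame to get a new pointing offset.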
The following snippet causes all sorts of issues:
import lentil
pupil = lentil.Pupil(amplitude=lentil.circle((256,256), 128), focal_length=10, pixelscale=1/100)
plane = lentil.Plane()
field = lentil.propagate([pupil, plane], wave=650e-9, npix=128)
Some thought needs to be put in to whether wavefront really needs a pixelscale attribute, if so how it should be used in conjunction with Field.pixelscale, and how the following line in Plane.multiply should be changed to make this all work as expected:
out.data.append(Field(data=np.broadcast_to(field.data, self.shape)[s] * phasor,
                      pixelscale=self.pixelscale,
                      offset=lentil.util.slice_offset(s, self.shape),
                      tilt=tilt))
Near-term methods:

- to_image() or prop_image()
- to_pupil() or prop_pupil()

Longer-term:

- prop_angular() or prop_fresnel()
Should live under developer documentation
Currently, if npix is not specified when calling propagate(), it defaults to planes[-1].shape. This works fine for propagations to a Pupil plane or (current implementations of) an Image plane. A more robust implementation will eventually be needed.
It could look something like this:

- If min_q is defined, we should select an output size that satisfies min_q (see #2 (comment))
- If min_q is 0 and npix hasn't been specified, the user clearly doesn't have precise requirements on the results. We'll compute the output plane size that will give them q = 1.5 🤷

I need to run on a system where the latest Python version available is 3.6.7, and an upgrade isn't likely anytime soon. I think this should be as simple as removing f-strings from a few places and updating the minimum version in setup and init.
You would expect that a zernike mode is defined over the extent of the mask that is supplied, but that isn't exactly what's happening:
segmask = lentil.util.circle((256,256), 0.2*256, center=(0.2*256, -0.25*256))
seg_focus = lentil.zernike.zernike(segmask, 4)
The issue appears to be that zernike_coordinates is not taking the mask boundary into account when computing rho/theta.
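A sketch of computing normalized polar coordinates over the mask extent rather than the full array (hypothetical helper; the real zernike_coordinates signature and conventions may differ):

```python
import numpy as np

def mask_coordinates(mask):
    """Return (rho, theta) normalized to the bounding extent of the
    nonzero region of mask, so a mode computed from them spans the
    supplied mask rather than the full array."""
    r, c = np.nonzero(mask)
    center = ((r.min() + r.max()) / 2, (c.min() + c.max()) / 2)
    radius = max(r.max() - r.min(), c.max() - c.min()) / 2
    rr, cc = np.indices(mask.shape)
    rho = np.sqrt((rr - center[0])**2 + (cc - center[1])**2) / radius
    theta = np.arctan2(rr - center[0], cc - center[1])
    return rho, theta
```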
Several of Lentil's primitive operations allow in-place execution, including Plane multiplication, Plane tilt fitting, Plane resampling/rescaling, and Wavefront propagation. The existence of these operations implies (at least in my mind) that their execution with inplace=True should be faster and/or use less memory. The reality is that, with the exception of Plane.fit_tilt(), this assumption is not valid.
Beyond the lack of performance improvements, allowing in-place operations both complicates the library code and makes it less clear what is happening in user code.
Existing functions and methods can be arranged into the following two groups based on how they operate:
1. Methods that can never operate in-place
| Method Name |
|---|
| Plane.multiply |
| Plane.__imul__ |
| Plane.resample |
| Plane.rescale |
| propagate_dft |

These methods can never operate in-place because the nature of the operation requires copying or resizing arrays. For these methods, inplace=True is essentially just syntactic sugar for reassigning the new result to self.
2. Methods that modify underlying data and can be done in-place
| Method Name |
|---|
| Plane.fit_tilt |

These methods don't operate in-place by default, but could offer the option to specify inplace=True. The structure of the enclosed data remains intact, but it can be mutated in place.
The inplace parameter for all methods listed above should be deprecated. This will require modifying these functions and their supporting tests, as well as updating documentation.

This is a breaking change. If this change is made in a pre-v1.0 release then no DeprecationWarning will be issued.
The chip placement strategy in v0.5.0 beta 2 is failing when there are large tilts, resulting in the chip being placed outside the canvas.
This is easily reproducible via
import numpy as np
import lentil as le
diameter = 1
focal_length = 20
wavelength = 500e-9
du = 5e-6
n = 256
s1 = le.util.circle((n, n), n // 5, shift=(0, -0.3 * n))
s2 = le.util.circle((n, n), n // 5, shift=(0, .3 * n))
amp = s1 + s2
segmask = np.array([s1, s2])
dx = diameter / n
phase = le.zernike.zernike(s1, 2)*5e-6
pupil = le.Pupil(diameter=1, focal_length=20, pixelscale=dx, amplitude=amp, phase=phase, segmask=segmask)
detector = le.Image(pixelscale=du)
le.propagate([pupil, detector], wave=620e-9, npix=64, npix_chip=32, oversample=5, rebin=False, tilt='angle')
The error is
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-12f639a9e090> in <module>
22 detector = le.Image(pixelscale=du)
23
---> 24 a = le.propagate([pupil, detector], wave=620e-9, npix=64, npix_chip=32, oversample=5, rebin=False, tilt='angle')
25
26 plt.imshow(a**0.5)
~/Dev/lentil/lentil/prop.py in propagate(planes, wave, weight, npix, npix_chip, oversample, rebin, tilt, interp_phasor, flatten, use_multiprocessing)
125 tiles.append(imtile(w.data[d], data_slice, chip_slice))
126
--> 127 tiles = consolidate(tiles)
128
129 for tile in tiles:
~/Dev/lentil/lentil/prop.py in consolidate(tiles)
500 def consolidate(tiles):
501 for m, n in combinations(range(len(tiles)), 2):
--> 502 if overlap(tiles[m].slice, tiles[n].slice):
503 tiles[m].join(tiles[n])
504 tiles.pop(n)
~/Dev/lentil/lentil/prop.py in overlap(a, b)
481
482 def overlap(a, b):
--> 483 return a[0].start <= b[0].stop and a[0].stop >= b[0].start and a[1].start <= b[1].stop and a[1].stop >= b[1].start
484
485
TypeError: 'NoneType' object is not subscriptable
The way a Plane phasor is constructed for multiplication with a Wavefront is not right:
The issue appears to be twofold:
The fix for 1 is: mask[n] -> mask[n][s]

The fix for 2 is: amplitude[s] * mask[s] -> amplitude[s] * mask

Should also update the comment just prior to this logic to both correctly call out what is happening in 2 above and include a few words about why we use self.mask if self.size == 1, etc. (the reason being: when size == 1, the associated parameter is a scalar and needs no slicing).
- Plane.multiply() method to be standalone: x3 = lentil.multiply(x1, x2)
- Wavefront.__mul__() and Wavefront.__imul__()
- Wavefront.__rmul__(), or at least raising a TypeError
As part of a broader rearchitecting (and hopefully simplification) of the far-field propagation core, I think it's worth considering removing the distinction between Image and Detector planes.
How are Image and Detector related/different: Detector subclasses Image. Both objects have pixelscale and shape attributes, but while they are optional (default to None) for Image, they are required by the Detector constructor.
How Image and Detector are currently used: Including a Detector object in the list of planes provided to propagate is a trigger for using the DFT to perform pupil to image plane propagations. Image does not currently provide any real functionality.
Proposed change: Remove Detector plane and collapse all behavior into Image plane. DFT-based propagations are triggered by Pupil to Image planes where pixelscale is specified.
Additional things to consider:
Writing a unit test for convolvable.Jitter uncovered an interesting question about what the scale term should actually represent when applying jitter to an image.

The code uses scale as the standard deviation of the Gaussian kernel. This behavior is exactly as described in the docs. Practically, what this means is that if we ask for 2 pixels of jitter, what we get back is actually 2 pixels of jitter in every direction, or what looks more like 4-5 pixels.

My gut feeling says that we should be dividing scale by 2 when computing the kernel, but I have to dig through some references to verify.

Whether the code ends up changing or not, the docstring should be updated to make it a little more clear how scale is used internally and what a user should expect to see.
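For reference, a sketch of a 1-D kernel built the way the current behavior is described above, with scale as the Gaussian standard deviation (the helper name is made up). Note that a Gaussian with sigma = 2 px has a FWHM of roughly 2.355 * sigma ≈ 4.7 px, which lines up with the "looks more like 4-5 pixels" observation:

```python
import numpy as np

def jitter_kernel_1d(scale, size=None):
    """Normalized Gaussian kernel with standard deviation `scale` (pixels)."""
    if size is None:
        size = int(np.ceil(scale * 8)) | 1  # odd length spanning about +/- 4 sigma
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * scale**2))
    return k / k.sum()

k = jitter_kernel_1d(scale=2)
```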
The default behavior of the Wavefront constructor is to create a single Field object with data = 1+0j. This represents an "infinite" plane wave (really, it's just broadcastable to any shape).
If we try to access Wavefront.field in this condition, we get an error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/akee/dev/lentil/lentil/wavefront.py", line 60, in field
out = np.zeros(self.shape, dtype=complex)
AttributeError: shape
Simply setting shape=() doesn't do the trick either:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/akee/dev/lentil/lentil/wavefront.py", line 62, in field
out = lentil.field.insert(field, out)
File "/home/akee/dev/lentil/lentil/field.py", line 225, in insert
field_shifted_ul = (output_shape // 2) - (field_shape // 2) + field_offset
ValueError: operands could not be broadcast together with shapes (0,) (2,)
The expected behavior is to return 1+0j
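One possible fix, sketched with a minimal stand-in class (the real logic lives in the Wavefront.field property in wavefront.py, whose internals may differ):

```python
import numpy as np

class Wavefront:
    """Minimal stand-in illustrating the expected behavior: a default
    Wavefront with no concrete shape reports field = 1+0j."""

    def __init__(self, shape=None):
        self.shape = shape
        self.data = []

    @property
    def field(self):
        if not self.shape:  # covers both shape=None and shape=()
            return np.complex128(1 + 0j)
        out = np.zeros(self.shape, dtype=complex)
        # ... insert each Field into out here ...
        return out
```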
The function signature for Wavefront.insert() indicates there is a choice to work with intensity or not, but this is not the case. Both the docstring and the call to Field.insert are correct. Only the function definition needs to change here.
def insert(self, out, intensity=False, weight=1):
    """Directly insert wavefront intensity data into an output array.

    This method can avoid repeatedly allocating large arrays of zeros
    when accumulating :attr:`intensity`.

    Parameters
    ----------
    out : ndarray
        Array to insert wavefront data into
    weight : float
        Scale factor applied to wavefront data

    Returns
    -------
    out : ndarray
        Array with wavefront data inserted into it at the appropriate location

    """
    for field in lentil.field.reduce(*self.data):
        out = lentil.field.insert(field, out, intensity=True, weight=weight)
    return out
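The fix is simply dropping the unused intensity parameter from the definition. Sketched below with lentil.field stubbed out by a toy stand-in so the behavior can be demonstrated standalone:

```python
class _field:
    """Toy stand-in for lentil.field, for illustration only."""
    @staticmethod
    def reduce(*data):
        return data
    @staticmethod
    def insert(field, out, intensity=True, weight=1):
        return out + weight * abs(field)**2

class Wavefront:
    def __init__(self, data):
        self.data = data

    def insert(self, out, weight=1):  # intensity parameter removed
        """Directly insert wavefront intensity data into an output array."""
        for field in _field.reduce(*self.data):
            out = _field.insert(field, out, intensity=True, weight=weight)
        return out

w = Wavefront([1 + 0j, 2 + 0j])
```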
RandomState is frozen as of Numpy v1.16. The recommended replacement is default_rng, which was introduced in Numpy v1.17.
It's not entirely clear what the deprecation plan for RandomState is (if there is one at all), but we should plan to migrate to default_rng at some point.
We'll have to start enforcing a minimum Numpy version with this release (Numpy v1.17 was released in July 2019, so it should be an acceptable minimum version)
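The migration itself is mechanical; a sketch:

```python
import numpy as np

# Legacy interface (frozen under NEP 19):
legacy = np.random.RandomState(seed=0)

# Recommended replacement (NumPy >= 1.17):
rng = np.random.default_rng(seed=0)
counts = rng.poisson(lam=10.0, size=4)
noise = rng.normal(loc=0.0, scale=1.0, size=4)
```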
The Detector plane was (re)introduced in 00d2e2c. At some point during the crush to get a functioning beta release of v0.5.0 out to meet some program deadlines, I decided it was best to punt on forcing models to redefine final Image planes as Detector planes. This capability is currently disabled via comments.
I think it's going to be useful to make a distinction between Image and Detector planes as we move towards implementing true multi-plane and angular spectrum diffraction capabilities, but there's some cleanup to be done. Things to take a look at:
It's currently possible to independently specify an Image shape and also provide a mask and/or amplitude with a different shape. This is not handled correctly and causes unexpected behavior when constructing the Plane phasor for multiplication:
Line 457 in 0c4d70a
Here, s is the plane slice computed in the Plane constructor from the mask, and plane.shape is (potentially) specified by the user.

If plane.shape != plane.mask.shape, the resulting offset computed by slice_offset() and subsequently applied to the phasor is nonsensical.
Note this is only an issue for the Image object since specifying a shape to the constructor isn't supported anywhere else.
Just like the title says.
Also default shape to None and fall back on the Wavefront shape if not otherwise specified.
The scipy.ndimage.interpolation namespace is being deprecated:

DeprecationWarning: Please use `map_coordinates` from the `scipy.ndimage` namespace, the
`scipy.ndimage.interpolation` namespace is deprecated.
  from scipy.ndimage.interpolation import map_coordinates

This import needs to be updated in the util module. A quick check shows map_coordinates has been available in the ndimage namespace since pretty much the beginning of Scipy (2007), so there shouldn't be any backwards compatibility issues.
Right now the definition of shift parameters is inconsistent. In some places it's defined in terms of (row, column) and in others it's defined in terms of (x, y). We should choose one approach and standardize throughout.

A similar approach should be taken in places where center appears.
There's a TODO and a huge comment block that needs to be sorted through before this feature is ready for non-beta release.
The returned object should be a new Plane with updated attributes:

newplane = Plane.resample(pixelscale=...)

This should call underlying methods attached to the Plane object:

newphase = Plane.resample_phase(pixelscale=...)

and construct the new Plane object.
There appears to be a change in precision happening somewhere. Things are no longer consistent between numpy.random.uniform and lentil.detector.collect_charge:
_____________________________ test_collect_charge_bayer_even _____________________________
def test_collect_charge_bayer_even():
    img = np.random.uniform(low=0, high=100, size=(5, 2, 2))
    qe_red = np.random.uniform(size=5)
    qe_green = np.random.uniform(size=5)
    qe_blue = np.random.uniform(size=5)
    bayer_pattern = [['R', 'G'], ['G', 'B']]
    out = lentil.detector.collect_charge_bayer(img, np.ones(5), qe_red,
                                               qe_green, qe_blue, bayer_pattern)
    ul = np.sum(img[:, 0, 0]*qe_red)
    ur = np.sum(img[:, 0, 1]*qe_green)
    ll = np.sum(img[:, 1, 0]*qe_green)
    lr = np.sum(img[:, 1, 1]*qe_blue)
> assert np.array_equal(out, np.array([[ul, ur], [ll, lr]]))
E assert False
E + where False = <function array_equal at 0x106729550>(array([[183.9720367 , 99.00173696],\n [174.71267473, 103.77563856]]), array([[183.9720367 , 99.00173696],\n [174.71267473, 103.77563856]]))
E + where <function array_equal at 0x106729550> = np.array_equal
E + and array([[183.9720367 , 99.00173696],\n [174.71267473, 103.77563856]]) = <built-in function array>([[183.97203669606148, 99.00173696405702], [174.71267473057333, 103.77563856341055]])
E + where <built-in function array> = np.array
tests/test_detector.py:43: AssertionError
The implementation of these two attributes is pretty confusing, especially when Plane.mask is not specified when a Plane is created.
It's not used and probably not strictly necessary
There's really no need to differentiate these - everything can be collapsed into an n-dimensional mask attribute.
Plane.ptt_vector to use mask.size[0] if mask.ndim > 2

It seems like detector modeling work falls into one of two categories:
In the vast majority of use cases, a high-fidelity detector model isn't required. It may be useful to apply a quantum efficiency and gain, throw in some read noise, and digitize a frame subject to some bit depth and saturation limit.
When a high-fidelity detector model is required, it is often so specialized that a generic bit of code like what Lentil currently provides with the FPA.image method probably isn't particularly useful.
There's also the added complexity of trying to support every possible noise source under the sun.
I think a better long-term solution is to pull out the piece parts of the Lentil code that have been used to make a few very high-fidelity (radiometrically verified) focal plane models at JPL and deliver them as individual functions that can be chained together however a user wants in the detector module.
As a part of this work, the fpa module would also be deprecated. The custom functionality currently implemented in the BayerFPA class should be extracted and delivered as a standalone function.
The current behavior of propagate_dft() checks for overlap between the propagation result and the full-sized output canvas before performing a propagation. It would be beneficial to allow the specification of a mask to further refine the areas of interest within the output canvas.

To accomplish this, the propagate._overlap() method could be updated to return one of two things:

()

With this information, we can compute the appropriate shape and offset for the DFT calculation.
This is a new feature and does not affect backward compatibility
================================================================================================ warnings summary ================================================================================================
tests/test_detector.py: 1 warning
tests/test_radiometry_blackbody.py: 5 warnings
tests/test_radiometry_spectrum.py: 45 warnings
/home/akee/dev/lentil/lentil/radiometry.py:74: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
self.wave = wave
tests/test_radiometry_spectrum.py::test_spectrum_resample
/home/akee/dev/lentil/lentil/radiometry.py:438: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
self.wave = wave
tests/test_radiometry_spectrum.py::test_spectrum_trim
/home/akee/dev/lentil/lentil/radiometry.py:655: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
self.wave = self.wave[index_min:index_max+1]
tests/test_radiometry_spectrum.py::test_spectrum_crop
tests/test_radiometry_spectrum.py::test_spectrum_crop_outside_wave_limits
/home/akee/dev/lentil/lentil/radiometry.py:663: DeprecationWarning: `alltrue` is deprecated as of NumPy 1.25.0, and will be removed in NumPy 2.0. Please use `all` instead.
self.wave = np.delete(self.wave, indx)
In its current implementation, the shape of the output from propagate is determined by the npix parameter, and the center of the output will always be on-axis.

It may be desirable to adjust the location of the output by accepting a window parameter specifying the indices within the output canvas (with shape = npix) to return. In this way, a user could establish a 2000 x 2000 output array but only return the lower left quadrant by requesting

window = [slice(1000, 2000), slice(0, 1000)]
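The indexing itself is plain NumPy slicing; e.g., with the hypothetical window parameter from above:

```python
import numpy as np

canvas = np.zeros((2000, 2000))              # full output established by npix
window = (slice(1000, 2000), slice(0, 1000)) # lower left quadrant
quadrant = canvas[window]
```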
When referring to classes and methods in submodules in the docs, it is preferable to provide a fully qualified name. For example,
detector.pixelate()
is preferable to
pixelate()
This is simple to achieve in the documentation:
.. currentmodule:: lentil
:func:`detector.pixelate`
The documentation should be globally updated to make this change.
Carl is having a problem where he's performing a propagation with tilt='angle' and npix_chip=256, but the Pupil phase has shape = (256, 256). The pupil has some moderately large aberrations that are wrapping around.

It would be desirable to automatically detect when this will happen and interpolate the necessary arrays to prevent it. It should be possible to control this interpolation behavior, at least in the sense of turning it on or off, so we'll need to add a new parameter to propagate(). Let's tentatively call this parameter interpolate_phasor. This parameter should default to True.

The wraparound period of the DFT is given by 1/alpha, which we should be able to compute for all Pupil to Detector propagations at the same time we cache the amplitude and phase terms in Propagate.__enter__(). I think we'll also have to (at least temporarily) increase the mask size to ensure the ptt_vector is computed correctly. Because alpha is wavelength-dependent, we'll have to use the worst case value.

Once FFT propagations are working, we'll be able to use this approach to make sure they are being performed at Nyquist or better.

In the event that interpolate_phasor = True and the wraparound period is less than npix or npix_chip (plus some buffer), we should compute the size of the amplitude, phase, and mask arrays needed to avoid wrapping issues and interpolate away! We'll also have to decide which of scipy.interpolate.interp2d's interpolation methods to use. It defaults to linear, but we likely want cubic, at least for the phase.
Currently there are two ways Lentil works with tilt:

- Plane.fit_tilt() and reintroducing it when a propagation is performed

There are also several Plane objects that allow tilt to be introduced (ultimately as a Tilt object held in Wavefront.tilt):

- DispersiveShift
- Grism
- Tilt

It would be nice to have a bit more flexibility about both how tilt is ultimately applied (either via Wavefront.tilt or by including the tilt OPD) and also be able to easily transform between Tilts and OPD tilt.
Using mat2ndarray results in the following error:

Python Error: AttributeError: 'array.array' object has no attribute 'fromstring'

From this post, the issue appears to be due to the fact that MATLAB is attempting to pass the contents of an array to Numpy using Python's fromstring() method, which was removed in Python 3.9.
There appears to be a workaround using MATLAB's (undocumented) getByteStreamFromArray() method:
% Create some array
a = round(10*rand(2,3,4));
% Grab raw bytes
b = getByteStreamFromArray(a);
% Grab its shape
msize = size(a);
% Hardcoded header size found empirically (maybe should find some doc to
% justify this)
header = 72;
% Create numpy array from raw bytes buffer
d = py.numpy.frombuffer(b(header+1:end)).reshape(int32(msize));
One additional complication is in appropriately sizing the header. A quick test showed that the header size depends on the number of array dimensions:
0D, 1D, 2D: 64 bytes
3D, 4D: 72 bytes
5D, 6D: 80 bytes
...
The header size can be computed as 64 + (ceil(ndim/2) - 1)*8. Note that this is still valid for what Numpy would call a 0-dimensional "array", since MATLAB's ndims() function always computes the number of dimensions as greater than or equal to 2.
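The formula and the MATLAB ndims() floor can be expressed directly (the function name is made up):

```python
import math

def matlab_header_size(ndim):
    """Empirically determined byte-stream header size for an array with
    `ndim` dimensions. MATLAB's ndims() never reports fewer than 2."""
    ndim = max(ndim, 2)
    return 64 + (math.ceil(ndim / 2) - 1) * 8

sizes = [matlab_header_size(n) for n in range(7)]
```

This reproduces the table above: 64 bytes for 0-2 dimensions, 72 for 3-4, 80 for 5-6, and so on.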
Properties are denoted as such in the documentation. See nseg and pixelscale below.
Unfortunately, although phase behaves like a property, it is actually decorated with @cached_property, which implements the descriptor protocol but has no real relationship to the Python property, so Sphinx doesn't identify these attributes as properties.

Sphinx uses a method called isproperty() to tag things as properties. I'm wondering if it is easy to develop a simple extension to rework the isproperty() method to return

isinstance(obj, (property, lentil.modeltools.cached_property))
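The combined predicate is easy to demonstrate with a stand-in cached_property (the actual Sphinx hook to patch, e.g. sphinx.util.inspect.isproperty, varies between Sphinx versions, so this only sketches the check itself):

```python
class cached_property:
    """Minimal stand-in for lentil.modeltools.cached_property: a non-data
    descriptor that computes once and caches the result on the instance."""
    def __init__(self, func):
        self.func = func
    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        value = self.func(obj)
        obj.__dict__[self.func.__name__] = value
        return value

def isproperty(obj):
    """Proposed replacement predicate."""
    return isinstance(obj, (property, cached_property))

class Segment:
    @property
    def nseg(self):
        return 2

    @cached_property
    def phase(self):
        return 0.0
```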