lihaoyi / macropy
Macros in Python: quasiquotes, case classes, LINQ and more!
Although defining static methods/variables in a case class does not generate runtime errors, neither the class nor instance objects seem to contain these static references. What gives?
import macropy.core.macros
from macropy.macros.adt import macros, case

@case
class Point(x, y):
    access = False
    def canAccess(): return False
>>> p = Point(1, 2)
>>> p.access
Traceback (most recent call last):
  File "<console>", line 1, in <module>
AttributeError: Point instance has no attribute 'access'
>>> Point.access
Traceback (most recent call last):
  File "<console>", line 1, in <module>
AttributeError: class Point has no attribute 'access'
>>> Point.canAccess()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
TypeError: unbound method canAccess() must be called with Point instance as first argument (got nothing instead)
>>> p.canAccess()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
TypeError: canAccess() takes no arguments (1 given)
Macros should have access to the name of the module they were imported as. That way, if a macro is well-written, all the macro caller needs to do to ensure that everything works correctly is make sure they don't shadow the macro module.
For example, if a macro module was imported as foo, and a macro in foo needed to use a temporary variable, it could use foo.tmp instead of just tmp.
'Nuff said. It would require some cleverness to fix.
Currently we are parsing the source code into an AST, modifying it, and then unparsing it before sending it to the compiler. Hence we lose all of the original source locations for each AST node.
In order to preserve the line numbers for errors to work, we would need to maintain the AST (with some line numbers intact, others messed up due to macros) and fill in the missing line numbers (and other metadata) directly on the AST before passing that to the compiler.
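A minimal sketch of that repair step, using only the stdlib ast module (the node being swapped in is illustrative, not MacroPy's actual internals):

```python
import ast

source = "x = 1\ny = x + 1\n"
tree = ast.parse(source)

# A macro swaps in a freshly built node that has no location info at all:
replacement = ast.BinOp(left=ast.Constant(2), op=ast.Add(), right=ast.Constant(3))
old = tree.body[1].value

# copy_location carries the original node's lineno/col_offset over...
ast.copy_location(replacement, old)
tree.body[1].value = replacement
# ...and fix_missing_locations fills in any children we didn't annotate.
ast.fix_missing_locations(tree)

code = compile(tree, "<macro-expanded>", "exec")
ns = {}
exec(code, ns)
assert ns["y"] == 5
```

copy_location preserves the original node's position and fix_missing_locations fills in whatever the macro forgot, so tracebacks can keep pointing at plausible source lines.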
We currently do not have many unit tests, or really any concrete description, of core/'s functionality; the macros in macros/ and macros2/ are currently being used implicitly as unit tests when we make changes in core/. We will need to beef up the unit tests for core/ if we want to split the contents of macros/ and macros2/ into separate repos.
Decorator macros are currently hardcoded to only work with a single decorator:
if (isinstance(tree, ClassDef)
        and len(tree.decorator_list) == 1
        and tree.decorator_list[0]
        and type(tree.decorator_list[0]) is Name
        and tree.decorator_list[0].id in module.decorator_registry):
because I was lazy when I was getting it done and just wanted to hack something together. Ideally, we would have proper decorator stacking semantics for our macros, allowing multiple decorator macros or decorator macros mixed with normal decorators.
It should be possible to write a set of macros that does a delimited continuation transform, like in Scala:
with delimit:
    ...
    @callcc
    def func(cont):
        ...
    ...
Much of the same techniques would apply, and the transform would probably be similar to what the tail call macro does.
Edit: On the other hand, we wouldn't be able to use the @cps annotation on functions containing callcc, like the Scala transformation does, since we won't get that information at transform-time without types. This would require an explicit annotation (c%?) at the call site of all the functions that contain naked callccs; not ideal, but perhaps acceptable.
I don't know if it's going to be possible to make macros work on the CPython REPL, but IPython lets you pass the input through filters, and it would be great if we could get macros working there.
A bunch of people have already emailed me about the complete lack of backwards compatibility (e.g. the changes to the basic expr-macro syntax from % to () to []), and these constant changes are probably inhibiting anyone else from seriously relying on MacroPy or contributing to it.
Thus it makes sense, at some point, to declare the basic functionality "stable" and release it publicly as version 1.0.0. The internals can continue to evolve, and the edge cases can be worked out, but the parts of MacroPy which are obviously exposed to macro writers or users should stay more or less fixed, with some attention paid to backwards compatibility (e.g. changelists to help people keep up). This applies to both core/ and the bundled macros in macropy/.
I've done a lot of cleanup in the past week, and with the addition of hygienic quasiquotes, I'm roughly certain that the user experience of writing macros is where it should be.
I'm intending to release 1.0 on Monday the 24th; if anyone wants to propose major changes, now is the time to do it.
Should allow decorating functions with a tail-call rewriter.
Is there currently any way to access the entire source file from which a macro is called? I have some ideas for implementing an inlined lambda, and being able to analyze and modify the entire file's AST would be preferable to hackish approaches.
So after reading the reasons why my first proposal of a single Optimization Macro parsing the entire file was unfeasible, I got to thinking of better (and possibly more pythonic) ways of implementing inlined function calls.
Instead of a single macro, inlining a function would require two macros - one to register a target function as inline-able, and another to actually inline the registered function into a call site.
Proposed usage:
@inlineable
def foo(x, y):
    return x + y

def bar():
    for number in range(0, 1000):
        inline(result = foo(number, number + 1))
        print(result)
Or
@inline_all
def bar():
    for number in range(0, 1000):
        result = foo(number, number + 1)
        print(result)
An alternative to using the "inline" macro as a function is to use it as a decorator, and have it attempt to scan and inline every registered function call within the body.
I believe that, with some usage limitations, this kind of optimization is quite possible. The only difficulties lie in restructuring function bodies so that they can be safely inlined. Argument handling is also a bit tricky, but I believe that clever use of Python's symtable module, along with a comparison of the function's argument AST, should solve that.
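A macro-free sketch of the substitution step, assuming the inline-able function's body is a single return expression (REGISTRY and register are hypothetical stand-ins for what @inlineable would maintain):

```python
import ast, copy

REGISTRY = {}  # name -> FunctionDef, as the hypothetical @inlineable would record

def register(source):
    fn = ast.parse(source).body[0]
    REGISTRY[fn.name] = fn

class Inliner(ast.NodeTransformer):
    """Replace calls to registered functions whose body is a single
    `return <expr>` with that expression, substituting the call's
    argument ASTs for the parameter names."""
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id in REGISTRY:
            fn = REGISTRY[node.func.id]
            params = [a.arg for a in fn.args.args]
            mapping = dict(zip(params, node.args))

            class Substitute(ast.NodeTransformer):
                def visit_Name(self, name):
                    return mapping.get(name.id, name)

            body = copy.deepcopy(fn.body[0].value)
            return ast.copy_location(Substitute().visit(body), node)
        return node

register("def foo(x, y):\n    return x + y")
tree = ast.fix_missing_locations(Inliner().visit(ast.parse("result = foo(10, 20 + 1)")))
ns = {}
exec(compile(tree, "<inlined>", "exec"), ns)
assert ns["result"] == 31  # 10 + (20 + 1)
```

Note this naive substitution re-evaluates an argument expression once per use of the parameter, so a real version would need gen_sym-style temporaries; that is exactly where the symtable bookkeeping mentioned above would come in.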
MacroPy kinda-sorta works on the Python, PyPy and IPython REPLs now:
In [1]: import macropy.core.console
0=[]=====> MacroPy Enabled <=====[]=0
In : from macropy.macros2.tracing import macros, trace
In : trace%(1 + 2 + 3 + 4)
(1 + 2) -> 3
((1 + 2) + 3) -> 6
(((1 + 2) + 3) + 4) -> 10
Out[1]: 10
But there's some weird behavior: IPython's prompt changes from In [1]: to In :, and multi-line function and class bodies don't work in any of the shells.
It would be great if someone who knows his way around the various Python REPLs (e.g. not me) could help fix up macropy/core/console.py to do things properly.
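A stdlib-only sketch of the filtering idea, using code.InteractiveConsole; the expand() below is a stand-in for real macro expansion, and the trace% rewrite is purely illustrative:

```python
import code

def expand(source):
    # Stand-in for real macro expansion.  A real hook would parse the
    # source, run MacroPy's expander over the AST, and unparse it; here
    # we just rewrite the (hypothetical) trace% syntax into a call.
    return source.replace("trace%", "trace_fn")

class MacroConsole(code.InteractiveConsole):
    """An interactive console that filters each input through expand()."""
    def runsource(self, source, filename="<input>", symbol="single"):
        return super().runsource(expand(source), filename, symbol)

ns = {"trace_fn": lambda x: x}
console = MacroConsole(locals=ns)
console.push("y = trace%(6 * 7)")
assert ns["y"] == 42
```

This has the same limitation noted above: it filters one buffer at a time, so multi-line function and class bodies would need extra care.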
I ran into this as soon as I tried to use pattern matching in a module where I'm already using case classes. Even simply pasting the first pattern matching example from the README fails identically with:
Traceback (most recent call last):
  File "/Users/jrk/miniconda/lib/python2.7/site-packages/macropy/core/import_hooks.py", line 75, in find_module
    exec code in mod.__dict__
  File "path/to/schedule.py", line 38, in <module>
    print reduce(lambda a, b: a + b, Cons(1, Cons(2, Cons(4, Nil()))))
  File "path/to/schedule.py", line 32, in reduce
    with switch(my_list):
NameError: global name '_matching' is not defined
Mac OS 10.11b3 with Miniconda Python 2.7.10, MacroPy 1.0.3 installed from pip this afternoon.
Hi,
I checked out the latest code from github and I tried running the string interpolation example using python 2.7.3 on OS X and I was unable to run it. I am posting the code and the error here: https://gist.github.com/ducky427/5543518
Help would be much appreciated.
cheers
from blah import *
is awesome. However, it also dumps waaaaay more stuff in your local namespace than you need; for example, it is currently dumping the contents of the ast module into every file that uses macros.
The dumb thing to do would be to make the macro smart enough to only include additional imports for the "additional" things it needs. The better thing to do would be just to have a proper __all__ in all our files so import * does the right thing.
sql%[a.x for a in database if a.x > 10] # -> SELECT 'x' FROM 'a' WHERE x > 10
It should be possible to make a really nice filesystem DSL using macros, something like:
with shell:
    move(folder/file.txt, folder2/file2.txt)  # move a file
    remove(folder/_.pyc)                      # remove all .pyc files
    remove(folder/__/_.pyc)                   # recursively remove all .pyc files
    copy(folder/_.py, folder2)                # copy all .py files into new folder
    read(main/thing.java) | javac | write(main/thing.class)
    # read a file, compile its contents and write it back to disk
Would be super cool
It would be nice to have a macro that would print out the expanded version of the code contained within it, in order to make debugging easier. Something like
with expanded_trace:
    x = map(f%_+1, [1, 2, 3])
    # x = map(lambda sym0: sym0 + 1, [1, 2, 3])
This would make it easier to see what's going on without having to put debug statements directly inside macros.py, and it can be easily implemented without inside_out (which never really worked anyway) by exposing the _expand_ast function to the macros, allowing them to manually expand any macros in their subtree before unparse_ast-ing the subtree for printing.
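Assuming Python 3.9+ for ast.unparse, a sketch of the "expand, then print" idea; the Expand transformer here just fakes what expanding an f%_+1 quick-lambda might produce:

```python
import ast

source = "x = list(map(f, [1, 2, 3]))"

class Expand(ast.NodeTransformer):
    # Stand-in for real macro expansion: replace the name f with the
    # lambda that expanding f%_+1 might generate.
    def visit_Name(self, node):
        if node.id == "f":
            return ast.parse("lambda sym0: sym0 + 1", mode="eval").body
        return node

expanded = ast.fix_missing_locations(Expand().visit(ast.parse(source)))
print(ast.unparse(expanded))  # the code as it will actually run
```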
Hi, I think an interesting example of macropy would be to make self implicit in objects:
http://code.activestate.com/recipes/362305-making-self-implicit-in-objects/
How would one implement this in macropy?
Could this be included in the examples?
Would macropy enable a way to avoid having to subclass LocalsFromInstanceVars (see url above) in order to make it work?
Could it be implemented similarly to quick_lambda, except that any @s within the wrapped method become an implicit self reference? Thus similar to CoffeeScript / Ruby?
related post:
http://www.artima.com/weblogs/viewpost.jsp?thread=239003
Currently, gen_sym simply replaces names with incrementing sym0, sym1, sym2, etc., which makes code kinda hard to read:

return sym12(sym2, (sym13[0] - 1), (lambda: func()))

It would be great if gen_sym was updated to take a suggested name as a parameter. It could then try to use the suggested name before generating a unique one. Furthermore, it could make the unique name look more similar to the suggested name. This would be far superior to seeing symX all over the place.
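One possible shape for such an API; make_gen_sym and the name_1 numbering scheme are suggestions, not the existing implementation:

```python
import itertools

def make_gen_sym(used_names):
    """Return a gen_sym that tries a suggested name first, then derives
    readable variants (name_1, name_2, ...) instead of bare symN."""
    used = set(used_names)
    def gen_sym(hint="sym"):
        if hint not in used:
            used.add(hint)
            return hint
        for i in itertools.count(1):
            candidate = "%s_%d" % (hint, i)
            if candidate not in used:
                used.add(candidate)
                return candidate
    return gen_sym

gen_sym = make_gen_sym({"func", "total"})
assert gen_sym("total") == "total_1"   # collision: derive a readable variant
assert gen_sym("accum") == "accum"     # free: use the suggestion as-is
assert gen_sym("accum") == "accum_1"
```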
It can get confused if a tail-recursive function makes a non-tail call to another tail-recursive function.
@finiteloop mentioned in #43 that macropy slows stuff down a bunch. It would be nice if we had a benchmark suite that we could use to see how fast MacroPy is in a few cases (e.g. files full of logs, files full of requires). Only then would we be able to properly measure what effect (if any) our changes are having on performance, and therefore make systematic improvements in that direction.
We should have a folder full of runnable files which do the same thing as some of the examples in the readme. This will make it easier for people to play with.
with fork as x:
    ... do some expensive operations ...
    result

with fork as y:
    ... do more expensive operations ...
    result

res_x, res_y = join(x, y)
do_stuff(res_x, res_y)
Although it is possible to grab the bytecode of a function and send it to a separate process to be run in parallel, macros would let us parallelize tasks on a sub-function level. Exactly how they are run (process pool, thread pool, remote machines) is completely abstracted away. Another syntax that looks nicer but only works for expressions may be:
# these two get run in parallel
x, y = fork_join%(
    ...some expensive computation...,
    ...another expensive computation...
)
do_stuff(x, y)
Macros could also give us a concise way of emulating "lexical scoping", by delimiting which parts of the forked code are to be evaluated locally:
with forked as x:
    temp1 = do_expensive_op_with(u%value1)
    temp2 = do_expensive_op_with(u%value2)
    ...
    result
In this case, the u% syntax indicates that value1 and value2 are to be evaluated in the local scope and their values sent over to the forked task, rather than sending over the names value1 or value2, which may not exist in the scope where the forked task is executing. There are probably a bunch of other clever things you could do with macros, along with a bunch of pitfalls to do with concurrency/parallelism, neither of which has been thought through.
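For comparison, here is how the fork/join semantics could map onto the stdlib's concurrent.futures, with the u% values evaluated eagerly in the local scope and passed in by value:

```python
from concurrent.futures import ThreadPoolExecutor

value1, value2 = 10, 20

def expensive(v):
    return v * v

with ThreadPoolExecutor() as pool:
    # What `with fork as x:` could expand to: the block body becomes a
    # function submitted to a pool.  The u% values are evaluated *here*,
    # in the local scope, and passed in, rather than having the forked
    # task look the names up in whatever scope it runs in.
    x = pool.submit(expensive, value1)
    y = pool.submit(expensive, value2)
    res_x, res_y = x.result(), y.result()  # join(x, y)

assert (res_x, res_y) == (100, 400)
```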
def handleFoo(foo):
    with matcher:
        if Foo(x, Bar(y)) << foo:
            ...
        if Bar(y) << foo:
            ...

matcher%(Foo(a, b) << thing)
return a + b
Some form of preprocessor script that could be run on files with macros to make them directly runnable by replacing all macro code with the actual generated code.
This would allow:
To take an example from the README:
point.py:

from macropy.case_classes import macros, case

@case
class Point(x, y): pass

CLI:

$ python expand_macros.py point.py

exp_point.py:

class Point(object):
    __slots__ = ['x', 'y']
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __str__(self):
        return "Point(" + str(self.x) + ", " + str(self.y) + ")"
    def __repr__(self):
        return self.__str__()
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y
    def __ne__(self, other):
        return not self.__eq__(other)
    def __iter__(self):
        yield self.x
        yield self.y
I for one would love to be able to use macros like this to generate boilerplate code but still maintain my scripts as directly runnable and easily distributable.
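The core of such a script can be as small as parse/transform/unparse (Python 3.9+ for ast.unparse; the AddSlots transformer below is a toy stand-in for real macro expansion):

```python
import ast

def expand_source(source, transformer):
    """Core of a hypothetical expand_macros.py: parse a source file's
    text, run an AST transformer standing in for macro expansion, and
    unparse the result back to plain Python."""
    tree = ast.fix_missing_locations(transformer.visit(ast.parse(source)))
    return ast.unparse(tree)  # Python 3.9+

class AddSlots(ast.NodeTransformer):
    # Toy "macro": give every class a __slots__ = [] as its first statement.
    def visit_ClassDef(self, node):
        slots = ast.parse("__slots__ = []").body[0]
        node.body.insert(0, slots)
        return node

print(expand_source("class Point:\n    pass", AddSlots()))
```

A full expand_macros.py would just loop over the given files and write out an exp_*.py next to each one.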
A lot of our transformers could use pattern matching. It would make our code nicer.
Not sure if I installed it correctly, but from trying to follow one of the examples I'm getting:
In [1]: from macropy.macros2.tracing import trace
In [2]: trace
In [3]: trace is None
Out[3]: True
https://github.com/lihaoyi/macropy#gen_sym
gen_sym is a function which produces a new identifier (as a string) every time it is called. This is guaranteed to produce an identifier that does not appear anywhere in the original source code and has not been produced by an earlier call to gen_sym.
Pattern matching seems to be broken since 774461c, where most of the tests were also removed.
For example, the following code (from the readme) prints None on the current revision:
from macropy.macros.pattern import macros, patterns
from macropy.macros.adt import macros, case

@case
class Rect(p1, p2): pass

@case
class Line(p1, p2): pass

@case
class Point(x, y): pass

def area(rect):
    with patterns:
        Rect(Point(x1, y1), Point(x2, y2)) << rect
        return (x2 - x1) * (y2 - y1)

print area(Rect(Point(1, 1), Point(3, 3))) # 4
The Python code created (by unparse_ast) is the following. Note the area function.
import inspect
from ast import *
from macropy.core import util
from macropy.core.macros import *
from macropy.core.lift import *
from macropy.core.lift import *
macros = Macros()

class PatternMatchException(Exception):
    '\n    Thrown when a nonrefutable pattern match fails\n    '
    pass

# snip, edited for brevity

@link_children
class Point(CaseClass):
    def __init__(self, *args, **kwargs):
        CaseClass.__init__(self, *args, **kwargs)
        pass
    _children = []
    _fields = ['x', 'y']

def area(rect):
    None

print area(Rect(Point(1, 1), Point(3, 3)))
It should be possible to make some rudimentary static analysis tools using Walkers, e.g. walking an AST while maintaining a dictionary of name bindings in the ctx. This doesn't seem too hard (est. 100 LOC or so), and would serve as a good alternative to pulling in the huge, undocumented mass of astng just to do some simple analyses.
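A crude stand-in using ast.NodeVisitor shows the flavor; it has no real scoping rules, which is where most of those 100 LOC would go:

```python
import ast

class BindingWalker(ast.NodeVisitor):
    """Walk an AST, recording which names get bound and which loads
    refer to names never bound anywhere in the walk (a crude stand-in
    for carrying a binding dict in a Walker's ctx)."""
    def __init__(self):
        self.bound = set()
        self.unbound_uses = set()
    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store):
            self.bound.add(node.id)
        elif isinstance(node.ctx, ast.Load) and node.id not in self.bound:
            self.unbound_uses.add(node.id)
    def visit_FunctionDef(self, node):
        self.bound.add(node.name)
        self.bound.update(a.arg for a in node.args.args)
        self.generic_visit(node)

w = BindingWalker()
w.visit(ast.parse("def f(a):\n    b = a + c\n    return b"))
assert w.unbound_uses == {"c"}
```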
Stuff this would be used for: the >> operator in MacroPEG; defining a value in one hq and referencing it in the same hq, without the hq macro trying to pull it in from the local scope.

There should be a way of loading a bunch of modules, expanding all the macros inside them, and saving the expanded source files. This would allow library-writers to utilize MacroPy as a dev/build/test-time dependency without having it as a run-time dependency, making it more likely that people will start using MacroPy incrementally.
Everything up to the point of hygienic quasiquotes should be extremely simple: just unparse the AST and save it somewhere. Hygienic quasiquotes maintain a registry of lexically-captured identifiers with all sorts of things inside (values, classes, packages, functions, methods, closures) to simulate lexical closure within the quasiquote block. These will need to be pickled and saved during pre-compilation, such that they can be accurately re-constituted during the macroless import process.
That way we can validate that macropy works on different versions of Python at the same time and ensure that nothing is broken.
What about Travis-CI?
If you'll enable it, I'll set it up.
The original decision to use macro%... was made in part because it is easier to spot % operations than function calls. Although the macro%... syntax still benefits macros such as string interpolation (s%"..."), it seems the vast bulk of our macros would look much better using the macro(...) or macro[...] syntax instead. This is partially because of the high precedence of the % operator, which turns many macro%... calls into macro%(...) calls anyway:
sql%(x.size for x in db.countries)
require%(value == 10)
trace%(1 + 2 + 3)
f%(_ + _)
q%(1 + 2)
with q as code:
    x = 10
code[0].targets[0].id = n

with match:
    Point(x, y) << my_point

s%"i am {cow}"
p%"<h1>Hello World</h1>"

with peg:
    op = "+" | "-" | "*" | "/"
vs
sql(x.size for x in db.countries)
require(value == 10)
trace(1 + 2 + 3)
f(_ + _)
q(1 + 2)
with q as code:
    x = 10
code[0].targets[0].id = n

with match:
    Point(x, y) << my_point

s("i am {cow}")
p("<h1>Hello World</h1>")

with peg:
    op = "+" | "-" | "*" | "/"
vs
sql[(x.size for x in db.countries)]
require[value == 10]
trace[1 + 2 + 3]
f[_ + _]
q[1 + 2]
with q as code:
    name[n] = 10

match[Point(x, y)] = my_point

s["i am {cow}"]
p["<h1>Hello World</h1>"]

peg[op] = "+" | "-" | "*" | "/"
In addition, using macro[...] means you can place macros on the left hand side of an assignment, which allows for a nice-looking shorthand match and peg syntax, rather than having to use with ...: block macros when you really only need a single statement.
Due to the fact that you can now rename macros when importing them, the downside of a name collision is much less severe: just rename the macro using ...import macros, sql as macro_sql or similar.
We could support all of these forms (macros.pct_expr, macros.paren_expr, macros.square_expr, decorators) to allow macros to more closely follow the syntax of similar operations (e.g. functions).

I have no idea how hard this will be, but MacroPy does not have many dependencies on the underlying runtime except for PEP 302 and the ast library. Some of the ASTs look slightly different (e.g. functions can't have nested parameter lists, and there is no print statement), and we may have to remove any print statements from the implementation code, but I don't imagine it will be very difficult or require big changes.
Currently this:
from my_module import macros, ...
just imports everything from that module and activates all the macros, regardless of what's in the '...'. This should be changed to
from my_module import macros, mac_a, mac_b
where only the macros which are explicitly listed get activated, while allowing for aliases:
from my_module import macros, long_macro_name as mac
I installed macropy from git (git clone, python setup.py install).
I go to a python shell and try the case classes example.
Python 2.7.4 (default, Apr 19 2013, 18:32:33)
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from macropy.macros.adt import macros, case
0=[]=====> MacroPy Enabled <=====[]=0
>>> @case
... class Point(x, y): pass
Traceback (most recent call last):
File "<console>", line 1, in <module>
NameError: name 'case' is not defined
I'm probably missing something obvious :-)
Macros usually contain a bunch of functions/classes needed to support their execution at run-time, e.g.:
# tracing.py
__all__ = ['wrap', 'wrap_simple', 'handle', 'log']
# case_classes.py
__all__ = ['CaseClass', 'singleton']
Except for log, which is meant to be overridden, the rest of these are meant to be invisible to the programmer; naming your own variable wrap or singleton in a macro-using module shouldn't cause things to explode, though it currently does.
A possible solution is to rename these auxiliary functions on import using import ... as ..., and to inject a name_mapping argument into the macros such that, at expansion-time, they can properly insert the renamed identifiers into the ASTs (e.g. using name(...) inside quasiquotes). This could be improved, possibly by making the quasiquote macro smart enough to grab the name_mapping from the enclosing scope and perform this renaming automatically.
This should also include instructions on setting up a development environment, running the tests, and coding style.
Macro expansion failures (uncaught exceptions during expansion) currently cause the macropy import hooks to abort.
Although this prints out the stack trace for the cause-of-failure (which pinpoints the location within the macro where the exception originated), it also results in
an error that cannot be caught with ordinary try/except or with blocks, which is annoying (e.g. you can't use self.assertRaises to unit test the error condition). An alternate approach would be to cause a failing macro-expansion to leave an exception-raising stub of code in-place. This could wrap the original exception (so no information is lost, and you have the original stack trace) but would allow us to assertRaises.
In general, it would make macros behave more like normal methods/decorators, failing gracefully in the normal way rather than blowing up violently. Developing (and debugging) macros would be much more pleasant, and mis-used macros (e.g. wrapping invalid code) could fail in a way that wouldn't frighten the end user away from the keyboard.
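A sketch of the stub idea: compile a tiny module that re-raises the stored expansion error at run time (raising_stub and _macro_expansion_error are hypothetical names, not MacroPy internals):

```python
import ast

def raising_stub(exc):
    """Compile a stand-in module that re-raises the stored expansion
    error when the 'imported' module actually runs."""
    tree = ast.parse("raise _macro_expansion_error")
    code = compile(tree, "<expansion-failure>", "exec")
    def run():
        exec(code, {"_macro_expansion_error": exc})
    return run

# Simulate a macro blowing up during expansion:
try:
    raise ValueError("macro exploded during expansion")
except ValueError as err:
    stub = raising_stub(err)

# The failure is now an ordinary, catchable exception, so
# assertRaises-style testing works:
try:
    stub()
    raised = False
except ValueError as err:
    raised = True
    assert "macro exploded" in str(err)
assert raised
```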
So, I've been working with some Python code translated directly from Java. It works quite well; however, it uses lots of small objects, and would benefit greatly from some liberal usage of slots in many of the classes. Would MacroPy be able to automatically generate slots for classes based on initialization arguments and class variables?
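The analysis half, at least, looks doable with a plain AST scan; here is a sketch that collects self.<name> assignments from a class's source (infer_slots is a hypothetical helper, and a real macro would work on the class's AST at expansion time):

```python
import ast

def infer_slots(source):
    """Scan a class body's __init__ (or any method) for self.<name>
    assignments and return the slot names a slots-generating macro
    could emit as __slots__."""
    slots = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Attribute)
                        and isinstance(target.value, ast.Name)
                        and target.value.id == "self"
                        and target.attr not in slots):
                    slots.append(target.attr)
    return slots

source = """
class Vec:
    def __init__(self, x, y):
        self.x = x
        self.y = y
"""
assert infer_slots(source) == ["x", "y"]
```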
Macro Lenses could allow really nice nested updates of immutable hierarchical structures. Like:
# tree for the source code (~1 + (2 + 3))
x = BinOp(
    left=UnaryOp(
        op=Invert(),
        operand=Num(n=1)
    ),
    op=Add(),
    right=BinOp(
        left=Num(n=2),
        op=Add(),
        right=Num(n=3)
    )
)
# converting it to the tree for (~3 + (2 + 3))
# without lenses
x1 = x.copy(left = x.left.copy(operand = x.left.operand.copy(n = 3)))
# with lenses
x1 = lens%x.left.operand.n.set(3)
# changing (~1 + (2 + 3)) to (~2 + (2 + 3)), by adding one to the previous value of the left item
# without lenses
x2 = x.copy(left = x.left.copy(operand = x.left.operand.copy(n = x.left.operand.n + 1)))
# with lenses
x2 = lens%x.left.operand.n.set(_ + 1)
# setting the value on the left to the sum of all values, giving (~6 + (2 + 3))
# without lenses
x3 = x.copy(left = x.left.copy(operand = x.left.operand.copy(n = x.left.operand.n + x.right.right.n + x.right.left.n)))
# with lenses
x3 = lens%x.left.operand.n.set(_ + _._.right.left.n + _._.right.right.n)
I think the last one is properly called a Zipper, but I'm not familiar enough with that branch of FP to say for sure.
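A macro-free sketch of the same idea using dataclasses.replace, to show roughly what lens% would have to generate (set_in and the simplified node classes are illustrative):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Num:
    n: int

@dataclass(frozen=True)
class BinOp:
    left: object
    op: str
    right: object

def set_in(obj, path, fn):
    # Rebuild an immutable structure, replacing the field at `path`
    # with fn(old_value); everything off the path is shared, not copied.
    if not path:
        return fn(obj)
    head, rest = path[0], path[1:]
    return replace(obj, **{head: set_in(getattr(obj, head), rest, fn)})

x = BinOp(Num(1), "+", BinOp(Num(2), "+", Num(3)))
x1 = set_in(x, ["left", "n"], lambda _: 3)         # like lens%x.left.n.set(3)
x2 = set_in(x, ["left", "n"], lambda old: old + 1) # like lens%x.left.n.set(_ + 1)
assert (x1.left.n, x2.left.n, x.left.n) == (3, 2, 1)
```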
Generators can handle this particular example, but it would be nice if we could use this style of programming:
@closures
def makeCounter():
    x = 0
    def foo():
        x += 1
        return x
    return foo
which would be transformed into
def makeCounter():
    x = [0]
    def foo():
        x[0] += 1
        return x[0]
    return foo
expand_if_in_registry calls itself recursively when a macro is used with arguments, but it doesn't preserve **kwargs. This means that you cannot use both args and target at the same time (i.e. with mymacro(x) as y:).
monadic%[a + b for a in Just(x) for b in Nothing()] # --> Nothing
monadic%[a + b for a in Just(x) for b in Just(y)] # --> Just(x + y)
The tests under core/test/exporters/__init__.py fail for me under python2.7.
======================================================================
ERROR: test_pyc_exporter (macropy.core.test.exporters.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/macobo/projects/macropy/macropy/core/test/exporters/__init__.py", line 33, in test_pyc_exporter
    f = open(__file__ + "/../pyc_cache.py", "a")
IOError: [Errno 20] Not a directory: '/home/macobo/projects/macropy/macropy/core/test/exporters/__init__.pyc/../pyc_cache.py'
======================================================================
ERROR: test_save_exporter (macropy.core.test.exporters.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/macobo/projects/macropy/macropy/core/test/exporters/__init__.py", line 53, in test_save_exporter
    macropy.exporter = SaveExporter(__file__ + "/../exported", __file__ + "/..")
  File "/home/macobo/projects/macropy/macropy/core/exporters.py", line 20, in __init__
    shutil.copytree(root, directory)
  File "/usr/lib/python2.7/shutil.py", line 171, in copytree
    names = os.listdir(src)
OSError: [Errno 20] Not a directory: '/home/macobo/projects/macropy/macropy/core/test/exporters/__init__.pyc/..'
----------------------------------------------------------------------
test_save_exporter can be fixed by properly using the os.path module to get the parent directory path. This is outlined in the following commit: macobo@543bc63
Even after that, test_pyc_exporter fails for me with an AssertionError on the last line.
======================================================================
FAIL: test_pyc_exporter (macropy.core.test.exporters.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/macobo/projects/macropy/macropy/core/test/exporters/__init__.py", line 52, in test_pyc_exporter
    assert (pyc_cache_count, pyc_cache_macro_count) == (8, 5), (pyc_cache_count, pyc_cache_macro_count)
AssertionError: (8, 4)
----------------------------------------------------------------------
Would be nice to be able to do stuff like this:
#python_file.py
js_string = js%(lambda x: x > 10)
print js_string
# function(x){ return x > 10 }
Where, for small snippets of javascript code (e.g. form validation) we could write it once in python and cross-compile it into javascript. This would be the holy grail of form validation: logic performed on client and server, but written only once. Of the alternatives, https://github.com/jabapyth/PJs seems like the most promising.
MacroPy used to run on PyPy, but PyPy's Python AST compiler doesn't like MacroPy's ASTs. This was not a problem when we just unparsed them, but now that we are compiling them whole (to try and preserve line numbers) it's causing PyPy to die.
Would be great if it could be fixed, because apart from that, everything works on PyPy.
The default Python AST objects are clunky and annoying: they have no useful __str__s or __repr__s, and in general they are a pain to use. It would be great if we could have a different AST representation, either using astng or one made ourselves using case classes.