modern-fortran / neural-fortran
A parallel framework for deep learning
License: MIT License
In support of #64: add a reshape layer that will allow chaining layers of different shape, e.g. 1-d input to 2-d convolutional.
Reference: Keras Reshape layer
Currently, ifort uses a repeatable random seed by default:
https://fortran-lang.discourse.group/t/mnist-problem-finding-file/3464/17
Use random_init (where supported) for more consistent behavior.
Also proposed in #57.
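The Fortran 2018 random_init intrinsic makes the choice explicit; a minimal sketch (compiler support varies, gfortran has it since GCC 9):

```fortran
program seed_demo
  ! Opt out of a repeatable default seed using the Fortran 2018
  ! random_init intrinsic (where the compiler supports it).
  implicit none
  real :: r(3)
  call random_init(repeatable=.false., image_distinct=.true.)
  call random_number(r)
  print *, r   ! different values on each run
end program seed_demo
```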
Currently, the forward and backward passes are defined on the network type instead of the layer type. This prevents mixing different layer types within one network. Define them on the layer type instead to enable more general network architectures.
An example of this has been implemented downstream in FKB.
Read Keras models from h5 files.
Needs #42.
Implement the reader for Keras's new default model output format.
In support of convolutional networks; a 2-d maxpool layer will suffice.
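The core operation is simple; here is a self-contained sketch of a 2-d max pooling pass with a 2x2 window and stride 2 (illustrative only, not neural-fortran's maxpool2d API):

```fortran
program maxpool_demo
  ! 2-d max pooling, 2x2 window, stride 2, on a 4x4 input.
  implicit none
  real :: x(4,4), y(2,2)
  integer :: i, j
  ! Fill the input column-by-column with 1..16.
  x = reshape([ (real(i), i = 1, 16) ], [4, 4])
  do j = 1, 2
    do i = 1, 2
      ! Each output element is the maximum over one 2x2 block.
      y(i,j) = maxval(x(2*i-1:2*i, 2*j-1:2*j))
    end do
  end do
  print *, y   ! 6 8 14 16
end program maxpool_demo
```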
Do additional parameters need to be set after loading a network? In test_network_save.f90, weights and biases are only shown to be equal. Could you provide an example that loads a network and then evaluates its accuracy?
I'm able to load previously saved networks. However, when attempting to train them or evaluate their accuracy, I encounter segfaults.
program example_simple
  use mod_network, only: network_type
  implicit none
  type(network_type) :: net, net2
  real, allocatable :: input(:), output(:)
  integer :: i
  net = network_type([3, 5, 2])
  ! Saving network
  call net % save('my_simple_net.txt')
  ! Loading network
  call net2 % load('my_simple_net.txt')
  input = [0.2, 0.4, 0.6]
  output = [0.123456, 0.246802]
  do i = 1, 500
    ! Segfault training net2
    call net2 % train(input, output, eta=1.0)
    print *, 'Iteration: ', i, 'Output:', net2 % output(input)
  end do
end program example_simple
This issue only serves as a notification that the default branch changed its name from "master" to "main". I don't think GitHub issues these notifications.
Needs #42.
In the constructors for array1d and array2d, the dummy arguments are declared as integer, but isn't it better to declare them as integer(ik)? (Because db_init() and dw_init() pass integer(ik) as actual arguments.)
pure type(array1d) function array1d_constructor(length) result(a)
  ! Overloads the default type constructor.
  integer, intent(in) :: length
  allocate(a % array(length))
  a % array = 0
end function array1d_constructor

pure type(array2d) function array2d_constructor(dims) result(a)
  ! Overloads the default type constructor.
  integer, intent(in) :: dims(2)
  allocate(a % array(dims(1), dims(2)))
  a % array = 0
end function array2d_constructor
Related lines:
https://github.com/modern-fortran/neural-fortran/blob/master/src/lib/mod_layer.f90#L63
https://github.com/modern-fortran/neural-fortran/blob/master/src/lib/mod_layer.f90#L70
https://github.com/modern-fortran/neural-fortran/blob/master/src/lib/mod_layer.f90#L83
https://github.com/modern-fortran/neural-fortran/blob/master/src/lib/mod_layer.f90#L96
Which will allow chaining conv2d and maxpool2d layers with a dense layer.
This issue tracks the progress of support for convolutional layers. Ideally I'd be using GH Projects for this, but Projects live at the Org level; I don't want to pollute the modern-fortran Org with it, so when we transfer to the github.com/neural-fortran Org, I will set up some projects.
Hi Milan, I have added neural-fortran to this popular list of machine learning packages on GitHub and have requested a pull. Hope you do not mind. To do so, I created a new section on Fortran and listed neural-fortran along with ParaMonte as the two Fortran machine learning packages that I know about at the moment. Hopefully, this will lead to increased visibility of neural-fortran and the Fortran language in general.
Add other cost functions beyond Mean Squared Error.
Could be implemented in the same way as the activation function selector, for example:
net = network_type([3, 5, 2], activation='sigmoid', cost='cross-entropy')
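A cross-entropy cost could then sit next to the existing mean squared error, chosen by that keyword. A sketch of the two cost functions (the names mse and cross_entropy here are illustrative, not neural-fortran's API):

```fortran
program cost_demo
  ! Compare mean squared error and binary cross-entropy on a tiny example.
  implicit none
  real :: y_true(2), y_pred(2)
  y_true = [1.0, 0.0]
  y_pred = [0.8, 0.2]
  print *, mse(y_true, y_pred)            ! ~0.04
  print *, cross_entropy(y_true, y_pred)  ! ~0.446
contains
  pure real function mse(y, yhat)
    real, intent(in) :: y(:), yhat(:)
    mse = sum((y - yhat)**2) / size(y)
  end function mse
  pure real function cross_entropy(y, yhat)
    ! Binary cross-entropy; assumes 0 < yhat < 1 elementwise.
    real, intent(in) :: y(:), yhat(:)
    cross_entropy = -sum(y * log(yhat) + (1 - y) * log(1 - yhat))
  end function cross_entropy
end program cost_demo
```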
How to parallelize matmul beyond OpenMP directives in MKL?
Would it be possible to use sub-modules to provide different backends to the neural-fortran interface?
What I have in mind are things like:
While I admire the effort to build a pure Fortran NN library, the amount of effort (and money) being put into these other libraries is simply enormous. Perhaps this way disciplines traditionally reliant upon Fortran (meteorology, quantum chemistry, ...) could also benefit from the numerous existing machine learning frameworks containing all kinds of advanced graph and runtime optimizations.
The way I see this working is that we need to define (and possibly expand) the high-level interface for creating and training NNs. Then the non-Fortran implementations (effectively just adaptors to other frameworks) can be placed in submodules which could be switched on by a CMake flag.
% fpm test --compiler caf --flag "-cpp -DCAF -O3 -ffast-math"
T T T T T T T T T T T T T T T T T T T T
All tests passed.
Initializing 2 networks with random weights and biases
Save network 1 into file
Load network 2 from file
Layer 1 , weights equal: T , biases equal: T
Layer 2 , weights equal: T , biases equal: T
Layer 3 , weights equal: T , biases equal: T
Setting different activation functions for each layer of network 1
Save network 1 into file
Load network 2 from file
Layer 1 , activation functions equal: T (network 1: sigmoid, network 2: sigmoid)
Layer 2 , activation functions equal: T (network 1: tanh, network 2: tanh)
Layer 3 , activation functions equal: T (network 1: gaussian, network 2: gaussian)
Reading MNIST data..
At line 36 of file ././src/mod_io.f90
Fortran runtime error: Cannot open file '../data/mnist/mnist_training_images.dat': No such file or directory
Error termination. Backtrace:
#0 0x1090486ee
#1 0x109049395
#2 0x109049f7b
#3 0x109267638
#4 0x10926791c
#5 0x108320931
#6 0x108320ff2
#7 0x10831f29a
#8 0x1083317fb
1 -0.152883232 2.73143500E-02 0.131976783 -0.170446441 -0.293200493 3.38315852E-02 0.328212768 -0.113486469 -0.259437263 -0.248188660 0.214462832 0.278976798 -5.90201877E-02 -0.289550871 -0.366032451
<ERROR> Execution failed for object " test_mnist "
<ERROR>*cmd_run*:stopping due to failed executions
STOP 1
I'd like to use some of the core library source files within a climate model code that I can only compile with Intel (not GNU) compilers, due to rampant violations of GNU conventions elsewhere in its [substantial] code that would take ages to clean up.
As far as I can tell, unless I use GNU I cannot access the co_sum intrinsic for coarrays, which mod_network.F90 and mod_layer.F90 assume to be available. Does anyone know a workaround? Maybe an equivalent functionality that is Intel-safe?
Thanks in advance!
Hi @milancurcic !
Really liking the style of the book.
I was wondering if you are going to include more examples with stock data.
Maybe a neural network forecast and backtest?
Many thanks,
Best,
Andrew
@milancurcic What's the purpose of the following code in mod_mnist.f90?
#ifdef CAF
call co_sum(db(n) % array)
#endif
I'm unclear on whether it's designed to support serial builds (in which case it seems unnecessary) or designed to support compilers that don't support co_sum, in which case I'm curious about which compilers and the motivation for supporting them. Intel, Cray, NAG, and GNU all support co_sum.
Based on the principle of least surprise, I recommend making standard-conforming behavior the default and using a macro to turn off standard behavior, rather than requiring a macro to turn it on.
I also recommend wrapping the entire subroutine in the #ifdef block and similarly wrapping any references to it. Otherwise, someone reading call db_co_sum(...) elsewhere is going to be surprised when they realize they have to pass a special flag to get the procedure to do what its name implies. This is one of the subtleties I was mentioning regarding the conceptual benefit of separating procedure definitions from the corresponding interface bodies. Doing so discourages hiding gotchas in the procedure definition.
It's just a matter of adding appropriate compiler flag in CMakeLists.txt.
When I build and run example_mnist, it appears to do all the training successfully, but does not appear to save or write its results anywhere. When I build and run test_mnist, it just reads the test results from an old .dat file which comes with the git download. It would be useful for example_mnist to recreate this test .dat file so that the user can compare it to the downloaded version to make sure all is working correctly.
Class(network_type) has a useful save method and corresponding load method, but to inspect the parameters of a neural network it is helpful if the fields of class(network_type) are labelled. I have coded a simple display method that can be added to file mod_network.f90.
subroutine display(self, outu)
  ! Prints the network with descriptive labels.
  use iso_fortran_env, only: output_unit
  class(network_type), intent(in) :: self
  integer, intent(in), optional :: outu
  integer :: outu_, n
  if (present(outu)) then
    outu_ = outu
  else
    outu_ = output_unit
  end if
  write(outu_, fmt="('#layers = ',i0)") size(self % dims)
  write(outu_, fmt="(/,2a10,a20)") "layer", "#neurons", "activation"
  do n = 1, size(self % dims)
    write(outu_, fmt="(2i10,a20)") n, self % dims(n), self % layers(n) % activation_str
  end do
  write(outu_, fmt="(/,'biases:')")
  do n = 2, size(self % dims)
    write(outu_, fmt="(i4,10000f12.6)") n, self % layers(n) % b
  end do
  write(outu_, fmt="(/,'weights:')")
  do n = 1, size(self % dims) - 1
    write(outu_, fmt="(i4,10000f12.6)") n, self % layers(n) % w
  end do
end subroutine display
A neural net is saved to file with the save method as
3
1 2 1
1 gaussian
2 gaussian
3 gaussian
-1.16021264 -1.54174113
-2.42969489
-1.59757996 -0.921771944
0.510428071 0.936606824
With the display method it is printed as
#layers = 3
layer #neurons activation
1 1 gaussian
2 2 gaussian
3 1 gaussian
biases:
2 -1.160213 -1.541741
3 -2.429695
weights:
1 -1.597580 -0.921772
2 0.510428 0.936607
Start with gfortran and -DSERIAL; build OpenCoarrays later if at all possible.
For example Keras + Tensorflow.
We should check for the presence of activation in network_type % set_activation() before referencing it in the select case statement. While not checking for it seems to work with both GNU and Intel Fortran compilers, it's not allowed by the standard.
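A guarded version might look like the sketch below; the default value and the helper name are assumptions for illustration, not the library's actual defaults:

```fortran
program optional_demo
  implicit none
  print '(a)', pick('tanh')   ! prints: tanh
  print '(a)', pick()         ! prints: sigmoid
contains
  pure function pick(activation) result(name)
    ! Guard the optional dummy with present() before using it;
    ! referencing an absent optional argument is non-conforming.
    character(*), intent(in), optional :: activation
    character(:), allocatable :: name
    character(:), allocatable :: activation_
    if (present(activation)) then
      activation_ = activation
    else
      activation_ = 'sigmoid'  ! assumed default
    end if
    select case (activation_)
    case ('tanh')
      name = 'tanh'
    case default
      name = 'sigmoid'
    end select
  end function pick
end program optional_demo
```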
I just mistakenly committed to my local main branch and then pushed my commit to this repository's main branch. To recover from this, I then reset my local main branch to the immediately prior commit and force-pushed to this repository's main branch. I recommend protecting the main branch of the repository so that no one can push directly to the main branch. I don't think I have sufficient privileges to protect the main branch myself.
I'll submit a pull request to merge a new branch into main to make the changes that I was trying to suggest when I mistakenly pushed to main.
Hi Milan,
I tried to run the example_mnist program for a network of dimensions [784, 200, 10], with a batch size of 200, 30 epochs, and a learning rate of 3; I ran this on Mac OS X 10.14.6 using the gfortran compiler (6.3.0) in serial mode.
After a couple of epochs I get a system warning that my disk is full; using the activity monitor I noticed that the memory usage of the program keeps going up, eventually causing the OS to start swapping to disk until the disk is full... So I think there must be a memory leak somewhere. The total size of the input arrays and the layer arrays is only about 180 MB for this case, so it is surprising to see tens of gigabytes allocated at runtime...
I think that the repeated calls to db_init and dw_init keep allocating new memory over and over. I modified the train_batch routine as shown below and that appears to fix the memory leak... the overall memory usage now hovers around 200 MB, which is as expected.
It might be possible to achieve the same with a class clean up procedure (final procedure), but I haven't tried that yet. Let me know if you want me to do a pull request.
Thanks!
Marc.
subroutine train_batch(self, x, y, eta)
  ! Trains a network using input data x and output data y,
  ! and learning rate eta. The learning rate is normalized
  ! with the size of the data batch.
  class(network_type), intent(in out) :: self
  real(rk), intent(in) :: x(:,:), y(:,:), eta
  type(array1d), allocatable :: db(:), db_batch(:)
  type(array2d), allocatable :: dw(:), dw_batch(:)
  integer(ik) :: i, ii, im, n, nm
  integer(ik) :: is, ie, indices(2)

  im = size(x, dim=2)    ! mini-batch size
  nm = size(self % dims) ! number of layers

  ! get start and end index for mini-batch
  indices = tile_indices(im)
  is = indices(1)
  ie = indices(2)

  call db_init(db_batch, self % dims)
  call dw_init(dw_batch, self % dims)

  do concurrent(i = is:ie)
    call self % fwdprop(x(:,i))
    call self % backprop(y(:,i), dw, db)
    do concurrent(n = 1:nm)
      dw_batch(n) % array = dw_batch(n) % array + dw(n) % array
      db_batch(n) % array = db_batch(n) % array + db(n) % array
    end do
    !----
    ! new code to fix memory leak
    do ii = 1, nm
      deallocate(db(ii) % array, dw(ii) % array)
    end do
    deallocate(db, dw)
    !----
  end do

  if (num_images() > 1) then
    call dw_co_sum(dw_batch)
    call db_co_sum(db_batch)
  end if

  call self % update(dw_batch, db_batch, eta / im)

  !----
  ! new code to fix memory leak
  do i = 1, nm
    deallocate(db_batch(i) % array, dw_batch(i) % array)
  end do
  deallocate(db_batch, dw_batch)
  !----
end subroutine train_batch
I just uploaded mnist.tar.gz to GitHub file storage in a way that is persistent without being part of the git repository. I recommend against storing binary files in a repository. Over time, they increase download times, and because git can't do useful diffs on binary files, every change to the file means a completely new version must be stored. What's worse, git rm doesn't take any of the committed versions of the file out of the commit history, so the only way to fix the problem is to rewrite history, which essentially means that everyone who has a local copy of the repository will need to do a fresh clone; if having everyone do a fresh clone is not practical, then it's probably necessary to set up a new repository.
I recommend either using the execute_command_line() subroutine to download and uncompress the file from the above location at runtime if it's missing, or providing an installation script that fetches it.
On the OpenCoarrays project, I found that our downloads went up by a factor of 2-3 soon after I wrote an installation script. More recently, I settled on a much simpler approach to writing an installer for Caffeine that is an order of magnitude smaller than the OpenCoarrays installer, more robust, and much more maintainable. I'll be glad to adapt Caffeine's install.sh script to neural-fortran if you like. When someone can get a package built and tested by typing nothing more than ./install.sh, it saves a lot of time over reading build/test instructions, no matter how simple those instructions are.
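The runtime-download option could be sketched as follows; the URL is a placeholder (an assumption), not the actual storage location:

```fortran
program fetch_data
  ! Download the data archive at runtime if it is missing.
  ! The URL below is a placeholder, not the real release asset.
  implicit none
  character(*), parameter :: datafile = 'mnist.tar.gz'
  character(*), parameter :: url = 'https://example.com/' // datafile
  logical :: found
  integer :: stat
  inquire(file=datafile, exist=found)
  if (.not. found) then
    ! Fetch and unpack; exitstat reports the shell command's status.
    call execute_command_line('curl -fsSLO ' // url // ' && tar xzf ' // datafile, &
                              exitstat=stat)
    if (stat /= 0) print '(a)', 'Could not fetch ' // datafile
  end if
end program fetch_data
```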
Thanks for providing this nice package to implement Keras/TensorFlow models in Fortran!
I want to embed machine learning algorithms in an atmospheric model written in Fortran 90. This model is compiled with make using a Makefile, which I put in a Google document here: https://docs.google.com/document/d/10naj1WgE9P4qbILT3n85TosZCd1KISwSzGw2u5FIOwo/edit. How can I use your FKB or neural-fortran in my project? I suppose I should put the library .o files in my library path, or put the .mod files in the .mod directory of the atmospheric model? And after that, what should I do? How can I integrate your CMakeLists.txt with my Makefile?
By the way, could you please write a readme illustrating how to use your project within other Fortran projects? Thanks!
Implement the forward pass method for the conv2d layer.
In example_sine.f90 the response y = (sin(x * 2 * pi) + 1) * 0.5 takes on values between 0 and 1. How would the code be modified to predict an unbounded continuous variable? Thanks for neural-fortran.
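One common workaround, independent of any particular library: keep the sigmoid output but min-max scale the unbounded target into [0, 1] before training, then invert the mapping on the network output (alternatively, give the output layer a linear activation). A sketch of the scaling step:

```fortran
program scale_demo
  ! Min-max scale an unbounded target into [0,1] for training,
  ! then invert the mapping on the predicted values.
  implicit none
  real :: y(5), y_scaled(5), y_recovered(5), ymin, ymax
  y = [-3.0, -1.0, 0.0, 2.0, 7.0]
  ymin = minval(y)
  ymax = maxval(y)
  y_scaled = (y - ymin) / (ymax - ymin)          ! train the network on this
  y_recovered = y_scaled * (ymax - ymin) + ymin  ! invert after inference
  print *, maxval(abs(y_recovered - y))          ! ~0 (round-trip error)
end program scale_demo
```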
I'm using this package in a very performance-critical setting. I managed to make the inference quite a bit faster by making some changes (some of them custom to my model). Below is the inference for a "flat" feed-forward model which takes a 1-D array of inputs. I also experimented with inputting multiple samples at a time (2-D array), which replaces matrix-vector multiplications with matrix-matrix ones. This may be faster on some platforms/models (for me it was a wash), but in that case make sure to use SGEMM/DGEMM to replace the matmul call.
subroutine output_flatmodel_opt_sig(self, x, neurons, output)
  ! Use forward propagation to compute the output of the network.
  ! For computational efficiency, the following changes are implemented:
  ! 1) outputs are allocated outside of the function,
  ! 2) an explicit-shape intermediate array assumes the number of neurons
  !    is the same for all hidden layers,
  ! 3) activation functions are replaced with a subroutine that modifies
  !    its argument in place (sigmoid); activation of the final layer is
  !    removed (linear activation = redundant 1:1 copy),
  ! 4) matmul is replaced by a custom function which is often faster than
  !    matmul for matrix-vector multiplication,
  ! 5) weights have been pre-transposed in the load routine
  !    (class variable w_transposed).
  ! This procedure was much faster than the original when using
  ! gfortran -O3 -march=native or ifort -O3.
  ! For lower optimization levels the custom function (4) may be SLOWER.
  class(network_type), intent(in) :: self
  integer, intent(in) :: neurons
  real(wp), dimension(:), intent(in) :: x
  real(wp), dimension(:), intent(out) :: output
  ! Local variables
  real(wp), dimension(neurons) :: a
  integer :: n

  associate(layers => self % layers)
    ! FIRST HIDDEN LAYER
    a = matvecmul(layers(1) % w_transposed, x, neurons, size(x)) + layers(2) % b
    ! sigmoid activation: an "inout" subroutine avoids an array copy
    call sigmoid_subroutine(a)
    ! INTERMEDIATE LAYERS
    do n = 3, size(layers) - 1
      a = matvecmul(layers(n-1) % w_transposed, a, neurons, neurons) + layers(n) % b
      call sigmoid_subroutine(a)
    end do
    ! LAST LAYER (linear activation = do nothing, just add biases)
    output = matvecmul(layers(n-1) % w_transposed, a, size(output), neurons) + layers(n) % b
  end associate
end subroutine output_flatmodel_opt_sig

function matvecmul(matA, vecB, nrow, ncol)
  implicit none
  integer, intent(in) :: nrow, ncol
  real(wp), intent(in) :: matA(nrow,ncol)
  real(wp), intent(in) :: vecB(ncol)
  real(wp) :: matvecmul(nrow)
  integer :: j
  matvecmul = 0.0_wp
  do j = 1, ncol
    matvecmul = matvecmul + matA(:,j) * vecB(j)
  end do
end function matvecmul

subroutine sigmoid_subroutine(x)
  real(wp), dimension(:), intent(inout) :: x
  x = 1 / (1 + exp(-x))
end subroutine sigmoid_subroutine
Implement an HDF5 reader in support of Keras and PyTorch models.
scivision/h5fortran seems a good candidate for a high-level HDF5 API.
Following up on #12, net % save() should store the network metadata that is necessary for reproducing the output when loading from file using net % load().
At this time, the metadata are the activation functions used in each layer (all except input layer).
For example, this could take the format:
<number-of-layers>
<number-of-cells-per-each-layer>
<layer-number> <activation-function>
<layer-number> <activation-function>
...
<weights>
<biases>
In get_keras_h5_layers, this code:
call json % parse(model_config_json, model_config_string)
allocates the local model_config_json pointer, but it is not destroyed in this routine, so that's a memory leak. You should add a:
call json % destroy(model_config_json)
at the end to free the memory. The other json_value pointers all look like they are just pointing to data inside this structure, so you don't have to worry about those.
FYI: the higher-level json_file class is a little safer (but less flexible) to use, since it will destroy the pointer when it goes out of scope (it has a finalizer). What you have here is fine, though (you just need the destroy call at the end).
The activation function is currently set globally at the network level. It would be beneficial to allow a different activation function for some layers by implementing layer % set_activation(). This change will not be API-breaking.
After attempting to develop test_network_sync.f90 into a test that reports passing or failure, and seeking some advice from Tom Clune, who develops the pFUnit testing framework, I'm not confident there's a way to write a test that is both meaningful and straightforward. Without doing extensive research myself, my sense is that the formula employed in the randn functions has been thoroughly studied and there's not much to test in its behavior beyond what is already known. @milancurcic if you have thoughts on a useful test, please let me know. Otherwise, I recommend removing the test, given that it currently just prints the generated samples without any check on their statistical properties.
Hi, I wasn't able to run example_mnist; probably I don't have the right Fortran 2018-compatible compiler. Could you explain where to get a Fortran 2018-compatible compiler?
Sorry I am new in learning and using Fortran, could you show me how to get started with the setup and run the example?
Thank you.
Just found this project, seems interesting!
I would venture to guess that there is more demand for doing neural network inference in Fortran, rather than both training and inference. This may be outside the scope for this project, but being able to load neural networks trained in Keras/TensorFlow (in Python) would be an extremely useful feature, akin to https://github.com/Dobiasd/frugally-deep . Maybe I'll have a look at it myself if I get the time, should be fairly simple for feedforward nets using the supported activations.
Hi, I'm new to ML and I find your code very interesting and useful as I begin to learn how to program these networks. I have one question regarding the fwdprop method in the mod_network module: on line 116 you have:
layers(n) % z = matmul(transpose(layers(n-1) % w), layers(n-1) % a) + layers(n) % b
I'm a bit confused about why you are using the weights matrix from layer n-1 along with the biases from layer n to compute the linear output of layer n... shouldn't we be using the w matrix for layer n here? Maybe I'm misunderstanding how layers are organized, but any clarification you could provide would be very much appreciated!
Regards, Marc.
@milancurcic I keep feeling silly when I realize things that probably should have been obvious. It was very helpful that your comment on PR #71 pointed me to the nf module for the user API. I recommend using a directory structure first suggested to me by @everythingfunctional, which I've subsequently adopted in all libraries that I develop. For neural-fortran, the directory tree would look like the one below. I'll submit a new PR that organizes the source files this way.
% tree src
src
├── nf
│ ├── nf_activation.f90
│ ├── nf_base_layer.f90
│ ├── nf_base_layer_submodule.f90
│ ├── nf_conv2d_layer.f90
│ ├── nf_conv2d_layer_submodule.f90
│ ├── nf_datasets_mnist.f90
│ ├── nf_datasets_mnist_submodule.f90
│ ├── nf_dense_layer.f90
│ ├── nf_dense_layer_submodule.f90
│ ├── nf_input1d_layer.f90
│ ├── nf_input1d_layer_submodule.f90
│ ├── nf_input3d_layer.f90
│ ├── nf_input3d_layer_submodule.f90
│ ├── nf_io.f90
│ ├── nf_io_submodule.f90
│ ├── nf_layer.f90
│ ├── nf_layer_constructors.f90
│ ├── nf_layer_constructors_submodule.f90
│ ├── nf_layer_submodule.f90
│ ├── nf_loss.f90
│ ├── nf_loss_submodule.f90
│ ├── nf_maxpool2d_layer.f90
│ ├── nf_maxpool2d_layer_submodule.f90
│ ├── nf_network.f90
│ ├── nf_network_submodule.f90
│ ├── nf_optimizers.f90
│ ├── nf_parallel.f90
│ ├── nf_parallel_submodule.f90
│ ├── nf_random.f90
│ └── nf_random_submodule.f90
└── nf.f90
After unzipping data/mnist/mnist.tar.gz, checking out the main branch, and building with the above compiler, running the tests yields the error below, even though doing the same with gfortran 10.2.0 avoids this error.
$ fpm test --compiler ifort --flag "-cpp"
Project is up to date
Reading MNIST data..
forrtl: severe (36): attempt to access non-existent record, unit -129, file /storage/users/rouson/neural-fortran/data/mnist/mnist_training_images.dat
Image PC Routine Line Source
test_mnist 0000000000407428 Unknown Unknown Unknown
test_mnist 000000000042174E Unknown Unknown Unknown
test_mnist 000000000041F65B Unknown Unknown Unknown
test_mnist 00000000004049C9 Unknown Unknown Unknown
test_mnist 0000000000404059 Unknown Unknown Unknown
test_mnist 0000000000402E5B Unknown Unknown Unknown
test_mnist 0000000000402DA2 Unknown Unknown Unknown
libc-2.28.so 00007F77396B2493 __libc_start_main Unknown Unknown
test_mnist 0000000000402CAE Unknown Unknown Unknown
Initializing 2 networks with random weights and biases
Save network 1 into file
Load network 2 from file
Layer 1 , weights equal: F , biases equal: T
Layer 2 , weights equal: F , biases equal: F
Layer 3 , weights equal: T , biases equal: F
Setting different activation functions for each layer of network 1
Save network 1 into file
Load network 2 from file
Layer 1 , activation functions equal: T (network 1: sigmoid, network 2: sigmoid)
Layer 2 , activation functions equal: T (network 1: tanh, network 2: tanh)
Layer 3 , activation functions equal: T (network 1: gaussian, network 2: gaussian)
1 -0.1052737 -0.1680061 0.2748350 0.1508894
4.4866543E-02 0.1150012 0.2544496 -7.8449592E-02 0.1209764
0.1076126 0.2480879 8.8033140E-02 -0.2845391 5.8349479E-02
0.1221117
T T T T T T T T T T T T T T T T T T T T
All tests passed.
<ERROR> Execution failed for object " test_mnist "
<ERROR>*cmd_run*:stopping due to failed executions
STOP 1
@milancurcic would you be open to a pull request with some refactoring that is minor but global? If so, I would submit one or more pull requests with the changes described below. In all honesty, one of the main reasons I do steps like these is because it walks me through the project in a way that keeps my brain actively involved in consuming and understanding the code, but there are potential inherent benefits to the project in the parenthetical descriptions below. If you like some ideas but not others, you could check the ones you like. Otherwise, I can check them off as the pull requests get reviewed and merged.
- Use associate wherever possible to ensure immutability (which reduces chances of mistaken data modifications).
- Use reverse in one or two places.
Note: Not using vegetables, but essentially resolved as of v0.3.0.
This is a style that Brad (@everythingfunctional) and I adopted in recent code:
- _m suffix for module names
- _t suffix for derived type names
- _submodule suffix for submodule names
Note: Using _submodule for submodules as of v0.2.0.
Implement the backward pass for the conv2d layer.