locustio / locust
Write scalable load tests in plain Python 🚗💨
License: MIT License
Hi,
I have hacked up Locust to provide me with some custom stats, so that I can count the occurrences of certain status codes. This works well when running the process by itself; however, it fails to give me the stats when running in distributed mode.
I guess I need to do something to send the stats back from the slave to the master, or for the master to recognise them. Would someone be able to point me in the right direction, please?
Thanks
Mark
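For distributed runs, the master only sees what the slaves include in their periodic stats reports. Newer Locust versions expose report_to_master and slave_report event hooks for exactly this purpose; if yours does, the merging logic looks roughly like the sketch below (the Counter and handler names are illustrative, not Locust API):

```python
from collections import Counter

# Hypothetical per-process store of custom status-code counts.
status_code_counts = Counter()

def on_report_to_master(client_id, data):
    # Slave side: attach our custom counts to the outgoing stats payload
    # and reset them so they are not reported twice.
    data["status_code_counts"] = dict(status_code_counts)
    status_code_counts.clear()

def on_slave_report(client_id, data):
    # Master side: merge the counts received from each slave.
    status_code_counts.update(data.get("status_code_counts", {}))

# Simulated round trip: a slave reports, the master aggregates.
status_code_counts.update({"200": 3, "503": 1})
payload = {}
on_report_to_master("slave-1", payload)
on_slave_report("slave-1", payload)
print(status_code_counts["200"], status_code_counts["503"])  # 3 1
```

In a real locustfile, on_report_to_master would be registered on the slave-side hook and on_slave_report on the master-side hook, with your own counting code incrementing the Counter from a request event handler.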
This is a proposal for how we could implement official support for custom request/response based clients.
The reason we would like to do this is both to officially support testing of request/response based systems other than HTTP, and to provide the ability to swap out the current requests-based HTTP client (http://docs.python-requests.org/) in favor of some other HTTP client. For example, if you're running extremely large load tests doing tens of thousands of requests per second, the overhead that python-requests comes with can have quite a large impact.
I've given this some thought, but it's most likely not optimal. However, it's good to start somewhere, so see this as a starting point for a discussion on how best to implement it :).
One would specify the client class on the locust class(es) like this:
class User(Locust):
    client_class = ThriftClient
    ...
We would modify the Locust base class to something like this (which will allow one to either just set client_class on the Locust class, or override the get_client() method):
class Locust(...):
    ...
    def __init__(self, ...):
        self.client = self.get_client()

    def get_client(self):
        return self.client_class(self)
    ...
The client classes should then expect to get an instance of a Locust subclass to their __init__ method (which can be used to read Locust.host etc.), and the client is also responsible for firing request_success and request_failure events when it makes requests (which of course should be clearly documented).
Finally, we should also change the HTTP specific labels that we have in the UI (I think it might only be "Method", which we could rename to "Type").
So, what do you think? ping @cgbystrom @Jahaja
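A minimal sketch of what such a client could look like. The EventHook class here is a stand-in for Locust's events module so the example is self-contained, and the client, transport, and method names are all made up for illustration:

```python
import time

class EventHook:
    # Minimal stand-in for locust.events.EventHook, just to make the
    # sketch runnable; real code would use locust.events instead.
    def __init__(self):
        self.handlers = []

    def fire(self, **kwargs):
        for handler in self.handlers:
            handler(**kwargs)

request_success = EventHook()
request_failure = EventHook()

class CustomClient:
    """Illustrative request/response client for a non-HTTP protocol."""

    def __init__(self, locust):
        # The client receives the Locust instance so it can read
        # settings such as the target host.
        self.host = locust.host

    def send(self, name, func, *args):
        # Time an arbitrary request function and fire the appropriate
        # event; that is all Locust's stats machinery needs.
        start = time.time()
        try:
            result = func(*args)
        except Exception as e:
            elapsed_ms = int((time.time() - start) * 1000)
            request_failure.fire(name=name, response_time=elapsed_ms, exception=e)
            raise
        elapsed_ms = int((time.time() - start) * 1000)
        request_success.fire(name=name, response_time=elapsed_ms,
                             response_length=len(str(result)))
        return result

# Tiny demo with a fake Locust and a fake transport call.
class FakeLocust:
    host = "thrift://localhost:9090"

calls = []
request_success.handlers.append(lambda **kw: calls.append(kw))

client = CustomClient(FakeLocust())
client.send("get_user", lambda uid: {"id": uid}, 42)
print(calls[0]["name"])  # get_user
```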
We currently run Locust tests manually, with someone keeping track of the time, and we stop the tests after x minutes. We now want to automate the running of Locust tests in a Continuous Integration server, but I don't see a command-line option in Locust to specify the duration to run the tests. This makes it very hard to integrate with a CI server.
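Until a built-in duration option exists, one workaround for CI is to bound the run with coreutils' timeout command (the locust flags below follow the 0.6-era CLI, and the target URL is a placeholder):

```shell
# Run headless for at most 10 minutes; timeout sends SIGTERM when time is up
# and exits with status 124 if it had to kill the command.
timeout 600 locust -f locustfile.py --no-web -c 50 -r 5 -H http://target.example.com
```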
Just tried to get set up with a distributed swarm today and kept getting this error from zmq:
<load-testing> derek@DEREKS_MACBOOK_PRO_LOCAL s004 ~/load-testing> locust --master -H http://localhost:3001
[2013-03-04 14:18:32,678] INFO/locust.main: Starting web monitor on port 8089
[2013-03-04 14:18:32,678] ERROR/stderr: Traceback (most recent call last):
  File "/Users/derek/load-testing/bin/locust", line 8, in <module>
    load_entry_point('locustio==0.6.2', 'console_scripts', 'locust')()
  File "/Users/derek/load-testing/lib/python2.7/site-packages/locust/main.py", line 363, in main
    runners.locust_runner = MasterLocustRunner(locust_classes, options.hatch_rate, options.num_clients, num_requests=options.num_requests, host=options.host, master_host=options.master_host)
  File "/Users/derek/load-testing/lib/python2.7/site-packages/locust/runners.py", line 244, in __init__
    self.server = rpc.Server()
  File "/Users/derek/load-testing/lib/python2.7/site-packages/locust/rpc/zmqrpc.py", line 12, in __init__
    self.receiver = context.socket(zmq.PULL)
  File "/Users/derek/cloudmine/load-testing/lib/python2.7/site-packages/zmq/sugar/context.py", line 82, in socket
    s = self._socket_class(self, socket_type)
  File "/Users/derek/load-testing/lib/python2.7/site-packages/gevent_zeromq/core.py", line 36, in __init__
    self.__in_send_multipart = False
  File "/Users/derek/load-testing/lib/python2.7/site-packages/zmq/sugar/attrsettr.py", line 38, in __setattr__
    self.__class__.__name__, upper_key)
AttributeError: GreenSocket has no such option: _GREENSOCKET__IN_SEND_MULTIPART
pyzmq is now stricter about which attributes you can set on a socket. This seems to be caused by a recent pyzmq update (to version 13.0.0 from 2.2.0.1, 11 days ago). Installing 2.2.0.1 explicitly fixed the problem:
derek@DEREKS_MACBOOK_PRO_LOCAL s004 ~/load-testing> pip install pyzmq==2.2.0.1
I'm running Python 2.6.6 (CentOS 6.3), and I get the following error when I try to run distributed Locust.
$ locust -f locust_file.py --slave --master-host=some.master.server.com
ERROR/stderr: Traceback (most recent call last):
  File "/usr/bin/locust", line 8, in <module>
    load_entry_point('locustio==0.6.1', 'console_scripts', 'locust')()
  File "/usr/lib/python2.6/site-packages/locustio-0.6.1-py2.6.egg/locust/main.py", line 365, in main
    runners.locust_runner = SlaveLocustRunner(locust_classes, options.hatch_rate, options.num_clients, num_requests=options.num_requests, host=options.host, master_host=options.master_host)
  File "/usr/lib/python2.6/site-packages/locustio-0.6.1-py2.6.egg/locust/runners.py", line 330, in __init__
    self.greenlet.spawn(self.worker).link_exception()
  File "build/bdist.linux-x86_64/egg/gevent/greenlet.py", line 370, in link_exception
  File "build/bdist.linux-x86_64/egg/gevent/greenlet.py", line 355, in link
  File "build/bdist.linux-x86_64/egg/gevent/greenlet.py", line 23, in __init__
TypeError: Expected callable: None
The docs say that I can test any system. I've written a simple test file for a code-generating script, but when I run Locust, even though I see it spawning the processes as specified, I get absolutely no stats. Can you give me a simple example of how to test something that's not a web app using Locust?
Thanks!
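Since Locust's stats are driven entirely by the request success/failure events, any code you can time can be reported. A sketch of the pattern follows; fire_success is a stub standing in for firing Locust's own request_success event (use whatever signature your Locust version expects), and generate_code is a placeholder for the system under test:

```python
import time

recorded = []

def fire_success(name, response_time, response_length):
    # Stub for firing locust's request_success event.
    recorded.append((name, response_time, response_length))

def generate_code(n):
    # Placeholder for the non-web system under test: any plain
    # Python call can be load tested this way.
    return "x" * n

def generate_task():
    # The body of a Locust task: time the call, then report it so
    # the stats (and the web UI) have something to show.
    start = time.time()
    result = generate_code(1024)
    fire_success("generate_code", int((time.time() - start) * 1000), len(result))

generate_task()
print(recorded[0][0], recorded[0][2])  # generate_code 1024
```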
Slave nodes should send a shutting down message to the master when they receive a kill signal. The master should handle shutdown messages from the slaves.
C:\Development\Test>locust -f main.py
[2013-02-27 19:30:11,346] INFO/locust.main: Starting web monitor on port 8089
[2013-02-27 19:30:11,348] INFO/locust.main: Starting Locust 0.6.2
...And then it never gets past this point. I am using the example locust file from the website for main.py
from locust import Locust, TaskSet, task

def index(l):
    l.client.get("/")

def stats(l):
    l.client.get("/stats/requests")

class UserTasks(TaskSet):
    # one can specify tasks like this
    tasks = [index, stats]

    @task
    def page404(self):
        self.client.get("/does_not_exist")

class WebsiteUser(Locust):
    host = "http://127.0.0.1:8089"
    min_wait = 2000
    max_wait = 5000
    task_set = UserTasks
I am running Windows 7. Any ideas on what I could be doing wrong? Would be happy to clarify more information.
Starting locust with a log file and a high log level still spits out INFO logs to STDOUT.
[root@metrics locust]# locust -f /root/locustfile.py --loglevel=CRITICAL --logfile=/root/locust.log -H http://www.example.com -n 10 -c 5 --web
/usr/lib/python2.7/site-packages/locust/core.py:23: UserWarning: WARNING: Using pure Python socket RPC implementation instead of zmq. This will not affect you if your not running locust in distributed mode, but if you are, we recommend you to install the python packages: pyzmq and gevent-zeromq
warnings.warn("WARNING: Using pure Python socket RPC implementation instead of zmq. This will not affect you if your not running locust in distributed mode, but if you are, we recommend you to install the python packages: pyzmq and gevent-zeromq")
INFO:locust.main:Starting web monitor on port 8089
INFO:locust.main:Starting Locust 0.4
The domain-specific code for formatting the task ratio output for Confluence is a bit too specific to keep in Locust.
I propose that we drop the require_once decorator. It was implemented a long time ago, before there was a concept of nested Locust classes, and I see no point in it today. So, to keep the API clean, I suggest that we remove it.
When leaving a test running for longer periods of time (24 or 72 hours), it can be hard to stop the test at the correct time. Adding a start time in the UI would help me stop the test at the correct hour/minute.
I am a newcomer from a JMeter background and would like some help.
Following the example in the [Quick start] section of the Locust docs, how could I share thousands of pairs of user IDs and passwords between all simulated users, like JMeter does?
Does Locust support a 'CSV Data Set Config' feature like JMeter's?
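Locust has no built-in CSV Data Set Config equivalent, but because all simulated users on one node run inside a single Python process, a module-level queue in the locustfile can serve as shared test data, with each user popping a unique row in on_start(). A sketch, with the CSV contents inlined for illustration (a real locustfile would read an actual file):

```python
import csv
import queue  # named Queue on Python 2
from io import StringIO

# In a real locustfile this would be csv.reader(open("users.csv")).
CSV_DATA = "alice,secret1\nbob,secret2\ncarol,secret3\n"

credentials = queue.Queue()
for username, password in csv.reader(StringIO(CSV_DATA)):
    credentials.put((username, password))

def next_credentials():
    # Call this from each user's on_start(); every simulated user
    # then gets one unique username/password pair.
    return credentials.get()

first = next_credentials()
second = next_credentials()
print(first, second)  # ('alice', 'secret1') ('bob', 'secret2')
```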
In a large environment with many slaves, if one slave happens to be misconfigured (incorrect dependencies or something), it's currently difficult to see where an error has come from. I didn't see any standard way of doing this in Python's logging module, so I hatched the following (hackish) solution:
Is there a proper way to do this? Or would something like the above be OK?
To me - even though I was in favor of it when we implemented it - the catch_response feature is slightly odd, and I'm not sure it should be a feature in Locust. A web app should not return 200 OK if the request in fact failed. I also think it would be a mistake to keep locust features that are focused toward detecting errors in the app that should have been spotted by the app's own unit/integration tests. Simply because it's better to focus on one thing (in our case load testing and user simulation) and kick ass at that :).
If someone is wondering, I'm talking about the following syntax:
from locust import ResponseError

with self.client.get("/inbox", catch_response=True) as response:
    if response.data == "fail":
        raise ResponseError("Request failed")
Part of the reason I'm bringing this up is that I'm currently working on a branch where I'm replacing Locust's built-in HttpBrowser with the python-requests lib (http://python-requests.org). I have now fully replaced the old client with requests, except for the catch_response feature, and not having to port that feature would save time and code lines :).
Does anyone use the catch_response feature extensively? Would anyone miss it :)?
Hi,
First of all, thanks for the great product and the new version. I have a test case that makes a set number of requests by specifying the NUM_REQUESTS parameter. After I upgraded to version 0.6 and refactored my test for the new API, I found that when the number of requests reaches the limit, I get an error:
[2012-12-03 17:05:26,358] ERROR/stderr: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/locust/core.py", line 242, in run
    self.execute_next_task()
  File "/usr/local/lib/python2.7/dist-packages/locust/core.py", line 263, in execute_next_task
    self.execute_task(task["callable"], *task["args"], **task["kwargs"])
  File "/usr/local/lib/python2.7/dist-packages/locust/core.py", line 275, in execute_task
    task(self, *args, **kwargs)
  File {My script}
  File "/usr/local/lib/python2.7/dist-packages/locust/clients.py", line 221, in success
    self.locust_request_meta["content_size"],
  File "/usr/local/lib/python2.7/dist-packages/locust/events.py", line 27, in fire
    handler(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/locust/stats.py", line 270, in on_request_success
    raise StopLocust("Maximum number of requests reached")
StopLocust: Maximum number of requests reached
This is the command I used:
locust -f {My script} -H {my site} --no-web -c 30 -r 5 -n 50
Thanks for your help!
From the code:
This is equivalent of python-requests' safe_mode, which due to a bug, does currently *not*
work together with Sessions. Once the issue is fixed in python-requests, this method should
be removed. See: https://github.com/kennethreitz/requests/issues/888
It looks like https://github.com/kennethreitz/requests/pull/953 fixed the issue, so this method is no longer necessary.
Hi,
I replaced the HttpBrowser in Locust.client with our client SDK, which uses httplib directly. The switch itself (monkey-patching the log_request decorator) was fairly straightforward. However, I came across a problem in stats that I could not easily solve: my client makes most requests with Accept-Encoding: gzip and HTTP 1.1's default keep-alive connections, which results in Transfer-Encoding: chunked and Content-Encoding: gzip responses. Since chunked responses do not have a Content-Length header, the stats module (stats.py:on_request_success) defaults to a content length of 0.
I couldn't figure out how to elegantly patch the on_request_success function, but would you have a proposal?
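One pragmatic approach, rather than patching on_request_success itself, is to compute the reported length from the decoded body whenever the header is missing. A sketch of that fallback logic (function and argument names are illustrative):

```python
def content_length(headers, body):
    # Prefer the Content-Length header when present; otherwise fall
    # back to the size of the (already decoded) body, which covers
    # chunked and gzipped responses that omit the header.
    value = headers.get("content-length")
    if value is not None:
        return int(value)
    return len(body)

print(content_length({"content-length": "183"}, b""))  # 183
print(content_length({}, b"decoded chunked body"))     # 20
```

The client would then pass this value along when firing the request_success event, instead of relying on the header alone.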
Right now, the test coverage for task scheduling, stats reporting and the HTTP client is decent, but tests for LocustRunners and running Locust distributed are lacking.
Some refactoring (removal of some global singletons etc) is probably needed to be able to instantiate multiple LocustRunners for testing, but it shouldn't be too hard, and it should make the code better as well.
Hey,
What I really want is to output a JSON representation of the swarm (and nothing else) to STDOUT upon completion. It doesn't seem like the hooks support this at present. Perhaps a swarm_complete hook could accomplish this.
Thanks! Great stuff!
Hi,
I was running some simple tests with Locust (which is so cool, btw) and I noticed that if you end up with no slaves connected, the UI does not reflect this change; the slave count in the UI sticks to 1 in this case.
Also, it would be nice to get a warning message in the web UI if you start swarming with no slaves connected. Currently, you only get this warning on the command line.
I could provide a quick fix for this if you'd like.
Thanks!
Well, as the title suggests, it's pretty straightforward to reproduce the problem. The actual problem is perhaps more that the slave node doesn't exit when the master does.
Looking at a normal Locust script, I find SubLocusts confusing, since they seem to differ so little from a normal Locust class.
As I haven't been involved with SubLocusts, what was the original idea behind creating a separate class for it? Looking at the code, it should be fairly simple to merge it into the Locust class, which would simplify the API for end users.
Also, using Locust classes both for user representation and, essentially, for task grouping/task sets creates a bit of complexity. Perhaps one of them, either the user representation or the task group, needs a new name that better explains its purpose? That would avoid some confusion between the two.
It'd be extremely useful to have dedicated setup and teardown functionality in Locust (or if there is something like this already, to have it documented).
My rough idea would be:
Thoughts? (Have I missed something that already exists?)
It looks like the master trusts the slaves to report the correct time, and if the system clock drifts, the rps value will be reported incorrectly.
During a 24-hour run at 10k rps, this value will drop to about 6k rps.
Running ntpdate ntp.kth.se on the master and the slaves restores the rps count to 10k.
The original idea of Locust was to support multiple protocols, not just HTTP. However, with HTTP being the biggest (and only) use-case for Locust right now perhaps we should declare it HTTP only.
The biggest issue with designing for pluggable protocols would be handling stats collection and reporting for each protocol in a smart way. Doing that without having a concrete use-case or customer requesting additional protocol support is tricky as you most likely will get it wrong.
What do you think?
Do you guys have any plans/code for migrating to requests >1.0?
A simple get seems to fail on a clean install:
[2013-01-08 15:22:38,880] ERROR/stderr: File "/home/ubuntu/google/locustfile.py", line 7, in on_start
[2013-01-08 15:22:38,880] ERROR/stderr: self.client.get("/")
[2013-01-08 15:22:38,880] ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 254, in get
[2013-01-08 15:22:38,881] ERROR/stderr: return self.request('get', url, **kwargs)
[2013-01-08 15:22:38,881] ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/locust/clients.py", line 124, in request
[2013-01-08 15:22:38,881] ERROR/stderr: request_meta["response_time"] = int((time.time() - request_meta["start_time"]) * 1000)
[2013-01-08 15:22:38,881] ERROR/stderr: KeyError: 'start_time'
Heres the console output: https://gist.github.com/977b5f9b93fab7335348
Here's my locust.py: https://gist.github.com/0385d312d66d218822cb
My Ubuntu setup is super simple: just start from an EC2 Ubuntu image, then:
aptitude install python-dev python-pip libevent
pip install locustio
The current way of specifying the wait time for a Locust makes a rather large assumption about how developers will want to do this: it assumes that the wait time should be randomly distributed within a given interval of milliseconds.
With the current API, a locust with random wait times between 2000 and 5000 ms looks like this:
class User(Locust):
    tasks = [index, stats]
    min_wait = 2000
    max_wait = 5000
However, if we were to replace those hard constants with a function instead, determining the wait time would be a lot easier both to communicate and to implement. It would make no assumption about how developers would like to define their wait times, and at the same time it would provide a small, isolated place where this can happen (inside a function). Since randomly distributed wait times within an interval would arguably be the most common case, we would naturally ship with some defaults.
Here's an example:
class User(Locust):
    tasks = [index, stats]
    wait_time = random_between(2, 5)
As you can see, wait_time is assigned a function returned by the random_between function. Very straightforward and pluggable. It is even simpler than the current implementation, as the Locust core would not need to be aware of how wait times are calculated at all; it would just call the wait function to determine the wait time.
Two examples of wait functions that we could ship:
import random

def random_between(from_time, to_time):
    def wait():
        return random.uniform(from_time, to_time)
    return wait

def constant(time):
    def wait():
        return time
    return wait
As with any API changes, discussing them here before diving in. What are your thoughts?
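To illustrate the pluggability, here is a third wait-function factory (not part of the proposal above, just an example of what users could write themselves): exponentially distributed waits, which model Poisson-style user arrivals.

```python
import random

def exponential(mean_time):
    # Returns a wait function whose values follow an exponential
    # distribution with the given mean.
    def wait():
        return random.expovariate(1.0 / mean_time)
    return wait

wait_time = exponential(3)
samples = [wait_time() for _ in range(2000)]
print(all(s >= 0 for s in samples))  # True
```

Since the core would only ever call wait(), such a function plugs in without any changes to Locust itself.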
This is an intentional temporary bug. It's still present in the request stats CSV.
Should be fixed, perhaps by just taking the average of the already calculated median for each URL.
When leaving Locust running overnight, it would be helpful if I could tell it to stop running at 3 am and mail me all the results.
Right now, plain old sockets can be used whenever ZeroMQ is not available. It is my personal belief that supporting two implementations will not be good in the long run.
I vote for keeping ZeroMQ and ditching socket support for master/slave communication. What do you think?
(ping @heyman)
I've implemented this in the "forking" branch. Seems to work fine under UNIX, but needs to be tested on Windows.
I am using a virtualenv on OS X 10.8.3 and am unable to install Locust:
python setup.py install # breaks down
Can you please help me install it in a virtualenv? The detailed error is copied below:
Installed /Users/zzz/dev/skc/lib/python2.7/site-packages/Flask-0.9-py2.7.egg
Searching for gevent>=0.13
Reading http://pypi.python.org/simple/gevent/
Reading http://www.gevent.org/
Reading http://gevent.org/
Best match: gevent 0.13.8
Downloading http://pypi.python.org/packages/source/g/gevent/gevent-0.13.8.tar.gz#md5=ca9dcaa7880762d8ebbc266b11252960
Processing gevent-0.13.8.tar.gz
Running gevent-0.13.8/setup.py -q bdist_egg --dist-dir /var/folders/t8/szvbsng97ps56j30w_xl692r0000gn/T/easy_install-3ZM8dE/gevent-0.13.8/egg-dist-tmp-MYFzQ_
clang: warning: argument unused during compilation: '-mno-fused-madd'
In file included from gevent/core.c:253:
gevent/libevent.h:9:10: fatal error: 'event.h' file not found
^
1 error generated.
error: Setup script exited with error: command 'clang' failed with exit status 1
Installed Locust using pip: locustio==0.6.2
ubuntu@swrm:~$ curl -v -XGET http://localhost:8089
* About to connect() to localhost port 8089 (#0)
* Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 8089 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.27.0
> Host: localhost:8089
> Accept: */*
>
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 405 METHOD NOT ALLOWED
< Content-Type: text/html
< Allow: HEAD, OPTIONS, GET
< Content-Length: 183
< Server: gevent/0.13 Python/2.7
< Connection: close
< Date: Thu, 31 Jan 2013 20:51:36 GMT
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method POST is not allowed for the requested URL.</p>
* Closing connection #0
Raising an InterruptLocust exception within a with statement when using catch_response=True will result in a request failure
Hi,
I faced a lot of issues with JMeter and other performance tools, and this tool seems really nice. I have just started learning Locust for performance testing, and I have some basic questions:
1. A single Locust class represents a single user, per the documentation. Does this mean that if we want to simulate thousands of users, we need to define a thousand Locust classes? The -c command-line option represents the number of clients (users), but I think it creates multiple instances of the same class, which would represent cloning of users. Please clarify.
2. How can test data be controlled? Is there a mechanism to input test data, for example user information (username and password), or different files to be uploaded by different users, from an external file like CSV/Excel/XML? I didn't find anything in the documents.
Could you please clarify the above queries?
Thanks,
Sumeet
It seems the current version uses the same client for all users; thus all users share the same session.
For testing the behavior of a web app with, say, 100 concurrent user sessions, the client sessions in Locust should be distinct. Additionally, it should be possible to identify the current user or client; our requirement is that each user uses a distinct login name in the login logic in on_start().
Or did I miss something, and is this already doable?
My load tests must include many query parameters that are close to random. I changed the methods on_request_success and on_request_failure to use RequestStats.get(method, name.split('?')[0]) instead, so I could get the stats based on the paths without the query parameters.
I guess this could be useful for many people, so I thought about submitting a patch with an option like --ignore-query-parameters-stats or something, but I thought it better to ask here for some tips about where I should add the changes. Should I just make the change as I explained and add an option for it? Or should I do it higher up in the call hierarchy?
Thanks!
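As an alternative to patching the stats internals, the grouping can be done where the stats name is produced, by stripping the query string before reporting (later Locust versions also let you pass an explicit name= to the client's request methods, if yours supports that). A sketch of the helper:

```python
def stats_name(path):
    # Group all requests to the same path, regardless of query string.
    return path.split("?", 1)[0]

print(stats_name("/search?q=locust&page=2"))  # /search
print(stats_name("/inbox"))                   # /inbox
```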
It would be helpful to understand the rps over time when running a test for 24 hours or more. Either raw data for later graph visualization or built in to the UI.
I'm just starting to use Locust to test our application. Our application can have slugs in the URL that I want to test against. Rather than having to change the task file to put in a new slug to test, I hacked Locust to have a -o key value (or --option key value) command-line option whose values are available in the task as self.options[key].
I've done some work on this branch: https://github.com/rory/locust/tree/custom-options (see the diff here: amandasaurus/locust@master...custom-options). I haven't opened a pull request yet because I'm unsure whether I'm doing it in a sensible way, and I haven't tested this with the master/slave/distributed workflow, since we're not using that yet.
Can you tell me if this is a good feature others might want, and if I'm doing it in a good way?
I'd like Locust to have a plugin system that would provide the ability to extend Locust with different functionality. We could then probably refactor some of the core functionality out into plugins that are loaded by default and reside in a contrib package.
I think we could get a long way if we would implement a plugin system with the following features:
I believe the above features would cover the majority of plugin use cases one might have. I also think it's good to start small and keep the plugin API as minimal as possible, since there's less risk of making bad API decisions, and they will be easier to correct :).
It may be useful to add timers to the web interface to keep data-set run times consistent. It would be nice to see the current run time in the UI.
Also, it would be great if the test would stop once the timer completed.
Running under Python 2.6.6 (CentOS 6.3), I get the following crash as soon as I try to load the web UI. This works fine using Python 2.7.3 as an altinstall on the same host. I have seen this on two different CentOS 6.3 hosts...
--snip--
$ /usr/bin/locust -f some_locust_script.py
/usr/lib/python2.6/site-packages/locust/rpc/__init__.py:7: UserWarning: WARNING: Using pure Python socket RPC implementation instead of zmq. This will not affect you if your not running locust in distributed mode, but if you are, we recommend you to install the python packages: pyzmq and gevent-zeromq
warnings.warn("WARNING: Using pure Python socket RPC implementation instead of zmq. This will not affect you if your not running locust in distributed mode, but if you are, we recommend you to install the python packages: pyzmq and gevent-zeromq")
[2012-12-18 15:14:52,705] INFO/locust.main: Starting web monitor on port 8089
[2012-12-18 15:14:52,705] INFO/locust.main: Starting Locust 0.6.1
Modules/gcmodule.c:348: visit_decref: Assertion "gc->gc.gc_refs != 0" failed.
refcount was too small
object : <weakref at 0x15b1db8; to 'gevent.core.http_request' at 0x15b1d60>
type : weakref
refcount: 1
address : 0x15b1db8
Aborted
--snip--