chromedevtools / devtools-protocol
Chrome DevTools Protocol
Home Page: https://chromedevtools.github.io/devtools-protocol/
License: BSD 3-Clause "New" or "Revised" License
How do I get the Chrome device pixel ratio?
deviceScaleFactor has no effect on the screenshot.
Moved from https://bugs.chromium.org/p/chromium/issues/detail?id=706677#
Basically, I'm writing some tooling that interacts with the chromium remote debugger interface. The remote debugger interface has a json file that specifies its methods, and also contains a version number. That version number hasn't changed while features have been added and removed repeatedly.
Aside: I'm trying to implement a complete toolset for remotely controlling the browser (e.g. something like PhantomJS, without selenium/webdriver and all the pain and spectacularly asinine missing features that brings). As such, I find myself needing to touch a bunch of the functionality marked "experimental" in the protocol file.
I'd expect changes to the remote debugging interface to be associated with a change of the remote debugging interface number. This would be a clear indicator that I need to update my interface, and would also provide a definitive way to refer to one version of the interface specification when dealing with bugs in either my software or chromium.
As it is, I have no idea why the version number is even there, since it seems like it doesn't ever get changed or updated.
It'd probably be nice to make it clear what the version information means, or better yet, allow it to be used for meaningful versioning. I'd love if there was a further sub-version-number that was incremented for any change to the interface structure.
Hi,
I attempted to print a 10 inch x 10 inch PDF. I declared some media queries in the html file, and the result is not as expected:
(blue texts are ones that match the queries)
Here is my gist:
https://gist.github.com/quangbuule/4b781d581f203e59358111c075252d32
Hi :)
I have a question. I want to find DOM nodes that are repainted too often using the devtools protocol, so I investigated the documentation. I found setShowPaintRects, but this command doesn't return anything. Could you support a command that returns the list of repainted DOM nodes?
Hi,
I'm using Node.js with chrome-remote-interface to communicate with a headless instance of Chrome (v59.0.3071.86).
I'm using the Network.responseReceived event to listen for responses, but on a 301 the first response I get is the one right after it, with a 200, as shown below:
{
url: 'http://localhost/',
status: 200,
statusText: 'OK',
headers: {
Date: 'Mon, 19 Jun 2017 09:34:06 GMT',
Connection: 'keep-alive',
'Transfer-Encoding': 'chunked',
'Content-Type': 'text/html'
}
}
What I was expecting was something more like this:
{
url: 'http://localhost/no_longer_available.html',
status: 301,
statusText: 'Moved Permanently',
headers: {
// ...
}
},
{
url: 'http://localhost/',
status: 200,
statusText: 'OK',
headers: {
Date: 'Mon, 19 Jun 2017 09:34:06 GMT',
Connection: 'keep-alive',
'Transfer-Encoding': 'chunked',
'Content-Type': 'text/html'
}
}
I've been going through the docs here but I could not find anything of help.
Is there a way to achieve this? Maybe like I would do with a non-headless Chrome by recording the network log?
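A note for anyone hitting this: in the protocol, redirect hops are reported via the redirectResponse field on the *next* Network.requestWillBeSent event for the same requestId, rather than as separate Network.responseReceived events. A minimal sketch (the collector helper and its names are mine, not part of the protocol; demoed with fabricated events instead of a live browser):

```javascript
// Sketch: reconstruct a redirect chain from Network.requestWillBeSent events.
// Each hop after the first carries the previous hop's response in
// `redirectResponse`, so the 301 shows up there rather than in
// Network.responseReceived.
function makeRedirectCollector() {
  const chains = new Map(); // requestId -> [{url, status, statusText}]
  return {
    onRequestWillBeSent(params) {
      const chain = chains.get(params.requestId) || [];
      if (params.redirectResponse) {
        const r = params.redirectResponse;
        chain.push({ url: r.url, status: r.status, statusText: r.statusText });
      }
      chains.set(params.requestId, chain);
    },
    chainFor(requestId) {
      return chains.get(requestId) || [];
    },
  };
}

// With a real client you would wire it up roughly like:
//   Network.requestWillBeSent(p => collector.onRequestWillBeSent(p));
const collector = makeRedirectCollector();
collector.onRequestWillBeSent({
  requestId: '1',
  request: { url: 'http://localhost/no_longer_available.html' },
});
collector.onRequestWillBeSent({
  requestId: '1',
  request: { url: 'http://localhost/' },
  redirectResponse: {
    url: 'http://localhost/no_longer_available.html',
    status: 301,
    statusText: 'Moved Permanently',
  },
});
console.log(collector.chainFor('1'));
```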
Thanks.
Using headless Chrome:
1. Navigate to www.google.com. These events fire: Page.frameNavigated, Page.domContentEventFired, Page.loadEventFired.
2. Input the keyword "headless chrome".
3. Trigger the search button. After that, no events like the above fire.
How can I tell that the search results page is loaded so I can start crawling the results?
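One possible approach, sketched under the assumption that Google updates the results page via the history API, which in newer protocol versions fires Page.navigatedWithinDocument instead of a full load cycle: race both kinds of navigation event. waitForNavigation is an illustrative name, demoed here against a stub rather than a live browser:

```javascript
// Sketch: resolve when either a full navigation or an in-page (history API)
// navigation happens. The results page often updates via the history API,
// which fires Page.navigatedWithinDocument (newer protocol versions) rather
// than Page.frameNavigated / Page.loadEventFired.
function waitForNavigation(Page, timeoutMs = 5000) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('navigation timeout')), timeoutMs);
    const done = kind => { clearTimeout(timer); resolve(kind); };
    Page.frameNavigated(() => done('frameNavigated'));
    if (Page.navigatedWithinDocument) {
      Page.navigatedWithinDocument(() => done('navigatedWithinDocument'));
    }
  });
}

// Demo with a stub in place of a real chrome-remote-interface Page domain:
const handlers = {};
const stubPage = {
  frameNavigated: cb => { handlers.frameNavigated = cb; },
  navigatedWithinDocument: cb => { handlers.navigatedWithinDocument = cb; },
};
const pending = waitForNavigation(stubPage, 1000);
handlers.navigatedWithinDocument({
  frameId: '1',
  url: 'https://www.google.com/search?q=headless+chrome',
});
pending.then(kind => console.log(kind)); // 'navigatedWithinDocument'
```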
I am needing to interact with the contents of an iframe, and so far coming up short in my attempts. Does the devtools-protocol have anything analogous to https://w3c.github.io/webdriver/webdriver-spec.html#switch-to-frame or is there an alternative way to accomplish that interaction?
I get nothing when using returnByValue:
>>> Runtime.evaluate({expression:'window.performance',returnByValue:true})
{ result: { result: { type: 'object', value: {} } } }
I also get nothing when using ownProperties:
>>> Runtime.evaluate({expression:'window.performance',returnByValue:false})
{ result:
{ result:
{ type: 'object',
className: 'Performance',
description: 'Performance',
objectId: '{"injectedScriptId":1,"id":68}' } } }
>>> Runtime.getProperties({objectId: '{"injectedScriptId":1,"id":2}',ownProperties:true})
{ result:
{ result:
[ { name: '__proto__',
value:
{ type: 'object',
className: 'Performance',
description: 'Performance',
objectId: '{"injectedScriptId":1,"id":69}' },
writable: true,
configurable: true,
enumerable: false,
isOwn: true } ] } }
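This behaviour follows from how returnByValue works: the value comes back JSON-serialized, and Performance keeps its data in prototype accessors rather than enumerable own properties, so the serializer sees an empty object. The same effect can be shown in plain Node (the performance.toJSON() suggestion in the comment is a commonly used workaround, not something from this thread):

```javascript
// Why returnByValue gives {} for window.performance: JSON.stringify only
// emits enumerable *own* properties, and Performance exposes its data via
// prototype accessors. Any object built that way serializes to '{}':
const obj = Object.create({ get now() { return 42; } }); // inherited getter
Object.defineProperty(obj, 'hidden', { value: 'x', enumerable: false });
console.log(obj.now, obj.hidden);      // values are readable in JS...
console.log(JSON.stringify(obj));      // '{}' ...but invisible to JSON

// In the page, one workaround is to produce plain data first, e.g. evaluate
//   JSON.parse(JSON.stringify(window.performance.toJSON()))
// with returnByValue: true (performance.toJSON() returns a plain snapshot).
```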
Testing out code, not getting anything back from this:
Using: https://github.com/cyrus-and/chrome-remote-interface
Runtime.consoleAPICalled((params) => {
console.log(params);
});
Runtime.evaluate({
expression: `(${function(){
console.log('test console api call');
}})()`,
awaitPromise: true,
});
Is there any way to map the size of a network request reported by Network.dataReceived back to its Network.responseReceived event?
Network.getResponseBody sometimes returns an empty or truncated body compared to the output of Debugger.getScriptSource. I attached an example where I navigate to http://golem.de/. Across all websites I crawled, 70% contain requests where the body doesn't match the script content, so this happens a lot. When I visit the website again I usually get the correct body. Maybe it has something to do with caching? Do you have any ideas how to circumvent this?
I use Google Chrome 59.0.3071.115 in headless mode.
Request:
{ requestId: '1766.55',
frameId: '1766.1',
loaderId: '1766.2',
documentURL: 'https://www.golem.de/',
request:
{ url: 'https://s3-eu-central-1.amazonaws.com/prod.iqdcontroller.iqdigital/cdn_golem/live/iqadcontroller.js.gz',
method: 'GET',
headers:
{ Referer: 'https://www.golem.de/',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36',
Intervention: '<https://www.chromestatus.com/feature/5718547946799104>; level="warning"' },
mixedContentType: 'none',
initialPriority: 'High',
referrerPolicy: 'no-referrer-when-downgrade' },
timestamp: 298.426415,
wallTime: 1501804201.99757,
initiator: { type: 'script', stack: { callFrames: [Array] } },
type: 'Other' }
Response:
{ requestId: '1766.55',
frameId: '1766.1',
loaderId: '1766.2',
timestamp: 298.539254,
type: 'Script',
response:
{ url: 'https://s3-eu-central-1.amazonaws.com/prod.iqdcontroller.iqdigital/cdn_golem/live/iqadcontroller.js.gz',
status: 200,
statusText: 'OK',
headers:
{ Date: 'Thu, 03 Aug 2017 23:50:04 GMT',
'Content-Encoding': 'gzip',
'Last-Modified': 'Thu, 29 Jun 2017 14:26:42 GMT',
Server: 'AmazonS3',
'x-amz-request-id': '5626885826712CCD',
ETag: '"afc86a3e20c32075c18daae3d167edc6"',
'Content-Type': 'text/javascript',
'Accept-Ranges': 'bytes',
'Content-Length': '40808',
'x-amz-id-2': 'aMjbzUHW1W74IHo9sjk2oMfy46xtppN3JvmE9+89fu8+ty8nvs80KeNuFV/BXkguB1BSyGfBme8=' },
mimeType: 'text/javascript',
connectionReused: true,
connectionId: 66,
remoteIPAddress: '52.219.73.0',
remotePort: 443,
fromDiskCache: false,
fromServiceWorker: false,
encodedDataLength: 388,
timing: ...,
protocol: 'http/1.1',
securityState: 'secure',
securityDetails: ... } }
Script:
{ scriptId: '37',
url: 'https://s3-eu-central-1.amazonaws.com/prod.iqdcontroller.iqdigital/cdn_golem/live/iqadcontroller.js.gz',
startLine: 0,
startColumn: 0,
endLine: 382,
endColumn: 0,
executionContextId: 3,
hash: '0FD2E9009E2C1EB731F0B111C86652591FBAA8C5',
executionContextAuxData: { isDefault: true, frameId: '1766.1' },
isLiveEdit: false,
sourceMapURL: '',
hasSourceURL: false,
isModule: false,
length: 172643 }
Body:
{ body: '', base64Encoded: false }
Parts of my code:
async function storeContent(content) {
if (content === null) return null;
let id = sha256(content);
let path = __dirname+"/../storage/contents/"+id.substr(0, 2)+"/"+id.substr(2, 2)+"/"+id.substr(4);
let exists = await fileExists(path);
if (!exists) {
await createDirIfNotExists(dirname(dirname(path)));
await createDirIfNotExists(dirname(path));
await writeGzipped(path, content);
}
return { id };
}
async getRequestBody(requestId) {
try {
let body = await this.protocol.Network.getResponseBody({ requestId });
if (body.base64Encoded)
return Buffer.from(body.body, "base64").toString("utf-8");
else
return body.body;
} catch (e) {
return null;
}
}
async getScriptSource(scriptId) {
try {
let source = await this.protocol.Debugger.getScriptSource({ scriptId });
return source.scriptSource;
} catch (e) {
return null;
}
}
// ...
Network.responseReceived(params => {
// ...
console.log(params);
contentPromises.push((async () => {
let body = await getRequestBody(params.requestId);
response.body = await storeContent(body);
})());
});
Debugger.scriptParsed(script => {
// ...
console.log(script);
contentPromises.push((async () => {
let source = await getScriptSource(script.scriptId);
if (source !== null) {
let { id } = await storeContent(source);
// ...
}
})());
});
// ...
await Promise.all(contentPromises);
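One commonly suggested mitigation, offered here as an assumption rather than a confirmed fix: fetch bodies from Network.loadingFinished instead of Network.responseReceived, since the body may not be fully buffered yet when responseReceived fires. makeBodyFetcher and the stub below are illustrative, not part of the protocol:

```javascript
// Sketch: defer Network.getResponseBody until Network.loadingFinished.
// Calling it from responseReceived can race the body buffer, which would
// explain intermittent empty results. (Cache-served responses may still
// return no body.)
function makeBodyFetcher(Network) {
  const pending = [];
  Network.loadingFinished(({ requestId }) => {
    pending.push(
      Network.getResponseBody({ requestId })
        .then(({ body, base64Encoded }) =>
          base64Encoded ? Buffer.from(body, 'base64').toString('utf-8') : body)
        .catch(() => null)
    );
  });
  return () => Promise.all(pending); // flush: await all collected bodies
}

// Demo with a stub Network domain in place of a live connection:
let finishedCb;
const stubNetwork = {
  loadingFinished: cb => { finishedCb = cb; },
  getResponseBody: async () => ({
    body: Buffer.from('var x = 1;').toString('base64'),
    base64Encoded: true,
  }),
};
const flush = makeBodyFetcher(stubNetwork);
finishedCb({ requestId: '1766.55' });
flush().then(bodies => console.log(bodies)); // [ 'var x = 1;' ]
```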
DOM.resolveNode will return a node;
Runtime.evaluate will also return a node;
Runtime.evaluate({
expression:`
var targetDOM = document.querySelector('#target');
targetDOM.click();
targetDOM;
`
}).then(({result}) => {
// result.click is not a function
});
But neither of them is the same as the targetDOM above.
How can I get the targetDOM out of the runtime?
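One way to bridge the two worlds is DOM.requestNode, which maps a Runtime objectId to a DOM nodeId (DOM.getDocument must have been called first so the DOM agent is tracking nodes). A sketch, demoed against a stub client; the helper name is mine:

```javascript
// Sketch: Runtime.evaluate returns a RemoteObject with an objectId;
// DOM.requestNode converts that objectId into a nodeId usable with the
// DOM domain methods.
async function nodeIdForExpression(client, expression) {
  const { DOM, Runtime } = client;
  await DOM.getDocument(); // ensure the DOM agent knows about the document
  const { result } = await Runtime.evaluate({ expression });
  const { nodeId } = await DOM.requestNode({ objectId: result.objectId });
  return nodeId;
}

// Demo with a stub client (a real chrome-remote-interface client has the
// same shape):
const stubClient = {
  DOM: {
    getDocument: async () => ({ root: { nodeId: 1 } }),
    requestNode: async ({ objectId }) => ({ nodeId: objectId === 'obj-42' ? 42 : 0 }),
  },
  Runtime: {
    evaluate: async () => ({ result: { type: 'object', objectId: 'obj-42' } }),
  },
};
const idPromise = nodeIdForExpression(stubClient, "document.querySelector('#target')");
idPromise.then(nodeId => console.log(nodeId)); // 42
```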
It would be nice to add support for service worker to make protocol viewer available offline.
Hello, I'm trying to use the devtools protocol via a websockets connection to navigate to a page and listen to events generated by the user like mouse moves, clicks, inputs, etc.
I know I could inject some javascript to add event listeners or similar but my ideal solution would be to not need to inject any javascript in the page.
Is there any way to do this at the moment?
There are times where we need the Javascript running in the context of a web-page to send a signal to the remote process.
PhantomJS has this feature: http://phantomjs.org/api/webpage/handler/on-callback.html
This is currently possible by having the web application call "console.log('expected event')" and then checking for the message via the "Runtime.consoleAPICalled" event, but that seems a bit hacky.
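For reference, newer protocol versions expose Runtime.addBinding plus the Runtime.bindingCalled event, a first-class version of the console.log trick; availability depends on your Chrome build, so treat this as a sketch. installSignal and the stub are illustrative names:

```javascript
// Sketch: Runtime.addBinding installs a function on window that forwards a
// string payload to the client via the Runtime.bindingCalled event.
async function installSignal(Runtime, name, onSignal) {
  Runtime.bindingCalled(({ name: calledName, payload }) => {
    if (calledName === name) onSignal(payload);
  });
  await Runtime.addBinding({ name });
}

// Demo with a stub Runtime domain in place of a live connection:
let bindingCb;
const received = [];
const stubRuntime = {
  bindingCalled: cb => { bindingCb = cb; },
  addBinding: async () => ({}),
};
installSignal(stubRuntime, 'sendSignal', payload => received.push(payload));
// In the page you would now call: window.sendSignal('expected event');
// here the stub delivers the equivalent event directly:
bindingCb({ name: 'sendSignal', payload: 'expected event' });
console.log(received); // [ 'expected event' ]
```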
We can currently only call DOM.querySelectorAll with NodeIds. We'd like to use BackendNodeIds instead, so we can use querySelectorAll directly on a document from getSnapshot or even on the root (0) BackendNodeId.
Note that evaluating JavaScript does not give us the same as calling querySelectorAll: if we want to find a node inside an iframe coming from another origin, we get a cross-origin error. So we are only left with querySelectorAll here.
As a follow-up feature request, it would also be great to implement piercing of iframe boundaries and shadow DOM directly, using arrays of selectors instead of making several calls with a single selector string.
Is it possible to disable source maps using the devtools protocol? In my application I use headless Chrome to collect all console messages of a website (using Runtime.consoleAPICalled) and want to parse the stackTrace of each message, but I need full urls instead of what was stated as sourceURL.
I'm looking for a way to get the raw response headers using headless chrome.
The current stable 1.2 version of the devtools protocol documents a field called headersText within the Response object, but it's never included in the response, which is technically ok as the field is marked "optional".
Is there a flag somewhere to "activate" the optional fields?
Or is there a way to get a list of all headers like in HAR format (headers is an array of objects with name/value properties)?
For example, if the content is gzipped, is there any way I can access the raw bytes? getResponseBody only seems to return a string, and other events like dataReceived, etc. don't seem to have that data :(
Hi there! I had a quick question re: the spec for Network.emulateNetworkConditions. What would be the upper/lower bounds on input params like downloadThroughput and uploadThroughput? Just trying to get a ballpark idea of what these params represent.
Thanks again!
Page.addScriptToEvaluateOnLoad doesn't run "on load" but rather when a new document is created. In Lighthouse we've renamed this to addScriptToEvaluateOnNewDocument (roughly).
IMO it'd make sense to rename this method to avoid further confusion.
The WebSocket implementation for the Chrome DevTools does not seem to handle either fragmentation (continuation frames) or messages that are larger than 1 MB in size.
This problem is reproducible after connecting to the webSocketDebuggerUrl of a page and either sending fragmented messages or messages above 1 MB in size.
In both cases the result is a silent failure of the TCP connection. The error is found upon the next write, when the connection returns (among other things) write tcp 127.0.0.1:50872->127.0.0.1:9222: write: broken pipe.
I spent some time tracking down these two issues and came to the following conclusions:
Chrome does not support WebSocket fragmentation (continuation frames); they remain unimplemented at the time of writing (net/server/web_socket_encoder.cc#86).
The underlying HTTP connection does not support payloads bigger than 1 MB; I believe the reason is kDefaultMaxBufferSize, defined here: net/server/http_connection.h#33.
When we send messages larger than 1MB, Chrome logs the following:
[89253:55299:0616/194024.005586:ERROR:http_connection.cc(37)] Too large read data is pending: capacity=1048576, max_buffer_size=1048576, read=1048576
The biggest use-case for this is probably sending large scripts over CDP, however, the current behavior does lead to hard-to-diagnose bugs. It would be nice if CDP could support WebSocket fragmentation, and at the very least provide useful error messages when it cannot handle a command.
Related issues:
How can I interpret the timestamps of Page.loadEventFired and Page.domContentEventFired? For example 40159.349892. It can't be milliseconds since the initial request, and it can't be a unix epoch with microseconds either. The documentation lacks this information.
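For reference, these timestamps are MonotonicTime: seconds (with fractional part) since an arbitrary per-browser-session origin, so only differences between them are meaningful. To recover wall-clock times, anchor them against an event that carries both clocks, such as Network.requestWillBeSent, which has timestamp (monotonic) and wallTime (unix epoch seconds). A small sketch, reusing numbers from the network trace earlier in this document:

```javascript
// Sketch: convert a MonotonicTime value to unix epoch seconds using any
// event that reports both clocks (e.g. Network.requestWillBeSent).
function monotonicToWallClock(monotonicTs, anchor) {
  // anchor: { timestamp: MonotonicTime seconds, wallTime: unix epoch seconds }
  return anchor.wallTime + (monotonicTs - anchor.timestamp);
}

// Values taken from the requestWillBeSent event shown above:
const anchor = { timestamp: 298.426415, wallTime: 1501804201.99757 };
// The responseReceived timestamp 298.539254 lands ~113 ms later:
console.log(monotonicToWallClock(298.539254, anchor)); // ≈ 1501804202.1104
```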
I want to find unnecessarily repainted offscreen gifs (DOM nodes). I can see repainted DOM nodes using Overlay.setShowPaintRects, but I can't get the list of repainted nodes using the devtools-protocol. How can I get the list of repainted DOM nodes using the devtools-protocol?
Thanks.
At the moment, there appears to be no way of actually getting a Node element (including the nodeType, nodeName etc) from a NodeId in the DOM. Most things in DOM appear to return a nodeId, but to actually get the Node from a matching NodeId, you need to do a DOM.getDocument, DOM.requestChildNodes and then walk through the DOM "manually".
Would it be possible to add a DOM.getNodeById({nodeId:[nodeId]}) method which returns the relevant node? (I did originally think of just "getNode", but that may lead to confusion with the querySelector which is used to find nodes, but only returns the nodeId[s]).
I am using the chrome devtools protocol to capture a screenshot of part of a page. In the documentation, there is an optional clip parameter listed for the Page.captureScreenshot call, which is supposed to only take a screenshot of the specified part of the page.
However, even when I pass a value for the clip parameter, the entire page's screenshot is returned.
Is this an issue with my code/understanding of the documentation, or could this be an error? I am using chrome beta 61 by the way.
Here's my code:
https://gist.github.com/bmikkelsen22/8510c05a0dec4de1001b10f99074aace#file-screenshot-ts-L73
Thanks!
The Debugger.scriptParsed event contains a hash of each script. What kind of hash is this? How can I calculate it myself?
The documentation does not list the units of emulateNetworkConditions. #39 was a similar issue where the user assumed kb, but my tests do not seem to confirm this unit.
It looks like DevTools displays the fetches for the service worker js as well as SW-initiated requests, but they aren't exposed at the tab level for the page in the Network.* events or in the ServiceWorker.* events.
For the HTTP Archive we use the Network.* interface to get the response bodies, but we don't currently get the response bodies for SW requests and rely on parsing the netlog to get the timings.
The issue was originally submitted to 'chrome-remote-interface' repo at cyrus-and/chrome-remote-interface#226 and I was redirected to this repo.
Component | Version |
---|---|
Operating system | Ubuntu 16.10 |
Node.js | 8.1.2 |
Chrome/Chromium/... | 60.0.3112.78 (Official Build) beta (64-bit) |
chrome-remote-interface | 0.24.3 |
Is Chrome running in a container? NO
My issue is with firing the 'Enter' key on an input field.
I thought that the code await Input.dispatchKeyEvent({ type: 'rawKeyDown', keyIdentifier: 'Enter' }) would work, but it did not.
This is my code:
const CDP = require("chrome-remote-interface")
const chromeLauncher = require("chrome-launcher")
const getPort = require("get-port")
const R = require("rambdax")
const chromeFlags = [
"--disable-gpu",
"--disable-sync",
"--no-first-run",
"--headless",
"--window-size=1366,768"
]
const main = async () => {
try{
const port = await getPort()
const chrome = await chromeLauncher.launch({
chromeFlags,
port,
})
const client = await CDP({ port })
const { Page, Runtime, Input } = client
await Promise.all([
Page.enable(),
Runtime.enable(),
])
await Page.navigate({ url : 'https://www.google.com' })
await Page.loadEventFired()
await R.delay(1000)
await Input.dispatchKeyEvent({ type: 'char', text: 'm' })
await R.delay(200)
await Input.dispatchKeyEvent({ type: 'char', text: 'o' })
await R.delay(200)
await Input.dispatchKeyEvent({ type: 'char', text: 'e' })
await R.delay(200)
await Input.dispatchKeyEvent({ type: 'rawKeyDown', keyIdentifier: 'Enter' })
await R.delay(3000)
}catch(err){
console.log(err)
}
}
main()
Any help will be appreciated. Thanks!
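For the record, what generally works is sending a full keyDown carrying text '\r' and virtual key code 13, followed by a keyUp; a bare rawKeyDown carries no text, so nothing reaches the input field. A sketch (the field values are the commonly used ones, not confirmed by this thread), demoed against a recording stub instead of a live browser:

```javascript
// Sketch: press Enter via Input.dispatchKeyEvent. The keyDown must carry
// text '\r' for the page to receive the keypress.
async function pressEnter(Input) {
  const common = {
    key: 'Enter',
    code: 'Enter',
    windowsVirtualKeyCode: 13,
    nativeVirtualKeyCode: 13,
  };
  await Input.dispatchKeyEvent(
    Object.assign({ type: 'keyDown', text: '\r', unmodifiedText: '\r' }, common)
  );
  await Input.dispatchKeyEvent(Object.assign({ type: 'keyUp' }, common));
}

// Demo with a stub Input domain that records what was dispatched:
const sent = [];
pressEnter({ dispatchKeyEvent: async e => { sent.push(e.type); } })
  .then(() => console.log(sent)); // [ 'keyDown', 'keyUp' ]
```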
I couldn't find anything related to full page screenshot here.
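The usual workaround, sketched here under the assumption that no dedicated full-page API exists in this protocol version: read the full content size from Page.getLayoutMetrics, resize the viewport with Emulation.setDeviceMetricsOverride, then capture. fullPageScreenshot is an illustrative name, demoed against a stub client:

```javascript
// Sketch: take a "full page" screenshot by growing the emulated viewport
// to the document's content size before capturing.
async function fullPageScreenshot(client) {
  const { Page, Emulation } = client;
  const { contentSize } = await Page.getLayoutMetrics();
  await Emulation.setDeviceMetricsOverride({
    width: Math.ceil(contentSize.width),
    height: Math.ceil(contentSize.height),
    deviceScaleFactor: 1,
    mobile: false,
  });
  const { data } = await Page.captureScreenshot({ format: 'png' });
  return Buffer.from(data, 'base64');
}

// Demo with a stub client in place of a live connection:
const stub = {
  Page: {
    getLayoutMetrics: async () => ({ contentSize: { width: 1366, height: 4200.5 } }),
    captureScreenshot: async () => ({ data: Buffer.from('png-bytes').toString('base64') }),
  },
  Emulation: {
    setDeviceMetricsOverride: async metrics => { stub.applied = metrics; },
  },
};
const shot = fullPageScreenshot(stub);
shot.then(buf => console.log(buf.toString(), stub.applied.height)); // png-bytes 4201
```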
https://chromedevtools.github.io/devtools-protocol/tot/Target/#method-createBrowserContext
When connecting to an existing Chrome instance and launching multiple createBrowserContexts, there can be data leakage in things like local storage (and potentially other things: cookies, indexdb...).
Repro steps:
createBrowserContext
According to the docs:
Similar to an incognito profile but you can have more than one
This leads me (and I assume others) to believe that each context will have a clean browser slate (no prior persisted data). I could be misunderstanding the motivation behind this API, and this is expected, but the docs lead me to believe that this is a bug.
Curious to hear thoughts on this or if there's a better way to generate a Target that is clean? Thanks!
const chromeLauncher = require('chrome-launcher');
const chromeRemoteInterface = require('chrome-remote-interface');
const prepareAPI = (config = {}) => {
const {host = 'localhost', port = 9222, autoSelectChrome = true, headless = true} = config;
const wrapperEntry = chromeLauncher.launch({
host,
port,
autoSelectChrome,
additionalFlags: [
'--disable-gpu',
headless ? '--headless' : ''
]
}).then(chromeInstance => {
const remoteInterface = chromeRemoteInterface(config).then(chromeAPI => chromeAPI).catch(err => {
throw err;
});
return Promise.all([chromeInstance, remoteInterface])
}).catch(err => {
throw err
});
return wrapperEntry
};
prepareAPI({
headless: false
}).then(([chromeInstance, remoteInterface]) => {
const {Runtime, DOM, Page, Network} = remoteInterface;
Promise.all([Page.enable(), Network.enable(), DOM.enable()]).then(() => {
Page.loadEventFired(() => {
DOM.getDocument().then(({root}) => {
DOM.querySelector({
nodeId: root.nodeId,
selector: '#kw'
}).then((inputNode) => {
//this works well as expected
Runtime.evaluate({
expression: 'document.getElementById("kw").value = "headless chrome"',
});
//the below code does not work and throws : Error: Can only set value of text nodes
// DOM.setNodeValue({
// nodeId: inputNode.nodeId,
// value: 'headless chrome'
// });
}).then(() => {
Runtime.evaluate({
expression: 'document.getElementById("kw").value',
}).then(({result}) => {
console.log(result)
})
})
});
});
Page.navigate({
url: 'http://www.baidu.com'
});
})
});
In the description of the Runtime.exceptionRevoked event:
The id of revoked exception, as reported in exceptionUnhandled.
exceptionUnhandled isn't described anywhere. Is it supposed to refer to exceptionThrown?
I'm confused. Does Page.setDocumentContent set the full page html or does it only set the body content?
I want to understand the workings of the Animation API, and the types of animations it is able to record.
https://chromedevtools.github.io/devtools-protocol/tot/Animation/
Following questions will give better clarity:
I try to capture some animations from a website and stitch them together using ffmpeg.
As far as I understand the docs, startScreencast is the way to go.
If I understand that right, I can start the screencast with
await Page.startScreencast({format: 'png', everyNthFrame: 1});
and listen to every incoming frame with
Page.screencastFrame( image =>{
const {data, metadata} = image;
console.log(metadata);
});
But it never prints anything, so I assume it's not triggered.
I achieved my goal with something like this:
let counter = 0;
while(counter < 500){
await Page.startScreencast({format: 'png', everyNthFrame: 1});
const {data, metadata} = await Page.screencastFrame();
console.log(metadata);
counter += 1;
}
This feels like a non-performant hack. Any suggestions on how to use startScreencast and screencastFrame properly?
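A likely cause, offered as an assumption: each frame delivered via Page.screencastFrame must be acknowledged with Page.screencastFrameAck({ sessionId }), otherwise Chrome stops sending further frames; registering the handler before starting the screencast also matters. A sketch, demoed against a stub Page domain:

```javascript
// Sketch: register the frame handler first, ack every frame, then start
// the screencast. Without the ack, Chrome delivers at most one frame.
async function recordFrames(Page, onFrame) {
  Page.screencastFrame(async ({ data, metadata, sessionId }) => {
    onFrame(data, metadata);
    await Page.screencastFrameAck({ sessionId }); // keep the frames coming
  });
  await Page.startScreencast({ format: 'png', everyNthFrame: 1 });
}

// Demo with a stub Page domain in place of a live connection:
let frameCb;
const acked = [];
const pageStub = {
  screencastFrame: cb => { frameCb = cb; },
  screencastFrameAck: async ({ sessionId }) => { acked.push(sessionId); },
  startScreencast: async () => ({}),
};
const frames = [];
recordFrames(pageStub, (data, metadata) => frames.push(metadata.timestamp))
  .then(() => {
    // The stub now "delivers" a frame the way a real browser would:
    frameCb({ data: '...', metadata: { timestamp: 1 }, sessionId: 7 });
    console.log(frames, acked); // [ 1 ] [ 7 ]
  });
```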
I often get requests (using Network.requestWillBeSent) with an initiator of type script but with an empty callFrames array. Is there any other way to figure out the script which caused the request?
(I'm not sure if this issue is filed correctly with this project, but it seems plausible.)
I'd like to use Chrome in headless mode to capture screenshots as part of a test suite that is scripted w/ node (using this module to interface with a target instance running on the local machine), but I'm noticing that taking a screenshot takes anywhere from 7-25 seconds (from when Page.captureScreenshot is sent from the client to when the response is received).
I want to take quite a few screenshots in each run, so it ends up taking several minutes to run the test, which is just too much.
Has anyone else experienced this? I'm not sure if the issue is in the rendering of the screenshot or somewhere in the transmission of the data to the debugging client, but I'd expect to be able to fetch screenshots from a local instance very fast.
I'm using MacOS and Chrome 59, btw.
I've seen a number of developers interested in this.
Currently, there is no officially maintained typescript definition file. We have no immediate plans to begin offering one, but I wanted to point at a great alternative from the community.
I recommend looking at these two projects:
➡️ @krisselden's chrome-debugging-client (see tot.ts)
➡️ @nojvek's chrome-remote-debug-protocol (see crdp.d.ts). (a fork by @roblourens is currently more maintained)
Comment edited March 2018
Yo, what are you guys up to? 😎 Going for an open protocol called "DevTools protocol"?
When I use pushNodesByBackendIdsToFrontend to map backendIds to frontend nodeIds, I always get { nodeIds: [ 0, 0, 0, 0, 0, ... ] }.
I am trying to pass an HTML file as a string instead of navigating to a URL. I followed the comment from this url: cyrus-and/chrome-remote-interface#95
Page.setDocumentContent sets the html content and loads the needed css files, but javascript files are not loaded (I cannot see them in the sources tab of the devtools).
Hello)
I tried to use the CSS rule usage tracking methods and CSS.takeCoverageDelta to get all the css rules needed to render a page. I expected to get all the css rules needed to render the page, but unfortunately it doesn't work. I got some css rules, but not all, because the page looks ugly if I apply only them.
First I run CSS.startRuleUsageTracking,
then I load the page,
and then (with a 25 sec timeout) I run CSS.stopRuleUsageTracking,
and then I get the css rules from the RuleUsage array (the response from CSS.stopRuleUsageTracking).
Does it work the way I expect, or am I going about this wrong? Could you tell me what I should use to get all the css rules needed for the page's first render?
Thanks a lot)
Step-by-step:
ResourceType and navigate to any entry
Actual: "back" does nothing.
This is unlike every other method, so it should probably be fixed.
In the short term it would be ideal if an error was returned on Network.setUserAgentOverride communicating that enable() wasn't called yet.
Currently printToPDF returns a base64-encoded string. It works great in general, but we are generating rather huge PDFs (min 40 MB; they include a lot of pages with high-resolution images). Keeping such big data in memory causes performance and memory issues with node. It would be nice to get the PDF content as a stream, or as a file.
Currently we are thinking about a workaround with page-by-page printing, but it dramatically increases the complexity of the service. So my question is: are there any plans to support streams for printToPDF?
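For reference, newer protocol versions add transferMode: 'ReturnAsStream' to Page.printToPDF; the response then carries a stream handle that is drained in chunks with IO.read and released with IO.close, so the full PDF never sits in one string. Availability depends on your Chrome version, so treat this as a sketch; it is demoed against a stub client:

```javascript
// Sketch: stream a printToPDF result via the IO domain instead of holding
// the whole base64 string in memory.
async function printToStream(client, writeChunk) {
  const { Page, IO } = client;
  const { stream } = await Page.printToPDF({ transferMode: 'ReturnAsStream' });
  for (;;) {
    const { data, base64Encoded, eof } = await IO.read({ handle: stream });
    if (data) {
      writeChunk(base64Encoded ? Buffer.from(data, 'base64') : Buffer.from(data));
    }
    if (eof) break;
  }
  await IO.close({ handle: stream });
}

// Demo with a stub client that serves the PDF in two chunks:
const pdfChunks = ['%PDF', '-1.4'];
let chunkIndex = 0;
const streamStub = {
  Page: { printToPDF: async () => ({ stream: 'handle-1' }) },
  IO: {
    read: async () => ({
      data: pdfChunks[chunkIndex],
      base64Encoded: false,
      eof: ++chunkIndex >= pdfChunks.length,
    }),
    close: async () => ({}),
  },
};
const parts = [];
const pdfDone = printToStream(streamStub, chunk => parts.push(chunk))
  .then(() => Buffer.concat(parts).toString());
pdfDone.then(pdf => console.log(pdf)); // %PDF-1.4
```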
The /deep/ combinator is being deprecated in the platform. Unfortunately, we use this in Lighthouse in a couple of places to find nodes within shadow roots. This gets injected into the page:
document.querySelector('html, html /deep/ *');
In Chrome 60, we'll have to move to DOM traversal [1] or use the protocol to get the full tree:
driver.sendCommand('DOM.getFlattenedDocument', {depth: -1, pierce: true}).then(result => {
return result.nodes.filter(node => node.nodeType === 1); // element nodes.
});
However, this method is less convenient than qS() and only works against the root node. In some parts of the code, we still want to select nodes from the DOM using a complex CSS selector.
Ideally, DOM.querySelector/DOM.querySelectorAll could be updated to accept a {depth: -1, pierce: true} option... nodes within shadow trees would be returned.
Thoughts? Is there already a way to make this work?
[1]:
let allElements = [];
function findAllElements(nodes) {
for (let i = 0, el; el = nodes[i]; ++i) {
allElements.push(el);
// If the element has a shadow root, dig deeper.
if (el.shadowRoot) {
findAllElements(el.shadowRoot.querySelectorAll('*'));
}
}
}
findAllElements(document.querySelectorAll('*'));