This repository contains compatibility data for Web technologies as displayed on MDN
Home Page: https://developer.mozilla.org
License: Creative Commons Zero v1.0 Universal
Servo refers to https://servo.org/ which is, unlike everything else, a browser engine and not a browser. It is sponsored by Mozilla and runs on desktop systems and on Android.
I'm thinking about a few reasons why Servo isn't a good fit for this data at the moment.
Any other thoughts on this?
Noticed another small inconsistency between WebExtension data and CSS data (working on a macro really helps).
WebExtension data always uses arrays, even if there is only one note in notes. I think this is better from a consumer's perspective, and the plural name notes (not note) implies it, too.
The docs say "notes is an array of zero or more translatable strings containing additional pertinent information. If there is only one entry in the array, the array can be omitted". I don't know the motivation behind this.
Proposed schema change:
Old:
"notes": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
New:
"notes": {
"type": "array",
"items": {
"type": "string"
}
}
@wbamberg @teoli2003 thoughts?
It's very tempting to have large JSON files, as they make things easier to request (one XHR request gets you all the data you need).
However, from a maintenance standpoint I think it will be easier if we have one file per feature set (JavaScript promise is an excellent example of that). Git usually behaves better with smaller files (especially if we want to change large chunks of data), and it will make pull request review far easier.
That said, I understand the need for larger JSON files for XHR. What I suggest here is to add a simple Node script that builds the aggregated JSON files automatically (and a pre-commit hook so it runs automatically for committers). This is pretty trivial and can help us ease the maintenance of our data in the long term.
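As a sketch of what the aggregation step of such a Node script could do (the function and its input shape are assumptions, not existing tooling): deep-merge the parsed contents of every per-feature file into one tree, which can then be written out as the single large JSON for XHR consumers.

```javascript
// Hypothetical aggregation step: deep-merge an array of parsed per-feature
// JSON objects (e.g. one per file) into a single data tree.
function mergeDataFiles(fileContents) {
  const merge = (target, source) => {
    for (const key of Object.keys(source)) {
      const bothObjects =
        target[key] && typeof target[key] === 'object' &&
        typeof source[key] === 'object' && !Array.isArray(source[key]);
      if (bothObjects) {
        merge(target[key], source[key]); // descend into shared branches
      } else {
        target[key] = source[key]; // leaf or new branch: take as-is
      }
    }
    return target;
  };
  return fileContents.reduce((acc, content) => merge(acc, content), {});
}
```

The real script would read the files with fs, call something like this, and write the result; a pre-commit hook would just invoke the script.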
We have a few definitions for the terms obsolete and deprecated.
On MDN, the term obsolete marks an API or technology that is not only no longer recommended, but also no longer implemented in the browser. [...] For Web standard technology, the API or feature is no longer supported by current, commonly-used browsers. MDN reference
On MDN, the term deprecated marks an API or technology that is no longer recommended, but is still implemented and may still work. These technologies will in theory eventually become obsolete and be removed, so you should stop using them. [...] For Web standard technology, the API or feature has been removed or replaced in a recent version of the defining standard. MDN reference
A boolean value that indicates if the functionality is only kept for compatibility purpose and shouldn't be used anymore. It may be removed from the Web platform in the future. BCD reference
Currently, BCD does not have the term deprecated in the status object.
In BCD, version_added and version_removed indicate support ranges and whether support has ended.
I don't think it's necessary to also have "no longer implemented in the browser" as a meaning for obsolete. The current BCD definition of obsolete doesn't mention this either; it appears only in the historical MDN meaning, but people get confused by the two definitions.
Further, on MDN, the deprecated meaning includes "is still implemented and may still work". I think in BCD we shouldn't have this as an implicit meaning of deprecated, as we have version_added and version_removed for when something is supported or not.
Probably deprecated is the more common term for indicating the recommendation status ("do not use this anymore", "it's no longer recommended", "the spec discourages this", etc). So, I think we should use deprecated instead of obsolete. A definition for deprecated in BCD could be:
A boolean value that indicates if the feature is no longer recommended. It might be removed in the future or might only be kept for compatibility purposes. Avoid using this functionality.
On the Web platform, features are rarely ever removed, because the Web needs to be compatible. For WebExtensions, or other APIs, where deprecated features might actually be removed, I think that deprecated is still a good term, too, as we're not mixing implementation status and recommendation status in our definition.
I thought it would be nice to explain why we are thinking about this at all, and why it isn't entirely bikeshedding (though it is somewhat). So here is my take:
Fundamentally, I think BCD should have well defined answers to the two questions "Can I use this?" and "Should I use this?".
For "Can I use this?": version_added and version_removed, together with caveats like flags, alternative names, etc., give you answers.
For "Should I use this?": the experimental boolean should be false for you, the standard_track boolean should be true for you, and the deprecated boolean should be false for you.
Setting "additionalProperties": false should protect us from typos like "friefox" and "chome", as discussed in #86.
In order to make editing the JSON more user-friendly in editors that support automatic validation against a schema, like VS Code, we should have "description" entries in appropriate places so that the contributor can get information about the expectations for each field they're working on.
From looking at code that builds tables from the compat data, I had some questions about how we should use support_statement objects (https://github.com/mdn/browser-compat-data/blob/master/compat-data.schema.json#L47), especially in the case where they consist of an array of simple_support_statement objects.
To recap:
support_statement can be either a single simple_support_statement, or an array of simple_support_statements.
Each simple_support_statement must contain version_added and may also contain version_removed.
version_added and version_removed may be (1) a boolean, (2) null, or (3) a version string.
I think of each simple_support_statement as defining a range of versions during which some compat condition holds (like: it's supported, or behind a pref, or prefixed, or...). (I'm not sure that's a correct way to think of it, but if so...)
(1) Should it be OK for ranges to overlap:
35 - 49
47 - 53
(2) Should it be OK for ranges to include gaps:
35 - 42
47 - 53
If these are OK, what would constructs like that mean, and how can we present them in a comprehensible way?
(3) Do we assume that the ranges are ordered? If so, should we validate the data for it?
47 - 53 // should we be able to handle this?
35 - 47
(4) We allow version_added and version_removed to accept special imprecise values: true, false, and null. What does it mean when we use one of these imprecise values in combination with version-defined ranges?
35 - ?   // what does this mean?
47 - 53
35 - Yes // or this?
47 - 53
35 - No  // or this?
47 - 53
The simplest, most restrictive thing to say would be something like this:
you can only use true, false, and null when there is a single simple_support_statement. If you supply an array of simple_support_statement objects, you must supply actual versions in version_added and version_removed;
if you supply an array of simple_support_statement objects, they must be ordered;
if you supply an array of simple_support_statement objects, only the last one is allowed to include version_removed: the others are implicitly ended by the start of the next one.
I don't know if this would rule out too many useful expressions. But I do think that if we allow particular constructs, we should be able to say what they mean.
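A sketch of what validating these proposed array rules could look like (the function is hypothetical, and the "ordered" check here naively compares numeric version prefixes):

```javascript
// Hypothetical checker for the restrictive proposal: in an array of
// simple_support_statement objects, every version must be a real version
// string (no true/false/null), ranges must be in ascending order, and
// only the last entry may carry version_removed.
function checkStatementArray(statements) {
  const isVersion = v => typeof v === 'string' && /^\d/.test(v);
  for (let i = 0; i < statements.length; i++) {
    const s = statements[i];
    if (!isVersion(s.version_added)) return false; // booleans/null forbidden
    if ('version_removed' in s) {
      if (i !== statements.length - 1) return false; // only last may end a range
      if (!isVersion(s.version_removed)) return false;
    }
    if (i > 0 && parseFloat(s.version_added) <=
                 parseFloat(statements[i - 1].version_added)) {
      return false; // ranges must be ordered by version_added
    }
  }
  return true;
}
```

If we agree on the semantics, a rule like this could live in the test suite next to the schema validation.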
(Initially brought up by @stephaniehobson)
We agreed on these browser names in the past, but we are not using them here:
https://browsercompat.herokuapp.com/browse/browsers
Some of the current issues she observed:
As shown by our Chrome Platform Status entries, no web feature added to WebView is given in terms of an Android version. This is because, since Android L, WebView updates on an approximately six-week cycle, in lockstep with the Chrome version.
I realize there's no easy solution to this problem. It will take considerable time for me (a Google employee) to update every reference to WebView in this database and on MDN.
What I'm asking is that you consider a Travis test that forces (blocking) or encourages (non-blocking) the correct form of WebView version number. Perhaps this solution could be generalized to encourage or enforce proper data for all browser versions, and the rules could be stored in one or more JSON files so that browser vendors could update them as needed.
I would like to propose to rename this repository to mdn/data and use its tooling for all our JSON data.
Currently, whenever we edit "data macros" on MDN and editors are not providing valid JSON, things fall apart. We could highly benefit from having "MDN open data" generally available in a repository like this. Tools like the JSON validator and the schema validators are great to avoid breakage.
Features like geolocation, battery, and some types of storage require the user to authorize the site to access that feature.
We should have a way to record and display when this is the case.
We should disallow dots in feature identifiers, as we use them to access features and they indicate a nested structure. (kumascript accesses these nested structures with, say, {{compat("webextensions.api.devtools.inspectedWindow")}} and thus expects a nested object structure.)
"devtools.inspectedWindow": { ... } shouldn't be in the JSON; it should be nested like "devtools": { "inspectedWindow": { ... } }, etc.
Helpful resource to create this schema rule: https://spacetelescope.github.io/understanding-json-schema/reference/regular_expressions.html.
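Besides the schema regex rule, the same constraint could be checked in the test suite. A sketch (the helper is hypothetical; keys starting with "__", like __compat, are assumed to be reserved and skipped):

```javascript
// Hypothetical lint check: walk the data tree and report any feature
// identifier containing a dot, since dots are reserved as path separators.
function findDottedIdentifiers(node, path = []) {
  const offenders = [];
  if (node === null || typeof node !== 'object' || Array.isArray(node)) {
    return offenders;
  }
  for (const key of Object.keys(node)) {
    if (key.startsWith('__')) continue; // reserved keys like __compat
    if (key.includes('.')) offenders.push([...path, key].join('/'));
    offenders.push(...findDottedIdentifiers(node[key], [...path, key]));
  }
  return offenders;
}
```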
As discussed here: https://groups.google.com/forum/#!topic/mozilla.dev.mdc/d86_ocGyYNc it might be good for the compat data to be able to represent whether a feature is only made available in secure contexts.
I assume this would need to be represented at the level of individual browsers, since different browsers might have different rules about it.
We started to add a CoC in every project. We should do it for this one too!
We have pretty extensive HTML compat info on MDN and we should convert it to the JSON format.
To coordinate the work, I've created this spreadsheet: https://docs.google.com/spreadsheets/d/1ivgyPBr9Lj3Wvj5kyndT1rgGbX-pGggrxuMtrgcOmjM/edit#gid=0
We need to investigate validating version numbers for different browsers. Otherwise we might end up with inconsistencies like "53.0" and "53", or things like "5.3" which would make no sense for Firefox, for example.
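One way to do this, sketched below with illustrative (not agreed-upon) per-browser rules: keep a regular expression per browser describing its plausible version strings, and validate against it.

```javascript
// Sketch of per-browser version validation. The patterns are illustrative
// assumptions, not an agreed list: e.g. Firefox versions are whole numbers,
// optionally with a ".0" suffix, so "5.3" would be rejected.
const VERSION_PATTERNS = {
  firefox: /^[1-9]\d*(\.0)?$/, // "53" or "53.0", but not "5.3"
  safari: /^\d+(\.\d+)*$/      // "10", "10.1", "10.1.2"
};

function isValidVersion(browser, version) {
  const pattern = VERSION_PATTERNS[browser];
  return pattern ? pattern.test(version) : true; // no rule yet: accept
}
```

Whether we normalize to "53" or "53.0" is a separate decision; the point is that one rule per browser catches both inconsistency and impossible values.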
Once we have a good data set, I think it could be very useful to publish the data through npm to let people fiddle with it. This will be easier than forking a raw GitHub repo, and it will help us keep track of the use of our data more easily (npm has statistics and dependency checking).
Hi!
Our current proposal for the schema has notes: these are textual comments.
E.g.
...
"__compat": {
  ...,
  "Internet Explorer": {
    "support": "4.0",
    "notes": ["In Internet Explorer 8 and 9, there is a bug where a computed <code>background-color</code> of <code>transparent</code> causes <code>click</code> events to not get fired on overlaid elements."]
  }
}
There may be several notes (hence the []).
How do we want to translate them? We would like something simple, that is, something that doesn't force us to build something outside GitHub.
One way could be to have an object. Instead of:
["text1", "text2"]
we would have:
[{"en-US": "text1"}, {"en-US":"text2"}]
This would allow storing translated strings from the start and would let macros use them easily. But it would not make maintenance easy: if the en-US text changes, there is no easy way for a translator to know it (besides watching the file), and there is no way of knowing whether a translation is up to date or not.
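On the consumer side, reading such localized notes could look like this sketch (the fallback-to-en-US rule and the locale codes are assumptions):

```javascript
// Sketch of a consumer helper for the proposed per-locale note objects:
// return each note in the requested locale, falling back to en-US when
// that locale's translation is missing.
function localizeNotes(notes, locale) {
  return notes.map(note =>
    note[locale] !== undefined ? note[locale] : note['en-US']
  );
}
```

The staleness problem (knowing when a translation lags behind en-US) would still need something extra, e.g. a hash or date per translation.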
This is a basic proposal. Does anybody have a better idea?
Hi all!
Experimentation showed that it is difficult to come up with the perfect schema for browser compat from the start. As we expect the schema to evolve, we need a way to be sure that this evolution is possible.
The problems to solve are:
Idea:
Start each file containing browser compat data with a version value:
{
  "version": "1.0.0",
  "css": {...}
}
Semantic versioning should be used (3rd digit: patch, no compat impact; 2nd digit: backward-compatible addition; 1st digit: breaking change). That way macros and 3rd parties can know the version used for a file.
As the schema is controlled here, we can ensure that the versioning is indeed semantic.
Note that I don't think allowing the coexistence of several versions of the schema inside the same file is useful.
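A consumer-side check that this versioning enables could look like the following sketch (the acceptance rule, same major and a minor no newer than what the consumer understands, is an assumption about how we would apply semver here):

```javascript
// Sketch: a macro built against schema version supportedVersion can accept
// a data file whose declared version has the same major number and a minor
// number it already understands. Patch differences are ignored.
function isCompatibleVersion(fileVersion, supportedVersion) {
  const [fMajor, fMinor] = fileVersion.split('.').map(Number);
  const [sMajor, sMinor] = supportedVersion.split('.').map(Number);
  return fMajor === sMajor && fMinor <= sMinor;
}
```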
Trying to get feedback on the HTML structure early, but only because I am assuming it's hard to refresh the pages with the macro on them. If it's easy to refresh those pages, this can all be adjusted later.
Put row headings into a <th> tag:
<th scope="row">Basic Support</th>
Don't include the zone in the class names
Namespacing zone-specific classes is usually a good idea :) Since in this case we want this code to be used in other zones eventually, if you pick a more universal namespace we can move the classes to the global style sheets more easily when they're ready.
I suggest .ct for compat-table.
Rather than hard coding widths on the browser name cells, add classes to indicate they are browser names and add a parent class to the table to indicate how wide they should be.
<table class="ct-summary ct-browsers-5">
<th class="ct-browser">
.ct-browsers-5 .ct-browser { width: 12%; }
This will make it easier to adjust the tables for mobile display and to change widths globally if we want to.
Use a consistent namespace for all CSS classes
.ct-summary
.ct-browser
.ct-support-full
.ct-support-unknown
.ct-support-no
Hi folks :)
I noticed that our JSON files are currently not really consistent; it would be nice if they could all validate against the same JSON schema.
After looking at the existing JSON, I'm suggesting the following schema:
https://gist.github.com/JeremiePat/f83f3cb00471c1b90d126fd6989256eb
In short, it defines some meta types and how to aggregate them:
basic-support
browser
Moving to such a schema will obviously require updating our JSON files; I think now, before we accumulate too much data, is the right time to make such changes.
It's also worth noting that such a data-structure change will require the MDN team to update the KumaScript macros they use to pull the data.
This test states that neither IE nor Edge support max-age. I have similar concerns about support for cookie prefixes...
There is no way to see when a specific manifest key is supported:
E.g. look at https://developer.mozilla.org/en-US/Add-ons/WebExtensions/manifest.json/commands
I want to add some version-specific compatibility notes. E.g. base support in Firefox 48, support for _execute_browser_action in Firefox 52, a note that Firefox requires suggested_key to be set whereas Chrome allows it to be null.
Currently the work-around is to write it in the text itself, but that makes it more difficult to quickly find whether an API is well-supported or not.
Currently, npm test checks all files.
For daily work, it would be useful to be able to check only a specific JSON file.
Something like: npm test my-dir/my-file.json
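One way to support this, sketched below: have the test runner read an optional path from process.argv and filter the file list with it. (With npm, arguments usually have to be passed through as npm test -- my-dir/my-file.json; the helper name is hypothetical.)

```javascript
// Hypothetical helper for the test runner: if any .json path was passed on
// the command line, test only the matching files; otherwise test everything.
function filesToTest(allFiles, argv) {
  const requested = argv.slice(2).filter(a => a.endsWith('.json'));
  return requested.length === 0
    ? allFiles
    : allFiles.filter(f => requested.some(r => f.endsWith(r)));
}
```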
We should have a nice short doc covering things we'd like to check in the compat data, but that the schema doesn't enforce.
Because there doesn't seem to be much going on on the CSS side, I had started something very similar here:
https://github.com/praveenpuglia/css-support
But if this project is going to be actively maintained, I would rather contribute here than do things on my own, where it's hard to get contributions.
On https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Browser_support_for_JavaScript_APIs it would be very helpful to have links to bugs that have been filed for issues.
With the current schemas, a feature is supported or not. If it is supported prefixed, we say it is not supported and add the prefix support as a note.
Similarly, if a feature was supported in the past, it is marked as non-supported, with a note indicating that it was supported before.
This works well for the use case of display inside MDN, but it obviously limits the ability to answer questions like 'What features were supported in Firefox 37?', as features removed after Fx 37 will just be marked as not supported.
Is this a limitation we are ready to accept? Or should we add a way to describe these now?
WebGL 2.0 is available on Chrome for Android as of 58, and Firefox for Android as of (I think) 51.0 (@jdashg, can you confirm?)
This probably means it will be available in Opera for Android as of 45 or so, but I haven't checked.
Hi.
In generated browser compatibility data Internet Explorer appears to have a version 15. E.g.:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padEnd
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padStart
IE11 is said to be the last version, and IIRC the first Edge was internally 13 (or 14, I'm not sure), so they skipped a version, probably to leave a gap for a potential IE12...
In any case there is no IE15, and the generated browser compatibility tables are misleading. For IE it should say "No support", like e.g. here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/values#Browser_compatibility
Currently it's marked as unsupported for Firefox in the WebExtension docs.
Now that there will soon be no subfeatures anymore, there is no way to tell whether a feature has a dedicated MDN page. There was no guarantee before, but it was quite likely that a feature had an MDN page and that subfeatures didn't. I think it would be good to make this explicit in the data. Someone who integrates the data could then offer links to MDN, which is beneficial both for MDN and for the data displayer.
The schema might add mdn_url like this:
"compat_statement": {
"type": "object",
"properties": {
"description": { "type": "string" },
"mdn_url": { "type": "string", "format": "uri" },
"support": { "$ref": "#/definitions/compat_block" },
"status": { "$ref": "#/definitions/status_statement" }
},
"required": ["support"],
"additionalProperties": false
},
Thoughts?
Currently, all the JSON is in one directory. Right now, that's not too bad... but eventually we're going to have hundreds of them, and it will become unmanageable. We should preempt this issue by deciding on a directory structure for the data and start using it right away.
We could simply divide it by API, with folders named things like "DOM" and "Intersection_Observer", or we could use API groups, or some other structure that makes sense.
Florian tells me the code already scans all the subfolders (if any) for JSON, so the structure doesn't matter at all. That means we can make this change just by creating directories and moving the existing files into them.
@wbamberg suggested that the schema documentation should be copy-edited. I agree with him.
Currently, APIs and their sub-features are listed in arbitrary order. I think we should put them in the order they have been added to Firefox (then Chrome). There is also the suggestion to sort them alphabetically.
I prefer implementation order, since the story reads more clearly, having essential features that are widely supported listed first, followed by newer features. Also this seems consistent with the vast majority of existing documentation on MDN.
The majority of feature gates are flags or preferences nowadays, but some experimental features are gated on a specific release channel such as Nightly or Canary. I'm unclear how to represent this in the current schema: these channels change version numbers every 6-10 weeks and are updated daily.
Collecting examples:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer/transfer
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/values
Should we expand the "flag"
object to contain type: "channel"
?
"firefox": {
"version_added": false,
"flag": {
"type": "channel",
"name": "Firefox Nightly"
}
}
The URL https://github.com/mdn/browser-compat-data/README.md, which I found hidden on https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition$edit in a way that leads me to suspect that it is also found on many other pages, unfortunately does not actually work.
I suspect https://github.com/mdn/browser-compat-data#readme would do nicely, but lack the means to do a global search/replace on MDN (and anyway, it's safer to get bulk edits reviewed before they are done).
Noticed that here is "true" (as a string) for a few features. This should either be a version number or a real true
. Would have been caught by better validation. We should work on that, too.
The Chrome documentation provides more accurate compatibility information for some (though not all) APIs than what is currently shown on MDN.
I hacked together a Python script that imports compatibility data for APIs we have listed here, that have explicit compatibility information in the Chrome documentation.
Here is how the changes generated by the script would look. If these look good, I can open a pull request, or you could just run the script yourself. Otherwise, I'm also happy to make changes to the script in case I missed something.
Edge now supports the bookmarks API: https://docs.microsoft.com/en-us/microsoft-edge/extensions/api-support/supported-apis.
We should update the compat data.
... when the WebExt macro supports that.
Hi!
How should we name the different user agents in the JSON?
E.g.
{
  "id1": {
    "id2": {
      "__compat": {
        "Internet Explorer": {...},
        "Firefox": {...}
      }
    }
  }
}
Proposal 1:
Use a real name (even if it isn't the displayed one) like "Internet Explorer", "Firefox".
Proposal 2:
Use a more id-like name: "internet_explorer", "firefox", ... to make it clear it is not the displayed name.
Also, do we want to make the list of values part of the schema (mandatory, so it can be validated against the schema), or not (missing values would be considered 'false')?
The current code has notifications.onButtonClicked events ignored:
onButtonClicked: ignoreEvent(context, "notifications.onButtonClicked"),
This is because buttons are not yet implemented (comment from the code):
// FIXME: Lots of options still aren't supported, especially
// buttons.
Right now, we differentiate between "feature tables" and "feature aggregate tables" in the KS macro [1]. This requires that features and __compat never appear on the same level. However, Jean-Yves points out that you actually often want exactly that.
For example, you would have __compat, href, and target on the same level in this structure, where __compat indicates the compat for the base element itself:
For APIs, we ran into the same problem when we wanted to indicate support for the interface itself. I ended up repeating the interface name in these cases as a workaround, to avoid __compat on the same level as the other features.
The String.String or base.base structure might not be ideal here, so we need to decide whether we want to rewrite the {{compat}} macro to work with structures that allow __compat and feature trees on the same level. Basically, whether we want structure (1) instead of structures (2) and (3).
Thoughts?
[1] https://github.com/mozilla/kumascript/blob/master/macros/Compat.ejs#L424
I'm wondering if we should add custom exports in addition to the exports by feature identifiers.
Right now, we export e.g. "api.WebGLRenderingContext" and get back the whole interface's compat data. This works in most cases, also for css, http, and webextensions.
Sometimes, there are use cases for custom aggregates, though. Like, for example, all WebGL extensions: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Using_Extensions#Browser_compatibility
This table is still static and would consist of "api.ANGLE_instanced_arrays", "api.EXT_blend_minmax" and 26 more interfaces, which I can't display like this using the {{compat}} macro at the moment.
I think an additional export like "custom.webgl-extensions" (or similar) that would load all these interfaces would be useful.
Other ideas on how to deal with custom aggregate tables?
AFAICS there's a jsonlint ... --environment json-schema-draft-04 call in .travis.yml. It makes jsonlint call JSV.createEnvironment(options.env).
But JSV has only 3 environments that register themselves. JSV provides a method registerEnvironment() to register custom environments, but jsonlint doesn't proxy it.
I see 3 options here:
call JSV directly;
make JSV support json-schema-draft-04 out of the box;
extend one of JSV's current environments.
What would you choose?
Currently, we have a few tests executed directly in the .travis.yml file.
We should wire them into npm test properly.
Hi all!
Our schema must be able to identify different features. We need to agree on the way to identify these.
We have a few constraints: we would prefer not to rely on the file structure we choose, as it is unclear what it will be in the future (one large file, numerous small files, or even a DB).
For the moment, the experiments have shown 2 kinds of file structure in use:
We don't know which of these file structures will be best in the long term, so we need an identifier that works in both cases (without too much added complexity).
The proposal is the following: we put the id inside the JSON.
E.g. css.properties.line-height would be the id for the line-height property, written:
{
  "css": {
    "properties": {
      "line-height": {...}
    }
  }
}
E.g.
{
  "WebGL2RenderingContext": {
    "api": {
      "WebGL2RenderingContext": {...},
      "WebGL2RenderingContext.beginQuery": {...},
      ...
    }
  }
}
To indicate the bottom of the id, the idea would be to use a special keyword ("__compat" or similar) to indicate that what follows is the compat information.
What do you think of this idea?
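The lookup side of this proposal can be sketched like so (the function is hypothetical): walk the tree along the dotted id, then read the __compat keyword at the bottom.

```javascript
// Sketch of how a macro could resolve a dotted identifier such as
// "css.properties.line-height": walk the nested objects along the path,
// then return the __compat block that marks the bottom of the id.
function lookupCompat(data, id) {
  let node = data;
  for (const part of id.split('.')) {
    if (node === undefined || node === null) return undefined;
    node = node[part];
  }
  return node ? node.__compat : undefined;
}
```

This works identically whether the data lives in one large file or many small ones, which is the point of keeping the id inside the JSON.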