
media-source's Introduction

Media Source Extensions™ Specification

This is the repository for the Media Source Extensions™ (MSE) specification. You're welcome to contribute! Let's make the Web rock our socks off!

Byte Stream Format specifications

The Byte Stream Format Registry and related specifications that used to be developed in this repository moved to dedicated repositories at the end of November 2020.

Issue labeling and milestone guidance

Labels

Each bug may have multiple labels.

“needs feedback”:

The issue is pending further clarification from the assignee, likely the original bug filer or another participant who reported aspects of the issue in the bug’s history. The feedback request needs to be in a comment associated with the addition of this label, along with a request for reassignment back to an editor once feedback is provided.

“needs author input”:

The editors are seeking input from web authors on the issue. For example, whether a requested change is useful or how best to expose information.

“needs follow-up”:

The assignee, likely an editor, needs to investigate more deeply before we can decide if this “needs implementation” or to otherwise move forward. The editors have discussed the issue and do not need to discuss it further until we have the resulting follow-up from the assignee. This includes things like determining external spec dependencies, seeking input from other spec owners and/or WGs, confirming the understanding of the nature of the bug, and beginning to formulate a path to a solution.

This doesn’t necessarily mean follow-up has “Started” or is “In Progress.”

“needs implementation”:

The steps needed to resolve this issue are clearly understood and agreed upon. This likely means drafting and committing a spec change, possibly via a pull request. No further discussion is necessary at this time, though review of the change may still be appropriate. Should that change, this label should be removed.

For a bug to be labeled with this, it needs to be understood well enough and be in scope of the marked milestone. Otherwise, “needs follow-up” or punting to a later milestone might be options.

This label does not refer to user agent implementations.

“blocked”:

Some external dependency, or another GitHub issue identified in a comment associated with this label’s addition, is (or might be) blocking progress on this bug.

“feature request”:

The issue is related to or requesting a new use case or capability not currently (explicitly) covered by the spec. Depending on the nature and impact of the request and the stage of the spec, it may be assigned to a future milestone.

“interoperability”:

Resolution of this issue is particularly important for interoperability among user agents. This may include breaking API changes, issues related to media compatibility across user agents, or ambiguous parts of the spec that could lead to different incompatible interpretations. There may be known or probable differing interpretations in implementations of the associated portion of the spec. If the identified issue is not addressed, there is a high likelihood of meaningful interoperability problems. The fix for the issue would need to provide a clear direction to prevent differing interpretations by user agents.

"breaking change"

The issue's resolution might cause re-entry into the current spec phase, for example CR.

“wontfix”, “invalid”, “duplicate”:

Self-explanatory :)

Issues with these labels should always be closed (unless they were re-opened at which time an editor should probably remove these labels if the re-opening is accepted).

Milestones

See milestones for the full list. Milestones “V1”, “V1Editorial” and “V1NonBlocking” were reached with the publication of the first version of MSE as a W3C Recommendation in 2016 and are no longer current. The following milestones are used to track issues:

“V2”

Issues flagged with a V2 milestone describe new features in scope of the second version of MSE. Immediate work on these issues is expected.

“V2BugFixes”

Issues flagged with a V2BugFixes milestone describe bugs raised against the first version of MSE that need to be fixed before the second version gets published as a W3C Recommendation. These bugs do not introduce new features.

“Backlog”

Issues flagged with a Backlog milestone describe features, questions, bug fixes, and editorial changes that are not yet in scope for the current version of the specification and may be addressed in a later version.

(no milestone)

Issues that are not associated with any milestone have not been triaged yet.


media-source's Issues

Normative reference to the File API working draft

The File API spec, which MSE references normatively, is not yet a Recommendation or Proposed Recommendation but still a Working Draft. This means that, before we submit our transition request to PR, we have to ensure its stability. (See Normative References.)

The File API spec currently covers five features:

  1. Blob interface/object
  2. File interface/object
  3. FileList interface
  4. FileReader API
  5. Blob URL

Given that MSE only uses the fifth item, Blob URL, which is a relatively small portion and seems stable, there are four ways we could achieve the required stability.

  1. If the WPWG has no plans to develop the spec further, we'll ask the WPWG chairs to take it to PR.
  2. If the WPWG has a plan to develop the spec:
    2-a) we'll ask the WPWG chairs to split the WD into two specs, one about Blob URL and the other for the remaining features, and take the Blob URL spec to PR.
    2-b) we'll ask the WPWG chairs to make it clear in the spec that the Blob URL part is stable and won't change in the short term.
  3. If none of the above is feasible, we copy what we need from the File API spec into MSE, with a note that we will change that part back to a normative reference to the File API spec when the WPWG takes it to PR.

Through the initial conversation with the WPWG chairs, the team found that option 1 was not available (the spec currently has a development proposal in the group), and that option 2-a was not what the chairs wanted, because they think the spec should remain inclusive of file-related features.

Thus, the options left to us currently are 2-b) and 3).

I'll start analyzing these two options and list their pros and cons. In the meantime, your thoughts on other options we could pursue to resolve this issue are welcome.

Move VideoPlaybackQuality object to a separate extension spec

I propose that we remove the VideoPlaybackQuality object and the corresponding getVideoPlaybackQuality() extension to the HTMLVideoElement element from v1, to be incubated as a separate extension. (Possible outcomes include a separate Playback Performance Extensions spec or integration into HTML 5.x.)

  1. Although this functionality may be useful for adaptive streaming use cases, it is really an independent extension to HTMLVideoElement that is orthogonal to MSE [1]. It does not affect the core functionality of MSE and should be supported for all source types (including .src=).
  2. There is some question about the best way to provide information about current playback performance to applications. As one example, there is not currently a way for applications to observe changes. This capability would benefit from broader incubation, including from those that did not participate in the MSE & EME TF. If this functionality goes to REC along with the rest of MSE, any future efforts will need to deprecate part of the MSE spec or build around it.
  3. One member (totalFrameDelay) of this object is already marked At Risk.

Moving this functionality out of the MSE spec should actually simplify the path to PR and REC by reducing the scope of the spec and removing one of the At Risk features.

Note: Moving this functionality to another spec/forum does not affect what browsers implement - only the declared maturity level of the feature. Implementations that have implemented some or all of it may continue to do so, but further incubation and iteration is likely before it becomes a Recommendation. Implementation experience will be valuable input to that process.

Note: We can temporarily move these sections to a separate page within the media-source repo to provide an informal reference until we identify a new home.

[1] When this was added to MSE, there was no established or lightweight mechanism for small specs or extensions to existing specs. Since that time, the W3C and even HTML have adopted more iterative processes.

High-level overview of media specifications

From
https://lists.w3.org/Archives/Public/public-html-media/2016Jan/0000.html

[[
We believe the MSE is not the appropriate specification to show
how multiple media objects, such as the primary video plus a
sign-language translation video plus captions plus described video are
unencrypted (EME) and synchronized, even when each comes from a
different server. However, we believe the W3C needs a high-level
overview of how our various specifications fit together to deliver the
total user experience defined by HTML and by the Media Accessibility
User Requirements (MAUR).[1] We request your assistance in creating this
high-level overview document, and in using alternative media examples
where appropriate across W3C media specifications in order to
illustrate for authors and user agent developers how W3C specifications
work together to meet the widest possible assortment of user needs.
]]

Should MSE cause any "preload" attribute on parent media element to be ignored?

For an example of interop issues around at least the preload="none" case, see Chromium bug https://crbug.com/539707.

Since the MSE API gives the web app control over buffering, the preload attribute on the media element seems meaningless. Am I missing some particular user agent behavior for a media element with a MediaSource attached that should be conditioned on the media element's "preload" attribute? Or should such a media element explicitly ignore the "preload" value, or override it to "auto", upon attachment of a MediaSource?
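For concreteness, a minimal sketch of the attachment in question; whether any of the work below may be deferred under preload="none" is exactly what seems underspecified (the element lookup and codec string are illustrative):

  const video = document.querySelector('video');
  const mediaSource = new MediaSource();

  // With MSE, buffering is app-driven, so what should preload affect?
  video.preload = 'none';
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', () => {
    // The app decides what and when to append, independent of preload.
    const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9"');
  });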

createObjectURL() not up-to-date with File API

The description of createObjectURL() has the following note:

This algorithm is intended to mirror the behavior of the createObjectURL()[FILE-API] method with autoRevoke set to true.

The note is followed by an algorithm similar to the one in the File API draft from 25 October 2012.

The issue is that the current version of the File API has reworded the description of the algorithm and removed the autoRevoke parameter from that function. createObjectURL() now always creates non-auto-revoking URLs; createFor() was added to create auto-revoking URLs.

I suggest that createFor() should be added to the specification and that the behaviour of both functions should mirror the corresponding functions in the current File API. If for some reason you only want to provide the auto-revoking version, I suggest naming the function createFor() to avoid confusion with the File API.

I personally think that forcing users to deal with URL auto-revocation at an unspecified time isn't a great thing to do, and that both versions should be provided.
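For reference, a minimal sketch of the attachment pattern the note describes; with auto-revoking behavior, the URL must be used immediately (the video element lookup is illustrative):

  const mediaSource = new MediaSource();

  // Per the note, this URL behaves as if autoRevoke were true, so it
  // should be assigned to a media element right away; holding on to it
  // for later use is exactly the source of confusion this issue raises.
  const url = URL.createObjectURL(mediaSource);
  document.querySelector('video').src = url;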

Change HTTP Referer

My player uses Media Source Extensions. I would like to change the Referer header sent when requesting an external video; how can I do that? (I've asked around, and people tell me the headers can only be changed with MSE.)
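A sketch of what is and isn't possible here, assuming an async context with a segmentUrl and sourceBuffer already set up: with MSE the app fetches segments itself, so it controls the request, but the Fetch spec treats Referer as a forbidden header name that scripts cannot set directly.

  // The app controls most of the request, including the referrer,
  // but cannot write the Referer header itself.
  const response = await fetch(segmentUrl, {
    referrer: 'https://example.com/page', // must be a same-origin URL (or '' to omit)
    headers: { 'X-Custom': 'allowed' }    // a 'Referer' entry here would be silently dropped
  });
  sourceBuffer.appendBuffer(await response.arrayBuffer());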

Add names to Acknowledgements

In w3c/encrypted-media#109 (comment), Kazuyuki Ashimura (Web and TV IG Team Contact) requested adding names to the MSE Acknowledgements for individuals who contributed to IG specs that reference MSE.

Names requested to be added are:

• Clarke Stevens, CableLabs (MPTF moderator)
• Hiroyuki Aizu, Toshiba
• Kaz Ashimura, W3C
• Richard Bardini, Intel
• Russell Berkoff, Samsung Electronics
• Pablo Cesar, CWI
• David Corvoysier, Orange
• Francois Daoust, W3C
• Franck Denoual, Canon Research Centre France
• Davy Van Deursen, Ghent University, IBBT
• Jean-Claude Dufourd, Telecom Paristech
• Jerry Ezrol, AT&T
• Narm Gadiraju, Intel
• Juhani Huttunen, Nokia
• Hyeonjae Lee, LG Electronics
• Jason Lewis, Disney
• Jan Lindquist, Ericsson
• David Mays, Comcast
• Nilo Mitra, Ericsson
• Giuseppe Pascale, Opera
• Youngsun Ryu, Samsung Electronics

abort() shouldn't attempt to interrupt the current buffer append

Overview:
I believe abort() shouldn't interrupt the coded frame processing algorithm: as currently described, abort() results in non-deterministic behavior with regard to which frames have actually been added to the source buffer.

Rather than interrupt something that is really non-interruptible, it should only guarantee that the next call to appendBuffer, or the next modification to timestampOffset or the append window, will succeed.

Detailed explanation:

Per spec:
"Aborts the current segment and resets the segment parser."

Which then calls the Reset Parser State which defines:

"If the append state equals PARSING_MEDIA_SEGMENT and the input buffer contains some complete coded frames, then run the coded frame processing algorithm until all of these complete coded frames have been processed."

Now, let's look at the behavior of the append state value, which is defined in the "Segment Parser Loop".
In a typical MSE transaction, it goes from WAITING_FOR_SEGMENT to PARSING_INIT_SEGMENT, then cycles between WAITING_FOR_SEGMENT and PARSING_MEDIA_SEGMENT.

Now let's assume an appendBuffer is done with data containing "media_segment1 | media_segment2 | media_segment3".
The segment parser loop runs asynchronously; during its execution it updates the append state, runs the coded frame processing algorithm for a single media segment, sets the append state back to WAITING_FOR_SEGMENT, and repeats.

Now we have an abort() and the Reset Parser State step.

Let's assume that at the exact time the reset parser state algorithm runs, the segment parser loop had been interrupted just after finishing processing media_segment1. As such, the append state is WAITING_FOR_SEGMENT.

As the append state is not equal to PARSING_MEDIA_SEGMENT, none of the remaining frames in the input buffer will be processed (as the conditional is an AND).

As step 7 of the reset parser state algorithm is "Remove all bytes from the input buffer.", the source buffer following this abort contains only media_segment1.

If, however, the abort occurred while media_segment1 was still being processed, the append state would be PARSING_MEDIA_SEGMENT: all remaining complete frames in the input buffer would be processed, and the source buffer would then contain all frames of media_segment1, media_segment2, and media_segment3.

The behavior of abort() is as such racy and non-deterministic (and that's ignoring the fact that interrupting an asynchronous step is inherently impossible to achieve).

I believe abort() should be made clearer to remove all ambiguities.

abort() should now be:

3. If the buffer append or stream append loop algorithms are running, then run the following steps:
  1. Set the aborting flag to true.
  2. Wait for the buffer append and stream append loop algorithms to complete.
4. If the updating attribute equals true, then run the following steps:
  1. Set the updating attribute to false.
  2. Queue a task to fire a simple event named abort at this SourceBuffer object.
  3. Queue a task to fire a simple event named updateend at this SourceBuffer object.
5. Run the reset parser state algorithm.

Step 1 of the reset parser state algorithm should be removed, and a last step (now 8) added:
"8. If the aborting flag equals true, then set the aborting flag to false."

Buffer Append Algorithm now becomes:
2. If the segment parser loop algorithm in the previous step was aborted or if the aborting flag is set to true, then abort this algorithm.

Stream Append Loop now becomes:
14. If the segment parser loop algorithm in the previous step was aborted or if the aborting flag is set to true, then abort this algorithm.

Ultimately, the only reason for abort() is to guarantee that the next operation relying on the updating attribute value will complete, and that any operations depending on the append state will also succeed (that is, changing the mode attribute and the timestampOffset attribute).

At least, this is how I've seen all DASH players use it (including YouTube).
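For illustration, the app-visible consequence of the race described above (concatenatedSegments stands in for the three appended segments):

  sourceBuffer.appendBuffer(concatenatedSegments); // seg1 | seg2 | seg3
  sourceBuffer.abort(); // interrupts the asynchronous segment parser loop

  sourceBuffer.addEventListener('updateend', () => {
    // Per the current text, what landed in the buffer depends on where
    // the parser happened to be: only seg1, or all three segments.
    if (sourceBuffer.buffered.length) {
      console.log('buffered up to', sourceBuffer.buffered.end(0));
    }
  });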

WebM bytestream format is too restrictive around "random access points"

The current text: "A SimpleBlock element with its Keyframe flag set signals the location of a random access point for that track. Media segments containing multiple tracks are only considered a random access point if the first SimpleBlock for each track has its Keyframe flag set. The order of the multiplexed blocks must conform to the WebM Muxer Guidelines."

This has multiple overly-restrictive pieces that could be relaxed, given the robustness of the coded frame processing algorithm. I suggest:

  1. Replacing the first sentence with something like "A random access point for a track is signalled by a SimpleBlock element with its Keyframe flag set for that track, or a BlockGroup element having no ReferenceBlock elements for that track."
  2. Drop the second sentence; use (1) and the coded frame processing algorithm. Keep the third sentence as-is.

Do these changes relax that bytestream spec too much to achieve reliable interoperability?

Seekable differs from non-MSE behavior

The MSE spec seems to indicate that the highest end time for seekable should never exceed the highest buffered time when the duration is set to Infinity. The HTML standard indicates that duration should be Infinity for unbounded or live media, and that user agents should be very liberal in determining seekable ranges for media.

Safari on iOS and OSX seems to have interpreted this as meaning that the seekable range should include the time ranges covered by all segments in the current "sliding window" of content in a live HLS video. That definition is convenient because it makes seeking to the live point or building a DVR interface a simple operation for downstream developers, and seems in keeping with the spirit of the HTML standard. It does not seem possible to configure Source Buffers or a Media Source to achieve the same effect. Is there a mechanism to override seekable with out-of-band info like you might get from an M3U8?
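For reference, a sketch of the clamping behavior this reading implies; there appears to be no way to substitute the wider sliding window known from the M3U8 (variable names are illustrative):

  // Unbounded live stream:
  mediaSource.duration = Infinity;

  // Under this reading of MSE, seekable's highest end time never
  // exceeds the highest buffered time, so the "live edge" the app can
  // offer is limited to what it has already appended:
  const s = video.seekable;
  const liveEdge = s.length ? s.end(s.length - 1) : 0;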

More flexible error handling

It may be possible for app developers to transparently handle decode errors when the source material is available from redundant sources or at different quality levels. Currently, the append error algorithm dictates that the end of stream algorithm is invoked unconditionally on decode errors. This triggers an error on the HTMLMediaElement and updates its state to indicate a fatal condition. Ideally, the app developer would have some opportunity to intercept decode errors, prevent this condition from propagating to the Media Element, and provide alternate content at some point in the future. Maybe something as simple as Event.preventDefault()?
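A sketch of what that might look like; the cancelable decode-error event and the recovery helper below are hypothetical, not part of the spec:

  // Hypothetical: a cancelable decode-error event on SourceBuffer.
  sourceBuffer.addEventListener('error', (e) => {
    e.preventDefault(); // hypothetically stop the fatal state change on
                        // the HTMLMediaElement
    fetchFromRedundantSource() // hypothetical app-level fallback fetch
      .then((data) => sourceBuffer.appendBuffer(data));
  });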

ISO BMFF Byte Stream Format requires avc1-4 support

https://w3c.github.io/media-source/isobmff-byte-stream-format.html#iso-init-segments normatively says:

The user agent must support parameter sets (e.g., PPS/SPS) stored in the sample entry (as defined for avc1/avc2), and should support parameter sets stored inband in the samples themselves (as defined for avc3/avc4).

There are several issues:

  • User agents are required to support parameter sets.
  • User agents must do so according to avc1-4.
  • An example ("e.g.") is provided in normative text.

Ideally, the format specification would not need to be updated to handle new specifications. Is there a way to accomplish the purpose of this text in another way? Perhaps an example in a non-normative Note will provide the necessary context for a more general statement.

CTS vs DTS, which is correct?

I could not find any information in the spec about whether buffered should report the DTS or the CTS of the appended frames. Firefox and Safari show CTS, while Chrome and IE/Edge show DTS. Is one or the other to-spec, or are both valid implementations that applications must deal with?
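A small illustration of where the divergence shows up, assuming a segment whose frames are reordered (B-frames), so DTS and CTS differ:

  sourceBuffer.appendBuffer(segmentWithBFrames); // DTS != CTS for these frames
  sourceBuffer.addEventListener('updateend', () => {
    // Per this report: Firefox and Safari report CTS-based ranges here,
    // while Chrome and IE/Edge report DTS-based ranges.
    if (sourceBuffer.buffered.length) {
      console.log(sourceBuffer.buffered.start(0), sourceBuffer.buffered.end(0));
    }
  });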

ISO BMFF bytestream: how can CEA 608 / 708 embedding be supported?

If I understand correctly, CEA 608 / 708 embedding of text track data is one option for sourcing in-band text track data, described within https://dev.w3.org/html5/html-sourcing-inband-tracks/#mpeg4.
However, such embedding, and the signalling of the embedding, occurs after the ISO BMFF initialization segment.
Since track types and counts must remain consistent across initialization segments for a SourceBuffer, and the initialization segment received algorithm is the sole place in MSE where the various track attributes (like SourceBuffer.textTracks and HTMLMediaElement.textTracks) are populated, is it impossible to support CEA 608 / 708 embedded text track data in ISO BMFF in any compliant MSE implementation?

@silviapfeiffer: Am I missing some part of the signalling for CEA 608 / 708 in ISO BMFF that actually occurs within an MSE initialization segment?

@jdsmith3000 / @mwatson2 / other MSE user agent implementors: Do you have this working somehow? Does the MSE ISO BMFF byte stream spec and/or https://dev.w3.org/html5/html-sourcing-inband-tracks/#mpeg4 need some update, or am I indeed missing some simple signalling within the defined MSE ISO BMFF initialization segment that allows such embedded text tracks to be known at the time the initialization segment received algorithm executes?

Inconsistency in "ie 2 audio tracks"

Other instances use "i.e.". Searching for that, I also see that "(i.e., Frames included in the totalVideoFrames count, but not in the droppedVideoFrames count." is missing a closing ")".

Best practice for detaching?

How should an application detach a MediaSource from the media element? In practice, I have been setting video.src = '', but it is unclear from the spec whether there is a better way.

There is an algorithm called "Detaching from a media element", but I don't understand how I am supposed to trigger it, if at all.
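For reference, the pattern in question, on the (unconfirmed) assumption that restarting resource selection is what triggers the detach algorithm:

  // Common detach pattern; whether this is the intended trigger for
  // "Detaching from a media element" is exactly the question.
  video.removeAttribute('src'); // or video.src = ''
  video.load();                 // restarts the resource selection algorithm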

Inaccessible diagram

From
https://lists.w3.org/Archives/Public/public-html-media/2016Jan/0000.html

[[
The diagram in this document should be made accessible. There
are several ways to achieve this, including accessible vector graphics
with SVG, a link to an external description (standard or
@longdesc), or an in-page description of the graphical content.

a.) We provide for your reference a draft textual diagram
description, though our draft requires enhancement as per b.)
below. The draft description is available at:

http://lists.w3.org/Archives/Public/public-apa/2016Jan/0014.html

b.) In addition to a textual description of your diagram for
the benefit of those readers who cannot see it, we find the
diagram also does not clearly convey its meaning and may benefit
significantly from redesign. Please reconsider what you're
trying to convey and determine if there is a more intuitive way
to present the data.  Please clearly label your switches,
connector endpoints, and relationships, e.g. What are the three
dots under the video decoders? Are all the audio decoder
switches open or is that a drawing error? What is the green
circled X that the audio decoders are pointing to? What are the
triangles? What is the meaning of the different colors of
SourceBuffers? What is the viewer expected to learn from this
diagram?

c.) Please consider creating this diagram using SVG. We
believe it is actually an excellent candidate for the accessible
graphic markup approaches being defined by the joint ARIA and
SVG Task Force.[2] One member of this TF has already offered to
assist, should you decide on SVG.

]]

Needs a mechanism to present text and graphics accurately synchronized with video at low processing load

Since the interval of the timeupdate event depends on the implementation, it is difficult to strike a good balance between accuracy and processing load across different implementations. Using setInterval can control frequency and accuracy, but it requires additional processing that has a significant impact on low-power devices like TVs. An event mechanism like ActionScript's cuePoint would be ideal, but no such method is yet defined.
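The workaround available today, sketched below, illustrates the trade-off; renderOverlayAt is a hypothetical app function:

  // timeupdate alone fires at an implementation-defined interval, too
  // coarse for accurately synchronized overlays; polling tightens the
  // accuracy at a CPU cost that hurts low-power devices like TVs.
  setInterval(() => {
    renderOverlayAt(video.currentTime); // hypothetical app render function
  }, 100); // 100 ms: one possible compromise between accuracy and load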

This is one of the issues raised in the Web and TV IG meeting at TPAC 2015 by Hybridcast group.
https://www.w3.org/2011/webtv/wiki/images/6/66/Webtv_mse_eme_iptvfj_player_20151026.pdf

Report buffer changes in update events

Currently it is difficult, and in some cases impossible, for application developers to determine the time ranges that were added or removed as a result of operations on SourceBuffers. Some video formats (HLS, for example) do not provide timestamp information in their manifest files and require the application to synchronize manually during quality changes. With video-on-demand content, the application is normally able to accurately calculate the correct content to fill the buffer without gaps, because the start and end times are known.

Live videos with multiple quality levels are more complicated, however. Different renditions may not be synchronized due to delays in encoding or in pushing content to edge servers. If the application is unlucky with its decision about what content to download, it may waste a lot of bandwidth trying to find the edge of the current buffered time range.

For example, imagine a live stream with two quality renditions that present some window of available content but are not exactly in sync:

A |--a0--|--a1--|--a2--|--a3--|
B        |--b0--|--b1--|--b2--|
  0      5      10     15     20

Assume the application has buffered b0 and b1 and decides to switch to a1. Appending a1 into a SourceBuffer in that state would have no effect on the buffered ranges, so the application would be forced to make another blind guess about the next segment to download. If the video has a large amount of buffered content, the application could be required to make multiple requests to find a segment that lets it determine the time shift between the two streams.

If update events included the added and removed TimeRanges resulting from the coded frame processing algorithm, the app could synchronize across quality levels much more quickly and save viewers and publishers significant bandwidth costs.
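A sketch of what the proposal might look like; the added and removed members on the update event are hypothetical, not in the spec:

  sourceBuffer.addEventListener('update', (e) => {
    // Hypothetical: TimeRanges actually added/removed by the coded
    // frame processing algorithm during this operation.
    if (e.added && e.added.length) {
      // One append now reveals the time shift between renditions,
      // instead of probing with further segment downloads.
      const shift = e.added.start(0) - expectedStartTime; // expectedStartTime: app estimate
      resynchronize(shift); // hypothetical app function
    }
  });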

ISO BMFF Byte Stream format should support layered (scalable) encodings

ISO/IEC 14496-15 describes the carriage of layered (scalable) encodings in ISO Base Media File Format. Examples include SVC and MVC.

Such layered encodings can be encoded within a single track, or with multiple tracks, for example one for each layer. In the multi-layer case, when Movie Fragments are used, there are two ways the data can be organized into movie fragments:
(1) A single moof / mdat(s) pair can contain the data for the several tracks for each media segment
(2) The several tracks can be split into several consecutive moof / mdat(s) pairs

Option (1) is supported by our existing MSE byte stream format, but option (2) is not, because we require that each "media segment" consist of a single moof and mdat(s). Option (2) has an advantage because, typically, the sequence of moof / mdat(s) containing the "base layer" can be processed by a device that does not understand the scalable encoding.

So, I propose we modify our definition of Media Segment for the ISO BMFF byte stream format to consist of a sequence of one or more ( moof, mdat (, mdat)* ) structures where:

  • all data referred to in each moof appears in the immediately following sequence of mdats
  • all presentation timestamps fall within the range specified in the first moof of the sequence

If this is agreeable, I'll prepare the PR.

Support sample accurate audio splicing using timestampOffset/appendWindowStart/appendWindowEnd

One of the use cases for sample-accurate audio splicing is gapless audio playback (achieved by removing the excess front/back padding added by most audio codecs).

Step 9 of the for loop in the Coded Frame Processing algorithm states:
If frame end timestamp is greater than appendWindowEnd, then set the need random access point flag to true, drop the coded frame, and jump to the top of the loop to start processing the next coded frame.

For audio, this means that we'll drop a complete coded frame (e.g., for AAC, 1024 audio samples) even if some of its audio samples would have fallen within the append window.
This granularity (whole coded frames) is not sufficient to achieve gapless audio playback.
It would be great if, instead, we could keep the frame and mark a range of samples to be discarded from it, so that once the frame is decoded, only the samples that fall within the append window are used.

It's possible that this change alone is not sufficient to support sample-accurate audio splicing...
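For context, a sketch of the gapless setup attempted with today's API, assuming the splice point and padding values come from app-side metadata:

  // Trim encoder padding with the append window. Per step 9, whole
  // coded frames crossing appendWindowEnd are dropped, so the trim is
  // only frame-accurate (e.g. ~1024 samples for AAC), not sample-accurate.
  sourceBuffer.timestampOffset = spliceTime - encoderDelaySeconds;
  sourceBuffer.appendWindowStart = spliceTime;
  sourceBuffer.appendWindowEnd = spliceTime + trackDurationSeconds;
  sourceBuffer.appendBuffer(nextTrackBytes);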

A test webpage can be found here (based on Dale Curtis's article).
Further discussion can be found here.

Use RFC 2119 words

ReSpec supports RFC 2119 words ("MUST", "MUST NOT", etc.). If such words are capitalized, ReSpec will format them in a way that indicates their importance. Neither MSE nor the Byte Stream Format pages appear to use these.

Array bracket syntax is deprecated

The attribute definition

readonly attribute DOMString[] kinds;

in the TrackDefault interface should be changed to something like

readonly attribute sequence<DOMString> kinds;

Needs event to notify when sourceBuffer needs more data

For a device like a TV that has a limited SourceBuffer size and low processing power, it is important to be able to manage the buffer with as few resources as possible, i.e. without extra setTimeout or timeupdate event handling. "2.4.4 SourceBuffer Monitoring" specifies that the 'canplay' event fires when readyState changes from HAVE_FUTURE_DATA to HAVE_ENOUGH_DATA, but no event fires when the opposite change occurs, while it is important for the application (e.g. dash.js) to know the right time to append the next segment.
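For reference, the polling workaround this issue wants to avoid, sketched with a hypothetical appendNextSegment helper and LOW_WATERMARK constant:

  // Today: hook timeupdate (or a timer) to estimate when to append
  // more data -- exactly the overhead low-power devices want to avoid.
  video.addEventListener('timeupdate', () => {
    const b = sourceBuffer.buffered;
    const ahead = b.length ? b.end(b.length - 1) - video.currentTime : 0;
    if (ahead < LOW_WATERMARK && !sourceBuffer.updating) {
      appendNextSegment(); // hypothetical app fetch-and-append function
    }
  });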

This is one of the issues raised in the Web and TV IG meeting at TPAC 2015 by Hybridcast group.
https://www.w3.org/2011/webtv/wiki/images/6/66/Webtv_mse_eme_iptvfj_player_20151026.pdf
