
media-ui-extensions's Introduction

media-ui-extensions

Extending the HTMLVideoElement API (<video>) to support advanced video player user-interface features

With the addition of Web Components to the browser, video player developers can create custom elements that mimic the video tag API, with the goals of stand-in compatibility with the video tag and compatibility across players. However, the video tag API lacks some important functions needed to support modern player UIs, including playback quality/resolution selection and awareness of ads. This repo is intended to capture requests and proposals for those functions.

Goals for Proposals

We want these proposals to be something that we can eventually propose to the Media WG or WHATWG as additional features to the video element and anything related. In the meantime, proposals accepted to media-ui-extensions can be used as a specification to keep things interoperable between implementers.

Before submitting a proposal

If you have a new idea, it might be worth creating an issue to discuss it a bit before submitting a proposal, to make sure that it's something that could be a good fit and is less likely to be rejected later on.

Submitting a proposal

  • Fork the repo
  • Copy 0000-template.md to proposals/0000-your-feature.md. Don't change the number yet as that will be updated when the PR is created.
  • Fill out your proposal with as much detail as possible.
  • Submit a pull request. Once the PR is created, a new commit can be made to update the filename 0000- prefix and the header of the file to point to the new PR number.
  • Folks should now review the proposal and suggest changes. While in the PR, it's expected that many changes may be made. Ideally, don't rebase commits in the branch; keeping the history intact makes it easier to review only the new changes.
  • At some point, a committer should issue a Call for Consensus (CfC). This CfC will include whether this proposal should be accepted, rejected, or postponed.
  • The CfC will last a minimum of 2 weeks to make sure that all collaborators get a chance to review. If all relevant parties have explicitly finished reviewing before the 2 weeks are up, the CfC may end early.
  • If there is enough consensus, the CfC will close successfully. If there is no consensus, the proposal could be rejected or go back to be re-edited.

Implementing a proposal

Once a proposal has been accepted, implementation can begin. If major issues are found during implementation, the existing proposal shouldn't be updated; instead, a new proposal should be issued, and the old proposal should be updated to link to the new one.

Resources

Inspiration

media-ui-extensions's People

Contributors

cjpillsbury, gkatsev, heff


media-ui-extensions's Issues

Ads

Issue for gathering the complexities of ads that should be supported via the media element API, so that ad-related changes can be made to the user interface.

Related discussion for media-chrome as we determine the ad-related HTML attributes that could be set on UI elements.
muxinc/media-chrome#34

Ads is a complex technology space, but similar to adaptive streaming and media source extensions, a lot of the ads complexity should live "below the surface" of the media element API. What should be exposed at the media element API level should be focused on making common ad-related user-interfaces possible.

  • Ad markers on the progress timeline
  • Hiding/disabling certain controls when an ad is playing
  • "Skip" button

Process Follow-ups

Follow-ups:

  • Add "Implemented" folder or Stages like TC39 proposals and update the process to include these
    • update proposal to include a link to the implementation PR
  • Add https://github.com/muxinc/custom-video-element as the canonical implementation of these proposals, but not the only implementation
  • Update the wording in the README to be more general of "Media Elements"
  • Clarify how to call a CfC (comment on the PR)

Originally posted by @gkatsev in #5 (comment)

Speed control

Many video & podcast players have a playback speed control, and I (along with 1 million+ other people) installed @igrigorik's VideoSpeed extension.
Good idea for a standard HTML video/audio player UI?

Some video speed UIs also have +/- 10 second jumps or frame-by-frame stepping. Might be out of scope for this proposal, but maybe worth cross-mentioning?

Bonus: an array input to hand over config:

<video controls speed="0.5, 0.75, 0.9, 1, 1.1, 1.25, 1.5, 2" width="250">
    <source src="/media/cc0-videos/flower.webm" type="video/webm">
    <source src="/media/cc0-videos/flower.mp4" type="video/mp4">
</video>
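
Applying a selection is already possible today via the standard playbackRate API; only the speed attribute itself is new. A sketch of how a player might consume it:

const video = document.querySelector('video[speed]');

// Parse the proposed attribute, e.g. "0.5, 0.75, 0.9, 1, 1.1, 1.25, 1.5, 2"
const speeds = video.getAttribute('speed')
  .split(',')
  .map((s) => Number(s.trim()))
  .filter((n) => Number.isFinite(n) && n > 0);

// Applying a selection uses the existing standard HTMLMediaElement API
function selectSpeed(rate) {
  video.playbackRate = rate;
}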

Invoker HTML buttons

Directly assigning HTML controls to video player controls:

<input type="checkbox" invoketarget="my-video" invokeaction="playpause" >Play/Pause</button>
<input type="checkbox" invoketarget="my-video" invokeaction="mute">Mute</button>

<video id="my-video"></video>

https://github.com/keithamus/invoker-buttons-proposal#customizing-videoaudio-controls
https://github.com/keithamus/invoker-buttons-proposal/issues/28
https://github.com/keithamus/invoker-buttons-proposal/issues/14#issuecomment-1744204920
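
On the element side, the linked proposal (as drafted) dispatches an invoke event on the target carrying the requested action. The proposal has continued to evolve, so treat these names as illustrative rather than final:

const video = document.getElementById('my-video');

// Sketch, assuming the drafted `invoke` event with an `action` property
video.addEventListener('invoke', (event) => {
  switch (event.action) {
    case 'playpause':
      video.paused ? video.play() : video.pause();
      break;
    case 'mute':
      video.muted = !video.muted;
      break;
  }
});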

While this proposal is not adding capabilities to <video>, it can directly affect that element, and I hope you can give feedback.

Stream Type - Proposal

Overview & Purpose

The idea of different “stream types” has been around in some manner for a long time in the various HTTP Adaptive Streaming (HAS) standards and their precursors - minimally distinguishing between “live” content and “video on demand” content. However, these categories aren’t consistently named or distinguished in the same way across the various specifications. Moreover, there is no corresponding API in the browser. Yet these categories directly inform how one expects users to consume and interact with the media, including what sort of UI or “chrome” should be made available for the user. By way of example, the built-in controls/UI in Safari that show up for a live src are different than those that show up for a VOD src. This proposal aims to normalize the names and definitions of StreamTypes (in a way that is extensible and evolvable over time) by way of how they are expected to be consumed and interacted with by a viewer/user. It also provides a concise and easy-to-understand differentiator for anyone implementing different UIs/controls/"chromes" for the various stream types.

An additional goal of this proposal is to recommend that MSE-based players or “playback engines” try to normalize their use of existing APIs to be as consistent as possible with the proposed inferred StreamType Algorithm.

Proposed StreamType Types & Definitions

  • "unknown" (default) - There is no media content or there is currently insufficient information to determine the StreamType of the current media content (e.g. metadata or similar is still loading, async default StreamType inference not yet done)
  • "vod" (“Video on Demand”) - The media content has a known start and end time and is intended to be randomly seekable from start to end as long as the content is available at all
  • "live" - The media content is intended to be viewed at the “live edge” as forward/subsequent content is made available over time and is not intended to be seekable at all
  • "dvr" - The media content has a known start time and by default is intended to be viewed at the “live edge” as forward content is made available over time, but all backward/previous content is also available for seeking from start to the current “live edge”
  • (Future?) "sliding" (“Sliding Window”, “Partial DVR”) - The media content is by default intended to be viewed at the “live edge” as forward content is made available over time, but is also intended to be seekable within a (roughly) consistent time window relative to the current “live edge”
  • Others?

Proposed Interface

  • type StreamType = "unknown" | "vod" | "live" | "dvr" (| "sliding"?) (| string?)
  • HTMLMediaElement::get streamType() {} : StreamType
    • Will use Inferred stream type if no streamType is set. See below for algorithm
  • HTMLMediaElement::set streamType(value: StreamType) {}
    • Intended to override inferred stream type
  • Event Types: streamtypechange
    • Should be fired whenever streamType changes (inferred or explicitly set)
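
A minimal sketch of this surface, assuming a custom element that mimics HTMLMediaElement; the #inferred field stands in for the output of the inference algorithm in the next section:

class ExtendedMediaElement extends HTMLElement {
  #override;              // explicitly set streamType, if any
  #inferred = 'unknown';  // produced by the inference algorithm (see below)

  get streamType() {
    // Use the inferred stream type if no streamType has been set
    return this.#override ?? this.#inferred;
  }

  set streamType(value) {
    if (value === this.#override) return;
    this.#override = value;
    this.dispatchEvent(new Event('streamtypechange'));
  }
}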

Proposed Stream Type Inferring (overridable)

Algorithm (Pseudo-code):

  1. Let StreamType = "unknown"
  2. If Number.isNaN(mediaEl.duration) (exit)
    • Aka StreamType = "unknown"
  3. If mediaEl.duration !== Infinity, StreamType = "vod" (exit)
    • Stricter: If mediaEl.seekable.end(0) === mediaEl.duration (or Math.abs(mediaEl.duration - mediaEl.seekable.end(0)) <= MOE for precision considerations)
  4. If mediaEl.duration === Infinity
    • Stricter: If mediaEl.seekable.end(0) < mediaEl.duration (or (mediaEl.duration - mediaEl.seekable.end(0)) > MOE for precision considerations)
    1. Let ChunkDuration = the presumed longest duration, in seconds, of a media chunk/segment
    2. Let SeekableStart0 = mediaEl.seekable.start(0)
    3. Let SeekableEnd0 = mediaEl.seekable.end(0)
    4. Wait ChunkDuration
    5. Let SeekableEnd1 = mediaEl.seekable.end(0)
    6. Let SeekableStart1 = mediaEl.seekable.start(0)
    7. If SeekableEnd1 === SeekableEnd0 (or Math.abs(SeekableEnd1 - SeekableEnd0) <= MOE for precision considerations), GOTO step 4 ("Wait ChunkDuration")
    8. If SeekableStart1 === SeekableStart0 (or Math.abs(SeekableStart1 - SeekableStart0) <= MOE for precision considerations), StreamType = "dvr" (exit)
    9. If SeekableStart1 > SeekableStart0 (or (SeekableStart1 - SeekableStart0) > MOE for precision considerations), StreamType = "live" (exit)
      • NOTE: This doesn’t account for/differentiate “sliding” StreamType
  5. (exit)
    • Aka StreamType = "unknown"
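
A direct (unoptimized) translation of the pseudo-code above into JS. chunkDuration is the presumed longest chunk/segment duration in seconds; moe is a margin of error for float precision; the empty-seekable guard and iteration bounds are practical additions a real implementation would need:

async function inferStreamType(mediaEl, chunkDuration, moe = 0.1) {
  if (Number.isNaN(mediaEl.duration)) return 'unknown';
  if (mediaEl.duration !== Infinity) return 'vod';
  if (mediaEl.seekable.length === 0) return 'unknown'; // practical guard, not in the pseudo-code

  const seekableStart0 = mediaEl.seekable.start(0);
  const seekableEnd0 = mediaEl.seekable.end(0);

  // NOTE: a real implementation would bound this loop and re-run the whole
  // algorithm whenever the dependent values may change (see below)
  while (true) {
    // (step 4) Wait ChunkDuration
    await new Promise((resolve) => setTimeout(resolve, chunkDuration * 1000));
    const seekableEnd1 = mediaEl.seekable.end(0);
    const seekableStart1 = mediaEl.seekable.start(0);
    // (step 7) Seekable end hasn't advanced yet: wait another chunk and re-check
    if (Math.abs(seekableEnd1 - seekableEnd0) <= moe) continue;
    // (steps 8/9) End advanced: a fixed start implies "dvr", a sliding start "live"
    return (seekableStart1 - seekableStart0) > moe ? 'live' : 'dvr';
  }
}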

Additional Considerations

  • At the very beginning of a live stream, the algorithm above may misidentify or fail to disambiguate between "dvr" and "live"
  • This algorithm should be re-applied/re-computed whenever the dependent variables may change
  • At the very end of a "live"/"dvr" stream, the computed stream type could change to "vod" based on the currently proposed algorithm.

Related Standards/Specs Definitions

Distinguishing/Categorizing Types

RFC 8216 (“HLS”)

ISO/IEC 23009-1 (“MPEG-DASH”)

  • Explicit - static (“vod”), dynamic (“dvr” or “live” - cannot differentiate by attr)
    • Defined by MPD@type attribute value (§5.3.1.2, Table 3 — Semantics of MPD element)
  • Implicit - “dvr”
    • MPD@timeShiftBufferDepth (§5.3.1.2, Table 3 — Semantics of MPD element) grows consistently with the available Segments & wall clock time and has a consistent computed start time (similar to inferred algorithm for "dvr")

Duration for "live"/"dvr"

Seekable Range for "dvr"

Warning API

Today, video.error means video playback has failed, but there are many things that happen under the hood that are not ideal even if they don't cause a playback failure. Most notable are problems that result in rebuffering/stalling, slow startup times, or lower quality renditions.

  • Segment failures
  • Network drops
  • Ad failures

We don't have a standard way to report/capture these issues for the sake of reporting to analytics or responding to in real time.

The API could mimic the error API:

video.addEventListener('warning', (evt) => {
  // Read synchronously in the handler; a later warning may replace it
  let warning = video.warning;
});

That seems good enough as long as:

  • warnings can happen frequently during playback without being missed as they're cleared out by later warnings
  • warnings include a timestamp to be clear when they occurred

I could also see an argument for limited-length array of warnings. But that feels more complicated to manage so I'd prefer to avoid that direction if possible.

This is as much a question for discussion as it is a suggestion. It's possible console warnings could be good enough.
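
A sketch of both sides of that shape, subject to the constraints above. The warning object's fields (code, message, timestamp) and the analytics.track call are illustrative assumptions, not specified anywhere:

// Producer side (inside an extended media element / playback engine):
function emitWarning(mediaEl, code, message) {
  // Replaces any previous warning, so consumers must read it in the handler
  mediaEl.warning = { code, message, timestamp: Date.now() };
  mediaEl.dispatchEvent(new Event('warning'));
}

// Consumer side, mirroring the error API pattern:
const video = document.querySelector('video');
video.addEventListener('warning', () => {
  analytics.track('media-warning', video.warning);
});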

Playback quality selection

Allow a user to select from a set of video quality levels/resolutions/renditions/bitrates/variants/representations.

Was hoping to start this with a PR, but some research and discussion will be helpful first.


Related conversation: whatwg/html#562 from @dmlap

The proposed extension to VideoTrack seems promising.

partial interface VideoTrack {
  sequence<string> getAvailableRenditions();
  // promise resolves when change has taken effect
  Promise<void> setPreferredRendition(string rendition);
};

Something to solve for is "auto".
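
A usage sketch against the proposed extension above. qualityMenu (and its render/markActive methods) is a hypothetical UI component, and the "auto" entry is one possible answer to the open question, not part of the proposal:

const video = document.querySelector('video');
const track = video.videoTracks[video.videoTracks.selectedIndex];

// Build a menu from the available renditions; "auto" is an assumption here
const renditions = track.getAvailableRenditions();
qualityMenu.render(['auto', ...renditions]);

qualityMenu.addEventListener('change', async ({ target }) => {
  // The returned promise resolves once the change has taken effect
  await track.setPreferredRendition(target.value);
  qualityMenu.markActive(target.value);
});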

Ping @gkatsev @littlespex

Live Edge Window

Overview & Purpose

The Problem

For live/"DVR" content, it's common to have some indication as to whether or not they are currently playing "at the live edge". However, due to nature of HTTP Adaptive Streaming (HAS), the live edge cannot be represented as a simple point/moment in the media's timeline. This is for a few reasons:

  1. At the manifest/playlist level, the available live segments are typically known by the client by periodically re-fetching these using in-specification polling rules. These files may also be updated by the server in a discontiguous manner as segments are ready for streaming. Together, since the client does not know when segments will be added by the server, the known advertised "true live edge" will "jump" discontiguously through this process, which needs to be accounted for as a plausible "range" or "window" for what counts as the live edge.
  2. HAS provides segmented media via a client-side pull based model (most typically, e.g., a GET request), where each segment has a duration. This means that a client must first "see" the segment (via the process described above), then GET and buffer the segment, and then (eventually) play the segment, starting at its start time. Here again, this entails a discontiguous, per-segment update of the timeline, which again needs to be accounted for via a "range" or "window", rather than a discrete point.
  3. In order to avoid live edge stalls, both MPEG-DASH and HLS have a concept of a "holdback" or "offset," which inform client players that they should not attempt to fetch/play some set of segments from the end of a playlist/manifest. Luckily, this can be treated as an independent offset calculation applied to e.g. the seekable.end(0) of a media element, which can then be used as a reference for any other live edge window computation.

(Visual representation may help here)

A concrete, sub-optimal (not worst case) but in-spec example - HLS:

Let's say a client player fetches a live HLS media playlist just before the server is about to update it with the following values:

# ...
# Unfortunately, EXT-X-TARGETDURATION is only an upper limit (>= any EXTINF duration after rounding to the nearest integer)
#EXT-X-TARGETDURATION:5
# Client side "LIVE EDGE" will be 5.46 seconds into the segment below, aka 3 * 5 (target duration) = 15 seconds from the playlist end duration
# NOTE: Assume playback begins at the beginning of the segment below, since some client players choose to do this to avoid stalling/rebuffering, meaning playback starts -5.46 seconds from the "LIVE EDGE"
#EXTINF:5.49,
#EXTINF:4.99,
#EXTINF:4.99,
#EXTINF:4.99,

The server then ends up updating the playlist with two larger-duration segments (in spec and happens under sub-optimal but not unheard of conditions) before the client re-requests the playlist after 4.99 seconds (the minimum amount of time the player must wait) and continues re-fetching the available segments, with an updated playlist of:

# ...
#EXT-X-TARGETDURATION:5
# NOTE: Current playhead will be 4.99 seconds into the segment below, assuming optimal buffering and playback conditions at 1x playback speed
#EXTINF:5.49,
#EXTINF:4.99,
#EXTINF:4.99,
# New Client side "LIVE EDGE" will be 0.97 seconds into the segment below, aka 3 * 5 (target duration) = 15 seconds from the playlist end duration
#EXTINF:4.99,
#EXTINF:5.49,
#EXTINF:5.49,

In this example, playback started 5.46 seconds behind the computed "LIVE EDGE" and, after a single reload of the playlist, ended up 11.45 seconds behind the next computed "LIVE EDGE" without any stalls/rebuffering. Note that, even in this example, we do not account for round trip times (RTT) for fetches, time to parse playlists, times to buffer segments, initial seeking of the player's playhead/currentTime, and the like. Note also that, even without those considerations, the playhead still ends up > 2 * TARGETDURATION behind the "LIVE EDGE".

The solution

Since this information can be derived from a media element's "playback engine" (by parsing the relevant playlists or manifest), the extended media element should have an API to advertise what the live edge window is for a given live HAS media source. Call this the "live window offset".

Additionally, due to consideration (3), above, we should treat the seekable.end(0) as the end time of a live stream accounting for the per-specification "holdback" or "delay".

Proposed API

Constrained meaning of seekable.end(0) as "live edge" (with HOLD-BACK/etc) for HAS

To account for the distinction between the live edge duration of the media stream as advertised by the playlist or manifest vs. the latest time a client player should try to play, based on per-specification rules and additional information also provided in the playlist or manifest, extended media elements SHOULD set the seekable.end(0) value to account for this offset. This shall be assumed for all computations of the "live edge window", where seekable.end(0) will be the presumed "end" of the window/range, already taking into account the aforementioned offset. With these offsets presumed, seekable.end(0) may be treated as synonymous with a client player's "live edge" and these terms should be treated as interchangeable in this initial proposal.

For RFC8216bis12 (aka HLS)

  1. "Standard Latency" Live

seekable.end(0) should be based on the inferred or explicit HOLD-BACK attribute value, where:

HOLD-BACK

The value is a decimal-floating-point number of seconds that indicates the server-recommended minimum distance from the end of the Playlist at which clients should begin to play or to which they should seek, unless PART-HOLD-BACK applies. Its value MUST be at least three times the Target Duration.

This attribute is OPTIONAL. Its absence implies a value of three times the Target Duration. It MAY appear in any Media Playlist.

  2. Low Latency Live

seekable.end(0) should be based on the explicit PART-HOLD-BACK (REQUIRED) attribute value, where:

PART-HOLD-BACK

The value is a decimal-floating-point number of seconds that indicates the server-recommended minimum distance from the end of the Playlist at which clients should begin to play or to which they should seek when playing in Low-Latency Mode. Its value MUST be at least twice the Part Target Duration. Its value SHOULD be at least three times the Part Target Duration. If different Renditions have different Part Target Durations then PART-HOLD-BACK SHOULD be at least three times the maximum Part Target Duration.

For ISO/IEC 23009-1 (aka "MPEG-DASH")

  1. "Standard Latency" Live

seekable.end(0) should be based on the explicit MPD@suggestedPresentationDelay (OPTIONAL) attribute, when present, otherwise it may be whatever the client chooses based on its implementation rules. Per the spec:

it specifies a fixed delay offset in time from the presentation time of each access unit that is suggested to be used for presentation of each access unit... When not specified, then no value is provided and the client is expected to choose a suitable value.

  • From §5.3.1.2 Table 3 - Semantics of MPD element

(NOTE: there may be additional suggestions/recommendations available via the DASH IOP)

  2. Low Latency Live

seekable.end(0) should be based on the ServiceDescription -> Latency@target attribute. Note that this value is an offset not of the manifest timeline, but rather of the (presumed NTP or similarly synchronized) wallclock time. Per the spec:

The service provider's preferred presentation latency in milliseconds compared to the producer reference time. Indicates a content provider's desire for the content to be presented as close to the indicated latency as is possible given the player's capabilities and observations.

This attribute may express latency that is only achievable by low-latency players under favourable network conditions.

(NOTE: This implies that the value could change marginally over time based on precision and other wallclock time updates based on the runtime environment. However, since these differences should be minor, it's likely fine to treat this value as static for the case of this document and can likely be implemented as such in an extended media element)

liveWindowOffset

Definition

An offset or delta from the "live edge"/seekable.end(0). An extended media element is playing "in the live window" iff: mediaEl.currentTime > (mediaEl.seekable.end(0) - mediaEl.liveWindowOffset).

Possible values

  • undefined - Unimplemented
  • NaN - "unknown" or "inapplicable" (e.g. for streamType = "vod")
  • 0 <= x <= Number.MAX_SAFE_INTEGER - known stable value for current stream
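
A sketch of consuming these values per the definition above to drive a live indicator; liveIndicator is a hypothetical UI element:

function isInLiveWindow(mediaEl) {
  const offset = mediaEl.liveWindowOffset;
  // undefined: unimplemented; NaN: unknown or inapplicable
  if (offset === undefined || Number.isNaN(offset)) return false;
  if (mediaEl.seekable.length === 0) return false;
  return mediaEl.currentTime > mediaEl.seekable.end(0) - offset;
}

// E.g., re-evaluate as playback progresses:
const video = document.querySelector('video');
video.addEventListener('timeupdate', () => {
  liveIndicator.classList.toggle('at-live-edge', isInLiveWindow(video));
});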

Recommended computation for RFC8216bis12 (aka HLS)

  1. "Standard Latency" Live

liveWindowOffset = 3 * EXT-X-TARGETDURATION

Note that this is a cautious computation. In many stream + playback scenarios, 2 * EXT-X-TARGETDURATION will likely be sufficient. However, with that less cautious value, there may be edge cases where standard playback will "hop in and out of the live edge," so we recommend the more cautious value here.

  2. Low Latency Live

liveWindowOffset = 2 * PART-TARGET

Unlike "standard" segments (#EXTINFs), parts' durations must be <= #EXT-X-PART-INF:PART-TARGET (without rounding). Also unlike "standard," HLS servers must add new partial segments to playlists within 1 (instead of 1.5) Part Target Duration after it added the previous Partial Segment. This means that, even under sub-optimal conditions, low latency HLS should end up with a much smaller liveWindowOffset.

Recommended computation for ISO/IEC 23009-1 (aka "MPEG-DASH")

TBD

Open Questions

  1. What should we actually call the property?
  • In #4, we decided to call the numeric value representing a live "DVR" window the targetLiveWindow. Since this value represents a window for the "live edge" and not for "available live content to seek through/play", having both refer to the "live window" will likely be confusing. In the current related preliminary implementation in Media Chrome, we refer to the related attribute as the livethreshold. Should that be the name here as well? Do we want the name to try to capture the fact that this is an "offset" value from the "live edge"/seekable.end(0)?
  2. Distinct event or repurposed event?
  • The above proposal makes no mention of a corresponding livewindowoffsetchange event. While we cannot likely rely on any of the built in HTMLMediaElement events, we should be able to guarantee computation of the relevant values before dispatching the streamtypechange event, as documented in #3. Is this repurposing of the event acceptable? Should we consider a more generic event name that more clearly relates to states announced for stream type, DVR, live edge window offset, and potentially additional future properties/state?

DVR State - Proposal

NOTE: This proposal began as a subset of the Stream Type - Proposal #3 but was descoped due to complexities and the decision to model it as a separate state.

NOTE: A discussion on the complexities and permutations of "DVR", both using available HTTP Adaptive Streaming (HAS) manifests/playlists and inferring from the state of a given HTMLMediaElement instance can be found in this google doc, which also has comments enabled. Please read this document, as it provides relevant context for the proposal below.

Overview and Purpose

A subset of "live streaming media" is intended to be played with seek capabilities for the viewer. This is frequently referred to as "DVR," and typically falls into one of two categories:

  1. "Standard DVR" - All previous media in the live stream will be available to seek to and play for the life of the live stream (and perhaps after its completion).
  2. "Sliding Window DVR" - Previous media in the live stream will be removed over the life of the live stream, but a "sufficiently long amount" of the previous media will be available to seek and to play during the live stream's life. Most often, the duration of the seekable media content will stay roughly the same (with some margin of error changes, due to the segmented nature of HAS), except for cases where the live stream has just begun. We can think of this value, implicit in the live media stream itself, as the "Sliding Window Media Size", or the size of the sliding window, as determined from e.g. the media manifest/playlists themselves.

For both of these cases, although the media is live, the "intention" is to still allow users to seek through the media during playback.

Proposed DVR Types & Definitions

Below are all of the possible DVR states (for more on why, see the Google Doc, referenced above).

  • "standard" - The media stream is live and all previous media content will be available
  • "sliding" - The media stream is live and a sufficient amount of previous media content will be available for seeking
  • "none" - The media stream is on-demand, or the media stream is live and there will not be a sufficient amount of previous media content available for seeking.
  • "any" - The media stream is live and is either "standard" or "sliding", but it is (currently) ambiguous which of these two it is.
  • "unknown" - There is no media stream, or the media stream is live, but it is (currently) ambiguous if it's "none" (no DVR), "standard", "sliding", or "any".
  • undefined - The DVR feature is unimplemented by the media element.

Proposed Interface 1 (narrow implementation - "standard" support only)

This version of the proposal intentionally omits/"doesn't solve for" any account of "sliding".

  • HTMLMediaElement::get dvr() {} : boolean | null
    • true means "standard"
    • false means "none" | "sliding" (where "sliding" is not within the scope of this proposal and therefore is "under-determined" by this value alone)
    • null means "unknown"
    • (implicit) undefined or not defined means unsupported
  • Event Types: "dvrchange"
    • detail = dvr
    • Should be fired from the HTMLMediaElement whenever dvr changes

Proposed Inferring 1 (narrow implementation - "standard" support only)

Only rely on HLS playlist (EXT-X-PLAYLIST-TYPE:EVENT) or MPEG-DASH manifest (MPD@type="dynamic" && !MPD@timeShiftBufferDepth) parsing to derive dvr. Any other process will result in ambiguities. For more, see the Google Doc, referenced above.
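
A sketch combining the narrow interface with the inference rule above. The manifest shape (kind, playlistType, type, timeShiftBufferDepth) is an assumed stand-in for whatever a real playback engine exposes after parsing:

class ExtendedMediaElement extends HTMLElement {
  #dvr = null; // null means "unknown"

  get dvr() {
    return this.#dvr;
  }

  #setDvr(value) {
    if (value === this.#dvr) return;
    this.#dvr = value;
    this.dispatchEvent(new CustomEvent('dvrchange', { detail: this.#dvr }));
  }

  // Called once per media stream, after the playlist/manifest has been parsed
  #onManifestParsed(manifest) {
    if (manifest.kind === 'hls') {
      // EXT-X-PLAYLIST-TYPE:EVENT -> "standard" DVR (true)
      this.#setDvr(manifest.playlistType === 'EVENT');
    } else if (manifest.kind === 'dash') {
      // MPD@type="dynamic" && !MPD@timeShiftBufferDepth -> "standard" DVR (true)
      this.#setDvr(manifest.type === 'dynamic' && manifest.timeShiftBufferDepth == null);
    }
  }
}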

Proposed Interface 2 (exhaustive)

  • type DVRType = "standard" | "sliding" | "none" | "any" | "unknown"
  • HTMLMediaElement::get dvr() {} : DVRType
    • All values correspond to their corresponding definitions, above.
    • (implicit) undefined or not defined means unsupported
  • Event Types: "dvrchange"
    • detail = dvr
    • Should be fired from the HTMLMediaElement whenever dvr changes
  • HTMLMediaElement::get minSlidingWindow() {} : number
    • The minimum allowed seekable duration for media to definitively be considered "sliding", aka HTMLMediaElement::seekable.end(0) - HTMLMediaElement::seekable.start(0) >= HTMLMediaElement::minSlidingWindow -> HTMLMediaElement::dvr === "sliding"
    • Default Value: 60 (seconds)
      • NOTE: We may want to consider an even larger default value. This is the smallest value recommended to stay consistent with MPEG-DASH, HLS, and their corresponding official guidelines. For more, see the Google Doc, referenced above.
  • HTMLMediaElement::set minSlidingWindow(value: number) {} : void
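
An illustrative helper for the minSlidingWindow check above. On its own this is only one condition; the full inference would also confirm the stream is live and that the seekable start is sliding forward over time:

function meetsSlidingWindowMinimum(mediaEl) {
  if (mediaEl.seekable.length === 0) return false;
  const seekableDuration = mediaEl.seekable.end(0) - mediaEl.seekable.start(0);
  return seekableDuration >= mediaEl.minSlidingWindow; // default: 60 seconds
}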

Proposed Inferring 2 (exhaustive)

To be documented formally if this is the preferred adopted proposal. Most of this may be determined from the Google Doc, referenced above.

Recommendation: Proposal 1 (narrow implementation - "standard" support only)

Reasons for recommendation

  1. much easier to implement
  2. provides a definitive true|false for both HLS and MPEG-DASH "immediately" (after loading and parsing the playlists/manifests once per media stream)
  3. provides less ambiguous, less controversial definitions derivable from the HLS and MPEG-DASH specs
  4. doesn't have significant concerns for backwards compatibility if/when we introduce "sliding" (and corresponding "uncertain" states such as "any" or "unknown" in the case of early stream starts). This is because any implementations that add a future "sliding" support (assuming new properties are introduced) will simply treat these as "live" unless/until they integrate with the new interface. This feels far less risky than the other way around, where "live" streams would suddenly and unexpectedly start showing up as "DVR" (seekable).
