vanilagy / mp4-muxer

MP4 multiplexer in pure TypeScript with support for WebCodecs API, video & audio.

Home Page: https://vanilagy.github.io/mp4-muxer/demo

License: MIT License


mp4-muxer's Introduction

mp4-muxer - JavaScript MP4 multiplexer

The WebCodecs API provides low-level access to media codecs, but no way of actually packaging (multiplexing) the encoded media into a playable file. This project implements an MP4 multiplexer in pure TypeScript that is high-quality, fast and tiny, and supports both video and audio as well as various internal layouts such as Fast Start or fragmented MP4.

Demo: Muxing into a file

Demo: Live streaming

Note: If you're looking to create WebM files, check out webm-muxer, the sister library to mp4-muxer.

Consider donating if you've found this library useful and wish to support it ❤️

Quick start

The following example shows a common usage of this library:

import { Muxer, ArrayBufferTarget } from 'mp4-muxer';

let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
        codec: 'avc',
        width: 1280,
        height: 720
    },
    fastStart: 'in-memory'
});

let videoEncoder = new VideoEncoder({
    output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
    error: e => console.error(e)
});
videoEncoder.configure({
    codec: 'avc1.42001f',
    width: 1280,
    height: 720,
    bitrate: 1e6
});

/* Encode some frames... */

await videoEncoder.flush();
muxer.finalize();

let { buffer } = muxer.target; // Buffer contains final MP4 file
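Once finalized, the resulting buffer can be wrapped in a Blob for download or playback. This is a minimal sketch, not part of the library; the placeholder buffer stands in for `muxer.target.buffer`, and `Blob` is a global in browsers (Node exposes it via the `buffer` module):

```typescript
import { Blob } from 'buffer'; // Blob is global in browsers; Node exports it here

// Placeholder for the real MP4 data from `muxer.target.buffer`:
const buffer = new ArrayBuffer(16);

// Wrap the finished file so it can be downloaded or set as a video source:
const blob = new Blob([buffer], { type: 'video/mp4' });
// In a browser you could now do: videoElement.src = URL.createObjectURL(blob);
```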

Motivation

After webm-muxer gained traction for its ease of use and integration with the WebCodecs API, this library was created to also allow the creation of MP4 files while maintaining the same DX. While WebM is a more modern format, MP4 is an established standard and supported on many more devices.

Installation

Using NPM, simply install this package using

npm install mp4-muxer

You can import all exported classes like so:

import * as Mp4Muxer from 'mp4-muxer';
// Or, using CommonJS:
const Mp4Muxer = require('mp4-muxer');

Alternatively, you can simply include the library as a script in your HTML, which will add an Mp4Muxer object, containing all the exported classes, to the global object, like so:

<script src="build/mp4-muxer.js"></script>

Usage

Initialization

For each MP4 file you wish to create, create an instance of Muxer like so:

import { Muxer } from 'mp4-muxer';

let muxer = new Muxer(options);

The available options are defined by the following interface:

interface MuxerOptions {
    target:
        | ArrayBufferTarget
        | StreamTarget
        | FileSystemWritableFileStreamTarget,

    video?: {
        codec: 'avc' | 'hevc' | 'vp9' | 'av1',
        width: number,
        height: number,

        // Adds rotation metadata to the file
        rotation?: 0 | 90 | 180 | 270 | TransformationMatrix
    },

    audio?: {
        codec: 'aac' | 'opus',
        numberOfChannels: number,
        sampleRate: number
    },

    fastStart:
        | false
        | 'in-memory'
        | 'fragmented'
        | { expectedVideoChunks?: number, expectedAudioChunks?: number },

    firstTimestampBehavior?: 'strict' | 'offset' | 'cross-track-offset'
}

Codecs currently supported by this library are AVC/H.264, HEVC/H.265, VP9 and AV1 for video, and AAC and Opus for audio.
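The codec option selects the container-level codec only; the corresponding WebCodecs encoder still needs a full codec string. The pairings below are illustrative examples (the specific profile/level strings are assumptions for illustration, not values mandated by this library; pick profiles and levels to match your resolution and bitrate):

```typescript
// Example pairings of the muxer's codec option with WebCodecs codec
// strings. The specific profile/level strings are illustrative only.
const exampleCodecStrings: Record<'avc' | 'hevc' | 'vp9' | 'av1', string> = {
    avc: 'avc1.42001f',       // H.264 Baseline profile, level 3.1
    hevc: 'hvc1.1.6.L123.00', // H.265 Main profile
    vp9: 'vp09.00.10.08',     // VP9 profile 0, 8-bit
    av1: 'av01.0.04M.08'      // AV1 Main profile, 8-bit
};

console.log(exampleCodecStrings.avc);
```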

target (required)

This option specifies where the data created by the muxer will be written. The options are:

  • ArrayBufferTarget: The file data will be written into a single large buffer, which is then stored in the target.

    import { Muxer, ArrayBufferTarget } from 'mp4-muxer';
    
    let muxer = new Muxer({
        target: new ArrayBufferTarget(),
        fastStart: 'in-memory',
        // ...
    });
    
    // ...
    
    muxer.finalize();
    let { buffer } = muxer.target;
  • StreamTarget: This target defines callbacks that will get called whenever there is new data available - this is useful if you want to stream the data, e.g. pipe it somewhere else. The constructor has the following signature:

    constructor(options: {
        onData?: (data: Uint8Array, position: number) => void,
        chunked?: boolean,
        chunkSize?: number
    });

    onData is called for each new chunk of available data. The position argument specifies the offset in bytes at which the data has to be written. Since the data written by the muxer is not always sequential, make sure to respect this argument.

    When using chunked: true, data created by the muxer will first be accumulated and only written out once it has reached sufficient size. This is useful for reducing the total amount of writes, at the cost of latency. It uses a default chunk size of 16 MiB, which can be overridden by manually setting chunkSize to the desired byte length.

    If you want to use this target for live-streaming, i.e. playback before muxing has finished, you also need to set fastStart: 'fragmented'.

    Usage example:

    import { Muxer, StreamTarget } from 'mp4-muxer';
    
    let muxer = new Muxer({
        target: new StreamTarget({
            onData: (data, position) => { /* Do something with the data */ }
        }),
        fastStart: false,
        // ...
    });
  • FileSystemWritableFileStreamTarget: This is essentially a wrapper around a chunked StreamTarget with the intention of simplifying the use of this library with the File System Access API. Writing the file directly to disk as it's being created comes with many benefits, such as creating files way larger than the available RAM.

    You can optionally override the default chunkSize of 16 MiB.

    constructor(
        stream: FileSystemWritableFileStream,
        options?: { chunkSize?: number }
    );

    Usage example:

    import { Muxer, FileSystemWritableFileStreamTarget } from 'mp4-muxer';
    
    let fileHandle = await window.showSaveFilePicker({
        suggestedName: `video.mp4`,
        types: [{
            description: 'Video File',
            accept: { 'video/mp4': ['.mp4'] }
        }],
    });
    let fileStream = await fileHandle.createWritable();
    let muxer = new Muxer({
        target: new FileSystemWritableFileStreamTarget(fileStream),
        fastStart: false,
        // ...
    });
    
    // ...
    
    muxer.finalize();
    await fileStream.close(); // Make sure to close the stream
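To illustrate the position argument of StreamTarget's onData callback described above, here is a sketch (not from the library's docs) of writing non-sequential chunks to a file in Node.js. The file path and byte values are made up for the demonstration; the point is that the muxer may later rewrite earlier regions of the file, so appending blindly can corrupt the output:

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const outPath = path.join(os.tmpdir(), 'mp4-muxer-position-demo.bin');
const fd = fs.openSync(outPath, 'w');

// Same shape as StreamTarget's onData(data, position) callback:
const onData = (data: Uint8Array, position: number) => {
    // Write the chunk at the exact byte offset the muxer requested.
    fs.writeSync(fd, data, 0, data.byteLength, position);
};

onData(new Uint8Array([0, 0, 0, 0]), 0); // placeholder header
onData(new Uint8Array([1, 2, 3, 4]), 4); // media data
onData(new Uint8Array([9, 9, 9, 9]), 0); // muxer backpatches the header

fs.closeSync(fd);
```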

fastStart (required)

By default, MP4 metadata (track info, sample timing, etc.) is stored at the end of the file - this makes writing the file faster and easier. However, placing this metadata at the start of the file instead (known as "Fast Start") provides certain benefits: the file becomes easier to stream over the web without range requests, and sites like YouTube can start processing the video while it's uploading. This library provides full control over the placement of metadata by setting fastStart to one of these options:

  • false: Disables Fast Start, placing all metadata at the end of the file. This option is the fastest and uses the least memory. This option is recommended for large, unbounded files that are streamed directly to disk.

  • 'in-memory': Produces a file with Fast Start by keeping all media chunks in memory until the file is finalized. This option produces the most compact output possible at the cost of a more expensive finalization step and higher memory requirements. This is the preferred option when using ArrayBufferTarget as it will result in a higher-quality output with no change in memory footprint.

  • 'fragmented': Produces a fragmented MP4 (fMP4) file, evenly placing sample metadata throughout the file by grouping it into "fragments" (short sections of media), while placing general metadata at the beginning of the file. Fragmented files are ideal for streaming, as they are optimized for random access with minimal to no seeking. Furthermore, they remain lightweight to create no matter how large the file becomes, as they don't require media to be kept in memory for very long. While fragmented files are not as widely supported as regular MP4 files, this option provides powerful benefits with very few downsides. Further details here.

  • object: Produces a file with Fast Start by reserving space for metadata when muxing begins. To know how many bytes need to be reserved to be safe, you'll have to provide the following data:

    {
        expectedVideoChunks?: number,
        expectedAudioChunks?: number
    }

    Note that the property expectedVideoChunks is required if you have a video track - the same goes for audio. With this option set, you cannot mux more chunks than the number you've specified (although less is fine).

    This option is faster than 'in-memory' and uses no additional memory, but results in a slightly larger output, making it useful for when you want to stream the file to disk while still retaining Fast Start.
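As a rough summary of the trade-offs above, the choice of fastStart might be sketched as a rule of thumb (this helper is not part of the library, and it omits the object form for brevity):

```typescript
// Sketch: a rule of thumb for picking `fastStart`, based on the
// trade-offs described above. Not library code.
type FastStart = false | 'in-memory' | 'fragmented';

function pickFastStart(opts: { liveStreaming: boolean; inMemoryTarget: boolean }): FastStart {
    if (opts.liveStreaming) return 'fragmented'; // playback before muxing finishes
    if (opts.inMemoryTarget) return 'in-memory'; // compact ArrayBufferTarget output
    return false; // large, unbounded files streamed directly to disk
}

console.log(pickFastStart({ liveStreaming: false, inMemoryTarget: true }));
```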

firstTimestampBehavior (optional)

Specifies how to deal with the first chunk in each track having a non-zero timestamp. In the default strict mode, timestamps must start with 0 to ensure proper playback. However, when directly piping video frames or audio data from a MediaStreamTrack into the encoder and then the muxer, the timestamps are usually relative to the age of the document or the computer's clock, which is typically not what we want. Handling of these timestamps must be set explicitly:

  • Use 'offset' to offset the timestamp of each track by that track's first chunk's timestamp. This way, it starts at 0.
  • Use 'cross-track-offset' to offset the timestamp of each track by the minimum of all tracks' first chunk timestamps. This works like 'offset', but should be used when all tracks use the same clock.
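Conceptually, 'offset' shifts each track's timestamps so its first chunk lands at 0. A small sketch (not library code):

```typescript
// Sketch of what 'offset' conceptually does to one track's timestamps
// (in microseconds). Not library code.
function applyOffset(timestamps: number[]): number[] {
    const first = timestamps[0];
    return timestamps.map(t => t - first);
}

// E.g. capture timestamps relative to the document's age:
console.log(applyOffset([500_000, 533_333, 566_666])); // now starts at 0
```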

Muxing media chunks

Then, with VideoEncoder and AudioEncoder set up, send encoded chunks to the muxer using the following methods:

addVideoChunk(
    chunk: EncodedVideoChunk,
    meta?: EncodedVideoChunkMetadata,
    timestamp?: number,
    compositionTimeOffset?: number
): void;

addAudioChunk(
    chunk: EncodedAudioChunk,
    meta?: EncodedAudioChunkMetadata,
    timestamp?: number
): void;

Both methods accept an optional third argument, timestamp (in microseconds), which, if specified, overrides the timestamp property of the passed-in chunk.

The metadata comes from the second parameter of the output callback given to the VideoEncoder or AudioEncoder's constructor and needs to be passed into the muxer, like so:

let videoEncoder = new VideoEncoder({
    output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
    error: e => console.error(e)
});
videoEncoder.configure(/* ... */);

The optional field compositionTimeOffset can be used when the decode time of the chunk doesn't equal its presentation time; this is the case when B-frames are present. B-frames don't occur when using the WebCodecs API for encoding. The decode time is calculated by subtracting compositionTimeOffset from timestamp, meaning timestamp dictates the presentation time.
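This relationship can be written out as a small sketch (illustrative, not library code):

```typescript
// decodeTime = timestamp - compositionTimeOffset (all in microseconds),
// where `timestamp` is the presentation time of the chunk.
function decodeTime(timestamp: number, compositionTimeOffset = 0): number {
    return timestamp - compositionTimeOffset;
}

// A frame presented at 66_666 µs whose decode time is one frame earlier:
console.log(decodeTime(66_666, 33_333)); // 33333
```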

Should you have obtained your encoded media data from a source other than the WebCodecs API, you can use the following methods to directly send your raw data to the muxer:

addVideoChunkRaw(
    data: Uint8Array,
    type: 'key' | 'delta',
    timestamp: number, // in microseconds
    duration: number, // in microseconds
    meta?: EncodedVideoChunkMetadata,
    compositionTimeOffset?: number // in microseconds
): void;

addAudioChunkRaw(
    data: Uint8Array,
    type: 'key' | 'delta',
    timestamp: number, // in microseconds
    duration: number, // in microseconds
    meta?: EncodedAudioChunkMetadata
): void;
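When generating timestamp and duration yourself for these raw methods, keep each track's timestamps monotonically increasing (the muxer rejects out-of-order timestamps). A sketch for a constant-frame-rate stream, assuming a fixed fps:

```typescript
// Sketch: fixed-interval timestamps/durations (microseconds) for a
// constant-frame-rate raw video stream. Not library code.
const fps = 30;

function frameTiming(frameIndex: number): { timestamp: number; duration: number } {
    return {
        timestamp: Math.round(frameIndex * 1e6 / fps),
        duration: Math.round(1e6 / fps)
    };
}

console.log(frameTiming(0), frameTiming(1));
```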

Finishing up

When encoding is finished and all the encoders have been flushed, call finalize on the Muxer instance to finalize the MP4 file:

muxer.finalize();

When using an ArrayBufferTarget, the final buffer will be accessible through it:

let { buffer } = muxer.target;

When using a FileSystemWritableFileStreamTarget, make sure to close the stream after calling finalize:

await fileStream.close();

Details

Variable frame rate

MP4 files support variable frame rate; however, some players (such as QuickTime) have been observed to misbehave when timestamps are irregular. Therefore, whenever possible, aim for a fixed frame rate.

Additional notes about fragmented MP4 files

By breaking up the media and related metadata into small fragments, fMP4 files optimize for random access and are ideal for streaming, while remaining cheap to write even for long files. However, you should keep these things in mind:

  • Media chunk buffering: When muxing a file with a video and an audio track, the muxer needs chunks from both media to finalize any given fragment. In other words, it must buffer chunks of one medium if the other medium has not yet encoded chunks up to that timestamp. For example, should you first encode all your video frames and only then encode the audio, the multiplexer will have to hold all those video frames in memory until the audio chunks start coming in. This might lead to memory exhaustion should your video be very long. When there is only one media track, this issue does not arise. So, when muxing a multimedia file, make sure it is somewhat limited in size or that the chunks are encoded in a somewhat interleaved way (as is the case for live media). This will keep memory usage at a constant low.
  • Video key frame frequency: Every track's first sample in a fragment must be a key frame in order to play that fragment without knowledge of previous ones. However, this means the muxer needs to wait for a video key frame to begin a new fragment. If these key frames are too infrequent, fragments become too large, harming random access. Therefore, every 5–10 seconds, you should force a video key frame like so:
    videoEncoder.encode(frame, { keyFrame: true });
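The key frame cadence described above can be sketched like this (timestamps in microseconds; the 5-second interval is one reasonable choice, not a library requirement):

```typescript
// Sketch: force a video key frame roughly every 5 seconds.
const KEY_FRAME_INTERVAL = 5_000_000; // microseconds

let lastKeyFrame = -Infinity;
function needsKeyFrame(timestamp: number): boolean {
    if (timestamp - lastKeyFrame >= KEY_FRAME_INTERVAL) {
        lastKeyFrame = timestamp;
        return true;
    }
    return false;
}

// Usage: videoEncoder.encode(frame, { keyFrame: needsKeyFrame(frameTimestamp) });
console.log(needsKeyFrame(0), needsKeyFrame(1_000_000), needsKeyFrame(5_000_000));
```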

Implementation & development

MP4 files are based on the ISO Base Media Format, which structures its files as a hierarchy of boxes (or atoms). The standards used to implement this library were ISO/IEC 14496-1, ISO/IEC 14496-12 and ISO/IEC 14496-14. Additionally, the QuickTime MP4 Specification was a very useful resource.

For development, clone this repository, install everything with npm install, then run npm run watch to bundle the code into the build directory. Run npm run check to run the TypeScript type checker, and npm run lint to run ESLint.

mp4-muxer's People

Contributors

jhartman5, nickbabcock, nytrm, syquel, szymonrw, vanilagy


mp4-muxer's Issues

Inconsistent behavior between WebMMuxer and Mp4Muxer

Hi,
Thank you for all the work on webm-muxer and mp4-muxer!

I am using the libraries to record canvas animation frames. I've noticed each library behaves a little differently when used in the same way.

In the example below, I'm recording a total of 30 frames.

In the resulting WebM video, the last frameCount is 29, which is accurate. It also has the correct fps of 30.

But in the mp4 video, the last frameCount shows up as 28, missing the very last frame. What's also strange is that when I manually scrub the QuickTime timeline, I sometimes do see 29 displayed, but it seems this frame doesn't have any duration. Also, the frame rate is displayed as 31.03 instead of 30.

I'm wondering if I need to handle frame recording differently with mp4-muxer.

Below is a simple demo to illustrate my issue. You can click the mouse inside the canvas to initiate the recording of both videos:

import * as Mp4Muxer from "mp4-muxer";
import * as WebMMuxer from "webm-muxer";

const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = 400;
canvas.height = 400;
const ctx = canvas.getContext("2d")!;

let mp4Muxer: Mp4Muxer.Muxer<Mp4Muxer.ArrayBufferTarget> | null = null;
let webmMuxer: WebMMuxer.Muxer<WebMMuxer.ArrayBufferTarget> | null = null;
let mp4VideoEncoder: VideoEncoder | null = null;
let webmVideoEncoder: VideoEncoder | null = null;
let startTime: number | null = 0;
let recordingState: "start" | "recording" | null = null;
let lastKeyFrame = -Infinity;
let frameCount = 0;
let fps = 30;

const startRecording = async () => {
  mp4Muxer = new Mp4Muxer.Muxer({
    target: new Mp4Muxer.ArrayBufferTarget(),
    video: {
      codec: "avc",
      width: canvas.width,
      height: canvas.height,
    },
    fastStart: "in-memory",
  });
  webmMuxer = new WebMMuxer.Muxer({
    target: new WebMMuxer.ArrayBufferTarget(),
    video: {
      codec: "V_VP9",
      width: canvas.width,
      height: canvas.height,
      frameRate: 30,
    },
  });

  mp4VideoEncoder = new VideoEncoder({
    output: (chunk, meta) => mp4Muxer!.addVideoChunk(chunk, meta),
    error: (e) => console.error(e),
  });
  mp4VideoEncoder.configure({
    codec: "avc1.42001f",
    width: canvas.width,
    height: canvas.height,
    bitrate: 1e6,
  });

  webmVideoEncoder = new VideoEncoder({
    output: (chunk, meta) => webmMuxer!.addVideoChunk(chunk, meta),
    error: (e) => console.error(e),
  });
  webmVideoEncoder.configure({
    codec: "vp09.00.10.08",
    width: canvas.width,
    height: canvas.height,
    bitrate: 1e6,
  });
};

const encodeVideoFrame = () => {
  let elapsedTime = (frameCount * 1e6) / fps;
  let frame = new VideoFrame(canvas, {
    timestamp: (frameCount * 1e6) / fps, // Ensure equally-spaced frames every 1/30th of a second
  });

  // Ensure a video key frame at least every 10 seconds for good scrubbing
  let needsKeyFrame = elapsedTime - lastKeyFrame >= 10_000_000; // 10 s in µs, not ms
  if (needsKeyFrame) lastKeyFrame = elapsedTime;

  mp4VideoEncoder?.encode(frame, { keyFrame: needsKeyFrame });
  webmVideoEncoder?.encode(frame, { keyFrame: needsKeyFrame });
  frame.close();
};

const endRecording = async () => {
  recordingState = null;

  await mp4VideoEncoder?.flush();
  mp4Muxer?.finalize();
  let mp4Buffer = mp4Muxer?.target.buffer!;
  downloadBlob(new Blob([mp4Buffer]), "test.mp4");

  await webmVideoEncoder?.flush();
  webmMuxer?.finalize();
  let webmBuffer = webmMuxer?.target.buffer!;
  downloadBlob(new Blob([webmBuffer]), "test.webm");

  mp4VideoEncoder = null;
  mp4Muxer = null;
  webmVideoEncoder = null;
  webmMuxer = null;
  startTime = null;
};

const downloadBlob = (blob: Blob, filename: string) => {
  let url = window.URL.createObjectURL(blob);
  let a = document.createElement("a");
  a.style.display = "none";
  a.href = url;
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  window.URL.revokeObjectURL(url);
};

canvas.addEventListener("click", () => {
  recordingState = "start";
});

function run() {
  if (recordingState === "start") {
    frameCount = 0;
    startRecording();
    recordingState = "recording";
  }

  draw();

  if (recordingState === "recording") {
    encodeVideoFrame();
  }

  frameCount++;

  if (recordingState === "recording" && frameCount === 30) {
    endRecording();
    frameCount = 0;
    recordingState = null;
  }

  window.requestAnimationFrame(run);
}
window.requestAnimationFrame(run);

function draw() {
  ctx.fillStyle = "gray";
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  ctx.font = "100px monospace";
  ctx.fillStyle = `white`;
  ctx.fillText(frameCount.toString(), 200, 200);
}

Dynamic browser support

Similar to Vanilagy/webm-muxer#32
I use both webm-muxer and mp4-muxer with a manual check for the browser version, which has proved unreliable.
Ideally, I would like to start with webm-muxer and try to make it playable (remove alpha, change codec, etc.), but if the container is not supported at all, fall back to mp4-muxer and then do the same (remove alpha, change codec, etc.).

This requires both libraries to have isPlayable() and makePlayable() methods.

output file can't be played everywhere. something missing

Hi,

I managed to get the muxing working, but the output file seems to be corrupted in some way. It can be played with VLC and Chrome on my computer, but it fails to play on my Android phone or with Windows Media Player, the preview is not shown in the Windows file explorer, and it can't be opened by https://gpac.github.io/mp4box.js/test/filereader.html

If I just remux with ffmpeg in copy mode, it works fine. ffmpeg adds something, as a comparison of the ffprobe outputs shows:
[mov,mp4,m4a,3gp,3g2,mj2 @ 000000000284af40] Processing st: 0, edit list 0 - media time: 0, duration: 127492
[mov,mp4,m4a,3gp,3g2,mj2 @ 000000000284af40] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 000000000284af40] Processing st: 1, edit list 0 - media time: 0, duration: 483336

mediainfo also shows some extra lines for the ffmpeg-remuxed version:

  • for global:
    Overall bit rate mode : Variable
    encoded-date and tagged-date
  • video :
    stream-size , encoded-date and tagged-date
  • audio :
    Bit rate mode : Variable
    stream-size , encoded-date and tagged-date

I use avc1.420033 and mp4a.40.2

my 2 test files :
https://keny.toys/tmp/test_mp4muxer.mp4
https://keny.toys/tmp/test_remuxedffmpeg.mp4

Video quality on Safari

Hi!

I have a hard time getting good-quality MP4 videos created on desktop Safari.

This config gives me perfect results in Chrome (frames are encoded from canvas):

videoEncoder.configure({
 codec: "avc1.640028",
 width: 720,
 height: 720,
 bitrate: 10_000_000,
 framerate: 30,
 latencyMode: "quality",
});

But no matter how I tweak the config, Safari creates MP4 videos at ~110 kb/s.

Can anyone share a config that works in Safari?

strange codec mismatch

Hello!

I'm using this muxer to generate MP4 videos using VideoEncoder, to which I pass either the avc1.F4E034 or avc1.42001F codec.

However, when I try to demux the generated videos using mp4box.js, it returns different codec names.

I.e. the MP4 generated using avc1.F4E034 is shown by mp4box as having the codec avc1.42002a, and the one generated using avc1.42001F is now avc1.42031e. That second one, avc1.42031e, makes the VideoDecoder fail.

Is there a reason why?

VP9-encoded MP4 causes `Invalid typed array length: -4` error when passed to MP4Box.js

The error is easily observed by attempting to load a VP9 MP4 encoded by mp4-muxer (cyan video attached below) into the MP4Box.js online demo and checking the browser console.
Compare it to the VP9 MP4 muxed by ffmpeg.wasm (Big Buck Bunny video attached), which does not trigger the same error and loads successfully in the demo.

This minimal reproduction repo demonstrates the error and shows how the two videos were muxed.

v9-mp4-muxer-mp4box-error.mp4
mp4box-no-error.mp4

Is there an issue with the data generated by StreamTarget?


I am using the Node.js API (in an Electron app) to write data to the file system.

One approach is to use StreamTarget to continuously write data to the file.
Another approach is to use ArrayBufferTarget and then write the data to the file at the end.

The file written with StreamTarget appears to be corrupted.
The file written with ArrayBufferTarget is fine.

Here is an example code.

  const writeStream1 = fs.createWriteStream("file-1.mp4");
  const writeStream2 = fs.createWriteStream("file-2.mp4");

  // videoMuxer1 uses fileTarget1
  const fileTarget1 = new Mp4Muxer.StreamTarget(
    (data, position) => {
      writeStream1.write(data); // note: `position` is ignored here
    },
    () => {
      writeStream1.end();
      // file-1.mp4 is corrupted
    }
  );

  // videoMuxer2 uses fileTarget2
  const fileTarget2 = new Mp4Muxer.ArrayBufferTarget();
  const onStop = () => {
    let { buffer } = videoMuxer2.target;
    writeStream2.write(Buffer.from(buffer));
    writeStream2.end(); // in Electron/Node
    // file-2.mp4 is fine
  };
  
  const videoMuxer1 = new Mp4Muxer.Muxer(...);
  const videoMuxer2 = new Mp4Muxer.Muxer(...);
  videoMuxer1.addVideoChunk(chunk, meta);
  videoMuxer2.addVideoChunk(chunk, meta);
  videoMuxer2.addAudioChunk(chunk, meta);
  videoMuxer2.addAudioChunk(chunk, meta);

position argument in the onData callback

I just wanted to confirm my understanding of the position argument. I have a slightly different requirement: in our app, the muxer runs in a worker, and the worker forwards the muxed chunks to the main thread. The actual file write happens in the main thread; we are using StreamTarget with the onData callback for this.

Is it a correct assumption that the 'position' argument corresponds to the file write offset, and that the writer just needs to seek to that position and write the chunk there?

Please advise.

Compatibility issues

Hey, huge fan of this library! 💜

I've recently changed from minimp4 to this muxer! It works like a charm for most players, but I seem to have lost some compatibility:

Repros

For colorful repros

download new.

demo-new-muxer.mp4

download old.

demo-old-muxer.mp4
For minimal repros

download new.

demo-new-muxer2.mp4

download old.

demo-old-muxer2.mp4

Errors

Here is a screenshot of an error message from PPT:


This seems to have been fixed by installing K-Lite_Codec_Pack_1835_Standard

Here is the displayed embedded version in Slack


Small investigation

From the MP4Box filereader page, I found small differences in the exported videos (I used the minimal ones)

New one


Old one

  • Could it be the brands?
  • Could it be the 0 duration in the header?
  • Could it be the 64 bit thing that powerpoint reports? I saw some mentions of 64-bit in the codebase: [1] [2]

Thanks for your time & insights

I'm happy to help fix the bug if you can provide guidance 👍

avc1.7a1034 codec fails with error DOMException: Unsupported codec profile.

Hi,

I am not sure whether this is related to mp4-muxer or is a generic browser constraint, but in Chrome I am getting an error when trying to record a video using the avc1.7a1034 codec. Ultimately my goal is to record video in yuv422 format, so I am trying this configuration.

I configure muxer as follows:

 muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
        codec: "avc",
        width: 1920,
        height: 1080,
    },
});

and VideoEncoder in the following way:

this.videoEncoder = new VideoEncoder({
    output: (chunk, meta) => {
        // console.log(chunk, meta);
        try {
            return this.muxer.addVideoChunk(chunk, meta); // <--------- ERROR "DOMException: Unsupported codec profile." IS THROWN IN HERE
        } catch (e) {
            console.error(e); 
        }
    },
    error: (e) => console.error(e),
});
this.videoEncoder.configure({
    codec: "avc1.7a1034",
    width: 1920,
    height: 1080,
    bitrate: 50e6,
    framerate: 25,
});

Are you aware of such an issue, and whether there could be any fix for it?

Error on android

Chrome for Android version 120 added support for AAC in WebCodecs, but I get an error while using mp4-muxer (on 2 different Android devices running 2 different OS versions, both on Chrome 120). Things are fine on desktop. An error screenshot is provided below.

Seems similar to #32 .
Also, support for HLS streaming (fMP4) would be a great feature addition.

Audio & video time out of sync

This is great, but I found a problem: if you use it to encode a 3-minute video, the video timestamps generated by mp4-muxer always run a few seconds faster than the actual ones, and the audio is out of sync. The same code works completely normally when tested with webm-muxer.

The Nvidia series of graphics cards has never supported hardware encoding of WebM.
Choosing mp4-muxer is the best solution, as it supports hardware encoding in the browser.
I hope the mp4-muxer timestamp desync problem can be fixed.

Thank you very much.

Timestamps must be monotonically increasing.

Dear Sir,
I am designing a video editor in JavaScript. I am using your library, which is awesome. However, sometimes when using large files I am facing the following error:

"Timestamps must be monotonically increasing.
Muxer.validateTimestamp_fn (mp4-muxer.js:1088:13)"

I am not sure if it is a bug or my coding error. I have tried a lot to fix this error but could not succeed.
You can see the code by clicking following link.
https://github.com/khuramhaf/Video-Editor

The code is open source and I shall be thankful to you if you kindly help me in fixing the error.

Muxing avc data from polyfill library

Hello again,

back again with one challenging question. I'm using the WebCodecs VideoEncoder wherever possible, but e.g. Firefox or older iOS/Safari versions don't support it yet. So I'm testing some polyfill projects and was interested in this one: https://github.com/everywill/VideoEncoder

There is a solution with its own muxer, which really works, but I would like to keep using this mp4-muxer library, also because it supports sound muxing.

So I was trying to use the addVideoChunkRaw method with data from that VideoEncoder. Again, it seems like some problem with the raw AVC data. When I add a complete frame there, it's playable in Chrome but not in QuickTime - the MP4 seems to be broken.
So I tried to use ChatGPT to get some help with AVC codec parsing (as it previously helped me with AAC), but I've had no luck so far. When I try to detect the header and cut it from the ArrayBuffer, the video is still not playable.

I understand it's hard to say why this doesn't work due to the third-party library, but maybe you have some tips on where the problem could be. I have two ideas:

  1. AVC codec - header/data splitting - I'm doing it wrong and ChatGPT doesn't give me correct answers

  2. The description attribute is missing in my muxer -> I noticed that when I use the native VideoEncoder, it gives me metadata with a description field within the key frame video chunk. When I don't pass this metadata to your muxer - especially the description field - the muxed video doesn't work correctly.

Thank you for any ideas

Questions about the size of video composites

Videos synthesized from canvas screenshots are particularly large: a 10-minute video (1280×720, 30 fps) is about 2 GB. What approaches could reduce the size of the video?

Works on Chrome but not on Safari

Strangely, I can create valid mp4 videos in Chrome, but fail to do so in Safari.
Safari does not complain about an unsupported codec or anything.

I create an mp4 muxer using the codecs in the example in the README

    const {Muxer, ArrayBufferTarget} = await import('mp4-muxer');

    // Set the metadata
    this.container = 'mp4';
    this.codec = 'avc1.42001f';
    // H264 only supports even sized frames
    this.width = this.image.width + (this.image.width % 2);
    this.height = this.image.height + (this.image.height % 2);

    // Create the muxer
    this.muxer = new Muxer({
      target: new ArrayBufferTarget(),
      fastStart: 'in-memory',
      video: {
        codec: 'avc',
        width: this.width,
        height: this.height,
      },
    });

Followed by a VideoEncoder

    this.videoEncoder = new VideoEncoder({
      output: (chunk, meta) => this.muxer.addVideoChunk(chunk, meta),
      error: e => console.error(e),
    });
    const config = {
      codec: this.codec,
      width: this.width,
      height: this.height,
      bitrate: this.bitrate,
      framerate: this.fps,
    };
    this.videoEncoder.configure(config);

Full reproduction video:

mp4-muxer.mp4

Rotation metadata

Hi, is there a way to add rotation metadata to the final file?

I found that Safari always records video in landscape and, for portrait, adds rotation metadata, which causes a weird issue in the final file.
ffprobe output for the source file (screenshot attached).
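For what it's worth, recent versions of mp4-muxer document a `rotation` option on the video track config, which should write the rotation into the track's transformation matrix. If your version supports it, something like the following might help (a sketch; check your version's README for the exact option):

```typescript
let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
        codec: 'avc',
        width: 1280,
        height: 720,
        rotation: 90 // rotate the video 90° via track metadata
    },
    fastStart: 'in-memory'
});
```

This mirrors what Safari does: the pixels stay landscape, and players apply the rotation at playback time.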

Does mp4-muxer support muxer encoding data for multiple audioEncoders simultaneously

Hello Vanilagy, mp4-muxer is a great library. May I ask whether it supports muxing encoded data from multiple AudioEncoders simultaneously? I know the muxer supports both a video stream and an audio stream at the same time, but is it feasible to combine multiple simultaneously playing audio streams with an encoded video stream for export?
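Since an MP4 muxed this way carries a single audio track, multiple simultaneous audio streams generally need to be mixed down to one stream before encoding. In a browser the easiest route is to connect every source to one MediaStreamAudioDestinationNode; the sketch below shows the equivalent manual mixdown of per-track PCM buffers (hypothetical helper; assumes all inputs share the same sample rate and channel count):

```javascript
// Mix several mono Float32 PCM buffers into one by summing samples and
// clamping to [-1, 1]. The result can be wrapped in an AudioData object
// and fed to a single AudioEncoder, whose output goes to the muxer.
function mixDown(buffers) {
  const length = Math.max(...buffers.map(b => b.length));
  const mixed = new Float32Array(length);
  for (const buffer of buffers) {
    for (let i = 0; i < buffer.length; i++) {
      mixed[i] += buffer[i];
    }
  }
  for (let i = 0; i < length; i++) {
    mixed[i] = Math.min(1, Math.max(-1, mixed[i]));
  }
  return mixed;
}
```

Either way, the muxer only ever sees one AAC stream.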

Use MediaStream from webaudio, rather than mic input.

Hi!

I'm trying to record my canvas as well as a MediaStream that comes from Web Audio (Tone.js, to be precise).
I extracted the audio track from the AudioContext and fed it to a MediaStreamTrackProcessor as you do in your example. However, the AudioEncoder doesn't work; it throws:

DOMException: Input audio buffer is incompatible with codec parameters

This is the full relevant code:

    const audioStream = Tone.getContext().createMediaStreamDestination();
    const audioTrack = audioStream.stream.getAudioTracks()[0];
    const audioSampleRate = 48000;

    audioEncoder = new AudioEncoder({
        output: (chunk, meta) => muxer.addAudioChunk(chunk, meta),
        error: (e) => console.error(e),
    });
    audioEncoder.configure({
        codec: "mp4a.40.2",
        numberOfChannels: 1,
        sampleRate: audioSampleRate,
        bitrate: 128000,
    });

    let trackProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
    let consumer = new WritableStream({
        write(audioData) {
            if (!recording) return;
            audioEncoder.encode(audioData);
            audioData.close();
        },
    });

    trackProcessor.readable.pipeTo(consumer);

Thanks in advance!
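"Input audio buffer is incompatible with codec parameters" typically means the incoming AudioData's sample rate or channel count doesn't match the encoder config (Tone.js contexts often run at 44100 Hz, and a MediaStreamDestination is usually stereo). One defensive fix is to derive the config from the first AudioData instead of hard-coding it. A sketch with a hypothetical helper:

```javascript
// Build an AudioEncoder config from the actual properties of an incoming
// AudioData frame, so sampleRate/numberOfChannels always match the input.
function configFromAudioData(audioData, bitrate = 128000) {
  return {
    codec: 'mp4a.40.2',
    sampleRate: audioData.sampleRate,
    numberOfChannels: audioData.numberOfChannels,
    bitrate
  };
}
```

Call `audioEncoder.configure(configFromAudioData(firstFrame))` lazily inside the WritableStream's `write()` before encoding the first frame, and create the muxer's audio track with the same values.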

audio video way out of sync in latest version

I am trying to understand and debug what is happening; it may be an issue with our application, so please bear with me. All is good except the audio is very fast and way out of sync with the video: it randomly plays smoothly at a few intervals, and for the rest it goes too fast and is unintelligible.
I recorded a 10-minute YouTube video and found that the audio duration is only 1 minute while the video is the full 10 minutes. The video shows up fine. I am pretty sure I saw this working a few months back.
Here is the output of the mediainfo command for the recorded MP4 video:

mediainfo 'soundkone.net_recording_2024-06-03 10_53_44.mp4'

General
Complete name : soundkone.net_recording_2024-06-03 10_53_44.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/avc1/mp41)
File size : 32.7 MiB
Duration : 10 min 6 s
Overall bit rate : 451 kb/s
Frame rate : 5.826 FPS
Encoded date : 2024-06-03 05:23:47 UTC
Tagged date : 2024-06-03 05:23:47 UTC

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Constrained [email protected]
Format settings : 1 Ref Frames
Format settings, CABAC : No
Format settings, Reference frames : 1 frame
Format settings, Slice count : 2 slices per frame
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 10 min 6 s
Source duration : 10 min 7 s
Bit rate : 435 kb/s
Width : 800 pixels
Height : 600 pixels
Display aspect ratio : 4:3
Frame rate mode : Variable
Frame rate : 5.826 FPS
Minimum frame rate : 1.562 FPS
Maximum frame rate : 12.508 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.156
Stream size : 31.5 MiB (96%)
Source stream size : 31.5 MiB (96%)
Title : mp4-muxer-hdlr
Encoded date : 2024-06-03 05:23:47 UTC
Tagged date : 2024-06-03 05:23:47 UTC
Color range : Limited
Color primaries : BT.601 NTSC
Transfer characteristics : BT.601
Matrix coefficients : BT.601
mdhd_Duration : 606960
Codec configuration box : avcC

Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : mp4a-40-2
Duration : 1 min 10 s
Source duration : 1 min 10 s

Bit rate mode : Constant
Bit rate : 132 kb/s
Channel(s) : 1 channel
Channel layout : M
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 1.11 MiB (3%)
Source stream size : 1.11 MiB (3%)
Title : mp4-muxer-hdlr
Encoded date : 2024-06-03 05:23:47 UTC
Tagged date : 2024-06-03 05:23:47 UTC
mdhd_Duration : 70187

Incorrect video codec value saved (possibly...) ?

Hi, Thanks for the great software.

I am creating my videos using your demo example code and the codec string avc1.42403e. However, when I try to read the file using mp4box, I get the codec string avc1.420320, but I can only read the file if I use the value avc1.42403e. I am not sure if the issue is with your package, with mp4box, or with my general lack of understanding of this topic :)

Thanks
Pedro

any possibility to add vp9 and opus ?

I made some extensive tests for a project that involves encoding and decoding with WebCodecs, and I need it to work on mobile phones. The mp4a codec is not yet supported on Android Chrome, so using Opus is the only option.

I've tried muxing VP9 and Opus, and it works nicely with webm-muxer, but I can't demux webm-muxer's output with jswebm. jswebm fails for most WebM videos I've tried (track data element not found, invalid file size, invalid seek formatting, etc.), but works for the Big Buck Bunny demo video. The project hasn't been updated in a while, and I couldn't find a better option.

I don't know how hard it would be to add support for VP9 and Opus to mp4-muxer. It would be a nice addition; together with mp4box, it would cover all WebCodecs needs.

The following references show that VP9 and Opus are valid codecs for MP4:
https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Video_codecs
https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Audio_codecs

In the meantime, I will craft an MP4 with VP9 and Opus to check whether it can be demuxed correctly with mp4box.

regards,

[HELP] Generated MP4 files does not work on Whatsapp Web

Hello, I hope you are well!

I'm using canvas-record to record my canvas and save a video to MP4 using the AVC1 codec (it internally uses mp4-muxer to generate the MP4 file). The generated MP4 files work well, but they don't seem to work with WhatsApp Web (I think this is a codec problem, although the codec recommended for WhatsApp is AVC1).

I'm still not sure what causes this problem; I've already tried several profile, level, latencyMode, etc. settings. The MP4 videos are generated correctly, and I can watch them in a video player or in the browser, but when I send them via WhatsApp Web, no preview is generated and the videos are sent only as attached files.

I will attach two examples of similar videos. The first one works on Whatsapp (it was converted from webm to mp4 on a remote server), and the second does not (it was generated directly in mp4 using canvas-record).

First video, converted from webm to mp4 using ffmpeg on a server (works on whatsapp web):
https://github.com/dmnsgn/canvas-record/assets/11933789/80b6745b-cce3-42fa-aa98-bd8670a95d50

Second video, converted using webcodecs on the browser using canvas-record (does not work on whatsapp web):
https://github.com/dmnsgn/canvas-record/assets/11933789/1531e1a7-59be-4418-b079-2e71477bea20

The only relevant setting that seems to differ between the two videos is the Codec ID, which is mp41 for video 1 and mp42 for video 2. But I really don't know how to change this using mp4-muxer; I tried to locate it in the library code and options, but without success.

My question is: is there any way to change the brand from mp42 to mp41 before saving the video, so that I can make it work with WhatsApp?

Some information about both videos:

Visual comparison

image

Video 1 - Ffmpeg

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\video1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:06.56, start: 0.000000, bitrate: 6494 kb/s
  Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 898x584 [SAR 1:1 DAR 449:292], 6492 kb/s, 16 fps, 16 tbr, 16384 tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]

Video 2 - Webcodecs

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\video2.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isom
    creation_time   : 2023-10-11T18:34:44.000000Z
  Duration: 00:00:06.57, start: 0.000000, bitrate: 2158 kb/s
  Stream #0:0[0x1](und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 954x604, 2155 kb/s, 60 fps, 60 tbr, 90k tbn (default)
    Metadata:
      creation_time   : 2023-10-11T18:34:44.000000Z
      vendor_id       : [0][0][0][0]
      encoder         : JVT/AVC Coding

Possibility to mux aac audio without encoding it

Hello,

first want to say that thanks for your webm and mp4 muxer libraries. There are awesome!

I have only one question. If I have audio in the AAC codec, without any container like MP4, and I want to mux this audio into an MP4 container using the AAC codec, do I really need an AudioEncoder? Is there a way to mux the audio into an MP4 container without one?

If I try it this simple way, the audio is playable in Chrome, but QuickTime, VLC, etc. can't play it; it immediately skips to the end when playback starts:

const muxer = new Muxer({
    target: new ArrayBufferTarget(),
    audio: {
        codec: 'aac',
        numberOfChannels: 2,
        sampleRate: 44100
    },
    firstTimestampBehavior: 'offset'
});

const arrayBuffer = await fetch('https://:DOMAIN:/assets/audio_testing/background_short.aac')
    .then((response) => response.arrayBuffer());

const audioContext = new AudioContext({sampleRate: 44100});

const audioBuffer = await audioContext.decodeAudioData(arrayBuffer.slice(0));

muxer.addAudioChunkRaw(new Uint8Array(arrayBuffer), 'key', 0, audioBuffer.duration * 1000000, {
    decoderConfig: {
        codec: 'aac',
        numberOfChannels: 2,
        sampleRate: 44100,
    }
});

Is there a proper way to do this without an encoder, or is encoding necessary for the muxer even when the input is the same kind of AAC data that an MP4 container's AAC audio track holds?

Thank you for your answer

wrong duration & framerate calculation

I am facing an issue with video frames timestamps & framerate calculation.

Let's say I am creating a 1-second video made of 10 frames: one frame every 1/10 of a second.

The frames will be encoded with the following timestamps (frame index : timestamp in µs):
0 : 0
1 : 100000
2 : 200000
3 : 300000
4 : 400000
5 : 500000
6 : 600000
7 : 700000
8 : 800000
9 : 900000

When opening this video in a video player (VLC, QuickTime, ...), it is seen as having an 11.11111 fps frame rate and a duration of 0.9 seconds. That must be because players see it as a 10-frame video with a duration of 0.9 seconds (the timestamp of the last frame).

Is there a way to specify the video duration when muxing the MP4 file, so that the video is seen as 1 s long?

I can provide a video file if that helps
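Players derive the track duration from the sample table, so the last frame needs an explicit duration to push the total to a full second. One way is to give every VideoFrame a `duration` (WebCodecs propagates it into the encoded chunk); the sketch below shows just the timestamp/duration arithmetic in microseconds:

```javascript
// Per-frame timestamp and duration in microseconds for an fps-based
// timeline. Giving every frame (including the last one) an explicit
// duration makes the total timeline frameCount / fps long instead of
// (frameCount - 1) / fps.
function frameTiming(frameIndex, fps) {
  const duration = Math.round(1e6 / fps);
  return { timestamp: frameIndex * duration, duration };
}

// 10 frames at 10 fps: the last frame starts at 900000 µs, and with its
// duration the timeline ends at the full 1000000 µs (1 second).
const last = frameTiming(9, 10);
const totalUs = last.timestamp + last.duration;
```

When constructing frames, both fields can be passed directly: `new VideoFrame(canvas, frameTiming(i, 10))`. Whether the muxer honors the chunk duration for the final sample may depend on the mp4-muxer version, so verify the result with a player or ffprobe.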

live streaming

How can I do live streaming?

According to the documentation
StreamTarget is not intended for live-streaming
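For context, the pattern the project's live-streaming demo uses is fragmented MP4 written to a StreamTarget, so each fragment can be forwarded as soon as it is produced. A sketch based on the documented options; `sendToServer` is a hypothetical stand-in for whatever transport (WebSocket, fetch upload, MediaSource buffer) is used:

```typescript
import { Muxer, StreamTarget } from 'mp4-muxer';

let muxer = new Muxer({
    target: new StreamTarget({
        // Each piece of output is handed over as soon as it is ready;
        // forward it to the transport of your choice.
        onData: (data, position) => sendToServer(data, position)
    }),
    video: { codec: 'avc', width: 1280, height: 720 },
    fastStart: 'fragmented' // moof/mdat fragments instead of one final moov
});
```

With `fastStart: 'fragmented'`, data is written strictly append-only, which is what makes streaming the output feasible.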

need some advice on doing live streaming with mp4-muxer

Hi ,
Thanks for a wonderful library. I have used mp4-muxer for recording and it worked like a charm. Now I am looking to do live streaming over DASH; any pointers or advice would speed up my efforts.
Thanks

B-frame support

Hi there! Great library! Thank you for providing it! I'm using it to mux together "raw" video and audio packets for use with the MediaSource API to do a low-latency preview for a live streaming application. It works great!

While we discourage the use of B-frames (to minimize decode latency), one of our testers had B-frames enabled in his encoder, and received the Timestamps must be monotonically increasing error. I am using the PTS as the timestamp, but with B-frames these can definitely be out of order (the decode times, aka DTS values, are monotonically increasing, however).

I saw you referenced this issue in this comment:

Regarding b-frames and PST and DTS, there's actually no logic in here from my side - I simply pipe all of the encoded frame data in and have PST=DTS everywhere. Is this an error on my part? I had to read into this topic while writing the muxer, so it's possible I missed something! All of the files I muxed with high profiles worked great, though.

I don't think it's an error, per se, but if B-frame support is required, I think that we need to write a ctts box to provide the composition time offset (PTS - DTS) for each sample.

I started to make this change, but I noticed that for fastStart !== 'fragmented', we don't even keep sample times (timeToSampleTable, to write into the stts box), so then I got confused. Shouldn't we be tracking sample times and writing the stts box for fragments as well? Or is that not supported for some reason? I'm happy to work on this issue, but I need some guidance on the design choices so far.

Thank you!

Some players don't show the last frame

Maybe this is a problem with MP4 or with the players, but I noticed that if I export a simple video such as this:

davinci-no-key-frames.mp4

QuickTime on my Mac, Photos on my iPhone, and the Instagram story creator don't show the last frame where the canvas is all red. In the browser, as you can see above, it works correctly.

In instagram:

RPReplay_Final1706780663.MP4

Here's my code:

    function drawFrame(nFrame: number) {
      if (ctx === null) return;
      const p = nFrame === 0 ? 0 : nFrame / (TOTAL_FRAMES - 1);
      ctx.fillStyle = 'blue';
      ctx.fillRect(0, 0, CANVAS_WIDTH, CANVAS_HEIGHT);
      ctx.fillStyle = 'red';
      ctx.fillRect(0, 0, CANVAS_WIDTH * p, CANVAS_HEIGHT * p);
    }

    // Create an MP4 muxer with a video track and maybe an audio track
    const muxer = new Muxer({
      target: new ArrayBufferTarget(),

      video: {
        codec: 'avc',
        width: CANVAS_WIDTH,
        height: CANVAS_HEIGHT,
      },

      // Puts metadata to the start of the file. Since we're using ArrayBufferTarget anyway, this makes no difference
      // to memory footprint.
      fastStart: 'in-memory',

      // Because we're directly pumping a MediaStreamTrack's data into it, which doesn't start at timestamp = 0
      firstTimestampBehavior: 'offset',
    });

    const videoEncoder = new VideoEncoder({
      output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
      error: (e) => console.error(e),
    });
    videoEncoder.configure({
      codec: 'avc1.42001f',
      width: CANVAS_WIDTH,
      height: CANVAS_HEIGHT,
      bitrate: 1e6,
    });

    currentFrame.current = 0;
    while (currentFrame.current < TOTAL_FRAMES) {
      // add frame
      drawFrame(currentFrame.current);
      const frame = new VideoFrame(canvasRef.current, {
        timestamp: (currentFrame.current * 1e6) / 30, // Ensure equally-spaced frames every 1/30th of a second
      });
      videoEncoder.encode(frame);
      frame.close();
      currentFrame.current++;
    }
    await videoEncoder.flush();
    muxer.finalize();

    let buffer = muxer.target.buffer;
    downloadBlob(new Blob([buffer]));

I hope it's not a silly mistake with my code. Thanks! :)

Audio and Video Sync Issue

Thanks for your great module.
Until 480p video, it is good.
But when I record 720p video, an audio/video sync problem occurs: the audio is longer than the video.
Can you guide me how to resolve this issue?
Or is it this module's bug?

Export MuxerOptions

Hello!

Could you export MuxerOptions?

now:

export { Muxer, ArrayBufferTarget, StreamTarget, FileSystemWritableFileStreamTarget };

then:

export { Muxer, ArrayBufferTarget, StreamTarget, FileSystemWritableFileStreamTarget, MuxerOptions };

It would reduce code lines significantly in my project.

Thank you!

Error while running the demo

Thanks for the library, but I am getting an error while running the demo on Edge on Windows 10 (the webm-muxer demo works fine). Screenshot of the console attached.

Muxer.finalize() never returns anything

Hi,

I think there is a mismatch between the doc and the source code.

The doc states : @returns Should you have used target: 'buffer' in the configuration options, this method will return the buffer containing the final MP4 file.

But no return is to be found in the source code : https://github.com/Vanilagy/mp4-muxer/blob/8371be1feacd57918016b37c6b9baa44a0de190b/src/muxer.ts#LL389C7-L389C7

This also raises the question: why is the documentation not in the source code but in a separate .d.ts file?

Error when using FileSystemWritableFileStreamTarget

The following error:

Screenshot 2023-11-30 at 15 26 27

Occurs when trying to use FileSystemWritableFileStreamTarget with a fileWriter.

The write function (in the built code) expects a Blob, not an object:

Screenshot 2023-11-30 at 15 27 36

For now I've worked around the issue by wrapping the fileWriter

    const wrapFileWriter = (fileWriter) => {
        const originalWrite = fileWriter.write;
        const modifiedWrite = (payload) => {
            const blob = new Blob([payload.data]);
            originalWrite.call(fileWriter, blob);
        };
        fileWriter.write = modifiedWrite;
        return fileWriter;
    };

    let muxer = new Mp4Muxer.Muxer({
        target: new Mp4Muxer.FileSystemWritableFileStreamTarget(wrapFileWriter(fileWriter)),
        ...
    });

Chunking not working properly?

So I have this code:

const muxer = new Mp4Muxer({
  target:  new StreamTarget({
    onData: async (data: Uint8Array, position: number) => {
      console.log(data, position);

      // this function performs a write to file via stream (fs.createWriteStream method)
      window.desktop.writeChunkStream(data);
    },

    // looks like this one is bugged? it doesn't send all the correct data
    chunked: true,
    chunkSize: 8 * 2 ** 20,
  }),
  fastStart: "fragmented",
  firstTimestampBehavior: 'offset',
});

I have a test video which is 2 minutes and 48 seconds long, around 110 MB.

If I remove the chunked and chunkSize parameters, everything works correctly; the video is written without any loss, generating the full 2:48 at 110 MB.

If I add those parameters back, it builds the video, but an incomplete one: it generates only a 12-second video, yet with the same 110 MB size.

Looking at the logs, I receive data with a different position every time (so I guess it doesn't send identical data), but I noticed I am getting sets of data whose sizes don't match the chunkSize, even though I specified one:

image

latest version of mp4 muxer is broken

The latest version of mp4-muxer is broken: the onData callback never gets invoked. When mp4-muxer.js is reverted to a previous version (commit b295e92), things start working. Please check.

TypeError: track.currentChunk.sampleData is not iterable

Hey, I'm getting a stack-trace error, somewhat randomly, here:

    const muxer = new Mp4Muxer({
        target: new FileSystemWritableFileStreamTarget(fileStream),
        video: {
            codec: 'avc',
            width: params.width,
            height: params.height,
        },
        fastStart: false,
    });
    
    const encodeFrame = (chunk, meta) => {
        muxer.addVideoChunk(chunk, meta, chunk.timestamp) // I'm making adjustments to timestamp
    }
    
    const videoEncoder = new VideoEncoder({
        output: (chunk, meta) => {
            encodeFrame(chunk, meta);
            encodedFrameCount += 1;

            if (encodedFrameCount >= frameCount) {
                videoEncoder.flush().then(() => {
                    console.log('flushed video')
                    muxer.finalize()
                })
            }
        },
        error: (error) => {
            console.error('error', error);
        },
    });

    // The videoEncoder.configure()
    // The calls to videoEncoder.encode()
Uncaught (in promise) TypeError: track.currentChunk.sampleData is not iterable
    at Muxer.finalizeCurrentChunk_fn (mp4-muxer.js?v=ff8ce078:1250:41)
    at Muxer.finalize (mp4-muxer.js?v=ff8ce078:958:77)
    at finalize (encode.ts:38:31)
    at encode.ts:48:25

Do you know where it could come from?

Thanks for the work, it's greatly appreciated!

Can I convert WebM videos into mp4?

I record videos in WebM using MediaRecorder. I wonder if, perhaps in combination with VideoDecoder, it would be possible to re-encode them into an MP4 file without having to use ffmpeg for the conversion. 🤔
