
mugen's Introduction


 _ __ ___  _   _  __ _  ___ _ __
| '_ ` _ \| | | |/ _` |/ _ \ '_ \
| | | | | | |_| | (_| |  __/ | | |
|_| |_| |_|\__,_|\__, |\___|_| |_|
                  |___/


A command-line music video generator based on rhythm

Use it to brainstorm AMVs, montages, and more! Check it out.

Built with moviepy for programmatic video editing and librosa for audio analysis.

Strategy

  1. Provide an audio file and a set of video files

  2. Perform rhythm analysis to identify beat locations

  3. Generate a set of random video segments synced to the beat

  4. Discard segments with scene changes, detectable text (e.g. credits), or low contrast (i.e. solid colors, very dark scenes)

  5. Combine the segments in order, overlay the audio, and output the resulting music video
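The steps above can be sketched schematically in pure Python. This is an illustration only, not mugen's actual implementation: in practice the beat times come from librosa's beat tracker and the filters come from frame analysis, while here the beats are a fixed grid and the filter is a caller-supplied predicate:

```python
import random

def segments_from_beats(beat_times, video_duration, passes_filters):
    """For each inter-beat interval, sample random clips from the source
    video until one passes the content filters (schematic only)."""
    segments = []
    for start, end in zip(beat_times, beat_times[1:]):
        length = end - start
        while True:
            clip_start = random.uniform(0, video_duration - length)
            clip = (clip_start, clip_start + length)
            if passes_filters(clip):
                segments.append(clip)
                break
    return segments

# Cut on a steady 120 BPM grid (a beat every 0.5 seconds), accept everything
beats = [i * 0.5 for i in range(9)]
segments = segments_from_beats(beats, video_duration=600,
                               passes_filters=lambda clip: True)
```

Each resulting `(start, end)` pair is a clip of the source video whose duration matches one inter-beat interval; concatenating them in order yields video cut to the rhythm.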

Installation

Mugen is supported across Linux, macOS, and Windows.

1. Install Miniconda

Miniconda helps create an isolated virtual environment and install the project's dependencies across platforms.

2. Download this repository

git clone https://github.com/scherroman/mugen

3. Create the project's virtual environment

conda env create --file mugen/environment.yml

4. Activate the virtual environment

conda activate mugen

Usage

Help Menu


mugen --help
mugen create --help
mugen preview --help

Use the above commands at any time to clarify the examples below and view the full list of available options.

By default, output files are sent to the desktop. This can be changed with the -od / --output-directory option.

Preview a music video


Create a quick preview of how your music video will be cut to the music with beeps and flashes.

It's common for beat timing to be a little off or too fast, so to save time it's recommended to generate and tweak previews beforehand to make sure the timing feels right.

mugen preview --audio-source Spazzkid_Goodbye.mp3

Slow down cuts to every other beat

--events-speed 1/2

Offset the grouping of beats when slowing down cuts

--events-speed 1/4 --events-speed-offset 2

Globally offset beat locations in seconds

--events-offset 0.25

Slow down cuts for leading and trailing weak beats

--beats-mode weak_beats --group-events-by-type --group-speeds 1/2 1 1/4

Control the speed of cuts for specific sections

--group-events-by-slices (0,23) (23,32) (32,95) (160,225) (289,321) (321,415) --group-speeds 1/2 0 1/4 1/2 1/2 1/4

Input event locations manually in seconds

--event-locations 2 4 6 10.5 11 12

Use onsets instead of beats

--audio-events-mode onsets

Create a music video


mugen create --audio-source MACINTOSH_PLUS_420.mp3 --video-sources TimeScapes.mkv

Use a series 60% of the time and a movie 40% of the time

--video-sources Neon_Genesis_Evangelion/ The_End_of_Evangelion.mkv --video-source-weights .6 .4
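Weighted selection like this can be modeled as a weighted random choice over the sources. A minimal sketch using the standard library (`random.choices` is the stdlib tool for this, not necessarily what mugen uses internally):

```python
import random
from collections import Counter

def pick_source(sources, weights):
    """Pick one video source according to the given weights."""
    return random.choices(sources, weights=weights, k=1)[0]

sources = ["Neon_Genesis_Evangelion/", "The_End_of_Evangelion.mkv"]
picks = Counter(pick_source(sources, [0.6, 0.4]) for _ in range(10_000))
# Over many samples, the series is chosen roughly 60% of the time
```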

Use all files and subdirectories under a directory

--video-sources Miyazaki/*

Use files that match a prefix

--video-sources Higurashi/S01*

Allow clips with cuts and repeat clips

--exclude-video-filters not_has_cut not_is_repeat

Use only clips that have text

--video-filters has_text

Save individual segments

To save all the segments that make up the music video as separate files:

--save-segments

To save all the rejected segments that did not pass filters as separate files:

--save-rejected-segments

These will be saved as .mp4 files in folders alongside the music video.

Python Usage

Preview a music video


from mugen import MusicVideoGenerator

generator = MusicVideoGenerator("Pogo - Forget.mp3")
beats = generator.audio.beats()
beats.speed_multiply(1/2)

preview = generator.preview_from_events(beats)
preview.write_to_video_file("forget-preview.mkv")

Create a music video


from mugen import MusicVideoGenerator

generator = MusicVideoGenerator("in love with a ghost - flowers.mp3", ["wolf children.mkv"])
beats = generator.audio.beats()
beat_groups = beats.group_by_slices([(0, 23), (23, 32), (32, 95), (160, 225), (289, 331), (331, 415)])
beat_groups.selected_groups.speed_multiply([1/2, 0, 1/4, 1/2, 1/2, 1/4])
beats = beat_groups.flatten()

music_video = generator.generate_from_events(beats)
music_video.write_to_video_file("flowers.mkv")
music_video.save("flowers.pickle")

Replace a segment in a music video


from mugen import VideoSource, SourceSampler, MusicVideo

music_video = MusicVideo.load("flowers.pickle")
wolf_children = VideoSource("wolf children.mkv", weight=.2)
spirited_away = VideoSource("spirited away.mkv", weight=.8)
sampler = SourceSampler([wolf_children, spirited_away])
music_video.segments[1] = sampler.sample(music_video.segments[1].duration)

music_video.write_to_video_file("flowers.mkv")

Preview a segment in a music video


from mugen import MusicVideo

music_video = MusicVideo.load("flowers.pickle")

''' Basic Previews (less smooth) '''

# Use a lower fps to reduce lag in playback
music_video.segments[1].preview(fps=10)

# Preview a frame at a specific time (seconds)
music_video.segments[1].show(.5)

''' Jupyter Notebook Previews (smoother) '''

music_video.segments[1].ipython_display(autoplay=1, loop=1, width=400)

# Preview a frame at a specific time (seconds)
music_video.segments[1].ipython_display(t=.5, width=400)

Notes

Subtitles

The videos generated by create and preview include a subtitle track that displays segment types, numbers, and locations.

Text detection

Currently, text detection uses the Tesseract optical character recognition engine, which has been trained mainly on documents with standard typefaces. Credit sequences with nonstandard or skewed fonts will likely not be detected. Tesseract may also occasionally falsely detect text in some images.

Troubleshooting

Progress is stuck

The most common reason progress gets stuck is that mugen is trying, but failing, to find any more segments from your video source(s) that pass the default video filters listed under mugen create --help. The not_is_repeat and not_has_cut filters in particular can cause this if your video source is especially short and/or has little to no time between scene changes. The former discards segments that have already been used, and the latter discards segments in which scene changes are detected. Try using one or more videos that are longer than your music, or disable the filters with --exclude-video-filters not_has_cut not_is_repeat.

Contributing

Thanks for considering contributing! To get started, see the contributing documentation for details on development setup and submitting pull requests.

mugen's People

Contributors

dependabot[bot], scherroman, tartrsn, tirkarthi


mugen's Issues

How to speedup?

Hello, is it possible to speed up music video generation?
Maybe by using the GPU instead of the CPU?
Thank you.

Weighting Controls

Allow the user to apply percentage weights to videos/sets of videos, to control how often each video is sampled from for the music video.

e.g. use a series (26 episodes) 50% of the time, and use a movie 50% of the time.

Currently, each video input is sampled equally often.

Option to label segments visually

I know new_mugen doesn't have 'recreate' yet, or even the json file, like 'original' mugen, but I'm posting this before I forget, because I realized how much time it would save me...

How I review my generated video now: make a video, watch it, realize I dislike a number of the random clips used and want to replace them, and now I have to recreate the video with some changes. So I have to figure out which clips they are. If I save segments, I can go in and find the videos manually, and this works... but it's annoying; I sometimes have to look at hundreds of thumbnails, pick out the ones I dislike, and try again...

It would be nice to have a way to generate 'debugging' videos where the clips are labeled with their numbers, so I could just pause the video, note the number, and then recreate the video telling it to replace that # (or #s). And once I'm happy with the resulting video, recreate it without the debugging overlay and the final video will be pristine, with no need to keep saving segments over and over.

Adding a text overlay in the upper corner (or wherever) with the clip number should be a fairly trivial tweak, right?

Chronological order

Testing the create code and it looks good so far.

One thing I noticed is that you are only passing a desired duration in getting a clip, and not 'start location' (ie where the new segment will go in the new video). The time the clip is going in is valuable for some potential filters/options.

One obvious example (and looking at the code I realized to implement it now would require cloning/extending a lot of code, which is why I'm posting now, when you could easily change this):
I have Music Video A (and matching audio for A as the audio source), and a variety of videos in directory B... currently, I can weight A and the B directory to create a video that uses as much of A as I like and splices in clips from B... but the clips of A will be random. I don't want random, I want sequential, as in the original A (so essentially, the new music video will be A plus clips from B on the beat, weighted in as desired). There needs to be a way to say "for A, don't go random; get the clip starting at time X (the current time in the video being generated) for the duration, but for B/etc., random is fine."

Or perhaps, differently, I want to grab clips sequentially from all, so that early clips are early from the videos and later clips later... and not have a clip from an ending of one of these videos too early in my new generated video.

I see an argument along the lines of "chronological" or just "order", with options being similar to weighting, per source, where the choices are

  • "strict" (use exact new video time for a clip of duration),
  • "loose" (pick a time relatively close to [new video time/total new video time] relative to [selected video length], for the video's duration). Example for clarity: we're in minute 2 of a 5-minute new video... pick a clip from video X roughly 2/5 of the way into video X, for duration Y. [Hopefully that's clear, and you see why that's useful - think of story-unfolding images, where random loses that arrow of direction.]
  • "random" the current default

So in my above example: with --video-sources A.mp4 OtherDirectory/B/ -vw 4 6 -order strict random, the resulting video would be like A 40% of the time, but 60% of the time pull clips from some video in B.

Or perhaps another good example, I have 2+ music videos for the same song, and I want to meld them together equally: -v A.mp4 B.mp4 -order strict strict

For the loose usage: imagine the B and C videos are all time-lapse videos (à la Timescapes, your example) or other sequential events... We don't want to jump around randomly for those; we want to show clips in an orderly but still random-ish way... telling a story visually, and showing items out of order (5, 1, 4, 2 as random) isn't as good as some order (1, 2, 4, 5): -v A.mp4 B.mp4 C.mp4 -order random loose loose.
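The "loose" mode described above could be sketched as picking a clip start near the proportional position in the source, with some jitter. This is purely an illustration of the proposal; the function name and the jitter window are made up:

```python
import random

def loose_clip_start(current_time, total_time, source_duration,
                     clip_duration, jitter=0.05):
    """Pick a clip start roughly (current_time / total_time) of the way
    into the source, jittered by +/- jitter * source_duration."""
    target = (current_time / total_time) * source_duration
    offset = random.uniform(-jitter, jitter) * source_duration
    start = target + offset
    # Clamp so the clip stays inside the source
    return min(max(start, 0), source_duration - clip_duration)

# Minute 2 of a 5-minute music video, sampling from a 10-minute source:
start = loose_clip_start(120, 300, 600, clip_duration=2)
# start lands near 240 seconds (2/5 of the source), +/- 30 seconds
```

Repeated calls with increasing `current_time` drift forward through the source, preserving the "arrow of direction" while staying random-ish.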

the progress_bar issue

Thanks for the code, firstly, but I constantly get the error TypeError: write_videofile() got an unexpected keyword argument 'progress_bar'. I don't know how to solve this issue.
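This error typically appears because newer versions of moviepy removed the progress_bar keyword from write_videofile (progress reporting moved to a logger parameter), so code written against an older moviepy raises this TypeError. One defensive workaround, sketched here against a stand-in clip object rather than a real moviepy clip, is to filter kwargs down to what the installed version actually accepts:

```python
import inspect

def write_videofile_compat(clip, filename, **kwargs):
    """Drop any kwargs the installed write_videofile doesn't accept."""
    accepted = inspect.signature(clip.write_videofile).parameters
    supported = {k: v for k, v in kwargs.items() if k in accepted}
    return clip.write_videofile(filename, **supported)

# Stand-in for a clip whose write_videofile lacks `progress_bar`
class FakeClip:
    def write_videofile(self, filename, fps=None, logger=None):
        return (filename, fps, logger)

result = write_videofile_compat(FakeClip(), "out.mkv",
                                fps=24, progress_bar=False)
# progress_bar is silently dropped; fps passes through
```

The cleaner fix is simply to delete the `progress_bar` argument from the call site when upgrading moviepy.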

Building wheel for Pillow (setup.py) ... error

When I run pip install -e mugen in anaconda I receive this message:

Building wheel for Pillow (setup.py) ... error

"The headers or library files could not be found for zlib,
a required dependency when compiling Pillow from source."

No module named 'dill'

Error message:

File "cli.py", line 10, in <module>
import bin.utility as cli_util
File "/home/grzana/Desktop/mugen-master/src/bin/utility.py", line 8, in <module>
import mugen.paths as paths
File "/home/grzana/Desktop/mugen-master/src/mugen/__init__.py", line 5, in <module>
from mugen.video.video_filters import VideoFilter
File "/home/grzana/Desktop/mugen-master/src/mugen/video/video_filters.py", line 4, in <module>
import mugen.video.detect as v_detect
File "/home/grzana/Desktop/mugen-master/src/mugen/video/detect.py", line 15, in <module>
from mugen.video.segments.VideoSegment import VideoSegment
File "/home/grzana/Desktop/mugen-master/src/mugen/video/segments/VideoSegment.py", line 8, in <module>
from mugen.video.segments.Segment import Segment
File "/home/grzana/Desktop/mugen-master/src/mugen/video/segments/Segment.py", line 11, in <module>
from mugen.mixins.Persistable import Persistable
File "/home/grzana/Desktop/mugen-master/src/mugen/mixins/Persistable.py", line 1, in <module>
from dill import dill
ImportError: cannot import name 'dill'

Using Ubuntu 18.04
I installed the dev env version, but the same error happens with every env type.
First I had error #17, but I fixed that, and now I have this one.

Getting started trouble

I ran these commands to get started, but whatever I try, mugen complains about a missing 'bin' module. Any ideas?

mkdir 3_try
cd 3_try
git clone https://github.com/scherroman/mugen.git
cd mugen
conda env create -f environment.yml
everything downloads...
source activate mugen
cd src/bin/
python cli.py --help
I receive this error:
Traceback (most recent call last):
File "cli.py", line 9, in <module>
import bin.constants as cli_c
ModuleNotFoundError: No module named 'bin'

Feature Request - Source time boundaries

Awesome job so far... adding items I've brainstormed and/or desired as I played with this more and more...

The ability to provide a time boundary for where to pull video from out of a source... so I might want to pull from Video X, but only between minutes 5 and 20, and not from the first 5 minutes or after minute 20. Some way to specify this per video (a filename.times file for each video; if it exists, use it?)
(multiple limits as file lines?
00:05:00-00:10:00
00:12:30-00:20:00

Why: without this, the only way to ensure unwanted clips aren't grabbed is to create a smaller 'preclipped' file, or to look at the clips and reject the wrongly picked ones.
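A .times file like the one proposed could be parsed with a few lines of stdlib Python. The file format and function names here come only from the proposal above; this is not an existing mugen feature:

```python
def to_seconds(timestamp):
    """Convert 'HH:MM:SS' to total seconds."""
    hours, minutes, seconds = (int(part) for part in timestamp.split(":"))
    return hours * 3600 + minutes * 60 + seconds

def parse_time_ranges(lines):
    """Parse lines like '00:05:00-00:10:00' into (start, end) seconds."""
    ranges = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        start_text, end_text = line.split("-")
        ranges.append((to_seconds(start_text), to_seconds(end_text)))
    return ranges

ranges = parse_time_ranges(["00:05:00-00:10:00", "00:12:30-00:20:00"])
# → [(300, 600), (750, 1200)]
```

A sampler could then restrict its random clip starts to fall inside one of these allowed ranges.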

Make text detection optional

Mugen should exclude text detection if tesserocr is not pip installed.
If tesserocr is pip installed, Mugen should use text detection by default.
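A common pattern for this kind of optional dependency is a guarded import. A sketch, where the filter names come from this README's usage examples and the module-level flag and function are illustrative, not mugen's actual API:

```python
try:
    import tesserocr  # optional OCR dependency
    TEXT_DETECTION_AVAILABLE = True
except ImportError:
    TEXT_DETECTION_AVAILABLE = False

def default_video_filters():
    """Enable the text filter only when the OCR engine is importable."""
    filters = ["not_has_cut", "not_is_repeat"]
    if TEXT_DETECTION_AVAILABLE:
        filters.append("not_has_text")
    return filters
```

With this shape, environments without tesserocr simply skip text detection instead of failing at import time.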

Feature suggestions

Playing with this... and having fun imagining uses for it.

Some suggestions:

  • Make Text detection optional
  • Add some visual beat option, or maybe a beat visualization layer?
  • allow for slight offset for beat/video sync (so that you can adjust to correct for visual/auditory lag) (ie appear to change just on the beat)
  • allow setting specific videos to only certain time windows (so you could ensure Video X is only at end, or Video Y won't be used after halfway point, etc...)
  • order video clips so that they are chronological (ie pick from beginning of videos at start, then pick later as song progresses...)
  • overlapping clips - similar to the way the Cup Song works, adding new non fullscreen clips to the beat...
  • stay nearby - pick the next clip nearby from the last one, for at least X clips, before switching video sources
  • ken burns the clips (zoom and pan)
  • leave clip audio intact (or mix with song?)

No module named 'scripts.cli'

(base) abcdeMacBook-Pro:mugen abc$ mugen
Traceback (most recent call last):
File "/Users/abc/anaconda3/bin/mugen", line 11, in <module>
load_entry_point('mugen', 'console_scripts', 'mugen')()
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 476, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
return ep.load()
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2318, in load
return self.resolve()
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2324, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
ImportError: No module named 'scripts.cli'

Feature request - Video source selection based on time

Consider this as another 'would be cool if...'

I have multiple folders of video sources... I want to have mugen use source A for the first minute, source B for the second minute, source C for the 3rd minute... etc. So the generated video wouldn't use anything from Source A after minute 1...

My workaround (I'm still using older mugen code for this, since no regen in new_mugen yet) is to generate multiple videos, and then edit the json specs into a single file, so that it'll make a new json with the desired source separation intact. And then regen a new video that does the above. Hacky but it works for now.

ModuleNotFoundError: No module named 'numpy'

I cannot get it to install, I've tried running the code on Windows 7 and Ubuntu 16.04. I get the same error

K:\mugen>conda env create -f environment.yml
Fetching package metadata ...........
Solving package specifications: .
Collecting cython>=0.25.2
Using cached Cython-0.26-cp36-none-win_amd64.whl
Collecting moviepy>=0.2.3.2
Using cached moviepy-0.2.3.2-py2.py3-none-any.whl
Collecting librosa>=0.5.0
Using cached librosa-0.5.1.tar.gz
Collecting Pillow>=3.4.2
Using cached Pillow-4.2.1-cp36-cp36m-win_amd64.whl
Collecting numpy>=1.12.0
Using cached numpy-1.13.1-cp36-none-win_amd64.whl
Collecting pysrt>=1.1.1
Using cached pysrt-1.1.1.tar.gz
Collecting tqdm>=4.10.0
Using cached tqdm-4.15.0-py2.py3-none-any.whl
Collecting decorator>=4.0.11
Using cached decorator-4.1.2-py2.py3-none-any.whl
Collecting dill>=0.2.7.1
Using cached dill-0.2.7.1.tar.gz
Collecting imageio==2.1.2 (from moviepy>=0.2.3.2)
Using cached imageio-2.1.2.zip
Collecting audioread>=2.0.0 (from librosa>=0.5.0)
Using cached audioread-2.1.5.tar.gz
Collecting scipy>=0.13.0 (from librosa>=0.5.0)
Using cached scipy-0.19.1.tar.gz
Collecting scikit-learn>=0.14.0 (from librosa>=0.5.0)
Using cached scikit_learn-0.19.0-cp36-cp36m-win_amd64.whl
Collecting joblib>=0.7.0 (from librosa>=0.5.0)
Using cached joblib-0.11-py2.py3-none-any.whl
Collecting six>=1.3 (from librosa>=0.5.0)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting resampy>=0.1.2 (from librosa>=0.5.0)
Using cached resampy-0.1.5.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\SamM\AppData\Local\Temp\pip-build-18o9g51f\resampy\setup.py", line 6, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in C:\Users\SamM\Ap
pData\Local\Temp\pip-build-18o9g51f\resampy\

CondaValueError: pip returned an error.

Creating mugen conda environment fails

Creating the mugen conda environment with "conda env create -f environment.yml", fails while collecting tesserocr with "ImportError: No module named Cython.Distutils".

Feature request - Photos instead of video

It would be great if there was a way to import photos / images as well as videos and cut on the beat.

Is this something that has been considered for the project?

Stuck at 45/46

Am I doing something wrong here?
It looks like I configured it in such a way that there's one too few video events available, but I'm not sure if I just did it badly.

I'm stuck in a loop that I tried to capture below (sorry for the duplicate sub-progress bar); it just does the sub-task that has 398 items over and over again.
I've been here for 10 minutes, but it feels like I've been here all year. send for help.

(mugen) bash-3.2$ mugen create -a bright-lights.wav -vn LostInTranslation.mkv -ss -es 1/8 -aem onsets -bm weak_beats -v output.mkv

Weights
------------
output: 100.00%

Analyzing audio...

Events:
[<EventList 0-44 (45), type: Onset, selected: False>]

Generating music video from video segments and audio...
 98%|███████████████████████████████████▏| 45/46 [02:13<00:01,  1.94s/it]

t:   0%|                               | 0/398 [00:00<?, ?it/s, now=None]
t:  93%|██████████████████▋ | 372/398 [00:03<00:00, 111.91it/s, now=None]

Container ?

Description

I would like to run mugen in a container.

Rationale

Because it would be much faster and simpler to run. It could also prompt thinking about how to scale the service up (scale a Kubernetes deployment to 20 pods, have each of the 20 pods process 2 seconds of the video, then put everything back together and return it to the request issuer).

Alternatives

running conda in a Virtual Machine
