
cyfeng16 / mvimp


Mixed Video and Image Manipulation Program

License: GNU General Public License v3.0

Python 100.00%
animegan dain colab 3d-photo-inpainting deoldify 1080p 3d-photo video waifu2x-ncnn-vulkan vt-vl-lab

mvimp's Introduction

The Hard Thing About Hard Things!

Hi there, I am CyFeng16. I am currently researching and designing an AutoML system for the medical field, with which medical researchers can quickly verify various possible medical AI solutions.

Our ultimate goal is to control medical expenses, reduce costs, and improve universal medical coverage through cross-disciplinary cooperation.

Github Stats

 Total Stars                                64
 Total Commits(2022)                       382
 Total Pull Requests                        53
 Total Issues                               28
 Contributed To                              3
 Total Repositories                         38

Most Used Language

 Jupyter Notebook █████████████████░░░░  80.8%
 Python           ████░░░░░░░░░░░░░░░░░  17.9%
 Stylus           ░░░░░░░░░░░░░░░░░░░░░   0.5%
 JavaScript       ░░░░░░░░░░░░░░░░░░░░░   0.5%
 Starlark         ░░░░░░░░░░░░░░░░░░░░░   0.2%
 Dockerfile       ░░░░░░░░░░░░░░░░░░░░░   0.1%

Commit stats

 Morning     0 commits   ░░░░░░░░░░░░░░░   0.0%
 Daytime     0 commits   ░░░░░░░░░░░░░░░   0.0%
 Evening     2 commits   ███████████████ 100.0%
 Midnight    0 commits   ░░░░░░░░░░░░░░░   0.0%

Recently Pushed

 cpmmi(dependabot/pip/pillow-9.3.0)   1 file  11/15/2022
 CyFeng16(main)                       1 file  11/15/2022
 tjeh-oc-sc(main)                    12 files  9/24/2022
 automl4sheets(main)                  2 files  9/19/2022

mvimp's People

Contributors

cyfeng16, meguerreroa, re2cc


mvimp's Issues

Need feedback for test branch

Suggestion:
Make it so we can DAIN-interpolate multiple videos in a row.
Example: put multiple videos in the input folder and have DAIN interpolate them all, one by one, automatically.

*I have tried doing this, but I'm not a Python expert, and I'm still struggling to change what's needed for this to work.

This is useful for large videos: we can split them into parts first, so that on Colab, even if we hit the 12-hour session limit, we don't lose all progress.

It also helps when we split the video into scenes beforehand using scene-change detection. (I'm still trying to add automatic scene detection to the repo; for now I'm using PySceneDetect on my local computer to split the video into scenes first.)

Originally posted by @Brokensilence in #2 (comment)
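For reference, a minimal sketch of such a batch loop, assuming the Colab layout where MVIMP lives in /content/MVIMP, videos are staged one at a time in Data/Input, and inference_dain.py is invoked per video; the queue folder and flag values here are illustrative, not the repo's own code:

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical locations; adjust to your MVIMP checkout.
REPO = Path("/content/MVIMP")
QUEUE = REPO / "Data" / "queue"    # videos waiting to be processed
INPUT = REPO / "Data" / "Input"    # MVIMP expects one video here at a time

for video in sorted(QUEUE.glob("*.mp4")):
    # Clear the input folder, then stage the next video.
    for old in INPUT.iterdir():
        if old.is_file():
            old.unlink()
        else:
            shutil.rmtree(old)
    shutil.copy(video, INPUT / video.name)

    # Run DAIN on this video; outputs land in Data/Output.
    subprocess.run(
        ["python3", "inference_dain.py",
         "--input_video", video.name, "--time_step", "0.5"],
        cwd=REPO, check=True,
    )
```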

MVIMP-Photo3D-Demo missing Loader???

I don't know what this is, but it's happening on almost all the 3D inpainting Colabs I've tried.
Can anyone suggest a fix?

Current PyTorch version is 1.12.1+cu113
Current configuration is :
fps: 30
num_frames: 240
longer_side_len: 960
Traceback (most recent call last):
File "main.py", line 30, in
config = yaml.load(open(args.config, "r"))
TypeError: load() missing 1 required positional argument: 'Loader'
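For reference, this error comes from newer PyYAML versions requiring an explicit Loader argument in yaml.load(). A minimal sketch of the usual fix follows; the config path is a placeholder for args.config in main.py:

```python
import yaml

config_path = "argument.yml"  # placeholder for args.config in main.py

# yaml.load() requires an explicit Loader argument in recent PyYAML;
# yaml.safe_load() is the usual drop-in replacement for plain configs.
with open(config_path, "r") as f:
    config = yaml.safe_load(f)
```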

Interpolated Frames turn out Pixelated

Hello again.
I finally had some time and managed to get the anime dupe-only interpolation working, the thing I mentioned in #4 (you can see it on my fork). It could use some improvements, but it's working :)

So now I'm left with solving the pixelation that happens on 90% of interpolated frames. It's very noticeable in fast-moving scenes.

I did some testing with the original DAIN repo today, and the pixelation doesn't happen (it doesn't happen with DainApp either). There are artifacts, of course, but no pixelation.
So I'm inclined to believe something is wrong with your DAIN interpolation code.
I took a look, but I'm afraid it's too much for my noob self. :)

I used the original DAIN with the latest Colab file they provided.

Here are some examples of what I'm talking about:

Original DAIN (notice the artifacts):
[screenshot]

And this is using MVIMP (notice how pixelated everything looks; also the weapon fire effect is a green-blue color for some strange reason):
[screenshot]

Here's another example from another video:
[screenshot]

Here's a video where you can actually tell without going frame by frame: https://www.youtube.com/watch?v=Qa54A1-FOWY
*You can use the , and . keys on YouTube to go frame by frame.

And here's a video using the original DAIN: https://www.youtube.com/watch?v=sN1VA-gg7yM (no pixelation)

I understand artifacts will always happen, but artifacts plus pixelation make it much worse.
I'd love to solve this problem so I can start interpolating some anime; the pixelation is even worse on anime.

*The pixelation happens whether I use high-resolution splitting or not, so that's not it.

Any help would be greatly appreciated.
Thank you.

Possibly unnecessary compilation of waifu2x

Well, when I wanted to add waifu2x to the list of models, there were no Linux binaries for waifu2x-ncnn-vulkan, but, almost as if the developer was watching us, Linux binaries have now been released.
I tested it in Colab and it works... I don't think we should remove it from the list, but maybe we should consider just downloading the binary directly; the only thing that needs to be installed separately is the Vulkan SDK.

Installation of the Vulkan SDK:

wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo apt-key add -
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-1.2.135-bionic.list http://packages.lunarg.com/vulkan/1.2.135/lunarg-vulkan-1.2.135-bionic.list
sudo apt update
sudo apt install vulkan-sdk

The latest Linux release is available here.

... This is sad.
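For reference, a rough sketch of the "just download it" approach, assuming the Vulkan SDK from above is already installed. The release tag, asset name, and extracted folder below are illustrative; check the project's releases page for the current ones:

```python
import os
import stat
import subprocess
import urllib.request
import zipfile

# Illustrative tag/asset names; check the waifu2x-ncnn-vulkan releases page.
TAG = "20200606"
ASSET = f"waifu2x-ncnn-vulkan-{TAG}-linux.zip"
URL = f"https://github.com/nihui/waifu2x-ncnn-vulkan/releases/download/{TAG}/{ASSET}"

urllib.request.urlretrieve(URL, ASSET)
with zipfile.ZipFile(ASSET) as zf:
    zf.extractall(".")

binary = f"./waifu2x-ncnn-vulkan-{TAG}-linux/waifu2x-ncnn-vulkan"
os.chmod(binary, os.stat(binary).st_mode | stat.S_IEXEC)

# Upscale input.png 2x with denoise level 2 (flags per the project's README).
subprocess.run([binary, "-i", "input.png", "-o", "output.png",
                "-n", "2", "-s", "2"], check=True)
```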

Regarding Compiling DAIN on Tesla V100 and RTX line

Hello, just wanted to take this opportunity to mention that what you're doing is amazing! I really appreciate how easy you've made it to use and work with these algorithms.
When I was using preparation.py to build the dependencies for DAIN, it kept failing.

I noticed that this is because, in the setup.py for the custom ops, if you're compiling for GPUs like the V100 and the RTX series, the nvcc_args should be:

nvcc_args = [
    "-gencode", "arch=compute_50,code=sm_50",
    "-gencode", "arch=compute_52,code=sm_52",
    "-gencode", "arch=compute_60,code=sm_60",
    "-gencode", "arch=compute_61,code=sm_61",
    "-gencode", "arch=compute_70,code=sm_70",
    "-gencode", "arch=compute_75,code=sm_75",
]

This can be easily added to setup.py of each op, and I've made the changes on my end. I assume other users might benefit if they come across the same error.
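For anyone who would rather not hard-code the list, a hedged alternative (assuming PyTorch is importable at build time) is to derive the -gencode flag from the GPU that is actually visible:

```python
import torch

# Query the compute capability of the current GPU, e.g. (7, 0) on a V100
# or (7, 5) on an RTX 20xx card, and build the matching -gencode flag.
major, minor = torch.cuda.get_device_capability(0)
arch = f"{major}{minor}"
nvcc_args = ["-gencode", f"arch=compute_{arch},code=sm_{arch}"]
print(nvcc_args)
```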

Thanks once again for this project!

Anime DAIN and 2/3-dupe

BrokenSilence's thinking is mostly correct, and I'm with him/her. I did some homework recently; let's first talk about why anime uses 2-dupe or 3-dupe.

Hand-drawn animation is expensive, and a typical (Japanese) animation studio accepts the 2-dupe trade-off, which means the 24 fps animation we see is actually 12 fps.

3-dupe was used first (search てづか おさむ, i.e. Osamu Tezuka) to further reduce drawing costs (storyboards) and keep up with release schedules, dropping the actual frame rate to 8 fps. So the question we have to think about is: is enhancing an 8 fps anime to 60 fps of actual rendered frames really the same problem we expected?

Anyway, time continuity is not a big issue for me. Just set the DAIN rate as follows.

Let us make an assumption:

  • Input: 3-dupe anime (24 fps)
  • Needed output: no-dupe anime (60 fps)

The DAIN rate works out to 60 / (24 / 3) = 7.5, and we round it up to 8. Then we evenly cut out 30 frames from the generated frames (60 * (8 - 7.5)). We've solved the problem with only a slight flaw.
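A small sketch of that arithmetic, under the same assumptions:

```python
import math

source_fps = 24      # broadcast frame rate of the 3-dupe anime
dupe = 3             # each drawing is held for 3 frames
target_fps = 60      # desired output frame rate

unique_fps = source_fps / dupe          # 8 unique drawings per second
dain_rate = target_fps / unique_fps     # 7.5
dain_rate_used = math.ceil(dain_rate)   # rounded up to 8, as above

# Interpolating at 8x overshoots 60 fps slightly, so the surplus generated
# frames are then cut out evenly, as described above.
print(unique_fps, dain_rate, dain_rate_used)
```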

Any discussion is welcome.

Resizing problem in DAIN?

Hi, I was using your MVIMP DAIN notebook in Google Colab to interpolate a short GIF:
Original
I used a 0.25 time_step (the original has 8.33 fps, so the interpolated result should be 33.32 fps) and I got this:
Result in MVIMP Colab
It looks like strobing between frames of different sizes.

I tried nearly the same thing (to 30 fps) in this notebook and got this, with less "strobing" (but still some). Looking into that notebook I found a "RESIZE HOTFIX" method where the interpolated frames are upscaled slightly (apparently, in the interpolation process, they come out a bit smaller than the originals). I also found that there are two interpolation methods, INTER_CUBIC and INTER_LANCZOS4.

My question is: could there be a "universal" method to solve this problem, like an algorithm? Is it because of those two interpolation methods? Or maybe it's something I'm doing wrong (codecs?), or a problem with DAIN itself.

Thanks in advance.
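For reference, the "RESIZE HOTFIX" described above amounts to scaling each interpolated frame back to the source dimensions before encoding; a minimal OpenCV sketch of that idea follows (the file names are illustrative):

```python
import cv2

reference = cv2.imread("original_frame.png")        # a source frame
interpolated = cv2.imread("interpolated_frame.png")  # a DAIN output frame

h, w = reference.shape[:2]
# Upscale the slightly smaller interpolated frame back to the source size;
# INTER_CUBIC and INTER_LANCZOS4 are the two methods mentioned above.
fixed = cv2.resize(interpolated, (w, h), interpolation=cv2.INTER_LANCZOS4)
cv2.imwrite("interpolated_frame_fixed.png", fixed)
```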

waifu2x doesn't work on TPU

I tried to run MVIMP_Waifu2x-ncnn-Vulkan_Demo.ipynb on Google Colab with a TPU and it's much faster than on the GPU (5.00 s/it vs 15.00 it/s), but afterwards the Output directory is empty. Is there a way to run it normally on a TPU?

DAIN inference_dain.py issues .

Hi.
I wanted to use DAIN in the Colab notebook and I ran into an issue.
I followed this link and did it.
It went smoothly at first: I downloaded the video, then deleted the video manually, and when I tried to run it a second time I hit a problem in inference_dain.py.
I did find a workaround by terminating the notebook and repeating the installation, but I'd rather not.

Current PyTorch version is 1.0.0
ffmpeg -hide_banner -loglevel warning -threads 4 -i /content/MVIMP/Data/Input/test.mp4 /content/MVIMP/Data/Input/%8d.png
The video-image extracting job is done.
Traceback (most recent call last):
File "inference_dain.py", line 58, in
os.remove(os.path.join(input_data_dir, input_file))
IsADirectoryError: [Errno 21] Is a directory: '/content/MVIMP/Data/Input/.ipynb_checkpoints'

Please respond to my request.
Thank you.
This is my first time reporting something on GitHub.
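For reference, the traceback points at os.remove() hitting the .ipynb_checkpoints directory that Colab creates inside the input folder; a hedged cleanup loop that handles directories as well would avoid it (the path is illustrative, not the repo's exact code):

```python
import os
import shutil

input_data_dir = "/content/MVIMP/Data/Input"

for input_file in os.listdir(input_data_dir):
    path = os.path.join(input_data_dir, input_file)
    if os.path.isdir(path):
        # Colab drops .ipynb_checkpoints/ into this folder; os.remove()
        # cannot delete a directory, hence the IsADirectoryError above.
        shutil.rmtree(path)
    else:
        os.remove(path)
```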

Unable to git clone Waifu2x

Yesterday I tried Waifu2x in the Colab notebook and it worked, but today when I tried it I got a fatal error.

fatal: Remote branch waifu2x-ncnn-vulkan not found in upstream origin

Can you fix it, please?

DAIN: "no kernel image is available for execution on the device"

When I try DAIN on a video file, I receive the following:

(By the way, will or does your version of DAIN support adaptive record timestamps like the DAIN APP? Reference: https://imgur.com/a/7ihS2ir)

Current PyTorch version is 1.0.0
ffmpeg -hide_banner -loglevel warning -threads 4 -i /content/MVIMP/Data/Input/danny.mp4 /content/MVIMP/Data/Input/%8d.png
The video-image extracting job is done.

--------------------SUMMARY--------------------
Current input video file is danny.mp4,
danny.mp4's fps is 29.97,
danny.mp4 has 6211 frames.
Now we will process this video to 59.94 fps.
Frame split method will not be used.
--------------------NOW END--------------------

python3 -W ignore vfi_helper.py --src /content/MVIMP/Data/Input --dst /content/MVIMP/Data/Output --time_step 0.5
revise the unique id to a random numer 33628
Namespace(SAVED_MODEL='./model_weights/best.pth', alpha=[0.0, 1.0], arg='./model_weights/33628-Sun-Sep-06-02:23/args.txt', batch_size=1, channels=3, ctx_lr_coe=1.0, datasetName='Vimeo_90K_interp', datasetPath='', dataset_split=97, debug=False, depth_lr_coe=0.001, dst='/content/MVIMP/Data/Output', dtype=<class 'torch.cuda.FloatTensor'>, epsilon=1e-06, factor=0.2, filter_lr_coe=1.0, filter_size=4, flow_lr_coe=0.01, force=False, high_resolution=False, log='./model_weights/33628-Sun-Sep-06-02:23/log.txt', lr=0.002, netName='DAIN_slowmotion', no_date=False, numEpoch=100, occ_lr_coe=1.0, patience=5, rectify_lr=0.001, save_path='./model_weights/33628-Sun-Sep-06-02:23', save_which=1, seed=1, src='/content/MVIMP/Data/Input', time_step=0.5, uid=None, use_cuda=True, use_cudnn=1, weight_decay=0, workers=8)
cudnn is used
Interpolate 1 frames
The model weight is: ./model_weights/best.pth
************** current handling frame from /content/MVIMP/Data/Input. **************
************** current time_step is 0.5 **************
************** current output_dir is /content/MVIMP/Data/Output **************
************** high resolution method not used. **************
0% 0/6210 [00:00<?, ?it/s]error in correlation_forward_cuda_kernel: no kernel image is available for execution on the device
0% 0/6210 [00:04<?, ?it/s]
Traceback (most recent call last):
File "vfi_helper.py", line 204, in
input_dir=args.src, output_dir=args.dst, time_step=args.time_step,
File "vfi_helper.py", line 45, in continue_frames_insertion_helper
time_step=time_step,
File "vfi_helper.py", line 77, in frames_insertion_helper
y_0 = model_inference_helper(im_0, im_1)
File "vfi_helper.py", line 150, in model_inference_helper
y_s, _, _ = model(torch.stack((x_0, x_1), dim=0))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/content/MVIMP/third_party/DAIN/networks/DAIN_slowmotion.py", line 170, in forward
self.flownets, cur_offset_input, time_offsets=time_offsets
File "/content/MVIMP/third_party/DAIN/networks/DAIN_slowmotion.py", line 268, in forward_flownets
input
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/content/MVIMP/third_party/DAIN/PWCNet/PWCNet.py", line 241, in forward
corr6 = self.corr(c16, c26)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/content/MVIMP/third_party/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 106, in forward
)(input1, input2)
File "/content/MVIMP/third_party/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 45, in forward
self.corr_multiply,
RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f429e43bfe1 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f429e43bdfa in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #2: correlation_forward_cuda(at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, int, int, int, int, int, int) + 0x624 (0x7f429b008ba4 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: + 0x1556a (0x7f429b01456a in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #4: + 0x12767 (0x7f429b011767 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #5: python3() [0x50a7f5]

frame #8: python3() [0x594b01]
frame #10: THPFunction_do_forward(THPFunction*, _object*) + 0x15c (0x7f42d876dbdc in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #12: python3() [0x54ac61]
frame #14: python3() [0x50a783]
frame #17: python3() [0x594b01]
frame #20: python3() [0x507f24]
frame #22: python3() [0x594b01]
frame #23: python3() [0x54ac61]
frame #25: python3() [0x50a783]
frame #27: python3() [0x507f24]
frame #29: python3() [0x594b01]
frame #32: python3() [0x507f24]
frame #34: python3() [0x594b01]
frame #35: python3() [0x54ac61]
frame #37: python3() [0x50a783]
frame #39: python3() [0x507f24]
frame #40: python3() [0x509c50]
frame #41: python3() [0x50a64d]
frame #43: python3() [0x507f24]
frame #45: python3() [0x594b01]
frame #48: python3() [0x507f24]
frame #50: python3() [0x594b01]
frame #51: python3() [0x54ac61]
frame #53: python3() [0x50a783]
frame #55: python3() [0x507f24]
frame #56: python3() [0x509c50]
frame #57: python3() [0x50a64d]
frame #59: python3() [0x507f24]
frame #60: python3() [0x509c50]
frame #61: python3() [0x50a64d]
frame #63: python3() [0x507f24]

ffmpeg -hide_banner -loglevel warning -threads 4 -r 59.94 -f image2 -i /content/MVIMP/Data/Input/%10d.png -y -c:v libx264 -preset slow -crf 8 /content/MVIMP/Data/Output/danny-59.94.mp4
[png @ 0x559727e3c800] Invalid PNG signature 0x1A45DFA301000000.
Error while decoding stream #0:0: Invalid data found when processing input
The image-video fusion job is done.

Future Features

This issue will stay open to collect requested features and to discuss the priority in which features get merged.
Feel free to share your ideas; everyone is welcome to join and contribute.

Colab inference_dain error

I'm getting this error when running the colab inference_dain command.
VIDIOC_REQBUFS: Inappropriate ioctl for device

This is the console output:

Current PyTorch version is 1.4.0+cu100
VIDIOC_REQBUFS: Inappropriate ioctl for device
Traceback (most recent call last):
  File "inference_dain.py", line 50, in <module>
    raise FileNotFoundError("You need more than 2 frames in the video to generate insertion.")
FileNotFoundError: You need more than 2 frames in the video to generate insertion.

The video file I have definitely has more than 2 frames.
Something isn't working properly when it tries to detect the fps of the input video.
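MVIMP's actual probing code isn't shown in the report, but a typical OpenCV-based probe looks like the sketch below; if the file can't be opened (or the wrong backend is picked), the reported fps and frame count come back as 0, which would trip the "more than 2 frames" check:

```python
import cv2

cap = cv2.VideoCapture("/content/MVIMP/Data/Input/test.mp4")  # illustrative path
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

# Both values are 0 when the container cannot be opened correctly.
print(fps, frame_count)
```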

colab DAIN THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=405 error=11 : invalid argument

The video is 1920x1080 at 60 fps,
using the arguments: !python3 inference_dain.py --input_video test.mp4 --time_step 0.5 -hr

ffmpeg -hide_banner -loglevel warning -threads 4 -i /content/MVIMP/Data/Input/test.mp4 /content/MVIMP/Data/Input/%8d.png
The video-image extracting job is done.

--------------------SUMMARY--------------------
Current input video file is test.mp4,
test.mp4's fps is 30.00,
test.mp4 has 14854 frames.
Now we will process this video to 60.0 fps.
Frame split method will be used.
--------------------NOW END--------------------


python3 -W ignore vfi_helper.py --src /content/MVIMP/Data/Input --dst /content/MVIMP/Data/Output --time_step 0.5 --high_resolution 
revise the unique id to a random numer 50445
Namespace(SAVED_MODEL='./model_weights/best.pth', alpha=[0.0, 1.0], arg='./model_weights/50445-Wed-Oct-28-22:05/args.txt', batch_size=1, channels=3, ctx_lr_coe=1.0, datasetName='Vimeo_90K_interp', datasetPath='', dataset_split=97, debug=False, depth_lr_coe=0.001, dst='/content/MVIMP/Data/Output', dtype=<class 'torch.cuda.FloatTensor'>, epsilon=1e-06, factor=0.2, filter_lr_coe=1.0, filter_size=4, flow_lr_coe=0.01, force=False, high_resolution=True, log='./model_weights/50445-Wed-Oct-28-22:05/log.txt', lr=0.002, netName='DAIN_slowmotion', no_date=False, numEpoch=100, occ_lr_coe=1.0, patience=5, rectify_lr=0.001, save_path='./model_weights/50445-Wed-Oct-28-22:05', save_which=1, seed=1, src='/content/MVIMP/Data/Input', time_step=0.5, uid=None, use_cuda=True, use_cudnn=1, weight_decay=0, workers=8)
cudnn is used
Interpolate 1 frames
The model weight is: ./model_weights/best.pth
************** current handling frame from /content/MVIMP/Data/Input. **************
************** current time_step is 0.5 **************
************** current output_dir is /content/MVIMP/Data/Output **************
************** high resolution method used. **************
  0% 0/14853 [00:00<?, ?it/s]THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=405 error=11 : invalid argument
  0% 0/14853 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "vfi_helper.py", line 204, in <module>
    input_dir=args.src, output_dir=args.dst, time_step=args.time_step,
  File "vfi_helper.py", line 45, in continue_frames_insertion_helper
    time_step=time_step,
  File "vfi_helper.py", line 81, in frames_insertion_helper
    ym_0_0 = model_inference_helper(im_0[:, 0::2, 0::2], im_1[:, 0::2, 0::2])
  File "vfi_helper.py", line 150, in model_inference_helper
    y_s, _, _ = model(torch.stack((x_0, x_1), dim=0))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/MVIMP/third_party/DAIN/networks/DAIN_slowmotion.py", line 138, in forward
    (cur_filter_input[:, :3, ...], cur_filter_input[:, 3:, ...]), dim=0
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuda runtime error (11) : invalid argument at /pytorch/aten/src/THC/THCGeneral.cpp:405
ffmpeg -hide_banner -loglevel warning -threads 4 -r 60.0 -f image2 -i /content/MVIMP/Data/Input/%10d.png -y -c:v libx264 -preset slow -crf 8 /content/MVIMP/Data/Output/test-60.0.mp4
The image-video fusion job is done.

Is waifu2x the right choice?

Hi, I know I'm the one who suggested waifu2x, but I've been doing some research and came across some projects based on it.
Essentially they look the same, but the speed and ease of installation are better.
I think the best option would be Dandere2x. It is a tool that bundles multiple "editions" of waifu2x; besides that, it can work directly on videos, and it seems to be quite fast. Honestly, I don't know whether there is a way to run it directly from a terminal, but I don't think it would be hard to adapt it to do so.
I was unable to compile it; the available binaries are built entirely for Windows. There is a guide for compiling on Linux, but the truth is I couldn't get it running; either my English and Linux knowledge are terrible, or the documentation is unclear. The furthest I got was compiling waifu2x-ncnn-vulkan, but in the end my notebook wasn't able to run it; I get an error.
I'm sorry for mentioning this so late, but I only found it very recently. I hope you can consider it, or tell me if there is something wrong with what I'm saying.
Thanks.

/content/MVIMP/Data/Input/%10d.png: No such file or directory

Using a test video you provided, I got this error.

Current PyTorch version is 1.0.0
ffmpeg -hide_banner -loglevel warning -threads 4 -i /content/MVIMP/Data/Input/test.mp4 /content/MVIMP/Data/Input/%8d.png
The video-image extracting job is done.

--------------------SUMMARY--------------------
Current input video file is test.mp4,
test.mp4's fps is 23.98,
test.mp4 has 575 frames.
Now we will process this video to 47.96 fps.
Frame split method will be used.
--------------------NOW END--------------------


python3 -W ignore vfi_helper.py --src /content/MVIMP/Data/Input --dst /content/MVIMP/Data/Output --time_step 0.5 --high_resolution 
revise the unique id to a random numer 29614
Namespace(SAVED_MODEL='./model_weights/best.pth', alpha=[0.0, 1.0], arg='./model_weights/29614-Sun-Jun-21-17:33/args.txt', batch_size=1, channels=3, ctx_lr_coe=1.0, datasetName='Vimeo_90K_interp', datasetPath='', dataset_split=97, debug=False, depth_lr_coe=0.001, dst='/content/MVIMP/Data/Output', dtype=<class 'torch.cuda.FloatTensor'>, epsilon=1e-06, factor=0.2, filter_lr_coe=1.0, filter_size=4, flow_lr_coe=0.01, force=False, high_resolution=True, log='./model_weights/29614-Sun-Jun-21-17:33/log.txt', lr=0.002, netName='DAIN_slowmotion', no_date=False, numEpoch=100, occ_lr_coe=1.0, patience=5, rectify_lr=0.001, save_path='./model_weights/29614-Sun-Jun-21-17:33', save_which=1, seed=1, src='/content/MVIMP/Data/Input', time_step=0.5, uid=None, use_cuda=True, use_cudnn=1, weight_decay=0, workers=8)
cudnn is used
Interpolate 1 frames
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=51 error=38 : no CUDA-capable device is detected
Traceback (most recent call last):
  File "vfi_helper.py", line 179, in <module>
    training=False,
  File "/content/MVIMP/third_party/DAIN/networks/DAIN_slowmotion.py", line 51, in __init__
    self.flownets = PWCNet.__dict__["pwc_dc_net"]()
  File "/content/MVIMP/third_party/DAIN/PWCNet/PWCNet.py", line 566, in pwc_dc_net
    model = PWCDCNet()
  File "/content/MVIMP/third_party/DAIN/PWCNet/PWCNet.py", line 166, in __init__
    xx = torch.arange(0, W_MAX).view(1, -1).cuda().repeat(H_MAX, 1)
  File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 162, in _lazy_init
    torch._C._cuda_init()
RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:51
ffmpeg -hide_banner -loglevel warning -threads 4 -r 47.96 -f image2 -i /content/MVIMP/Data/Input/%10d.png -y -c:v libx264 -preset slow -crf 8 /content/MVIMP/Data/Output/test-47.96.mp4
[image2 @ 0x55a31a2fa000] Could find no file with path '/content/MVIMP/Data/Input/%10d.png' and index in the range 0-4
/content/MVIMP/Data/Input/%10d.png: No such file or directory
The image-video fusion job is done.

It says the image-video fusion job is done, but there is nothing in MVIMP/Data/Output.
