
nas_public's People

Contributors

chaos5958, jaykim305


nas_public's Issues

I seem to have "BrokenPipeError" when I run test_nas_video_process.py

I have the error log:
model_loading [elapsed] : 0.03586411476135254sec
encode [after video info]: 1.9073486328125e-06sec
encode [start]: 0.046860694885253906sec
decode [configuration]: 0.050164222717285156sec
decode [video read prepare]: 0.06549692153930664sec
decode [prepare_frames-96]: 0.3793065547943115sec
Process Process-3:
Traceback (most recent call last):
File "/home/docker/miniconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/docker/miniconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/docker/workspace/NAS_public/process.py", line 366, in encode
pipe.stdin.flush()
BrokenPipeError: [Errno 32] Broken pipe
decode [super-resolution]: 2.427304744720459sec

How can I solve this problem, please?
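
For what it's worth, a BrokenPipeError on pipe.stdin.flush() usually means the encoder subprocess on the other end of the pipe (e.g. ffmpeg) has already exited. A minimal, illustrative diagnostic sketch, not the repository's code and with placeholder ffmpeg arguments, that surfaces the encoder's stderr so you can see why the pipe closed:

import subprocess

# Illustrative command only; the real pipeline uses different arguments.
cmd = ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "rgb24",
       "-s", "1920x1080", "-r", "30", "-i", "-", "out.mp4"]
pipe = subprocess.Popen(cmd, stdin=subprocess.PIPE, stderr=subprocess.PIPE)

frame = bytes(1920 * 1080 * 3)  # one dummy black RGB frame
try:
    pipe.stdin.write(frame)
    pipe.stdin.flush()
except BrokenPipeError:
    # The encoder died early; its stderr explains the real cause
    # (bad arguments, unsupported pixel format, missing codec, ...).
    print(pipe.stderr.read().decode(errors="replace"))
finally:
    try:
        pipe.stdin.close()
    except (BrokenPipeError, OSError):
        pass
    pipe.wait()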

CUDA error: out of memory, and other errors

When I run the command python test_nas_video_process.py --quality [quality level] --data_name [dataset name] --use_cuda --load_on_memory, I get the error shown in fig.1, and I checked the graphics card: it is out of memory. So I made the following adjustments to the code files:
1. option.py
parser.add_argument("--num_batch", type=int, default=64) to num_batch=1
2. process.py
MAX_FPS = 30 to MAX_FPS = 10
MAX_SEGMENT_LENGTH = 4 to MAX_SEGMENT_LENGTH = 1
3. test_nas_video_process.py
MAX_FPS = 30 to MAX_FPS = 10
MAX_SEGMENT_LENGTH = 4 to MAX_SEGMENT_LENGTH = 1

But a new error is still reported, shown in fig.2.

It seems that too many processes are spawned, which causes the error.
Could you give me some advice? Thanks.
[fig.1 and fig.2: screenshots of the two errors]

My machine is:
CUDA 11.1
torch 1.9.0 (pip installed)
RTX 2060 6 GB

Looking forward to your reply!
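
For reference, a minimal sketch of running SR inference with low GPU memory. This is not the repository's code and the model and tensor names are hypothetical; the idea is simply to process frames in very small batches under torch.no_grad() and move each result to the CPU right away:

import torch

def super_resolve_frames(model, frames, batch_size=1, device="cuda"):
    # frames: float tensor of shape (N, C, H, W) in [0, 1]
    model = model.to(device).eval()
    outputs = []
    with torch.no_grad():  # no autograd buffers, so peak memory stays low
        for start in range(0, frames.size(0), batch_size):
            batch = frames[start:start + batch_size].to(device)
            outputs.append(model(batch).cpu())  # free GPU memory immediately
            del batch
    return torch.cat(outputs, dim=0)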

Cannot generate MPEG-DASH from my own video

Hi, thank you for your great work.
I'm now running this project on my own computer, but I've run into some trouble.
I'm running the code on Windows 10 and placed my video as the README asks.
But after I type "../../dash_vid_setup.sh -i [video_file]", nothing is generated.
I have downloaded MP4Box and x264, and I know they are used to generate MPEG-DASH, but I don't know how to use them.
I need your help, and thanks a lot.
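
Note that a .sh script will not run in a plain Windows shell; it needs something like Git Bash or WSL. As a rough sketch of what a DASH-preparation step generally looks like, not the repository's dash_vid_setup.sh, with illustrative file names, bitrates, and segment length, and assuming ffmpeg with libx264 and MP4Box are on the PATH:

import subprocess

SRC = "input_video.mp4"  # hypothetical source file
LADDER = [("240p", "426x240", "400k"), ("1080p", "1920x1080", "4800k")]

encoded = []
for name, size, bitrate in LADDER:
    out = f"{name}.mp4"
    # Encode one rendition with libx264 at the target resolution and bitrate.
    subprocess.run(["ffmpeg", "-y", "-i", SRC,
                    "-c:v", "libx264", "-b:v", bitrate, "-s", size,
                    "-an", out], check=True)
    encoded.append(out)

# Segment the renditions into DASH (manifest.mpd plus init/media segments).
subprocess.run(["MP4Box", "-dash", "4000", "-rap",
                "-out", "manifest.mpd"] + encoded, check=True)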

Some training problems.

Dear Hyunho Yeo,

I have recently read your article carefully and think this is a great project! In the process of studying it, I have encountered some problems.
1. Training starts from a content-agnostic model (as described in the article). Does this content-agnostic model refer to the provided DNN model trained on the DIV2K training set?
2. When will you open-source the dash.js module? I also have a question: how does dash.js call the PyTorch-trained models in the browser?

Looking forward to your reply!
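
On question 1, my understanding (which may be wrong) is that the content-agnostic model is a generic checkpoint trained on DIV2K, and the content-aware model is obtained by continuing training from that checkpoint on frames of the target video. A minimal PyTorch sketch of that kind of fine-tuning, with hypothetical names and not taken from the repository:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Hypothetical objects: 'model' is the SR network, 'video_patches' is a dataset
# of (low-res, high-res) patch pairs extracted from the target video.
def finetune_from_generic(model, generic_ckpt_path, video_patches, epochs=5):
    model.load_state_dict(torch.load(generic_ckpt_path))  # content-agnostic weights
    model = model.cuda().train()
    loader = DataLoader(video_patches, batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for lr_patch, hr_patch in loader:
            lr_patch, hr_patch = lr_patch.cuda(), hr_patch.cuda()
            optimizer.zero_grad()
            loss = loss_fn(model(lr_patch), hr_patch)
            loss.backward()
            optimizer.step()
    return model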

How to process output.mp4 into .m4s for a dash.js client to play?

I'm trying to do research based on your paper and code, and I have a question.

I noticed that the code first merges the .m4s file and segment_init.mp4 into an input.mp4 file, then uses the DNN model to process input.mp4 into output.mp4, which cannot be played directly by a dash.js client.

Can we instead extract frames directly from the .m4s file, use the DNN model to do super-resolution, and finally encode the HR frames into an "HR" .m4s file? I can't find any solution for this on the Internet. If we could do that, I think it would be more convenient to play the processed video chunk with dash.js.

Or how can I play the output.mp4 file directly with a dash.js client? When playing video with dash.js as the client, what steps should be applied to the output.mp4 file? I wonder how you solved this problem. I would appreciate any help you can give.
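
For what it's worth, one possible approach, which is only a guess and not something from the repository, is to re-package output.mp4 as DASH again, which produces an init segment, .m4s media segments, and an .mpd manifest that dash.js can load. The segment duration and names below are assumptions:

import subprocess

# Re-segment the super-resolved output.mp4 into DASH segments with MP4Box.
subprocess.run(["MP4Box",
                "-dash", "4000",                 # segment duration in milliseconds
                "-rap",                          # cut at random access points
                "-segment-name", "output_seg_",  # prefix for the generated .m4s files
                "-out", "output_manifest.mpd",
                "output.mp4"], check=True)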

Hi, I seem to have a new issue: when I do the above process, it generates output video for three segments and then gets a segmentation fault (core dumped).

I have the error log-
#0 0x00007fffe8c6bd1d in torch::CudaIPCSentData::~CudaIPCSentData() ()
from /home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#1 0x00007fffe8c6e5d8 in torch::(anonymous namespace)::CudaIPCGlobalEntities::~CudaIPCGlobalEntities() ()
from /home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#2 0x00007ffff7829ff8 in __run_exit_handlers (status=0, listp=0x7ffff7bb45f8 <__exit_funcs>,
run_list_atexit=run_list_atexit@entry=true) at exit.c:82
#3 0x00007ffff782a045 in __GI_exit (status=) at exit.c:104
#4 0x00007ffff7810837 in __libc_start_main (main=0x5555556377a0, argc=8, argv=0x7fffffffe368, init=, fini=, rtld_fini=, stack_end=0x7fffffffe358) at ../csu/libc-start.c:325
#5 0x0000555555717160 in _start () at ../sysdeps/x86_64/elf/start.S:103

I wanted to know if I am still having an issue with OpenCV or something else. Please help me out if you can.

Originally posted by @RavaliKolli in #4 (comment)
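
A heavily hedged guess: this backtrace is the CUDA IPC cleanup running at interpreter exit, which can crash when worker processes that touched CUDA tensors are not cleanly joined before the main process exits. A minimal sketch of the kind of process handling that sometimes avoids it; the names are hypothetical and this is not the repository's code:

import torch.multiprocessing as mp

def worker(rank):
    # per-process work that uses CUDA would go here
    pass

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)  # 'spawn' is safer than 'fork' with CUDA
    procs = [mp.Process(target=worker, args=(i,)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()  # make sure every worker has exited before the main process does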

Cannot generate MPEG-DASH dataset from the original video

Dear Hyunho Yeo,

The steps I used to install x264 on Ubuntu are listed below, but I still cannot run the command "dash_vid_setup.sh -i [video file]".

steps:

  • sudo apt-get install yasm

  • cd x264

  • ./configure --enable-shared --enable-static

  • make

  • make install

The error information:
[screenshot of the error output]

Could you please help me fix the problem? Or could you please share the steps you used to install x264?
Thanks in advance.

Best,

I want to ask a question about integrating ABR

Hello, I am a graduate student, and I want to ask a question about your work on NAS. When you train the integrated ABR algorithm, how do you calculate the reward when the integrated ABR chooses to download the DNN? The paper only mentions the reward for downloading video chunks.

Could you open up the entire DASH framework?

Dear Hyunho Yeo,

I have recently read your article carefully and think this is a great project!
However, I see that you have only open-sourced the NAS-MDSR module. Could you open up the entire framework? I think this would be the most direct and effective way to improve QoE for HAS technology.
I am a student in China and would like to do some research in this area.

Looking forward to your reply!

Hello!

Dear author,
I'm Wu. I'm sorry to bother you. Recently, I have been trying to test the NAS code, but the dataset webpage cannot be accessed.
Could you please update the link? I am much obliged to you for your help.
Looking forward to your reply!

testing error

File "/home/ubuntu/NAS_public/process.py", line 366, in encode
pipe.stdin.flush()
BrokenPipeError: [Errno 32] Broken pipe
decode [super-resolution]: 1.8120205402374268sec

I'm not sure why this error happened. Can you help me out?

A question about SSIM^(-1) in the paper

In Section 5.3 of the paper, the function SSIM^(-1) is defined.
The paper says: "To create the mapping, we measure the SSIM of original video chunks at each bitrate (or resolution) and use piece-wise linear interpolation (e.g., (400 Kbps, SSIM1), ..., (4800 Kbps, SSIM5))".
[figure from the paper illustrating the SSIM^(-1) mapping]
I have a question about this:
When calculating R_effective(C_n), we use the SSIM^(-1) function to map an SSIM value to a corresponding effective bitrate.
However, the bitrate values are discrete, so what if there is no bitrate corresponding to an SSIM value in the measurement data (400 Kbps, SSIM1), ..., (4800 Kbps, SSIM5)?
For example, suppose we have measurement data like this:
(400 Kbps, SSIM1), (800 Kbps, SSIM2), ..., (4800 Kbps, SSIM5)
Now we have an SR-reconstructed chunk whose SSIM value is greater than SSIM1 and less than SSIM2; how do we then get the effective bitrate of this chunk?
Do you mean we can do a linear interpolation over those discrete measurement points?
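
For what it's worth, a minimal sketch of how I read the piece-wise linear mapping; the SSIM anchor values below are made up, and this is my interpretation rather than the authors' code:

import numpy as np

# Measured (bitrate, SSIM) pairs for the original encodings of one chunk.
bitrates_kbps = np.array([400, 800, 1200, 2400, 4800], dtype=float)
ssim_values   = np.array([0.90, 0.93, 0.95, 0.97, 0.985])  # must be increasing

def effective_bitrate(ssim):
    # np.interp linearly interpolates between neighboring anchors, so an SSIM
    # between SSIM1 and SSIM2 maps to a bitrate between 400 and 800 Kbps.
    # Values outside the measured range are clamped to the end points.
    return float(np.interp(ssim, ssim_values, bitrates_kbps))

print(effective_bitrate(0.92))  # about 666.7 Kbps, between the 400 and 800 anchors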
