
transform360's Introduction

Transform360

Transform360 is a video/image filter that transforms a 360 video from one projection to another. Usually, the input projection is equirectangular and the output projection is cubemap. We also keep the previous version of the transform, Transform_V1, in the file vf_transform_v1.c.

Advantages of Transform360

  1. Transform360 achieves better performance in memory usage and visual quality. For many different algorithm and parameter settings in real-world applications, it also achieves better processing speed.
  2. Transform360 gives people more control over the quality of the output video frames, and even the quality of different regions within a frame.
  3. Transform360 separates the computation and transform logic from ffmpeg, so people have much more flexibility to use different resources to develop the transforms without being concerned with the details of the ffmpeg implementation/integration.

To Build And Use Transform360

Building on Ubuntu

Transform360 is implemented in C++ and is invoked as an ffmpeg video filter. To build and use Transform360, follow these steps (special thanks to https://github.com/danrossi):

  1. Check out transform360.
  2. Check out the ffmpeg source.
  3. Install ffmpeg and the dev versions of OpenCV and the codec libraries you need, e.g.:
sudo apt-get install ffmpeg
sudo apt-get install libopencv-dev
sudo apt-get install nasm libxvidcore-dev libass-dev libfdk-aac-dev libvpx-dev libx264-dev
  4. Build and install Transform360 in the Transform360 folder:
cmake ./
make
sudo make install
  5. Copy vf_transform360.c to the libavfilter subdirectory in the ffmpeg source.
  6. Edit libavfilter/allfilters.c and register the filter by adding the following line in the video filter registration section:
extern AVFilter ff_vf_transform360;

For older ffmpeg versions (i.e., if you see existing filters are registered with REGISTER_FILTER), please instead add

REGISTER_FILTER(TRANSFORM360, transform360, vf);
  7. Edit libavfilter/Makefile and add the filter by adding the following line in the filter section:
OBJS-$(CONFIG_TRANSFORM360_FILTER) += vf_transform360.o
  8. Edit vf_transform360.c in the libavfilter folder.

Change the include path from

#include "transform360/VideoFrameTransformHandler.h"
#include "transform360/VideoFrameTransformHelper.h"

to

#include "Transform360/Library/VideoFrameTransformHandler.h"
#include "Transform360/Library/VideoFrameTransformHelper.h"
  9. Configure ffmpeg in the source folder:
./configure --prefix=/usr/local/transform/ffmpeg --enable-gpl --enable-nonfree --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libvpx --enable-libx264 --enable-libxvid --enable-libopencv --extra-libs='-lTransform360 -lstdc++'
  10. Make ffmpeg:
make
  11. Use the local binary with ./ffmpeg, or install it with make install.

Running

Check out the options for the filter by running ffmpeg -h filter=transform360.

A typical example looks something like:

ffmpeg -i input.mp4
    -vf transform360="input_stereo_format=MONO
    :cube_edge_length=512
    :interpolation_alg=cubic
    :enable_low_pass_filter=1
    :enable_multi_threading=1
    :num_horizontal_segments=32
    :num_vertical_segments=15
    :adjust_kernel=1"
    output.mp4

To Build And Use Transform_V1

Building

Transform_V1 is implemented as an ffmpeg video filter. To build Transform_V1, follow these steps:

  1. Checkout the source for ffmpeg.
  2. Copy vf_transform_v1.c to the libavfilter subdirectory in ffmpeg source.
  3. Edit libavfilter/allfilters.c and register the filter by adding the line extern AVFilter ff_vf_transform_v1; in the video filter registration section. For older ffmpeg versions (i.e., if you see existing filters registered with REGISTER_FILTER), please instead add REGISTER_FILTER(TRANSFORM_V1, transform_v1, vf);
  4. Edit libavfilter/Makefile and add the filter by adding the line OBJS-$(CONFIG_TRANSFORM_V1_FILTER) += vf_transform_v1.o in the filter section.
  5. Configure and build ffmpeg as usual.

Running

Check out the options for the filter by running ffmpeg -h filter=transform_v1.

A typical example looks something like:

ffmpeg -i input.mp4
    -vf transform_v1="input_stereo_format=MONO
    :w_subdivisions=4
    :h_subdivisions=4
    :max_cube_edge_length=512"
    output.mp4

License

Transform360 and Transform_V1 are BSD licensed, as found in the LICENSE file.

transform360's People

Contributors

abuisine, evgenykuzyakov, joelmarcey, mateuszitelli, nathansizemore, pangch, puffpio, zpao


transform360's Issues

Script testing

Compiling the ffmpeg source code plus the filter plugin takes a lot of time. Is there some method to avoid recompiling the code after every change? In other words, is there a way to recompile only the filter code without having to recompile ffmpeg each time?

Simple instructions for Newbie Help!

I am working on a project to convert a series of videos from equirectangular to cube maps. This tool is the solution... but I cannot use it. It is too complicated for me. Could you explain step by step how to do it? Thank you very much.

Unable to pass the compiler on mac osx 10.11

The default ffmpeg source code compiles successfully, but adding this filter breaks everything. I then tried on Debian, where it compiled successfully, but it does not work on Mac.

I have installed some dependency libraries and tools, and tried ./configure --enable-gpl.
Here is the complaint:
(screenshot from 2016-06-16 at 9:47:19 AM)

ERROR: opencv not found

I am following the instructions in "Building on Ubuntu".
Every time I try ./configure --prefix=/usr/local/transform/ffmpeg --enable-gpl --enable-nonfree --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libvpx --enable-libx264 --enable-libxvid --enable-libopencv --extra-libs='-lTransform360 -lstdc++'
it shows: ERROR: opencv not found. But I have already installed OpenCV.

Distro: Ubuntu 16.04.2 LTS ,
ffmpeg source: v3.4

transform360 doesn't offer "FLAT_FIXED" output_layout

transform360 appears to not offer the output_layout FLAT_FIXED, which was offered in transform_v1. Running ./ffmpeg -h filter=transform360 produces the following output_layout options:

  output_layout     <int>        ..FV.... Output video layout format (from 0 to 4) (default CUBEMAP_32)
     CUBEMAP_32                   ..FV....
     CUBEMAP_23_OFFCENTER              ..FV....
     EQUIRECT                     ..FV....
     BARREL                       ..FV....
     EAC_32                       ..FV....
     cubemap_32                   ..FV....
     cubemap_23_offcenter              ..FV....
     equirect                     ..FV....
     barrel                       ..FV....
     eac_32                       ..FV....

transform360 offers great quality and performance improvements, but in order to adopt it, I will need to be able to output flat versions of my equirectangular content, like I could with transform_v1. Are there any plans to re-implement FLAT_FIXED layout into transform360? If not, are there other means by which I can use transform360 to achieve the same outcome as transform_v1's FLAT_FIXED?

Troubles with building ffmpeg

Could you please add some details on how to build the Transform360 library together with OpenCV and then build ffmpeg?

I'm doing the following steps (on a MacBook Pro with Sierra, x86_64 cpu):
0 - copy vf_transform360.c, and edit libavfilter/allfilters.c and libavfilter/Makefile as the guide says
1 - call cmake inside of opencv_dir/ (tried with opencv 2.4.13 and 3.2.0)
2 - put Transform360 folder inside of opencv_dir/lib/
3 - call cmake inside of opencv_dir/lib/Transform360/
4 - call make inside of opencv_dir/lib/Transform360/ to obtain libTransform360.a
5 - put libTransform360.a inside of ffmpeg_dir/
6 - call ./configure --prefix=/usr/local --enable-gpl --enable-nonfree --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-libopencv --extra-libs='ffmpeg_dir/libTransform360.a -lstdc++' --extra-cflags="-Ifolder_containing_transform360_headers/" --extra-ldflags="-Lffmpeg_dir/" inside of ffmpeg_dir/
7 - call make inside of ffmpeg_dir/

After 15 minutes it finishes with the following error:

LD ffmpeg_g
Undefined symbols for architecture x86_64:
"cv::error(int, cv::String const&, char const*, char const*, int)", referenced from:
cv::Mat::Mat(int, int, int, void*, unsigned long) in libTransform360.a(VideoFrameTransform.cpp.o)
"cv::String::deallocate()", referenced from:
cvflann::anyimpl::big_any_policycv::String::static_delete(void**) in libTransform360.a(VideoFrameTransformHandler.cpp.o)
cvflann::anyimpl::big_any_policycv::String::move(void* const*, void**) in libTransform360.a(VideoFrameTransformHandler.cpp.o)
cv::Mat::Mat(int, int, int, void*, unsigned long) in libTransform360.a(VideoFrameTransform.cpp.o)
cvflann::anyimpl::big_any_policycv::String::static_delete(void**) in libTransform360.a(VideoFrameTransform.cpp.o)
cvflann::anyimpl::big_any_policycv::String::move(void* const*, void**) in libTransform360.a(VideoFrameTransform.cpp.o)
"cv::String::allocate(unsigned long)", referenced from:
cv::Mat::Mat(int, int, int, void*, unsigned long) in libTransform360.a(VideoFrameTransform.cpp.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [ffmpeg_g] Error 1

I do not understand where the error is, or whether there are several errors in my procedure. I guess it is in steps 3 and 4, when I build Transform360 along with OpenCV.

Speed optimisation for transform360 filter.

Currently I am using the following command to convert my input video. It takes 1 min 8 sec to convert a 25-second video. I am running it on an 8-core Intel Xeon processor, and the CPU is only being utilized at 40-50 percent.

ffmpeg -i input.mp4 -vf transform360="input_stereo_format=MONO :cube_edge_length=512 :interpolation_alg=cubic :enable_low_pass_filter=1 :enable_multi_threading=1 :num_horizontal_segments=32 :num_vertical_segments=15 :adjust_kernel=1" output.mp4

My requirement is that it should run in the original video length or faster (1x+ speed). I tried looking at what each of these options does, but didn't find much information on them. Can you point me in the right direction, please?

Without any kind of optimisations, I just want to input an equirectangular mp4 file and output a cubemap mp4 file. The size of the file is not the constraint; the time taken to process it is.

What are the disadvantages of using something like this command

ffmpeg -i input.mp4 -vf transform360="input_stereo_format=MONO :cube_edge_length=512 :enable_multi_threading=1" output.mp4

I just want to input an equirectangular file and get a cubemap output with as few operations as possible, as fast as possible. As long as there is no significant quality difference, it doesn't matter.

This became the bottleneck in an otherwise interesting project, so any kind of useful reply is hugely appreciated.

Question about multi-threading and segments

I have been reading the source code of Transform360 and have some questions about the segments and the multi-threaded low-pass filter.
As I read generateKernelAndFilteringConfig, the frame is divided into small segments and the config of every member of the vector is initialized. Then, when processing a frame plane, OpenCV's sepFilter2D processes every small segment.
If the number of segments determines the number of threads, the filter should speed up as the number of threads increases, but the result is the exact opposite.
Does increasing the number of segments increase the quality of the low-pass filter?
Maybe my understanding is wrong; I hope someone can help me. Thanks!

File size reduction

I've just applied the transform into a 1-minute equirectangular 360 video which I received here:
http://cloud-mobile.360heros.com/producers/4630608605686575/1816573510412126/video/video_13a58875cc235e11f295bd5ac297d666_output_2k.mp4

The video is 1920x960 resolution, H264 AAC encoded, and the output is a 1440x960 MPEG-4 AAC encoded file. I was wondering about the huge file size reduction: the original file was 123MB and now it is almost 10MB. Can you help me understand what is going on? Why is the size reduction so large? Perhaps one of the options I selected below is affecting something?

I used the exact command you provided as example:
ffmpeg -i input.mp4 -vf transform='input_stereo_format=MONO:w_subdivisons=4:h_subdivisons=4:max_cube_edge_length=512' output.mp4

FFmpeg crashes with transform360 filter on Nvidia GPU

Hello,
I have installed all the necessary Nvidia drivers and the CUDA toolkit and compiled FFmpeg with Nvidia support by following the steps in the NVIDIA blog. I encoded videos using h264_nvenc and it uses the GPU perfectly, so there is no issue with the FFmpeg compiled with GPU support. But when I try to apply the transform360 filter, the encoding does not take place. FFmpeg gives me the following error. What seems to be the problem? Does this filter support running on a GPU?

ctx->cudl->cuMemcpy2D(&cpy) failed -> CUDA_ERROR_ILLEGAL_ADDRESS:

I am using Ubuntu 16.04 with NVIDIA K80 GPUs with driver version of 390.12 and CUDA toolkit version of 9.1

various pixel formats not supported

YUV 8-bit content is supported, but (almost) anything else produces garbage (including RGB24).

Supported pixel formats should be declared through query_formats.

Separate filter and algorithm logic

Hi, Libav/FFmpeg developer here: let me first say it's great that Facebook is willing to open their tools and share IP. However, the current state of this project makes it really hard for people to use (not many can actually edit source code), and the incompatible license terms prevent this filter from actually getting merged upstream.

I'm not going to judge the license selection, and I can understand your need to keep rights within the company; however, this should not leak into the filter itself, and there are ways to prevent it. The common approach, for example, is to release a library with a relatively simple API, under whatever license makes sense to you, and then release an LGPL filter which gets enabled with the --enable-nonfree configure option. This would allow the basic filtering to be always available, and all the improvements to the library would still be yours.

Of course there are other options, and I know that this filter is targeting a niche audience (for now), but the right way to spread this technology is to let it reach as many users as possible; without community/upstream support, its chances of succeeding are severely impacted.

Offset Cubemap pipeline documentation

I'm currently trying to utilise the offcenter cubemap format in a custom stereo VR video player built in Unity (for GearVR)

As previously mentioned by another user in Issue 33, I'm seeing severe distortion around the edges of the cube and bowing in the image when it's brought over to Unity. It is also destroying the stereo effect in the forward and back directions.

I assume I'm missing a key step when unwarping the resulting offset cubemap in Unity, but there is little to no information or documentation on this.

I had (wrongly?) assumed I could just offset the Unity camera by the same percentage used during the encode.

I'm looking at the talk about Offset Cubemaps here https://www.youtube.com/watch?v=uSHTsGNFmeM&t=1424s, and it mentions the following:

texture( cubeSampler, normalize(dir) + offset)

instead of

texture (cubeSampler, dir)

Can anyone talk more to this process of getting the distortions corrected back out at the other end?

I've also looked briefly at the OVROverlay in the latest Mobile SDK, which mentions offcenter cubemaps, but I'm having even more trouble ingesting the stereo 2x3 (LR) layout.

Is there a way to test offset cubemaps?

As I understand, this code should also be related to this article, i.e. it should be possible to create offset cubemaps (as well as cubemaps) starting from an equirectangular video.

I'm able to use it, setting an offset (via the params cube_offcenter_x, cube_offcenter_y and cube_offcenter_z), and the output video matches my expectations.

The problem is: when I try to render the video on the internal faces of a cube, translating the camera in accordance with the cube_offcenter_* params previously used, the video is not rendered properly (I'm using OpenGL ES on Android); instead it has some distortions. Could you provide some additional details on how to map the offset-cubemap video inside the cube? Are there examples of how it actually works, or some tool to see it working?

Thanks

(screenshots: offset 0, and Z offset 0.50)

PNG Sequence resulting in garbled image.

I'm having trouble ingesting any sort of uncompressed image or video format. I'm not sure if this is an Ubuntu/ffmpeg issue or if it resides with the transform process.

Attached is an image showing the result of ingesting a 5120 x 5120 PNG 360 TB (top/bottom) stereo image.

It works fine if we first convert the sequence to JPG or convert the uncompressed QuickTime to H265/H264 before ingesting, but then we are introducing encoding artifacts before running it through this transform process.

Any assistance appreciated.

New compile docs for Linux

The include paths have changed.

Once copying vf_transform360.c into libavfilter.

The include path needs to be the following (this is the installation path from the previous step):

#include "Transform360/Library/VideoFrameTransformHandler.h"
#include "Transform360/Library/VideoFrameTransformHelper.h"

Live360 API

Hello,
is there any information on the Live API for publishing 360-degree live video (cubemap)?

Compiling on Ubuntu 16: undefined reference to VideoFrameTransform_new

One of our researchers has asked us to compile this but I've run into an issue.

$ make
[...]
LD	ffmpeg_g
libavfilter/libavfilter.a(vf_transform360.o): In function `generate_map':
/local-scratch/swbuilder/Linux/Ubuntu/16.04/amd64/ffmpeg-3.3.1/libavfilter/vf_transform360.c:136: undefined reference to `VideoFrameTransform_new'
/local-scratch/swbuilder/Linux/Ubuntu/16.04/amd64/ffmpeg-3.3.1/libavfilter/vf_transform360.c:152: undefined reference to `VideoFrameTransform_generateMapForPlane'
libavfilter/libavfilter.a(vf_transform360.o): In function `filter_frame':
/local-scratch/swbuilder/Linux/Ubuntu/16.04/amd64/ffmpeg-3.3.1/libavfilter/vf_transform360.c:344: undefined reference to `VideoFrameTransform_transformFramePlane'
libavfilter/libavfilter.a(vf_transform360.o): In function `uninit':
/local-scratch/swbuilder/Linux/Ubuntu/16.04/amd64/ffmpeg-3.3.1/libavfilter/vf_transform360.c:295: undefined reference to `VideoFrameTransform_delete'
collect2: error: ld returned 1 exit status
Makefile:136: recipe for target 'ffmpeg_g' failed
make: *** [ffmpeg_g] Error 1

Distro: Ubuntu 16.04.2 LTS (Xenial Xerus)
ffmpeg source: v3.3.1
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609

transform360 itself compiles fine, but following the instructions at
https://github.com/facebook/transform360 yields the error above.

Troubleshooting steps I've tried:

  • vf_transform360.c: changing these two includes to absolute paths
    #include "transform360/VideoFrameTransformHandler.h"
    #include "transform360/VideoFrameTransformHelper.h"

  • vf_transform360.c: #include "Transform360..." (capital T, as
    suggested in #24)

  • starting over with
    export ELIBS='-lTransform360 -lstdc++'
    export LDFLAGS='-L/usr/local/lib'
    export CFLAGS='-I/usr/local/include/Transform360/Library'
    ./configure --enable-gpl --enable-nonfree --enable-libass --enable-libfdk-aac
    --enable-libfreetype --enable-libmp3lame --enable-libopus
    --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264
    --enable-libxvid --enable-libopencv

All of these yielded the same error. Any suggestions would be most welcome. Thank you!

Transform360

Hi,

Thanks for keeping this codebase updated. Was wondering if you can provide more details on how to install the latest update you pushed, specifically on how to do the following:
"

  1. Checkout the source for the Transform360, openCV and ffmpeg.
  2. Build .cpp and .h files in Transform360, together with openCV, as a library, where these files are dependent on openCV.
  3. Add the Transform360 library file to the extra-libs of ffmpeg.
    "?

Thanks so much.

Offset Cubemap

Hi,

Regarding the latest update to the code: I was wondering, what is the "LAYOUT_CUBEMAP_23_OFFCENTER" layout? Is it supposed to be the offset cubemap? I can't quite visualize its output. If not, are there plans to release some documentation/code on the implementation details of the offset cubemap? Also, I noticed you have "LAYOUT_FB" in the code, but no more details about it.

Thanks so much

expand_coef usage

Hi there!
Could you please explain the reason for expand_coef being 1.01 by default?
Is it because you do not map the resulting texture to a cubemap but rather to multiple planes, in order to mitigate a clamp-to-edge setting?

Cheers

subdivisions calculation

float y = (i + (suby + 0.5f) / s->h_subdivisons) / out_h;
float x = (j + (subx + 0.5f) / s->w_subdivisons) / out_w;
Should it be something like this?
float y = (i + suby / (float) h_subdivisions - 0.5f) / out_h
Because right now we don't sample around 0.5; instead we look 1 aside, and it is not clear what the 0.5f is doing at all.

Why is the cubemap video very fuzzy?

I converted a video using the command ffmpeg -i input.mp4 -vf transform='input_stereo_format=MONO:w_subdivisons=4:h_subdivisons=4:max_cube_edge_length=512' output.mp4.
The original video is 82MB with a resolution of 1920x960. The output video is only 17MB with a resolution of 1440x960. The compression ratio is too high, so the output video is very fuzzy. How can I change the command options to make the output video clearer?

Transform360 issues building for Windows

Hi,

As others seem to have had, I am getting issues building ffmpeg with Transform360. I am using MSYS2 and the CMake produces a VisualStudio2015 solution to build a Transform360.lib. I then use this via the configure command...

./configure --prefix=ffmpeg/ --extra-cflags=-I"/usr/local/include" --extra-ldflags='-L/usr/local/lib -static' --pkg-config-flags="--static" --enable-nonfree --enable-gpl --enable-static --disable-shared --enable-libx264 --enable-libx265 --enable-libfdk_aac --enable-nvenc --extra-libs='Transform360.lib'

The subsequent 'make' call then spits out the error...

libavfilter\libavfilter.a(vf_transform360.o): In function `generate_map':
C:\msys64\home\stebu\ffmpeg_nvidia_cubemap2/libavfilter/vf_transform360.c:140: undefined reference to `VideoFrameTransform_new'
C:\msys64\home\stebu\ffmpeg_nvidia_cubemap2/libavfilter/vf_transform360.c:140:(.text+0x57f): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `VideoFrameTransform_new'

I am betting it is because the transform lib has been built with the VS compiler, while ffmpeg is being compiled with gcc...

Can anyone offer any guidance?

Thanks!

How can I build the support to generate cube faces with different resolution ?

I am very new to this field and am trying out various things to learn.

I would like to generate cube faces in the output video with different resolutions to hack together an adaptive solution as described in the article for transform360

I have two ideas, and would like feedback:

  1. Enhance the transform360 library to take more options, supporting a different resolution for each of the cube faces.

  2. Use a filter which will change the resolution for each of the faces as per the inputs.

Forum for developer discussions?

To my knowledge, there is no community or forum to discuss the transform project. I am hopeful that Facebook would turn on the "gitter" chat capability for this repository, but barring that, would suggest that interested people convene on the "#360" channel of the "vrdev" slack team.

Slack doesn't allow for public joins, so if you are interested please request an invite here: https://t.co/tltfdGvqXf

Transform360 Building Issue

Hi.

I think I have checked quite a few issues related to my problem, but I am not able to fix it.
The problem is
./configure --enable-libopencv
generates an error message, "ERROR: opencv not found".

#36 suggested installing ffmpeg with "apt-get install ffmpeg". However, the source files obtained from this command do not include a configure file.

I cloned the latest version of opencv and ffmpeg from github and installed them by using official guideline.

#22 suggested compiling transform360 after changing CMakeLists.txt. I did the same thing as telganainy suggested and compiled it without any errors.

I think the main problem is the link between OpenCV and ffmpeg, but owing to some version issues or whatever, it doesn't work.

Any suggestions? Thanks in advance.

need solution for "libopencv not found"

I followed the guide to install the latest version of transform360. When compiling with --enable-opencv,

I got "libopencv not found".

I tried the solutions in the resolved issues, but none of them corrects this.

Is there any new advice?

Recommended filter settings for 4K sources

Hi there. I am just wondering about recommended filter settings for 4K sources.

With this config

cube_edge_length=512 

Each cube face is a power-of-two 512 pixels, making the video 1024 pixels in height. Is this still good quality, or should it match the source dimensions?

Is a lower cube dimension the whole point, i.e. the same quality for reduced bandwidth?

Without an encoder bitrate setting, the bitrate is also reduced from the source.

After the new include path change, I am finally able to run an ffmpeg encode to check its output. It encodes to cubemap correctly.

performance issue

As you can see, the performance continues to drop:
frame= 42 fps= 12 q=0.0 size= 0kB time=00:00:02.22 bitrate= 0.0kbits/s speed=0.629x
frame= 335 fps= 16 q=28.0 size= 8388kB time=00:00:13.95 bitrate=4924.1kbits/s speed=0.646x
frame= 779 fps= 16 q=28.0 size= 24771kB time=00:00:31.71 bitrate=6398.0kbits/s speed=0.634x
frame= 1004 fps= 15 q=28.0 size= 38468kB time=00:00:40.70 bitrate=7742.4kbits/s speed=0.61x
frame= 1128 fps= 15 q=28.0 size= 49041kB time=00:00:45.67 bitrate=8796.5kbits/s speed=0.597x
frame= 1420 fps= 14 q=28.0 size= 72797kB time=00:00:57.34 bitrate=10398.5kbits/s speed=0.578x
frame= 1901 fps= 14 q=28.0 size= 106590kB time=00:01:16.59 bitrate=11399.6kbits/s speed=0.566x
frame= 2514 fps= 14 q=28.0 size= 140127kB time=00:01:41.11 bitrate=11352.4kbits/s speed=0.561x
frame= 3328 fps= 14 q=28.0 size= 187116kB time=00:02:13.66 bitrate=11467.6kbits/s speed=0.557x
frame= 4115 fps= 14 q=28.0 size= 233225kB time=00:02:45.15 bitrate=11568.6kbits/s speed=0.555x
I checked the cv::remap function (without GPU acceleration) and found that it runs slower and slower. I tested about 4500 frames and wonder whether the performance will continue to drop. Is this a cv::remap issue? Is there any solution?

From cubemap to equirectangular

A nice option would be to allow the inverse transformation with ffmpeg, from cubemap to equirectangular. One would give ffmpeg a sequence of 6xN files, one for each face of the cube for each of the N frames, and convert them to N equirectangular frames.

Still lacking build documentation.

How do I include OpenCV in the build? I tried everything and still get this:

  By not providing "FindOpenCV.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "OpenCV", but
  CMake did not find one.

  Could not find a package configuration file provided by "OpenCV" with any
  of the following names:

    OpenCVConfig.cmake
    opencv-config.cmake

  Add the installation prefix of "OpenCV" to CMAKE_PREFIX_PATH or set
  "OpenCV_DIR" to a directory containing one of the above files.  If "OpenCV"
  provides a separate development package or SDK, be sure it has been
  installed.

tried

export OpenCV_INCLUDE_DIRS=opencv/include

and

export OpenCV_DIR=opencv/

Build instructions

The build instructions are somewhat lacking. Any pointers on how to build for linux?

border of six squares

I converted a 4K video using this tool.
I found that every square has about a 1-pixel border, which splits the video during playback.

(see attached picture)
