
ffmpegcv's People

Contributors

armin3731, chenxinfeng4, rankor, xial-kotori, zylo117


ffmpegcv's Issues

ffmpegcv.VideoCaptureNV is not working

My environment:
cuda 11.3
cudnn 8.2.1

pip install tensorflow-gpu==2.7.0

pip install torch==1.12.0+cu113 torchaudio==0.12.0+cu113 torchvision==0.13.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

opencv-contrib-python 4.5.3.56
ffmpeg version N-87353-g183fd30 Copyright (c) 2000-2017 the FFmpeg developers
libavutil 55. 76.100 / 55. 76.100
libavcodec 57.106.101 / 57.106.101
libavformat 57. 82.101 / 57. 82.101
libavdevice 57. 8.101 / 57. 8.101
libavfilter 6.105.100 / 6.105.100
libswscale 4. 7.103 / 4. 7.103
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100

When I change to ffmpegcv.VideoCapture it reads all frames. I think there is a problem with my CUDA configuration.

Request: RTSP routes

First of all, great job! I wanted to ask whether it is possible to add an option to get frames from an RTSP route, something like
'rtsp://admin:admin@ip_cam/'

ffmpegcv is not working in ubuntu 22

Hello! I was using ffmpegcv on Windows and it worked perfectly fine, but when I moved to Ubuntu 22 and installed it, it could not read streams from the same cameras. ffmpegcv installs without any error, but when I try to read an RTSP stream it does not read any frames. OpenCV works fine, but I want to use ffmpegcv instead of OpenCV. Please help me in this regard.

Read by frame id

Great work! Do you plan to support reading frames by index (random access)?

How to minimize the buffer of VideoCaptureStream?

How can the buffer of VideoCaptureStream be reduced as much as possible?
For my purposes, as soon as VideoCaptureStream starts receiving a frame, I want it to hand that frame to the data-processing step immediately instead of buffering it.
How can I achieve that with ffmpegcv? And is there a way to do it with OpenCV?
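
A minimal workaround sketch for the backlog problem, assuming the stream reader's usual (ret, frame) read API: run the capture in a background thread and keep only the most recent frame, so the consumer always sees the newest data. This does not shrink ffmpeg's own internal buffering, it only prevents a backlog on the Python side.

import threading
import ffmpegcv

class LatestFrameReader:
    """Keep only the newest frame from a stream; older frames are discarded."""
    def __init__(self, url):
        self.cap = ffmpegcv.VideoCaptureStream(url)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while self.running:
            ret, frame = self.cap.read()
            if not ret:
                break
            with self.lock:
                self.frame = frame      # overwrite: unread frames are dropped

    def read(self):
        with self.lock:
            return self.frame

    def release(self):
        self.running = False
        self.cap.release()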

Modification on line 55 in ffmpeg_reader_camera.py

Hello, thank you so much for your contribution.
I was running the following code shown in the documentation:

from ffmpegcv.ffmpeg_reader_camera import query_camera_options
options = query_camera_options(0) # or query_camera_options("Integrated Camera")
print(options)

I got an AssertionError on line 59 in _query_camera_divices_win:
assert len(matches) == len(alternative_names)

I modified line 55 in order to get the first matches. It was a regex problem:

from this:
pattern = re.compile(r'\[[^\]]*?\] "([^"]*)"')

to this:
pattern = re.compile(r'"([^"]*)" ')

I ran the code again and now it works as expected.

Could you modify this expression in your source code?

line 55 in ffmpeg_reader_camera.py

Thanks in advance

Bruno
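
For illustration only, a hedged comparison of the two patterns against a hypothetical dshow listing line (the exact ffmpeg -list_devices output format varies between versions, so the sample string below is an assumption):

import re

# Hypothetical line from `ffmpeg -list_devices true -f dshow -i dummy`;
# real output formatting can differ between ffmpeg builds.
line = '[dshow @ 0000020c]  "Integrated Camera" (video)'

original = re.compile(r'\[[^\]]*?\] "([^"]*)"')   # expects exactly one space after ']'
proposed = re.compile(r'"([^"]*)" ')              # only looks at the quoted name

print(original.findall(line))   # [] -- misses the name because of the extra space
print(proposed.findall(line))   # ['Integrated Camera']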

How should I extract all of the frames from a video?

Hello, excuse me if this is a very basic question, I am still learning.
I tried an approach that was similar to this:

import ffmpegcv
vfile_in = 'A.mp4'
vfile_out = 'A_h264.mp4'
vidin = ffmpegcv.VideoCapture(vfile_in)
vidout = ffmpegcv.VideoWriter(vfile_out)

with vidin, vidout:
    for frame in vidin:
        vidout.write(frame)

but instead of vidout = ffmpegcv.VideoWriter(vfile_out) I wanted to have this:

vidout = ffmpegcv.VideoWriter(os.path.join(output_temp, 'frame_' + str(i).zfill(6) + "." + image_extension))

Sadly, while the output had the correct name, it did not keep or parse the image data (it had no height or width, and it even showed up as a corrupted image in Windows).

While I am here, I'd like to know if there is a way, similar to my current FFmpeg approach, to speed things up:

ffmpeg_command = f'{ffmpeg_path} -i "{video_file}" -vsync 0 -qmin 1 -qscale:v 1 -v quiet -stats "{output_temp}"/"{frame_name}"."{image_extension}"'

In particular, adding "-vsync 0 -qmin 1 -qscale:v 1" has virtually doubled the performance; granted, it produces a larger overall file size, but space isn't an issue, speed is.

Could this be replicated, and if so how?

And another one: could ffmpegcv support asynchronous encoding, i.e. encoding frame by frame so that each time I process a frame (extract it, upscale it, then encode) it can be encoded right away, without having to wait for all of the images to be upscaled before the encoding starts? Basically encoding on the go, I guess?
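
Regarding the frame-extraction part of the question, a minimal sketch that writes one image file per frame instead of using VideoWriter (which targets video containers, not single images); the -vsync/-qscale tuning from the raw ffmpeg command has no direct equivalent here, so this only covers the extraction itself. The output folder and extension are placeholders.

import os
import cv2
import ffmpegcv

vfile_in = 'A.mp4'
output_temp = 'frames'            # hypothetical output folder
image_extension = 'png'
os.makedirs(output_temp, exist_ok=True)

with ffmpegcv.VideoCapture(vfile_in) as vidin:
    for i, frame in enumerate(vidin):
        # each frame is a plain numpy array (BGR by default), saved as its own image
        out_path = os.path.join(output_temp, f'frame_{i:06d}.{image_extension}')
        cv2.imwrite(out_path, frame)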

Some questions about the release and isOpened functions

  1. Hello, when I process multiple video streams in parallel (using VideoCaptureStream, with a loop that checks whether isOpened() returns True), I find that after running for a while __isopen automatically becomes False. Why is that?
  • For example:

cap = ffmpegcv.VideoCaptureStream(url)
while cap.isOpened():
    # after running for a while, isOpened() returns False
    pass

  2. Also, I built my own object for handling a video stream. During initialization it obtains the frame size through VideoCaptureStream, and the frames are then fetched and processed in a separate process. Calling release() on the capture used to get the size works fine before the separate process is started, but release() hangs once that process is running. Why is that?
  • For example:

cap = ffmpegcv.VideoCaptureStream(url)
shape = cap.out_numpy_shape
# cap.release()  # works fine at this point
Process(target=child_prc, args=(url, shape))
# cap.release()  # hangs at this point

Default preset 'p2' causes an error when using the nvenc_hevc encoder

When I use hevc as the VideoWriter encoder and do not set a preset, I get the error shown in the screenshot.
[screenshot: error message]

From this error I traced the code and found that the default preset is 'p2' in ffmpeg_writer.py.
[screenshot: default preset in ffmpeg_writer.py]

I used ffmpeg -h encoder=nvenc_hevc to check the available presets, but 'p2' is not among the options.
[screenshot: ffmpeg -h encoder=nvenc_hevc output]

To solve this problem, either require the preset parameter to be set explicitly, or modify the default value here in the source code.

AttributeError: 'NoneType' object has no attribute 'shape' when using cv2.VideoCapture() function with source file

objective
use the cv2.VideoCapture() function with a source file

steps

!wget -c https://cdn.creatomate.com/demo/mountains.mp4
pip install -U opencv-python ffmpegcv

code

import cv2
import ffmpegcv
input_file = 'mountains.mp4'
output_file = 'flipud.mp4'
stream = cv2.VideoCapture(input_file)
vidout = ffmpegcv.VideoWriter(output_file , 'hevc', pix_fmt = 'bgr24')
while True:
    (grabbed, frame) = stream.read()
    vidout.write(frame)
stream.release()
vidout.release()

result

AttributeError                            Traceback (most recent call last)
<ipython-input-13-5428b4c0b500> in <cell line: 7>()
      7 while True:
      8     (grabbed, frame) = stream.read()
----> 9     vidout.write(frame)
     10 stream.release()
     11 vidout.release()

/usr/local/lib/python3.10/dist-packages/ffmpegcv/ffmpeg_writer.py in write(self, img)
     65                 height = int(height_15 / 1.5)
     66             else:
---> 67                 height, width = img.shape[:2]
     68             self.width, self.height = width, height
     69             self.in_numpy_shape = img.shape

AttributeError: 'NoneType' object has no attribute 'shape'

best regards
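
The traceback indicates stream.read() returned (False, None) at the end of the video and the None was still passed to vidout.write(). A minimal sketch of the same loop with a guard, leaving everything else unchanged:

import cv2
import ffmpegcv

stream = cv2.VideoCapture('mountains.mp4')
vidout = ffmpegcv.VideoWriter('flipud.mp4', 'hevc', pix_fmt='bgr24')
while True:
    grabbed, frame = stream.read()
    if not grabbed or frame is None:
        break                        # stop at end of stream instead of writing None
    vidout.write(frame)
stream.release()
vidout.release()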

FFmpegWriter.release() does not wait for ffmpeg process completion

This causes a race condition: the video file may still be being written by the running ffmpeg process after the FFmpegWriter object has been released.

Details:

run_async() in video_info.py actually returns the PID of the shell, not the PID of the ffmpeg process.
So release_process() waits for the shell process to complete, not for the ffmpeg process.
In some environments (such as Google Colab) it takes quite a while for ffmpeg to finish.

The following code demonstrates the problem:

import numpy as np, ffmpegcv
import psutil

img = np.zeros((100, 100, 3))

fn = 'test.mp4'
w, h = 100, 100
writer = ffmpegcv.VideoWriter(fn, None, 30, (w, h))
for i in range(1000):
    if i % 10 == 0:
        img = np.random.randint(0,255,(w,h), dtype='uint8')
    writer.write(img)

for p in psutil.process_iter(['name']):
    print(p)

print(">>>>>", writer.process.pid)

writer.release()

for p in psutil.process_iter(['name']):
    print(p)

The sample output is like this:

psutil.Process(pid=25371, name='sh', status='sleeping', started='02:05:35')
psutil.Process(pid=25372, name='ffmpeg', status='sleeping', started='02:05:35')
psutil.Process(pid=25383, name='sleep', status='sleeping', started='02:05:36')
>>>>> 25371

As you can see, the writer.process.pid == 25371, which corresponds to pid=25371, name='sh', not ffmpeg.
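
A minimal user-side workaround sketch, assuming psutil is available: capture the children of the shell process before release(), then wait for them explicitly. This is a hack around the reported behaviour, not a library API.

import psutil

def release_and_wait(writer, timeout=60):
    """Release an ffmpegcv writer and block until its ffmpeg child exits."""
    shell = psutil.Process(writer.process.pid)    # PID of the shell, per the report above
    children = shell.children(recursive=True)     # should include the real ffmpeg process
    writer.release()
    psutil.wait_procs(children, timeout=timeout)  # wait for ffmpeg to finish writing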

support async

Would it be nice to support async?

The ffmpeg parameters can already be run asynchronously using other Python ffmpeg modules, for example:

typed-ffmpeg 
ffmpeg-python

Example code:

import ffmpeg
url = 'http://takemotopiano.aa1.netvolante.jp:8190/nphMotionJpeg?Resolution=320x240&Quality=Standard&Framerate=30'
output = 'rtsp_%Y_%m_%d_%H_%M_%S.mp4'
scale = 'scale = 320 : -1'
segment_time = '00:01:00' 

stream = ffmpeg.input(url).output(filename = output, 
                                  vcodec = 'libx264', #acodec = 'copy', 
                                  reset_timestamps = 1, 
                                  strftime = 1, 
                                  f = 'segment', 
                                  segment_time = segment_time, 
                                  segment_atclocktime = 1, 
                                  vf = scale, 
                                  r = 25, ).overwrite_output().run_async()

To stop the async process:
stream.terminate()

best regards

RuntimeError: No NVIDIA GPU found despite compiling ffmpeg with CUDA support on Google Colab GPU Instance

I have configured ffmpeg with GPU support on a Google Colab GPU instance using the official NVIDIA docs, but I was still unable to read a video with the GPU method ffmpegcv.VideoCaptureNV(). The ffmpeg build with CUDA support completed successfully and was verified.

I have made my Google Colab notebook public, with all of the steps implemented. Could you please have a look at it and advise me on a possible way to work around this issue?

FFMPEGCV Install on Google Colab GPU Instance

Using with Raspberry Pi Camera

I saw the VideoCaptureCAM() function, but passing "/dev/video0" as camname doesn't seem to work.
Is there any way to use it with this camera?

Writing video with different input and output sizes

Hi,
Everything works except when I try to write a video with dimensions different from the input.

I'm reading a movie with vidin = ffmpegcv.VideoCapture(vfile_in), and vfile_in has a size of (1680, 1050). Then I add some margins to the frame so I can crop it (some points can end up lying outside the frame), both using OpenCV, and write it with vidout = ffmpegcv.VideoWriter(vfile_out, None, vidin.fps, frameSize=(54, 54)).

It returns no error but produces a really small .mp4 file that can't be read.

If I just plot the images using OpenCV, without writing them to a file, they are fine and have the right dimensions (54, 54).

What am I doing wrong here? Thanks!!
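
A hedged sketch of one likely fix: make sure every frame passed to write() already matches the declared frameSize, since a mismatch between the declared size and the raw bytes piped to ffmpeg can yield an unreadable file. The crop region below is a placeholder.

import cv2
import ffmpegcv

vfile_in, vfile_out = 'in.mp4', 'out.mp4'          # hypothetical filenames
vidin = ffmpegcv.VideoCapture(vfile_in)
vidout = ffmpegcv.VideoWriter(vfile_out, None, vidin.fps, frameSize=(54, 54))

with vidin, vidout:
    for frame in vidin:
        crop = frame[0:54, 0:54]                   # hypothetical crop region
        crop = cv2.resize(crop, (54, 54))          # guarantee the declared frameSize
        vidout.write(crop)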

cannot open camera

'ffprobe -v quiet -print_format xml -select_streams v:0 -show_format -show_streams "rtsp://ipcam:8554/main"' returned non-zero exit status 1.

can you help me?

How to index frame

Hey,
is there a way to index a certain frame range with ffmpegcv? Something like this (in OpenCV):

for fra in frames:
    capture.set(cv2.CAP_PROP_POS_FRAMES, fra)  # THIS SPECIFICALLY
    ret, frame = capture.read()
    if ret:
        # cv2.imshow('frame', frame)
        # cv2.waitKey(1000)
        out.write(frame)
        # display the loading image with im.show() to see if the frames are loaded correctly
    if not ret:
        # print('a frame was dropped: ' + str(fra))
        capture.release()
        # out.release()

Thank you!
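
ffmpegcv reads sequentially, so a hedged workaround sketch is to iterate and keep only the wanted indices (reasonable for a modest number of frames, but not true random access):

import ffmpegcv

wanted = {10, 50, 120}                   # hypothetical frame indices
frames = {}

with ffmpegcv.VideoCapture('video.mp4') as cap:
    for idx, frame in enumerate(cap):
        if idx in wanted:
            frames[idx] = frame.copy()   # copy in case the read buffer is reused
        if idx >= max(wanted):
            break                        # stop early once everything is collected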

.isOpened()

Is there an equivalent of cv2.VideoWriter's .isOpened()?

How can I tell whether a VideoWriter file handle is open or not?
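
There is no writer-side isOpened() documented here; a rough, unofficial check (an assumption based on the writer's process attribute that appears in the tracebacks elsewhere on this page) is whether the underlying ffmpeg process is still alive:

def writer_is_open(writer):
    """Rough check: True while the writer's ffmpeg process is running.
    Relies on the undocumented 'process' attribute, so treat it as an assumption."""
    process = getattr(writer, 'process', None)
    return process is not None and process.poll() is None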

Unable to read from camera

Hello, I've just installed the package and was trying out a simple camera capture. I ran the following code and got the error below.

import ffmpegcv
cap = ffmpegcv.VideoCaptureCAM(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\dev\code\faze-exercise\venv\Lib\site-packages\ffmpegcv\__init__.py", line 330, in VideoCaptureCAM
    return FFmpegReaderCAM.VideoReader(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\dev\code\faze-exercise\venv\Lib\site-packages\ffmpegcv\ffmpeg_reader_camera.py", line 277, in VideoReader
    id_device_map = query_camera_devices()
                    ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\dev\code\faze-exercise\venv\Lib\site-packages\ffmpegcv\ffmpeg_reader_camera.py", line 94, in query_camera_devices
    result = {
             ^
  File "C:\dev\code\faze-exercise\venv\Lib\site-packages\ffmpegcv\ffmpeg_reader_camera.py", line 59, in _query_camera_divices_win
    assert len(matches) == len(alternative_names)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError

To further explore the error I ran:

from ffmpegcv.ffmpeg_reader_camera import query_camera_devices
devices = query_camera_devices()

And got the error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\dev\code\faze-exercise\venv\Lib\site-packages\ffmpegcv\ffmpeg_reader_camera.py", line 94, in query_camera_devices
    result = {
             ^
  File "C:\dev\code\faze-exercise\venv\Lib\site-packages\ffmpegcv\ffmpeg_reader_camera.py", line 59, in _query_camera_divices_win
    assert len(matches) == len(alternative_names)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError

I am running an environment with Python 3.11.2, and ffmpegcv is version 0.3.3. I got ffmpeg version 6.0.

AttributeError: 'NoneType' object has no attribute 'shape' during read rtsp

objective
read rtsp and write using ffmpegcv

steps
pip install -U rtsp ffmpegcv

code

import rtsp
import ffmpegcv
url = 'http://clausenrc5.viewnetcam.com:50003/nphMotionJpeg?Resolution=320x240' # aspect ratio = 4:3
client = rtsp.Client(rtsp_server_uri = url, verbose = False)
output_file = 'rtsp_ffmpegcv_time.mp4'
vidout = ffmpegcv.VideoWriter(output_file, None, pix_fmt = 'bgr24')
i = 0
while i <= 9:
    frame = client.read(raw = True)
    vidout.write(frame)
    i += 1
client.close()
vidout.release()

result

AttributeError                            Traceback (most recent call last)
<ipython-input-29-fc5430a80232> in <cell line: 11>()
     12     frame = client.read(raw = True)
     13 
---> 14     vidout.write(frame)
     15 
     16     i += 1

/usr/local/lib/python3.10/dist-packages/ffmpegcv/ffmpeg_writer.py in write(self, img)
     65                 height = int(height_15 / 1.5)
     66             else:
---> 67                 height, width = img.shape[:2]
     68             self.width, self.height = width, height
     69             self.in_numpy_shape = img.shape

AttributeError: 'NoneType' object has no attribute 'shape'

best regards

BrokenPipeError when Reading & Writing video On Nvidia A100 GPU

Hi ffmpegCV Team,
I ran the code for video read & write with ffmpegcv (CUDA acceleration) on Google Colab GPU instances. The code runs without error on a Tesla T4 instance but gives a BrokenPipeError when the same code is run on an Nvidia A100. Could you please help me understand why this occurs?

Please find attached the link to the public Colab notebook that contains the code along with the error message: ffmpegcv notebook

Given below is the code and the error that I received.

import ffmpegcv
from tqdm.notebook import tqdm

vfile_in = '/content/youtube_traffic_videos/traffic_video_1.mp4'
vfile_out = "/content/traffic_video_output_gpu_A100.avi"

cap_gpu = ffmpegcv.VideoCaptureNV(vfile_in,gpu=1) #NVIDIA GPU0
out_gpu = ffmpegcv.VideoWriterNV(vfile_out,"hevc", cap_gpu.fps,gpu=1) 

nframe = len(cap_gpu)
t = tqdm(total=nframe,desc="Reading + Writing Video-File using A100 GPU",leave=True)

while True:
    ret, frame = cap_gpu.read()
    if ret:
      out_gpu.write(frame)
      t.update(1)
    if not ret:
        break
    pass

cap_gpu.release()
out_gpu.release()
---------------------------------------------------------------------------
BrokenPipeError                           Traceback (most recent call last)
<ipython-input-25-a5ed10679eab> in <module>
     14     ret, frame = cap_gpu.read()
     15     if ret:
---> 16       out_gpu.write(frame)
     17       t.update(1)
     18     if not ret:

/usr/local/lib/python3.9/dist-packages/ffmpegcv/ffmpeg_writer.py in write(self, img)
     57         assert self.size == (img.shape[1], img.shape[0])
     58         img = img.astype(np.uint8).tobytes()
---> 59         self.process.stdin.write(img)
     60 
     61     def release(self):

BrokenPipeError: [Errno 32] Broken pipe

VideoCapture crop

Hi,
when I use VideoCapture with crop and the gray pixel format, the beginning of each frame gets appended to the end of the previous frame, so the frames get exported as shown in this image.
[screenshot: misaligned exported frames]
Here is the code that exported these frames:

from ffmpegcv import VideoCapture
import cv2

cap = VideoCapture('test2.mp4', pix_fmt='gray',
                   crop_xywh=(332, 57, 84, 35))

frameCounter = 0

while frameCounter < ((23*60+14)-(2*60+23)) * 60:
    frameCounter += 1
    try:
        ret, frame = cap.read()
        if not ret:
            break
    except:
        break

    if frameCounter % 60 != 0:
        continue

    cv2.imwrite('out/frame'+str(frameCounter)+'.jpg', frame)
    frameCounter += 1
    pass
cap.release()


support reading a link url, uri or rtsp

code

input_file = 'http://195.196.36.242/mjpg/video.mjpg'
output_file = 'rtsp.mp4'
vidin = ffmpegcv.VideoCapture(input_file, pix_fmt = 'bgr24') # bgr24 (default), rgb24, gray
vidout = ffmpegcv.VideoWriter(output_file, 'hevc', vidin.fps, pix_fmt = 'bgr24')
i = 0
with vidin, vidout:
    for frame in vidin:
        vidout.write(frame)
        if i == 9:
            break
        i += 1
vidout.release()
vidin.release()

result

AssertionError                            Traceback (most recent call last)
<ipython-input-4-e4d2397485bb> in <cell line: 4>()
      2 output_file = 'rtsp.mp4'
      3 
----> 4 vidin = ffmpegcv.VideoCapture(input_file, pix_fmt = 'bgr24') # bgr24 (default), rgb24, gray
      5 
      6 # codec available value None, 'h264', 'hevc'

1 frames
/usr/local/lib/python3.10/dist-packages/ffmpegcv/ffmpeg_reader.py in VideoReader(filename, codec, pix_fmt, crop_xywh, resize, resize_keepratio, resize_keepratioalign)
    125         resize_keepratioalign,
    126     ):
--> 127         assert os.path.exists(filename) and os.path.isfile(
    128             filename
    129         ), f"{filename} not exists"

AssertionError: http://195.196.36.242/mjpg/video.mjpg not exists

best regards
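
A hedged sketch of the likely intent: VideoCapture asserts os.path.exists (as the traceback shows), so a network source presumably needs the stream reader instead. This assumes VideoCaptureStream accepts HTTP as well as RTSP URLs and uses the usual (ret, frame) read API.

import ffmpegcv

url = 'http://195.196.36.242/mjpg/video.mjpg'
output_file = 'rtsp.mp4'

vidin = ffmpegcv.VideoCaptureStream(url)           # stream reader for network sources
vidout = ffmpegcv.VideoWriter(output_file, 'hevc', pix_fmt='bgr24')

for i in range(10):
    ret, frame = vidin.read()
    if not ret:
        break
    vidout.write(frame)

vidin.release()
vidout.release()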

get_info crashes when using a TransportStream container

Hey,

I started experimenting with ffmpegcv and have come across this issue: when I use a .ts (TransportStream) container, get_info crashes because the XML output from ffprobe does not contain the "nb_frames" field.
When using an mp4 container everything works as expected.

I should point out that I am using Ubuntu 18.04 with ffmpeg version 4.3.2 at the moment and have also tested with ffmpeg 6.0 running on Ubuntu 22.04.

Could you please add a check to only set the field in the video info if it exists in the xml output?

Thanks!

Parallel encoding in VideoWriterNV

I want to write multiple videos at the same time using VideoWriterNV. My plan is to use multiprocessing.
My concern is whether VideoWriterNV encoding works in multiple processes, since there is only one NVENC per GPU.
If so, will it achieve a proportional consumption time (i.e. 1+1+1≈3)?
Also, is there any built-in function in VideoWriterNV for parallel encoding that is more efficient than multiprocessing?
Thanks!
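
A minimal multiprocessing sketch, assuming each process creates its own VideoCaptureNV/VideoWriterNV pair. NVENC sessions share the same hardware block, so treat the 1+1+1≈3 expectation as something to benchmark rather than a guarantee; the filenames are placeholders.

import multiprocessing as mp
import ffmpegcv

def encode_one(vfile_in, vfile_out, gpu=0):
    """Read one video and re-encode it with NVENC inside its own process."""
    vidin = ffmpegcv.VideoCaptureNV(vfile_in, gpu=gpu)
    vidout = ffmpegcv.VideoWriterNV(vfile_out, 'hevc', vidin.fps, gpu=gpu)
    with vidin, vidout:
        for frame in vidin:
            vidout.write(frame)

if __name__ == '__main__':
    jobs = [('a.mp4', 'a_out.mp4'), ('b.mp4', 'b_out.mp4')]   # hypothetical files
    procs = [mp.Process(target=encode_one, args=job) for job in jobs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()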

VideoCaptureStream

Did a quick test

import ffmpegcv as fcv
cap = fcv.VideoCaptureStream("rtsp://admin:[email protected]:554/stream1", cv2.CAP_FFMPEG)

gives:

File "/usr/local/bin/.virtualenvs/catdetector/lib/python3.8/site-packages/ffmpegcv/__init__.py", line 398, in VideoCaptureStream
   return FFmpegReaderStream.VideoReader(
 File "/usr/local/bin/.virtualenvs/catdetector/lib/python3.8/site-packages/ffmpegcv/ffmpeg_reader_stream.py", line 20, in VideoReader
   assert pix_fmt in ["rgb24", "bgr24", "yuv420p", "nv12", "gray"]
AssertionError

The stream is 720 x 1280 color H.264 and works with cv2:
cap = cv2.VideoCapture("rtsp://admin:[email protected]:554/stream1")

VideoWriter is not as fast as expected, even slower

I ran the following comparison code with the Python line profiler (Python 3.8.5 / FFmpeg 4.2.7-0ubuntu0.1).

In this scenario, all images (389 .png files) are held in a list in RGB order, each at a high resolution of 2160x3840. Therefore, in make_video_ffmpegcv I set pix_fmt="rgb24". I also checked that GPU memory is allocated while make_video_ffmpegcv is running. Is there anything I missed in the function make_video_ffmpegcv?

Best,

[screenshots: line-profiler results of the comparison]

Broken pipe error

An ffmpegcv VideoWriter has been working fine 24/7 on 4 cameras.
Occasionally, once or twice a day, I get:

File ".../site-packages/ffmpegcv/ffmpeg_writer.py", line 70, in write
    self.process.stdin.write(img)
BrokenPipeError: [Errno 32] Broken pipe

No pipes of mine are involved in this operation, so it must be ffmpegcv's own pipe.

B
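
A hedged mitigation sketch for a long-running 24/7 loop: catch BrokenPipeError on each write and recreate the writer, so one dropped ffmpeg process does not stop the whole capture. The filename and fps below are placeholders.

import ffmpegcv

def write_with_retry(writer, frame, filename, fps):
    """Write one frame; on a broken pipe, recreate the writer and retry once.
    Returns the (possibly new) writer object."""
    try:
        writer.write(frame)
    except BrokenPipeError:
        try:
            writer.release()             # the old ffmpeg process is already gone
        except Exception:
            pass
        writer = ffmpegcv.VideoWriter(filename, None, fps)
        writer.write(frame)
    return writer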

ffmpegcv is not running.

Following the tutorial on PyPI to install ffmpegcv, I did as instructed, but it still doesn't work. The images below show ffmpeg 6.0.1 installed, ffmpegcv installed, and finally, an image showing an error when I try to run video capture and then save the video capture.

[screenshot: ffmpeg 6.0.1 installed]

[screenshot: ffmpegcv installed]

[screenshot: error when running video capture and saving]

How can I solve this issue?

Different behavior from OpenCV

Issue:

In H.264-encoded MTS files, ffmpegcv reads twice as many frames as cv2.
Other programs (including Premiere Pro and Windows) report the same results as cv2, not ffmpegcv.

Code

>>> import cv2
>>> import ffmpegcv
>>> capture_cv2 = cv2.VideoCapture("1-1.MTS")
>>> capture_cv2.get(cv2.CAP_PROP_FPS)
25.0
>>> capture_cv2.get(cv2.CAP_PROP_FRAME_COUNT)
15086.0
>>> capture_cv2.release()
>>> capture_ffmpegcv = ffmpegcv.VideoCapture("1-1.MTS")
>>> capture_ffmpegcv.fps
50.0
>>> capture_ffmpegcv.count
30134
>>> capture_ffmpegcv.release()

need help with 0 bytes

Hi, first of all thank you for this wonderful solution. I am trying to convert my existing cv2 writer to ffmpegcv, but the video is always 0 bytes no matter what I try. Can you please point out what I did wrong? Thank you in advance. I understand the image width and height can be determined automatically, but I just want to stick to the current setup for future changes. The cv2 version ran without problems.

frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape
# video = cv2.VideoWriter(video_name, fourcc, 15, (width, height))
video = ffmpegcv.VideoWriterNV(video_name, 'hevc_nvenc', 15, (width, height))  # 40
for image in images:
    img = cv2.imread(os.path.join(image_folder, image))
    video.write(img)
video.release()
