miguelgrinberg / flask-video-streaming
Supporting code for my article on video streaming with Flask.
Home Page: http://blog.miguelgrinberg.com/post/video-streaming-with-flask
License: MIT License
Hey Miguel,
I am working on a project that takes multiple camera streams and displays them in Flask when a user connects.
My issue is that when I try to generate the frames in Flask, I only get the first frame and then the browser stops rendering the JPEG stream.
I am able to confirm that correct data is reaching the Flask server from the camera clients by using mplayer.
The full open-source project is here on GitHub: https://github.com/devopsec/shomesec
I think the issue is that the data coming from the camera clients may need some extra parsing to properly frame each image before sending it to Flask?
My assumption was that this should work without parsing if I made the buffer size equal to the frame size, but please let me know if that is a bad assumption.
https://github.com/devopsec/shomesec/blob/master/httpserver/webserver.py#L38
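For what it's worth, a common cause of "one frame then nothing" is that the relayed bytes are never wrapped in multipart parts, so the browser has no boundary to split frames on. A minimal sketch of the framing the browser expects, assuming a hypothetical source that yields one complete JPEG at a time (partial TCP reads must be reassembled into whole JPEGs first):

```python
def mjpeg_parts(frames):
    """Wrap each complete JPEG byte string in a multipart/x-mixed-replace
    part, the framing an <img> tag needs to render a motion-JPEG stream."""
    for frame in frames:
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n'
               b'Content-Length: ' + str(len(frame)).encode() + b'\r\n'
               b'\r\n' + frame + b'\r\n')

# In Flask this generator would back:
# Response(mjpeg_parts(frame_source()),
#          mimetype='multipart/x-mixed-replace; boundary=frame')
```

Here `frame_source()` is a placeholder for whatever produces the camera clients' JPEGs.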
Hello
I am writing the preview portion of a video management system, and it works like a charm in Chrome with a standard <video> element.
Here is my stream class:
class Stream:
    run = False
    FNULL = open(os.devnull, 'w')
    overlay = ffmpeg.input("rlclogo.png")

    def __init__(self, camid):
        camUtil = CameraUtil()
        self.camid = camid
        self.streamurl = camUtil.get_stream_from_id(self.camid)['streamURL']
        print(self.streamurl)
        self.args = ffmpeg.input(self.streamurl)
        # vcodec="libvpx",
        # acodec="libvorbis",
        self.args = ffmpeg.output(self.args, "-",
                                  f="matroska",
                                  vcodec="copy",
                                  acodec="copy",
                                  blocksize="1024",
                                  # strftime="1",
                                  # segment_time="60",
                                  # segment_format="matroska"
                                  preset="ultrafast",
                                  metadata="title='test'"
                                  )
        self.args = ffmpeg.get_args(self.args)
        print(self.args)
        self.pipe = subprocess.Popen(['ffmpeg'] + self.args,
                                     stdout=subprocess.PIPE,)
                                     # stderr=self.FNULL)

    def dep_stream(self):
        def gen():
            try:
                f = self.pipe.stdout
                byte = f.read(1024)
                while byte:
                    yield byte
                    byte = f.read(1024)
            finally:
                self.pipe.kill()
        return Response(gen(), status=200,
                        mimetype='video/webm',
                        headers={'Access-Control-Allow-Origin': '*',
                                 "Content-Type": "video/webm",
                                 })
My html playback portion:
<video id="live_page_player" preload="auto" autoplay width="1280" height="720"> <source src="/stream/{{ camid }}" type='video/webm;codecs="vp8, vorbis"'/> YOUR BROWSER DOES NOT SUPPORT HTML5, WHAT YEAR ARE YOU FROM?! </video>
Did I do something dumb?! Or am I missing something? It works in Google Chrome like a charm.
(I am pasting only part of the screenshot, because people sitting in the cubicles would be visible, but you will get the idea.)
Help please!
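One thing worth checking: with vcodec="copy" the browser receives whatever codec the camera produces, which Chrome often tolerates inside a Matroska stream while other browsers do not. A hedged sketch of the argv such a pipeline might use if re-encoding to VP8/Vorbis instead (STREAM_URL is a placeholder; the flags are standard FFmpeg options, tune to taste):

```python
# Builds the ffmpeg command line for re-encoding an input stream to
# VP8/Vorbis WebM written to stdout. -c:v/-c:a select the codecs and
# "-deadline realtime" trades quality for encoding speed.
STREAM_URL = "rtsp://example.invalid/stream"  # placeholder input

def webm_args(stream_url):
    return (["ffmpeg", "-i", stream_url] +
            ["-f", "webm",
             "-c:v", "libvpx", "-deadline", "realtime",
             "-c:a", "libvorbis",
             "-"])  # mux to stdout, like the pipe in the class above

args = webm_args(STREAM_URL)
```

The resulting list would be handed to subprocess.Popen exactly like the copied-codec version.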
Hi miguelgrinberg, I have read your article about video streaming and I think I understand it. Lately I have been working on audio streaming. Unfortunately, it is difficult for me. I do not know what format the data should be in after I read it from the hardware. How can I pass it from the server to the browser, and how can I play it?
Can you give me an example of audio streaming, just like your flask-video-streaming?
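Not an authoritative answer, but one simple approach for uncompressed audio is to send a WAV header once and then raw PCM chunks; an <audio> element pointed at such a response will generally play it. A self-contained sketch, where the generated sine tone stands in for real hardware samples:

```python
import math
import struct

def wav_header(sample_rate=8000, bits=16, channels=1):
    """RIFF/WAVE header with a bogus maximal size, usable for open-ended
    streams where the total length is not known in advance."""
    byte_rate = sample_rate * channels * bits // 8
    block_align = channels * bits // 8
    return (b'RIFF' + struct.pack('<I', 0xFFFFFFFF) + b'WAVE' +
            b'fmt ' + struct.pack('<IHHIIHH', 16, 1, channels,
                                  sample_rate, byte_rate, block_align, bits) +
            b'data' + struct.pack('<I', 0xFFFFFFFF))

def audio_stream():
    """Yield the header once, then PCM chunks (here: 1 s of 440 Hz tone)."""
    yield wav_header()
    for n in range(8000):
        sample = int(20000 * math.sin(2 * math.pi * 440 * n / 8000))
        yield struct.pack('<h', sample)
```

In Flask this would back something like `Response(audio_stream(), mimetype='audio/x-wav')`, with `<audio src="/audio_feed">` in the page.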
I am working on a module to stream OpenCV frames to a web server using Flask. When I tested the script locally, the OpenCV frame rate and the Flask response rate were in sync.
The problem started when I deployed the script on a cloud server: the OpenCV FPS and the Flask response rate are no longer in sync, so I lose some frames in between.
After debugging the code for some time, I found that if I decrease the image size to 1280x720, Flask responds after about 1 second. If I decrease the size to 640x360, the response rate increases, but it is still not in sync. I tried restricting the FPS to 12 but was not able to make it sync.
Is there any way to synchronize the Flask response with the OpenCV frame rate at an image size of 1280x720 with 12-15 FPS?
import cv2
import time
import threading
from flask import Flask, Response

app = Flask(__name__)

@app.route("/")
def index():
    """Video streaming home page."""
    return "Hello World"

def run_video():
    # grab global references to the video stream, output frame, and
    # lock variables
    global cap, outputFrame, lock
    total = 0
    fpsLimit = 0.8  # Restricting the FPS (minimum seconds between reads).
    startTime = time.time()
    # loop over frames from the video stream
    while True:
        cap = cv2.VideoCapture('input_video/camera_1.mp4')
        while cap.isOpened():
            nowTime = time.time()
            if (nowTime - startTime) > fpsLimit:
                ret, img = cap.read()
                print("FPS rate : ", int(1 / (nowTime - startTime)))
                startTime = time.time()
                if ret == True:
                    total += 1
                    with lock:
                        outputFrame = img.copy()

def generate():
    global outputFrame, lock, prev_response_time
    while True:
        with lock:
            if outputFrame is None:
                continue
            (flag, encodedImage) = cv2.imencode(".jpg", outputFrame)
            # ensure the frame was successfully encoded
            if not flag:
                continue
        current_time = time.time()
        response_time = current_time - prev_response_time
        print("display_response_time", int(1 / (response_time)))
        prev_response_time = current_time
        # yield the output frame in the byte format
        yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
               bytearray(encodedImage) + b'\r\n')

@app.route("/video_feed")
def video_feed():
    return Response(generate(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

# check to see if this is the main thread of execution
lock = threading.Lock()
outputFrame = None  # no frame available yet
prev_response_time = 0

if __name__ == '__main__':
    thread = threading.Thread(target=run_video)
    thread.daemon = True
    thread.start()
    # start the flask app
    app.run(host='0.0.0.0', threaded=True, port=5000)
    # release the video stream pointer
    # cap.stop()
https://stackoverflow.com/questions/66381734/can-we-sync-flask-streaming-response-with-opencv-fps
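A generic pacing technique that can help in setups like this: drive the generator from absolute deadlines, so the response rate is bounded to a target FPS regardless of how fast frames are produced or consumed. A sketch (`paced` is a hypothetical helper, not part of the repo):

```python
import time

def paced(frames, fps=12.0):
    """Yield frames no faster than `fps` using deadline-based sleeps,
    so time spent encoding is subtracted from the wait instead of
    being added on top of it."""
    interval = 1.0 / fps
    deadline = time.monotonic()
    for frame in frames:
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        deadline = time.monotonic() + interval
        yield frame
```

Wrapping the multipart generator in `paced(generate(), fps=12)` would cap delivery at 12 FPS; it cannot create frames the network has already dropped, but it keeps producer and consumer rates bounded to the same target.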
First off, I just wanted to thank you for creating this project!
My repo is a bit different as I needed to be able to stream to Android clients and wanted to play around with flask-RESTful, as seen here: https://github.com/robsmall/flask-raspi-video-streamer. I have been playing around with this repo and am looking to record the frames to a video file on disk while keeping a similar architecture and continuing to stream the video.
I was looking at OpenCV and Pillow to achieve this but have been unable to do so. I was curious if you have any pointers on how to write the frame objects to a video file on disk while still streaming.
Thanks in advance!
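One possible direction, sketched with the stdlib only: tee the generator so every frame is written to disk as it is streamed. (For a single video file, the usual tool is cv2.VideoWriter fed with decoded frames; this sketch just stores the JPEGs individually, which keeps the streaming path untouched.)

```python
import os

def record_and_stream(frames, out_dir):
    """Yield each JPEG frame unchanged while also writing it to disk as
    a numbered .jpg file, so recording piggybacks on the existing
    streaming generator."""
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(frames):
        with open(os.path.join(out_dir, 'frame_%06d.jpg' % i), 'wb') as f:
            f.write(frame)
        yield frame
```

In the app this would wrap the existing generator, e.g. `gen(record_and_stream(camera_frames, '/tmp/recording'))`, where `camera_frames` is whatever currently feeds `gen()`.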
I'm having issues with the library on a Pi, sometimes the camera feed freezes on a frame.
The only thing I've changed from the file is
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)  # <-- New Line
    # let camera warm up
    time.sleep(2)
What could cause that? Any suggestions on what I should try to fix it?
Thanks.
Hi,
I'm trying to do something similar to your project but using IIS - it works using just Flask, but it does not work in combination with IIS.
I've posted a question here: https://stackoverflow.com/questions/62000712/flask-iis-motion-jpeg
Thank you!
Miguel,
Is there a way to rotate the image? I read the blog where you indicate to look in camera_pi.py, but had no luck finding a way to do it.
Dear author,
Thanks for sharing the code! I have a question: how can I control the speed of the video feed shown in the web page?
I want to use the HTML video controls to display the feed, in order to get the full-screen feature.
This is the HTML I use:
<head></head>
<body>
<http>
<video controls="" autoplay="">
<source src="http://192.168.1.8:5001/video_feed" >
</video>
</http>
</body>
but it doesn't work; nothing happens when I open the web page.
If my approach is wrong, how can I get the full-screen functionality?
How can I send metadata in the yield function in Flask streaming?
I tried the code below.
As an example I used the length of the image as additional info; I will add other info as well. But I can't retrieve any of it once I put the server's link in the img tag in JavaScript. Can I send string information along with the frames in Flask streaming and retrieve it in JavaScript?
yield (b'--frame\r\n'
       b'Content-Type: image/jpeg\r\n'
       b'Content-Length: ' + str(len(img)).encode() + b'\r\n'
       b'\r\n' + img + b'\r\n')
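The part headers in that yield are legal MIME, but an <img> tag never exposes them to JavaScript; the browser consumes the multipart framing internally. To read them, the page would have to fetch() the stream and parse it itself (or the metadata could be published on a separate endpoint). A sketch of the parsing, shown in Python for illustration:

```python
def parse_part(chunk):
    """Split one multipart part into (headers dict, body bytes) -- the
    information an <img> tag silently discards."""
    head, _, rest = chunk.partition(b'\r\n\r\n')
    headers = {}
    for line in head.split(b'\r\n'):
        if b': ' in line:
            k, v = line.split(b': ', 1)
            headers[k.decode()] = v.decode()
    body = rest[:-2] if rest.endswith(b'\r\n') else rest
    return headers, body
```

A JavaScript consumer would do the equivalent splitting on the bytes from `fetch('/video_feed')` via a ReadableStream reader.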
Hello, I'm new to Flask and HTML. I am trying to encode the frame into base64 and display it on the web page, but it does not work when I use this change in index.html:
<img src="data:;base64, {{ url_for('video_feed') }}">
nor this:
def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        frame = bytes(frame, encoding='utf8')
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
Where should I modify the code?
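A data: URI needs the actual base64-encoded image bytes, whereas `{{ url_for('video_feed') }}` inserts a URL string, so the first variant cannot work as written. A sketch of building a valid data URI for a single frame (note a data URI cannot carry the live multipart stream; for live video, keep `src` pointing at the /video_feed route):

```python
import base64

def to_data_uri(jpeg_bytes):
    """Build a data: URI that an <img src> can display directly.
    This embeds one still frame; it is not a streaming mechanism."""
    return 'data:image/jpeg;base64,' + base64.b64encode(jpeg_bytes).decode('ascii')
```

A route could return `render_template('index.html', img=to_data_uri(frame))` with `<img src="{{ img }}">` in the template.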
I want to use yolo to call multiple cameras at the same time for identification. Please help me.
Hi @miguelgrinberg ,
It seems that the code does not work with IE (but works fine with Chrome and Firefox). Did you check that?
Bests,
-Thanh
Hi,
I am building a real time face recognizer wherein I am streaming an openCV webcam frame (image with rectangle around face and name of an individual printed on jpeg) to a web client. I would like to yield a string along with the existing jpeg in the gen() function and capture it in index.html. I understand I could yield the string alongside the jpeg, but not sure how to handle it in front end. I want to stream the video as well as print the name of the person on the front end. Could you please help with this?
Thanks,
AK
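One pattern for this (my suggestion, not the repo's mechanism) is to publish the recognized names on a second route formatted as Server-Sent Events and consume it in the page with `new EventSource(...)`, while the <img> tag keeps showing the MJPEG stream. The server-side formatting is tiny:

```python
import json

def name_events(names):
    """Format recognition results as Server-Sent Events. In Flask this
    generator would back a second route, e.g.
    Response(name_events(...), mimetype='text/event-stream')."""
    for name in names:
        yield 'data: %s\n\n' % json.dumps({'name': name})
```

In the page, `new EventSource('/names').onmessage = e => label.textContent = JSON.parse(e.data).name;` would update the caption next to the video, with `/names` being a hypothetical route name.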
Hello,
I was able to run your default code base a while back and it showed the video in a browser perfectly. A couple of days ago the videos stopped playing in all of the browsers on my computer. I thought I must have messed up the code somehow, but the stream plays correctly on my phone browser. Are you aware of any new updates to browsers that might be blocking your stream? I also pulled down your latest code base and ran it, to no avail on my PC browsers, but it indeed works on my phone browser. Thank you for your time.
Dan
I want to pass an additional argument to get frames
like:
while True:
    frame = camera.get_frame(arg1)
    yield (b'--frame\r\n'
           b'Content-Type: image/jpeg\r\n\r\n' + dummy + b'\r\n')
But when I do this, things break.
How can I achieve this?
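The usual trick is to pass the argument into gen() and let the generator close over it; note that the repo's get_frame() takes no arguments, so the camera class must be changed to accept one as well. A sketch with a hypothetical get_frame(arg1):

```python
def gen(camera, arg1):
    """Streaming generator that forwards an extra argument to the
    camera; the matching route would call
    Response(gen(Camera(), arg1), mimetype=...)."""
    while True:
        frame = camera.get_frame(arg1)
        if frame is None:  # lets stub cameras signal end-of-stream
            return
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
```

Since BaseCamera serves all clients from one background thread, a per-request argument only changes the output if the camera implementation actually uses it per call.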
When I run the opencv-camera app with a higher Flask version like 1.0.2, the "threaded" parameter can be omitted, since multi-threading is now enabled by default
(line 986, options.setdefault("threaded", True),
in the newest Flask app.py).
In addition, something is deprecated in MarkupSafe; do we need to upgrade?
(Sorry, my English is not good~)
flask-video-streaming/base_camera.py
Line 89 in 26e0d89
I'm confused. How is the cls parameter passed in?
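That line sits inside a `@classmethod`, so Python binds `cls` automatically when the method is looked up on a class or an instance; the thread target `self._thread` therefore already carries the actual class, which is how a subclass's `frames()` override gets called. A tiny demonstration of the same binding:

```python
class Base:
    @classmethod
    def _thread(cls):
        # cls is filled in automatically; looking _thread up on Camera
        # (or a Camera instance) binds cls=Camera, so the subclass's
        # frames() is the one invoked here.
        return cls.frames()

    @staticmethod
    def frames():
        raise RuntimeError('Must be implemented by subclasses.')

class Camera(Base):
    @staticmethod
    def frames():
        return 'subclass frames'

result = Camera._thread()
```

The repo passes `threading.Thread(target=self._thread)`; that bound classmethod carries `cls` with it, so the thread needs no explicit class argument.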
I'm trying to use this app with a SIM808 module to connect to the Internet (max 85.6 kbps down/up) and with ngrok to forward HTTP. If I launch app.py and go to the ngrok URL, I can see one frame (resolution 320x240), but then no more frames are sent, and on the console I see these error messages:
Error on request:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 270, in run_wsgi
    execute(self.server.app)
  File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 261, in execute
    write(data)
  File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 242, in write
    self.wfile.write(data)
IOError: [Errno 104] Connection reset by peer
What could be the problem? Too slow an internet connection? Can I get and set fewer frames per second?
Thanks...
My project is about an algorithm. I want to import different streams at the same time and play them in the same web page, but I don't know how to solve this.
When one URL is clicked, it should be imported into the algorithm program and then played in the web page!
Hello @miguelgrinberg, thanks for your great tutorial on video streaming with flask. I've been working on a web app that streams some live video (from a webcam). The stream works great but when I need to call other API endpoints it doesn't work. It seems that my flask app is busy with the streaming endpoint and doesn't process any other request. I've done some research, tried disabling debug mode and using gunicorn and gevent but nothing worked so far. Do you have any idea how I can solve this?
Thanks!
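For what it's worth, a commonly used deployment for this situation is a single async worker under gunicorn, so the long-lived streaming response no longer monopolizes the only request handler. A sketch, assuming gunicorn and gevent are installed (note the background camera thread may need the async-friendly adjustments the article describes, so treat this as a starting point):

```shell
# One gevent worker multiplexes the streaming response and other
# API requests cooperatively; with the plain dev server, the
# equivalent is app.run(threaded=True).
pip install gunicorn gevent
gunicorn --worker-class gevent --workers 1 --bind 0.0.0.0:5000 app:app
```

`app:app` assumes the Flask instance is named `app` inside app.py; adjust to your module layout.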
I've modified this code by filling my own Python Queue with data locally from a separate thread and implementing my own "frames()" method as shown below. I've verified that a good JPEG is being returned by the yield operator by saving a file and viewing it.
The problem is that when I visit the Flask-served webpage in Chrome / Firefox / Safari, no image is rendered - even though the browser debugging tools show large data transfers at a high rate and, if I enable them, the print statements before my yield operator show data being returned at a high rate with the proper size.
The only time an image shows up in the browser is when I kill my Python Flask server from the command line. When I kill it, a good-looking image immediately shows up in the browser. What am I doing wrong? How come the images are not displayed until I kill the server? Thanks.
from base_camera import BaseCamera

# Include global includes
from Globals import globals

# Convert to proper data type
import numpy as np

# Delay
import time

class Camera(BaseCamera):
    @staticmethod
    def frames():
        frameCounter = 1
        while True:
            if globals.VLQueueWebServer.qsize() > 3:
                #print "VL Size: %s"%(globals.VLQueueWebServer.qsize())
                frame = globals.VLQueueWebServer.get()
                if len(frame) > 0:
                    #print "VL Frame Size: %s"%(len(frame))
                    frame = np.fromstring(frame, dtype=np.uint8)
                    if frameCounter == 50:
                        outFile = open("/tmp/output-vl-" + str(frameCounter) + ".jpg", 'wb')
                        outFile.write(frame)
                        outFile.close()
                    yield frame.tobytes()
                    #time.sleep(1)
                    frameCounter = frameCounter + 1
            else:
                time.sleep(0.05)
Here is the main app:
@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')

def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/vl_video_feed')
def vl_video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(VLCam()),
                    mimetype='multipart/x-mixed-replace; boundary=vlframe')

# Serve the Webpage
app.run(host='0.0.0.0', threaded=True)
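One detail always worth verifying in a handler like this: the token after the leading `--` in each yielded part must exactly match the `boundary=` parameter declared in the mimetype, otherwise browsers buffer the data without rendering anything. A small checker to illustrate the rule:

```python
def check_boundary(part, mimetype):
    """Return True when a multipart part's boundary token matches the
    boundary= parameter declared in the Response mimetype."""
    token = part.split(b'\r\n', 1)[0].lstrip(b'-').decode()
    declared = mimetype.split('boundary=')[1]
    return token == declared

ok = check_boundary(b'--frame\r\nContent-Type: image/jpeg\r\n\r\n...',
                    'multipart/x-mixed-replace; boundary=frame')
```

Applied to the code above, the parts say `--frame` while the route declares `boundary=vlframe`, which this check would flag.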
Hi,
I have tried to run your project on a Raspberry Pi with the Pi Camera, but when I try to open it in the Chromium browser I don't see anything; otherwise it runs in emulated mode. One question: should I uncomment the line "from camera_pi import Camera" in app.py when using the Pi Camera?
Hello,
First, thanks for your great work. It worked like a charm, really easy to set up and quite simple to follow ๐
However, I have a question - I would like to read the video stream that is produced by this Flask app using OpenCV.
I usually would use:
VideoStream('http://127.0.0.1:5000/video_feed').start()
BUT this throws an error:
[mpjpeg @ 0x7f9494206400] Expected boundary '--' not found, instead found a line of 127 bytes
(the same message repeats many times)
as (I think) there isn't a specific file that VideoStream needs to start reading.
I would appreciate it a lot if you throw any ideas my way ๐
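If OpenCV's mpjpeg reader keeps rejecting the framing, one fallback is to parse the multipart stream manually and decode each JPEG yourself (e.g. with cv2.imdecode on the extracted bytes). A sketch of the byte-level extraction, using the repo's `--frame` boundary:

```python
def extract_jpegs(data, boundary=b'--frame'):
    """Pull complete JPEG bodies out of multipart/x-mixed-replace bytes,
    splitting on the boundary and keeping what follows each blank line."""
    frames = []
    for part in data.split(boundary):
        head, sep, body = part.partition(b'\r\n\r\n')
        if sep:
            frames.append(body.rstrip(b'\r\n'))
    return frames
```

Fed from a chunked HTTP read (urllib or a socket), this yields raw JPEG bytes that a VideoStream-style wrapper could decode one by one.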
Hi Miguel, firstly I wanted to thank you for all the great work you've done. I'm a complete noob at programming, so I apologize in advance for my question. I'm trying to combine the capabilities of your "flask-video-streaming" and your "socketio-examples/audio" into one application. My end goal is to be able to stream video from the server (a Raspberry Pi) to the browser, as well as stream voice audio from the browser to the server. I have combined them (see my forked repo here) into one app, but I think there's some process blocking going on because the webpage does not fully load. After some troubleshooting, I think my app.py gets hung up on the video streaming generator function (code line # 77). Not sure what to do. Am I just combining these two code repos incorrectly?
@miguelgrinberg Hi, when I download your code and run app.py, it does not display any images on the website, and an error occurred as shown below:
Exception happened during processing of request from ('10.24.80.10', 49824)
Traceback (most recent call last):
  File "/home/macwg/ssd/anaconda2/lib/python2.7/SocketServer.py", line 596, in process_request_thread
    self.finish_request(request, client_address)
  File "/home/macwg/ssd/anaconda2/lib/python2.7/SocketServer.py", line 331, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/home/macwg/ssd/anaconda2/lib/python2.7/SocketServer.py", line 654, in __init__
    self.finish()
  File "/home/macwg/ssd/anaconda2/lib/python2.7/SocketServer.py", line 713, in finish
    self.wfile.close()
  File "/home/macwg/ssd/anaconda2/lib/python2.7/socket.py", line 283, in close
    self.flush()
  File "/home/macwg/ssd/anaconda2/lib/python2.7/socket.py", line 307, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
Can you tell me why, and how to solve it? Thanks, best regards.
This is working with my facial recognition; the only problem is that I need it to run over SSL/TLS. How do I run this over https instead of http?
I am using this code where I can display multiple streams using urls like localhost:5000/1 and localhost:5000/2
I used this repository for the code https://github.com/pambot/ozymandias/blob/master/web/ozy_app.py
I am getting this error
return next(self._iterator)
ValueError: generator already executing
This is the code
def video_generator(topic):
    """Video streaming generator function."""
    consumer = KafkaConsumer('flask',
                             bootstrap_servers='localhost:9092',
                             auto_offset_reset='latest',
                             fetch_max_bytes=15728640,
                             max_partition_fetch_bytes=15728640,
                             group_id=topic)
    for msg in consumer:
        if msg.key == topic:
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + msg.value + b'\r\n')
        time.sleep(0.1)

@app.route('/video/<topic>')
def video(topic):
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(video_generator(topic),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
I tried the tip from this blog http://anandology.com/blog/using-iterators-and-generators/
which said
The only way to fix it is by wrapping it in an iterator and have a lock that allows only one thread to call next method of the generator.
class threadsafe_iter:
    """Takes an iterator/generator and makes it thread-safe by
    serializing calls to the next method of the given iterator/generator.
    """
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):  # the blog's original was Python 2 and defined next()
        with self.lock:
            return next(self.it)

    next = __next__  # keep Python 2 compatibility
Now you can take any iterator or generator and make it thread-safe by wrapping it with threadsafe_iter.
# thread unsafe generator
c1 = count()
# now it is thread-safe
c1 = threadsafe_iter(c1)
I tried using it like this
@app.route('/video/<topic>')
def video(topic):
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(threadsafe_iter(video_generator(topic)),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
Thank you.
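"generator already executing" is what happens when several server threads iterate one shared generator; rather than serializing access to it, an alternative closer to this repo's design is a single consumer thread that publishes the latest frame, with a fresh generator per request. A sketch (`LatestFrame` is hypothetical; the Kafka consumer loop would call `publish()`):

```python
import threading

class LatestFrame:
    """One consumer thread updates `frame`; any number of per-request
    generators wait on the condition and read it. No generator is ever
    shared between threads, so the ValueError cannot occur."""
    def __init__(self):
        self.cond = threading.Condition()
        self.frame = None

    def publish(self, frame):
        # called from the single Kafka-consuming thread
        with self.cond:
            self.frame = frame
            self.cond.notify_all()

    def stream(self, limit=None, timeout=0.1):
        # each HTTP request gets its own generator instance
        served = 0
        while limit is None or served < limit:
            with self.cond:
                self.cond.wait(timeout=timeout)
                frame = self.frame
            if frame is not None:
                yield frame
                served += 1
```

The route would wrap `latest.stream()` in the usual multipart framing, one `stream()` call per request.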
Hey!
First of all I have to mention how greatly I appreciated the article - being a Python newbie it helped me a great deal to read your code which is well structured and super easy to understand.
And my question: I was working on a way to implement rotating the camera feed using different endpoints.
def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/camera/<regex("([0-9]*)"):keys>')
def video_feed(keys):
    """Video streaming route. Put this in the src attribute of an img tag."""
    if keys != '':
        return Response(gen(Camera(int(keys))), mimetype='multipart/x-mixed-replace; boundary=frame')
    else:
        return Response(gen(Camera(0)), mimetype='multipart/x-mixed-replace; boundary=frame')
The BaseCamera class:
class BaseCamera:
    thread = None  # background thread that reads frames from camera
    frame = None  # current frame is stored here by background thread
    last_access = 0  # time of last client access to the camera
    event = CameraEvent()
    angle = 0

    def __init__(self, angle):
        self.update_angle(angle)

    def update_angle(self, angle):
        BaseCamera.thread = None
        self.angle = angle
        self.start_thread()

    def start_thread(self):
        """Start the background camera thread if it isn't running yet."""
        if BaseCamera.thread is None:
            BaseCamera.last_access = time.time()

            # start background frame thread
            BaseCamera.thread = threading.Thread(target=self._thread, args=[self.angle])
            BaseCamera.thread.start()

            # wait until frames are available
            while self.get_frame() is None:
                time.sleep(0)

    @staticmethod
    def get_frame():
        """Return the current camera frame."""
        BaseCamera.last_access = time.time()

        # wait for a signal from the camera thread
        BaseCamera.event.wait()
        BaseCamera.event.clear()
        return BaseCamera.frame

    @staticmethod
    def frames(angle):
        """Generator that returns frames from the camera.

        :param angle: rotation angle
        """
        raise RuntimeError('Must be implemented by subclasses.')

    @classmethod
    def _thread(cls, angle):
        """Camera background thread."""
        print('Starting camera thread.')
        frames_iterator = cls.frames(angle)
        for frame in frames_iterator:
            BaseCamera.frame = frame
            BaseCamera.event.set()  # send signal to clients
            time.sleep(0)

            # if there hasn't been any clients asking for frames in
            # the last 10 seconds then stop the thread
            if time.time() - BaseCamera.last_access > 10:
                frames_iterator.close()
                print('Stopping camera thread due to inactivity.')
                break
        BaseCamera.thread = None
Now the modified OpenCV implementation:
class Camera(BaseCamera):
    video_source = 0

    @staticmethod
    def set_video_source(source):
        Camera.video_source = source

    @staticmethod
    def frames(angle):
        camera = cv2.VideoCapture(Camera.video_source)
        if not camera.isOpened():
            raise RuntimeError('Could not start camera.')

        while True:
            # read current frame
            _, img = camera.read()
            img = imutils.rotate(img, angle)

            # encode as a jpeg image and return it
            yield cv2.imencode('.jpg', img)[1].tobytes()
Rotating in itself works fine; however, my problem is that if I open the stream with a given endpoint (say /camera/) and then open it with another one (like /camera/90), the camera feed is not released (i.e. frames_iterator.close() isn't invoked on the already-running thread). I am assuming this has to do with the fact that on refresh/navigate I get an exception for [Errno 32] Broken pipe (please let me know if you think the call stack would help; the exception is coming from the SocketServer, not from my code).
I'm not sure I understood the part about the limitations in your article: I used SocketIO for Flask (utilizing eventlet), so I was hoping it would handle the life cycle of the resources locked by one client properly, but I still have the issue :(
Do you have any suggestions what should I look at?
Cheers!
Hi Miguel
Great example - thanks.
I implemented a wrapper around my own camera like you described and everything worked more or less out of the box so that was very cool.
Besides visualizing the jpeg stream on the site I would like to calculate a value from each frame (like the sum of all values) and show that in a live updating graph next to the video.
I got the live graph working using functionality from bokeh (from anaconda) with hardcoded dummy values, my only problem right now is how do I get access to the framestream from a different handler than the def video_feed()?
Do you have any ideas regarding this?
Below is the central code.
The live graph works by long polling on the /data handler.
So somehow I need to intercept frames in the /data handler and calculate the needed value.
x = list(np.arange(0, 6, 0.1))
y1 = [sin(xx) + random() for xx in x]
y2 = [sin(xx) + random() for xx in x]

@app.route('/data', methods=['GET', 'OPTIONS', 'POST'])
def hello_world():
    x.append(x[-1] + 0.1)
    y1.append(sin(x[-1]) + random())
    return jsonify(x=x[-500:],
                   y1=y1[-500:],
                   )

@app.route('/')
def index():
    """Video streaming home page."""
    source = AjaxDataSource(data=dict(x=[], y1=[]),
                            data_url='/data',
                            polling_interval=100)
    p = figure(toolbar_location="above", sizing_mode="scale_width")
    p.toolbar.logo = None
    p.line(x='x', y='y1', source=source)
    p.x_range.follow = "end"
    p.x_range.follow_interval = 10
    script1, div1 = components(p)
    return render_template("dashboard.html",
                           div1=div1, script1=script1
                           )

def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', threaded=True)
Hi, I was trying to write a camera class to get images from a Basler camera.
The code seems to run fine for a while. However sometimes I get the following error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/dreyer/git/flask-video-streaming/base_camera.py", line 93, in _thread
    for frame in frames_iterator:
  File "/home/dreyer/git/flask-video-streaming/camera_basler.py", line 19, in frames
    image = converter.Convert(grabResult)
  File "/usr/local/lib/python2.7/dist-packages/pypylon/pylon.py", line 6311, in Convert
    return _pylon.ImageFormatConverter_Convert(self, *args)
InvalidArgumentException: Cannot convert image. The passed source image is invalid. : InvalidArgumentException thrown (file 'ImageFormatConverter.cpp', line 77)
Any ideas what might be wrong? Thanks.
Thanks for publishing this code. It works great and I'm able to stream both a Pi camera or a USB camera. However, I'm having difficulty getting more than one to stream at the same time. I'm running the Flask app in apache2 using the WSGI module.
I've tried calling the same route with different parameters to access the different devices, and this works, but not both at the same time. The first call starts a stream, then the next only displays the first cam's images, so I have two of the same stream on the same page.
@blueprint.route('/video_feed/<camera_type>/<device>')
def video_feed(camera_type, device):
    """Video streaming route. Put this in the src attribute of an img tag."""
    camera_stream = import_module('mycodo.mycodo_flask.camera.camera_' + camera_type).Camera
    return Response(gen(camera_stream(opencv_device=int(device))),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
<img src="/video_feed/opencv/0">
<img src="/video_feed/picamera/0">
I then tried making two different routes, but that produced the same result as the first method.
@blueprint.route('/video_opencv/<device>')
def video_opencv(device):
    """Video streaming route. Put this in the src attribute of an img tag."""
    from mycodo.mycodo_flask.camera.camera_opencv import Camera
    Camera.set_video_source(int(device))
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

@blueprint.route('/video_picamera')
def video_picamera():
    """Video streaming route. Put this in the src attribute of an img tag."""
    from mycodo.mycodo_flask.camera.camera_picamera import Camera
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
<img src="/video_opencv/0">
<img src="/video_picamera">
Could you help me understand what I'm missing in trying to get this to work? Thanks.
It looks like a bunch of stuff is handled at the class level for Camera (thread management, video source, etc). How would one adapt this project to support more than one camera in the same web page?
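Since BaseCamera keeps thread, frame, and event at class level, all clients of one class share a single stream; supporting several devices means keying that state per source. A minimal sketch of the idea (`CameraHub` is hypothetical, and `reader` stands in for a real per-device capture loop such as OpenCV on a device index):

```python
import threading
import time

class CameraHub:
    """One background reader thread and one latest-frame slot per
    source, instead of BaseCamera's single class-level thread."""
    def __init__(self, reader):
        self.reader = reader          # callable: source -> frame iterator
        self.streams = {}             # source -> {'frame': ..., 'lock': ...}

    def _loop(self, source):
        slot = self.streams[source]
        for frame in self.reader(source):
            with slot['lock']:
                slot['frame'] = frame

    def get_frame(self, source):
        if source not in self.streams:
            self.streams[source] = {'frame': None, 'lock': threading.Lock()}
            t = threading.Thread(target=self._loop, args=(source,), daemon=True)
            t.start()
        while True:                   # spin until the reader delivers
            with self.streams[source]['lock']:
                if self.streams[source]['frame'] is not None:
                    return self.streams[source]['frame']
            time.sleep(0.01)
```

Each route (or route parameter) would then call `hub.get_frame('/dev/video0')`-style lookups, so two <img> tags backed by different sources get independent streams. A production version would also want per-source inactivity shutdown, as BaseCamera does.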
Hi Miguel,
I've been wanting to add an FPS counter to the camera stream. Do you perhaps know where I should start modifying the code that makes the most sense?
I was thinking of starting with the frames_iterator loop for _thread in base_camera.py that would calculate the rate of frames being sent to a client, but I'm unsure if that's the best way to go about it.
I'm currently streaming using OpenCV, so I've also been thinking of perhaps modifying the code in camera_opencv.py instead. Do you have any advice on how I should approach this?
Many thanks!
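One non-invasive option is to wrap whichever iterator you care about (the frames_iterator loop in base_camera.py to measure capture, or gen() in app.py to measure per-client delivery) so it reports a sliding-window FPS alongside each frame. A sketch (`with_fps` is hypothetical):

```python
import time
from collections import deque

def with_fps(frames, window=30):
    """Yield (frame, fps) pairs, where fps is computed over a sliding
    window of recent frame timestamps; wrapping an iterator this way
    leaves the camera classes untouched."""
    stamps = deque(maxlen=window)
    for frame in frames:
        stamps.append(time.monotonic())
        if len(stamps) >= 2:
            fps = (len(stamps) - 1) / max(stamps[-1] - stamps[0], 1e-9)
        else:
            fps = 0.0
        yield frame, fps
```

The fps value could then be drawn onto the JPEG before encoding, or published on a side endpoint.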
Hello,
I am trying to create a UI for changing basic camera settings like resolution, hflip, etc., but I am having trouble correctly setting those properties. I have tried the following in camera_pi.py, but it just leads the index.html page to keep loading forever without actually returning a feed. Any idea as to where/how I should apply those PiCamera settings?
Thanks again!
class Camera(BaseCamera):
    @staticmethod
    def frames():
        with Camera.set_res(1920, 1080) as camera:
            # let camera warm up
            time.sleep(2)

            stream = io.BytesIO()
            for _ in camera.capture_continuous(stream, 'jpeg',
                                               use_video_port=True):
                # return current frame
                stream.seek(0)
                yield stream.read()

                # reset stream for next frame
                stream.seek(0)
                stream.truncate()

    @staticmethod
    def set_res(x, y):
        with picamera.PiCamera() as camera:
            camera.resolution(x, y)
            return camera
Thank you for the code. I really appreciate it.
I was experimenting with a kind of video streaming with a GUI on the web, and I am wondering if I can make a GUI with a 'stop' button to stop the streaming and a 'play' button to resume it.
the code below is added to the Camera class. (camera_opencv.py)
@staticmethod
def cam_stop():
    Camera.bRunning = False

@staticmethod
def cam_start(cls):
    Camera.bRunning = True
and the bRunning flag is checked in the get_frame function in base_camera.py:
def get_frame(self):
    """Return the current camera frame."""
    if self.bRunning is False:
        return None
    BaseCamera.last_access = time.time()

    # wait for a signal from the camera thread
    # print("wait")
    BaseCamera.event.wait()
    # print("clear")
    BaseCamera.event.clear()
    return BaseCamera.frame
but it didn't work.
Is it possible to toggle between playing and stopping the stream?
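One likely reason the flag approach fails: when get_frame() returns None, the `b'...' + frame` concatenation in gen() raises TypeError and the response dies, after which the browser shows nothing. An alternative sketch that parks the generator on an Event instead, so the connection stays open and the picture simply freezes on the last frame until play:

```python
import threading

playing = threading.Event()
playing.set()  # start in the "play" state

def gen(camera):
    """Generator that blocks while paused; `camera` is any object
    with a get_frame() method (None signals end-of-stream here)."""
    while True:
        playing.wait()  # parked here while stopped
        frame = camera.get_frame()
        if frame is None:
            return
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

def stop():  # e.g. behind a /stop route called by the button
    playing.clear()

def play():  # e.g. behind a /play route
    playing.set()
```

The /stop and /play route names are assumptions; any pair of endpoints toggling the Event would do.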
How can I stream RTSP with OpenCV?
I read that you can replace 0 (the video source) with a file or the URL of a video.
I edited the following line from camera_opencv.py:
video_source = 0
by
video_source = 'rtsp://video.crearchile.com:8080/live/canal2.mp4'
and I got this:
Starting camera thread.
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/home/nboettcher/Downloads/rtsp/flask-video-streaming/base_camera.py", line 93, in _thread
    for frame in frames_iterator:
  File "/home/nboettcher/Downloads/rtsp/flask-video-streaming/camera_opencv.py", line 17, in frames
    raise RuntimeError('Could not start camera.')
RuntimeError: Could not start camera.
I am trying to add a button for saving a snapshot. Forgive my ignorance here; despite programming experience, this is actually my first Python project.
I would like to save a snapshot, preferably without refreshing the page. Any guidance would be great. I appreciate this great project and your work on it. Here's all the info and code:
I am getting the following error:
As a note, I am running the app multi-threaded, but I guess the mmal resource being held by the stream is a separate issue.
So far, here's my simple approach to test this. I have added a link with a route that runs the code:
<html>
  <head>
    <title>Video Streaming Demonstration</title>
  </head>
  <body>
    <h1>Video Streaming Demonstration</h1>
    <a href="{{ url_for('take_pic') }}">Turn on</a><br/>
    <img src="{{ url_for('video_feed') }}">
  </body>
</html>
and here's my app.py
from flask import Flask, render_template, Response
from camera_pi import Camera
from picamera import PiCamera

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route('/take_pic')
def take_pic():
    with PiCamera() as camera:
        camera.capture('image.jpg')
    return '', 204  # no content

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', threaded=True)
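The mmal error most likely comes from opening a second PiCamera in take_pic() while the streaming thread still holds the first one. One way around it is to save the frame the stream already produces instead of opening another camera. A minimal sketch, with a stub standing in for the real Camera class (StubCamera and its canned JPEG bytes are assumptions, not project code):

```python
class StubCamera:
    """Stand-in for the streaming Camera class; in the real app,
    get_frame() returns the latest JPEG bytes from the camera thread."""

    def get_frame(self):
        # canned bytes with a JPEG start/end marker, for illustration only
        return b'\xff\xd8fake-jpeg-bytes\xff\xd9'


def take_pic(camera, path='image.jpg'):
    # reuse the most recent streamed frame -- no second PiCamera needed,
    # so there is no conflict over the mmal resource
    frame = camera.get_frame()
    with open(path, 'wb') as f:
        f.write(frame)
    return path


saved = take_pic(StubCamera(), path='snapshot.jpg')
```

In the real app the route would call take_pic(camera) with the same Camera instance the /video_feed route uses.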
I need some help!
I am trying to stream multiple cameras simultaneously.
Before streaming, I collect data from the user, and only on the third page do I stream the cameras. There, only one camera stream shows; the other does not, and the page keeps loading continuously.
code:
from flask import Flask, render_template, Response, jsonify, request
import os
import cv2
import sys
import numpy

app = Flask(__name__)

@app.route("/")
def hello():
    return render_template('form.html')

@app.route("/signin")
def signin():
    global EmployeeId
    #global cam1, cam2
    EmployeeId = request.args.get('text', '')
    #camera1 = cv2.VideoCapture(0)
    #messge1 = 'Warning: unable to open video source', 0
    #camera2 = cv2.VideoCapture('rtsp://admin:aravind@[email protected]:554/1')
    #messge2 = 'Warning: unable to open video source', 2
    if len(EmployeeId) != 10 or len(EmployeeId) < 10:
        return render_template('form.html')
    else:
        return render_template('serial.html')

@app.route("/signout")
def signout():
    return render_template('form.html')

@app.route("/about")
def about():
    return render_template('about.html')

@app.route("/get_data")
def get_data():
    global SerialNo
    SerialNo = request.args.get('SerialNo', '')
    TracerNo = request.args.get('TracerNo', '')
    Doomcamera = request.args.get('Doomcamera', '')
    Sparecasset = request.args.get('Sparecasset', '')
    Securitybolt = request.args.get('Securitybolt', '')
    print(len(SerialNo))
    os.makedirs('/home/imran/Desktop/ramesh/python_test/14.9.19/' + EmployeeId + '/' + SerialNo)
    SF = open('/home/imran/Desktop/ramesh/python_test/14.9.19/' + EmployeeId + '/' + SerialNo + '/' + 'data.txt', 'w+')
    SF.write("Data got from the user \n\n")
    SF.writelines("Serial Number:" + SerialNo + "\n")
    SF.writelines("Tracer Number:" + TracerNo + "\n")
    SF.writelines("Doom Camera Number:" + Doomcamera + "\n")
    SF.writelines("Spare Cas-set Number:" + Sparecasset + "\n")
    SF.writelines("Security Bolt Number:" + Securitybolt + "\n")
    return render_template('index1.html')

def get_frame1():
    global out1
    cap1 = cv2.VideoCapture(0)
    fourcc1 = cv2.VideoWriter_fourcc(*'XVID')
    out1 = cv2.VideoWriter('/home/imran/Desktop/ramesh/python_test/14.9.19/' + EmployeeId + '/' + SerialNo + '/' + 'cam8.avi', fourcc1, 20.0, (640, 480))
    while True:
        sucess, data = cap1.read()
        jpeg = cv2.imencode('.jpg', data)[1]
        jpg_data = jpeg.tostring()
        if sucess == True:
            out1.write(data)
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg_data + b'\r\n\r\n')
    del(cap1)

def get_frame2():
    global out2
    cap2 = cv2.VideoCapture('rtsp://admin:aravind@[email protected]:554/h264/ch1/main/av_stream')
    fourcc1 = cv2.VideoWriter_fourcc(*'XVID')
    out2 = cv2.VideoWriter('/home/imran/Desktop/ramesh/python_test/14.9.19/' + EmployeeId + '/' + SerialNo + '/' + 'cam7.avi', fourcc1, 30, (1280, 720))
    while True:
        #global ret0, data0
        sum1, data0 = cap2.read()
        jpeg0 = cv2.imencode('.jpg', data0)[1]
        jpg_data0 = jpeg0.tostring()
        if sum1 == True:
            out2.write(data0)
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg_data0 + b'\r\n\r\n')
    del(cap2)

@app.route('/videofeed1')
def videofeed1():
    return Response(get_frame1(), mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route('/videofeed2')
def videofeed2():
    return Response(get_frame2(), mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route('/start')
def start():
    message = 'recording'
    return render_template('index.html', l=message)

@app.route('/stop')
def stop():
    out1.release()
    out2.release()
    return render_template('serial.html')

@app.route('/previous')
def previous():
    return render_template('serial.html')

if __name__ == "__main__":
    app.run(host='localhost', port='5002', debug=True, threaded=False)
.
.
.
When I try to stream multiple cameras alone (without collecting any data from the user first), I can stream both cameras without any error or endless page loading.
The code is:
from flask import Flask, render_template, Response
import cv2
import os, sys
import numpy as np

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index1.html')

def get_frame1():
    global out1, out2
    cap1 = cv2.VideoCapture(0)
    fourcc1 = cv2.VideoWriter_fourcc(*'XVID')
    out1 = cv2.VideoWriter('/home/imran/Desktop/ramesh/07-01-2020/testing/video/cam8.avi', fourcc1, 20.0, (640, 480))
    while True:
        sucess, data = cap1.read()
        jpeg = cv2.imencode('.jpg', data)[1]
        jpg_data = jpeg.tostring()
        if sucess == True:
            out1.write(data)
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg_data + b'\r\n\r\n')
    del(cap1)

def get_frame2():
    #global out2
    cap2 = cv2.VideoCapture('rtsp://admin:aravind@[email protected]:554/h264/ch1/main/av_stream')
    while True:
        #global ret0, data0
        sum1, data0 = cap2.read()
        jpeg0 = cv2.imencode('.jpg', data0)[1]
        jpg_data0 = jpeg0.tostring()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg_data0 + b'\r\n\r\n')
    del(cap2)

@app.route('/videofeed1')
def videofeed1():
    return Response(get_frame1(), mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route('/videofeed2')
def videofeed2():
    return Response(get_frame2(), mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route('/start')
def start():
    global out1, out2
    cam1 = cv2.VideoCapture('rtsp://admin:aravind@[email protected]:554/1')
    fourcc1 = cv2.VideoWriter_fourcc(*'XVID')
    out2 = cv2.VideoWriter('/home/imran/Desktop/ramesh/07-01-2020/testing/video/cam7.avi', fourcc1, 30, (1280, 720))
    while True:
        ret0, frame0 = cam1.read()
        if ret0 == True:
            out2.write(frame0)
        else:
            break
    #return message1
    #return render_template('index.html', l = message1)

@app.route('/stop')
def stop():
    out1.release()
    out2.release()
    return render_template('serial.html')

if __name__ == '__main__':
    app.run(host='localhost', port=5050, threaded=True, debug=True)
Hi,
Can we use your camera module to read an .mp4 file and run face detection from the browser?
I tried the approach below; could you please confirm whether this is the approach to follow?
app.py
@app.route('/uploader')
def uploader():
    result = 'http://www.sample-videos.com/video/mp4/720/big_buck_bunny_720p_1mb.mp4'
    out1 = result.encode("utf-8")
    c1 = facedetection(out1)  # face detection class
    return Response(c1.getfaceCoordinates(),
                    mimetype='multipart/x-mixed-replace; boundary=image1')
facedetection.py
import cv2
import imageio

class facedetection:
    def __init__(self, val):
        self.val = val

    def getfaceCoordinates(self):
        count = 0
        faceCascade = cv2.CascadeClassifier(classifier)
        vid = imageio.get_reader(self.val, 'ffmpeg')
        for i, im in enumerate(vid):
            image1 = vid.get_data(i)
            gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
            faces = faceCascade.detectMultiScale(gray, 1.3, 5)
            print('...len....', len(faces))
            for (x, y, w, h) in faces:
                print('...detected faces...........')
                cv2.rectangle(image1, (x, y), (x + w, y + h), (255, 0, 0), 2)
                roi_gray = image1[y:y + h, x:x + w]
                # encode to JPEG first; yielding the raw numpy slice is not a valid image/jpeg body
                jpeg = cv2.imencode('.jpg', roi_gray)[1].tobytes()
                yield (b'--image1\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + jpeg + b'\r\n')
Hi again,
Can you implement video_source as a parameter of video_feed?
If you want to stream many videos, it's better to only have to change the video_source from app.py.
Thanks
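One way to sketch this (assuming Flask is installed; gen_frames is a hypothetical stand-in for the real frame generator) is to put the source in the URL, so each feed can name its own video:

```python
from flask import Flask, Response

app = Flask(__name__)

def gen_frames(source):
    """Hypothetical frame generator; a real one would open `source`
    (a device index, file path, or URL) and read frames from it."""
    for frame in (b'fake-jpeg-1', b'fake-jpeg-2'):
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed/<path:source>')
def video_feed(source):
    # the video source arrives as part of the URL instead of being
    # hard-coded in app.py
    return Response(gen_frames(source),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

resp = app.test_client().get('/video_feed/movie.mp4')
```

The template would then use url_for('video_feed', source='movie.mp4') in the img tag, one per video.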
I tried running the Python script with both python and python3, but I keep getting the same error:
with picamera.PiCamera() as camera:
AttributeError: 'module' object has no attribute 'PiCamera'
I installed all the dependencies in both Python and Python3.
I'm using a Raspberry Pi Camera v2 with Raspberry Pi Zero W.
Running "python app.py" on a Raspberry Pi, when I hit Ctrl-C to exit the app nothing actually happens: the web page continues as normal, with a live video feed.
Hi,
Thanks for this tutorial. I am building a web app with some realtime streaming support, using the method described in this repo. But the delay is unpredictable: sometimes it is huge, sometimes it is fine, and most of the time it grows larger over time.
Where might the problem be?
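A delay that grows over time usually means the camera produces frames faster than the client consumes them, so stale frames pile up in a buffer somewhere along the pipeline. A common mitigation is to keep only the newest frame and drop the rest; a minimal pure-Python sketch of that idea (the frame values are stand-ins):

```python
import queue

# A one-slot buffer: if the consumer falls behind, the stale frame is
# discarded and replaced, so latency cannot accumulate over time.
latest = queue.Queue(maxsize=1)

def push_frame(frame):
    # drop the stale frame, if any, then store the newest one
    try:
        latest.get_nowait()
    except queue.Empty:
        pass
    latest.put(frame)

# simulate a producer outrunning the consumer: three frames arrive
# before anyone reads
for f in [b'frame-1', b'frame-2', b'frame-3']:
    push_frame(f)

newest = latest.get()  # only the most recent frame survives
```

The same "drop stale frames" idea can be applied at the capture side too, by reading frames in a tight loop and serving only the latest one.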
I am getting
TypeError: 'generator' object is not callable
The view function did not return a valid response. The return type must be a string, tuple, Response instance, or WSGI callable, but it was a generator.
on Werkzeug when running the code. Can you tell me what changes I have to make?
I am running the code in a Jupyter Notebook.
The version of Flask I am working with is 1.1.2.
The version of Werkzeug I am working with is 1.0.1.
I have installed the picamera module on my Raspberry Pi. I ran python -c "import picamera"
to test whether the module is installed, and it seems to be installed properly. However, when I try to use the camera_pi.py module it says No module named 'picamera'. Please help.
Thank you very much for sharing the video streaming code.
I am going to change the Camera class, because I need to pass some arguments into its methods. However, the frames() method is defined as a static method. Even when I change it into an instance method, I then need to change the BaseCamera class to accept "self" as an input argument, and I prefer not to change the BaseCamera class.
Would you please let me know why you used a static method in the Camera class?
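One reason frames() can stay static is that BaseCamera's background thread calls it on the class, with no instance in hand. A common way to pass arguments without touching BaseCamera is to store them as class attributes before the camera starts, the way camera_opencv.py does with video_source. A rough sketch (the names below are illustrative, not the article's exact code):

```python
class Camera:
    # class-level configuration, set before the background thread starts
    video_source = 0

    @staticmethod
    def set_video_source(source):
        # store the argument on the class, where the static frames()
        # generator can see it without needing an instance
        Camera.video_source = source

    @staticmethod
    def frames():
        # a real implementation would open Camera.video_source here
        while True:
            yield ('frame from ' + str(Camera.video_source)).encode()


Camera.set_video_source('rtsp://example.invalid/stream')
first = next(Camera.frames())
```

Any extra parameters your Camera needs can be handled the same way: one class attribute plus one setter, and BaseCamera never has to change.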