waveform80 / picamera
A pure Python interface to the Raspberry Pi camera module
Home Page: https://picamera.readthedocs.io/
License: BSD 3-Clause "New" or "Revised" License
Import PiCameraError et al into encoders
Now there's a cool idea. Permit segmentation of video files by parameters to start_recording(). We should permit segmentation by time elapsed, by bytes written, and potentially by callback (the first two can be implemented as fixed callbacks if we implement the latter).
How to manage output filenames? Should we use a format-driven method like capture_continuous, or an iterator-driven method like capture_sequence (or perhaps find a way to permit both? E.g. if output is a string and segmentation is requested, assume it's a format-string; otherwise assume it's an iterator of strings/streams).
If iterator-driven, what happens when we reach the end of the iterator (if it's finite)? Should add a "cycle" parameter (when False, terminate recording, when True, cycle back to first element of the iterator - can use itertools to cache the cycle in this case).
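As a sketch of how the iterator and "cycle" questions might compose with the existing start/split/stop primitives (segment_outputs is purely an illustration, not picamera API):

```python
import itertools

def segment_outputs(outputs, cycle=False):
    """Yield successive segment outputs; with cycle=True a finite
    iterable repeats forever (itertools.cycle caches the items,
    as suggested above)."""
    if cycle:
        return itertools.cycle(outputs)
    return iter(outputs)

# Hypothetical usage against the existing API (needs a Pi):
# with picamera.PiCamera() as camera:
#     names = segment_outputs(['1.h264', '2.h264', '3.h264'], cycle=True)
#     camera.start_recording(next(names))
#     for name in names:
#         camera.wait_recording(10)       # segment by elapsed time
#         camera.split_recording(name)
```

With cycle=False the loop simply ends when the iterator is exhausted, which would map naturally onto terminating the recording.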
Long running processes raise an exception in the background encoder thread as noted in jbeale's forum post. Feels like a race condition - perhaps the background thread is attempting to return a buffer after the encoder or connection has been disabled?
Hi Dave
You are a mindreader! Twice you added features that I thought would be nice to have. It's great to see such active development of Picamera. I hope you can find the time to provide some info on how to best use Picamera.
Let me first explain what I want to achieve: I want to create a motion-triggered video capture device (for security purposes). I want some time (say 5-10 seconds) before the motion trigger to be recorded as well.
My first approach was to use the recent addition to Picamera of image capture during video recording. I continuously record 10 second sections of video to memory (split_recording) and also capture images every 0.3 seconds to memory (use_video_port=True). I use the images to detect motion (more or less counting the changed pixels). If motion is detected I save the current video section and possibly the next section of video to disk. This works, I get sections of video around the movement, but the program wasn't very elegant.
Then you read my mind again and introduced the circular buffer. This promises to make the video recording a bit more streamlined. The problem is that I can't get my head around the circular buffer reading (pun intended). I noticed when testing with the provided 2nd example, modified to write to a new file each time instead of overwriting the same file, that one gets video with overlapping events if the files are written more frequently than the buffer duration. This is to be expected I guess, because writing starts at the first available frame, which may also be present in the previous video.
So I guess my question, at long last, is: how can I best apply the circular buffer for my purpose?
Thanks for your patience and insights.
Marcel
The library should include an optional property for controlling the camera's LED via the GPIO pins. Although this will introduce a dependency on a GPIO library, this can be made optional (i.e. have the attribute throw an error in the case a suitable GPIO library is not available).
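A rough sketch of what the optional dependency could look like (the pin number and the helper name are assumptions for illustration, not picamera API; the real thing would be a property on PiCamera):

```python
try:
    import RPi.GPIO as GPIO
except ImportError:
    GPIO = None  # the dependency is optional, per the proposal

CAMERA_LED_PIN = 5  # assumption: likely board-revision-dependent

def set_camera_led(state):
    """Hypothetical helper showing the proposed behaviour: raise if no
    suitable GPIO library is available, otherwise drive the LED pin."""
    if GPIO is None:
        raise RuntimeError(
            'Setting the camera LED requires the RPi.GPIO library')
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(CAMERA_LED_PIN, GPIO.OUT)
    GPIO.output(CAMERA_LED_PIN, bool(state))
```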
Hi,
thanks for the great work !
I read the documentation on ReadTheDocs.org and just wanted to say that the ISO parameter does work. The values supported are between 100 and 800, and it does make a difference (I did some tests in a relatively low-light environment, and the pictures with low ISO were very dark, and much better with higher ISO).
I guess this is probably not the best way to contact you, but I couldn't find any other way? (I'm new to GitHub)
Several people at the Manchester Raspberry Pi jam expressed interest in capturing stills while recording video (not such a niche piece of functionality as I'd originally thought). Whilst this is currently possible with picamera there are several drawbacks: the stills are a different resolution (may/may not be a bad thing), and the video encoder drops frames whilst capturing stills (definitely a bad thing). Furthermore, attempting to perform a video-port-based capture while recording results in a very confusing exception (again, a bad thing).
It may be possible to strap a splitter over the video port, then use one of its outputs for the video encoder, and another for a JPEG encoder (permitting video-port-based still capture while recording video without dropping frames ... maybe).
Picamera's current "raw" capture capabilities only extend as far as outputting raw YUV or RGB data (i.e. the same thing raspistillyuv does), not the equivalent of raspistill -raw (which outputs a JPEG along with "truly" raw data from the camera prior to various bits of processing being performed). There's apparently some interest in getting access to "true" raw output and it doesn't look difficult so this should be added for the next release.
Need to consider the best method of doing this though: are people interested in JPEG+raw output (as with raspistill -raw) or do they just care about the raw bit (in which case it's a waste of time outputting the JPEG).
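For illustration, if we follow the raspistill -raw layout (raw data appended after the JPEG with a Broadcom header), extracting the raw block might look like this; the 'BRCM' marker is an assumption based on raspistill's output format, not a picamera API:

```python
def extract_raw(jpeg_with_raw):
    """Split a JPEG+RAW capture (raspistill -raw style) into
    (jpeg_bytes, raw_bytes). Assumes the appended raw block starts
    with a 'BRCM' header, as raspistill's output appears to."""
    marker = jpeg_with_raw.rfind(b'BRCM')
    if marker == -1:
        raise ValueError('no raw data found after the JPEG')
    return jpeg_with_raw[:marker], jpeg_with_raw[marker:]
```

If people only care about the raw bit, a capture method could run this split internally and discard the JPEG portion.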
Several options have recently been added to raspivid that we ought to support in picamera, specifically quantisation in raspberrypi/userland#121 and inline headers in raspberrypi/userland#122
The PiCamera.crop property has several issues:
Need further investigation on this, but it may turn out that we have to retire crop and convert it into a straight "zoom" parameter. That would require targeting 1.0 though, to avoid breaking anyone's code.
This post from MacHack references the vc.ril.resize MMAL component (a GPU-side frame resizer) which can be used to provide full-frame video (and preview perhaps?) output, albeit at a slower 15fps.
Firstly we need to come up with a way of permitting full-frame video capture (nice interface? Maybe another parameter to start_recording like use_resizer?), and secondly we need to investigate the possibility of using this with previews. If it works with previews, it could solve the disparity between previews and still captures (I don't think anyone is going to particularly care about 30fps vs 15fps previews - although it might be worth polling the forums about this).
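Whatever the interface ends up being (use_resizer or otherwise), the feature will need some sizing arithmetic; the helper below is purely hypothetical and only illustrates scaling a full-frame resolution down while rounding the height to a multiple of 16 (an assumption about what the resizer wants):

```python
def fit_resolution(full, target_width):
    """Hypothetical helper: scale a full-frame resolution down to
    target_width, preserving aspect ratio, with the height rounded
    to a multiple of 16."""
    w, h = full
    return (target_width, int(round(h * target_width / w / 16.0)) * 16)

# Hypothetical use_resizer-style call (not current API):
# camera.start_recording('video.h264',
#                        use_resizer=fit_resolution((2592, 1944), 1296))
```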
The video splitter component does not support RGB format. This means that since release 0.8, video-port-based raw captures can only be in YUV format. This is probably quite a niche requirement, but it's not good to remove functionality and hence we should strive to re-introduce this capability before a 1.0 release.
See http://www.raspberrypi.org/phpBB3/viewtopic.php?p=440460#p440460 for more details
There's been some work on this with OpenGL/ES on the Pi forums. See if you can dig out the post again for investigation
Apparently the video port can be configured with a JPEG encoder, allowing rapid capture of frames (see thread at http://www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=45178) - investigate if continuous() method can be improved with this (and whether start_recording() should also support this as a format?)
There's a ton of video output formats in the mmal headers, but it may be the same as with the still image port - everything but H.264 is disabled.
I'm attempting to write a program that captures a raw stream from the video port and passes it to OpenCV, but the program fails when trying to capture the stream. I know it's possible, because before I switched to using Python, I was using a C++ class that could do this. At least, I think that's what it does.
It's definitely not an OpenCV error, as the program dies before any OpenCV functions are called.
Here's the error:
Traceback (most recent call last):
  File "main.py", line 122, in <module>
    main()
  File "main.py", line 117, in main
    frame = cam.read()
  File "/home/pi/2014-Vision/src/picam.py", line 16, in read
    self.cam.capture(stream, 'rgba', True, (320, 240))
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 859, in capture
    if not encoder.wait(30):
  File "/usr/lib/python2.7/dist-packages/picamera/encoders.py", line 326, in wait
    raise self.exception
AssertionError
And the code:
from __future__ import division
import picamera
import cv2
import numpy as np
import io
import time

class PiCam:
    def __init__(self):
        self.cam = picamera.PiCamera()

    def read(self):
        stream = open('image.data', 'wb')
        self.cam.capture(stream, 'bgra', True, (320, 240))
        image = np.fromfile(stream, dtype=np.uint8)
        return image

cam = PiCam()
cv2.namedWindow("Camera Image", cv2.WINDOW_AUTOSIZE)
while True:
    cv2.imshow("Camera Image", cam.read())
    cv2.waitKey(1)
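For what it's worth, the snippet opens the output file write-only and then calls np.fromfile on the same handle, so nothing is ever read back. A sketch of a corrected read() using an in-memory stream (the buffer padding rules follow picamera's raw-capture docs; the camera calls are commented out since they need a Pi):

```python
import io
import numpy as np

def buffer_shape(width, height):
    """picamera pads raw capture buffers: width to a multiple of 32,
    height to a multiple of 16 (per the raw-capture documentation).
    Returns the padded shape for a 4-byte-per-pixel 'bgra' buffer."""
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    return (fheight, fwidth, 4)

# Sketch of a corrected read() (requires a Pi):
# stream = io.BytesIO()
# camera.capture(stream, 'bgra', True, (320, 240))
# image = np.frombuffer(stream.getvalue(), dtype=np.uint8)
# image = image.reshape(buffer_shape(320, 240))[:240, :320, :]
```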
The major question with this is what's the process for getting it accepted upstream (or is there an equivalent to Ubuntu's PPAs?)
Under certain circumstances (resolution related?), MJPEG recording fails to stop. Looks like an explicit end capture signal is required (the opposite of PiCamera._enable_port).
jbeale reports that the YUV raw capture example fails at the point numpy attempts to read from the stream. I've replicated the issue in python 2 (in python 3 it just segfaults - another numpy issue or a bad build?) - although for the life of me I could swear this used to work. Numpy doesn't even seem to like loading the data from an opened file object. I can only get it working with a filename; that's doubly strange because with a filename the example never could've worked (the file would always start reading from 0) and I've definitely had this working in the past. Is this an upstream issue with some recent release of numpy? Need to investigate numpy's source...
Looks like raw capture is useful for several things so we ought to support it, presumably as another capture method ("capture_raw"?)
BorisS ran into an issue with the splitter - it appears that attempting multiple recording splits results in a header wait timeout. Investigate whether this only happens at full-frame or below (resizer usage makes no difference), and whether an SPS header can be forced by split_recording (there's an MMAL parameter for this - I forget off the top of my head, but I think the latest branch of raspivid uses it under certain circumstances).
While it's flagged in the documentation, the suggested workaround of setting the camera to full resolution doesn't appear to make the live preview match the captured image.
For context, I'm creating a stop motion animation package. The 'onion-skinning' effect is working perfectly with the Alpha preview laid over the top of the last image taken (via Pygame), but since the live preview isn't an accurate representation of the actual image, it's impossible to align them.
If anyone has got a suggestion on how to post-process the image via PIL / Pygame in the short-term, I'd really appreciate the input. Any information on if/when there might be a fix / workaround would be really appreciated.
Thanks for the library - it's otherwise fantastic!
Specifically, we need examples demonstrating integration with popular imaging libraries - PIL and OpenCV would be good. For example: capture image with camera into a PIL object without touching the disk (e.g. via BytesIO), and demonstrate something similar (e.g. some rudimentary processing) with OpenCV.
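A minimal sketch of the BytesIO recipe (the helper below only assumes the camera object has a capture() method; the PIL usage is commented out since it needs a Pi and Pillow):

```python
import io

def capture_jpeg_stream(camera):
    """Capture a JPEG into an in-memory stream and rewind it so an
    imaging library can read it from the start (no disk involved)."""
    stream = io.BytesIO()
    camera.capture(stream, format='jpeg')
    stream.seek(0)
    return stream

# On a Pi, with Pillow installed:
# from PIL import Image
# with picamera.PiCamera() as camera:
#     image = Image.open(capture_jpeg_stream(camera))
```

An OpenCV variant would be almost identical: decode the same stream with cv2.imdecode on a numpy array built from stream.getvalue().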
Also, the new use_video_port parameter could do with a section to itself demonstrating how to obtain the maximum capture framerate, what the differences between the still and video ports are (resolution and quality wise), how to use generator functions with capture_sequence, etc.
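For the generator-function angle, something along these lines could go in the docs (the capture call is commented out since it needs a Pi; use_video_port=True trades still-port quality for frame rate):

```python
def filenames(count, template='image{:03d}.jpg'):
    """A generator of output names for capture_sequence."""
    for i in range(count):
        yield template.format(i)

# On a Pi:
# with picamera.PiCamera() as camera:
#     camera.resolution = (640, 480)
#     camera.capture_sequence(filenames(60), use_video_port=True)
```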
Just a reminder to remove the deprecated continuous method for 1.0
There's a rather cool patch that's just landed in raspivid (raspberrypi/userland#132) which permits recording to a circular in-memory buffer and then permitting the last n seconds (where n <= the length of the buffer) to be dumped to disk on a key-press.
It's currently possible to implement this in picamera, but non-trivial as it'd involve firstly building a custom stream to replicate the circular buffer, and then dealing with the MMAL interface to figure out the location of the SPS headers and I-frames (actually the latter could probably be done with the new frame property, but not the former currently).
We should make this easier for new users firstly by providing such a circular stream in the package, and secondly by making it easier to query the location of SPS headers.
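A minimal sketch of what such an in-package circular stream could look like (chunk-granularity trimming only; the real implementation would also need to track SPS header and I-frame positions so that dumps start on a decodable frame):

```python
from collections import deque

class RingStream(object):
    """A file-like object that keeps only (roughly) the most recent
    `size` bytes. Trimming is chunk-granular, and a lone oversized
    chunk is kept whole; good enough for a sketch."""
    def __init__(self, size):
        self.size = size
        self._chunks = deque()
        self._length = 0

    def write(self, b):
        self._chunks.append(b)
        self._length += len(b)
        # drop the oldest chunks once we're over capacity
        while self._length > self.size and len(self._chunks) > 1:
            self._length -= len(self._chunks.popleft())
        return len(b)

    def getvalue(self):
        return b''.join(self._chunks)
```

On a Pi this could be handed straight to start_recording() as the output object, with a dump-to-disk method triggered on demand.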
I have slightly modified one of the examples by commenting out the preview because I'm running my Raspberry Pi headless:
import time
import picamera

with picamera.PiCamera() as camera:
    #camera.start_preview()
    try:
        for i, filename in enumerate(camera.continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 10:
                break
    finally:
        #camera.stop_preview()
        pass
The output is:
ssh://[email protected]:22/home/pi/picam/bin/python -u /home/pi/picam/cont_capture.py
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
Traceback (most recent call last):
  File "/home/pi/picam/cont_capture.py", line 6, in <module>
    for i, filename in enumerate(camera.continuous('image{counter:02d}.jpg')):
  File "/home/pi/picam/local/lib/python2.7/site-packages/picamera/__init__.py", line 1118, in continuous
    self._still_encoder.wait()
  File "/home/pi/picam/local/lib/python2.7/site-packages/picamera/__init__.py", line 339, in wait
    raise self.exception
picamera.PiCameraError: Unable to return a buffer to the encoder port: Argument is invalid
Process finished with exit code 1
(Thanks for creating this module!)
Add a frame counter to the PiVideoEncoder class' _callback_write method (in picamera/encoders.py) and a property to the PiCamera class (in picamera/camera.py) to query the frame counter (it should probably raise a runtime error if a recording is not in progress, in which case no video encoder instance will be present to query).
Chris Cummings has done some truly fascinating work at http://robotblogging.blogspot.co.uk/2013/10/gpu-accelerated-camera-processing-on.html - investigate the possibility of integrating his GPU filters into picamera somehow (from some of the comments this might involve a move to OpenMAX)
Hi,
I'm getting an error when I try and execute some picamera code on a Model A. I've developed some software on a Model B and it works fine. I have come to run this on a Model A and I keep getting the same error when instantiating picamera.PiCamera().
mmal: mmal_vc_component_enable: failed to enable component: ENOSPC
Unexpected error - Camera component couldn't be enabled: Out of resources (other than memory)
Traceback (most recent call last):
  File "/home/pi/dev/pelmetcam/pelmetcam.py", line 230, in <module>
    with picamera.PiCamera(False) as camera:
  File "/home/pi/dev/picamera/picamera/camera.py", line 239, in __init__
    self._init_camera()
  File "/home/pi/dev/picamera/picamera/camera.py", line 329, in _init_camera
    prefix="Camera component couldn't be enabled")
  File "/home/pi/dev/picamera/picamera/exc.py", line 102, in mmal_check
    }.get(status, "Unknown status error")))
picamera.exc.PiCameraError: Camera component couldn't be enabled: Out of resources (other than memory)
It's exactly the same setup, the same SD card, everything. I've rebuilt the SD card from a new Raspbian image, always with the same result: it runs fine on my Model B, but when I execute it on the Model A it fails. I've done an apt-get update, upgrade and rpi-update, but with no joy.
The Model B is a 256Mb Rev 1, while the Model A is a 256Mb Rev 2. Other than the rev difference, they are virtually identical.
I'm not entirely convinced this is an error with picamera specifically, but the error is being generated from picamera.PiCamera() so I thought I'd see if you had any ideas! I have run a couple of the picamera test programs on the Model A and they are fine, but when run within my code it always fails.
Any ideas?
Thanks
Martin
Still need to figure out a way to test the output of video recordings ... defer to ffmpeg / libav? Is there a python interface for these or do we need to resort to screen-scraping?
Hi, I love the ease of use of the code. I have an issue though: my code takes between 33 and 93 photos before they start going dark. See the two screenshots listed below.
if it helps, I have seen other people discuss this kind of issue when using the raspistill software: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=45783&sid=3721788cbae337d91e3036ccd25ec53e&start=100
Essentially, they thought it was to do with using the raspistill '-n' switch and that if left out, it would work ok. JamesSH said this has been fixed in the latest version of his code.
Here is the code I am using to take the photos:
# create controller
gpsc = GpsController()
# start controller
gpsc.start()

# finish off building the master filename
FILENAME = FILENAME + datetime.datetime.now().strftime("%Y-%m-%d_%H%M_")

with picamera.PiCamera() as camera:
    # Setup variables for PiCamera
    camera.exif_tags['IFD0.Copyright'] = 'insert copyright here '
    camera.exif_tags['IFD0.ImageDescription'] = 'insert description here'
    camera.exif_tags['GPS.GPSDOP'] = DOP
    camera.exif_tags['GPS.GPSAltitude'] = ALT
    camera.exif_tags['GPS.GPSMeasureMode'] = str(GPSMODE)
    camera.exif_tags['GPS.GPSLatitudeRef'] = exif_lat_ref(GPSLAT)
    camera.exif_tags['GPS.GPSLatitude'] = LAT
    camera.exif_tags['GPS.GPSLongitudeRef'] = exif_long_ref(GPSLONG)
    camera.exif_tags['GPS.GPSLongitude'] = LONG
    camera.resolution = (photo_width, photo_height)
    sound_buzzer(beep_short)
    for i, filename in enumerate(camera.capture_continuous(FILENAME + '{counter:05d}.jpg')):
        print(filename)
        time.sleep(interval_time)
        sound_buzzer(beep_short)
        if i == num_photos - 1:
            break

# stop controller
gpsc.stopController()
Just for other people wondering what's going on in the above code. I am reading an Adafruit Ultimate GPS (code missing), and sounding a buzzer just before every photo is being taken. I start and pass my code the number of photos to be taken (num_photos) and the interval between photos (interval_time).
I am using version: 0.5 of the Picamera code.
Thanks
As mentioned in the comments for #34, it should be easy to query the pts (presentation timestamp) and dts (decoder timestamp) properties from the buffer headers. At the moment, I'm not convinced we need dts (the comments in the buffer header indicate that this only deviates from pts when B-frames are involved, but as far as I know the H.264 encoder never outputs B-frames). However, pts should be easy to extract and make accessible to end users.
This brings up the question of design. I'm not convinced I named the frame property very well. Without looking at the docs it's not obvious whether it's referring to the frame image, an object representing the frame, a frame number, or something else. I could rename it frame_number, but now that pts is getting added another possibility presents itself:
Change the frame property from returning a simple frame number to returning a namedtuple which represents various properties of the current frame (number, pts, and maybe a keyframe property?).
This is a backwards incompatible change, but given we haven't made a release including the frame property yet, I don't think that's a huge deal. And the only change for any users using it at the moment is from frame to, say, frame.index
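For illustration, the namedtuple approach might look like this (the field names are guesses for the sake of the sketch, not a committed API):

```python
from collections import namedtuple

# One possible shape for the proposed property:
PiVideoFrame = namedtuple('PiVideoFrame', ('index', 'pts', 'keyframe'))

frame = PiVideoFrame(index=42, pts=1400000, keyframe=True)
# existing users would migrate from `camera.frame` to `camera.frame.index`
```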
Is there some way to stream (e.g. HLS, or streaming over HTTP) using picamera, similar to JpegStreamer() in this code: https://github.com/sightmachine/SimpleCV/blob/develop/SimpleCV/Stream.py ?
Thanks for the good work!
Should we allow anything between 0 and 1600, or just specific values like 0 (auto), 80, 100, 200, 400, 800, and 1600?
Hi,
I'm getting a problem as my calling program uses the GPIO, but Camera.close() calls GPIO.cleanup(), which resets the GPIO state and makes my program crash.
As an example, I want to use a button to start the camera. The code for my button exists in my calling program; I run the camera and stop it when I press the button. The next time I want to use the GPIO I can't, because camera.close() has called cleanup(), resetting the GPIO.
A possible solution could be to pass an optional useLed (which could default to True) on init of the PiCamera class? I have made the change in my fork of picamera. https://github.com/martinohanlon/picamera/blob/master/picamera/camera.py
To replicate the issue, you can run the following:
import picamera
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)
GPIO.output(17, True)

with picamera.PiCamera() as camera:
    camera.resolution = (800, 600)
    camera.framerate = 25
    camera.start_recording("my_video.h264")
    camera.wait_recording(1)
    camera.stop_recording()
    camera.close()

GPIO.output(17, False)
...and the exception raised is itself not defined. Bug inception! ;)
cam.start_recording(process.stdin)
time.sleep(5)
cam.stop_recording()
Traceback (most recent call last):
    cam.stop_recording()
  File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 571, in stop_recording
    self.wait_recording(0)
  File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 558, in wait_recording
    self._video_encoder.wait(timeout)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 285, in wait
    raise self.exception
NameError: global name 'PiCameraError' is not defined
The data is written to process.stdin correctly, but the script crashes anyway.
To mirror capture_continuous we should have a record_continuous. Similar calling convention (filename_or_obj for first parameter and filenames are format() based substitution templates), but first iteration of loop calls start_recording, subsequent iterations call split_recording, and terminating the iterator calls stop_recording (need to check the last bit always works under all circumstances).
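A sketch of how record_continuous could be built from the existing primitives (the name and signature follow the proposal above and are not current API; splitting here is time-based for simplicity):

```python
def record_continuous(camera, template, seconds):
    """Hypothetical generator: start_recording on the first iteration,
    split_recording on each subsequent one, stop_recording when the
    iterator terminates. Filenames use format() substitution."""
    counter = 1
    name = template.format(counter=counter)
    camera.start_recording(name)
    try:
        while True:
            yield name
            camera.wait_recording(seconds)
            counter += 1
            name = template.format(counter=counter)
            camera.split_recording(name)
    finally:
        camera.stop_recording()
```

Supporting streams as well as filename templates would just mean skipping the format() call when the first parameter isn't a string.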
Hello,
I'm trying to get the motion detection example at the bottom of recipes2.rst to work (out of the box, leaving in the random stub ;-)). I've copied and pasted the code and all is fine until motion is detected.
./test_vidcap.py
Motion detected!
Traceback (most recent call last):
  File "./test_vidcap.py", line 55, in <module>
    camera.split_recording('after.h264')
  File "/home/pi/picamera/picamera/camera.py", line 736, in split_recording
    self._video_encoder.split(output)
  File "/home/pi/picamera/picamera/encoders.py", line 533, in split
    raise PiCameraRuntimeError('Timed out waiting for an SPS header')
picamera.exc.PiCameraRuntimeError: Timed out waiting for an SPS header
pi@raspberrypi ~/picamera $ head -3 picamera.egg-info/PKG-INFO
Metadata-Version: 1.1
Name: picamera
Version: 1.1
I've tried dropping the resolution line as I thought it might be related to #46 but no joy.
A thread on the Pi Camera forum indicates it may be possible to guard against lockup of the camera when multiple processes attempt to access it:
While fumbling with vcdbg syms and vcdbg dump 0xdec02020 (the address of cam_alloc_alloc_count), I found that when the camera is not being used, the value of cam_alloc_alloc_count is 0, and when in use, it is 1.
Now to find an appropriate place to implement a test for this to prevent crashing the camera.
This should be added to PiCamera.__init__, but it looks like it'll require a bit more header translation, hence it won't make it into 0.5; it should be added before 1.0.
There's a few brave users out there jumping in with Python 3, but the instructions don't make it clear that easy_install3 is required for installation in this case.
@Dishwishy has done something interesting with their fork of picamera: Dishwishy/picamera@dde849a. It seems as if it's necessary to lower the "layer" of the preview (an attribute I don't know anything about yet) to render things on top of it. Given this is an eventual goal of #16, we should add a preview_layer property to permit configuration at runtime.
The encoder's _callback_write() method assumes a write method on the underlying file-like object which returns the number of bytes written. This is correct for everything in Python 3, and for all IO streams in Python 2 which are implemented by the "io" module. However, for legacy Python 2 streams like cStringIO.StringIO, it's not true, and not required by the file-like object interface specification in Python 2. The method should probably be made more intelligent and use tell() in the case that write() doesn't return a value (although only in Python 2).
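A sketch of the suggested fix as a wrapper (a hypothetical helper, not picamera code; it assumes legacy streams either write everything or raise, which holds for the built-in Python 2 file types I'm aware of):

```python
class CountingWriter(object):
    """Wrap a file-like object so write() always reports a byte count,
    covering legacy Python 2 streams whose write() returns None."""
    def __init__(self, stream):
        self._stream = stream

    def write(self, data):
        result = self._stream.write(data)
        if result is None:
            # legacy stream: assume a complete write (such streams
            # raise on failure rather than writing partially)
            return len(data)
        return result
```

The tell()-before-and-after approach would also work, but not every legacy stream is seekable, so the len(data) fallback is simpler.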
For 0.6 (if we do one) or 1.0, some refactoring is necessary. __init__.py has grown excessively large. The encoders should be split into their own module, and PiCamera should probably be moved into its own too (although it, and the exception classes, should remain accessible from the root package namespace for user friendliness). Need to investigate the dependencies to see how easily this can be done.
It seems like off-loading processing to another machine is a popular theme of camera projects. Consequently, the recipes chapter ought to include a couple of samples of streaming both images and video across the network to another machine.
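As a starting point for such a recipe, the client side could length-prefix each captured frame like this (the host, port, and frame_header helper are assumptions for illustration; the camera loop is commented out since it needs a Pi):

```python
import struct

def frame_header(payload):
    """Length-prefix a captured frame for the wire: a little-endian
    32-bit length followed by the JPEG bytes."""
    return struct.pack('<L', len(payload)) + payload

# Client sketch (requires a Pi):
# import io, socket, picamera
# sock = socket.socket()
# sock.connect(('my-server', 8000))       # hypothetical host/port
# conn = sock.makefile('wb')
# with picamera.PiCamera() as camera:
#     stream = io.BytesIO()
#     for _ in camera.capture_continuous(stream, 'jpeg'):
#         conn.write(frame_header(stream.getvalue()))
#         stream.seek(0)
#         stream.truncate()
```

For video rather than stills, the simplest variant is handing the socket's file object straight to start_recording() with format='h264'.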
I'm trying to use the camera board on the Raspberry Pi and your modules to send the image through the network.
I used your examples (4.6 Capturing to a network stream), using my PC as the server and the Pi as the client, and when I start the scripts the terminal says:
Image is 640x480
Image is verified
but I don't see the images, so I don't know why this line doesn't work:
image = Image.open(image_stream)
What do I have to do?
Thanks very much
Filippo
When the RPIO library is installed instead of RPi.GPIO, the picamera library fails to initialize with an InvalidChannelException when GPIO.setup is called. It appears that while RPi.GPIO supports pin 5, RPIO doesn't. A forum post from jbeale points to a possible solution with pin 24. Need to test whether this will also work with RPi.GPIO or whether the two libraries will require different calls.
Whenever I try and call start_recording I get the error below. Is this known? Standard raspivid and raspistill work fine.
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.start_recording('my_video.h264')
    camera.wait_recording(60)
    camera.stop_recording()
mmal: mmal_vc_port_parameter_set: failed to set port parameter 32:2:ENOSYS
Traceback (most recent call last):
  File "test2.py", line 5, in <module>
    camera.start_recording('my_video.h264')
  File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 689, in start_recording
    self._video_encoder = PiVideoEncoder(self, enc_port, format, **options)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 306, in __init__
    super(PiVideoEncoder, self).__init__(parent, port, format, **options)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 96, in __init__
    self._create_encoder(format, **options)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 397, in _create_encoder
    prefix="Unable to set inline_headers")
  File "/usr/local/lib/python2.7/dist-packages/picamera/exc.py", line 102, in mmal_check
    }.get(status, "Unknown status error")))
picamera.exc.PiCameraError: Unable to set inline_headers: Function not implemented
Latest raspivid adds support for selecting encoding profile and GoP configuration. We should add options to start_recording() to support this too.
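For illustration, the options might end up looking something like this (profile and intra_period are raspivid's concepts; their picamera spellings below are assumptions, as is the check_profile helper):

```python
VALID_PROFILES = ('baseline', 'main', 'high')  # assumed raspivid set

def check_profile(profile):
    """Hypothetical validation for a future profile option."""
    if profile not in VALID_PROFILES:
        raise ValueError('invalid H.264 profile: %s' % profile)
    return profile

# Hypothetical future API (needs a Pi):
# camera.start_recording('video.h264',
#                        profile=check_profile('high'),
#                        intra_period=30)  # GoP length in frames
```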