techyian / mmalsharp
C# wrapper to Broadcom's MMAL with an API to the Raspberry Pi camera.
License: MIT License
Add support for MMALVideoDecoder component. This work will be done alongside #18
Raised to discuss API changes for the 0.4 release
Benchmark Image/Video capture methods to date and look at areas where performance can be improved.
Provide support for converting between different colour spaces. This is for use with the AnnotateImage method in MMALEncoderComponent.
All components support the MMAL_PARAMETER_SUPPORTED_ENCODINGS check, which determines which encoding types can be used with a specific component. It would be useful to run this check before allowing image capture to proceed, to ensure the correct encoding is set against a component.
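A minimal, self-contained sketch of such a pre-flight check. It assumes the supported encodings have already been read from the port (e.g. via the utility method from #17) as a collection of FourCC strings; the class and method names here are illustrative, not the final API:

```csharp
using System;
using System.Collections.Generic;

public static class EncodingGuard
{
    // Sketch: given the FourCCs reported by MMAL_PARAMETER_SUPPORTED_ENCODINGS
    // for a port, fail fast if the requested encoding isn't among them.
    public static void EnsureSupported(ICollection<string> supportedFourCCs, string requested)
    {
        if (!supportedFourCCs.Contains(requested))
        {
            throw new NotSupportedException(
                $"Encoding '{requested}' is not supported by this port.");
        }
    }
}
```

Calling this before committing a port format would turn a cryptic EINVAL into a clear error message.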
Created to track documentation commits for v0.2
Implement ability to add the splitter component to the pipeline. You should be able to attach additional components to the output ports from the splitter. Multiple image capture and video recording should be possible via the splitter.
Add support for MMALResizerComponent
Visual Studio allows you to write documentation in code:
/// <summary>
/// This class provides an interface to the Raspberry Pi camera module.
/// </summary>
public sealed class MMALCamera
{
}
But this documentation is not shown unless you generate an XML file at build time and include that file in your NuGet package. In Visual Studio you can go to Project Settings -> Build and enable "XML documentation file". This will generate a MMALSharp.xml in your output directory which you can include in your NuGet package. After this, all developers using your library will see your documentation in Visual Studio.
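For SDK-style projects, the same thing can be done directly in the .csproj with the standard MSBuild properties (this is general .NET tooling, not MMALSharp-specific):

```xml
<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <!-- Optionally silence CS1591 (missing XML comment) while docs are incomplete -->
  <NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>
```

The generated XML file is then picked up automatically when the project is packed with `dotnet pack`.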
To be set on port format. Helpers currently in MMAL:
Explore other FourCC codes which may work with the framework.
Using NGINX RTMP module as an example, MMALSharp has an issue when streaming to an RTMP feed in that we are not feeding stdin data quickly enough to FFmpeg. Needs further investigation.
Hi Ian,
I have just seen that you have published MMALSharp on NuGet. When I first saw this project on GitHub, the only thing I wanted was to be able to install it from NuGet. This is finally possible!
However I experienced a big problem with MMALSharp 0.4.0.178 installed via NuGet:
StyleCop.Analyzers completely breaks the development experience because it gets activated in every project using MMALSharp. For the development of your library it makes sense, but I don't want to have to write my private code according to MMALSharp's rules. Normally a dependency on StyleCop.Analyzers should not even appear in a released NuGet package.
And one piece of stylistic advice: Microsoft recommends depending on stable NuGet packages; NLog, for example, could be upgraded to a stable version. For everything that does not come from Microsoft, I would recommend targeting the latest version for compatibility reasons. Although NuGet packages should always be backwards compatible, many developers want to get rid of old methods and simply delete them. To allow users of MMALSharp to use the same libraries themselves, it should target the latest versions.
Daniel
Hi Ian,
I am still trying to get an uncompressed image from MMALSharp.
At first I tried your approach from TakeRawPicture, but got an EINVAL when I tried to commit the image format ARGB. Then I searched for the available encodings with the new utility method from #17.
Finally I tried many variations with the ResizerComponent, all resulting in an EINVAL at ConfigureOutputPort. No matter whether I use I420, BGRA or YUYV, it always crashes.
Here is my code:
public int Width => Resolution.As8MPixel.Width;
public int Height => Resolution.As8MPixel.Height;
public void Initialize()
{
MemoryCaptureHandler stillHandler = new MemoryCaptureHandler(null);
MMALResizerComponent stillResizer = new MMALResizerComponent(Width, Height, stillHandler);
MMALNullSinkComponent previewSink = new MMALNullSinkComponent();
MMALCameraConfig.StillResolution = Resolution.As8MPixel;
cam.ConfigureCameraSettings();
PrintEncoding(cam.Camera.StillPort.GetSupportedEncodings(), "Still Port");
PrintEncoding(stillResizer.Outputs[0].GetSupportedEncodings(), "Resizer");
stillResizer.ConfigureInputPort(MMALEncoding.OPAQUE, MMALEncoding.I420, cam.Camera.StillPort);
stillResizer.ConfigureOutputPort(MMALEncoding.I420, MMALEncoding.I420, 0);
cam.Camera.PreviewPort.ConnectTo(previewSink);
cam.Camera.StillPort.ConnectTo(stillResizer);
}
The MemoryCaptureHandler simply writes all bytes to a MemoryStream.
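For reference, a minimal sketch of what that handler does. The interface and member names are assumptions modelled on the library's other capture handlers, not the exact MMALSharp API:

```csharp
using System;
using System.IO;

// Sketch only: a capture handler that buffers all received frame data in memory.
// ICaptureHandler and Process(byte[]) are assumed shapes for illustration.
public class MemoryCaptureHandler : ICaptureHandler
{
    public MemoryStream Stream { get; } = new MemoryStream();

    public void Process(byte[] data)
    {
        // Append each chunk of frame data as it arrives from the port callback.
        this.Stream.Write(data, 0, data.Length);
    }

    public void Dispose() => this.Stream.Dispose();
}
```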
Daniel
When running the simple FrameToVideo sample code, my application throws an exception during the FrameToVideo method where it appears that the delimiter for the jpg extension is being dropped (see below).
The '.' between the filename and extension appears to be dropped. I looked at the VideoUtilities code and on line 48, any trailing '.'s appear to be trimmed correctly, but the delimiter is never reinserted before the extension is appended.
My test code follows. I've modified it slightly from the original sample, but only superficially.
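A self-contained sketch of the fix being described (the helper name is hypothetical; the real logic lives in VideoUtilities): trim any trailing dots from the name, then always re-insert a single '.' before the extension.

```csharp
using System;

public static class FilenameHelper
{
    // Hypothetical helper illustrating the fix: trailing '.'s are trimmed,
    // and exactly one '.' delimiter is re-inserted before the extension.
    public static string BuildFilename(string path, string extension)
    {
        string trimmedPath = path.TrimEnd('.');
        string trimmedExt = extension.TrimStart('.');
        return $"{trimmedPath}.{trimmedExt}";
    }
}
```

For example, `BuildFilename("/home/pi/videos/out.", "jpg")` yields `/home/pi/videos/out.jpg` instead of `out jpg` with the delimiter dropped.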
Add new TakePicture method which will trigger every x ms/sec/mins
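One possible shape for this, sketched on top of the existing TakePicture API. The method name and the CancellationToken-based loop are assumptions, not a committed design:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: repeatedly capture a still every 'interval' until cancelled.
public static async Task TakePictureTimed(
    MMALCamera cam,
    ImageStreamCaptureHandler handler,
    TimeSpan interval,
    CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        await cam.TakePicture(handler, MMALEncoding.MMAL_ENCODING_JPEG, MMALEncoding.MMAL_ENCODING_I420);

        try
        {
            await Task.Delay(interval, token);
        }
        catch (TaskCanceledException)
        {
            break; // cancellation was requested mid-delay
        }
    }
}
```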
Plan to update to .NET Standard 2.0 very soon for greater API compatibility and fewer dependencies.
To be done on the camera control port using the MMAL_PARAMETER_CAMERA_CUSTOM_SENSOR_CONFIG parameter. This will allow a user to manually force the camera to use one of the pre-defined sensor modes. Related to #26.
See:
OV5647 modes
IMX219 modes
Currently output is written to the Console, but this can get overwhelming given the number of callbacks from MMAL. We should output only the most important info to the console when debugging mode is enabled, and otherwise write to a log file.
Potential memory leak has cropped up from running unit tests. Needs further investigation.
Investigate what's happening here. BufferNumRecommended is always 0 and BufferSizeRecommended is always 1. Up until now I've managed to work around this by using BufferNumMin/BufferSizeMin, but I believe this is causing #23.
The above methods should be self-contained, allowing a user to quickly take images/video without the additional boilerplate code to configure ports etc. This means an encoder & renderer will be created on the fly and disposed of when the capturing is complete, but as a result, the Timelapse & continual capture methods will be removed.
Update docs on how to create timelapse/continual modes using the manual construction method.
When explicitly allocating pointers, we need to ensure that these are correctly freed in the event of an exception occurring.
Wrap MMALCheck calls in a try/finally and make sure we free any pointers in use.
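The pattern would look like the following sketch. MMALCheck mirrors the library's status-checking helper, and the native call shown is a stand-in for whichever MMAL invocation uses the allocation:

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: guarantee the unmanaged allocation is released even if MMALCheck throws.
IntPtr ptr = Marshal.AllocHGlobal(size);

try
{
    // Stand-in for a real native call that writes into the allocated buffer.
    MMALCheck(mmal_port_parameter_get(port, ptr), "Unable to get parameter.");
}
finally
{
    Marshal.FreeHGlobal(ptr);
    ptr = IntPtr.Zero;
}
```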
Unit tests have revealed that the PPM encoder results in an Out of Resources error. The file format is inefficient and use of the format may require more resources to be allocated to the GPU, or a smaller image resolution.
Currently when recording a video, frames are recorded as a raw elementary stream and are not multiplexed into a container. The user can play the stream in VLC via the command line: vlc --demux=h264.
This issue is to track a helper method using FFmpeg to convert stdout data frames to a playable file.
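One plausible shape for the helper, using System.Diagnostics.Process to pipe the raw elementary stream into FFmpeg's stdin. The FFmpeg arguments shown are a common stream-copy invocation, and it is assumed that ffmpeg is on the PATH:

```csharp
using System.Diagnostics;

// Sketch: wrap FFmpeg and forward raw H.264 frames from stdin into an MP4 container.
public static Process StartFFmpegMux(string outputPath)
{
    var process = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // Read an H.264 elementary stream from stdin and copy it into an MP4
            // without re-encoding.
            Arguments = $"-f h264 -i - -c:v copy \"{outputPath}\"",
            UseShellExecute = false,
            RedirectStandardInput = true
        }
    };

    process.Start();
    return process;
}
```

Callers would write frame data to `process.StandardInput.BaseStream`, then close stdin and wait for FFmpeg to finish writing the container.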
Begin writing unit tests. These should cover all functionality added in v0.1 and v0.2.
The NuGet reference to StyleCop.Analyzers was removed with e2d1b1b, but the settings file still exists. Maybe I have to install a plugin first.
Documentation would be very useful for contributors. And if active code analyzers are not an option, there should be a detailed article about the intended code style.
As seen in RaspistillYUV, MMALSharp should allow the ability to capture image data directly from the camera component instead of passing off to an image encoder.
Implement connection callback functionality upon enable.
Notes:
Enable MMAL_PARAMETER_ZERO_COPY parameter
As per the MMAL API, we should allow users to set configuration parameters against a Video Renderer component, e.g. X/Y coords with width/height, Opacity. See here
For example, when taking dark photos we need to adjust the framerate, as the shutter speed is constrained by it. This should be done in the global config.
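A hedged sketch of what that global configuration might look like. The property names shown are assumptions about the MMALCameraConfig surface; in MMAL, shutter speed is expressed in microseconds and cannot exceed the frame period:

```csharp
// Sketch only: long exposures require lowering the framerate first, because
// the shutter speed cannot exceed the frame period. Property names are
// assumptions, not the confirmed API.
MMALCameraConfig.StillFramerate = 1;        // allow up to ~1 s per frame
MMALCameraConfig.ShutterSpeed = 900000;     // 0.9 s exposure, in microseconds
MMALCameraConfig.ISO = 800;                 // boost sensitivity for dark scenes
cam.ConfigureCameraSettings();              // re-commit settings to the camera
```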
Curious to find out how people are using the library and to get feedback :)
Added to keep track of all commits regarding Video capture.
This issue does not track Splitter component development.
Features required:
Since bringing in the ability to capture raw image data from the still port, a subtle bug has arisen whereby the frame buffer width/height values are not set correctly from the MMAL_VIDEO_FORMAT_T struct. The struct members seem to be out by 16 bytes. Confirmed the bug by setting values against the struct members in MMAL_AUDIO_FORMAT_T.
Since implementing the Cancellation token work, it seems there is now an issue with the disposal of unmanaged resources when stopping capture via the ProcessAsync method.
As per the work 6by9 has done for raspiraw, there are two additional components available in later firmwares for accessing raw camera data directly from the CSI2 bus.
For reference see https://www.raspberrypi.org/forums/viewtopic.php?t=109137
https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=218458
For example app using these components see https://github.com/6by9/raspiraw/blob/a0d4c31d24531e9a5eafeb129ac194f05526cd31/raspiraw.c
Since adding support for .NET Core, there looks to be a marshalling issue with the structs used for EXIF and annotation.
Hi,
I got the following exception during instantiation of MMALCamera:
An exception of type 'MMALSharp.MMALNoMemoryException' occurred in System.Private.CoreLib.dll but was not handled in user code: 'Out of memory. Unable to create component'
at MMALSharp.MMALCallerHelper.MMALCheck(MMAL_STATUS_T status, String prefix)
at MMALSharp.MMALComponentBase.CreateComponent(String name)
at MMALSharp.MMALComponentBase..ctor(String name)
at MMALSharp.Components.MMALCameraComponent..ctor()
at MMALSharp.MMALCamera..ctor()
at MMALSharp.MMALCamera.<>c.<.cctor>b__38_0()
at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
at System.Lazy`1.CreateValue()
at MMALSharp.MMALCamera.get_Instance()
Do you have any idea what my mistake is?
For information, I'm running the code on Raspbian 9 (RPi 3 rev B) with .NET Core 2.1.
Fabien
I'm getting this error on a Raspberry Pi 3B+ running Raspbian:
Type could not be marshaled because the length of an embedded array instance does not match the declared length in the layout.
My .NET Core app is pretty much the example from the Wiki (although it seems the MMALEncoding values are different?):
MMALCamera cam = MMALCamera.Instance;
AsyncContext.Run(async () =>
{
using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
{
await cam.TakePicture(imgCaptureHandler, MMALEncoding.MMAL_ENCODING_JPEG, MMALEncoding.MMAL_ENCODING_I420);
}
});
cam.Cleanup();
Stack trace:
at System.Runtime.InteropServices.Marshal.StructureToPtr(Object structure, IntPtr ptr, Boolean fDeleteOld)
at MMALSharp.Components.MMALImageEncoder.AddExifTag(ExifTag exifTag)
at MMALSharp.Components.MMALImageEncoder.b__24_0(ExifTag c)
at System.Collections.Generic.List`1.ForEach(Action`1 action)
at MMALSharp.Components.MMALImageEncoder.AddExifTags(ExifTag[] exifTags)
at MMALSharp.Components.MMALImageEncoder.ConfigureOutputPort(Int32 outputPort, MMALEncoding encodingType, MMALEncoding pixelFormat, Int32 quality, Int32 bitrate)
at MMALSharp.MMALCamera.d__25.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at PiCameraProject.Program.<>c__DisplayClass3_0.<b__0>d.MoveNext() in C:\Projects\PiCameraProject\PiCameraProject\Program.cs:line 42
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Nito.AsyncEx.Synchronous.TaskExtensions.WaitAndUnwrapException(Task task)
at Nito.AsyncEx.AsyncContext.<>c__DisplayClass15_0.b__0(Task t)
at System.Threading.Tasks.ContinuationTaskFromTask.InnerInvoke()
at System.Threading.Tasks.Task.<>c.<.cctor>b__276_1(Object obj)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Nito.AsyncEx.Synchronous.TaskExtensions.WaitAndUnwrapException(Task task)
at Nito.AsyncEx.AsyncContext.Run(Func`1 action)
at PiCameraProject.Program.Main(String[] args) in C:\Projects\PiCameraProject\PiCameraProject\Program.cs:line 36
Provide the ability to render preview to the Pi's display. This should be worked on alongside #12.
The Encoder / Decoder components can be fed data from a file instead of from the Camera component directly.
Implement support for EXIF. Should also support user defined EXIF tags.
Update XML documentation for currently implemented Managed areas of MMALSharp.
Suppress warnings in native areas.
Allow image annotation, should feature all options that Raspistill does and also allow the user to enter custom text.
Look at ways in which to overlay additional UI elements to the preview renderer and also image frames.
Examples include fonts and lines to make basic shapes.
Add support for MMALImageDecoder component. This work will be done alongside #18
Don't force a NLog config on applications consuming the library - let them decide where logs should be stored etc.
Add support for Motion Detection using both MJPEG and H.264 encodings.
Phase 1 will support MJPEG using a frame difference algorithm.
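The frame-difference part of Phase 1 can be sketched independently of MMAL as a plain byte-buffer comparison. The threshold values below are illustrative defaults, not tuned parameters:

```csharp
using System;

public static class MotionDetector
{
    // Sketch: naive frame-difference motion detection over two same-size
    // grayscale byte buffers (e.g. the Y plane of consecutive frames).
    public static bool MotionDetected(
        byte[] previous,
        byte[] current,
        int pixelThreshold = 25,      // per-pixel intensity delta to count as "changed"
        double ratioThreshold = 0.01) // fraction of changed pixels to signal motion
    {
        if (previous.Length != current.Length)
            throw new ArgumentException("Frames must be the same size.");

        int changed = 0;
        for (int i = 0; i < current.Length; i++)
        {
            if (Math.Abs(current[i] - previous[i]) > pixelThreshold)
                changed++;
        }

        // Motion if more than ratioThreshold of pixels changed significantly.
        return (double)changed / current.Length > ratioThreshold;
    }
}
```

A real implementation would likely add noise filtering and region-of-interest masks, but this captures the core comparison.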
Explore video encoder conversion passed in from file.