
ljmuastroecology / flirpy


Python library to interact with FLIR camera cores

License: Other

Languages: Python 94.86%, Shell 0.04%, C++ 4.22%, C 0.88%
Topics: python, flir-cameras, cameras, thermal, flir-camera-cores, thermal-camera, thermal-imaging

flirpy's People

Contributors

aglinn, alyetama, jveitchmichaelis, pavanivitor, veech


flirpy's Issues

Flirpy Lepton3.5 breakoutboard RPI4

I am using an RPi 4 with a breakout board and trying to grab images from a Lepton 3.5 using flirpy.
I am getting the following error when I try to run the example program capture_lepton.py.
Please let me know how to solve it.
Error:

Traceback (most recent call last):
  File "/home/pi/flirpy/examples/capture_lepton.py", line 4, in <module>
    image = cam.grab()
  File "/home/pi/.local/lib/python3.7/site-packages/flirpy/camera/lepton.py", line 156, in grab
    self.setup_video(device_id)
  File "/home/pi/.local/lib/python3.7/site-packages/flirpy/camera/lepton.py", line 97, in setup_video
    raise ValueError("Lepton not connected.")
ValueError: Lepton not connected.

"pip install flirpy" breaks previously installed OpenCV version 3.4

After running "pip install flirpy" I get an error from cv2.imshow() saying:

The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'

This was working before the pip install. I'm using OpenCV 3.4.

Error "'split_file' is not defined" while calling flirpy.io.teax.process_file()

Thank you for developing this package! It really is very useful.

I want to convert .TMC images to more common file formats (such as .tiff or .jpg). While trying to use the flirpy.io.teax module for this purpose, I came across an issue I cannot resolve. No matter how I call process_file(), I always receive the error: NameError: name 'split_file' is not defined.

Has anyone else ever encountered a similar problem?

RAW-16 Not Working on OS X

Hi,

First let me thank you for creating this repository. It is very useful and the code is very clean. I am trying to get the RAW pre AGC 16-bit data, but using the provided instructions I am only getting uint8:

camera = Boson()
camera.setup_video(device_id=2)
image = camera.grab()
print(image.dtype)
uint8

I did check that the following lines return 0 so I am not sure why this is happening:

# The order of these calls matters!

self.cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"Y16 "))
self.cap.set(cv2.CAP_PROP_CONVERT_RGB, False)

P.S.: I am running this on a Mac.

How to reduce the buffer size of flir boson

I connect a depth camera and a Boson 320 at the same time and find that the Boson always shows frames about 1 second late. I faced the same problem with a USB camera and solved it with usbCam.set(cv2.CAP_PROP_BUFFERSIZE, 1), i.e. by reducing the camera buffer size to 1. I think I can solve this problem the same way, but I do not know how to reduce the buffer size on the Boson. Does anyone know how to reduce the buffer size? Thanks a lot :-D
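A possible workaround, as a minimal sketch: flirpy's Boson appears to hold its OpenCV capture as camera.cap once setup_video() has been called (the snippets quoted elsewhere on this page use self.cap), so you may be able to set the buffer size on that object directly. Whether the backend actually honours CAP_PROP_BUFFERSIZE is driver-dependent, so treat this as an experiment rather than a confirmed fix:

import cv2
from flirpy.camera.boson import Boson

camera = Boson()
camera.setup_video()  # opens the underlying cv2.VideoCapture

# Ask the backend for a 1-frame buffer; not all capture backends support this.
if camera.cap is not None:
    camera.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

image = camera.grab()
camera.close()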

Got flirpy to work just fine except no color

I'm running flirpy on my Raspberry Pi 4 using the example code with a Lepton on a PureThermal 2. It works fine, except the picture is just black and white. How do I add false color?
Here is my code:
import cv2
from flirpy.camera.lepton import Lepton
import numpy as np

with Lepton() as camera:
    while True:
        img = camera.grab().astype(np.float32)

        # Rescale to 8 bit
        img = 255 * (img - img.min()) / (img.max() - img.min())

        cv2.imshow('Lepton', img.astype(np.uint8))
        if cv2.waitKey(1) == 27:
            break  # esc to quit

cv2.destroyAllWindows()
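A later issue on this page shows one way to add false colour: apply an OpenCV colormap to the rescaled 8-bit frame before displaying it. As a small sketch on top of the code above:

# Apply a colormap to the rescaled 8-bit image - try COLORMAP_JET if INFERNO doesn't work.
img_col = cv2.applyColorMap(img.astype(np.uint8), cv2.COLORMAP_INFERNO)
cv2.imshow('Lepton', img_col)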

Flir Tau2 support

In the docs it is mentioned that this library supports FLIR Tau cameras. Does it support the FLIR Tau2 too?

Multiple Flir Lepton Image Streaming

Hi,

I am having trouble streaming images from two FLIR Lepton 3.5s connected via PureThermal 2 USB boards to a Windows PC.
I'm fairly new to programming and Python.
I intend to get the image array via flirpy, colour it with a matplotlib cmap, and read frames with OpenCV.
I tried reading directly from OpenCV, but the image colouring is not as good as through matplotlib.

I was able to stream one image with one camera, but I'm unsure how I can do it for two or more cameras.

I wrote this simple code to stream 2 images:

from flirpy.camera.lepton import Lepton
from matplotlib import pyplot as plt
import numpy as np
import cv2

camera0 = Lepton(1)
camera1 = Lepton(2)

image0 = camera0.grab()
image1 = camera1.grab()

print(image0)

plt.imshow(image0, cmap="inferno" )#, interpolation='nearest'
plt.show()

print(image1)

plt.imshow(image1, cmap="inferno" )#, interpolation='nearest'
plt.show()

camera0.close()
camera1.close()

checksum/escaping inconsistent in Python 2

In some instances reading camera configuration can fail on Python 2 when Python 3 works as expected:

e.g.

Python 3.5.3 (default, Sep 27 2018, 17:25:39) 
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from flirpy.camera.boson import Boson
>>> cam = Boson()
>>> cam.get_camera_serial()
11482
>>> cam.get_sensor_serial()
1647278
Python 2.7.13 (default, Sep 26 2018, 18:42:22) 
[GCC 6.3.0 20170516] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from flirpy.camera.boson import Boson
>>> cam = Boson()
>>> cam.get_camera_serial()
11482
>>> cam.get_sensor_serial()
WARNING:flirpy.camera.boson:Invalid checksum
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/flirpy/camera/boson.py", line 286, in get_sensor_serial
    return struct.unpack(">I", res)[0]
struct.error: unpack requires a string argument of length 4

Issues with lepton.py -> struct unpack and understanding the metadata

I was having trouble running the lepton.py module with a Lepton 2.5 with a PureThermal2 board via USB on a Linux Ubuntu 18.04. I came across some issues and wanted to both point them out and ask some questions.

  1. In line 118 of lepton.py, you pass image[-2, :] to struct.unpack, where I believe you are trying to extract the last two rows; a colon should be added so that image[-2:, :] is passed to struct.unpack instead.
    res = struct.unpack("<2cII16x4h6xIh2xh8xhI4xhhhhhh64xI172x", image[-2,:]) # Error
  2. When passing the last two rows of the image, line 118 runs (it was failing before), but the byte string unpacking doesn't really make sense in terms of the values extracted.

I was wondering whether the byte string is still relevant, and how exactly I should be reading in the camera metadata for pixel-level temperature calculation.

Thanks for the help!

lepton.find_video_device() does not find Lepton when picamera connected

dev.append(i)

Automatic video capture device enumeration using lepton.find_video_device() on a Raspberry Pi + Lepton 3.5 + PureThermal setup does not work for me when a picamera is also connected.

I guess the problem is at line 66: 'i' is the order in which the video device was found, which depends on the folder listing order. However, cv2.VideoCapture(id) expects an id equal to the number at the end of the device filename (as in /sys/class/video4linux/).

Background info:
Video devices:
['/sys/class/video4linux/video12', '/sys/class/video4linux/video1', '/sys/class/video4linux/video10', '/sys/class/video4linux/video2', '/sys/class/video4linux/video11', '/sys/class/video4linux/video0']

print d => "[1, 5]" based on filtering capabilities.

The correct id would be '0' (../video0) and cam = camera.setup_video(0) is working.

Question about the generalization of this code

Hello. Does this code work with the FLIR E8-XT thermal camera? It seems the code only supports certain cameras (maybe I am wrong). Is there any example code showing use with a more general FLIR thermal camera? Thanks.

No output generated if output path contains special characters

Hey jveitchmichaelis,

I found another little bug in the split_seqs script (as of flirpy 0.1.1). I had the character "ä" (used quite often in German) in my output path, and the script stopped generating any preview JPEGs or TIFFs, without throwing any exception.

You can likely reproduce the error by running:

python split_seqs.py --input "*.SEQ" --output "outä"

It is not a big deal as I can just rename my paths. But just to let you know.

Regards,
Lukas

AttributeError: 'str' object has no attribute 'decode'

Hi, I am running the example code on macOS 11.3.1, opencv-python 4.5.2.52 (also the headless build), Python 3.9.5, with a PureThermal 2 and Lepton 2.5.

from flirpy.camera.lepton import Lepton

cam = Lepton()
image = cam.grab()
print(image)
print(cam.frame_count)
print(cam.frame_mean)
print(cam.ffc_temp_k)
print(cam.fpa_temp_k)
cam.close()

The error shown is AttributeError: 'str' object has no attribute 'decode'.

Am I missing something? I already installed all the required packages.
I also tried running this code, but the result is the same:

import cv2
import numpy as np
from flirpy.camera.lepton import Lepton

with Lepton() as camera:
    while True:
        img = camera.grab().astype(np.float32)

        # Rescale to 8 bit
        img = 255 * (img - img.min()) / (img.max() - img.min())

        # Apply colourmap - try COLORMAP_JET if INFERNO doesn't work.
        # You can also try PLASMA or MAGMA
        img_col = cv2.applyColorMap(img.astype(np.uint8), cv2.COLORMAP_INFERNO)

        cv2.imshow('Lepton', img_col)
        if cv2.waitKey(1) == 27:
            break  # esc to quit

cv2.destroyAllWindows()

Thanks for helping and sharing.

Enumerate all devices, not just the first one

See discussion in: #19

Currently the device finding code locates the first camera (e.g. if the user doesn't specify a device). It's also a good idea to return all the cameras for those folks who have multiple cameras connected. The proposal is to extend find_devices to instead return a list.

The default action if calling setup_video(None) will be to take the first value in that list, if it exists. Or if you have several cameras, you would be able to do:

devices = boson.find_devices()
cameras = []
for dev in devices:
    camera = Boson()
    camera.setup_video(dev)
    cameras.append(camera)

The only trouble with this approach is that it's difficult to associate the port numbers with the cameras (except maybe on Linux). I don't know if it will be trivial on Windows without digging into the COM.

We will probably need to do something like the following (see the sketch after this list):

  • Query all serial devices with the correct pid/vid and make a lookup table with serial number + port
  • Query all video devices and also check the serial numbers, associate with the lookup table
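A minimal sketch of the serial-side half of that lookup, assuming pyserial is available; the VID/PID values below are placeholders rather than numbers taken from flirpy, and the video-device half plus the actual flirpy integration are left out:

# Build a {serial_number: port} lookup for serial devices matching a USB VID/PID.
from serial.tools import list_ports

EXAMPLE_VID = 0x09CB  # placeholder vendor id
EXAMPLE_PID = 0x4007  # placeholder product id

def find_serial_ports(vid=EXAMPLE_VID, pid=EXAMPLE_PID):
    lookup = {}
    for port in list_ports.comports():
        if port.vid == vid and port.pid == pid:
            lookup[port.serial_number] = port.device
    return lookup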

Convert sequence to raw failling

Dear all,
Congratulations on the great software!
I was trying to use the split_seq.py script to convert a sequence made by a FLIR T640.
I am facing the following problem:

  File "split_seq.py", line 111, in <module>
    folders = splitter.process(files)
  File "/home/murilo/anaconda3/lib/python3.7/site-packages/flirpy/io/seq.py", line 79, in process
    self._process_seq(seq, folder)
  File "/home/murilo/anaconda3/lib/python3.7/site-packages/flirpy/io/seq.py", line 200, in _process_seq
    image = frame.get_radiometric_image(meta)
  File "/home/murilo/anaconda3/lib/python3.7/site-packages/flirpy/io/fff.py", line 49, in get_radiometric_image
    image = raw2temp(self.get_image(), meta)
  File "/home/murilo/anaconda3/lib/python3.7/site-packages/flirpy/io/fff.py", line 56, in get_image
    offset = self._find_data_offset(self.data)
  File "/home/murilo/anaconda3/lib/python3.7/site-packages/flirpy/io/fff.py", line 46, in _find_data_offset
    return res.end()+14
AttributeError: 'NoneType' object has no attribute 'end'

I could not find much information on what could be wrong. It even converts the first frame correctly to .fff and .txt.
I would appreciate any tip, such as what this offset is and whether I can find it in any other way.
Thank you very much!
Murilo

AttributeError: type object 'Boson' has no attribute 'logger'

Branch: main

Describe the Issue
While using the Boson camera, when I try to grab a frame with the grab() function, the error below is raised (screenshot originally attached).

To Reproduce
Steps to reproduce the behavior (script):

import cv2
from flirpy.camera.boson import Boson
import numpy as np

with Boson() as camera:
    while True:

        img = camera.grab().astype(np.float32)
        img = 255 * (img - img.min()) / (img.max() - img.min())
        img = img.astype(np.uint8)

        if cv2.waitKey(1) == 27:
            break  # esc to quit

    cv2.destroyAllWindows()

Expected behavior
When the code is running properly, the expected behaviour is to grab the camera frames while looping.

Proposed solution
In boson.py I commented out the @classmethod decorator on line 130, just before def find_video_device(self), and the issue was resolved.

Multiple Boson under win32 - find_cameras returns only first

I have several Bosons on a Windows 10 system. Auto-detection does not seem to work:
"find_cameras.exe" returns only one camera.
Is that correctly observed?

If so, I would like to suggest that a more appropriate behaviour would be to return a list of all devices.

Minor bug in line 185, seq.py, Fff call with missing parameters

Hi,
this is a really cool and handy package. Thank you!
Nevertheless, there is a minor bug in line 185 of seq.py.

Fff(chunck)

should be called with parameters such as height and width, otherwise the package will not work for cameras with different image sizes, e.g.

Fff(chunck, height=self.height, width=self.width)

Best regards

error converting A655SC seq file to readable images

Hello,

Using Python 3.8.7 on a windows 10 laptop
With this command:
python split_seqs.py --no_sync_rgb --no_export_preview --no_export_tiff
I can get the raw data: '.ttt' files and the metadata .txt files out of the seq file.

When I try to get the thermal images, running like this:
python split_seqs.py --no_sync_rgb --no_export_preview
I get the following error:

Traceback (most recent call last):
  File "split_seqs.py", line 110, in <module>
    folders = splitter.process(files)
  File "C:\Users\daale010\AppData\Local\Programs\Python\Python38\lib\site-packages\flirpy\io\seq.py", line 79, in process
    self._process_seq(seq, folder)
  File "C:\Users\daale010\AppData\Local\Programs\Python\Python38\lib\site-packages\flirpy\io\seq.py", line 200, in _process_seq
    image = frame.get_radiometric_image(meta)
  File "C:\Users\daale010\AppData\Local\Programs\Python\Python38\lib\site-packages\flirpy\io\fff.py", line 49, in get_radiometric_image
    image = raw2temp(self.get_image(), meta)
  File "C:\Users\daale010\AppData\Local\Programs\Python\Python38\lib\site-packages\flirpy\io\fff.py", line 56, in get_image
    offset = self._find_data_offset(self.data)
  File "C:\Users\daale010\AppData\Local\Programs\Python\Python38\lib\site-packages\flirpy\io\fff.py", line 46, in _find_data_offset
    return res.end()+14
AttributeError: 'NoneType' object has no attribute 'end'

I uploaded the file to wetransfer,
https://we.tl/t-2WoLGFZzAD

I hope it's something small and that I can use your library to process images from this camera.

VideoCapture.set type error with bool arguments

Seems to be either an issue on ARM or an issue with OpenCV 4.3.

On OpenCV 4.2 / x86_64 / Python 3.7

>>> import cv2
>>> cap = cv2.VideoCapture(0)
>>> cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
True
>>> cap.set(cv2.CAP_PROP_CONVERT_RGB, 1)
True
>>> cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)
True

However using False on OpenCV 4.3/aarch64/Python 3.7 now causes a TypeError.

Should probably move everything to 0/1 which should be back compatible at least.

Lepton v3.5 FFC control

Hi guys,
thank you for the great work!
Is there a way to control the Lepton 3.5 FFC with your library? I need to set the FFC mode to manual (so it doesn't happen every 3 minutes or every 1.5 °C, but only when I call it). I would also like to change the number of integrated frames.

All the solutions I found online require programming in C which I don't know. Hopefully your project will finally solve my issue.

how to get temperature?

Hello, is there any way to get the temperature from the pixel values? If yes, can you help?
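For a radiometric Lepton, the answer suggested by other issues on this page (e.g. the "Lepton 3.5 Windows" one below) is that grab() already returns a 2D array of temperatures in centikelvin, so the conversion is just a rescale. A minimal sketch, assuming the camera is in that radiometric/TLinear mode:

import numpy as np
from flirpy.camera.lepton import Lepton

with Lepton() as camera:
    raw = camera.grab()  # counts are centikelvin when the camera is radiometric
    temp_c = raw.astype(np.float32) / 100.0 - 273.15  # centikelvin -> Celsius
    print(temp_c.min(), temp_c.max())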

Tau2 with TeaxGrabber - timeouts in fpa and housing temperatures

Hello,

I'm using Tau2 with the TeaxGrabber.

get_fpa_temperature() and get_housing_temperature() sometimes return the correct values, and sometimes return errors such as:

Initial packet byte incorrect. Byte was: 67
Error reply from camera. Try re-sending command, or check parameters.

Initial packet byte incorrect. Byte was: 93
Error reply from camera. Try re-sending command, or check parameters.

Initial packet byte incorrect. Byte was: 0
Error reply from camera. Try re-sending command, or check parameters.
.....

I noticed no pattern - sometimes I'm able to get 5 measurements in a row, sometimes it fails after just 1 measurement.
I tried adding sleep() between sending and receiving, without results.
I tried reconnecting the Tau2 and resetting the PC a couple of times.

Thanks for the great package (again :-))

flirpy/flirpy/camera/tau.py

Lines 153 to 176 in b48b643

def get_fpa_temperature(self):
    function = ptc.READ_SENSOR_TEMPERATURE
    argument = struct.pack(">h", 0x00)
    self._send_packet(function, argument)
    res = self._read_packet(function)
    temperature = struct.unpack(">h", res[7])[0]
    temperature /= 10.0
    log.info("FPA temp: {}C".format(temperature))
    return temperature

def get_housing_temperature(self):
    function = ptc.READ_SENSOR_TEMPERATURE
    argument = struct.pack(">h", 0x0A)
    self._send_packet(function, argument)
    time.sleep(1)
    res = self._read_packet(function)
    temperature = struct.unpack(">h", res[7])[0]
    temperature /= 100.0
    log.info("Housing temp: {}C".format(temperature))
    return temperature
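As a stop-gap while the packet handling is investigated, a simple retry wrapper can paper over the intermittent bad replies. This is only a sketch, assuming a failed read either raises an exception (e.g. struct.error on a short reply) or returns None:

import time

def read_with_retry(getter, retries=5, delay=0.2):
    # Retry a flaky temperature read a few times before giving up.
    last_error = None
    for _ in range(retries):
        try:
            value = getter()
            if value is not None:
                return value
        except Exception as exc:
            last_error = exc
        time.sleep(delay)
    raise RuntimeError("Temperature read failed after retries") from last_error

# Usage, assuming an already-connected camera instance:
# fpa_temp = read_with_retry(camera.get_fpa_temperature)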

Lepton 3.5 Windows

Hello,

I am using a FLIR Lepton 3.5 and everything worked fine until now. I want to do my own calibration, and the images I get are always a 2D matrix with temperature values in centikelvin. Is there any way to get the counts (raw data) instead of the calculated temperature values?

Cannot capture from lepton

Hello,
I tried flirpy today and noticed some weird issues. Honestly, I don't know how to fix them - please help! (Details and screenshot attached below.)

Device/installation details:
Host OS: Ubuntu 20.04.1 (dual boot)
FLIR device: FLIR Lepton 3.5
I/O hardware: Groupgets PureThermal 2
OpenCV 4.3.0 installed on the host (although not needed if opencv-python-headless is installed)
Python: 3.8.2 & pip: v20... something

Screenshot (tried both USB 3.0 and USB 2.0) originally attached.

No matching distribution for opencv-python-headless

I am working with a Lepton 3.5 camera on a Raspberry Pi 4. I cloned the flirpy GitHub repository.
I am getting the following error.
Why is that? Any solutions, please?

pi@raspberrypi:~/flirpy $ pip install flirpy
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting flirpy
Using cached flirpy-0.1.1-py2.py3-none-any.whl (10.2 MB)
Requirement already satisfied: tqdm<5.0,>=4.42 in /home/pi/.local/lib/python3.7/site-packages (from flirpy) (4.47.0)
Requirement already satisfied: pyserial<4.0,>=3.4 in /usr/lib/python3/dist-packages (from flirpy) (3.4)
ERROR: Could not find a version that satisfies the requirement opencv-python-headless<5.0,>=4.2 (from flirpy) (from versions: 3.4.2.16, 3.4.2.17, 3.4.3.18, 3.4.4.19, 3.4.6.27, 3.4.7.28, 4.0.1.24, 4.1.0.25, 4.1.1.26)
ERROR: No matching distribution found for opencv-python-headless<5.0,>=4.2 (from flirpy)

PureThermal 2 - FLIR Lepton Temperature Measurement

Hi @jveitchmichaelis, I am using a PureThermal 2 - FLIR Lepton camera for thermal imaging. But when I capture an image, a lot of metadata is missing. How can I capture an image, or use my camera, so that it is compatible with exiftool? When I run exiftool it just shows the following details:

ExifTool Version Number : 10.80
File Name : my_photo-1.jpg
Directory : .
File Size : 5.7 kB
File Modification Date/Time : 2020:05:04 20:28:10+05:30
File Access Date/Time : 2020:05:04 20:28:47+05:30
File Inode Change Date/Time : 2020:05:04 20:28:10+05:30
File Permissions : rw-rw-r--
File Type : JPEG
File Type Extension : jpg
MIME Type : image/jpeg
JFIF Version : 1.02
Resolution Unit : inches
X Resolution : 120
Y Resolution : 120
Image Width : 160
Image Height : 120
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:2 (2 1)
Image Size : 160x120
Megapixels : 0.019

Also, how can I use flirpy with my camera to measure the temperature in degrees Celsius?

Thank you.

How to read a fff file without TypeError?

Hi, thanks for your devotion to this wonderful library, flirpy.

I ran the split_seqs code to convert a seq file into many fff files.
After processing, I tried to read one fff file with the Fff class, imported via from flirpy.io.fff import Fff.

My code:
from flirpy.io.fff import Fff
a = Fff('./test/frame_000000.fff', height=480, width=640)

The following TypeError occurs:
TypeError: __init__() got an unexpected keyword argument 'height'

What should I do?

Cameras aren't released automatically

Repeatedly instantiating the same camera causes V4L2 to panic because the device isn't released. This can cause test fixtures to fail.

TODO: add a release method to camera.core.close() which can be implemented in each subclass (e.g. Boson).
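A minimal sketch of what that could look like; the class and attribute names here are illustrative rather than flirpy's actual API:

class Core:
    def __init__(self):
        self.cap = None  # e.g. a cv2.VideoCapture, opened by the subclass

    def close(self):
        # Release any held video device; subclasses extend this (serial ports, etc.)
        if self.cap is not None:
            self.cap.release()
            self.cap = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()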

Change camera settings

Hello,
first of all, great job on this library!

Second: is there a way to send config commands to the camera? I am currently using a PureThermal Mini with a Lepton 3.5 and a Raspberry Pi. I would like to know if there is a way to set the emissivity from the flirpy library. Checking the files, I only saw code for getting these parameter values.

Thanks in advance,
Eduardo

Test cases for escaping behaviour on Boson

We should probably have unit tests to ensure that (a) received packets are unstuffed correctly and (b) we are correctly escaping packets sent to the camera.

For example this used to be a known problematic case: b'\x8e\x00\x00\x00\x00\x00\x00\x05\x000\x00\x00\x00\x00\x01\xaf\x9e\x91\x99\xae'
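A rough pytest sketch of the round-trip half of this; _escape and _unescape are hypothetical names standing in for whatever flirpy actually calls its stuffing/unstuffing helpers:

import pytest
from flirpy.camera.boson import Boson

# Payloads chosen to include bytes that typically need escaping.
@pytest.mark.parametrize("payload", [b"", b"\x00\x01\x02", b"\x8e\x9e\xae" * 4])
def test_escape_roundtrip(payload):
    stuffed = Boson._escape(payload)            # hypothetical helper name
    assert Boson._unescape(stuffed) == payload  # hypothetical helper name

The known problematic packet above would also make a good fixture for the unstuffing direction, once the expected decoded bytes are pinned down.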

Invalid checksum on Boson (possibly related to unstuffing)

Noticed this when reading certain FPA temperature outputs with a Boson. I've not investigated what the byte values are - the temperature itself is correct, but the CRC comparison fails. Presumably this is due to an error with bitstuffing (e.g. if there is an escaped character that we're not properly handling). No reason to assume the CRC is actually wrong from the camera, so it's a bug on flirpy's end. Could also be something odd like the CRC itself contains a string that needs escaping?

Might also be that we're exiting the receive loop too early when we think the message is complete (end byte seen) and it's actually not.

Not a critical error for the moment as you can suppress the warnings... but should fix at some point.

Boson: Disable all Non-Uniformity Correction

First, thanks a ton for the wonderful codebase -- made my life so much easier.

I wanted to know if it is possible to disable all corrections on the Boson 640 camera. I want to get raw data for performing a custom NUC, hence the question.

Thank you.

RGB synchronization fails if there are more IR than RGB frames

I wanted to point out a bug in the split_seqs script. The synchronization between IR and RGB works fine as long as there are more RGB than IR frames. However, if there are more IR than RGB frames, the synchronization logic fails.

I ran into this issue after switching from the 8 Hz Flir Duo Pro R to the 30 Hz version. As the visual stream is at 29.87 Hz, there are more IR than RGB frames generated.

Can't capture image on raspberry pi

Installed via pip install flirpy - it appears to have installed with no issues, though I needed to install exiftool manually. Is this related to bug #29?

It appears that all the tests pass when I run the pytest suite as described. But when I run the following example code:

from flirpy.camera.lepton import Lepton
camera = Lepton()
img=camera.grab()
camera.close()

I receive this warning on the command line:
[ WARN:0] global /tmp/pip-wheel-1xaftst0/opencv-python-headless/opencv/modules/videoio/src/cap_v4l.cpp (893) open VIDEOIO(V4L2:/dev/video1): can't open camera by index

In my Jupyter notebook I get this error:

TypeError                                 Traceback (most recent call last)
in
      2
      3 camera = Lepton()
----> 4 img=camera.grab()
      5 camera.close()

~/.local/lib/python3.7/site-packages/flirpy/camera/lepton.py in grab(self, device_id, telemetry_mode, strip_telemetry)
    154
    155     if self.cap is None:
--> 156         self.setup_video(device_id)
    157
    158     res, image = self.cap.read()

~/.local/lib/python3.7/site-packages/flirpy/camera/lepton.py in setup_video(self, device_id)
    110     # The order of these calls matters!
    111     self.cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"Y16 "))
--> 112     self.cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
    113
    114 def decode_telemetry(self, image, mode="footer"):

TypeError: Argument 'value' must be double, not bool

Getting "[WinError 206] The filename or extension is too long"

Hey jveitchmichaelis,

thanks for this great tool. It helps me a lot in my application. I ran into the following issue on most of the *.SEQ files that I have. Interestingly, the error does not occur on SEQ files which contain a smaller number of frames, e.g. 200 or 400. However, on the larger SEQ files this happens.

C:\Users\Lukas\Desktop\flir test>python split_seqs.py -i "DJI_0052.SEQ"
INFO:__main__:Loading: DJI_0052.SEQ
INFO:flirpy.io.seq:Splitting 1 files
  0%|                                                                                            | 0/1 [00:00<?, ?it/s]INFO:flirpy.io.seq:Splitting DJI_0052.SEQ into C:\Users\Lukas\Desktop\flir test\DJI_0052
2697it [04:31,  9.94it/s]
INFO:flirpy.io.seq:Extracting metadata
  0%|                                                                                            | 0/1 [04:32<?, ?it/s]
Traceback (most recent call last):
  File "split_seqs.py", line 92, in <module>
    folders = splitter.process(files)
  File "C:\Anaconda3\lib\site-packages\flirpy\io\seq.py", line 96, in process
    self.exiftool.write_meta(filemask)
  File "C:\Anaconda3\lib\site-packages\flirpy\util\exiftool.py", line 78, in write_meta
    res = subprocess.call(cmd, cwd=cwd, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
  File "C:\Anaconda3\lib\subprocess.py", line 267, in call
    with Popen(*popenargs, **kwargs) as p:
  File "C:\Anaconda3\lib\subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "C:\Anaconda3\lib\subprocess.py", line 997, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 206] Der Dateiname oder die Erweiterung ist zu lang

The error message in English is "The filename or extension is too long".

Do you have an idea why this happens? I also tried executing the same operation directly in the root directory of my hard drive to ensure short path names, but I get the same error.

Thank you so much.

Lukas

Flirpy doesn't correctly detect capture capability (Lepton)

Hi, my system used to work fine, but now find_video_device() cannot find the correct camera device.

About the setup:

  • Raspberry Pi 4 B+
  • Lepton 3.5, PureThermal 2.0
  • flirpy updated to 0.1.0

/dev/video* folder looks like this:

crw-rw----+ 1 root video 81, 0 Aug 17 18:45 /dev/video0
crw-rw----+ 1 root video 81, 1 Aug 17 18:45 /dev/video1
crw-rw----+ 1 root video 81, 2 Aug 17 18:16 /dev/video10
crw-rw----+ 1 root video 81, 3 Aug 17 18:16 /dev/video11
crw-rw----+ 1 root video 81, 5 Aug 17 18:16 /dev/video12
crw-rw----+ 1 root video 81, 4 Aug 17 18:16 /dev/video2

I have a pi_camera and a PureThermal 2.0 attached.

The error message I get (please note that find_video_device() finds device #1):
VIDEOIO ERROR: V4L2: Could not obtain specifics of capture window.
VIDEOIO ERROR: V4L: can't open camera by index 1
/dev/video1 does not support memory mapping

Manually setting the device ID to 0, the camera works fine.

Any idea is appreciated,
Zoltán

Tau Camera API documentation

Hello!

I am currently working with the Tau2 camera. However, I see that the API is quite different from the Boson or the Lepton (there is no grab() function, for example). Could you share the workflow (pipeline) that should be followed to capture a frame and display it?

Thanks!

Installing flirpy inside conda, split_seqs command is not recognized

Attempted to install flirpy inside a conda environment on both Windows 10 and Ubuntu 18.04. In both cases, after installing through pip, no split_seqs command is found.

I can import flirpy inside Python without problems, but after reading the split_seq example, I'm not sure what to do to split a seq file into images through Python.
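For splitting directly from Python, something along these lines should work. This is a sketch that assumes the class used by the split_seqs script is flirpy.io.seq.Splitter taking an output directory, which matches the splitter.process(files) calls visible in the tracebacks elsewhere on this page:

from glob import glob
from flirpy.io.seq import Splitter  # class name assumed from the split_seqs tracebacks

files = glob("*.SEQ")
splitter = Splitter("./output")     # output directory argument is an assumption
folders = splitter.process(files)   # returns the folders that were written
print(folders)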

Use case for raw2temp

I'm wondering if you can provide an example use case of raw2temp... information online is scarce on converting Boson data to temperatures.
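Not an authoritative answer, but for context: in the FFF pipeline (see the tracebacks in other issues here), raw2temp(raw_image, meta) is called with the Planck calibration constants that exiftool extracts from the file metadata. The underlying relation is the standard FLIR raw-to-temperature formula; the sketch below ignores emissivity and atmosphere corrections, and the meta key names are illustrative rather than flirpy's exact ones:

import numpy as np

def raw2temp_basic(raw, meta):
    # Planck calibration constants from the camera metadata (illustrative key names).
    R1 = meta["PlanckR1"]
    R2 = meta["PlanckR2"]
    B = meta["PlanckB"]
    O = meta["PlanckO"]
    F = meta["PlanckF"]
    # Raw counts -> Kelvin via the Planck curve, then Kelvin -> Celsius.
    t_kelvin = B / np.log(R1 / (R2 * (raw + O)) + F)
    return t_kelvin - 273.15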

can't open file 'split_seqs'

Hi
I'm used to grabbing frames from SEQs in R for further analysis, but I need to do it in Python now. Python is new to me.

I successfully ran "pip install flirpy" in conda. In PyCharm I can see that flirpy is in the list of packages in the interpreter window. However, when I run "python split_seqs -h" in conda, I get "python: can't open file 'split_seqs': [Errno 2] No such file or directory".

Is something wrong with my install?
Regards

TeaxGrabber with ThermalGrabber USB outputs a scrambled image

Hello,
First , thanks for all the work! Really nice package.

I'm trying to grab images with the Tau2 with ThermalCapture Grabber USB.
I cloned the latest version of flirpy (0.2.3), set up a venv, installed using requirements.txt and tried running:

import numpy as np
from flirpy.camera.tau import TeaxGrabber

camera = TeaxGrabber()
image = camera.grab()
camera.close()
np.save('test', image)

The resulting image is scrambled. I think this is a sync issue of some sort.
When I use ThermalCapture Viewer the output is OK, so I'm positive the camera and grabber are working.

I attached the output of the code. When viewed in ThermalCapture Viewer it displays perfectly.
test_TeaxGrabber_0.2.3.zip

Thank you again,
with regards,
Navot

Wrong header information to read the fff file.

Thank you for your kind help last time.

I have a new issue: when I read the raw numpy array with the get_image() function, the output is wrong.

(output example originally attached as an image)

This is the output example, which has one vertical stripe; please look at the upper side, where there are two black rows with some white scatter points. (Note that all values above 12000 were cut off for better viewing, and normalization was applied to stretch the values to 0-255.)

Probably, our seq values lie in the range 6500-16300. The white scatter points have very high values (e.g. 13107, 16242, etc.).
Maybe the white scatter points were originally included in the whole image, but some unknown reason induces this phenomenon.

I attached the fff file with metadata and its output image.

For reference: I confirmed the data was correct when I converted the seq file to .mat files, which are read by the hdf5storage library.

https://drive.google.com/drive/folders/1LE0rytJa5Fz-Rs1u38qBDJ0JJzfoK7Xv?usp=sharing

Add generic threaded camera interface

This would be convenient for applications where you want to query a camera sporadically, whilst also having the ability to monitor internal parameters like temperature. In this case you really want the camera to be constantly serviced (i.e. images retrieved) in a separate thread which ensures that you're always grabbing the latest frame.

It's also useful for cameras like TeAx's thermalgrabber which is basically a streaming system and really ought to be processed continuously in a separate thread.
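A minimal sketch of such a wrapper, generic rather than flirpy-specific; it assumes the wrapped camera object exposes a blocking grab():

import threading

class ThreadedCamera:
    # Keep a camera serviced in a background thread and always hand back the latest frame.
    def __init__(self, camera):
        self.camera = camera  # any object with a grab() method
        self._latest = None
        self._lock = threading.Lock()
        self._running = True
        self._thread = threading.Thread(target=self._worker, daemon=True)
        self._thread.start()

    def _worker(self):
        while self._running:
            frame = self.camera.grab()
            with self._lock:
                self._latest = frame

    def latest_frame(self):
        with self._lock:
            return self._latest

    def stop(self):
        self._running = False
        self._thread.join()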

camera.boson - AttributeError: type object 'Boson' has no attribute 'logger'

I get the following error when attaching to a Boson camera:

File "C:\Program Files\Python37\lib\site-packages\flirpy\camera\boson.py", line 146, in find_video_device
self.logger.info("Device ID: {}".format(device_id))
AttributeError: type object 'Boson' has no attribute 'logger'

The reason is that

@classmethod
def find_video_device(self):

does not have access to the self object where the logger is instantiated as self.logger = logging.getLogger(__name__).

Repair:
If the @classmethod decorator is removed, the code works in my particular case. I am not able to say whether this is the best or correct modification.

Thanks for providing the library and great if someone can update the code.
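For context, a minimal illustration of the underlying Python behaviour (nothing flirpy-specific): a classmethod receives the class, not an instance, so attributes set in __init__ are not visible on the argument it is given.

import logging

class Demo:
    def __init__(self):
        self.logger = logging.getLogger(__name__)  # instance attribute

    @classmethod
    def broken(cls):
        cls.logger.info("boom")  # AttributeError: type object 'Demo' has no attribute 'logger'

    def works(self):
        self.logger.info("ok")   # fine when called on an instance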
