
A Python package for fast and robust Image Stitching

License: Apache License 2.0

computer-vision image-stitching opencv-python panorama python

stitching's Introduction

stitching

A Python package for fast and robust Image Stitching.

Based on opencv's stitching module and inspired by the stitching_detailed.py python command line tool.

(example images: inputs and the stitched result)

Installation

Use the docker image

or pip to install stitching from PyPI:

pip install stitching

Usage

Python CLI

The command line interface (CLI) is available after installation:

stitch -h shows the help

stitch * stitches all files in the current directory

stitch img_dir/IMG*.jpg stitches all files in the img_dir directory starting with "IMG" and ending with ".jpg"

stitch img1.jpg img2.jpg img3.jpg stitches the 3 explicit files in the current directory

Enable verbose mode with stitch * -v. This creates a folder where all intermediate results are stored, so you can find out where problems with your images occur, if any.
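How such wildcard patterns resolve to file lists can be illustrated with Python's standard-library glob module — a minimal sketch using hypothetical file names (the CLI's actual pattern handling may differ in detail):

```python
import glob
import os
import tempfile

# Create a few sample files to match against (hypothetical names).
tmp = tempfile.mkdtemp()
for name in ["IMG_001.jpg", "IMG_002.jpg", "notes.txt"]:
    open(os.path.join(tmp, name), "w").close()

# A pattern like img_dir/IMG*.jpg matches files starting with "IMG"
# and ending with ".jpg" in that directory; notes.txt is skipped.
matches = sorted(glob.glob(os.path.join(tmp, "IMG*.jpg")))
print([os.path.basename(m) for m in matches])
```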

Docker CLI

If you are familiar with Docker and don't feel like setting up Python and an environment, you can also use the openstitching/stitch Docker image

docker container run --rm -v /path/to/data:/data openstitching/stitch:{version} -h

You can use the Python CLI as described above (read "current directory" as "/data directory"). NOTE: a single * won't work on Linux because of host shell expansion — you must use a more explicit file pattern or explicit file names.
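The expansion problem can be demonstrated without Docker — a small sketch (demo files only) showing that the host shell replaces an unquoted * before the command ever runs, while a quoted pattern is passed through literally, which is what you want when the container should do the matching:

```shell
# Two demo files to match against.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch a.jpg b.jpg

# Unquoted: the host shell expands the pattern into file names.
unquoted=$(echo *.jpg)

# Quoted: the literal pattern is passed through untouched.
quoted=$(echo '*.jpg')

echo "$unquoted"   # a.jpg b.jpg
echo "$quoted"     # *.jpg
```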

Python Script

You can also use the Stitcher class in your script

from stitching import Stitcher
stitcher = Stitcher()

Specify your custom settings as

stitcher = Stitcher(detector="sift", confidence_threshold=0.2)

or

settings = {"detector": "sift", "confidence_threshold": 0.2}
stitcher = Stitcher(**settings)

Create a Panorama from your Images:

  • from a list of filenames
panorama = stitcher.stitch(["img1.jpg", "img2.jpg", "img3.jpg"])
  • from a single item list with a wildcard
panorama = stitcher.stitch(["img?.jpg"])
  • from a list of already loaded images (OpenCV imported as cv, i.e. import cv2 as cv)
panorama = stitcher.stitch([cv.imread("img1.jpg"), cv.imread("img2.jpg")])

The equivalent of the --affine cli parameter within the script is

from stitching import AffineStitcher
stitcher = AffineStitcher()
panorama = stitcher.stitch(...)

The equivalent of the -v/--verbose cli parameter within the script is

panorama = stitcher.stitch_verbose(...)

Questions

For questions please use our discussions. Please do not use our issue section for questions.

Contribute

Read through how to contribute for information on topics like finding and fixing bugs and improving / maintaining this package.

Tutorial

This package provides utility functions to analyse in depth what happens behind the scenes of the stitching. A tutorial was created as a Jupyter Notebook. The preview is here.

You can e.g. visualize the RANSAC matches between the images or the seam lines where the images are blended:

(example visualizations: matches1, matches2, seams1, seams2)

Literature

This package was developed and used for our paper Automatic stitching of fragmented construction plans of hydraulic structures

License

Apache License 2.0

stitching's Issues

Bug in Largest Interior/Inscribed Rectangle implementation

Traceback (most recent call last):
File "f:\Yuri_Approach\my_new_approach.py", line 291, in <module>
lir = cropper.estimate_largest_interior_rectangle(mask)
File "f:\Yuri_Approach\stitching\cropper.py", line 98, in estimate_largest_interior_rectangle
raise StitchingError("Invalid Contour. Try without cropping.")
stitching.stitching_error.StitchingError: Invalid Contour. Try without cropping.

Stitching equirectangular images with Neural Style Transfer applied.

Hi!

I have been using your stitcher for blending equirectangular images with a self-created overlap (by pasting a part of the picture on the opposite side).

image

Then applying a Neural Style Transfer which uses an input style image and original image to restyle the original image to the input style image as if it was created in that style.

Afterwards I split the image down the middle and swap the pieces so it can be blended.
Using the original unstyled image to find key-features but applying all transformations/blending to the styled image this
generates a very good result, as there is no visible line in the output image on the blended part.

image

But this creates a new stitch line on the opposite side of the image (in 360° view).
It seems this stitch is created because one of the two images slightly shifted vertically and there also seem to be a kind of fading color effect on the sides of the image which is visible when connecting the sides of the image.
(very bad quality because this screenshot is taken in a 360° web viewer)

image

Are there any parameters I could change to combat this effect?

Thanks in advance.

StitchingError: Invalid Contour. Try without cropping.

I've tried everything to get the cropper part of the code to work, but I keep getting the same error no matter what I do.

Code works until:

Cropper(crop = True)
from stitching.cropper import Cropper
cropper = Cropper()

mask = cropper.estimate_panorama_mask(warped_low_imgs, warped_low_masks, low_corners, low_sizes)
plot_image(mask, (5,5))

When I run the next section, I get an error:

from largestinteriorrectangle import lir, pt1, pt2, lir_basis, lir_within_contour, lir_within_polygon
lir = cropper.estimate_largest_interior_rectangle(mask)
print(lir)

Error:

StitchingError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_14476/1283171828.py in <module>
----> 1 lir = cropper.estimate_largest_interior_rectangle(mask)
2 print(lir)

~\anaconda3\lib\site-packages\stitching\cropper.py in estimate_largest_interior_rectangle(self, mask)

contours, hierarchy = cv.findContours(mask, cv.RETR_TREE, cv.CHAIN_APPROX_NONE)
if not hierarchy.shape == (1, 1, 4) or not np.all(hierarchy == -1):
raise StitchingError("Invalid Contour. Try without cropping.")
contour = contours[0][:, 0, :]

StitchingError: Invalid Contour. Try without cropping.

Any advice?
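The check quoted above fires whenever the panorama mask yields anything other than exactly one top-level contour with no children. A minimal sketch of that hierarchy condition with plain NumPy arrays standing in for the cv.findContours output (illustrative values, not the reporter's data):

```python
import numpy as np

def is_single_simple_contour(hierarchy):
    # cv.findContours with RETR_TREE returns a hierarchy of shape
    # (1, n_contours, 4); each row is [next, prev, child, parent].
    # A single contour with no neighbours or children is all -1.
    return hierarchy.shape == (1, 1, 4) and bool(np.all(hierarchy == -1))

# One isolated contour -> cropping is possible.
good = np.array([[[-1, -1, -1, -1]]])

# Two contours (e.g. a mask split into disconnected blobs) -> the
# "Invalid Contour. Try without cropping." error is raised.
bad = np.array([[[1, -1, -1, -1], [-1, 0, -1, -1]]])

print(is_single_simple_contour(good))  # True
print(is_single_simple_contour(bad))   # False
```

So a mask made of several disconnected blobs, or one with holes, is the typical trigger; visualizing the mask usually reveals which.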

Bug in FeatureMatcher()

It seems that the elements interfere with each other.
After removing the 7th image, I can get the correct result.
image

Only after physically removing the 7th image did I get the correct result.
If I understand the theory correctly, the matching result should only depend on the 2 feature packs involved. Why does a third party have an impact?
This result doesn't depend on whether I use a match mask or not, either.

Stitching long jump frames

I am trying to create a single image that shows the different stages of the jump.

start-28 3
start-34 0
start-36 4
Is it possible to do it with this tool?

Camera parameters adjusting failed when using homography setting on nearly affine images

I am trying to stitch about 30 images of almost flat ground. Images were taken top-down, and are overlapping by about 30%. I am generating a match file, in which almost all images are somewhat connected, but after about 4-5 minutes of running, the following error appears:

  File ".\texture_stitching.py", line 19, in <module>
    panorama = stitcher.stitch(imgs)
  File "C:\Users\Maciej\anaconda3\lib\site-packages\stitching\stitcher.py", line 94, in stitch
    cameras = self.refine_camera_parameters(features, matches, cameras)
  File "C:\Users\Maciej\anaconda3\lib\site-packages\stitching\stitcher.py", line 147, in refine_camera_parameters
    return self.camera_adjuster.adjust(features, matches, cameras)
  File "C:\Users\Maciej\anaconda3\lib\site-packages\stitching\camera_adjuster.py", line 49, in adjust
    raise StitchingError("Camera parameters adjusting failed.")
stitching.stitching_error.StitchingError: Camera parameters adjusting failed.

I am using python 3.8 and newest version of opencv.
Here is an example photo and match file:

graph matches_graph{
"grass001_scan_0001.jpg" -- "grass001_scan_0008.jpg"[label="Nm=22, Ni=6, C=0.410959"];
"grass001_scan_0001.jpg" -- "grass001_scan_0032.jpg"[label="Nm=6, Ni=4, C=0.408163"];
"grass001_scan_0002.jpg" -- "grass001_scan_0003.jpg"[label="Nm=34, Ni=17, C=0.934066"];
"grass001_scan_0002.jpg" -- "grass001_scan_0007.jpg"[label="Nm=45, Ni=23, C=1.06977"];
"grass001_scan_0002.jpg" -- "grass001_scan_0008.jpg"[label="Nm=22, Ni=8, C=0.547945"];
"grass001_scan_0002.jpg" -- "grass001_scan_0009.jpg"[label="Nm=14, Ni=5, C=0.409836"];
"grass001_scan_0002.jpg" -- "grass001_scan_0021.jpg"[label="Nm=6, Ni=4, C=0.408163"];
"grass001_scan_0003.jpg" -- "grass001_scan_0004.jpg"[label="Nm=42, Ni=16, C=0.776699"];
"grass001_scan_0003.jpg" -- "grass001_scan_0006.jpg"[label="Nm=47, Ni=32, C=1.44796"];
"grass001_scan_0004.jpg" -- "grass001_scan_0005.jpg"[label="Nm=37, Ni=19, C=0.994764"];
"grass001_scan_0004.jpg" -- "grass001_scan_0026.jpg"[label="Nm=10, Ni=5, C=0.454545"];
"grass001_scan_0005.jpg" -- "grass001_scan_0011.jpg"[label="Nm=23, Ni=8, C=0.536913"];
"grass001_scan_0005.jpg" -- "grass001_scan_0012.jpg"[label="Nm=50, Ni=26, C=1.13043"];
"grass001_scan_0005.jpg" -- "grass001_scan_0016.jpg"[label="Nm=14, Ni=5, C=0.409836"];
"grass001_scan_0005.jpg" -- "grass001_scan_0018.jpg"[label="Nm=14, Ni=5, C=0.409836"];
"grass001_scan_0005.jpg" -- "grass001_scan_0027.jpg"[label="Nm=6, Ni=4, C=0.408163"];
"grass001_scan_0006.jpg" -- "grass001_scan_0024.jpg"[label="Nm=11, Ni=5, C=0.442478"];
"grass001_scan_0006.jpg" -- "grass001_scan_0030.jpg"[label="Nm=13, Ni=5, C=0.420168"];
"grass001_scan_0007.jpg" -- "grass001_scan_0010.jpg"[label="Nm=39, Ni=20, C=1.01523"];
"grass001_scan_0007.jpg" -- "grass001_scan_0015.jpg"[label="Nm=14, Ni=5, C=0.409836"];
"grass001_scan_0010.jpg" -- "grass001_scan_0020.jpg"[label="Nm=13, Ni=5, C=0.420168"];
"grass001_scan_0011.jpg" -- "grass001_scan_0013.jpg"[label="Nm=20, Ni=9, C=0.642857"];
"grass001_scan_0011.jpg" -- "grass001_scan_0014.jpg"[label="Nm=35, Ni=17, C=0.918919"];
"grass001_scan_0012.jpg" -- "grass001_scan_0017.jpg"[label="Nm=6, Ni=4, C=0.408163"];
"grass001_scan_0012.jpg" -- "grass001_scan_0023.jpg"[label="Nm=6, Ni=4, C=0.408163"];
"grass001_scan_0013.jpg" -- "grass001_scan_0022.jpg"[label="Nm=14, Ni=5, C=0.409836"];
"grass001_scan_0014.jpg" -- "grass001_scan_0019.jpg"[label="Nm=25, Ni=11, C=0.709677"];
"grass001_scan_0017.jpg" -- "grass001_scan_0028.jpg"[label="Nm=14, Ni=5, C=0.409836"];
"grass001_scan_0021.jpg" -- "grass001_scan_0029.jpg"[label="Nm=14, Ni=5, C=0.409836"];
"grass001_scan_0024.jpg" -- "grass001_scan_0025.jpg"[label="Nm=34, Ni=17, C=0.934066"];
"grass001_scan_0026.jpg" -- "grass001_scan_0031.jpg"[label="Nm=25, Ni=15, C=0.967742"];
}

grass001_scan_0011_prev

I have also tinkered with different detectors (orb, akaze, sift - best matches with orb I believe) and thresholds, but with no luck. Is there anything I can do with it?

Cropper.prepare raises an exception when one rectangle does not overlap the ROI

Depending on the input, some images might not overlap the largest internal rectangle (see example).

Cropper.prepare raises an exception, but it would be more desirable to just ignore non-overlapping rectangles, or return their indices so that they can be removed as when subsetting to the largest component of the graph.

sample_mask
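The suggested behavior — dropping non-overlapping rectangles rather than raising — could be sketched like this in pure Python, with hypothetical (x, y, w, h) tuples rather than the package's own types:

```python
def overlaps(rect, roi):
    """True if two (x, y, w, h) rectangles share any area."""
    x1, y1, w1, h1 = rect
    x2, y2, w2, h2 = roi
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

def indices_overlapping(rects, roi):
    """Indices of rectangles intersecting the ROI; the rest could be
    dropped, mirroring the subsetting to the largest graph component."""
    return [i for i, r in enumerate(rects) if overlaps(r, roi)]

roi = (0, 0, 100, 100)
rects = [(10, 10, 20, 20),    # fully inside -> keep
         (150, 150, 20, 20),  # fully outside -> drop
         (90, 90, 40, 40)]    # partial overlap -> keep
print(indices_overlapping(rects, roi))  # [0, 2]
```

Returning the kept indices would let callers remove the corresponding images, just as subsetting already does for unmatched images.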

Prevent distortion with a different warping method

Hi, I used this stitching for 3 images of one area (taken while moving in different directions).
The stitch works, but the images seem to have a fixed height, and the result is a vertically compressed view.
What should I do? I think it is related to the warping method.
result

How to add options in editor

I have no idea how to pass the options from "-h" on the console when working in a code editor such as PyCharm. How can I achieve this?

Heading of the pano

Hi,
Thank for making this tool, and sharing it with the public. I also commend your efforts to support the community for answering questions. I can imagine it takes a lot of effort. I'll try to keep my question brief:

I have visual odometry running, so I know the exact heading of each input frame. How can I find the heading of the panorama?

Support for stitching multi channel images (e.g. as numpy arrays instead of filenames)

Hi,

Thanks for this package. I was trying it out and noticed that currently Stitcher.stitch() only supports a list of filenames.

I was wondering whether it could be possible to support a list of NumPy arrays instead, for instance. This would be useful for example in cases where you only want to stitch specific channels of different images.

Currently, it also seems to fail with multiple channel images with the error:

...
---> 53             raise StitchingError("No match exceeds the " "given confidence threshold.")
     54 
     55         return indices

StitchingError: No match exceeds the given confidence threshold.

Thank you and apologies if you've tackled this issue already elsewhere.
NelsonGon

Install CLI in user data path

"Your CLI script could also be included in the package and listed as a console script in the setup.cfg: that way as soon as your package is pip installed, that script would be on the user's PATH"
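The quoted suggestion maps to a standard setuptools entry point. A sketch of the setup.cfg section it describes — the module path stitching.cli.stitch:main matches the path seen in tracebacks elsewhere on this page, but treat it as illustrative:

```ini
# setup.cfg (sketch): expose the CLI as a console script so that
# "stitch" lands on the user's PATH after "pip install stitching".
[options.entry_points]
console_scripts =
    stitch = stitching.cli.stitch:main
```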

ERROR: Cannot install stitching because these package versions have conflicting dependencies.

win11
anaconda python 3.6

pip install stitching


  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/fc/3c/f0ac5c2c73df30483e3677b6568ccd19d900a6f78a95bfcf20bbbb468986/opencv_python-4.1.2.30-cp36-cp36m-win_amd64.whl (33.0 MB)
     |████████████████████████████████| 33.0 MB 819 kB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/9c/88/06cdc6239013e13aec97f474638fc4e7c00e5b7fb954a1d0ec2a5fc8db7a/opencv_python-4.1.1.26-cp36-cp36m-win_amd64.whl (39.0 MB)
     |████████████████████████████████| 39.0 MB 6.8 MB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/dc/54/a6b7727c67d4e14194549a9e1a1acd7902ebae2f4a688d84b658ae40b5fb/opencv_python-4.1.0.25-cp36-cp36m-win_amd64.whl (37.3 MB)
     |████████████████████████████████| 37.3 MB 6.4 MB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/2f/5c/51da5c41afe040fb5d417dd55dde618ccec05c40da52e068b6ef03cf4c21/opencv_python-4.0.1.24-cp36-cp36m-win_amd64.whl (30.5 MB)
     |████████████████████████████████| 30.5 MB 6.8 MB/s
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/be/4b/cb39e9a28ed08252f5b4e3bc1ccb733217c93dc72d975557abcb574010b0/opencv_python-4.0.1.23-cp36-cp36m-win_amd64.whl (30.5 MB)
     |████████████████████████████████| 30.5 MB 6.4 MB/s
INFO: pip is looking at multiple versions of stitching to determine which version is compatible with other requirements. This could take a while.
Collecting stitching
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/a1/25/5597e0324ac0a095a7d2d84da83994b4810fe98abe03af13f74fa4a12181/stitching-0.2.0-py3-none-any.whl (25 kB)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/a8/e9/f6f03a9e12212c100fb77932050ca3bd4c04a5b58df7c51af370090bdf07/stitching-0.1.0-py3-none-any.whl (22 kB)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/0d/97/e23f0e34ff3ab13112446ee6b79a56e08280ee3d9a7d8309d63efb0d246f/stitching-0.0.1-py3-none-any.whl (22 kB)
ERROR: Cannot install stitching because these package versions have conflicting dependencies.

The conflict is caused by:
    largestinteriorrectangle 0.1.0 depends on numba>=0.55.0
    largestinteriorrectangle 0.0.2 depends on numba>=0.55.0
    largestinteriorrectangle 0.0.1 depends on numba>=0.55.0

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies

camera is fixed above, long object moves below

Hi,
I would like to automatically stitch 20-40 images.
All of them are very planar and geometrically rectangular;
distortion is negligible, and the optical detail is slightly blurred,
as if the camera is not fully in focus (slight spherical aberration):

camera is fixed above, long object moves below

What happens with stitching is that it only stitches maybe 5-6 of the
40 pictures. I think it should at least report which pictures were not
stitched and suggest experimenting with some settings.

SURF not available, stitching.stitching_error.StitchingError: No match exceeds the given confidence theshold.

import stitching


stitcher = stitching.Stitcher()
panorama = stitcher.stitch(["F:/2022/PJ/123/1.jpg", "F:/2022/PJ/123/2.jpg", "F:/2022/PJ/123/3.jpg"])

python 4.py

SURF not available
Traceback (most recent call last):
  File "4.py", line 5, in <module>
    panorama = stitcher.stitch(["F:/2022/PJ/123/1.jpg", "F:/2022/PJ/123/2.jpg", "F:/2022/PJ/123/3.jpg"])
  File "D:\anaconda3\envs\stitching\lib\site-packages\stitching\stitcher.py", line 87, in stitch
    imgs, features, matches = self.subset(imgs, features, matches)
  File "D:\anaconda3\envs\stitching\lib\site-packages\stitching\stitcher.py", line 133, in subset
    matches,
  File "D:\anaconda3\envs\stitching\lib\site-packages\stitching\subsetter.py", line 28, in subset
    indices = self.get_indices_to_keep(features, matches)
  File "D:\anaconda3\envs\stitching\lib\site-packages\stitching\subsetter.py", line 53, in get_indices_to_keep
    raise StitchingError("No match exceeds the " "given confidence theshold.")
stitching.stitching_error.StitchingError: No match exceeds the given confidence theshold.

Stitching horizontally and vertically stacked images

I'm trying to create a panorama from images looking like these:

6
7
8
0
1
2
3
5
4

However, I'm getting this error:

Traceback (most recent call last):
  File "/home/aurelien/Projects/Alteia/detection-chambres-orange/streetview/download.py", line 132, in <module>
    panorama = stitcher.stitch(image_paths)
  File "/home/aurelien/Projects/Venvs/compviz/lib/python3.10/site-packages/stitching/stitcher.py", line 94, in stitch
    cameras = self.refine_camera_parameters(features, matches, cameras)
  File "/home/aurelien/Projects/Venvs/compviz/lib/python3.10/site-packages/stitching/stitcher.py", line 147, in refine_camera_parameters
    return self.camera_adjuster.adjust(features, matches, cameras)
  File "/home/aurelien/Projects/Venvs/compviz/lib/python3.10/site-packages/stitching/camera_adjuster.py", line 49, in adjust
    raise StitchingError("Camera parameters adjusting failed.")
stitching.stitching_error.StitchingError: Camera parameters adjusting failed.

I also tried the first three images, but I'm getting this:

test

Are vertically stacked images supported?

TypeError: only integer scalar arrays can be converted to a scalar index

I get this error from stitch *:
stitching frame1.jpg frame10.jpg frame11.jpg frame12.jpg frame13.jpg frame14.jpg frame15.jpg frame16.jpg frame17.jpg frame18.jpg frame19.jpg frame2.jpg frame20.jpg frame21.jpg frame22.jpg frame23.jpg frame24.jpg frame25.jpg frame26.jpg frame27.jpg frame28.jpg frame29.jpg frame3.jpg frame30.jpg frame31.jpg frame32.jpg frame33.jpg frame34.jpg frame35.jpg frame36.jpg frame37.jpg frame38.jpg frame39.jpg frame4.jpg frame40.jpg frame41.jpg frame42.jpg frame43.jpg frame44.jpg frame45.jpg frame46.jpg frame47.jpg frame48.jpg frame49.jpg frame5.jpg frame50.jpg frame6.jpg frame7.jpg frame8.jpg frame9.jpg into result.jpg
on Windows 10 with Python 3.8.7 and cv2.__version__ == '4.5.1'.

module 'cv2.cv2' has no attribute 'detail_BundleAdjusterRay'

Hey,

First of all, thank you for this wonderful source code. It was really helpful for me as a beginner.

I am having a problem with your Stitching Tutorial Jupyter notebook.
With this code:

from stitching.image_handler import ImageHandler

img_handler = ImageHandler()
img_handler.set_img_names(weir_imgs)

medium_imgs = list(img_handler.resize_to_medium_resolution())
low_imgs = list(img_handler.resize_to_low_resolution(medium_imgs))
final_imgs = list(img_handler.resize_to_final_resolution())

I am getting this error:

AttributeError: module 'cv2.cv2' has no attribute 'detail_BundleAdjusterRay'

and I tried searching it but there is no solution that I found on this one. Can you help me solve this? Greatly appreciated.

Camera parameters adjusting failed. on two images with slightly different angle

I'm trying to stitch 2 images with a similar view.
I've tried changing "adjuster" to "no", "affine" and "reproj", but didn't get a good result. pic_link
If I use "ray", I get this error:
stitching.stitching_error.StitchingError: Camera parameters adjusting failed.
My setting:

settings = {"detector": "sift", "confidence_threshold": 0.5, "adjuster": "no"}
stitcher = stitching.Stitcher(**settings)

Thanks.

Add affine mode to CLI for scans and specialized devices

This would override the following values:

settings = {"matcher_type": "affine",
"estimator": "affine",
"adjuster": "affine",
"wave_correct_kind": "no",
"warper_type": "affine",
"compensator": "no"}

    stitcher->setEstimator(makePtr<detail::AffineBasedEstimator>());
    stitcher->setWaveCorrection(false);
    stitcher->setFeaturesMatcher(makePtr<detail::AffineBestOf2NearestMatcher>(false, false));
    stitcher->setBundleAdjuster(makePtr<detail::BundleAdjusterAffinePartial>());
    stitcher->setWarper(makePtr<AffineWarper>());
    stitcher->setExposureCompensator(makePtr<detail::NoExposureCompensator>());
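How such a flag could override the defaults can be sketched as a plain dict merge — the keys mirror the settings listed above, while the merge helper itself is hypothetical, not part of the package:

```python
# Settings an --affine flag would force (from the list above).
AFFINE_SETTINGS = {
    "matcher_type": "affine",
    "estimator": "affine",
    "adjuster": "affine",
    "wave_correct_kind": "no",
    "warper_type": "affine",
    "compensator": "no",
}

def merge_settings(user_settings, affine=False):
    """Later dicts win: with affine=True, the affine keys override
    whatever the user passed for those same keys."""
    return {**user_settings, **AFFINE_SETTINGS} if affine else dict(user_settings)

merged = merge_settings({"detector": "sift", "warper_type": "spherical"}, affine=True)
print(merged["warper_type"])  # 'affine'  (overridden)
print(merged["detector"])     # 'sift'    (preserved)
```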

360x180 degree equirectangular stitching

Hello!

Is it possible to use stitching to generate 360x180 degree equirectangular images?

My context: I like shooting panorama photos, and I'm looking for alternatives to Hugin to merge them together. Right now I use a two-stage approach, where I first merge the 7 bracketed exposures of each camera position, and then stitch the resulting 38 photos together. The exposure merging I managed to write some software for to automate (brakketor), and I'm searching for something to help me automate the stitching as well.

Cheers,
Sybren

The 'parallel' target is not currently supported on 32 bit hardware.

os : windows 10
cpu : Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (12 CPUs)
gpu : gtx 1660ti
python 3.8.6
usage : stitch imgs/*

(venv) C:\Users\HwangMW\Desktop\stitching>stitch imgs/*
stitching imgs\weir_1.jpg imgs\weir_2.jpg imgs\weir_3.jpg imgs\weir_noise.jpg into result.jpg
Traceback (most recent call last):
File "C:\Users\HwangMW\AppData\Local\Programs\Python\Python38-32\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\HwangMW\AppData\Local\Programs\Python\Python38-32\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\HwangMW\Desktop\stitching\venv\Scripts\stitch.exe\__main__.py", line 7, in <module>
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\stitching\cli\stitch.py", line 278, in main
panorama = stitcher.stitch(img_names)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\stitching\stitcher.py", line 100, in stitch
self.prepare_cropper(imgs, masks, corners, sizes)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\stitching\stitcher.py", line 176, in prepare_cropper
self.cropper.prepare(imgs, masks, corners, sizes)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\stitching\cropper.py", line 57, in prepare
lir = self.estimate_largest_interior_rectangle(mask)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\stitching\cropper.py", line 94, in estimate_largest_interior_rectangle
import largestinteriorrectangle
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\largestinteriorrectangle\__init__.py", line 1, in <module>
from .lir import lir
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\largestinteriorrectangle\lir.py", line 1, in
from .lir_basis import largest_interior_rectangle as lir_basis
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\largestinteriorrectangle\lir_basis.py", line 13, in
def horizontal_adjacency(grid):
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\decorators.py", line 219, in wrapper
disp.compile(sig)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\dispatcher.py", line 965, in compile
cres = self._compiler.compile(args, return_type)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\dispatcher.py", line 125, in compile
status, retval = self._compile_cached(args, return_type)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\dispatcher.py", line 139, in _compile_cached
retval = self._compile_core(args, return_type)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\dispatcher.py", line 152, in _compile_core
cres = compiler.compile_extra(self.targetdescr.typing_context,
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler.py", line 716, in compile_extra
return pipeline.compile_extra(func)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler.py", line 452, in compile_extra
return self._compile_bytecode()
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler.py", line 520, in _compile_bytecode
return self._compile_core()
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler.py", line 499, in _compile_core
raise e
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler.py", line 486, in _compile_core
pm.run(self.state)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler_machinery.py", line 368, in run
raise patched_exception
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler_machinery.py", line 356, in run
self._runPass(idx, pass_inst, state)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler_lock.py", line 35, in _acquire_compile_lock
return func(*args, **kwargs)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler_machinery.py", line 311, in _runPass
mutated |= check(pss.run_pass, internal_state)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\compiler_machinery.py", line 273, in check
mangled = func(compiler_state)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\typed_passes.py", line 394, in run_pass
lower.lower()
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\lowering.py", line 168, in lower
self.lower_normal_function(self.fndesc)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\lowering.py", line 222, in lower_normal_function
entry_block_tail = self.lower_function_body()
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\lowering.py", line 251, in lower_function_body
self.lower_block(block)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\lowering.py", line 265, in lower_block
self.lower_inst(inst)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\core\lowering.py", line 567, in lower_inst
func(self, inst)
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\parfors\parfor_lowering.py", line 58, in _lower_parfor_parallel
ensure_parallel_support()
File "c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\numba\parfors\parfor.py", line 5016, in ensure_parallel_support
raise errors.UnsupportedParforsError(msg)
numba.core.errors.UnsupportedParforsError: Failed in nopython mode pipeline (step: native lowering)
The 'parallel' target is not currently supported on 32 bit hardware.
During: lowering "id=0[LoopNest(index_variable = parfor_index.10, range = (0, grid_size0.1, 1)), LoopNest(index_variable = parfor_index.11, range = (0, grid_size1.2, 1))]{127: <ir.Block at c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\largestinteriorrectangle\lir_basis.py (14)>}Var($parfor_index_tuple_var.17, lir_basis.py:14)" at c:\users\hwangmw\desktop\stitching\venv\lib\site-packages\largestinteriorrectangle\lir_basis.py (14)

This is the result of running it right after installation.
Why does this happen?

Bug in subsetter.subset_matches()

Hi Lukas

The original confidence matrix is:
image

indices_to_keep is [ 1 4 5 6 7 8 9 10 11 16 17 18 20 21 22 24 25 29 30 31 32 33 34 35 36]

But the result of matches = subsetter.subset_matches(matches, indices)
is
image

What I am interested in is the pair [1,25] = 1.1
Where'd 1.1 go? :)

Seems that subsetter.subset_matches() has a bug...
I am digging in it.

Please let me know your thought.

Thanks.
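The index bookkeeping behind such a subset can be sketched with NumPy: subsetting a confidence matrix must keep both the rows and the columns of the kept indices, and an entry at an old pair like (1, 25) then moves to the positions those indices occupy within indices_to_keep. A toy matrix (not the reporter's data) makes this concrete:

```python
import numpy as np

# Toy confidence matrix for 6 images (symmetric, illustrative values).
conf = np.zeros((6, 6))
conf[1, 4] = conf[4, 1] = 1.1   # the pair we care about

indices_to_keep = [1, 3, 4, 5]

# Correct subsetting keeps rows AND columns of the kept indices.
sub = conf[np.ix_(indices_to_keep, indices_to_keep)]

# Old pair (1, 4) is now at (new position of 1, new position of 4).
i = indices_to_keep.index(1)  # 0
j = indices_to_keep.index(4)  # 2
print(sub[i, j])  # 1.1
```

If the value seems to vanish after subsetting, the first thing to check is whether it is being looked up at the old indices instead of the renumbered ones.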

Skipped test_stitcher_boat1 and test_stitcher_boat2 (test_stitcher.py) don't pass anymore

Hi, thank you for your great work!

I have run test_stitcher.py, but test_stitcher_boat1 and test_stitcher_boat2 give the following error:

E
Error
Traceback (most recent call last):
File "C:\Users\chqiu\anaconda3\envs\deepface\lib\unittest\case.py", line 59, in testPartExecutor
yield
File "C:\Users\chqiu\anaconda3\envs\deepface\lib\unittest\case.py", line 628, in run
testMethod()
File "D:\2.1\stitching\stitching\tests\test_stitcher.py", line 74, in test_stitcher_boat2
"boat6.jpg",
File "D:\2.1\stitching\stitching\stitching\stitcher.py", line 94, in stitch
cameras = self.refine_camera_parameters(features, matches, cameras)
File "D:\2.1\stitching\stitching\stitching\stitcher.py", line 147, in refine_camera_parameters
return self.camera_adjuster.adjust(features, matches, cameras)
File "D:\2.1\stitching\stitching\stitching\camera_adjuster.py", line 49, in adjust
raise StitchingError("Camera parameters adjusting failed.")
stitching.stitching_error.StitchingError: Camera parameters adjusting failed.

How can this be fixed? Thanks a lot.
QC

GPU acceleration

Hi! I've found this repo useful for image stitching. Is there any chance this code can be run on a GPU for acceleration? I found the 'try_use_gpu' attribute and changed it to True, but nothing happened. Could you please help with this?

AttributeError: module 'cv2.detail' has no attribute 'WAVE_CORRECT_AUTO'

I installed it via pip, but it shows AttributeError: module 'cv2.detail' has no attribute 'WAVE_CORRECT_AUTO'. What should I do?
C:\Users\Administrator>pip install stitching
Requirement already satisfied: stitching in d:\python\anaconda3\lib\site-packages (0.3.0)
Requirement already satisfied: largestinteriorrectangle in d:\python\anaconda3\lib\site-packages (from stitching) (0.1.1)
Requirement already satisfied: opencv-python>=4.0.1 in d:\python\anaconda3\lib\site-packages (from stitching) (4.2.0.34)
Requirement already satisfied: numpy>=1.14.5 in d:\python\anaconda3\lib\site-packages (from opencv-python>=4.0.1->stitching) (1.21.6)
Requirement already satisfied: numba in d:\python\anaconda3\lib\site-packages (from largestinteriorrectangle->stitching) (0.41.0)
Requirement already satisfied: llvmlite>=0.26.0dev0 in d:\python\anaconda3\lib\site-packages (from numba->largestinteriorrectangle->stitching) (0.26.0)

ConfidenceThreshold 0 does not work together with RangeMatcher

Hi,

I would like to use your library to make a 360° picture of a room (from a set of 12-13 pictures). I'm using band-width=1 because I know the order, confidence_threshold 0 and Subsetter(0). But when I try to link images, some links are a bit weird because features are only located in one place (as you can see below). Am I missing an option which could be useful in my use case? Or maybe it's my camera, which optimizes its settings (according to the environment) and creates mismatches between images.

image

Thanks for your help

Support for translation

Hi @lukasalexanderweber,

The OpenCV bundle adjuster does not support translation between frames, but only in-place rotation, in the global optimization step.

During my thesis work two years ago I made something similar to your package, wrapping parts of OpenCV's stitching module, though not quite as polished. As I was dealing with frames that at times suffered from quite severe translation between them, I found OpenCV's results somewhat underwhelming. The bundle adjuster works just fine when the translation is small, but gradually starts to fail the larger it gets.

I started looking into other packages that implemented sparse bundle adjustment, as they had support for translation, but I did not have the time to implement Python bindings and ended up using one directly from C++.

Have you ever looked into this? Recently, a package with bindings of Lourakis' sparse bundle adjustment C library has been updated after many years for Python 3 compatibility. Would you be interested in trying to add this functionality to your package?

cropper.estimate_largest_interior_rectangle takes an eternity on Windows 11

Great work,

Perhaps cropper.estimate_largest_interior_rectangle could be computed much faster by using a simple bounding rectangle instead (note that cv2.boundingRect yields the bounding box of the largest contour, not its largest interior rectangle, so some black border pixels may remain in the crop):

import cv2
import imutils

# transform the panorama image to grayscale and threshold it
gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)[1]

# find contours in the binary image
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

# take the contour with the maximum area
c = max(cnts, key=cv2.contourArea)

# get a bounding box from that contour
(x, y, w, h) = cv2.boundingRect(c)

# crop the image to the bbox coordinates
result = result[y:y + h, x:x + w]

Blurred result when stitching one image after another

When I try to stitch 20 images, an error occurs; only 16 images get stitched.

Traceback (most recent call last):
  File "/home/zx/tiegui/stitching-main/setup.py", line 53, in <module>
    panorama2 = stitcher2.stitch([imgA, imgB])
  File "/home/zx/tiegui/stitching-main/stitching/stitcher.py", line 92, in stitch
    imgs, features, matches = self.subset(imgs, features, matches)
  File "/home/zx/tiegui/stitching-main/stitching/stitcher.py", line 138, in subset
    matches,
  File "/home/zx/tiegui/stitching-main/stitching/subsetter.py", line 28, in subset
    indices = self.get_indices_to_keep(features, matches)
  File "/home/zx/tiegui/stitching-main/stitching/subsetter.py", line 53, in get_indices_to_keep
    raise StitchingError("No match exceeds the " "given confidence threshold.")
stitching.stitching_error.StitchingError: No match exceeds the given confidence threshold.

But the images have enough overlapping region.
Here is the result of the 16 images; the first half is blurred:
20230202113005607
The error occurs when stitching this image:
20230202113007191

Inform user if only a subset of images are stitched

From #30:

I think it should at least report which pics are not
stiched and suggest to experiment with some settings

Most users of the stitching package have a set of images which should ALL be included in the final panorama. With the default confidence threshold of 1, images are sorted out if they don't have good enough matches. In this case the user should be informed.
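A minimal sketch of such a report, assuming the caller can compare the input file names against the indices the Subsetter keeps (warn_dropped is a hypothetical helper, not part of the package's API):

```python
def warn_dropped(input_names, kept_indices):
    """Report which input images did not make it into the panorama."""
    kept = set(kept_indices)
    dropped = [name for i, name in enumerate(input_names) if i not in kept]
    if dropped:
        print("WARNING: the following images were not stitched:", dropped)
        print("Try lowering confidence_threshold or checking image overlap.")
    return dropped
```

For example, warn_dropped(["1.jpg", "2.jpg", "3.jpg"], [0, 2]) would report that "2.jpg" was dropped.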

What are the elements that make a successful stitching of a panoramic image possible?

Very good work, thank you for your continued support.
I have a question
What are the elements that make a successful stitching of a panoramic image possible?
Imagine a scenario where there is a building in the middle, with about 8 surveillance cameras or UAVs (unmanned aerial vehicles) around it, each taking pictures at a certain angle (circling the building). What factors need to be taken into account to eventually achieve a successful stitching?
For example, at what height do I need to place the cameras? What is the minimum number of cameras, and how many degrees must each one cover, to achieve a perfect stitching?

Feature request - improve matches graph

Hi guys,

Firstly, thank you all for your amazing work. I would like to have your thoughts on a new feature (afaik not available for now).

I have a list of ten images and I know their order. It would be nice to pass this order to the package in order to override confidence_threshold and get the right graph. Of course it would be optional.

Example: I have 3 photos (A, B and C) and I know the exact order (B --> C --> A). Instead of letting the package find the order and possibly get it wrong (for example by rejecting A from the resulting graph), I could pass the graph below as an option to bypass the graph generation.

graph matches_graph{
"B" -- "C"
"C" -- "A"
}

Sorry if I'm not clear. I would like to contribute, so what do you think? Do you have any suggestions?
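As a rough illustration, a known image order could be turned into exactly such a DOT graph by a small helper (order_to_dot is purely hypothetical, not existing package API):

```python
def order_to_dot(order):
    """Build a DOT matches graph from a known image order, e.g. ["B", "C", "A"]."""
    edges = [f'"{a}" -- "{b}"' for a, b in zip(order, order[1:])]
    return "graph matches_graph{\n" + "\n".join(edges) + "\n}"
```

order_to_dot(["B", "C", "A"]) reproduces the graph from the example above.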

range_width having no effect in FeatureMatcher?

Hi, I'm trying to stitch a sequence of images that I know are adjacent, and I thought setting range_width to 1 when matching might improve results, but it doesn't seem to have any effect. Perhaps I have misunderstood how this parameter works, but from what I can tell it isn't doing anything.

I tried taking a fresh copy of Stitching Tutorial.ipynb and changing the range_width parameter in the matching section, and I still get a full matrix of confidence values for every pair of images no matter what value I choose – I expected that setting it to 1 would force it to only consider adjacent pairs (e.g. pair 1-2 would have a confidence but pair 1-3 would not). That's what the C++ code seems to be doing, but I admit I haven't dug deeply enough into it.

Here are my results:
Screenshot 2022-07-25 at 12 21 45

You can see on the final block that under the hood the type is being set to cv2.detail.BestOf2NearestRangeMatcher correctly.

Do you have any suggestions, or have I misunderstood how this is supposed to work? If I manually set the confidence of non-adjacent images to zero then I get a better stitching result in later stages, but I was hoping for a significant performance increase of comparing fewer images too. Thanks!
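For comparison, the behaviour described above can be sketched in plain Python: a range matcher should only produce confidences for index pairs close to each other (this sketch assumes range_width bounds the index distance; OpenCV's exact off-by-one convention for BestOf2NearestRangeMatcher may differ):

```python
def pairs_within_range(num_images, range_width):
    """List the image index pairs a range matcher would consider."""
    return [
        (i, j)
        for i in range(num_images)
        for j in range(i + 1, num_images)
        if j - i <= range_width
    ]
```

Under this convention, pairs_within_range(4, 1) yields only the adjacent pairs (0, 1), (1, 2), (2, 3), whereas a full matcher compares every pair.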
