
Python, OpenCV -- Aligning and overlaying multiple images, one after another

Problem description


My project is to align aerial photos to make a mosaic-map out of them. My plan is to start with two photos, align the second with the first, and create an "initial mosaic" out of the two aligned images. Once that is done, I then align the third photo with the initial mosaic, and then align the fourth photo with the result of that, etc, thereby progressively constructing the map.


I have two techniques for doing this, but the more accurate one, which makes use of calcOpticalFlowPyrLK(), only works for the two-image phase because the two input images must be the same size. Because of that I tried a new solution, but it is less accurate and the error introduced at every step piles up, eventually producing a nonsensical result.


My question is two-fold, but if you know the answer to one, you don't have to answer both, unless you want to. First, is there a way to use something similar to calcOpticalFlowPyrLK() but with two images of different sizes (this includes any potential workarounds)? And second, is there a way to modify the detector/descriptor solution to make it more accurate?


Here's the accurate version that works only for two images:

import cv2
import numpy as np

# load images
base = cv2.imread("images/1.jpg")
curr = cv2.imread("images/2.jpg")

# convert to grayscale (calcOpticalFlowPyrLK expects single-channel images)
base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# find the coordinates of good features to track in base
base_features = cv2.goodFeaturesToTrack(base_gray, 3000, .01, 10)

# find corresponding features in current photo
curr_features, pyr_stati, _ = cv2.calcOpticalFlowPyrLK(base_gray, curr_gray, base_features, None)

# only add features for which a match was found to the pruned arrays
base_features_pruned = []
curr_features_pruned = []
for index, status in enumerate(pyr_stati):
    if status == 1:
        base_features_pruned.append(base_features[index])
        curr_features_pruned.append(curr_features[index])

# convert lists to numpy arrays so they can be passed to opencv function
bf_final = np.asarray(base_features_pruned)
cf_final = np.asarray(curr_features_pruned)

# find perspective transformation using the arrays of corresponding points
transformation, hom_stati = cv2.findHomography(cf_final, bf_final, method=cv2.RANSAC, ransacReprojThreshold=1)

# transform the images and overlay them to see if they align properly
# not what I do in the actual program, just for use in the example code
# so that you can see how they align, if you decide to run it
height, width = curr.shape[:2]
mod_photo = cv2.warpPerspective(curr, transformation, (width, height))
new_image = cv2.addWeighted(mod_photo, .5, base, .5, 1)


Here's the inaccurate one that works for multiple images (until the error becomes too great):

import cv2
import numpy as np

# load images
base = cv2.imread("images/1.jpg")
curr = cv2.imread("images/2.jpg")

# convert to grayscale
base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)

# DIFFERENCES START
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# create detector, get keypoints and descriptors
detector = cv2.ORB_create()
base_keys, base_desc = detector.detectAndCompute(base_gray, None)
curr_keys, curr_desc = detector.detectAndCompute(curr_gray, None)

# match descriptors between the two images
matcher = cv2.DescriptorMatcher_create("BruteForce-Hamming")
matches = matcher.match(base_desc, curr_desc)

# track the smallest and largest match distances
max_dist = 0.0
min_dist = 100.0

for match in matches:
    dist = match.distance
    min_dist = dist if dist < min_dist else min_dist
    max_dist = dist if dist > max_dist else max_dist

# keep only matches whose distance is close to the best one
good_matches = [match for match in matches if match.distance <= 3 * min_dist]

base_matches = []
curr_matches = []
for match in good_matches:
    base_matches.append(base_keys[match.queryIdx].pt)
    curr_matches.append(curr_keys[match.trainIdx].pt)

bf_final = np.asarray(base_matches)
cf_final = np.asarray(curr_matches)

# SAME AS BEFORE

# find perspective transformation using the arrays of corresponding points
transformation, hom_stati = cv2.findHomography(cf_final, bf_final, method=cv2.RANSAC, ransacReprojThreshold=1)

# transform the images and overlay them to see if they align properly
# not what I do in the actual program, just for use in the example code
# so that you can see how they align, if you decide to run it
height, width = curr.shape[:2]
mod_photo = cv2.warpPerspective(curr, transformation, (width, height))
new_image = cv2.addWeighted(mod_photo, .5, base, .5, 1)


Finally, here are some images that I'm using:

Answer


Homographies compose, so if you have the homographies between img1 and img2 and between img2 and img3 then the composition of those two homographies gives the homography between img1 and img3.


Your sizes are off, of course, because you're trying to match img3 to the stitched image containing img1 and img2. But you don't need to do that. Don't stitch anything until you have the homographies between every successive pair of images. Then you can proceed in one of two ways: work from the back, or work from the front. I'll use, for example, h31 to refer to the homography which warps img3 into the coordinates of img1.

From the front (pseudocode):

warp img2 into coordinates of img1 with h21
warp img3 into coordinates of img1 with h31 = h21 @ h32
warp img4 into coordinates of img1 with h41 = h31 @ h43
...
stitch/blend images together


Here @ is the matrix multiplication operator, which achieves the homography composition (note that it is safest to divide each composed homography by its final entry, to ensure that they're all scaled the same).
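As a sketch of that composition (plain NumPy; the translation and scale homographies here are made up purely for illustration):

```python
import numpy as np

def compose_homographies(h_ab, h_bc):
    """Compose two 3x3 homographies: the result maps coordinates of
    image c into coordinates of image a (x_a = h_ab @ h_bc @ x_c),
    normalized so the bottom-right entry is 1."""
    h_ac = h_ab @ h_bc
    return h_ac / h_ac[2, 2]

# illustrative homographies: h21 translates by (5, 0), h32 scales by 2
h21 = np.array([[1.0, 0, 5], [0, 1, 0], [0, 0, 1]])
h32 = np.array([[2.0, 0, 0], [0, 2, 0], [0, 0, 1]])

# h31 maps img3 coordinates directly into img1 coordinates
h31 = compose_homographies(h21, h32)

# a point at (3, 4) in img3 lands at (2*3 + 5, 2*4) = (11, 8) in img1
p3 = np.array([3.0, 4.0, 1.0])
p1 = h31 @ p3
p1 = p1 / p1[2]
```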

From the back (pseudocode):

...
warp prev stitched img into coordinates of img3 with h43
stitch warped stitched img with img3
warp prev stitched img into coordinates of img2 with h32
stitch warped stitched img with img2
warp prev stitched img into coordinates of img1 with h21
stitch warped stitched img with img1


The idea is that you either start from the front and warp everything into the first image's coordinate frame, or start from the back, warp each image into the previous image's frame and stitch, then warp that stitched image into the frame before it, and repeat. I think the first method is probably easier. In either case you have to worry about the propagation of errors in your homography estimation, as they will build up over multiple composed homographies.
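The front-method bookkeeping can be sketched as follows (NumPy only; the pairwise homographies are assumed to already map each image into the previous one's coordinates, which is what `findHomography(cf_final, bf_final)` produces in the question's code; the actual warping and blending with `cv2.warpPerspective` is left out):

```python
import numpy as np

def accumulate_homographies(pairwise):
    """Given [h21, h32, h43, ...], where each entry maps image i+1 into
    image i's coordinates, return [h21, h31, h41, ...], mapping every
    image directly into image 1's coordinates."""
    absolute = []
    h_to_first = np.eye(3)
    for h in pairwise:
        h_to_first = h_to_first @ h                  # h31 = h21 @ h32, etc.
        h_to_first = h_to_first / h_to_first[2, 2]   # keep the scale consistent
        absolute.append(h_to_first.copy())
    return absolute

# illustrative: two pure 1-pixel x-translations compose to a 2-pixel one
h21 = np.array([[1.0, 0, 1], [0, 1, 0], [0, 0, 1]])
h32 = np.array([[1.0, 0, 1], [0, 1, 0], [0, 0, 1]])
abs_homs = accumulate_homographies([h21, h32])
```

Each image i can then be warped onto the mosaic canvas once, with something like `cv2.warpPerspective(img_i, abs_homs[i - 2], canvas_size)`, rather than re-matching against an ever-growing stitched image.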


This is the naïve approach to blending multiple images together with just the homographies. The more sophisticated method is to use bundle adjustment, which takes feature points across all images into account. For good blending, the steps are then gain compensation, to remove camera gain adjustments and vignetting, followed by multi-band blending to prevent blurring. See the seminal paper from Brown and Lowe here and a brilliant example and free demo software here.
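As a small step up from the fixed 50/50 `addWeighted` overlay in the example code, overlapping regions can be combined by a mask-weighted average, so pixels covered by only one image are kept unchanged (a rough stand-in for real feathering or multi-band blending, which would use distance-based weights; in a real mosaic the masks would be warped with the same homographies as the images):

```python
import numpy as np

def mask_weighted_blend(img_a, img_b, mask_a, mask_b):
    """Blend two float images by per-pixel validity: where only one image
    is valid, use it alone; where both overlap, average them. The masks
    are 0/1 maps of each image's footprint on the canvas."""
    w_a = mask_a.astype(float)
    w_b = mask_b.astype(float)
    total = w_a + w_b
    total[total == 0] = 1.0   # avoid division by zero outside both footprints
    return (img_a * w_a + img_b * w_b) / total

# tiny example: two 1x4 "images" overlapping in the middle two pixels
a = np.array([[10.0, 10.0, 10.0, 0.0]])
b = np.array([[0.0, 20.0, 20.0, 20.0]])
ma = np.array([[1, 1, 1, 0]])
mb = np.array([[0, 1, 1, 1]])
out = mask_weighted_blend(a, b, ma, mb)   # -> [[10, 15, 15, 20]]
```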
