

How to obtain the right alpha value to perfectly blend two images?

Problem description


I've been trying to blend two images. The current approach I'm taking is this: I obtain the coordinates of the overlapping region of the two images, and only for the overlapping region I blend with a hardcoded alpha of 0.5 before adding. So basically I'm just taking half the value of each pixel from the overlapping regions of both images and adding them. That doesn't give me a perfect blend because the alpha value is hardcoded to 0.5. Here's the result of blending 3 images:

As you can see, the transition from one image to another is still visible. How do I obtain the perfect alpha value that would eliminate this visible transition? Or is there no such thing, and I'm taking a wrong approach?

Here's how I'm currently doing the blending:

for i in range(3):
    base_img_warp[overlap_coords[0], overlap_coords[1], i] = base_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
    next_img_warp[overlap_coords[0], overlap_coords[1], i] = next_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
final_img = cv2.add(base_img_warp, next_img_warp)

If anyone would like to give it a shot, here are two warped images, and the mask of their overlapping region: http://imgur.com/a/9pOsQ

Solution

Here is the way I would do it in general:

#include <opencv2/opencv.hpp>

int main(int argc, char* argv[])
{
    cv::Mat input1 = cv::imread("C:/StackOverflow/Input/pano1.jpg");
    cv::Mat input2 = cv::imread("C:/StackOverflow/Input/pano2.jpg");

    // Compute the vignetting masks. This is much easier before warping, but I will try...
    // The masks can be precomputed if the size and position of your ROI in the image don't
    // change, and precomputed and aligned if you can determine the ROI for every image.
    // Compression artifacts make it a bit worse here; I extract all non-black regions.
    cv::Mat mask1;
    cv::inRange(input1, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask1);
    cv::Mat mask2;
    cv::inRange(input2, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask2);

    // Now compute the distance from the ROI border
    // (cv::DIST_L1 was called CV_DIST_L1 in OpenCV 2.x):
    cv::Mat dt1;
    cv::distanceTransform(mask1, dt1, cv::DIST_L1, 3);
    cv::Mat dt2;
    cv::distanceTransform(mask2, dt2, cv::DIST_L1, 3);

    // The distance values can be used for blending directly. A smaller distance means a
    // worse pixel (your vignetting becomes worse toward the image border).
    cv::Mat mosaic = cv::Mat(input1.size(), input1.type(), cv::Scalar(0, 0, 0));
    for (int j = 0; j < mosaic.rows; ++j)
    for (int i = 0; i < mosaic.cols; ++i)
    {
        float a = dt1.at<float>(j, i);
        float b = dt2.at<float>(j, i);
        if (a + b == 0) continue; // pixel lies outside both ROIs, leave it black

        // The distances are not between 0 and 1, but this ratio is. The "better" a is,
        // compared to b, the higher alpha becomes.
        float alpha = a / (a + b);
        // actual blending: alpha*A + (1 - alpha)*B
        mosaic.at<cv::Vec3b>(j, i) = alpha * input1.at<cv::Vec3b>(j, i) + (1 - alpha) * input2.at<cv::Vec3b>(j, i);
    }

    cv::imshow("mosaic", mosaic);

    cv::waitKey(0);
    return 0;
}
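Since the question itself works in Python with cv2, here is a rough NumPy port of the same distance-transform feathering. It's a minimal sketch, assuming both warped images share the same output canvas size; the file names are placeholders:

import cv2
import numpy as np

img1 = cv2.imread("pano1.jpg")  # placeholder file names
img2 = cv2.imread("pano2.jpg")

# ROI masks: everything that isn't (near-)black survived the warp
mask1 = cv2.inRange(img1, (10, 10, 10), (255, 255, 255))
mask2 = cv2.inRange(img2, (10, 10, 10), (255, 255, 255))

# L1 distance from the ROI border, as in the C++ version
dt1 = cv2.distanceTransform(mask1, cv2.DIST_L1, 3)
dt2 = cv2.distanceTransform(mask2, cv2.DIST_L1, 3)

# per-pixel alpha = dt1 / (dt1 + dt2); avoid 0/0 outside both ROIs
denom = dt1 + dt2
alpha = np.divide(dt1, denom, out=np.zeros_like(dt1), where=denom > 0)
alpha = alpha[:, :, None]  # broadcast over the three colour channels

mosaic = (alpha * img1 + (1.0 - alpha) * img2).astype(np.uint8)
cv2.imwrite("mosaic.jpg", mosaic)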

Basically you compute the distance from your ROI border to the center of your objects and compute the alpha from both blending mask values. So if one image has a high distance from the border and the other a low one, you prefer the pixel that is closer to the image center. It would be better to normalize those values for cases where the warped images aren't of similar size. But even better and more efficient is to precompute the blending masks and warp them. Best of all would be to know the vignetting of your optical system and choose an identical blending mask (typically with lower values toward the border).
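As a sketch of that precomputed variant (the frame size and the homography H are assumptions for illustration, not from the original answer): build the feathering weights once for the un-warped camera frame and warp them together with each image:

import numpy as np

h, w = 480, 640  # assumed size of the un-warped camera frame
yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
# city-block distance to the nearest frame border, normalised to [0, 1]
border = np.minimum(np.minimum(xx, w - 1 - xx), np.minimum(yy, h - 1 - yy))
weight = border / border.max()
# warp the weights with the same (hypothetical) homography H used for the image:
# warped_weight = cv2.warpPerspective(weight, H, (out_w, out_h))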

From the C++ code above you'll get these results. ROI masks:

Blending masks (shown just as an impression; the actual masks are float matrices):

Image mosaic:
