
Converting a Cubemap into an Equirectangular Panorama

This article describes a method for converting a cubemap into an equirectangular panorama image. It should serve as a useful reference for anyone solving the same problem.

Problem Description


I want to convert from cube map [figure1] into an equirectangular panorama [figure2].

Figure1

Figure2

It is possible to go from Spherical to Cubic (by following: Convert 2:1 equirectangular panorama to cube map), but I am lost on how to reverse it.

Figure2 is to be rendered into a sphere using Unity.

Solution

Assuming the input image is in the following cubemap format:

The goal is to project the image to the equirectangular format like so:

The conversion algorithm is rather straightforward. In order to calculate the best estimate of the color at each pixel in the equirectangular image given a cubemap with 6 faces:

  • Firstly, calculate polar coordinates that correspond to each pixel in the spherical image.
  • Secondly, using the polar coordinates form a vector and determine on which face of the cubemap and which pixel of that face the vector lies; just like a raycast from the center of a cube would hit one of its sides and a specific point on that side.
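The two steps above can be sketched in standalone Python (my own illustration, not code from the answer; the sign conventions and per-face coordinates follow the C# implementation shown later):

```python
import math

def direction_to_face(phi, theta):
    """Map polar coordinates (phi: longitude in [0, 2*pi], theta:
    colatitude in [0, pi]) to (face, u, v): the cube face the ray hits
    and normalized [0, 1] coordinates on that face."""
    # Step 1: turn the polar coordinates into a unit direction vector.
    x = -math.sin(phi) * math.sin(theta)
    y = math.cos(theta)
    z = -math.cos(phi) * math.sin(theta)

    # Step 2: scale the vector so its dominant component becomes exactly
    # +/-1; that component tells us which face the ray pierces, and the
    # other two components become coordinates on that face.
    a = max(abs(x), abs(y), abs(z))
    xa, ya, za = x / a, y / a, z / a

    if xa == 1:
        return "right", (1 - za) / 2, (ya + 1) / 2
    if xa == -1:
        return "left", (za + 1) / 2, (ya + 1) / 2
    if ya == 1:
        return "up", (xa + 1) / 2, (1 - za) / 2
    if ya == -1:
        return "down", (xa + 1) / 2, (za + 1) / 2
    if za == 1:
        return "front", (xa + 1) / 2, (ya + 1) / 2
    return "back", (1 - xa) / 2, (ya + 1) / 2  # za == -1
```

For example, the ray looking straight ahead (phi = pi, theta = pi/2) lands in the middle of the front face, and theta = 0 (straight up) lands on the up face.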

Keep in mind that there are multiple methods to estimate the color of a pixel in the equirectangular image given a normalized coordinate (u,v) on a specific face of a cubemap. The most basic method, which is a very raw approximation and will be used in this answer for simplicity's sake, is to round the coordinates to a specific pixel and use that pixel. Other more advanced methods could calculate an average of a few neighbouring pixels.
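To make the contrast concrete, here is a standalone Python sketch of both estimates for a single grayscale face (my own illustration; `face_pixels` is a hypothetical row-major list of pixel rows):

```python
import math

def sample_nearest(face_pixels, w, h, u, v):
    # The crude estimate used in this answer: truncate (u, v)
    # to a single pixel and return it.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return face_pixels[y][x]

def sample_bilinear(face_pixels, w, h, u, v):
    # A slightly more advanced estimate: a weighted average of the
    # four pixels surrounding the exact sample position.
    fx, fy = u * w - 0.5, v * h - 0.5
    x0, y0 = max(int(math.floor(fx)), 0), max(int(math.floor(fy)), 0)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    tx = min(max(fx - x0, 0.0), 1.0)
    ty = min(max(fy - y0, 0.0), 1.0)
    top = (1 - tx) * face_pixels[y0][x0] + tx * face_pixels[y0][x1]
    bottom = (1 - tx) * face_pixels[y1][x0] + tx * face_pixels[y1][x1]
    return (1 - ty) * top + ty * bottom
```

At the exact center of a 2x2 face the nearest-pixel estimate returns a single corner value, while the bilinear estimate blends all four.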

The implementation of the algorithm will vary depending on the context. I did a quick implementation in Unity3D C# that shows how to implement the algorithm in a real world scenario. It runs on the CPU; there is a lot of room for improvement, but it is easy to understand.

using UnityEngine;

public static class CubemapConverter
{
    public static byte[] ConvertToEquirectangular(Texture2D sourceTexture, int outputWidth, int outputHeight)
    {
        Texture2D equiTexture = new Texture2D(outputWidth, outputHeight, TextureFormat.ARGB32, false);
        float u, v; //Normalised texture coordinates, from 0 to 1, starting at lower left corner
        float phi, theta; //Polar coordinates
        int cubeFaceWidth, cubeFaceHeight;

        cubeFaceWidth = sourceTexture.width / 4; //4 horizontal faces
        cubeFaceHeight = sourceTexture.height / 3; //3 vertical faces


        for (int j = 0; j < equiTexture.height; j++)
        {
            //Rows start from the bottom
            v = 1 - ((float)j / equiTexture.height);
            theta = v * Mathf.PI;

            for (int i = 0; i < equiTexture.width; i++)
            {
                //Columns start from the left
                u = ((float)i / equiTexture.width);
                phi = u * 2 * Mathf.PI;

                float x, y, z; //Unit vector
                x = Mathf.Sin(phi) * Mathf.Sin(theta) * -1;
                y = Mathf.Cos(theta);
                z = Mathf.Cos(phi) * Mathf.Sin(theta) * -1;

                float xa, ya, za;
                float a;

                a = Mathf.Max(new float[3] { Mathf.Abs(x), Mathf.Abs(y), Mathf.Abs(z) });

                //Vector Parallel to the unit vector that lies on one of the cube faces
                xa = x / a;
                ya = y / a;
                za = z / a;

                Color color;
                int xPixel, yPixel;
                int xOffset, yOffset;

                if (xa == 1)
                {
                    //Right
                    xPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 2 * cubeFaceWidth; //Offset
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight; //Offset
                }
                else if (xa == -1)
                {
                    //Left
                    xPixel = (int)((((za + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = 0;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (ya == 1)
                {
                    //Up
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceHeight);
                    yOffset = 2 * cubeFaceHeight;
                }
                else if (ya == -1)
                {
                    //Down
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = 0;
                }
                else if (za == 1)
                {
                    //Front
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (za == -1)
                {
                    //Back
                    xPixel = (int)((((xa + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 3 * cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else
                {
                    Debug.LogWarning("Unknown face, something went wrong");
                    xPixel = 0;
                    yPixel = 0;
                    xOffset = 0;
                    yOffset = 0;
                }

                xPixel = Mathf.Abs(xPixel);
                yPixel = Mathf.Abs(yPixel);

                xPixel += xOffset;
                yPixel += yOffset;

                color = sourceTexture.GetPixel(xPixel, yPixel);
                equiTexture.SetPixel(i, j, color);
            }
        }

        equiTexture.Apply();
        var bytes = equiTexture.EncodeToPNG();
        Object.DestroyImmediate(equiTexture);

        return bytes;
    }
}

In order to utilize the GPU I created a shader that does the same conversion. It is much faster than running the conversion pixel by pixel on the CPU, but unfortunately Unity imposes resolution limitations on cubemaps, so its usefulness is limited in scenarios where a high-resolution input image is to be used.

Shader "Conversion/CubemapToEquirectangular" {
  Properties {
        _MainTex ("Cubemap (RGB)", CUBE) = "" {}
    }

    Subshader {
        Pass {
            ZTest Always Cull Off ZWrite Off
            Fog { Mode off }      

            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma fragmentoption ARB_precision_hint_fastest
                //#pragma fragmentoption ARB_precision_hint_nicest
                #include "UnityCG.cginc"

                #define PI    3.141592653589793
                #define TWOPI 6.283185307179587

                struct v2f {
                    float4 pos : POSITION;
                    float2 uv : TEXCOORD0;
                };

                samplerCUBE _MainTex;

                v2f vert( appdata_img v )
                {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.texcoord.xy * float2(TWOPI, PI);
                    return o;
                }

                fixed4 frag(v2f i) : COLOR 
                {
                    float theta = i.uv.y;
                    float phi = i.uv.x;
                    float3 unit = float3(0,0,0);

                    unit.x = sin(phi) * sin(theta) * -1;
                    unit.y = cos(theta) * -1;
                    unit.z = cos(phi) * sin(theta) * -1;

                    return texCUBE(_MainTex, unit);
                }
            ENDCG
        }
    }
    Fallback Off
}

The quality of the resulting images can be greatly improved by employing a more sophisticated method to estimate the color of a pixel during the conversion, by post-processing the resulting image, or both. For example, a larger image could be generated, a blur filter applied to it, and the result downsampled to the desired size.
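As a minimal sketch of that idea (grayscale, pure Python; my own illustration, not part of the project linked below): render the panorama at twice the target resolution, then average each 2x2 block, which blurs and downsamples in one step.

```python
def downsample_2x(pixels):
    """Average each 2x2 block of a supersampled grayscale image
    (a list of equal-length rows, both dimensions even), halving
    both dimensions; the averaging doubles as a small blur."""
    h, w = len(pixels), len(pixels[0])
    return [
        [
            (pixels[2 * j][2 * i] + pixels[2 * j][2 * i + 1]
             + pixels[2 * j + 1][2 * i] + pixels[2 * j + 1][2 * i + 1]) / 4.0
            for i in range(w // 2)
        ]
        for j in range(h // 2)
    ]
```

A production version would typically use a wider blur kernel before downsampling, but even this 2x2 average noticeably reduces the stair-stepping left by nearest-pixel sampling.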

I created a simple Unity project with two editor wizards that show how to properly utilize either the C# code or the shader shown above. Get it here: https://github.com/Mapiarz/CubemapToEquirectangular

Remember to set proper import settings in Unity for your input images:

  • Point filtering
  • Truecolor format
  • Disable mipmaps
  • Non Power of 2: None (only for 2DTextures)
  • Enable Read/Write (only for 2DTextures)
