

What is the fastest way to transpose a matrix in C++?

Problem description

I have a matrix (relatively big) that I need to transpose. For example, assume that my matrix is

a b c d e f
g h i j k l
m n o p q r 

I want the result to be as follows:

a g m
b h n
c i o
d j p
e k q
f l r

What is the fastest way to do this?

Recommended answer

This is a good question. There are many reasons you would want to actually transpose the matrix in memory rather than just swap coordinates, e.g. in matrix multiplication and Gaussian smearing.

First let me list one of the functions I use for the transpose (please see the end of my answer, where I found a much faster solution).

void transpose(float *src, float *dst, const int N, const int M) {
    // src is N x M, dst is M x N (both row-major)
    #pragma omp parallel for
    for(int n = 0; n<N*M; n++) {
        int i = n/N;
        int j = n%N;
        dst[n] = src[M*j + i];
    }
}

Now let's see why the transpose is useful. Consider matrix multiplication C = A*B. We could do it this way:

// A is N x M, B is M x K, C is N x K (all row-major)
for(int i=0; i<N; i++) {
    for(int j=0; j<K; j++) {
        float tmp = 0;
        for(int l=0; l<M; l++) {
            tmp += A[M*i+l]*B[K*l+j];
        }
        C[K*i + j] = tmp;
    }
}

Done that way, however, there will be a lot of cache misses. A much faster solution is to take the transpose of B first:

transpose(B);
for(int i=0; i<N; i++) {
    for(int j=0; j<K; j++) {
        float tmp = 0;
        for(int l=0; l<M; l++) {
            tmp += A[M*i+l]*B[M*j+l];  // transposed B is K x M, so its row stride is M
        }
        C[K*i + j] = tmp;
    }
}
transpose(B);

Matrix multiplication is O(n^3) and the transpose is O(n^2), so taking the transpose should have a negligible effect on the computation time (for large n). In matrix multiplication, loop tiling is even more effective than taking the transpose, but that's much more complicated.

I wish I knew a faster way to do the transpose (I found a faster solution; see the end of my answer). When Haswell/AVX2 comes out in a few weeks it will have a gather function. I don't know if that will be helpful in this case, but I could imagine gathering a column and writing out a row. Maybe it will make the transpose unnecessary.

For Gaussian smearing, what you do is smear horizontally and then smear vertically. But smearing vertically has the cache problem, so what you do is:

Smear image horizontally
transpose output 
Smear output horizontally
transpose output

Here is a paper by Intel explaining this: http://software.intel.com/en-us/articles/iir-gaussian-blur-filter-implementation-using-intel-advanced-vector-extensions

Lastly, what I actually do in matrix multiplication (and in Gaussian smearing) is not take the exact transpose but take the transpose in widths of a certain vector size (e.g. 4 or 8 for SSE/AVX). Here is the function I use:

void reorder_matrix(const float* A, float* B, const int N, const int M, const int vec_size) {
    #pragma omp parallel for
    for(int n=0; n<M*N; n++) {
        int k = vec_size*(n/N/vec_size);
        int i = (n/vec_size)%N;
        int j = n%vec_size;
        B[n] = A[M*i + k + j];
    }
}

I tried several functions to find the fastest transpose for large matrices. In the end the fastest result is to use loop blocking with block_size=16 (I found a faster solution using SSE and loop blocking - see below). This code works for any NxM matrix (i.e. the matrix does not have to be square).

inline void transpose_scalar_block(float *A, float *B, const int lda, const int ldb, const int block_size) {
    #pragma omp parallel for
    for(int i=0; i<block_size; i++) {
        for(int j=0; j<block_size; j++) {
            B[j*ldb + i] = A[i*lda +j];
        }
    }
}

inline void transpose_block(float *A, float *B, const int n, const int m, const int lda, const int ldb, const int block_size) {
    #pragma omp parallel for
    for(int i=0; i<n; i+=block_size) {
        for(int j=0; j<m; j+=block_size) {
            transpose_scalar_block(&A[i*lda +j], &B[j*ldb + i], lda, ldb, block_size);
        }
    }
}

The values lda and ldb are the (padded) row widths of the matrices. These need to be multiples of the block size. To find the values and allocate the memory for e.g. a 3000x1001 matrix, I do something like this:

#define ROUND_UP(x, s) (((x)+((s)-1)) & -(s))
const int n = 3000;
const int m = 1001;
int lda = ROUND_UP(m, 16);
int ldb = ROUND_UP(n, 16);

float *A = (float*)_mm_malloc(sizeof(float)*lda*ldb, 64);
float *B = (float*)_mm_malloc(sizeof(float)*lda*ldb, 64);

For 3000x1001 this returns ldb = 3008 and lda = 1008.

I found an even faster solution using SSE intrinsics:

inline void transpose4x4_SSE(float *A, float *B, const int lda, const int ldb) {
    __m128 row1 = _mm_load_ps(&A[0*lda]);
    __m128 row2 = _mm_load_ps(&A[1*lda]);
    __m128 row3 = _mm_load_ps(&A[2*lda]);
    __m128 row4 = _mm_load_ps(&A[3*lda]);
     _MM_TRANSPOSE4_PS(row1, row2, row3, row4);
     _mm_store_ps(&B[0*ldb], row1);
     _mm_store_ps(&B[1*ldb], row2);
     _mm_store_ps(&B[2*ldb], row3);
     _mm_store_ps(&B[3*ldb], row4);
}

inline void transpose_block_SSE4x4(float *A, float *B, const int n, const int m, const int lda, const int ldb ,const int block_size) {
    #pragma omp parallel for
    for(int i=0; i<n; i+=block_size) {
        for(int j=0; j<m; j+=block_size) {
            int max_i2 = i+block_size < n ? i + block_size : n;
            int max_j2 = j+block_size < m ? j + block_size : m;
            for(int i2=i; i2<max_i2; i2+=4) {
                for(int j2=j; j2<max_j2; j2+=4) {
                    transpose4x4_SSE(&A[i2*lda +j2], &B[j2*ldb + i2], lda, ldb);
                }
            }
        }
    }
}
