Large numpy arrays in shared memory for multiprocessing: Is something wrong with this approach?
Problem Description

                  Multiprocessing is a wonderful tool but is not so straight forward to use large memory chunks with it. You can load chunks in each process and dump results on disk but sometimes you need to store the results in the memory. And on top, use the fancy numpy functionality.

                  I have read/googled a lot and came up with some answers:

Use numpy array in shared memory for multiprocessing

Share large, read-only Numpy array between multiprocessing processes

Python multiprocessing global numpy array

How do I pass large numpy arrays between python subprocesses without saving to disk?

And so on, and so on.

                  They all have drawbacks: Not-so-mainstream libraries (sharedmem); globally storing variables; not so easy to read code, pipes, etc etc.

                  My goal was to seamlessly use numpy in my workers without worrying about conversions and stuff.

After many trials I came up with this. It works on my ubuntu 16, python 3.6, 16GB, 8 core machine. I took a lot of "shortcuts" compared to previous approaches: no global shared state, no pure memory pointers that need to be converted to numpy inside workers, large numpy arrays passed as process arguments, etc.

Pastebin link above, but I will put a few snippets here.

Some imports:

                  import numpy as np
                  import multiprocessing as mp
                  import multiprocessing.sharedctypes
                  import ctypes
                  

Allocate some shared mem and wrap it into a numpy array:

def create_np_shared_array(shape, dtype, ctype):
    size = int(np.prod(shape))  # total number of elements
    shared_mem_chunk = mp.sharedctypes.RawArray(ctype, size)
    numpy_array_view = np.frombuffer(shared_mem_chunk, dtype).reshape(shape)
    return numpy_array_view
                  

                  Create shared array and put something in it

                  src = np.random.rand(*SHAPE).astype(np.float32)
                  src_shared = create_np_shared_array(SHAPE,np.float32,ctypes.c_float)
                  dst_shared = create_np_shared_array(SHAPE,np.float32,ctypes.c_float)
                  src_shared[:] = src[:]  # Some numpy ops accept an 'out' array where to store the results
                  

Spawn the process:

                  p = mp.Process(target=lengthly_operation,args=(src_shared, dst_shared, k, k + STEP))
                  p.start()
                  p.join()
                  

                  Here are some results (see pastebin code for full reference):

                  Serial version: allocate mem 2.3741257190704346 exec: 17.092209577560425 total: 19.46633529663086 Succes: True
                  Parallel with trivial np: allocate mem 2.4535582065582275 spawn  process: 0.00015354156494140625 exec: 3.4581971168518066 total: 5.911908864974976 Succes: False
                  Parallel with shared mem np: allocate mem 4.535916328430176 (pure alloc:4.014216661453247 copy: 0.5216996669769287) spawn process: 0.00015664100646972656 exec: 3.6783478260040283 total: 8.214420795440674 Succes: True
                  

I also did a cProfile (why the 2 extra seconds when allocating shared mem?) and realized that there are some calls into tempfile.py ({method 'write' of '_io.BufferedWriter' objects}).

Questions

• Am I doing something wrong?
• Are the (large) arrays pickled back and forth, so I am not gaining any speedup? Note that the 2nd run (with regular np arrays) fails the correctness test.
• Is there a way to further improve the timings, code clarity, etc.? (with respect to the multiprocessing paradigm)

Remarks

• I cannot use a process Pool, because the memory has to be inherited at fork time, not sent as an argument.

Recommended Answer

                  Allocation of the shared array is slow, because apparently it's written to disk first, so it can be shared through a mmap. For reference see heap.py and sharedctypes.py. This is why tempfile.py shows up in the profiler. I think the advantage of this approach is that the shared memory is cleaned up in case of a crash, and this cannot be guaranteed with POSIX shared memory.

There is no pickling happening with your code, thanks to fork: as you said, the memory is inherited. The reason the 2nd run doesn't work is that the child processes are not allowed to write into the memory of the parent. Instead, private pages are allocated on the fly, only to be discarded when the child process ends.
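A tiny demonstration of that copy-on-write behavior (a sketch that assumes the fork start method is available, i.e. a POSIX platform):

```python
import multiprocessing as mp
import numpy as np

arr = np.zeros(4)  # ordinary (non-shared) array, created in the parent

def child_write():
    # The write lands in the child's private copy-on-write pages,
    # which are discarded when the child exits.
    arr[:] = 1.0

if __name__ == "__main__":
    ctx = mp.get_context("fork")  # explicit: relies on fork semantics
    p = ctx.Process(target=child_write)
    p.start()
    p.join()
    print(arr)  # the parent still sees zeros
```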

I only have one suggestion: you don't have to specify a ctype yourself; the right type can be figured out from the numpy dtype through np.ctypeslib._typecodes. Or just use c_byte for everything and use the dtype itemsize to figure out the size of the buffer; it will be cast by numpy anyway.
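Both variants of that suggestion can be sketched as follows. Note that `np.ctypeslib.as_ctypes_type` is the public counterpart of the private `_typecodes` mapping the answer mentions, and the shape and dtype below are arbitrary example values:

```python
import ctypes
import multiprocessing.sharedctypes as sct
import numpy as np

shape, dtype = (3, 5), np.dtype(np.float32)  # example values

# Option 1: derive the matching ctype from the numpy dtype.
ctype = np.ctypeslib.as_ctypes_type(dtype)
arr1 = np.frombuffer(sct.RawArray(ctype, int(np.prod(shape))),
                     dtype).reshape(shape)

# Option 2: allocate raw bytes with c_byte, sized via dtype.itemsize;
# numpy reinterprets the buffer regardless of the ctype it was
# allocated with.
nbytes = int(np.prod(shape)) * dtype.itemsize
arr2 = np.frombuffer(sct.RawArray(ctypes.c_byte, nbytes),
                     dtype).reshape(shape)
```

Either way, the caller of `create_np_shared_array` no longer has to keep the dtype and the ctype argument in sync by hand.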


