

Python Multiprocessing.Pool lazy iteration


This article looks at how Python's Multiprocessing.Pool consumes its input iterable, and how to process a generator lazily instead.

Question



                  I'm wondering about the way that python's Multiprocessing.Pool class works with map, imap, and map_async. My particular problem is that I want to map on an iterator that creates memory-heavy objects, and don't want all these objects to be generated into memory at the same time. I wanted to see if the various map() functions would wring my iterator dry, or intelligently call the next() function only as child processes slowly advanced, so I hacked up some tests as such:

import time
from multiprocessing import Pool

def g():
    for el in range(100):
        print(el)
        yield el

def f(x):
    time.sleep(1)
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=4)              # start 4 worker processes
    go = g()
    g2 = pool.imap(f, go)
    next(g2)

And so on with map, imap, and map_async. This is the most flagrant example, however, as simply calling next() a single time on g2 prints out all the elements from my generator g(), whereas if imap were doing this 'lazily' I would expect it to call next() on go only once, and therefore print out only '0'.

Can someone clear up what is happening, and whether there is some way to have the process pool 'lazily' evaluate the iterator as needed?

Thanks,

Gabe

Answer


                  Let's look at the end of the program first.


                  The multiprocessing module uses atexit to call multiprocessing.util._exit_function when your program ends.
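The atexit mechanism referred to here simply runs registered callbacks at interpreter shutdown. A minimal illustration (run in a subprocess so the shutdown-time output can be captured; the printed strings are arbitrary):

```python
import subprocess
import sys

# A tiny script that registers an atexit callback, mirroring how
# multiprocessing registers _exit_function at import time.
child = (
    "import atexit\n"
    "atexit.register(lambda: print('cleanup ran'))\n"
    "print('main done')\n"
)

# Run it in a fresh interpreter so we can observe the shutdown hook firing
# after the main program body has finished.
out = subprocess.run([sys.executable, "-c", child],
                     capture_output=True, text=True).stdout
print(out)  # 'main done' first, then 'cleanup ran' at exit
```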

If you remove the next(g2) call, your program ends quickly.


                  The _exit_function eventually calls Pool._terminate_pool. The main thread changes the state of pool._task_handler._state from RUN to TERMINATE. Meanwhile the pool._task_handler thread is looping in Pool._handle_tasks and bails out when it reaches the condition

                              if thread._state:
                                  debug('task handler found thread._state != RUN')
                                  break
                  


                  (See /usr/lib/python2.6/multiprocessing/pool.py)


                  This is what stops the task handler from fully consuming your generator, g(). If you look in Pool._handle_tasks you'll see

                          for i, task in enumerate(taskseq):
                              ...
                              try:
                                  put(task)
                              except IOError:
                                  debug('could not put task on queue')
                                  break
                  


                  This is the code which consumes your generator. (taskseq is not exactly your generator, but as taskseq is consumed, so is your generator.)

In contrast, when you call next(g2), the main thread calls IMapIterator.next and waits when it reaches self._cond.wait(timeout).


                  That the main thread is waiting instead of calling _exit_function is what allows the task handler thread to run normally, which means fully consuming the generator as it puts tasks in the workers' inqueue in the Pool._handle_tasks function.
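This eager consumption is easy to observe with an iterator that counts how many items have been pulled from it. The following is a sketch (the 50-item size and the sleep duration are arbitrary choices for the demonstration):

```python
import time
from multiprocessing import Pool

class CountingIter:
    """Iterator that records how many items have been pulled from it."""
    def __init__(self, n):
        self.n = n
        self.pulled = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.pulled >= self.n:
            raise StopIteration
        self.pulled += 1
        return self.pulled - 1

def square(x):
    time.sleep(0.05)
    return x * x

def demo():
    src = CountingIter(50)
    with Pool(processes=4) as pool:
        it = pool.imap(square, src)
        first = next(it)       # ask for a single result...
    # ...but the task-handler thread has already drained the iterator,
    # queuing all 50 tasks while the workers were still busy
    return first, src.pulled

if __name__ == '__main__':
    first, pulled = demo()
    print(first, pulled)       # pulled is 50, not 1
```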

The bottom line is that all of the Pool map functions consume the entire iterable they are given. If you'd like to consume the generator in chunks, you could do this instead:

import itertools
import multiprocessing as mp
import time


def g():
    for el in range(50):
        print(el)
        yield el


def f(x):
    time.sleep(1)
    return x * x


if __name__ == '__main__':
    pool = mp.Pool(processes=4)              # start 4 worker processes
    go = g()
    result = []
    N = 11
    while True:
        # islice pulls at most N items from the generator per round,
        # so at most N tasks are ever queued at once
        g2 = pool.map(f, itertools.islice(go, N))
        if g2:
            result.extend(g2)
            time.sleep(1)
        else:
            break
    print(result)

This concludes the look at lazy iteration with Python's Multiprocessing.Pool; hopefully the answer above is helpful.


