When to call .join() on a process?

                Problem description

                I am reading various tutorials on the multiprocessing module in Python, and am having trouble understanding why/when to call process.join(). For example, I stumbled across this example:

                # Imports added for completeness; factorize_naive(n) is assumed to be
                # defined elsewhere (the answer below includes a full, runnable version).
                import math
                import multiprocessing
                from multiprocessing import Queue

                nums = range(100000)
                nprocs = 4
                
                def worker(nums, out_q):
                    """ The worker function, invoked in a process. 'nums' is a
                        list of numbers to factor. The results are placed in
                        a dictionary that's pushed to a queue.
                    """
                    outdict = {}
                    for n in nums:
                        outdict[n] = factorize_naive(n)
                    out_q.put(outdict)
                
                # Each process will get 'chunksize' nums and a queue to put its output
                # dict into
                out_q = Queue()
                chunksize = int(math.ceil(len(nums) / float(nprocs)))
                procs = []
                
                for i in range(nprocs):
                    p = multiprocessing.Process(
                            target=worker,
                            args=(nums[chunksize * i:chunksize * (i + 1)],
                                  out_q))
                    procs.append(p)
                    p.start()
                
                # Collect all results into a single result dict. We know how many dicts
                # with results to expect.
                resultdict = {}
                for i in range(nprocs):
                    resultdict.update(out_q.get())
                
                # Wait for all worker processes to finish
                for p in procs:
                    p.join()
                
                print(resultdict)
                

                From what I understand, process.join() will block the calling process until the process whose join method was called has completed execution. I also believe that the child processes which have been started in the above code example complete execution upon completing the target function, that is, after they have pushed their results to the out_q. Lastly, I believe that out_q.get() blocks the calling process until there are results to be pulled. Thus, if you consider the code:

                resultdict = {}
                for i in range(nprocs):
                    resultdict.update(out_q.get())
                
                # Wait for all worker processes to finish
                for p in procs:
                    p.join()
                

                the main process is blocked by the out_q.get() calls until every single worker process has finished pushing its results to the queue. Thus, by the time the main process exits the for loop, each child process should have completed execution, correct?

                If that is the case, is there any reason for calling the p.join() methods at this point? Haven't all worker processes already finished, so how does that cause the main process to "wait for all worker processes to finish?" I ask mainly because I have seen this in multiple different examples, and I am curious if I have failed to understand something.
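
                A minimal sketch (not part of the original question; tiny_worker is an illustrative name) that makes the doubt above concrete: out_q.get() returning only tells you the data has arrived, not that the child process has been waited on and reaped.

                import multiprocessing

                def tiny_worker(out_q):
                    # push a single result and exit
                    out_q.put({"answer": 42})

                if __name__ == "__main__":
                    out_q = multiprocessing.Queue()
                    p = multiprocessing.Process(target=tiny_worker, args=(out_q,))
                    p.start()
                    print(out_q.get())                  # blocks until the result arrives
                    print(p.is_alive(), p.exitcode)     # may still print: True None
                    p.join()                            # wait for the child and reap it
                    print(p.is_alive(), p.exitcode)     # now prints: False 0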

                Answer

                Try running this:

                import math
                import time
                from multiprocessing import Queue
                import multiprocessing
                
                def factorize_naive(n):
                    factors = []
                    for div in range(2, int(n**.5)+1):
                        while not n % div:
                            factors.append(div)
                            n //= div
                    if n != 1:
                        factors.append(n)
                    return factors
                
                nums = range(100000)
                nprocs = 4
                
                def worker(nums, out_q):
                    """ The worker function, invoked in a process. 'nums' is a
                        list of numbers to factor. The results are placed in
                        a dictionary that's pushed to a queue.
                    """
                    outdict = {}
                    for n in nums:
                        outdict[n] = factorize_naive(n)
                    out_q.put(outdict)
                
                # Each process will get 'chunksize' nums and a queue to put its output
                # dict into
                out_q = Queue()
                chunksize = int(math.ceil(len(nums) / float(nprocs)))
                procs = []
                
                for i in range(nprocs):
                    p = multiprocessing.Process(
                            target=worker,
                            args=(nums[chunksize * i:chunksize * (i + 1)],
                                  out_q))
                    procs.append(p)
                    p.start()
                
                # Collect all results into a single result dict. We know how many dicts
                # with results to expect.
                resultdict = {}
                for i in range(nprocs):
                    resultdict.update(out_q.get())
                
                time.sleep(5)
                
                # Wait for all worker processes to finish
                for p in procs:
                    p.join()
                
                print(resultdict)
                
                time.sleep(15)
                

                Then open the task manager. You should be able to see the 4 subprocesses sit in a zombie state for a few seconds until the join calls reap them.
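
                If you would rather check this from inside the script than in the task manager, the following Linux-only sketch (not from the original answer; proc_state is a helper introduced here) reads the state letter from /proc/<pid>/stat. Pasted into the script above, between the result-collection loop and the join loop, it typically shows 'Z' (zombie) for children that have already exited but have not been joined yet.

                def proc_state(pid):
                    """Return the one-letter process state from /proc/<pid>/stat (Linux only)."""
                    with open(f"/proc/{pid}/stat") as f:
                        # format: "pid (comm) state ..."; comm may itself contain ')'
                        return f.read().rsplit(")", 1)[1].split()[0]

                for p in procs:
                    print(p.pid, proc_state(p.pid))     # typically prints 'Z' for each child here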

                In more complex situations the child processes could stay in a zombie state forever (like the situation you were asking about in another question), and if you create enough child processes you could fill the process table, causing trouble for the OS (which may kill your main process to avoid failures).
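
                If managing start() and join() by hand feels error-prone, one alternative pattern is concurrent.futures.ProcessPoolExecutor, which joins its worker processes when the with-block exits. The sketch below assumes factorize_naive from the code above is importable; worker_dict is a name introduced here.

                import concurrent.futures
                import math

                def worker_dict(chunk):
                    # same work as worker(), but returns the dict instead of using a queue
                    return {n: factorize_naive(n) for n in chunk}

                if __name__ == "__main__":
                    nums = list(range(100000))
                    nprocs = 4
                    chunksize = int(math.ceil(len(nums) / float(nprocs)))
                    chunks = [nums[i:i + chunksize] for i in range(0, len(nums), chunksize)]

                    resultdict = {}
                    with concurrent.futures.ProcessPoolExecutor(max_workers=nprocs) as ex:
                        for partial in ex.map(worker_dict, chunks):
                            resultdict.update(partial)
                    # leaving the with-block shuts down the executor and joins its
                    # worker processes, so no zombies are left behind
                    print(len(resultdict))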
