
Python multiprocessing never joins



Problem Description



                  I'm using multiprocessing, and specifically a Pool to spin off a couple of 'threads' to do a bunch of slow jobs that I have. However, for some reason, I can't get the main thread to rejoin, even though all of the children appear to have died.


                  Resolved: It appears the answer to this question is to just launch multiple Process objects, rather than using a Pool. It's not abundantly clear why, but I suspect the remaining process is a manager for the pool and it's not dying when the processes finish. If anyone else has this problem, this is the answer.

Main Thread

pool = Pool(processes=12, initializer=thread_init)
for x in xrange(0, 13):
    pool.apply_async(thread_dowork)
pool.close()
sys.stderr.write("Waiting for jobs to terminate\n")
pool.join()
                  


                  The xrange(0,13) is one more than the number of processes because I thought I had an off by one, and one process wasn't getting a job, so wasn't dying and I wanted to force it to take a job. I have tried it with 12 as well.

Multiprocessing Functions

def thread_init():
    global log_out
    log_out = open('pool_%s.log' % os.getpid(), 'w')
    sys.stderr = log_out
    sys.stdout = log_out
    log_out.write("Spawned")
    log_out.flush()
    log_out.write(" Complete\n")
    log_out.flush()


def thread_dowork():
    log_out.write("Entered function\n")
    log_out.flush()
    #Do Work
    log_out.write("Exiting ")
    log_out.flush()
    log_out.close()
    sys.exit(0)
                  


                  The output of the logfiles for all 12 children is:

                  Spawned
                  Complete
                  Entered function
                  Exiting
                  


                  The main thread prints 'Waiting for jobs to terminate', and then just sits there.


                  top shows only one copy of the script (the main one I believe). htop shows two copies, one of which is the one from top, and the other one of which is something else. Based on its PID, it's none of the children either.


                  Does anyone know something I don't?

Recommended Answer


I don't really have an answer, but I read the docs for apply_async and its note seems relevant to your stated problem...


                  Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.


                  I'm not familiar with the Pool but it seems to me that your use-case could easily be handled by this recipe on Python Module of the Week



