

Python multiprocessing never joins


This article looks at the "Python multiprocessing never joins" problem and how it was resolved; it may be a useful reference if you have hit the same issue.

Problem Description


I'm using multiprocessing, and specifically a Pool to spin off a couple of 'threads' to do a bunch of slow jobs that I have. However, for some reason, I can't get the main thread to rejoin, even though all of the children appear to have died.

Resolved: It appears the answer to this question is to just launch multiple Process objects, rather than using a Pool. It's not abundantly clear why, but I suspect the remaining process is a manager for the pool and it's not dying when the processes finish. If anyone else has this problem, this is the answer.
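For reference, here is a minimal sketch of the Process-based workaround described above (the job count and the do_slow_job placeholder are illustrative, not code from the original post):

from multiprocessing import Process

def do_slow_job():
    pass  # placeholder for one of the slow jobs

if __name__ == '__main__':
    # one Process per job, each started and joined explicitly;
    # no pool manager process is involved
    procs = [Process(target=do_slow_job) for _ in range(12)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()  # blocks until that child exits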

Main Thread

import sys
from multiprocessing import Pool

pool = Pool(processes=12, initializer=thread_init)
for x in xrange(0, 13):               # xrange: this is Python 2 code
    pool.apply_async(thread_dowork)   # queue the jobs without blocking
pool.close()                          # no more jobs will be submitted
sys.stderr.write("Waiting for jobs to terminate\n")
pool.join()                           # hangs here

The xrange(0,13) is one more than the number of processes because I thought I had an off-by-one, and one process wasn't getting a job, so wasn't dying, and I wanted to force it to take a job. I have tried it with 12 as well.

Multiprocessing Functions

import os
import sys

def thread_init():
    # runs once in each worker; redirect its output to a per-PID log
    global log_out
    log_out = open('pool_%s.log' % os.getpid(), 'w')
    sys.stderr = log_out
    sys.stdout = log_out
    log_out.write("Spawned")
    log_out.flush()
    log_out.write(" Complete\n")
    log_out.flush()


def thread_dowork():
    log_out.write("Entered function\n")
    log_out.flush()
    #Do Work
    log_out.write("Exiting ")
    log_out.flush()
    log_out.close()
    sys.exit(0)   # exits the worker process itself
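For contrast, a job function submitted to a Pool would normally just return, which hands the worker back to the pool for its next job. A sketch of the same worker written that way (an illustration only, not code from the original post):

def thread_dowork():
    log_out.write("Entered function\n")
    log_out.flush()
    #Do Work
    log_out.write("Exiting\n")
    log_out.flush()
    # returning normally lets the pool reuse this worker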

The output of the logfiles for all 12 children is:

                  Spawned
                  Complete
                  Entered function
                  Exiting
                  

The main thread prints 'Waiting for jobs to terminate', and then just sits there.

top shows only one copy of the script (the main one I believe). htop shows two copies, one of which is the one from top, and the other one of which is something else. Based on its PID, it's none of the children either.

Does anyone know something I don't?

Recommended Answer

I don't really have an answer, but I read the docs for apply_async and it seems counter to your stated problem...

Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.
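To illustrate the quoted behaviour, here is a minimal, self-contained sketch of apply_async with a result callback (slow_job and on_result are illustrative names, not from the original post). The callback only records the result, so the thread that handles results is never blocked:

import sys
from multiprocessing import Pool

def slow_job(x):
    return x * x  # stand-in for one of the slow jobs

def on_result(result):
    # runs on the pool's result-handler thread, so keep it trivial
    sys.stdout.write('got %s\n' % result)

if __name__ == '__main__':
    pool = Pool(processes=4)
    for x in range(8):
        pool.apply_async(slow_job, (x,), callback=on_result)
    pool.close()
    pool.join()  # returns once every job has finished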

I'm not familiar with the Pool, but it seems to me that your use-case could easily be handled by this recipe on Python Module of the Week.

That's all for this article on Python multiprocessing never joining; hopefully the recommended answer above helps.


Related Questions

What exactly is Python multiprocessing Module's .join() Method Doing?
Passing multiple parameters to pool.map() function in Python
multiprocessing.pool.MaybeEncodingError: 'TypeError("cannot serialize '_io.BufferedReader' object",)'
Python Multiprocess Pool. How to exit the script when one of the worker process determines no more work needs to be done?
How do you pass a Queue reference to a function managed by pool.map_async()?
yet another confusion with multiprocessing error, 'module' object has no attribute 'f'
