
python multiprocessing: some functions do not return when they are complete (queue material too big)

This article covers how to handle the problem "python multiprocessing: some functions do not return when they are complete (queue material too big)"; it should be a useful reference for anyone facing the same issue.

                  問(wèn)題描述

                  限時(shí)送ChatGPT賬號(hào)..

I am using multiprocessing's Process and Queue. I start several functions in parallel and most behave nicely: they finish, their output goes to their Queue, and they show up as .is_alive() == False. But for some reason a couple of functions are not behaving. They always show .is_alive() == True, even after the last line in the function (a print statement saying "Finished") is complete. This happens regardless of the set of functions I launch, even if there's only one. If not run in parallel, the functions behave fine and return normally. What kind of thing might be the problem?
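
For reference, the symptom can be reproduced with a much smaller script (a hypothetical sketch, not from the original question): a child process that puts a large object on a Queue which the parent never reads will hang in exactly this way.

from multiprocessing import Process, Queue

def worker(que):
    que.put('x' * 10_000_000)  # a large return value, far bigger than the pipe buffer
    print('Finished')          # this prints...

if __name__ == '__main__':
    que = Queue()
    job = Process(target=worker, args=(que,))
    job.start()
    job.join()                 # ...yet join() hangs here, and is_alive() stays True
    print(len(que.get()))      # never reached; calling get() before join() avoids the hang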

Here's the generic function I'm using to manage the jobs. All I'm not showing is the functions I'm passing to it. They're long, often use matplotlib, sometimes launch shell commands, but I cannot figure out what the failing ones have in common.

def runFunctionsInParallel(listOf_FuncAndArgLists):
    """
    Take a list of lists like [function, arg1, arg2, ...]. Run those functions
    in parallel, wait for them all to finish, and return the list of their
    return values, in order.
    """
    from multiprocessing import Process, Queue

    def storeOutputFFF(fff, theArgs, que):  # add an argument to the function for assigning a queue
        print('MULTIPROCESSING: Launching %s in parallel' % fff.__name__)
        que.put(fff(*theArgs))  # we're putting the return value into the queue
        print('MULTIPROCESSING: Finished %s in parallel!' % fff.__name__)
        # We get this far even for "bad" functions
        return

    queues = [Queue() for fff in listOf_FuncAndArgLists]  # create a queue object for each function
    jobs = [Process(target=storeOutputFFF, args=[funcArgs[0], funcArgs[1:], queues[iii]])
            for iii, funcArgs in enumerate(listOf_FuncAndArgLists)]
    for job in jobs: job.start()  # Launch them all
    import time
    from math import sqrt
    n = 1
    while any([jj.is_alive() for jj in jobs]):  # debugging section shows progress updates
        n += 1
        time.sleep(5 + sqrt(n))  # Wait a while before the next update. Slow down updates for really long runs.
        print('\n---------------------------------------------------\n'
              + '\t'.join(['alive?', 'Job', 'exitcode', 'Func'])
              + '\n---------------------------------------------------')
        print('\n'.join(['%s:\t%s:\t%s:\t%s' % (job.is_alive() * 'Yes', job.name, job.exitcode,
                                                listOf_FuncAndArgLists[ii][0].__name__)
                         for ii, job in enumerate(jobs)]))
        print('---------------------------------------------------\n')
    # I never get to the following line when one of the "bad" functions is running.
    for job in jobs: job.join()  # Wait for them all to finish... Hm, is this needed to get at the Queues?
    # And now, collect all the outputs:
    return [queue.get() for queue in queues]

Recommended Answer

Alright, it seems that the pipe used to fill the Queue gets plugged when the output of a function is too big (my crude understanding? This is an unresolved/closed bug? http://bugs.python.org/issue8237). I have modified the code in my question so that there is some buffering (queues are regularly emptied while the processes are running), which solves all my problems. So now this takes a collection of tasks (functions and their arguments), launches them, and collects the outputs. I wish it were simpler/cleaner-looking.
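
The essence of the fix is to keep draining each queue while the workers are still running, so the pipe that feeds the queue never fills up, and only then to call join(). A minimal sketch of that buffering idea (an assumed simplification, not an excerpt from the linked parallel.py; drain and run_and_drain are hypothetical names):

import time
from queue import Empty  # multiprocessing.Queue.get_nowait() raises queue.Empty

def drain(que, into):
    """Move everything currently sitting in the queue into a list."""
    try:
        while True:
            into.append(que.get_nowait())
    except Empty:
        pass

def run_and_drain(jobs, queues):
    """Poll started Process objects, emptying their queues while they run."""
    results = [[] for _ in jobs]        # buffered output, one list per job
    while any(job.is_alive() for job in jobs):
        for que, out in zip(queues, results):
            drain(que, out)             # keep the pipes from filling up
        time.sleep(1)
    for que, out in zip(queues, results):
        drain(que, out)                 # final sweep after the workers exit
    for job in jobs:
        job.join()                      # safe now: the queues are already empty
    return results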

Edit (2014 Sep; updated 2017 Nov: rewritten for readability): I'm updating the code with the enhancements I've made since. The new code (same function, but better features) is here: https://gitlab.com/cpbl/cpblUtilities/blob/master/parallel.py

                  調(diào)用說(shuō)明也在下方.

def runFunctionsInParallel(*args, **kwargs):
    """ This is the main/only interface to class cRunFunctionsInParallel. See its documentation for arguments.
    """
    return cRunFunctionsInParallel(*args, **kwargs).launch_jobs()

###########################################################################################
###
class cRunFunctionsInParallel():
    ###
    #######################################################################################
    """Run any list of functions, each with any arguments and keyword-arguments, in parallel.

    The functions/jobs should return (if anything) pickleable results. In order
    to avoid processes getting stuck due to the output queues overflowing, the
    queues are regularly collected and emptied.

    You can now pass os.system (etc.) to this as the function, in order to
    parallelize at the OS level, with no need for a wrapper: I made use of
    hasattr(builtinfunction, 'func_name') to check for a name.

    Parameters
    ----------
    listOf_FuncAndArgLists : a list of lists
        List of up-to-three-element lists, like [function, args, kwargs],
        specifying the set of functions to be launched in parallel. If an
        element is just a function, rather than a list, then it is assumed
        to have no arguments or keyword arguments. Thus, possible formats
        for elements of the outer list are:
          function
          [function, list]
          [function, list, dict]
    kwargs : dict
        One can also supply the kwargs once, for all jobs (or for those
        without their own non-empty kwargs specified in the list).
    names : an optional list of names to identify the processes.
        If omitted, the function name is used, so if all the functions are
        the same (i.e., merely with different arguments), they would be
        named indistinguishably.
    offsetsSeconds : int or list of ints
        Delay some functions' start times.
    expectNonzeroExit : True/False
        Normal behaviour is to not proceed if any function exits with a
        failed exit code. This can be used to override that behaviour.
    parallel : True/False
        Whenever the list of functions is longer than one, functions will
        be run in parallel unless this parameter is passed as False.
    maxAtOnce : int
        If nonzero, this limits how many jobs will be allowed to run at
        once. By default, this is set according to how many processors
        the hardware has available.
    showFinished : int
        Specifies the maximum number of successfully finished jobs to show
        in the text interface (before the last report, which should always
        show them all).

    Returns
    -------
    Returns a tuple of (return codes, return values), each a list in order of the jobs provided.

    Issues
    ------
    Only tested on POSIX OSes.

    Examples
    --------
    See the testParallel() method in this module.
    """
                  

                  這篇關(guān)于python多處理:某些函數(shù)完成后不返回(隊(duì)列材料太大)的文章就介紹到這了,希望我們推薦的答案對(duì)大家有所幫助,也希望大家多多支持html5模板網(wǎng)!

                  【網(wǎng)站聲明】本站部分內(nèi)容來(lái)源于互聯(lián)網(wǎng),旨在幫助大家更快的解決問(wèn)題,如果有圖片或者內(nèi)容侵犯了您的權(quán)益,請(qǐng)聯(lián)系我們刪除處理,感謝您的支持!

                  相關(guān)文檔推薦

                  What exactly is Python multiprocessing Module#39;s .join() Method Doing?(Python 多處理模塊的 .join() 方法到底在做什么?)
                  Passing multiple parameters to pool.map() function in Python(在 Python 中將多個(gè)參數(shù)傳遞給 pool.map() 函數(shù))
                  multiprocessing.pool.MaybeEncodingError: #39;TypeError(quot;cannot serialize #39;_io.BufferedReader#39; objectquot;,)#39;(multiprocessing.pool.MaybeEncodingError: TypeError(cannot serialize _io.BufferedReader object,)) - IT屋-程序員軟件開(kāi)
                  Python Multiprocess Pool. How to exit the script when one of the worker process determines no more work needs to be done?(Python 多進(jìn)程池.當(dāng)其中一個(gè)工作進(jìn)程確定不再需要完成工作時(shí),如何退出腳本?) - IT屋-程序員
                  How do you pass a Queue reference to a function managed by pool.map_async()?(如何將隊(duì)列引用傳遞給 pool.map_async() 管理的函數(shù)?)
                  yet another confusion with multiprocessing error, #39;module#39; object has no attribute #39;f#39;(與多處理錯(cuò)誤的另一個(gè)混淆,“模塊對(duì)象沒(méi)有屬性“f)