Using Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs


                  Problem description


                  I'm trying to complete 100 model runs on my 8-processor 64-bit Windows 7 machine. I'd like to run 7 instances of the model concurrently to decrease my total run time (approx. 9.5 min per model run). I've looked at several threads pertaining to the Multiprocessing module of Python, but am still missing something.

                  Using the multiprocessing module

                  How to spawn parallel child processes on a multi-processor system?

                  Python Multiprocessing queue

                  My Process:

                  I have 100 different parameter sets I'd like to run through SEAWAT/MODFLOW to compare the results. I have pre-built the model input files for each model run and stored them in their own directories. What I'd like to be able to do is have 7 models running at a time until all realizations have been completed. There needn't be communication between processes or display of results. So far I have only been able to spawn the models sequentially:

                  import os,subprocess
                  import multiprocessing as mp
                  
                  ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                  files = []
                  for f in os.listdir(ws + r'\fieldgen\reals'):
                      if f.endswith('.npy'):
                          files.append(f)
                  
                  ## def work(cmd):
                  ##     return subprocess.call(cmd, shell=False)
                  
                  def run(f,def_param=ws):
                      real = f.split('_')[2].split('.')[0]
                      print 'Realization %s' % real
                  
                      mf2k = r'c:\modflow\mf2k.1_19\bin\mf2k.exe '
                      mf2k5 = r'c:\modflow\MF2005_1_8\bin\mf2005.exe '
                      seawatV4 = r'c:\modflow\swt_v4_00_04\exe\swt_v4.exe '
                      seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '
                  
                      exe = seawatV4x64
                      swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
                  
                      os.system( exe + swt_nam )
                  
                  
                  if __name__ == '__main__':
                      p = mp.Pool(processes=mp.cpu_count()-1) #-leave 1 processor available for system and other processes
                      tasks = range(len(files))
                      results = []
                      for f in files:
                          r = p.map_async(run(f), tasks, callback=results.append)
                  

                  I changed the if __name__ == 'main': to the following in hopes it would fix the lack of parallelism I feel is being imparted on the above script by the for loop. However, the model fails to even run (no Python error):

                  if __name__ == '__main__':
                      p = mp.Pool(processes=mp.cpu_count()-1) #-leave 1 processor available for system and other processes
                      p.map_async(run,((files[f],) for f in range(len(files))))
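
                  Two details in the snippets above are worth noting. map_async(run(f), tasks, ...) calls run(f) immediately in the parent process and passes its None return value to the pool instead of the function itself, which is why the runs came out sequential; and map_async() returns at once, so without pool.close() followed by pool.join() the script can exit before any worker has started. A minimal corrected sketch, assuming the run() and files defined above:

                  if __name__ == '__main__':
                      # leave 1 processor available for system and other processes
                      p = mp.Pool(processes=mp.cpu_count()-1)
                      # pass the function itself; the pool then calls run(f) in the workers
                      p.map(run, files)   # map() blocks until every run has finished
                      # non-blocking alternative:
                      # r = p.map_async(run, files); p.close(); p.join()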
                  

                  Any and all help is greatly appreciated!

                  EDIT 3/26/2012 13:31 EST

                  Using the "Manual Pool" method in @J.F. Sebastian's answer below I get parallel execution of my external .exe. Model realizations are called up in batches of 8 at a time, but it doesn't wait for those 8 runs to complete before calling up the next batch and so on:

                  from __future__ import print_function
                  import os,subprocess,sys
                  import multiprocessing as mp
                  from Queue import Queue
                  from threading import Thread
                  
                  def run(f,ws):
                      real = f.split('_')[-1].split('.')[0]
                      print('Realization %s' % real)
                      seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '
                      swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
                      subprocess.check_call([seawatV4x64, swt_nam])
                  
                  def worker(queue):
                      """Process files from the queue."""
                      for args in iter(queue.get, None):
                          try:
                              run(*args)
                          except Exception as e: # catch exceptions to avoid exiting the
                                                 # thread prematurely
                              print('%r failed: %s' % (args, e,), file=sys.stderr)
                  
                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      wdir = os.path.join(ws, r'fieldgen\reals')
                      q = Queue()
                      for f in os.listdir(wdir):
                          if f.endswith('.npy'):
                              q.put_nowait((os.path.join(wdir, f), ws))
                  
                      # start threads
                      threads = [Thread(target=worker, args=(q,)) for _ in range(8)]
                      for t in threads:
                          t.daemon = True # threads die if the program dies
                          t.start()
                  
                      for _ in threads: q.put_nowait(None) # signal no more files
                      for t in threads: t.join() # wait for completion
                  
                  if __name__ == '__main__':
                  
                      mp.freeze_support() # optional if the program is not frozen
                      main()
                  

                  No error traceback is available. The run() function performs its duty when called on a single model realization file, just as it does with multiple files. The only difference is that with multiple files it is called len(files) times, yet each of the instances immediately closes; only one model run is allowed to finish, at which time the script exits gracefully (exit code 0).

                  Adding some print statements to main() reveals some information about active thread counts as well as thread status (note that this is a test on only 8 of the realization files, to make the screenshot more manageable; theoretically all 8 files should run concurrently, however the behavior continues: they are spawned and immediately die, except for one):

                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      wdir = os.path.join(ws, r'fieldgen\test')
                      q = Queue()
                      for f in os.listdir(wdir):
                          if f.endswith('.npy'):
                              q.put_nowait((os.path.join(wdir, f), ws))
                  
                      # start threads
                      threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count())]
                      for t in threads:
                          t.daemon = True # threads die if the program dies
                          t.start()
                      print('Active Count a',threading.activeCount())
                      for _ in threads:
                          print(_)
                          q.put_nowait(None) # signal no more files
                      for t in threads: 
                          print(t)
                          t.join() # wait for completion
                      print('Active Count b',threading.activeCount())
                  

                  The line which reads "D:\Data\Users..." is the error information thrown when I manually stop the model from running to completion. Once I stop the model running, the remaining thread-status lines are reported and the script exits.

                  EDIT 3/26/2012 16:24 EST

                  SEAWAT does allow concurrent execution, as I've done this in the past, spawning instances manually using iPython and launching from each model file folder. This time around, I'm launching all model runs from a single location, namely the directory where my script resides. It looks like the culprit may be the way SEAWAT saves some of its output. When SEAWAT is run, it immediately creates files pertaining to the model run. One of these files is not saved to the directory in which the model realization is located, but to the top directory where the script is located. This prevents any subsequent threads from saving the same file name in the same location (which they all want to do, since these filenames are generic and non-specific to each realization). The SEAWAT windows were not staying open long enough for me to read, or even see, that there was an error message; I only realized this when I went back and ran the code using iPython, which displays the printout from SEAWAT directly instead of opening a new window to run the program.

                  I am accepting @J.F. Sebastian's answer as it is likely that once I resolve this model-executable issue, the threading code he has provided will get me where I need to be.

                  FINAL CODE

                  Added the cwd argument to subprocess.check_call to start each instance of SEAWAT in its own directory. Very key.

                  from __future__ import print_function
                  import os,subprocess,sys
                  import multiprocessing as mp
                  from Queue import Queue
                  from threading import Thread
                  import threading
                  
                  def run(f,ws):
                      real = f.split('_')[-1].split('.')[0]
                      print('Realization %s' % real)
                      seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '
                      cwd = ws + r'\reals\real%s\ss' % real
                      swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
                      subprocess.check_call([seawatV4x64, swt_nam],cwd=cwd)
                  
                  def worker(queue):
                      """Process files from the queue."""
                      for args in iter(queue.get, None):
                          try:
                              run(*args)
                          except Exception as e: # catch exceptions to avoid exiting the
                                                 # thread prematurely
                              print('%r failed: %s' % (args, e,), file=sys.stderr)
                  
                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      wdir = os.path.join(ws, r'fieldgen\reals')
                      q = Queue()
                      for f in os.listdir(wdir):
                          if f.endswith('.npy'):
                              q.put_nowait((os.path.join(wdir, f), ws))
                  
                      # start threads
                      threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count()-1)]
                      for t in threads:
                          t.daemon = True # threads die if the program dies
                          t.start()
                      for _ in threads: q.put_nowait(None) # signal no more files
                      for t in threads: t.join() # wait for completion
                  
                  if __name__ == '__main__':
                      mp.freeze_support() # optional if the program is not frozen
                      main()
                  

                  Solution

                  I don't see any computations in the Python code. If you just need to execute several external programs in parallel, it is sufficient to use subprocess to run the programs and the threading module to maintain a constant number of processes running, but the simplest code uses multiprocessing.Pool:

                  #!/usr/bin/env python
                  import os
                  import multiprocessing as mp
                  
                  def run(filename_def_param): 
                      filename, def_param = filename_def_param # unpack arguments
                      ... # call external program on `filename`
                  
                  def safe_run(*args, **kwargs):
                      """Call run(), catch exceptions."""
                      try: run(*args, **kwargs)
                      except Exception as e:
                          print("error: %s run(*%r, **%r)" % (e, args, kwargs))
                  
                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      workdir = os.path.join(ws, r'fieldgen\reals')
                      files = ((os.path.join(workdir, f), ws)
                               for f in os.listdir(workdir) if f.endswith('.npy'))
                  
                      # start processes
                      pool = mp.Pool() # use all available CPUs
                      pool.map(safe_run, files)
                  
                  if __name__=="__main__":
                      mp.freeze_support() # optional if the program is not frozen
                      main()
                  

                  If there are many files then pool.map() could be replaced by for _ in pool.imap_unordered(safe_run, files): pass.
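
                  A minimal sketch of that replacement, assuming the same pool, safe_run and files as in the example above:

                  # lazily iterate over results as workers finish, instead of building
                  # the full result list that pool.map() would return
                  for _ in pool.imap_unordered(safe_run, files):
                      pass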

                  There is also multiprocessing.dummy.Pool, which provides the same interface as multiprocessing.Pool but uses threads instead of processes; that might be more appropriate in this case.
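
                  A minimal sketch of that swap; only the import changes, while safe_run and files stay as in the example above:

                  from multiprocessing.dummy import Pool  # same API as mp.Pool, but thread-backed

                  pool = Pool(7)  # e.g. 7 worker threads, one per concurrent model run
                  pool.map(safe_run, files)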

                  You don't need to keep some CPUs free. Just use a command that starts your executables with a low priority (on Linux it is the nice program).
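
                  On Windows, one way to do that (an illustration, not part of the original answer) is to pass a creationflags value when launching the executable, e.g. inside run():

                  import subprocess

                  BELOW_NORMAL_PRIORITY_CLASS = 0x00004000  # Windows process-priority flag

                  # start the external program at below-normal priority so the machine
                  # stays responsive even when all CPUs are busy with model runs;
                  # exe and swt_nam are as defined in run() above
                  subprocess.check_call([exe, swt_nam],
                                        creationflags=BELOW_NORMAL_PRIORITY_CLASS)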

                  ThreadPoolExecutor example

                  concurrent.futures.ThreadPoolExecutor would be both simple and sufficient, but it requires a 3rd-party dependency on Python 2.x (it has been in the stdlib since Python 3.2).

                  #!/usr/bin/env python
                  import os
                  import concurrent.futures
                  
                  def run(filename, def_param):
                      ... # call external program on `filename`
                  
                  # populate files
                  ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                  wdir = os.path.join(ws, r'fieldgen\reals')
                  files = (os.path.join(wdir, f) for f in os.listdir(wdir) if f.endswith('.npy'))
                  
                  # start threads
                  with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
                      future_to_file = dict((executor.submit(run, f, ws), f) for f in files)
                  
                      for future in concurrent.futures.as_completed(future_to_file):
                          f = future_to_file[future]
                          if future.exception() is not None:
                             print('%r generated an exception: %s' % (f, future.exception()))
                          # run() doesn't return anything so `future.result()` is always `None`
                  

                  Or if we ignore exceptions raised by run():

                  from itertools import repeat
                  
                  ... # the same
                  
                  # start threads
                  with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
                      executor.map(run, files, repeat(ws))
                      # run() doesn't return anything so `map()` results can be ignored
                  

                  subprocess + threading (manual pool) solution

                  #!/usr/bin/env python
                  from __future__ import print_function
                  import os
                  import subprocess
                  import sys
                  from Queue import Queue
                  from threading import Thread
                  
                  def run(filename, def_param):
                      ... # define exe, swt_nam
                      subprocess.check_call([exe, swt_nam]) # run external program
                  
                  def worker(queue):
                      """Process files from the queue."""
                      for args in iter(queue.get, None):
                          try:
                              run(*args)
                          except Exception as e: # catch exceptions to avoid exiting the
                                                 # thread prematurely
                              print('%r failed: %s' % (args, e,), file=sys.stderr)
                  
                  # start threads
                  q = Queue()
                  threads = [Thread(target=worker, args=(q,)) for _ in range(8)]
                  for t in threads:
                      t.daemon = True # threads die if the program dies
                      t.start()
                  
                  # populate files
                  ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                  wdir = os.path.join(ws, r'fieldgen\reals')
                  for f in os.listdir(wdir):
                      if f.endswith('.npy'):
                          q.put_nowait((os.path.join(wdir, f), ws))
                  
                  for _ in threads: q.put_nowait(None) # signal no more files
                  for t in threads: t.join() # wait for completion
                  

                  This concludes the discussion of using Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs; hopefully the answers above are helpful.


